The Mac Was the iPad of the 80s.

Today, the Mac and the graphical user interface it introduced are ubiquitous; they’re seen in coffee shops, classrooms, and airports all around the world. But it wasn’t always that way. Back in 1984, when the original Macintosh was unveiled, it was met with as much skepticism as excitement, with many unsure whether the computer’s headlining graphical operating system would take off, and at first, it didn’t. It took years for the Mac to catch on, but eventually customers and competitors saw the genius in Apple’s design, and slowly but surely the graphical user interface took over the computing world. Today, another of Apple’s product lines is following a story much like the Mac’s: the iPad. When Steve Jobs announced the iPad a decade ago, it was met with a familiar mix of hype and skepticism. Ten years on, the iPad and its touch-based navigation are slowly taking the world by storm, and before we know it, the iPad will be as prominent as the Mac.

Why the iPad mini Sucks

Over the past few years, Apple has made strides to shift the public’s perception of the iPad from an entertainment device to a computer replacement, but one product stands in their way: the iPad mini. The iPad mini epitomizes everything that is wrong with tablets, with the key reason for its existence being entertainment. The mini’s smaller screen restricts the kind of work that can be done on it to such an extreme extent that it is far too impractical to act as a stand-in for a fully fledged laptop the way its larger siblings can, relegating it to a glorified oversized phone, and it’s this restriction that is so dangerous to the iPad’s adoption and evolution. The mini’s existence wouldn’t bother me at all if it weren’t so dangerous to the rest of the iPad lineup. As I said before, the advancements and changes Apple has made to the iPad are largely negated by the mini’s existence, which reinforces the general public’s view that the iPad works best as an entertainment device, not a next-generation computer.

What Will True Consumer-Ready Augmented Reality Really Look Like?

For years, augmented reality has been relegated solely to the stuff of science fiction, but finally, after years of seeing it in books, movies, and TV shows, true consumer augmented reality may soon come to fruition in our reality. Over the past few years, more and more big tech firms such as Google, Facebook, and Apple have been hopping on the AR train, but each of these firms seems to have a different idea of AR’s applications and what it can really be. Social media companies like Facebook and Snapchat are developing AR for entertainment and social uses, in line with the services they provide. More business-facing firms such as Google and Microsoft are pushing their AR products for enterprise use, whereas Apple, long rumored to be developing an AR headset, is seemingly building its AR platform primarily for consumers, with some speculating that said platform could evolve into a product with as big an impact on the tech landscape as the original iPhone. But which of these varied visions will AR fulfill? The answer is all of them, and none of them at the same time. If AR does have as big an effect on the tech landscape as the smartphone did, and judging by the plethora of players in the space, the chances are high, then AR will not be defined by these existing applications; instead, it will define new ones. When the iPhone came out, it didn’t disrupt the smartphone space, it disrupted the personal computer space, and it redefined many computer applications, such as communication and entertainment. AR will do the same thing: rather than being restricted by preexisting applications, AR will create new ones, informed by the wide range of hardware and software potential the platform makes available.

Has Computer Innovation Plateaued?

For the past 40 years, computer innovation has been driven almost entirely by a demand for greater accessibility. First, computers became smaller, so that they could fit on your desk. Then, their operating systems became easier to use, weaning off of text-based interfaces and adopting far more user-friendly graphical ones. After that, computers became even smaller, so we could viably take them anywhere in a backpack or pocketbook. Next, they became connected to one another with the advent of the internet, revolutionizing global communications and making data more accessible than ever before. Most recently, they became small enough to fit into our pockets, and simple enough to be controlled solely by our hands, without the need for any peripherals in between. Through all of these advancements, computers have become more and more accessible, both in terms of ease of use and availability. But now, many are quick to claim that this rapid innovation in the computer space has stagnated, and that the well of computer innovation has run dry. This is not the case. While innovation in the computer space is certainly not as visible as it was a decade ago, it hasn’t stopped; what has changed is the goal those innovations are made in pursuit of. The last few decades’ goal of widespread computer accessibility has largely been met, with more people using computers than ever before, thanks to these aforementioned innovations. The well that has run dry is not computer innovation, but computer accessibility innovation. Now, as I said previously, computer innovations are being made in pursuit of a new goal: integration. A majority of the computer innovations of the past decade have been made to integrate computers into more fields. AI advancements push virtual assistants into our homes through smart home devices. Machine learning is putting more powerful computers in our cars, with the ultimate goal of self-driving capabilities. These advancements are built upon those made by innovators who worked to make the computer better, and now computers are being used to make every aspect of our lives better. So to answer the question “Has computer innovation plateaued?”: no, it has not. It has simply become part of a larger system: human innovation.

Have Accessibility-Based Advancements Made the Computer Better, or Worse?

Over the course of the past few decades, the computer has quickly made itself an indispensable staple of our lives. There is evidence of this everywhere: almost everything we interact with is either made up of a computer or in some way reliant on one. But as computers have become more and more advanced, and at the same time more and more tied to our lives, have these advancements truly made us better off? As computers become more advanced, they simultaneously become more accessible, both in terms of how many people can operate them and how easy it is for those people to do so. However, this advancement brings with it an oft-overlooked side effect: a decline in computer literacy. In the earliest days of the personal computer, one needed a specific and deep knowledge of the machine in order to operate it to any great extent, mostly because its less user-friendly experience relied on text-based input methods in place of easier interaction methods, such as the graphical user interface, or GUI for short. Since accessibility innovations like the GUI were introduced, countless people who could never have operated a computer before have had a whole new world opened to them, but at the same time, the necessity of having such a vast knowledge of the computer one was using disappeared. That is the tradeoff with accessibility-based innovations in the computer space. So the question is: is it more important to have more well-versed users, or more users in general? This predicament exists because computer literacy has not become more accessible at the same rate that computer use has. While it has certainly become substantially easier to learn software and hardware engineering and design over the past few decades, that growth in ease of learning is nowhere near as substantial as the growth in ease of use of computers themselves. What we need now is for these two growth rates to meet and then continue to grow together, because if they do not, innovation could truly plateau from a lack of ideas from those with enough time and resources to become proficient in computers.

Steve Jobs (Probably) Would Have Hated the iPhone. Here’s Why.

Steve Jobs once famously called the computer a “bicycle for the mind”, highlighting its potential to amplify our own abilities and give us the means to do great things. Back then, the computer’s potential in productivity, creativity, and accessibility seemed limitless, and its future exciting. But today, in that exciting future, where we have computers that fit in our pockets and others that wrap around our wrists, the computer is less like a bicycle for the mind and much more like a crutch. The introduction of the smartphone brought with it a new age of computing. Finally, we had fully fledged computers that could fit in our pockets and be with us all day, and at first, much of the limitless possibility attached to the original personal computers of the early 1980s was tacked on to the smartphone, mostly because the productivity applications of conventional computers would finally be accessible anywhere you went. But we’ve quickly seen that the smartphone can be just as much an obstacle in our lives as it can be a tool. The constant connectedness that made the smartphone so great is also what makes it so dangerous, because it enables the addictive practices apps use to keep us sucked in, providing yet another distraction in our already over-distracted lives. This is what makes the smartphone such a crutch, one that stops us from reaching our true potential, and what would make it a danger in the eyes of Steve Jobs.

The Mozart of the 21st Century.

They say art is subjective. But that idea goes beyond judging the quality of art; it is even more applicable to defining it. First it was debated whether music was art, then cinema, and even now the argument over whether video games have an art to them is being waged. However, through the centuries of folding all of these mediums into the dictionary definition of art, one medium has been sorely overlooked: products. The line most commonly drawn between products and art is the idea of art seeping its way into a product, most often through masterful design or engineering; however, this line is rarely drawn the other way around, with products seeping into art. This is preposterous, as products, well-designed ones more specifically, tend to be the form of art we appreciate and connect with most. Furthermore, products tend to have the least linear path of interaction and the largest field of interpretation, meaning their artists have a much harder time pushing users to consume their art the way the artist intends it to be consumed, whereas in other industries, such as music and film, it is much easier for artists to imbue their products with linear paths of interaction. Because of this, when a truly masterful user experience is imbued in a product, especially one that can guide users through that product’s functionality in a comprehensive yet engaging way, its artist has accomplished the most overlooked of high arts. As with any form of art, the idea of separating the art from the artist inevitably surfaces. Perhaps the greatest artist in this theoretical medium is Steve Jobs, an artist from whom many are eager to separate his art. Because of the sheer prevalence of iconic products in our time, a prevalence heightened by a population ever hoping to be seen as intrinsically tied to its lifestyles, the product is the art form of the century. The salesman is the musician. Steve Jobs is the Bach, and there’s still room for Mozart.

The Coronavirus Could Revolutionize Manufacturing.

After World War II devastated Japan’s economy, a new, more cost-effective form of manufacturing became more necessary than ever to kickstart the country’s postwar recovery. Hoping to find that economic success in the car industry, Toyota devised a new method, a master class in industrial engineering known as the just-in-time system. The system, which is still used by many companies, and by almost every one in the tech industry, revolves around keeping little to no inventory and only completing manufacturing steps when the components to do so become available. The practice has many key advantages, chief among them negating the need to stockpile components and hold them as extra inventory until they are needed. Its usefulness is amplified in the hardware industry, where components have especially short shelf lives, and keeping them as extra inventory often forces manufacturers to absorb large costs from unused stock. However, the practice has drawbacks, which are more visible than ever right now. Hardware manufacturers’ lack of inventory of vital components, a direct result of the just-in-time system, has come back to bite them in this trying time, when the makers of those components are unable to supply them due to their location in China, the epicenter of this crisis. The trials this longstanding system faces now could shape its structure in the future. Companies will be forced to reevaluate their manufacturing practices in response to the problems facing them right now, and those changes could bring both benefits and drawbacks for consumers. As a benefit, not having to wait for components could mean that electronic devices are produced faster. On the other hand, companies that stockpile components would almost certainly have to absorb the added costs that come with doing so, costs that would trickle down to consumers through higher prices. To conclude, the flaws made visible by the crisis we face today will almost certainly lead to manufacturing innovations in the future.
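To make that cost tradeoff concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is hypothetical, invented purely for illustration; it simply models how a larger component buffer raises yearly carrying costs while shrinking the profit lost during a supply disruption like the one described above.

```python
# Toy model of just-in-time vs. stockpiled inventory for a single component.
# All figures are hypothetical illustrations, not real industry data.

UNIT_COST = 20.0           # cost to buy one component
CARRY_RATE = 0.30          # yearly cost of holding a component (storage, depreciation)
UNITS_PER_MONTH = 100_000  # production volume
MARGIN_PER_UNIT = 5.0      # profit lost for every unit that cannot be built

def carrying_cost(months_of_buffer: float) -> float:
    """Yearly cost of keeping a buffer of components on hand."""
    buffer_units = UNITS_PER_MONTH * months_of_buffer
    return buffer_units * UNIT_COST * CARRY_RATE

def disruption_loss(months_of_outage: float, months_of_buffer: float) -> float:
    """Profit lost when suppliers stop shipping for a stretch of time."""
    uncovered_months = max(0.0, months_of_outage - months_of_buffer)
    return uncovered_months * UNITS_PER_MONTH * MARGIN_PER_UNIT

# Compare a pure just-in-time approach (no buffer) against modest stockpiles
# during a hypothetical two-month supply disruption.
for buffer in (0.0, 1.0, 3.0):
    total = carrying_cost(buffer) + disruption_loss(2.0, buffer)
    print(f"{buffer:.0f}-month buffer: total exposure ${total:,.0f}")
```

Under these made-up numbers, the no-buffer case loses the most to the disruption, while the three-month buffer trades that loss for a larger ongoing inventory bill, which is the cost companies would likely pass on to consumers.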

iPad Trackpad Support Doesn’t Kill Jobs’ Vision for the Device. It Reinforces It.

For the past few years, with every iteration of the iPad it releases, Apple takes more steps toward bringing the iPad closer to the modern perception of a computer. Last year’s iPadOS completely changed the way the iPad is used, finally adding much-requested external storage support, desktop-class browsing, and multitasking rivaled only by full-fat PCs. However, Apple’s most recent step toward developing the iPad as a laptop replacement has garnered as much backlash as support. Apple’s choice to add native mouse and trackpad support to the iPad is a bold move and a massive step toward making it a viable laptop replacement for many, but its implementation has come with criticism over what could be viewed as a betrayal of Steve Jobs’ original vision for the product. When Jobs introduced the iPad as a “magical piece of glass”, the device became synonymous with touch input, and that touch input became synonymous with the versatility that would hopefully allow the iPad to lead the post-PC world. What made the iPad so special 10 years ago, and what still makes it special today, is the sheer magic and versatility that comes with it. You can use it anywhere you want, thanks to its casual input methods, small and light profile, and comfortable design. You can use it anytime you want, thanks to its all-day battery life and always-connected abilities courtesy of cellular options. And you can use it any way you want, thanks to a versatile touch-centric OS that allows for capabilities and applications that are simply not possible on conventional computers. This last point is why mouse and trackpad support on the iPad is not a betrayal of Steve Jobs’ vision for it, but an advancement and reinforcement of it. The backlash that came with this announcement stems not from a mistake on Apple’s part, but from a misunderstanding on the critics’ part. Those who attack trackpad and mouse support in the name of protecting Jobs’ vision actually have the opposite effect. Touch input is not what makes the iPad special; versatility is. On top of this, the introduction of trackpad support doesn’t change the fundamental idea behind the iPad, or what Steve Jobs envisioned it to be. First and foremost, the iPad is still a touch device; mouse and keyboard support just provides more functionality and versatility. The versatility of the iPad is what makes it so special: it doesn’t define the way you use it, it is defined by the way you use it. Another input method in mouse and trackpad support enables all-new workflows that weren’t possible or productive with touch input before, simultaneously reinforcing Jobs’ vision for the iPad and realizing it more fully than ever.

The iPad mini is still the stupidest piece of crap on the planet, but that’s a rant for another day.

Technology’s Impact on the Way We Work.

When Alan Turing refined what would become the first iteration of the modern-day computer back in the late 1940s, he had no way of knowing the impact it would have on the world future generations would inherit from him. With the help of many great thinkers, the computer has truly infiltrated every corner of our lives since Turing’s initial model, defining many aspects of everyday life. One aspect of life impacted in a particularly large way by the computer’s rise to prominence is the way we work and the way our work interacts with the rest of our lives. Through multiple separate avenues, the computer has come to define the way we work while simultaneously blurring the line between our professional and personal lives. One way the computer has done this, more observable now than ever, is its enabling of remote work. The ability to work from home is entirely reliant on the computer, and what has now become the way that 43 percent of Americans work (according to CNBC) would be impossible without the presence and prevalence of the computer. Another way technology has changed our work culture is through email. The ability, whether a benefit or a curse, to maintain constant contact with colleagues has had a monumental impact on the way we work. Furthermore, this ability has further blurred the line between what’s work and what’s not by bringing work-related concerns and conversations into our homes, which had, for the most part, been a safe haven of personal life guarded from our professional one. Through these means and many more, technology has had a truly massive impact on the way we work and the way our work relates to the rest of our lives.