The Apple Watch was the first major new product line for Apple since the death of Steve Jobs, and, needless to say, it had a lot riding on it. Leading up to its announcement, speculation ran wild, with many hailing it as the next iPhone. Now, five years later, has the Apple Watch lived up to those lofty expectations? For Apple, at least, it has.

When the Apple Watch was finally unveiled, it was met with a good deal of disappointment. Some of that disappointment was understandable, as the Apple Watch could never live up to the insane amount of hype surrounding it, but some of it was definitely warranted. Ever since its release, the Apple Watch has felt more like an iPhone accessory than a standalone device, falling closer to Apple's AirPods than to its iPhone in terms of functionality and impact. But that doesn't mean the Apple Watch has been a letdown by any stretch of the imagination; it's just not what we expected it to be.

The Apple Watch is part of a broader technological future, wearables, in which our computing is delegated to smaller, simplified devices that we interact with in more subtle ways. The Apple Watch was never meant to be a standalone device; it just wouldn't work as one. Instead, it is meant as a supplement to preexisting devices, one that simplifies and advances their overall user experience, most notably by negating the need to pull out those devices for the simpler tasks the Apple Watch can handle. Like I said, the Apple Watch is a device for a future in which our computing needs are divided and spread across multiple platforms: AR headsets, wireless earbuds, and, currently, smartwatches like the Apple Watch.
While this future isn't quite here yet, for now the Apple Watch is an excellent companion to the iPhone, thanks to its health and fitness capabilities, its messaging and calling applications, and the simpler tasks it handles that negate the need to look at your iPhone.
Perhaps the most famous story of failure in Silicon Valley is that of Theranos. A unicorn poised to take over the Valley, with billions of dollars' worth of runway and massive hype behind it, the health tech firm's success seemed inevitable. But today, the name Elizabeth Holmes, and by extension Theranos, is not synonymous with success; it is synonymous with fraud and corruption. And it is at times like these that the firm's failure hurts the most. Theranos hoped to bring widespread instant blood testing to market, which, in a global pandemic like this one, would have made testing for viruses and diseases significantly less of a struggle.
Of all up-and-coming technologies, AI has, almost indisputably, the greatest amount of hype surrounding it. That hype largely stems from the fact that AI's true potential is relatively unknown, making its theoretical potential seem limitless. While this certainly makes it difficult to accurately gauge what artificial intelligence can really do, some of its theoretical applications are incredibly intriguing and show true promise. People have found ways to cram artificial intelligence into practically every use case, which might make you think that almost all practical use cases have been covered, but in reality, this isn't so. Perhaps the greatest use case for AI has not yet been discovered, or at least not realized, and in my opinion, that overlooked use case is more accessible programming. Over the past few decades, as computing has become more and more ubiquitous, programming has failed to become noticeably easier to adopt alongside it. While coding has certainly become more user-friendly in certain respects, it largely remains too difficult to be accessible to the masses. But AI could change this. AI, and by extension machine learning, could make programming drastically easier and more accessible to the general population. By using machine learning algorithms and pattern recognition to interpret simpler commands and scripts, machine learning could enable more versatile syntax for programming languages, meaning coding would be significantly easier to adopt. But why does this untapped application have so much potential? It's simple: more and more people would be able to realize their ideas if they had the ability to program them into reality, and when we can all realize our dreams, the world becomes a better place.
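To make the idea a little more concrete, here is a toy sketch of my own in Python. It is not a real machine learning system: a handful of regular-expression patterns stand in for the kind of intent recognition a trained model could do far more flexibly, mapping loosely phrased English commands onto executable actions. All names and patterns here are hypothetical illustrations.

```python
import re

# Toy stand-in for ML intent recognition: each pattern accepts several
# loose phrasings of the same command and maps it to a small action.
PATTERNS = [
    (r"add (\d+) (?:and|to|plus) (\d+)", lambda a, b: int(a) + int(b)),
    (r"multiply (\d+) (?:by|and|times) (\d+)", lambda a, b: int(a) * int(b)),
    (r"repeat (\w+) (\d+) times?", lambda word, n: word * int(n)),
]

def interpret(command):
    """Run the first action whose pattern matches the command, else None."""
    for pattern, action in PATTERNS:
        match = re.search(pattern, command.lower())
        if match:
            return action(*match.groups())
    return None

print(interpret("Please add 2 and 3"))   # 5
print(interpret("multiply 4 by 6"))      # 24
print(interpret("repeat ha 3 times"))    # hahaha
```

A real system would swap the regex table for a learned model that generalizes to phrasings no one wrote down in advance, which is exactly the versatility of syntax the paragraph above imagines.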
Over the past ten years, Apple has slowly but surely built up a case for the iPad as a replacement for the traditional computer and, by extension, the Mac. Yet even today, Apple continues to release new and improved Mac models, with even more rumored to be on the horizon. So, in a future where almost everyone uses an iPad for their daily computing needs, where does the Mac stand? The best answer seems to be as a device for the list of things the iPad can't do, a list which, thanks to rapid innovation, is constantly shrinking. What remains on that list is increasingly made up of edge cases, but a few of them may never truly work on the iPad. One such case is enterprise use. Because the iPad is designed around its hallmark portability, enterprise server work is one application it will most likely never handle to the same extent the Mac does. Another is app and web development, which typically requires machines more powerful than those the developed applications will run on, in order to maximize those applications' performance. But beyond these two use cases, which have relatively small user bases, the iPad should soon be able to do most of the important things the Mac can, negating the need for the Mac almost entirely.
Today, the Mac and the graphical user interface it popularized are ubiquitous; they're seen in coffee shops, classrooms, and airports all around the world. But it wasn't always that way. Back in 1984, when the original Macintosh was unveiled, it was met with about as much skepticism as excitement, with many unsure whether the computer's headlining graphics-based operating system would take off. At first, it didn't. It took years for the Mac to catch on, but eventually customers and competitors saw the genius in Apple's design, and slowly but surely the graphical user interface took over the computing world. Today, another of Apple's product lines is undergoing a story much like the Mac's: the iPad. When Steve Jobs announced the iPad a decade ago, it was met with a familiar mix of hype and skepticism. Ten years on, the iPad and its touch-based navigation are slowly taking the world by storm, and before we know it, the iPad will be as prominent as the Mac.
Over the past few years, Apple has made strides to turn the public's perception of the iPad from an entertainment device into a computer replacement, but one product stands in the way: the iPad mini. The iPad mini epitomizes everything that is wrong with tablets, with the key reason for its existence being entertainment. The mini's smaller screen restricts the kind of work that can be done on it to a far too extreme degree, making it much too impractical to stand in for a fully fledged laptop the way its larger siblings can and relegating it to a glorified larger phone, and it's this restriction that is so dangerous to the iPad's adoption and evolution. The mini's existence wouldn't bother me at all if it weren't such a threat to the rest of the iPad lineup. As I said before, the advancements Apple has made with the iPad are largely negated by the mini, which helps sustain the general public's view that the iPad works best as an entertainment device, not a next-generation computer.
For years, augmented reality has been relegated to the stuff of science fiction, but finally, after years of seeing it in books, movies, and TV shows, true consumer augmented reality may soon come to fruition. For the past few years, more and more big tech firms, such as Google, Facebook, and Apple, have been hopping on the AR train, but each of these firms seems to have a different idea of AR's applications and what it can really be. Social media companies like Facebook and Snapchat are developing AR for entertainment and social uses, in line with the services they provide. More business-facing firms such as Google and Microsoft are pushing their AR products for enterprise use, whereas Apple, long rumored to be developing an AR headset, is seemingly building its AR platform primarily for consumers, with some speculating that said platform could evolve into a product with as big an impact on the tech landscape as the original iPhone. But which of these varied visions will AR fulfill? The answer is all of them, and none of them, at the same time. If AR does have as big an effect on the tech landscape as the smartphone did, and judging by the plethora of players involved, the chances are high, then AR will not be defined by these applications; instead, it will define new ones. When the iPhone came out, it didn't disrupt the smartphone space, it disrupted the personal computer space, and it redefined many computer applications, such as communication and entertainment. AR will do the same: rather than being restricted by preexisting applications, AR will create new ones, informed by the wide range of hardware and software potential the platform makes available.
For the past 40 years, computer innovation has been driven largely by a demand for greater accessibility. First, computers became small enough to fit on your desk. Then their operating systems became easier to use, weaning off of text-based interfaces and adopting far more user-friendly graphical ones. After that, computers became even smaller, so we could viably take them anywhere in a backpack or pocketbook. Next, they became connected to one another with the advent of the internet, revolutionizing global communications and making data more accessible than ever before. Most recently, they became small enough to fit in our pockets, and simple enough to be controlled solely by our hands, without any peripherals in between. Through all of these advancements, computers have become more and more accessible, both in ease of use and in availability. Now, however, many are quick to claim that this rapid innovation in the computer space has stagnated, that the well of computer innovation has run dry. This is not the case. While innovation in the computer space is definitely not as visible as it was a decade ago, it certainly hasn't stopped. What has changed is the goal those innovations are made in pursuit of. The last few decades' goal of widespread computer accessibility has largely been met, with more people using computers than ever before thanks to those very innovations. The well of computer innovation has not run dry; what has, in reality, is the well of computer accessibility innovation. Now, as I said, computer innovations are being made in light of a new goal: integration. A majority of the computer innovations of the past decade have been made to help integrate computers into more fields. AI advancements push virtual assistants into our homes through smart home devices.
Machine learning is being used to put more powerful computers in our cars, with the ultimate goal of self-driving capability. These advancements build upon those made by innovators who worked to make the computer better, and now computers are being used to make every aspect of our lives better. So, to answer the question "Has computer innovation plateaued?": no, it has not. It has simply become part of a larger system: human innovation.
Over the past few decades, the computer has quickly made itself an indispensable staple of our lives. Evidence of this is everywhere: almost everything we interact with either contains a computer or is in some way reliant on one. But as computers have become more and more advanced, and at the same time more tied to our lives, have these advancements truly made us better? As computers become more advanced, they simultaneously become more accessible, both in how many people can operate them and in how easily those people can do so. However, this advancement brings with it an oft-overlooked side effect: a decline in computer literacy. In the earliest days of the personal computer, one needed specific, deep knowledge of a machine to operate it to the fullest extent, mostly because its less user-friendly experience relied on text-based input rather than easier interaction methods, such as the graphical user interface, or GUI for short. Once accessibility innovations like the GUI were introduced, countless people who could never have operated a computer before had a whole new world opened to them, but at the same time, the necessity of having such a vast knowledge of the computer one was using disappeared. That is the tradeoff with accessibility-based innovations in the computer space. So the question is: is it more important to have more well-versed users, or simply more users? This predicament exists because computer literacy has not become more accessible at the same rate that computer use has. While it has certainly become substantially easier to learn software and hardware engineering and design over the past few decades, that growth in ease is nowhere near as substantial as the growth in the ease of computer use itself.
What we need now is for these two growth rates to meet and then continue to grow together, because if they do not, innovation could truly plateau from a lack of ideas from those with the time and resources to become proficient with computers.