Why Apple should make an ARM Mac

In the tech world, a successful transition is a rarity. Flops like Windows Vista, Windows 8, and, well, pretty much every version of Windows after '95 serve to prove this point. This curse, however, seemingly does not afflict Apple. Throughout its history, the company has pulled off numerous successful transitions: the move from Mac OS 9 to Mac OS X, even further back, the move from the Apple I to the Apple II, and, most relevant to this topic, the monumental architecture transition from PowerPC to Intel. The Intel transition should go down in history as a master class in product transitions: in just one year, Apple successfully, and, more importantly, gracefully, moved the entire Mac lineup from the PowerPC architecture to Intel's, all while seeing widespread adoption after doing so. This transition does not get nearly as much attention as it deserves, and even compared to everything the company had done before and has done since, it still stands as one of the most impressive feats in Apple's history.

But now another, equally massive transition could be coming, and if the rumors are correct, the Mac could be moving to ARM. The ARM architecture, which powers Apple's iPhones, iPads, and Watches, has several key advantages over Intel's, which is currently in use in all of Apple's Macs. For one, ARM chips are significantly more efficient; their lower power consumption can equate to as much as double the battery life of comparable Intel machines. Furthermore, ARM processors support integrated cellular radios, which, with the advent of 5G, could prove tremendously useful. Finally, and specific to Apple, the company already produces ARM chips for its phones, watches, tablets, and more. This could make Macs running ARM chips significantly cheaper than their Intel predecessors, as in-house processor manufacturing would severely undercut the cost of production.

While all of these prospects could serve as reasons why Apple could make an ARM-based Mac, they're not why Apple should make one. In fact, the reason Apple should make an ARM-based Mac doesn't even pertain to the Mac at all; it pertains to the iPad. Apple sees the iPad as the future of the personal computer, but it's pretty clear that not everyone else does. While the company has made strides to bring the iPad closer to its vision, such as a dedicated, first-party hardware keyboard, or more recently, iPadOS, with its range of previously Mac/PC-exclusive abilities, it is still clear that many people don't see the iPad as a computer. A major contributing factor to this sentiment is the lack of apps, many of which are deemed essential to a computing experience. This includes apps like Lightroom, Photoshop, and even Apple's own Final Cut, apps that entire careers are built on, yet that have no equivalent offering on the iPad. The reason comes down to a lack of developer support: developers don't see the iPad as a viable platform for their applications given the time and cost needed to develop for it. A large portion of that cost comes from having to port or translate PC or Mac apps designed to run on Intel processors into iOS apps designed to run on ARM processors.
Porting an app from Mac or PC to iPad or iPhone is no easy task, and developers simply don't see it as a worthwhile one, given that the large majority of computer users reside on the Mac because that's where the apps are. It's a catch-22. However, an ARM Mac could break this loop. Releasing a Mac based on the ARM architecture would force developers to build versions of their apps for it or risk losing relevance for failing to do so. Once they had developed ARM ports for the Mac, a large hurdle in porting to the iPad would be removed, making that port a far easier pill to swallow. This, with all the advantages of the ARM architecture on top of it, is why Apple should put out an ARM Mac: not for the future of the Mac, but for the future of the iPad.

The iPad is a great computer, but it could be the greatest.

Recently, it's gotten easier to wrap your head around the idea of the iPad as a computer. Through Apple's addition of features like split-screen multitasking, a file manager, and mouse support, the iPad is more like a computer than ever before. But it can be so much more. While these new additions are great, they take away some of the fundamental advantages that a tablet has over a traditional laptop. Tablets can be much more versatile, portable, and intimate computers than laptops can, yet these recent updates essentially turn the iPad into a touchscreen Mac. What allowed the iPad to garner so much hype before its launch was the possibility of a whole new, completely different form factor for computing, one that would allow for truly endless possibilities and applications. Instead of pursuing this future, Apple seems intent on building an easier-to-use Mac, which isn't what people want, when it should be pursuing a wholly different device, one that does things the Mac can't by harnessing the iPad's unique hardware, software, and form factor.

Apple isn’t done for… yet.

The Apple Logo, a worldwide symbol of reinvention and innovation
Image via Apple.com

Ever since the passing of Steve Jobs, the idea that Apple is not what it used to be has grown and grown. This sentiment has been amplified in recent years, as made clear by countless articles, YouTube videos, and podcast rants that all seem to say that Apple is nothing compared to what it was under Jobs. While I personally believe that Apple has innovated less in recent years, I do not think that has everything to do with the absence of Steve Jobs. Jobs was a key player in realizing Apple's innovations and making them ubiquitous, but his absence is not the prime reason those innovations have been seemingly few and far between of late. I think that innovation as a whole has plateaued recently, as a result of a transition in the technology landscape. Most of Apple's (and Silicon Valley's as a whole) innovations of the past two or three decades revolved around making tech more personal and accessible. The sole purpose of the original Mac's game-changing graphical user interface was to widen the accessibility of the personal computer, so that even a child could understand it. The iPhone put a computer in all of our pockets, virtually erasing the need for a traditional computer for most tasks and forcing the PC to adopt new features to remain viable, all while making computers trendier and more omnipresent than ever before.

But now, computers are ubiquitous. They aren't scary, massive boxes like they used to be. They're stylish, popular, and they're everywhere. Everyone can use them, from four-year-olds to senior citizens; computers couldn't be much easier to access or to use. What once was a push for innovation is now an upper limit that stalls it. But this doesn't mean that Apple will never innovate again. New technologies, like AR, haven't yet been able to infiltrate the consumer landscape, and it's up to innovative companies like Apple to push them to do so. Technology is always advancing, and we will always need companies like Apple to make its newest evolutions available to us.

We need an open source, easy-to-contribute-to virtual assistant.

I typically don't support open source technologies. They tend to be unpolished, inferior products that lack good user experiences and have unfocused visions and purposes. However, in the case of virtual assistants, I think the field could benefit greatly from an open source offering. The problems with most virtual assistants stem from privacy concerns, as with Amazon's and Google's, or from inferior databases, as with Apple's Siri. An open source virtual assistant could curb these setbacks by allowing contributors to build up the assistant's database, and to do so according to their own will. Both users and manufacturers could develop apps for the assistant and integrate their products directly with it, specializing the features the assistant offers when used with those products, while users could also create a more personalized, user-friendly assistant that caters to their specific needs in ways other assistants can't. Eric von Hippel, an expert on user innovation, emphasizes the benefit of user innovators (users creating and contributing to something for their own gain) being able to make products that function for them, whereas manufacturers and companies are forced to take a one-size-fits-all approach to maximize reach and market share. I think that, while a one-size-fits-all approach is a good one in many situations, for a product that is supposed to be tailored to the user, a broader range of contributions and an information database voluntarily provided by users could lead to unprecedented success in creating a more user-friendly and capable virtual assistant.
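To make the contribution model concrete, here is a minimal sketch, in Python, of what a plugin-style skill registry for such an assistant could look like. Everything here (the registry, the keyword dispatch, the example skills) is my own hypothetical illustration, not any existing assistant's API.

# Hypothetical sketch of an open, contributable skill registry.
# All names here are invented for illustration.
from typing import Callable, Dict

SKILLS: Dict[str, Callable[[str], str]] = {}

def register_skill(keyword: str):
    """Decorator that lets any contributor attach a handler to a trigger keyword."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        SKILLS[keyword] = fn
        return fn
    return wrap

# A manufacturer could contribute a skill for their own product...
@register_skill("thermostat")
def set_thermostat(query: str) -> str:
    return "Setting your thermostat."  # stub: a real skill would call the device's API

# ...and an individual user could add one tailored to their own routine.
@register_skill("medication")
def medication_reminder(query: str) -> str:
    return "You usually take your medication at 8 pm."

def handle(query: str) -> str:
    """Dispatch a query to the first registered skill whose keyword it mentions."""
    for keyword, skill in SKILLS.items():
        if keyword in query.lower():
            return skill(query)
    return "Sorry, no contributed skill handles that yet."

print(handle("Turn up the thermostat"))  # -> "Setting your thermostat."

The point of the sketch is the openness: the dispatch logic is trivial, but because anyone can register a handler, the assistant's capabilities grow with its community rather than with a single company's roadmap.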

Analog Concepts in a Digital World Part 1: Anonymity

Why do we wear masks? There is an inherent seduction that comes packaged with the mystique of a secret identity. We feel empowered by secrecy; it is like a drug, making us feel as if we can do anything, because no one is looking. The KKK used anonymity as a weapon, hiding behind masks, committing treacheries under the cover of the moon, and acting like different men with the sun's rise. Technology, like a mask, empowers us through secrecy: we can attack, hurl vicious insults, and destroy people with a keyboard and mouse, without them ever seeing our faces. Virtual secrecy is a danger often left unchecked, because in the past, fear of public humiliation or shame for attacking others, verbally or physically, kept us in line. But now, technology enables the deepest levels of secrecy, and people use this to their advantage to wage virtual war with bombs and planes of derogatory remarks and insults. So the question is: will we become less sensitive, or more high on power?

The future of computing through fractal geometry.

Fractal geometry, to put it simply, deals with the dimensions in between the three "Euclidean" dimensions of length, width, and height; it describes the space that those whole-number dimensions leave out. For example, a Euclidean definition of a mountain would be a cone, as a cone can easily be described by its length, width, and height, whereas a "fractallian" definition would describe the mountain through its components. This follows from the core principles of fractals, self-similarity and recursion: each component of a fractal is similar to the whole, and that self-similarity repeats at infinitely fine detail, going on forever. So how do I go from writing about iPads and the future of computing to this? The jump is not as far as it seems. In my mind, fractals hold the key to the future of computing. A key property of fractals is infinite detail in a limited amount of space, a property that could translate to computing quite cleanly. Standardized data with minuscule variations could allow for self-similar binary (1s and 0s, for the non-tech-savvy) that is infinite in detail while still finite in the amount of space it takes up. This would allow for effectively infinite storage and memory, which would simultaneously expedite the convergence of memory and storage and nullify another limitation on compute power. However, this concept hinges on translating Mandelbrot's formula into binary. The formula itself is an iteration, z_(n+1) = z_n^2 + c: start at zero, then repeatedly square the result and add the constant c. Translating it to binary would work in a similar way to its translation into color, where a point c whose values stay bounded is drawn black (it belongs to the Mandelbrot set), and a point whose values race off toward infinity is assigned a color based on how quickly it escapes. Once this translation is worked out, a major barrier in computing could be broken, and major technological leaps could be made.
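For readers who want to see the escape-time idea in action, here is a minimal sketch in Python of the standard Mandelbrot iteration described above. The iteration cap of 100 and the crude ASCII rendering are illustrative choices of mine, not anything prescribed by the formula itself.

# Escape-time sketch of the Mandelbrot iteration z -> z^2 + c.
def escape_time(c: complex, max_iter: int = 100) -> int:
    """Count iterations until |z| exceeds 2; return max_iter if z stays bounded."""
    z = 0
    for n in range(max_iter):
        if abs(z) > 2:   # once |z| > 2, the orbit is guaranteed to diverge
            return n     # escaping points would be colored by this count
        z = z * z + c
    return max_iter      # bounded points belong to the set and are drawn black

# Render a coarse ASCII view of the complex plane: '#' marks points in the set.
for im in range(4, -5, -1):
    print("".join(
        "#" if escape_time(complex(re / 8, im / 4)) == 100 else "."
        for re in range(-16, 5)
    ))

Points in the set (the "black" points) are exactly the ones where the iteration never escapes; everything else gets a value, here just an escape count, that a renderer would map to a color.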

After 10 years, can we finally call the iPad a computer?

Steve Jobs with the original iPad
Image via Digital Trends

10 years ago on this day, the post-PC era began. At least, according to Steve Jobs it did. 10 years ago, Steve Jobs sauntered onto a San Francisco stage to announce possibly the most anticipated product in his company's history: the iPad. To Jobs, the iPad was more than just a tablet; it was the final nail in the PC's coffin and the first glimpse at the future of computing. But 10 years after its unveiling, many are still hesitant to call the iPad a computer. This has nothing to do with the iPad's functionality today, and everything to do with the iPad's functionality 10 years ago. When the iPad first came out, it seemed to be focused on content consumption. Its reveal keynote highlighted its potential as an e-reader and its video streaming prowess rather than paying attention to applications in writing, business environments, or content creation, all fields that were dominated by the PC. While it's obvious why the iPad was so centered on content consumption, given the tempting ease of advertising the device as a portable TV or a slate that lets you carry the Library of Alexandria in your backpack, these weren't the features people were looking for in a tablet. They wanted Word, Photoshop, GarageBand, and other apps previously exclusive to PCs and Macs. This association between the iPad and content consumption is still strong in the minds of many today, and it is the key reason so many people have such a difficult time seeing the iPad as a computer the way they see their laptop or desktop as one, when in reality the iPads of today are entirely different beasts from the first one ten years ago. Modern iPads check all the boxes of a computer: they have physical keyboards, file managers, productivity and content creation applications, large screens, and other features key to a computer replacement. So yes, the iPad can be called a computer, and it should be, but that doesn't mean that everyone will.

Is it too early to ditch the iPhone’s last port?

The Lightning Port
Image via CNBC

Apple is known for making its devices as simple as possible by taking away what it deems to be unnecessary additions. This policy goes all the way back to the original Mac, which didn't feature arrow keys, in order to push users toward the mouse. More recently, it can be seen in the removal of the headphone jack on the iPhone and the narrow port selection on current MacBooks. While this practice has come under fire, it has allowed for the adoption of more futuristic technologies, such as the aforementioned mouse, or wireless earbuds like Apple's own AirPods, which wouldn't be nearly as popular as they are if you didn't need a dongle to connect regular wired headphones. However, we might not be ready for the next step in this line of removing complications from our devices. According to multiple reports, the one with the most weight coming from CNBC, Apple could ditch the Lightning port in favor of wireless-only charging on the next iPhone. Such a move would certainly be true to Apple's history, but are we ready yet? Wireless charging has seen fairly widespread adoption, and the technology has some clear benefits over conventional wired charging: it's easier to set your device down to charge, easier to pick it up off the charger, and it allows for a sleeker design. It's also important to note that, unlike the removal of the headphone jack, the removal of the charging port would not be that big of a loss. With the massive batteries in today's iPhones allowing them to last all day without needing a charge, one of the major advantages of conventional charging, portability, is eliminated. You don't really need a charging port when you have a wireless charger, which would presumably be included with the new iPhone in place of a conventional cable and brick, and when you don't need to top off your phone during the day, you don't need to bring a brick and cable with you. Sure, on road trips you might still want a wired charger, but I foresee new cars coming with built-in wireless charging pads, and for now, many companies make portable wireless chargers. I think that, just as with the arrow keys, the headphone jack, and countless other examples, taking something away would be giving you an experience that is that much greater.

Should Apple release a cheaper iPhone?

The iPhone SE, the last “budget” iPhone Apple released
Image via The Next Web

Perhaps the most outspoken criticism of Apple in this day and age targets the price of its products. First it was the iPhone, then the iPad Pro, and most recently, the Mac Pro. The second Apple releases a new product, someone is already making a Twitter post about its "unbearable" price tag. These calls for cheaper products often go unanswered, isolated in the echo chambers that are Twitter and Facebook groups. But according to recent rumors, Apple could finally be listening to calls for more affordable products. According to a rumor first publicized by Bloomberg, Apple could be working on a cheaper iPhone, rumored to be called the iPhone 9, that would serve as a successor to the popular iPhone SE, a more affordable offering from a few years ago. The new model would reportedly go back to the older home button design, last seen on the iPhone 8, with lesser specs and a smaller screen to keep costs down. But while a cheaper iPhone sounds like music to some people's ears, such a move could bring several less obvious problems. The first pertains to the product lineup. As I have previously stated, Apple's current product lineup is in pretty rough shape compared to those of previous years. Each product line has way too many devices in it, each with names that mean almost nothing and say almost nothing about the product; on their website, they're selling four different iPhones, four different iPads, and four different Mac desktops. I'm all for covering all your bases, but making as many products as you can is not the way to go about it. For each new product you add, you lose some focus and make buying harder for consumers, while simultaneously hindering the user experience as products become more specialized and less general. This brings me to my biggest concern about introducing a cheaper iPhone: a lesser user experience. Steve Jobs understood the importance of imputing, or giving a good first impression, to your customers. This understanding led to the amazing user experience that built the foundation for Apple to become the most valuable company in the world. When you introduce cheaper products with a more limited set of features and a worse user experience, you impute a lesser image to your customers. I understand the need to reach users through financial accessibility, and I can see the business benefits of this move, but I think the effect it would have on the brand and brand image as a whole far outweighs the benefits of a little more market share and a little more money to talk about in your annual report.

Have subscription services ruined our connections with products?

Before iTunes, the only legal way to download music onto your computer was through subscription services. But these subscription services were not like the Spotifys and Pandoras of today. No, they truly sucked. They were clunky, hard to navigate, and, since they were often run by individual record labels, lacked a lot of popular songs. In people's eyes, however, the biggest problem with these subscription services was that they didn't own the music they were paying for. Steve Jobs understood this, and he made owning the music you purchased the foundation for iTunes, one of Apple's most successful products. But in 2019, subscription services rule the market, and only a small percentage of people still buy their music. So what changed? Was it us? Was Steve Jobs wrong? No. What really made iTunes so much better than the subscription services it was contemporary to was not that you owned the music sold on it, but the connection you felt with the music for owning it. Today, music streaming services replicate this connection through user profiles, which allow us to define ourselves through the music we listen to and the playlists we make. We have this connection with the music on these services not because of whether or not we own it, but because of the things we can do with it. We can make playlists and share them with the world, and that makes the music feel like it is truly ours. User profiles that suggest music based on our tastes contribute to this connection as well. We feel connected because we are: the things suggested to us are based on what we listen to, and our user profiles, playlists, and recommended feeds feel like a true reflection of us, just as much as owning a record or a CD.