The Great Divide

An Old IBM Mainframe, Quite Scary to the Average American in the Days Before the Dawn of The Personal Computer
Image via ARN

Over the past 60 or so years, the layers between humanity and the computer have gradually been pulled away, bringing the two closer and closer to their inevitable collision into one. Where computers were once room-sized machines used by highly trained, remotely positioned operators, whose inventors never could have dreamed of widespread consumer adoption, modern distillations of the computer can be manipulated and controlled directly with the touch of a finger, and are usable even by a toddler with no conception of what a computer is. And it's easy to see how we got from point A (the mainframe) to point B (the smartphone): each major iteration of the computer has slowly played into this idea of peeling away the layers of abstraction between us and the machine.

The Apple II (Often Stylized Apple ][), the First Mass Market Computer Apple Produced
Image via The Interface Experience

The first PCs, in their replacement of the mainframe, put the entire computer on our desks for the first time, with a screen that let us directly witness our interactions with our machines in a way not possible with the remotely operated computers that preceded them. After that, the Mac, and the countless Windows PCs that followed it, introduced and popularized the graphical user interface, stripping away the one-dimensional, text-based interaction layer and replacing it with more user-friendly and efficient graphics-based UIs, while the mouse brought us one level of interaction closer to our computers. Early laptops, such as the PowerBook 500, then introduced even more immersive input methods, most notably the trackpad, allowing us to use touch to operate our machines effectively for the first time. From there a clear line can be drawn to the smartphones of today: touch-controlled windows into our digital lives that are with us wherever we go, the closest we, as a species, have yet come to our computers.

The Apple PowerBook 540C, The First Laptop to Feature a Trackpad
Image via Wikipedia

The next step is an obvious one. The logical evolution is to interface directly with our computers, removing any remaining layers of abstraction between the man and the machine and finally making them one. As tools, computers are already extensions of our hands and of our minds, but we are limited in our connection to them. The last hurdle that remains between us and our computers is, in fact, our bodies. To achieve the best possible efficiency, immersion, and ease of use, we must be able to interface with our computers directly.

While the first hurdle in bridging this technological divide is an engineering one, the problem of interfacing our nervous system with the computer, the larger issue standing between us and the widespread adoption of this eventual, and inevitable, technology is not an engineering conundrum at all, but a design one. It's here that the adoption of direct human-computer interfacing begins to differ from the aforementioned instances of bringing humans and computers together. Across all of those previous instances, it was primarily the work of one man that led to their widespread adoption: Steve Jobs. He may not have invented the first PC, designed the first graphical user interface, or pioneered the first touchscreen computer, but what he did for each of these technologies, and for society, was popularize them. He took established technologies and made great products out of them, products that people wanted, products that changed the way we use computers like no others could; that is how he changed the world. Before the Apple II, the computer was scary. Before the Mac, a majority of people would never have been able to operate a computer; the graphical user interface changed that. And without the iPhone, we might still be using shitty PDAs with physical keyboards, crummy styluses, and unbearable desktop-port software. But this time, we won't have Steve Jobs with his intuition and marketing prowess to pull us over the next great divide. The man who put computers on our desks, then in our hands, then in our bags and in our pockets won't be here to put them in our heads; we're on our own.

Steve Jobs, Pictured with the Original Macintosh (1984), the First Commercially Successful Computer to Feature a Graphical User Interface, a Design Still Used in Some Form by Most Computers Today.
Image via the Verge

The key difference between the next great divide and those that preceded it stems from its nature. Previous advances mainly centered on adapting and advancing the computer, changing it to better suit its role as a tool for humanity; the next great divide will demand that we adapt as much as it demands the machine to do so. And that's scary. The thought of putting something in our heads, even if it doesn't require any real surgery, is more than enough for a large chunk of the population to shrug off bridging the divide. So how do we package it, how do we make the computer friendly, like we did 50 years ago? Well, we could make it impossible, or nearly impossible, to say no and lean on the fear of missing out; such an approach often works when pushing hesitant people to try drugs. If you want hesitant consumers to adopt your product, make it so good they can't say no, put those who are scared or who refuse to adopt at such a disadvantage that they can't afford to go without it, and let those who still refuse suffer the consequences: survival of the fittest. Of course this sounds harsh, and it would be packaged much more nicely than this, but it's important that we get everyone possible involved, as this is, fundamentally, the next step in human evolution.
