Preface:
Yesterday, Wednesday, January 6th, 2021, our country witnessed one of the greatest assaults on our democracy in its history. When President Trump’s supporters (whether he wanted them to or not) marched upon Capitol Hill, in scarring and degrading the halls of the Capitol building, they scarred and degraded our nation, and its great history, in the process. But while the broken glass and discarded Confederate flags can be cleaned up, the blood-stained walls scrubbed, and the numerous artifacts and articles stolen in this desecrating siege replaced, the deeper wounds we bore yesterday, those we took as a country, will take much more effort to heal.
But before we can heal, a process that will take the combined effort of all citizens of this great nation, no matter their political affiliation, there is something else we must do first. We must ensure that the injuries our nation bore yesterday are never sustained again. And that means destroying the algorithm.
I. Algorithms, Elusive and Ever-Present
Before I explain the algorithm’s role in the tragedy our nation felt yesterday, it’s crucial to establish what exactly the algorithm is. But first, a note: there is not one algorithm. There are tens of thousands, likely even hundreds of thousands, possibly even millions, and, in our daily navigation of our ever more technologically infused world, we probably encounter dozens if not hundreds of algorithms every day without even knowing they’re there. And because of just how many different algorithms are in use, it’s getting harder and harder to pin down a good definition for them without making a gross overgeneralization, but, likely in vain, I will try. An algorithm, in technological terms, is, in essence, an equation, not unlike those you would find in the algebra textbooks on the desks of middle and high school students all over the world. These equations (which really isn’t the right word, but it’s the closest one to it) take in data, often collected from users like you and me, interpret said data, and then use the interpreted data to inform a systematic decision. Like I said: incredibly vague, and a gross oversimplification. And this is the first rule of algorithms: they’re elusive. The second: they’re everywhere. And thus, to best visualize both the nature of algorithms and, simultaneously, their widespread presence, I have few choices other than to provide a few examples:
You know how, when you listen to music on Spotify or Apple Music, these services begin, over time, to suggest music to you, often music that you find enjoyable? Well, that’s because of an algorithm. The algorithms that both of these services (and countless others like them) use are able to figure out what music you like and, from that information, suggest similar music to you, whether by comparing your listening patterns with those of other users and cross-referencing them, or by digging deeper, looking at music from similar artists and genres, or even by comparing the shapes of the waveforms of different songs. And this all seems great, right?
Everyone loves when Spotify, Apple Music, Tidal, or whatever music streaming service they use recommends a new song or artist they never would have found before. Or when you’re scrolling through TikTok, and, because you liked an earlier video that had a certain hashtag in its description, you encounter another one, and your entire For You page is filled with content that seems like it’s tailored directly for you. The same is true for Netflix, when it recommends movies and TV shows based on things you watched previously. And when you go to the “For You” section of Google News, and you find the news covering all the topics you’re interested in aggregated in one place. All of these great things we have, all of these incredible benefits to our individual user experiences, we have thanks to algorithms. The same is true for Amazon. And Pinterest. And Reddit. And eBay. And Siri. And Facebook.
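To make this a little more concrete, here is a toy sketch of one technique widely used for this kind of recommendation, known as collaborative filtering: find the listener whose history most resembles yours, and suggest whatever they play that you haven’t heard. This is only a sketch under my own assumptions; every name and number is invented, and no real service is anywhere near this simple.

```python
# A toy collaborative filter. All data is made up for illustration;
# real recommendation systems are vastly more sophisticated.
import numpy as np

# Rows are users, columns are tracks; values are play counts.
tracks = ["Bob Dylan track", "Track B", "Kid Cudi track", "Track D"]
plays = np.array([
    [12, 0, 5, 0],   # you
    [10, 3, 6, 0],   # a listener with similar taste
    [0,  9, 0, 7],   # a listener with very different taste
])

def cosine(a, b):
    # Similarity of two listening patterns, ignoring overall volume.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Find the listener whose pattern most resembles yours...
sims = [cosine(plays[0], plays[i]) for i in range(1, len(plays))]
neighbor = plays[1 + int(np.argmax(sims))]

# ...and suggest whatever they play that you haven't heard yet.
suggestions = [t for t, mine, theirs in zip(tracks, plays[0], neighbor)
               if mine == 0 and theirs > 0]
print(suggestions)  # -> ['Track B']
```

The waveform-based comparison mentioned above works the same way in spirit, only with audio features in place of play counts.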
II. The Dark Side of Algorithms
It’s here where the problem with algorithms lies. They are not inherently bad, or good, for that matter. This depends entirely on their execution: how they are used, and their designers’ intentions in using them. For example, Spotify’s use of algorithms (and that of the many others who implement them similarly) as a way of pushing new music to users is completely fine, great even, as it benefits the user experience at little to no cost. However, the story with Facebook (and Twitter, Google, and YouTube as well, though for now we will focus on Facebook for the role it played in yesterday’s events, which I’ll get to) is entirely different, even though, in essence, the algorithm isn’t. Like Spotify’s (and that of almost every company that uses them), Facebook’s use of algorithms revolves around taking in data, interpreting it, and using it to inform later operations; in this case, as with Spotify, deciding what type of new content will be presented to users. Not that different, right? Wrong.
You see, while Spotify applies this principle to feed users songs and artists they will likely enjoy and might not have naturally encountered, entities like Facebook employ the same tools to exploit and manipulate our own natural political biases and feed us the things we want to see, the things that align with our political beliefs, leading to the spread of disinformation, the creation of echo chambers, and, if allowed to get out of hand, the kind of violence we saw yesterday. It works on both sides, the left and the right.
But how does Facebook take something as harmless as a musical suggestion algorithm and turn it into a tool for the construction of echo chambers and a medium for the spread of false information? Well, it does the same exact thing that Spotify does; only the information it collects, and the content it feeds us in return, differs. Where Spotify will see you listening to a song by Bob Dylan or Kanye West, run it through its algorithm, and determine that, from that information, you might like a song by someone like the Beatles or Kid Cudi, Facebook will see that you liked or commented on a post that made reference to Black Lives Matter or, conversely, Trump 2020, run it through its algorithm, determine that, from that information, you would be interested in other posts and conversations that revolve around either of these things, and gradually fill your feed up with posts pertaining to them. But, and this is where Facebook is most dangerous, the posts become more and more radical, more and more fringe, gradually drawing you away from the center, away from any real logic, and towards the hateful, anger-driven extremes that exist at the opposite poles of each party. This is how the echo chambers are formed. This is how you get the idea that the coronavirus is all a big hoax, or that the vaccine is a ploy by Bill Gates to put a chip in your brain, or that everyone needs to refer to each other by pronouns and that not doing so is hateful, or the extremities of identity politics that we encounter so often today, or, more pertinently, the idea that the election was stolen, an idea from the right that echoed the left’s response that Russia stole the election four years ago. Facebook, and other social media platforms for that matter, thrive off of this. They keep us engaged for hours a day because of this. Why? Because, secretly, we love it.
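To illustrate the mechanism, here is a deliberately oversimplified sketch of engagement-based feed ranking. Every field, name, and number is hypothetical, and Facebook’s real models are, of course, far more complex; the point is the feedback loop, in which topics you already engage with score higher, and emotionally charged posts score higher still.

```python
# A hypothetical engagement-ranked feed; all names and numbers invented.
user_history = {"blacklivesmatter": 12, "trump2020": 0, "gardening": 1}

posts = [
    {"id": 1, "topic": "blacklivesmatter", "outrage": 0.2},  # moderate
    {"id": 2, "topic": "blacklivesmatter", "outrage": 0.9},  # fringe
    {"id": 3, "topic": "trump2020",        "outrage": 0.9},
    {"id": 4, "topic": "gardening",        "outrage": 0.1},
]

def predicted_engagement(post):
    # Confirmation bias: you engage most with topics you already follow.
    affinity = user_history.get(post["topic"], 0)
    # Emotionally charged posts draw more clicks and comments, so the
    # same affinity pushes the fringe post above the moderate one.
    return affinity * (1 + post["outrage"])

feed = sorted(posts, key=predicted_engagement, reverse=True)
print([p["id"] for p in feed])  # -> [2, 1, 4, 3]
```

Each time you engage with the fringe post, your measured affinity for that topic grows, and the next round of ranking pushes you one step further from the center: the echo chamber in miniature.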
III. Confirmation Bias, The Greatest Threat to Our Democracy That Lives Inside All of Us
What allows these companies to do this is a little idea called confirmation bias. Write that one down in your notebooks, kids; it’s important. Confirmation bias is, well, I’ll let the Oxford Dictionary define it for you:
“The tendency to test one’s beliefs or conjectures by seeking evidence that might confirm or verify them and to ignore evidence that might disconfirm or refute them.”
…which is a fancy way of saying “show me what makes me feel I’m right, while hiding what might make me think I’m wrong”.
We all like to feel validated. We all want to know we’re right, even when, deep down inside, we know we might not be. These services know this, because, well, we all do; it’s in our nature. And, knowing this, they manipulate it. They take our own inherent human flaws, like confirmation bias, and exploit them, using algorithms to show us things that feed into these biases. And we love it. And we spend six hours a day on Facebook liking posts about how stupid or terrible or Hitler-like or racist or insane the other side is, without ever stopping to question whether some of it is even real. And the echo chambers echo louder, more baseless hate and anger bouncing off their walls every day. And the conspiracy theories, the disinformation, don’t seem so crazy anymore, and you share them with your family, or worse, you post them on your page, unwittingly and unknowingly perpetuating the problem. And all the while, Facebook rakes in the money. And its daily active users go up. And the liberals light fire to the cities. And the conservatives storm Capitol Hill. And America falls apart.
We’re sheltered from the harshest truths of the world, the ones that tell us we’re wrong, and, all the while, we’re blind to the role this has played in the destruction of our ability to have respectful and open-minded discourse, the first step toward the destruction of our society.
Facebook is the biblical Tower of Babel, or, more in line with modern literature, the Party from Orwell’s 1984. The same threat Jobs warned that IBM posed to the computer industry 40 years ago, Facebook poses to society today. Through its construction of echo chambers and its manipulation of confirmation bias, it separates us into bubbles, cutting off the discourse of different ideas between us in the process, and thus, just like in the Bible, the tower, or here, America, falls apart.
Confirmation bias is what’s at the heart of all of this. It’s an illness, really, and what’s most dangerous is that we all have it.
IV. The Role that Facebook (and Social Media) Played in 1/6
This brings me to the tipping point of the story; in fact, it was the point that led me to write this: what happened on 1/6, the darkest day for American democracy in a long line of dark days for American democracy.
This section will be a difficult one for me to write, as I must admit that I tend to lean left on the political spectrum, and, although I make every effort to see political conflicts through the eyes of both sides, I am aware of my own natural predisposition to lean to the left, as I am sure we all are aware of ours, no matter our political affiliation. Understanding this, and hopefully not having lost anyone, I will begin to discuss Facebook’s role in the events that transpired earlier this week, attempting to be as factual and unbiased as humanly possible in doing so, as I have tried to be for the entirety of this piece up to this point, and as I will continue to try to be following it.
On Wednesday, January 6th, 2021, a group of pro-Trump protestors, said to have been spurred by a speech the president gave earlier that day, marched upon Capitol Hill in hopes of overturning the Electoral College count that a joint session of Congress was set to conduct there that day. As the protest went on, a number of the protesters (not all of them) entered the Capitol building, causing a wave of destruction inside and leading to four deaths in the process.
This, I’m sure, you already know, but just in case you didn’t, well, now you do.
Now, onto how social media, and especially Facebook, led to these events.
As I said before, one of the biggest threats that Facebook, and the larger social media platforms it stands among, poses in the disintegration of our society lies in its role as a breeding ground for echo chambers. These echo chambers played a crucial role in the events we witnessed on Wednesday.
It’s easy to blame President Trump or his followers for the attack on Capitol Hill, and, while neither is free from blame, especially those who broke into the Capitol building and caused the destruction there, social media is as much to blame as anyone else is.
Without these echo chambers and the spread of misinformation that the algorithms of social media platforms like Facebook and Twitter afford and, more importantly, inform the construction of: A) the radical ideas (that the election was stolen) that spurred this group to do what they did would likely never have spread, or at least not at anything close to the scale that they did (see source number 1); and B) the level of organization that exists now as a result of the network of these different far-right echo chambers would not have existed, therefore stopping the organization of this movement or, at least, drastically limiting its size and the destruction and disarray that it caused.
You don’t need to take my word for it; multiple studies, media sources, and influential minds reinforce my view of social media’s role in the events that transpired earlier this week.
A few of the articles and examples I used in the writing of this piece are cited at the end, among them a tweet from Elon Musk targeting Facebook specifically for its role in the events (humorous in intent, perhaps, but effective nonetheless).
So, now that you (hopefully) understand the role that social platforms played in inciting the events that took place earlier this week, and, in understanding this, understand the ways in which these platforms use algorithms in tandem with the manipulation of our confirmation biases, we can move on to the most important part of this piece: how we move forward and heal.
(Side note: if you do not yet understand the role that social platforms played in these events, how algorithms played a role, how algorithms work, or anything else I have covered so far or will cover in this piece, or if you need more evidence or think that others might, feel free to let me know by emailing me at Jay@willisbros.net. If we seek to use the ideas here to reform our society and restore discourse and the harmony it enables, I need help refining the ideas presented here, and only you can do that. Thanks.)
V. Regulating the Algorithms
So, if we are all sick with this illness that is confirmation bias, how do we cure ourselves of it? We can’t. We can’t stop human nature, no matter how hard we try. Confirmation bias is as key a component of our existence as individuals as our sense of identity. Sure, we can suppress it, temporarily, but this is neither feasible long term nor on a large scale, both of which are crucial to our collective healing as a country. So then, if we cannot possibly cleanse ourselves of our confirmation bias, how can we hope to heal? How can we hope to come together again as a country? Through the destruction of the algorithm.
Let me make it clear: I don’t mean or want the destruction of all algorithms, only those that play into our confirmation biases. As I said before, when highlighting algorithms with implementations similar to Spotify’s, there are good algorithms and bad algorithms. We only seek to destroy the bad ones, while keeping the good ones, with the user experience benefits they afford, intact. But how do we do that? How do we enforce these rules? How do we stop the bad algorithms from disintegrating our society without throwing the baby out with the bathwater? We ban algorithms that revolve around the presentation of user-generated content, thus drastically reducing the spread of disinformation to susceptible audiences.
The idea would essentially be: as long as an algorithm does not serve up user-generated content, meaning it could take such content in, but not deal it out, it is legal.
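To make that distinction concrete, here is a minimal sketch, entirely under my own assumptions about how such a rule might be expressed in code. The algorithm may take user-generated content in (as ranking signals, for instance), but anything a user posted must fall back to plain chronological order; all function and field names here are hypothetical.

```python
# A minimal sketch of the proposed rule; names are hypothetical.
from datetime import datetime

def build_feed(items, rank_fn):
    platform = [i for i in items if not i["user_generated"]]
    user_made = [i for i in items if i["user_generated"]]
    # Legal under the rule: algorithmically rank platform-provided
    # content (songs, films) however the service likes.
    ranked = sorted(platform, key=rank_fn, reverse=True)
    # Also legal: show user posts, but only newest-first -- no
    # algorithmic selection, so no engagement-driven amplification.
    chrono = sorted(user_made, key=lambda i: i["posted_at"], reverse=True)
    return ranked + chrono

items = [
    {"title": "Song A", "user_generated": False, "score": 0.9,
     "posted_at": datetime(2021, 1, 1)},
    {"title": "Aunt Carol's post", "user_generated": True, "score": 0.99,
     "posted_at": datetime(2021, 1, 5)},
]
feed = build_feed(items, rank_fn=lambda i: i["score"])
print([i["title"] for i in feed])  # -> ['Song A', "Aunt Carol's post"]
```

Note that the user post still appears; it just can no longer be selected or amplified because an algorithm predicts you will engage with it.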
This is the correct approach, as it cracks down on the spread of disinformation and the construction of the echo chambers it affords, while allowing companies like Spotify, Netflix, and others (that use non-user-generated, platform-provided content like movies and music) to continue to use algorithms to better their user experiences, and while simultaneously allowing for the growth of algorithms’ roles in society, something vastly important as we make advancements in fields like self-driving automobiles and machine learning.
So, now that we have an approach, how do we move forward?
Well, it might surprise you to know that the fight against social media’s contributions to the spread of disinformation is already being fought in the Supreme Court, albeit very poorly.
VI. The Algorithm, and The Grossly Misfought War Against Social Media and Its Responsibility in the Spread of Disinformation
The fight against social media’s role in the spread of disinformation is, as I said, already being fought, and lost, by the Supreme Court. Earlier this year, President Donald Trump began a war against Twitter, which quickly expanded into a war against social media in general, revolving around the idea that social media platforms should be held accountable for any disinformation spread on their platforms. This is in direct conflict with Section 230 of the Communications Decency Act, which protects social networks like Facebook, Twitter, Reddit, YouTube, and countless others from being responsible for the content posted on their respective platforms, essentially separating them from other sources of information like newspapers, TV news channels, and magazines, all of which are liable for the content they put out. And inherently, this is good for social platforms, allowing them to function as mediums for the sharing of ideas among users without being liable for whatever those ideas are: the democratization of the news. Or at least it should be. We should be responsible for the sharing of our own ideas, and the platforms we share them on should be free of any liability for those ideas, as they should have no role in the spread of any disinformation on their platforms; that spread should be the sole responsibility of the users who spread it. Except for, of course: the algorithms.
While these platforms and the companies behind them would have you believe that the diffusion of ideas on their platforms is natural, the use of algorithms forbids this from being true. As previously mentioned, and as shown time and time again in study after study (cited at the end), these algorithms play into our biases and artificially spread ideas that perpetuate a cycle of us staying engaged on the platforms we find them on. And of course the companies behind these platforms love this; it’s how they make their money, because it’s how they keep us engaged. They’re mining us for our attention like we’re just a resource, with a complete lack of regard for the societal disconnect they cause in the process.
Everyone gets the same copy of the New York Times every Sunday. When you log onto Twitter, or Facebook, you log into an algorithmically based echo chamber of confirmation-bias-fueled disinformation. That’s what separates the front page of the New York Times from the front page of Facebook; that’s what makes the latter so much worse.
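The contrast is easy to put in code, purely illustratively and with hypothetical names: a newspaper’s front page is a function of the news alone, while a personalized feed is a function of the news and of you.

```python
# Purely illustrative; field names are hypothetical.
def newspaper_front_page(articles):
    # Every reader gets the same page: ranking depends only on the news.
    return sorted(articles, key=lambda a: a["editorial_priority"])

def personalized_feed(articles, user):
    # Ranking depends on the reader, so no two readers see the same
    # page, and each reader's page drifts toward their own biases.
    return sorted(articles,
                  key=lambda a: user["topic_affinity"].get(a["topic"], 0),
                  reverse=True)
```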
But the Supreme Court, in going after Section 230, is fighting the wrong fight. They are like generals in World War I, using century-old tactics in a battle fought with modern weapons, modern technology. This way, they’ll never win. They don’t even know what the algorithms are, because everyone who does is on the other side. This is why Zuckerberg and Dorsey can strut right into the courtrooms in all of these hearings, completely unfazed. They know that their legal pursuers know so little about them and their operations that they can just sit back and watch them trip over themselves.
I would also like to add that going after Section 230 is entirely the wrong idea. You risk destroying a large part of what makes the internet great and so full of potential that way. You have to go after the algorithms. Don’t make the platforms liable for the disinformation itself; we need to take responsibility as a people for that. Make them liable for its spread, because that is what’s in their control, and as long as the benefits outweigh the drawbacks, they’ll mine us like the resource they see us as until there’s nothing left.
VII. Balancing the Scales: Stopping 1/6 From Happening Again
We have to make the drawbacks of using algorithms maliciously outweigh the benefits, which, seeing as the business models of most of these platforms revolve around them, is much easier said than done. But the answer is to ban algorithms that return user-generated content. The reunion of our country is more important than the success of Twitter, Facebook, and YouTube. The destruction of the echo chambers constructed by these algorithms and their purveyors is more important than Mark Zuckerberg and Jack Dorsey getting any richer than they already are.
But I understand that this, too, is easier said than done, as, with the ever-present nature of algorithms informing the ways in which we must regulate them, the other nature I brought up earlier, their elusiveness, poses another challenge in the enforcement of these possible laws. While it is easy to prove that the algorithms exist (as I brought up earlier, dozens of studies have demonstrated the use of algorithms in the proliferation of radical ideas from both sides), it is harder than one might think to put the blame on them in a legal setting for the spread of disinformation. So I would venture to say we approach it like any other legal dispute: we investigate it, case by case, starting with Facebook (I will explain why Facebook should be the first, and should probably be destroyed, in a piece tomorrow; I have been writing for five hours now and need to wrap this up). We deploy teams to test whether there appears to be evidence of an algorithm in place, and, if there is sufficient evidence, we ask to see the code, which experts can pick apart to find the algorithms within. If we find an algorithm, the platform is responsible for all disinformation spread on it, past and future, with all of the legal trouble that entails; if not, it is allowed to proceed free of liability.
Now, obviously, this is heavily subject to change, and on top of that, its success is entirely contingent on numerous factors: a) sufficient fines for the abuse of algorithms, and b) the companies at large providing accurate and unaltered copies of the source code of their products, though I suppose doing otherwise would constitute lying under oath.
VIII. The Benefits of Removing the Algorithms
But how do we, as a society, stand to benefit from the destruction of these algorithms? I mean, this doesn’t sound like an easy fight; is it even worth it, going up against some of the most powerful companies in the world?
Let me rephrase that question: “Is it even worth saving our society, rebuilding the bridges we’ve burnt over the past decade or so? Is it even worth stopping what happened this Wednesday from happening again?”
The answer is obvious: yes.
You have surely seen the division in America, not just over the past few years, but as part of a larger, longer trend, a trend that is, not coincidentally, congruent with the rise of social platforms. The heated debates, the rising political tensions, the over-the-top political personalities: all of it is fake. It’s manufactured, the product of perpetual echo chambers on both sides of the aisle that stand only to benefit the companies whose algorithms bore them, disintegrating the complex, discourse-based, bipartisan society we’ve built over the last three hundred years as a side effect. We need to restore the unity of our country, the discourse that keeps us in check, but in order to do that, we must first end the tale of two feeds, one for the left and one for the right, and instead open a new book with a message that rings true both to the core ideologies of the American people and to the power that technology offers us: one where we all get to read the same thing. To do this, we must destroy the use of algorithms in the spread of internet-enabled information.
IX. Closing Thoughts
The events we witnessed Wednesday will not soon be forgotten, but as they are cemented into our minds, we cannot forget the role that the algorithmic spread of information played in their incitement. We must act on this. I plan to do so by writing a proposal of legislation for Senator Cory Booker. In truth, I wrote this as much for myself as I did for others, as, as I always say, writing for me is as much a form of ideation as it is of expression, and thus I often use it to clarify my thoughts, to distill them into a more cohesive string of ideas, as I did here.
I have many more things to say, but, at this time, I have neither the time nor the mental bandwidth to do so. My efforts here are far from over, though, and I will periodically update those of you interested in the fight for our society about my successes and failures. With that, thank you for reading, and goodbye.
-Stephen J. Willis
(remember: don’t hesitate to email me with any questions, comments, or concerns that you may have) (Jay@willisbros.net)
X. Sources
- 1984, George Orwell
- The Bible, Various Authors
- https://twitter.com/elonmusk/status/1347031803987599360
- https://www.lawfareblog.com/section-230-and-supreme-court-is-too-late-worse-than-never
- https://www.theverge.com/21273768/section-230-explained-internet-speech-law-definition-guide-free-moderation
- https://news.virginia.edu/content/study-how-facebook-pushes-users-especially-conservative-users-echo-chambers
- https://www.cnet.com/news/mob-storms-capitol-as-facebook-twitter-roles-come-under-fire/
- https://www.nytimes.com/2021/01/06/us/politics/protesters-storm-capitol-hill-building.html
- https://www.oxfordreference.com/view/10.1093/oi/authority.20110810104644335
- https://www.wsj.com/articles/facebook-knows-it-encourages-division-top-executives-nixed-solutions-11590507499
- https://www.wired.com/story/facebook-twitter-echo-chamber-confirmation-bias/
- https://www.nbcconnecticut.com/news/local/social-media-amplifying-political-tension/2399121/
- https://www.foxbusiness.com/technology/elon-musk-facebook-capitol-riots
- https://finance.yahoo.com/news/facebook-twitter-culpable-dipayan-ghosh-225443250.html