Excerpt: Chapter 6: Stasis of Thought

May 24, 2021
Meme - 'Your phone when you say you want to buy something' - [caricature of Amazon, Google, and Facebook eavesdropping]

This is an excerpt of one of the chapters from my book, The Descent Into Complete Order.

You may have heard recently about social media platforms like Facebook and Twitter announcing more content ~~censorship~~ moderation initiatives. Some are blocking political adverts; others are stepping up their programs of reviewing posts. We are told that this is good for us, that it will make the platforms safer or something.

Twitter’s CEO, on ~~censoring~~ moderating malicious posters, stated127: “We’re going to start kicking these people off right and left and making sure that when they issue their ridiculous attacks, nobody hears them.”

But who are ‘these people’, who is classifying ‘these people’, and what metrics are being used to identify ‘these people’? There’s a difference between an individual calling someone out for what they believe to be ridiculous and a massive organization systematically erasing the posts of thousands of people under opaque programs for what they unilaterally decide is ridiculous.

Regardless of what their standards are (and there has indeed been a tremendous amount of work put into designing them), these platforms have expansive powers to shape the information that we consume. Power which, even if responsibly used, is concentrated in the hands of a very few, and without the transparency for us to judge for ourselves whether it’s actually responsibly used. Are we OK with that?

And even if we are OK with the premise of the ~~censorship~~ moderation, are we comfortable with the cost of implementing it? The actual logistics of all that ~~censorship~~ moderation are not fancy Machine Learning (ML) automations; they are subjecting tens of thousands of people to grueling, traumatic, underpaid torture.

~~CENSORSHIP~~ MODERATION SWEATSHOPS

Facebook, Twitter, and other companies with large ~~censorship~~ moderation strategies have reporting systems for users to flag posts, and ML bots to automatically flag content, but AI still doesn’t come close to humans at assessing context. So these companies contract out the job of actually reviewing those posts to contractors making as little as $28,800 annually (yes, that’s in the US), who make the final call as to whether a post should be taken down.128 That means sending all the posts flagged as potentially abusive, violent, or unpleasant to actual humans, who have to sit through and review them, often getting only a few seconds each to decide whether or not to remove a post.

What kind of posts are these people viewing? Content moderators are bound by non-disclosure agreements (NDAs) that prevent them from discussing their work (to protect users’ information, according to the firms), but a few have broken their NDAs to describe what they view as abhorrent conditions.129

One army-veteran-turned-moderator recounts that “on his second day of moderation duty, he had to watch a video of a man slaughtering puppies with a baseball bat. [He] went home on his lunch break, held his dog in his arms, and cried.”

Other moderators in the office “were crying, breaking down, throwing up. It was like one of those horror movies. Nobody’s prepared to see a little girl have her organs taken out while she’s still alive and screaming.”

All those bottom-of-the-barrel twisted nightmares humans invent? There are people who spend day after day watching all of it. These jobs are so traumatic that many contractors develop PTSD. Imagine that: PTSD from a desk job! The problem got so bad that Facebook eventually agreed to pay out $52 million to 11,250 of its contracted moderators.130

Think about that the next time you’re scrolling through your newsfeed. Your feed is kept clean of nightmarish videos because tens of thousands of underpaid humans are toiling in filthy and abusive conditions to watch all that traumatic content so you don’t have to.

Facebook and the rest of them, of course, will deny any responsibility. They will point to their contracts and say that the companies they contracted the work out to are responsible. Google will tout how it is extending some benefits to contractors.131 Those are all token gestures. The reality remains unchanged.

Of course, it doesn’t have to be this way. Facebook, Twitter, Google, and the rest could simply hire these moderators directly as employees. Then, instead of being ‘shocked and awed’ at the abuse of their contractors, and declaring they will launch an ‘audit’ before sweeping the incident under the carpet and continuing as normal, they could actually manage the moderators directly and ensure decent pay and working conditions. Contracting out the work is a way to shift responsibility, but it doesn’t change the source of the problem. And that is the design of the platform.

The complete story is that this is far from new. Although social media platforms are only now coming under pressure to moderate content, they’ve actually been heavily curating the content you see since their inception. Think about it: across all your friends, connections, and the accounts you follow, there may be thousands of posts, comments, and likes each day. You can’t be shown them all, and they have to be sorted in your newsfeed somehow.

CALIBRATED ADDICTION

So companies have made a science out of curating your newsfeed, deciding which posts you see and in what order. But if we are filtering and sorting content, we need some criteria by which to do so. What are those criteria? Do they curate the posts to show you a diverse view of your entire network? Nope. What about trying to give you accurate and insightful exposure to the world as viewed by your network? Not even close.

These companies all provide services for free, and they’ve got to pay for their servers and developers somehow. Your newsfeed is curated with the sole intent of maximizing your time on the platform, so that the company has as many chances as possible to hit you with targeted adverts that make them money. You aren’t the customer.

To these platforms, you are the sweatshop worker. From their perspective, as long as they keep you drugged on new and ‘engaging’ content, you’ll keep clicking, tapping, and swiping to generate data for them to mine.

To keep you on the platform, they’ve designed their interfaces to be addictive.132 The infinite scroll, so that there’s always more to be had; the autoplay, to make the content stream non-stop; the big red notifications, to get your heart racing in anticipation: it’s all there to keep you stuck to their platform.133 All the while, they profile your activity.

And as they continuously profile your habits and personality, they’ll package you and thousands of others up into a ‘market segment’ to be auctioned off to the highest-bidding advertiser, whether that buyer is an auto brand, a clothing outlet, or some random activist bankrolled by a foreign government.

Algorithms are carefully designed to give you not the most accurate or informative posts, but the most emotionally charged ones, particularly those that incite anger, which has been shown to be very effective at getting you to share and engage with content134 (thus increasing your time on the platform and the number of ads you are force-fed).

Sure, the platforms pay lip service to combating fake news, and will throw up some more banner warnings, maybe even find the odd sacrificial goat. But the core feature of the platform, targeted posts that suck up your attention, remains unchanged.

That’s it. These platforms aren’t built or designed for you at all. Any benefit you get from the platform is, from their perspective, a happy coincidence. You are a resource from which they extract attention to sell to advertisers. In exchange for your endless attention, these companies have designed engaging platforms that periodically hit you with dopamine and foster in you an addiction to your preferred newsfeed.

A friend of mine put it well: “Too many people nowadays like to be entertained rather than educated.”

THE HARM WROUGHT

Well, alright, so social media isn’t designed with our best interests in mind, but it’s not like it’s actually harmful for us users, right?

Unfortunately, there are at least three severe detriments that algorithm-curated social media inflicts on your life.

The first detriment that algorithmic curation, on social media or on search engines, inflicts upon you is mental. There have been many studies looking at the effects of social media on sleep, depression, hyperactivity, and loneliness.135,136,137 However, most of these are purely observational. That allows us to draw correlations, such as that those who use social media are more likely to be depressed, but it doesn’t allow us to say whether using social media causes depression or whether being depressed leads to more social media use.

To establish causality, we need to set up an experiment in which some people receive a treatment (the treatment group) and others none (the control group), in order to establish a baseline for comparison. For best results, we want to rule out confounding variables (if everyone in our control group is a man and everyone in the treatment group is a woman, we can’t tell whether the differences are due to sex or to the treatment).

So, to rule out those confounding variables, we randomly assign people to each group. Having a large group of participants also helps: if our findings hold up in a group of 10,000 people, we can be much more confident in their accuracy than if the study had been done on six individuals.
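
To make that concrete, here is a minimal sketch (toy data, entirely unrelated to the cited studies) of why random assignment works: shuffle enough participants into two groups and a confounder like sex ends up roughly balanced in both.

```python
# A toy illustration (not data from any cited study): random assignment
# balances confounders like sex across treatment and control on average.
import random

random.seed(42)
participants = [{"id": i, "sex": random.choice(["M", "F"])} for i in range(10_000)]

random.shuffle(participants)
treatment = participants[:5_000]   # e.g., limit social media to 30 min/day
control = participants[5_000:]     # continue normal use

for name, group in [("treatment", treatment), ("control", control)]:
    share_f = sum(p["sex"] == "F" for p in group) / len(group)
    print(f"{name}: {share_f:.1%} female")  # both land near 50%
```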

Several studies have done exactly this: randomly split participants into two groups, allowed each group different amounts of social media, and measured changes in well-being. They have found notable effects. One, which sampled 143 UPenn students, found that those who limited social media to 30 minutes per day, versus normal use, had “significant reductions in loneliness and depression” after just three weeks.138

A much larger study139 of several thousand people, run by researchers from New York University and Stanford, was also a randomized controlled trial; its treatment group was paid (around $100) to quit Facebook completely for four weeks.

This study found that those who took a one-month break from Facebook reclaimed about an hour of free time a day. They spent it mostly watching more TV and socializing with family and friends, not on other social media sites. The treatment group consumed less news and was generally less up to date on current events, but also less politically polarized. They also reported feeling happier.

In a sign that they felt the benefits of leaving Facebook were worth keeping, the participants also stated that they intended to reduce their time on Facebook even after the study concluded. That leads nicely into the second detriment of social media: it takes up a lot of time. It’s designed to.

While the exact numbers vary across demographic groups and around the world, the average person is forecast to spend over six and a half years of their life on social media platforms.140 On a daily basis, YouTube, Facebook, Snapchat, and Instagram alone account for about two hours of that.141 In general, women use social media more than men, the young far more often than the old, and people in Africa and South America use social media for longer than do people in Europe. But across the board, the numbers are increasing over time.

Is that really all time well spent? Do you get more enjoyment catching up on tweets for an hour, or having a phone call with a close cousin or sibling for the same hour? Which will you cherish more, the morning spent sleepily in bed scrolling through your newsfeed, or that same morning waking up over coffee with a friend in a local cafe?

WHAT OF THE CREATORS

All of this is relevant to everyone who uses social media, whether consuming or uploading posts. However, the story is a bit different for those who rely on social media for their livelihood. Let’s look at this from a different angle, that of the content producers. Many individuals have come to rely on social media as a way to attract customers, or simply as a place to host their content and live off the ad revenue they split with the platform.

Those who make a living producing content, whether digitally or physically, face a lot of stress. Professionals such as artists, performers, musicians, and comedians face constant pressure to keep their skills and content up to date and relevant. While digital platforms allow more creators to reach more viewers than ever before, the inherent stresses of the job remain in some form.

Those same challenges will exist no matter what distribution system one uses. And there are plenty of options to choose from. Patreon, Podia, Ko-Fi, and Memberful all allow content creators to share their work freely or in a restricted manner and solicit recurring membership fees or tips from fans. Twitch (owned by Amazon) is a platform for creators to stream videos, usually related to video games, and although it does have ads, also has a membership system and allows viewers to tip content creators.

My point is that, from both the perspective of the consumer and creator of content, the current mainstream platforms aren’t the only option. We can change how we produce, share, and digest information, by voting with our eyeballs and supporting platforms that give us greater control over our own experience.

And we can do it gradually. We don’t have to blow up the social networks and throw things into chaos. We can move one at a time, allowing time for others to adjust.

That’s not to say it will be easy or universally beneficial for all content producers and businesses that rely on social media advertising to shift to alternative models. Only to highlight that alternatives exist, and to help you begin to imagine what a life without the advertising giants might look like for you.

THE AUTONOMY OF YOUR MIND

The third detriment is perhaps the most pernicious. Curation by algorithms erodes your autonomy of mind. Algorithmically curated thought bubbles in social media have helped contribute to hysteria over immigrants,142 election manipulation,143 and genocide.144 Yes, genocide.145

Did social media cause those things? No. It’s a tool. It can be used for good or ill. It has been used for good. It has also been used to exacerbate much ill.

How does social media erode your autonomy of mind? By tracking your actions and constructing an artificial concept of who you are, and then filtering all the information you see to conform to that artificial concept. The platforms we use daily know that when you browse the web, scroll through your newsfeed, or search on Google, you are far more likely to click on something that you agree with than something you disagree with. Since they want to maximize your time on the platform to maximize the ads they show you, you simply won’t see things that you’ve historically not shown an interest in.

And so you only see stuff you agree with, which creates a confirmation bias that constantly reinforces the same ideas, never challenging them. Of course, online platforms don’t have to be constructed that way, but it’s how they’ve been designed, because of their underlying incentives.

Our use of these platforms empowers them. In view of the tremendous good and tremendous ill they bring, are they tools we wish to empower?

GEOGRAPHIC THOUGHT BUBBLES

Social media isn’t the only way we’ve become trapped in ideological bubbles, of course. We’ve also done it to ourselves, physically. Recently, we’ve found that many neighborhoods in America are steadily becoming more racially segregated.146 Why is that happening? To be sure, some of it could be top-down design decisions. Just like our digital social platforms, housing laws have in the past been designed to shape incentives (in this case for explicit segregation),147 and nothing creates a thought bubble like living only around people who are like you.

But this recent trend appears to be something different. Rather than top-down social engineering, this shift appears to be driven primarily by individual decisions. Individuals have varying social networks, daily routines, and a complex web of preferences, and these subtle effects compound when added up on a societal scale.148

This doesn’t even necessarily mean that deep-rooted racial bias is to blame (it might be, but it isn’t necessary to explain what we are observing). Nobel laureate Thomas Schelling wrote that “inferences about individual motives can usually not be drawn from aggregate patterns”.149 In other words, just because a complex system produces a certain result doesn’t mean that all, most, or even any of the individuals within that system would have chosen that result.

Programmers Vi Hart and Nicky Case made a fun interactive website based on this insight.150 Through their small puzzles, they explain how even if no individual in a group expresses any overt group-based preferences (race, religion, wealth, politics; it doesn’t matter which group) for who they live around, a largely homogeneous neighborhood can still result.

Take the following example. You have a group in which no one minds being around people who are different from them, but, since few people want to stand out in every way, most prefer to move if fewer than one third of their neighbors are like them. As those mild preferences compound, some degree of segregation emerges, even though no one intends that outcome. We end up with communities surrounded by people who have habits, preferences, and beliefs similar to our own. A physical thought bubble to reinforce our digital one.
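
Here is a minimal sketch of that dynamic: a toy one-dimensional Schelling-style simulation (my own illustration, not Hart and Case’s actual code). Mild preferences still produce visible clustering.

```python
# Toy 1-D Schelling-style model: agents are content if at least 1/3 of
# their occupied neighbors match them; unhappy agents move to a random
# empty spot. Watch clusters of A's and B's form anyway.
import random

random.seed(1)
SIZE = 60
grid = ["A"] * 25 + ["B"] * 25 + [None] * 10  # two groups plus empty spots
random.shuffle(grid)

def unhappy(i):
    """An agent wants to move if fewer than 1/3 of nearby agents are alike."""
    me = grid[i]
    if me is None:
        return False
    neighbors = [grid[j % SIZE] for j in range(i - 2, i + 3) if j != i]
    occupied = [n for n in neighbors if n is not None]
    return bool(occupied) and sum(n == me for n in occupied) / len(occupied) < 1 / 3

for _ in range(1000):  # let unhappy agents relocate until everyone settles
    movers = [i for i in range(SIZE) if unhappy(i)]
    if not movers:
        break
    i = random.choice(movers)
    j = random.choice([k for k in range(SIZE) if grid[k] is None])
    grid[j], grid[i] = grid[i], None

print("".join(c or "." for c in grid))  # runs of A's and B's: segregation
```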

Fortunately, this is a problem we’ve caused with our own choices, and one we can fix with our own choices, by being more willing to live with people who disagree with us and even seeking out less homogeneous neighborhoods. Not for the sake of Diversity! in itself. Not for the sake of virtue signalling some abstract notion of what someone else has framed as the ‘correct’ level of ‘diversity’. Rather, for the sake of a more varied, challenging, and interesting life.

So there are many places where thought bubbles, physical and especially digital, are constructed. Maybe you don’t have a problem with how those bubbles shape your choices, how they Nudge you into viewing the world a particular way. I certainly do. But even if you don’t, my main critique is something different.

INVOLUNTARY FILTRATION

On a search engine, you often have the option to apply filters. These are enormously useful for narrowing your choices down to what you believe is relevant to you. Common filter options include date created, popularity (as measured by view count or upvotes), or posts by a specific individual or group. These are the filters you choose to apply and configure, or not. However, there are other filters applied to your searches that you don’t get to decide.

When you search for ‘Chinese sit down restaurant’, search engines will automatically assume you want something nearby, not on another continent, so they will narrow your search results to those nearby, effectively adding a shadow-filter based on location.

They know what your subjective ‘nearby’ is because of trackers on your device to feed them your location.151 This can be somewhat annoying if you are not actually searching for restaurants in your home town of Sammamish, WA, but rather for results in downtown Seattle where you’ll be later that night, but that’s relatively easy to fix by adding ‘Seattle’ to your search (thereby correcting the search engine’s presumptive filter with one you intentionally applied).

OK, so that’s a pretty simple and innocuous example. But here’s the kicker. By using your entire history of engagement on the platform, and often reaching beyond it to pull in your activity from around the internet, companies like Google and Facebook also filter your results based on what’s most likely to keep you engaged. Thus, the shadow-filter applied amounts to something like ‘what you are likely to believe, agree with, and enjoy watching’. And you can’t turn it off.
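
As a toy illustration of the difference, here is a sketch (hypothetical ranking logic, not any real search engine’s algorithm) of a visible filter you choose sitting alongside a shadow-filter you don’t:

```python
# Hypothetical ranking logic, not any real engine's algorithm: the date
# filter is one you chose; the engagement sort is one you didn't.
posts = [
    {"title": "Local news roundup",        "year": 2021, "predicted_agreement": 0.2},
    {"title": "Outrage piece you'll love", "year": 2019, "predicted_agreement": 0.9},
    {"title": "Opposing viewpoint essay",  "year": 2021, "predicted_agreement": 0.1},
]

def search(posts, since=None):
    # Visible filter: the one you applied and can remove.
    results = [p for p in posts if since is None or p["year"] >= since]
    # Shadow-filter: rank by predicted agreement/engagement, so contrary
    # content quietly sinks. There is no switch to turn this step off.
    return sorted(results, key=lambda p: p["predicted_agreement"], reverse=True)

for post in search(posts, since=2019):
    print(post["title"])
```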

And so your digital world becomes a gated community of content that’s similar to content you’ve consumed before. Things that are different or contrary to your browsing history are silently filtered out, and you are left with a narrower and less informed set of options. Like with our example of where people choose to live, it doesn’t even take a very large bias for extreme results to arise.

This isn’t an issue confined to social media. It’s a principal-agent problem152 that arises whenever you abdicate your decision-making authority to another person, or to an algorithm. The principal-agent problem is well understood, and varying degrees of checks and balances exist in many industries where individuals delegate some of their decision-making powers to another person, or where one person can shape the decisions of their clients, like doctors or financial fiduciaries.

One area where such checks are largely absent, however, is algorithms. Algorithms are used to inform decisions such as whether you get a loan, whether you get hired, and whether you get bail or are investigated as a suspect in a criminal case. These are decisions that powerfully shape your life, and the algorithms that make them are biased.153

The reason for this is that algorithms are given enormous data sets and trained through repetition to recognize patterns. They then use the heuristics (rules of thumb) they develop to make decisions about real-world events.

UNDISCLOSED BIAS

One of the biggest challenges is that the example data sets the algorithms are trained on may not reflect real-world circumstances. This means that when an algorithm encounters a problem that falls too far outside what it is used to, it will still generate a response, but a much less accurate one. Worse, it’s difficult to detect whether any individual case has been mislabeled by an algorithm, much less what might be causing the issue.

One example of this is with facial recognition software. Algorithms are trained to recognize facial features and distinguish mood as well as identify individuals. This can be used to track criminals (or, if you’re Putin, political dissidents154) in a crowd. It can also be used to keep track of the mood of customers in a store (or, if you’re Taylor Swift, screen your concert’s attendants for stalkers155).

The widespread problem is that many facial recognition algorithms are trained on data sets that aren’t very diverse, so they tend to be much more accurate with certain faces and more likely to misidentify, or simply fail to detect, other types of faces. Oftentimes, the data sets are disproportionately filled with white male faces, so minorities get the short end of the stick. This has real consequences. As one headline put it starkly, “There’s software used across the country to predict future criminals. And it’s biased against blacks.”156
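
Here is a minimal sketch of that mechanism with synthetic numbers standing in for face features (not a real face dataset or any production system): a model trained on a skewed sample still answers confidently for everyone, but is far less accurate for the under-represented group.

```python
# Synthetic 'features', not real faces: training data is 95% group A,
# 5% group B, and the two groups follow different patterns. The model
# fits group A well and quietly fails on group B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Toy stand-in for one group's data; labels depend on the features."""
    X = rng.normal(shift, 1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(0.0, 1.0, n) > shift * 5).astype(int)
    return X, y

Xa, ya = make_group(1900, shift=0.0)  # well-represented group A
Xb, yb = make_group(100, shift=2.0)   # under-represented group B (the skew)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out samples: accuracy drops sharply for group B.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(1000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```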

And the potential bias of algorithms doesn’t even touch the privacy concerns associated with widespread data collection. For now it’s enough to acknowledge that algorithms suffer from complex biases, and we are often delegating our decisions to them. Should we?

Are regulations needed to review algorithms, or would that stifle innovation? I’d argue we should start with greater transparency and disclosures about how the algorithms are trained and what their biases are, to allow us to better make those decisions. In the end though, perhaps the easiest and most powerful thing we can do is choose which algorithms to opt out of.

FEED A COLD, STARVE AN ALGO

Let’s apply this to something everyone can agree on hating: fake news. Perhaps no one quite agrees on what exactly counts as fake news, but we all know it’s bad.

At no point in human civilization have we been strangers to rumors, but the digital thought bubbles we now live in allow fake news, as long as the Algos promote it, to echo endlessly without being challenged by opposing perspectives. Thus exaggerations and rumors get re-tweeted, shared, and forwarded.

Fake news works because it neatly fits into the system that we empower through our online activity. It doesn’t take large acts to create such ugly distortions when they are compounded across tens of millions of people. Fortunately, it also doesn’t take big changes to smooth out those distortions when they are again repeated across tens of millions of individuals.

How do you fight fake news? You starve it. You starve it of attention, and you demand a higher standard from your news by seeking out and consuming long-form content instead of five-minute satirical videos. And by taking back control over the way our information is filtered, we can better distinguish balanced facts from one-sided slurs.

By starving social media of cheap clicks, by avoiding surface-level coverage packaged primarily for entertaining jabs, you incentivize publishers to raise their quality and nuance in the form of long-form articles, and then it becomes easier to distinguish the fake from the authentic.

It’s the same way you tell a real person from a bot: engage with it at length. Long-form text gives you the time and detail with which to do so.

A lot of times we see big glacial trends in the world and ask ‘what can I possibly do about it?’ Here’s one thing each of us can choose to do: take back your independence of thought and ownership of time and attention from social media and their designed nudges. Read the news in long form and read critically, but don’t be sucked in. Step back into your own life and don’t let others pull you into theirs for their own ends.

Another way to take back your autonomy of mind? Burst the algorithm induced thought bubbles. There are a variety of ways you can browse online while reducing the amount of data collected about you. This makes it harder for advertisers and search engines to filter content individually, and therefore makes it easier for you to shape the lens through which you view the online world, as opposed to having your lens shaped for you.

I do this by browsing the internet without accounts as often as possible. When I watch YouTube, I often do it over Tor. The Tor Browser is a free, open-source web browser that doesn’t track you, and is indeed the strongest privacy-protecting browser around. In addition to blocking cookies and trackers by default, it reroutes your internet traffic through several layers. This way, your request for a video on YouTube, instead of being sent to YouTube directly, is sent to a Tor relay, which passes it to another and another, the last of which sends the request on to YouTube.

So your request is sort of like the baton in a relay race, and at the end YouTube thinks this last relay, which can be based anywhere in the world and carries none of your identifying information, is making the request. This way, YouTube’s response is not biased by its comprehensive profiling of your habits. The relay then takes the response, and the relay race happens in reverse to deliver you the result. That may sound complicated, but it all happens very quickly.
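
To make the layering less abstract, here is a minimal sketch of the “onion” idea (illustrative only; real Tor builds encrypted circuits with its own protocol, not Fernet tokens, and the keys and circuit here are my own stand-ins):

```python
# The onion idea: the request is wrapped in one encryption layer per relay,
# and each relay can peel exactly one layer. Only the exit relay sees the
# request, and it doesn't know who originally sent it.
from cryptography.fernet import Fernet

# One symmetric key per relay in the circuit: entry -> middle -> exit.
relay_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(request: bytes, keys) -> bytes:
    """Encrypt in reverse order so the entry relay peels the outermost layer."""
    for key in reversed(keys):
        request = Fernet(key).encrypt(request)
    return request

def relay_hop(onion: bytes, key: bytes) -> bytes:
    """Each relay removes exactly one layer and forwards the remainder."""
    return Fernet(key).decrypt(onion)

onion = wrap(b"GET https://youtube.com/watch?v=example", relay_keys)
for key in relay_keys:  # entry, middle, exit peel their layers in turn
    onion = relay_hop(onion, key)
print(onion)  # the exit relay sees the request, but not who sent it
```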

Tor is a bit extreme for casual web browsing, of course; you can get some of those benefits by opening a private browsing window and not logging in to your accounts. That way, each time you visit a site, it will appear to them as if you are a different person, and they will have a harder time aggregating your data. Private browsing doesn’t actually stop sites from collecting information on you, but it makes it more difficult for them to associate each of your separate visits as originating from the same person.

Naturally, this doesn’t work for everything. You still need to log in to sites like Netflix and Facebook to use them at all. And then it doesn’t matter whether you’re using private browsing, Tor, a VPN, or other privacy tools.

For these sites, pop into your settings occasionally and just clear your records on the platforms of searches, likes, views, posts, etc. Again, this doesn’t stop the sites from collecting your data, but it helps limit what they can aggregate over time.

On YouTube and Netflix, you can completely clear your browsing history.

With the exception of accepted friend requests, all your comments, likes, and posts on Facebook can be taken down and removed from your history. I can still chat with those friends, but who needs to know what I posted three years ago? Facebook’s mining the data, but none of my friends are looking at it.

Facebook Activity Feed from October 2018 to September 2019, showing no posts, likes, or reactions

Jonathan Wood’s Facebook Inactivity Feed

As for blocking trackers, here’s a nice guide from GeekThis explaining how to limit Google Analytics tracking you across the web.157
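
That guide covers browser-level options; a blunter, commonly used alternative (my own suggestion, independent of that guide) is to block the analytics domains system-wide in your hosts file, so requests to them never leave your machine:

```
# Append to /etc/hosts (Linux/macOS) or
# C:\Windows\System32\drivers\etc\hosts (Windows, as administrator)
0.0.0.0 google-analytics.com
0.0.0.0 www.google-analytics.com
0.0.0.0 ssl.google-analytics.com
0.0.0.0 analytics.google.com
```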

Amazon sucks in this regard. On Amazon, while you can delete your browsing history of products you’ve searched for, there is currently no way to permanently delete your purchase history. You can ‘archive’ orders, but “Archiving orders doesn’t delete orders permanently. It removes them from your default order history view. Archived items will always show up when you search for them.”158

“Even if you disable your account, Amazon will maintain a record of everything you purchased”, according to a Huffington Post assessment.159

For those sites where you do have control over your history, deleting historical data still lets you use the sites while limiting the personal details of your life collected by these tech giants.

Of course, the most powerful thing you can do to exercise your autonomy is to simply stop using those sites. It’s easy for me to keep my Facebook history cleared because I almost never go on it (usually for about five minutes once a month or so to check for messages; I’ve responded to messages as old as three months. Most friends have learned by now to simply text me or message me on Telegram).

And if you need just one more reason to quit social media, remember the moderators. When asked about the brutal job and what they thought needed to change, one said:

“I think Facebook needs to shut down.” – Speagle, a contracted Facebook moderator.160

Or, as Ben Hunt at Epsilon Theory put it more classically, “Facebook delenda est”.161

All of this you know. I’ve explained how social media platforms are designed in ways that create thought bubbles and erode your autonomy of mind, but this is nothing new. Lots162 of others have explained this before. We all know these things.

Can you have meaningful, nuanced, thoughtful, and level-headed discussions on Twitter? Can you reconnect with loved ones and strengthen familial bonds on Facebook? Can you network and land your dream job on LinkedIn? Of course. All that and much more good is possible, but we all know that none of that is the point. Those may make for great marketing lines, but it’s all ancillary.

We all know that the point of these platforms is to package our attention into ads. They may have these other features that allow them to be used for things like socializing, reading the news, or networking, but that’s just the nectar in the fly trap. And we’re the flies.

We know this. We all know that social media platforms are designed for virality, not authenticity. We all know that they are designed to generate an infinite pool of user generated content that sucks us into its gravity well of data-mining to deprive us of our time and attention and rake in the sweet sweet advertising dollars.

Facebook, Instagram, LinkedIn, Pinterest, Snapchat, TikTok, Tumblr, Twitter and more are all cesspools of content. Sure, they can be used for organizing communities, or chatting with friends. But that has never been and will never be the primary purpose of those organizations.

Also, throughout this chapter, I’m including Google in this list. Though not a social media company, Google still contracts content moderators for its search results, as well as for YouTube, which it owns. Google may not have a prominent newsfeed, but it still tracks you across its many products. It also provides a tool called Google Analytics, which anyone can freely put in their website to see detailed analytics about their visitors. Conveniently, this is also a free way for Google to collect that same information from millions of sites it doesn’t even own.

Facebook does the same thing, tracking you on sites that use its OAuth login (ever notice how you can ‘log in with Facebook’ to comment on a blog? That ‘free’ service Facebook provides also allows it to mine more data).

Let’s be fair to the employees of these companies, however. As individuals, most are good people (at least I have yet to meet a Facebook or Google employee who wasn’t). Many genuinely care about designing good products that are useful. Some even to the point of outright activism within their companies.163

The above paragraph is not a throwaway line or token reference. It’s one of the most important points in this chapter. Most people, as individuals, are decent human beings with loving families and friends and full hearts who want to do good by their fellow people. None of the criticisms of companies I lay out in this chapter extend to the individuals within those companies. My point is that, in spite of the intelligence and compassion of many of these individuals, they are still operating within organizations reliant on monetizing consumers’ personal data.

As Chamath Palihapitiya, a former vice-president of user growth at Facebook, put it: “The short-term, dopamine-driven feedback loops that we have created are destroying how society works. No civil discourse, no cooperation, misinformation, mistruth”.164

Good intent will fall short if it’s trapped within an institution that is organized around goals contrary to that good intent. Data-collecting advertisement platforms are organized around selling your data to marketers. You are the product, the marketers are the customer. And the customer is always right.

Social media companies all rely on advertising money, and so that business necessity will always drive their design and decisions.

We all know this. We’ve known it for years. So what do we do about it?

If an institution is broken, we can lobby for change. Having our voices heard and expressing discontent en masse can often effect change.

However, not all systems are responsive to outside opinions, however loudly they are expressed. Mark Zuckerberg will never remove the “short-term, dopamine-driven feedback loops” at the heart of Facebook’s profitability so long as it is profitable, no matter how many Change.org petitions are submitted.

If the system is resistant to outside voices, exert pressure with your actions. Move outside of the system, or, where no alternatives exist, work to build a better one. We’ll explore that idea in more depth in chapter 14, when we talk about community structures.

Here, the answer is very simple: take back your data. How exactly you do that is up to you. I will, however, offer a few suggestions.

SAY NO TO SOMA

I’ve already laid out steps above to reduce how much you’re tracked. But the most powerful action you can take is the simplest.

Stop using the platforms.

You don’t even have to delete your account, if that’s too drastic. Remember, the platforms don’t make money from your account, they make money from using your content to hold others’ attention hostage for advertisers. Stop giving them content. You can text, email, call, DM, or mail letters to your close friends instead of posting online for all.

Stop surrendering your attention to the dopamine-driven feedback loops. Just stop logging in for, say, one month. See how much impact one month’s abstinence actually has. I’m willing to bet it’s less difficult than you think.

Earlier we talked about a study that paid people to temporarily leave Facebook. It found that leaving Facebook improves your mental health, reduces anxiety and stress, and makes you happier.

For me, I still email, text, group text, and call. SMS still works fine. Or some of the newer apps like Telegram (they’re funded by a grant, so they aren’t pressured by advertisers). Those tools aren’t perfect either, but they’re a lot less cluttered than Facebook’s interface and I don’t waste my time with ads. I promise you you’ll still be able to socialize, organize events, and participate in the latest drama of friends without social media.

Treat the cesspools of content as the virus they are. Maintain your distance and starve them of content. Without you, their host, they will die.

And don’t worry, many other companies will step in to fill the virtual void left by the collapse of the former giants. And as these new entrants step over their predecessors’ corpses, they will know that they’d better act differently or soon meet the same fate.

One last important point on this topic of leaving social media platforms. I’m not calling for any drastic social campaign. I’m not advocating for the government to break up, regulate, or ban any particular company or the industry as a whole. Those measures might be useful, but I don’t believe they are needed for us to solve this particular problem.

I’m also not telling you that you must leave, just laying out my view of the benefits of abstaining. If you read this chapter and still believe using social media to be worthwhile, not only do you have every right to continue to use it, but I would also love to hear from you on why you weigh the tradeoff differently than I do. The same goes for any disagreements you may have with this book.

What I am trying to do in this chapter is lay out what I see are the hidden costs of using social media. For me, those costs make using social media, and many other ‘free’ online services, unacceptably harmful to myself and society. For those reasons I choose to remove myself, for the most part, from their influence.

Will it make a difference? Not if it’s just me, no. At the same time, we don’t need everyone to leave Facebook for it to be drastically changed; even just a few hundred thousand people abstaining could be enough to squeeze its profits.

I’m not trying to tell you to delete Facebook. What I am asking is that, after reading this chapter, if you also decide that these costs outweigh the benefits you derive from social media, know that it is perfectly acceptable for you to also remove yourself. To step back from the digital giants and take back your data from their platforms, to take back your autonomy of mind from their algorithms.

And if enough of us decide to abstain from these data-mining giants and go off and create our own groups, then these companies will be forced to restructure their business models and their platforms with a greater respect for individuals’ control over their own data.

This isn’t a call to revolution. This is a call for a referendum. Each of us voting with where we choose to swipe, tap, and click.

This is not a revolution. This is a referendum. And that is revolutionary.