Free Speech and Platforms
Disclaimer: the opinions below are my own; they are not affiliated with, and should not be interpreted as, the opinions of my former employer.
I was sitting by a campfire with some friends recently when one friend asked me something along the lines of “what do you think of the whole free speech thing?”. An important piece of context: I most recently worked at Facebook on newsfeed ranking for political content. The question is a bit frustrating for me, because it is representative of the broader discourse around problems in social media: vague and focused on the wrong questions.
My immediate response was “free speech? love it, it’s great!” I think everyone working in the social media integrity space supports free speech, but this wasn’t really the question he was trying to ask.
Much of the public debate around social media and free speech is focused on a few different topics, including misinformation, content moderation/takedowns, and Section 230. In reality, the actual takedown of content is a small part of the problem. Exploring this topic involves asking a series of questions that ultimately take us down a path leading away from where we start. I’ve summarized my response to the question, and how our conversation unfolded, below:
Question asked by the campfire: what do you think of free speech?
- What do you really mean by the question of “free speech”?
  - Well, actually, it isn’t really freedom of speech he was asking about per se - he wanted to know what I thought about “the right to amplification”.
  - i.e. what you are allowed to shout in the town square isn’t the same as what you can say in the privacy of your home, or what you can say on a billboard.
- What do we mean by amplification?
  - We mean the distribution of some content on a social media platform relative to other content on the same platform. Content is “amplified” when it is shown to more users in newsfeeds (by being ranked higher in the feed).
  - It’s also worth noting that there isn’t even a widely accepted legal definition of amplification, and the technical approaches for measuring it are complex and expensive. One toy illustration of the idea is sketched below.
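As one illustration (and not a standard or legally recognized definition), amplification can be approximated by comparing the impressions a post receives under the ranked feed to what it would receive under a neutral baseline, like a simulated chronological feed. The function name and numbers below are hypothetical placeholders.

```python
# A toy way to quantify "amplification": compare impressions under
# ranked distribution to a neutral baseline. All names and numbers
# here are hypothetical.

def amplification_ratio(ranked_impressions: int, baseline_impressions: int) -> float:
    """Ratio > 1 means the ranking system showed this post to more
    users than a neutral (e.g. chronological) feed would have."""
    if baseline_impressions == 0:
        return float("inf") if ranked_impressions > 0 else 1.0
    return ranked_impressions / baseline_impressions

# Example: a post shown 5,000 times by the ranked feed, but only
# 1,000 times in a simulated chronological feed, was amplified 5x.
print(amplification_ratio(5_000, 1_000))  # 5.0
```

Even this toy version hints at why measurement is expensive: the baseline requires counterfactually re-ranking the feed for every user.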
- How does content ranking work?
  - At a high level, platforms want to show the most valuable content to users. Platforms use engagement as the primary signal of value, so content is ranked higher based on predictions of what you are most likely to engage with. Engagement means clicking, commenting, liking, sharing, or spending time viewing something. A minimal sketch of this kind of scoring follows below.
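To make this concrete, here is a minimal sketch of engagement-based ranking. The event types, weights, and predicted probabilities are hypothetical placeholders; real systems combine many more signals and models.

```python
from dataclasses import dataclass

# Hypothetical per-event weights; real systems tune many more signals.
ENGAGEMENT_WEIGHTS = {"click": 1.0, "like": 2.0, "comment": 4.0, "share": 8.0}

@dataclass
class Post:
    post_id: str
    # Model-predicted probability that this user performs each action.
    # Placeholder values, e.g. {"click": 0.30, "like": 0.05}.
    predicted_engagement: dict

def engagement_score(post: Post) -> float:
    """Weighted sum of predicted engagement probabilities."""
    return sum(
        ENGAGEMENT_WEIGHTS.get(event, 0.0) * p
        for event, p in post.predicted_engagement.items()
    )

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order a user's feed by descending predicted engagement."""
    return sorted(posts, key=engagement_score, reverse=True)

# A post with high predicted comments/shares outranks one with only clicks.
feed = rank_feed([
    Post("cat_photo", {"click": 0.30, "like": 0.05}),
    Post("hot_take", {"click": 0.10, "comment": 0.08, "share": 0.05}),
])
```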
- What are the implications of ranking based on engagement?
  - For most topics, engagement is a reasonable proxy for user value, and ranking works by showing people the content that is most interesting/relevant to them. It can end up promoting things like clickbait and spam (e.g. “like this post if you like cats, comment if you like dogs!”), but these types of problems can be mitigated with targeted demotions (a sketch follows below).
  - However, for sensitive topics where people have strong emotional reactions (like politics), research has shown that ranking based on engagement promotes content that is more sensational, toxic, and hate-filled than average.
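A targeted demotion might look something like the sketch below: a classifier flags a problematic category of content, and flagged posts have their ranking score scaled down rather than being removed. The classifier, threshold, and demotion strength are all hypothetical.

```python
# Hypothetical targeted demotion: scale down the ranking score of
# posts a classifier flags as engagement bait, instead of removing them.
CLICKBAIT_THRESHOLD = 0.8        # placeholder classifier cutoff
CLICKBAIT_DEMOTION_FACTOR = 0.2  # placeholder demotion strength

def demoted_score(base_score: float, p_clickbait: float) -> float:
    """Apply a multiplicative demotion if a (hypothetical) clickbait
    classifier scores the post above the threshold."""
    if p_clickbait >= CLICKBAIT_THRESHOLD:
        return base_score * CLICKBAIT_DEMOTION_FACTOR
    return base_score
```

The key property is that demotion reduces distribution without taking content down, which is exactly why the “amplification” framing matters more than the takedown framing.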
- Why is this a bad thing?
  - Research has shown that highly engaging sensitive content is not actually aligned with user value when users are surveyed. Ultimately, we have a system that distributes content, and we want this system to reflect the values we hold as a society. If the content distributed by this system is toxic and hateful, and our society does not value toxicity/hate, then the system is objectively not working as intended.
- What else could we optimize by? And what do we think would change if we did?
  - Look, there isn’t a silver bullet here that will magically fix everything. These are really hard problems, and any approach based on content-level understanding faces extreme challenges in scaling globally (although advances in large language models may help). However, we know the current approach is broken, and we could be spending more time experimenting with and improving alternatives.
  - The short answer as to what might be an improvement over the current state is greater reliance on a combination of user controls and survey-based models. User controls give users the ability to adjust what they see in feeds based on their own preferences, rather than using a one-size-fits-all approach. Survey-based models are an alternative to engagement optimization where platforms predict qualitative/subjective characteristics of a piece of content (e.g. is this worth my time, is this good for the world). There is a broader long-term vision where we could explicitly model the values we care about as a society and optimize for content that represents those principles. A sketch of blending survey-based predictions into ranking follows below.
  - If we put more weight on survey-based models, we would likely see decreases in behavioral engagement, but we would also see decreases in toxicity and hate speech, and we would likely see improvements in users’ reported satisfaction with the content they see in newsfeed.
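As a rough illustration of that blend, the sketch below interpolates between an engagement score and a survey-model prediction (e.g. the predicted probability that the user would say a post was “worth your time”), with a weight that could double as a per-user control. All names and values are hypothetical, and both scores are assumed to be normalized to [0, 1].

```python
# Hypothetical blend of engagement optimization with a survey-based
# value model. p_worth_your_time would come from a model trained on
# survey responses like "was this post worth your time?".
# Assumes both inputs are normalized to [0, 1].

def blended_score(
    engagement_score: float,
    p_worth_your_time: float,
    value_weight: float = 0.5,  # placeholder; could be a per-user control
) -> float:
    """Interpolate between pure engagement ranking (value_weight=0)
    and pure survey-model ranking (value_weight=1)."""
    return (1 - value_weight) * engagement_score + value_weight * p_worth_your_time
```

Setting value_weight per user is one way the “user controls” and “survey-based models” ideas combine: the platform supplies the predictions, and the user decides how much they matter.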
- Who decides what we optimize by? (governance)
  - Okay, so this sounds great: we know the problem (engagement optimization), and we have some potential alternatives (align ranking with human values using survey-based models), so how do we get started trying out improvements? Well, now the question is no longer about free speech or technical challenges in content ranking; the actual root cause is understanding how change can happen.
  - Ultimately it is up to the senior management of platforms to decide what ranking systems should optimize for. Taking Facebook as an example, Mark Zuckerberg has a controlling interest in the company, so it is essentially up to him alone. At the same time, he also owes a fiduciary duty to shareholders, so even his framework for evaluating the decision must be run through this legal lens. I’m not going so far as to say that this pushes his decisions in certain directions, so much as pointing out how the system works. So the question of “free speech” in this context is somewhat irrelevant, because it boils down to “who decides?” and “what criteria must they use to decide?”
  - It’s worth noting that governance is an area that is evolving. Facebook established an independent Oversight Board, and there has been discussion of experimenting with more community-driven governance models. The broader industry zeitgeist has floated “Web3”-based social media as a solution, rebuilding governance models through decentralized protocols and community-driven decision making.
So, what are my thoughts?
- My most basic thought is that these are really hard problems (obviously). What makes them hard isn’t the technical challenges (although those are also really tough), but the fact that we are dealing with systemic human problems. They are inherently political. They are messy and span borders, cultural norms, and economic systems. Trying to simplify the issues to “free speech” and “censorship” really misses the point, and focusing on the problems through the lens of censorship isn’t going to lead to the solutions we need.
- We shouldn’t be asking “what content should/shouldn’t be taken down?”, we should be asking “how do we determine what content is rewarded with distribution?” This second question is more important in terms of impact, and more difficult to answer.
- At the same time, the most important question might not even be about content ranking at all; it is about platform governance. Government regulation isn’t going to solve this; we need new and better models for platform governance so that these types of questions can be handled going forward, ideally in a way that benefits society overall. The most actionable step we can pursue in the short term is implementing standards for platform transparency so the issues can be studied by external researchers, as the existing transparency offered by platforms is not sufficient.
- While new external governance mechanisms are being explored, I think an interesting and under-explored avenue is shifting the balance of power between employees and management. Maybe this would take the form of a union or some other recognized organizing vehicle. But right now, the integrity employees who are closest to the problems/solutions, and who are largely free of the incentives facing management, have little agency in the decision making. At minimum, it would likely be positive for rank-and-file employees to 1) have a seat at the table in senior executive decision-making meetings, and 2) have an officially recognized mechanism to share areas of concern with the external Oversight Board. Otherwise, the only option for employees to appeal decisions has historically been to leak internal documents.
- As for the leaders of the platforms, I have empathy for their position. These are impossible problems where any decision results in criticism. It would be hard to justify choosing to optimize for pro-societal outcomes if it meant billions of dollars of reduced revenue or an increase in regulatory threats. In that sense, it is largely not their fault; this is just the system behaving the way it is designed.