The Internet’s largest user-generated content platforms – including YouTube, Facebook, and others – have a serious problem with harmful content. Misinformation, radicalization, and exploitation have all found homes on these sites. These are complex phenomena, reflecting social and psychological issues that predate our era, yet modern technology can amplify them in new and powerful ways. At least in part, this amplification appears to be inherent in the content recommendation algorithms and in the business models of the companies that build them. Greater transparency and responsibility are needed to ensure that these companies take appropriate steps to avoid harming our society.
Dividing posts and videos into piles of “good” and “bad” content is hard, if not impossible. This article is not advocating for censorship – laws vary between nations, but within appropriate limits, people should have the right to create and distribute whatever content they want. Ultimately, however, the platforms choose which content to recommend, even if that choice is obfuscated by algorithms. If content recommendation engines are amplifying voices and broadening audiences for content that makes people feel unsafe online, or that is otherwise harmful to society, then solving this problem is not censorship.
To understand the possible link between the business models of the content platforms and harmful content, we must understand something about how these business models function. The types of companies we’re talking about can be classified as “attention merchants”. There is an excellent exposé written by Dan McComas, the former product head at Reddit, that summarizes the idea succinctly:
“The incentive structure is simply growth at all costs. There was never, in any board meeting that I have ever attended, a conversation about the users, about things that were going on that were bad, about potential dangers, about decisions that might affect potential dangers. There was never a conversation about that stuff.”
For the attention merchants, the primary business goals are to get more users and more engagement from those users. The more people spending more time with the product, the more ads can be shown and sold. And as users engage with the platform, uploading or sharing content, liking and commenting, the platform collects data that can be sold or used to better target those ads. This focus on growth and engagement is baked into the core of the algorithms that power the Internet’s largest content platforms.
How is this connected to harmful content? If the primary goal is to maximize engagement, then we might ask: “can recommending harmful content lead to more engagement for a platform?” Only the platform companies themselves are in a position to answer this question decisively, but the evidence points to “yes”. The recommendation engines are very good at surfacing content that will lead to engagement, so the very fact that so much harmful content is recommended is telling; harmful content, it seems, can receive a great deal of engagement. Recommending it may thus be an unintended consequence of optimizing a recommendation engine for engagement: even though these companies have no intent to promote harmful content, their recommendation engines may be doing exactly that.
Of course there are trade-offs to be made. The companies care about their long-term success and recognize that surfacing excessive harmful content is not good for business. But when suppressing harmful content hurts the bottom line, the business logic leads to the question of “how much harmful content can we still recommend without harming our long-term success?” The appropriate balance here for a business is not necessarily the appropriate balance for preventing harm to our society.
To better understand engagement and how it is measured, let’s get into a few details¹. One of the main tools of the trade for data scientists and quantitative analysts is the “metric”. A metric reduces complex information about how a product is doing to a single number. One common metric is “daily active users” (“DAU”), which counts the number of unique people using the product on a given day. Another might be “average time in app”, which measures how much time users spend in the app on a given day. A third might be “like button interaction probability”: the probability that a user clicks the like button when they view a post.
As you can imagine, there are many possible metrics. They may also measure how much content users share, how much they interact with particular features of the product, and so on. But typically, just a few very important metrics are chosen, often referred to as “North Star Metrics” or “Key Performance Indicators”. Most product development effort focuses on increasing these metrics.
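To make this concrete, here is a toy sketch in Python of how two such metrics could be computed from a raw event log. The log format and field names are invented for illustration; real pipelines aggregate billions of events, but the arithmetic is the same.

```python
from collections import defaultdict

# Hypothetical event log: (day, user_id, event_type) tuples.
events = [
    (1, "alice", "view"), (1, "alice", "like"),
    (1, "bob",   "view"),
    (2, "alice", "view"),
    (2, "carol", "view"), (2, "carol", "like"),
]

def daily_active_users(events):
    """DAU: the number of unique users seen on each day."""
    users_by_day = defaultdict(set)
    for day, user, _ in events:
        users_by_day[day].add(user)
    return {day: len(users) for day, users in users_by_day.items()}

def like_probability(events):
    """The fraction of post views that result in a like."""
    views = sum(1 for _, _, kind in events if kind == "view")
    likes = sum(1 for _, _, kind in events if kind == "like")
    return likes / views if views else 0.0

print(daily_active_users(events))  # {1: 2, 2: 2}
print(like_probability(events))    # 2 likes over 4 views = 0.5
```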
There are two primary ways a product is optimized for a metric, meaning the product is changed in ways that will increase the metric: experimentation (A/B testing is a common type of experiment) and machine learning optimization. In the case of A/B testing, a change to the product can be tested by showing the changed version to some users and the original version to others. The metrics can then be calculated separately for each group, and if the changed version improves the metrics, it will be “launched” and the product will be updated for everyone. It’s worth noting that many large tech companies run thousands of such experiments every year.
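A minimal sketch of the decision step in an A/B test, using invented click-through numbers: a standard two-proportion z-test tells us whether the variant’s lift in the metric is larger than random noise would explain.

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """z-statistic comparing the click rates of two experiment groups."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented numbers: in the control group A, 4,000 of 50,000 users clicked;
# in the variant group B, 4,400 of 50,000 did.
z = two_proportion_z(4000, 50_000, 4400, 50_000)
print(z > 1.96)  # True: the lift clears the usual 95% significance bar,
                 # so this change would likely be "launched"
```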
Machine learning works similarly – you can think of a machine learning model as a continuously running experiment. The model is tasked with making some decision about how the product operates (for example, which video to suggest that a YouTube user watch next). The model constantly receives feedback (did the user watch the recommended video, what kind of video was it, and what do we know about the user?) and adjusts how it makes its recommendations. This adjustment is always guided by some kind of metric, just as in experimentation.
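The feedback loop can be sketched with a deliberately tiny model: an epsilon-greedy bandit that recommends whichever video category has the best observed watch rate so far, exploring occasionally. The categories and watch rates below are invented, and a real recommender is vastly more sophisticated, but it chases its metric in the same recommend–observe–update cycle.

```python
import random

class EngagementOptimizer:
    """Epsilon-greedy bandit over video categories (toy example)."""

    def __init__(self, categories, epsilon=0.1, seed=0):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.shows = {c: 0 for c in categories}    # times recommended
        self.watches = {c: 0 for c in categories}  # times watched

    def recommend(self):
        if self.rng.random() < self.epsilon:  # explore 10% of the time
            return self.rng.choice(list(self.shows))
        # Exploit: pick the category with the best watch rate so far
        # (unseen categories get an optimistic estimate of 1.0).
        return max(self.shows, key=lambda c:
                   self.watches[c] / self.shows[c] if self.shows[c] else 1.0)

    def feedback(self, category, watched):
        self.shows[category] += 1
        self.watches[category] += int(watched)

# Invented "true" watch rates: the outrage-bait category engages more.
true_rate = {"cooking": 0.3, "outrage": 0.6}
opt = EngagementOptimizer(list(true_rate), seed=0)
world = random.Random(1)
for _ in range(2000):
    category = opt.recommend()
    opt.feedback(category, world.random() < true_rate[category])
print(opt.shows)  # the higher-engagement category ends up recommended far more
```

Nothing in the loop asks whether “outrage” is good for anyone; the model simply learns that it moves the metric.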
Content platforms are constantly tuning their recommendation engines in order to increase certain metrics. Of course, the type of metrics that we’ve been talking about (“growth metrics”) are not the only ones used. There are many other types, measuring interactions with user interface elements, product performance in terms of speed and reliability, and measures of views and recommendations of content with different topics or by different creators.
There are even metrics to measure exposure to harmful content. Typically, a company will have a written policy describing how content is classified into defined categories. Some categories cover content that is explicitly unacceptable under the product’s terms of service and will probably be deleted when identified. Another category is “borderline content”, which does not violate any rules but may still be harmful to show to users in some or all cases. It is important to make clear that the platform companies write these policies themselves – they make their own definitions of harmful or borderline content. As I mentioned, the true concept of harmful content is complex and contextual, but these companies make their own approximate generalizations.
Once the definitions are established, metrics can be developed. A sample of content is sent to human raters (usually contractors) for review and classification. At this point, the company knows, for a small subset of the platform’s content, “what is good and what is bad”. This data can be used to train machine learning models to classify every other piece of content on the platform. Critically, these models are imperfect: some harmful content will pass as apparently harmless, and some innocent content will be incorrectly flagged as harmful. But statistically, these models can provide a fairly accurate measure of how much harmful content users are being exposed to.
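As a sketch of the idea (not any platform’s real pipeline), here is a bare-bones word-count classifier trained on a handful of invented rater labels. Production systems use large machine learning models, but the shape is the same: learn from a small labeled sample, then classify everything else.

```python
from collections import Counter

# Hypothetical rater-labeled sample: (text, label) pairs.
labeled = [
    ("miracle cure doctors hate", "harmful"),
    ("secret plot they hide truth", "harmful"),
    ("how to bake sourdough bread", "ok"),
    ("cute cat compilation", "ok"),
]

def train(labeled):
    """Count word occurrences per label."""
    counts = {"harmful": Counter(), "ok": Counter()}
    for text, label in labeled:
        counts[label].update(text.split())
    return counts

def classify(model, text):
    """Label unseen text by which label's vocabulary it overlaps most."""
    def score(label):
        return sum(model[label][word] for word in text.split())
    return max(("harmful", "ok"), key=score)

model = train(labeled)
print(classify(model, "the miracle cure they hide"))   # harmful
print(classify(model, "bread baking for cat lovers"))  # ok
```

Even this toy version shows where the errors come from: a harmless recipe that happens to mention a “miracle cure” could be flagged, and harmful content using unfamiliar vocabulary would slip through.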
What this means is that the platform companies cannot generally say with certainty that any particular piece of content is harmful, so it is not feasible to simply “filter out” all the bad content. But there are changes to the recommendation engines that can increase or decrease the overall level of harmful content that users are exposed to, and the platform companies are able to measure the impact of these changes effectively. Thanks to a statistical property known as the law of large numbers, even if the classification of an individual piece of content is sometimes wrong, the proportion of harmful content in a large sample can be estimated quite accurately.
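A small simulation illustrates this. Assume (numbers invented) that 5% of content is truly harmful, and that the classifier catches 80% of harmful items while wrongly flagging 5% of harmless ones. Individually it makes thousands of mistakes, yet because its error rates are known, the aggregate flagged rate can be inverted (the standard Rogan–Gladen adjustment) to recover the true prevalence almost exactly:

```python
import random

rng = random.Random(42)

TRUE_PREVALENCE = 0.05  # assumed: 5% of content is actually harmful
SENSITIVITY = 0.80      # P(flagged | harmful)
SPECIFICITY = 0.95      # P(not flagged | harmless)

# Simulate the imperfect classifier's verdict on 200,000 items.
n = 200_000
flagged = 0
for _ in range(n):
    harmful = rng.random() < TRUE_PREVALENCE
    if harmful:
        flagged += rng.random() < SENSITIVITY  # true positive
    else:
        flagged += rng.random() > SPECIFICITY  # false positive
flagged_rate = flagged / n  # around 0.0875: the raw rate overstates the truth

# Rogan-Gladen correction: invert the known error rates.
estimate = (flagged_rate - (1 - SPECIFICITY)) / (SENSITIVITY - (1 - SPECIFICITY))
print(abs(estimate - TRUE_PREVALENCE) < 0.005)  # True: within half a percent
```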
Preventing harmful content from being surfaced is not easy, but it is not impossible either. Google Search does an excellent job of keeping inappropriate content out of its results. The fact that YouTube recommendations have so much more of a problem with harmful content than Google Search suggests that there are fundamental differences between the two systems.
I would argue that this comes down to objectives: Google Search can surface the content that best matches the user’s search query, while YouTube recommendations have no search intent to work with and so optimize simply for engagement: getting the user to watch more videos. As I have suggested, it is this optimization for engagement that amplifies harmful content. This is supported by the observation that YouTube search results have less of a problem with harmful content than YouTube recommendations do: when there is a search query to work with, the optimization is not purely for engagement.
So now, we get to the core question: what if an experiment shows that a particular change to a content recommendation algorithm will increase the key growth metrics, but also slightly increase the amount of harmful content users are exposed to? Will the company decide to make that change? We don’t know. We don’t even know for sure if these sorts of situations arise, but given the large scale of the harmful content problem on these services, and given how much engagement harmful content tends to receive, it seems very likely.
Conflicting incentives like these are a major reason why we need greater public awareness and why we need to push for real responsibility and accountability in the implementation of content recommendation engines. The companies behind these platforms claim to be making progress in solving these problems; but we need those claims to be backed up with data and evidence, and we need external researchers and journalists to have the access and data necessary to be part of the solution.
In the next instalment, I will go into more detail about what these companies could (and should) do to demonstrate their commitment to preventing their products from creating social harm.
¹ In this post, I present an oversimplified view that leaves out some technical details; I hope that it is comprehensible for everyone and that experts will forgive the omissions.