As discussed in my previous article, content recommendation engines (CREs) like the Facebook newsfeed and YouTube’s “watch next” feature appear to be amplifying harmful content. Further, there may be an inherent conflict of interest: the business models of the companies behind these CREs may disincentivize them from pursuing adequate measures to solve the harmful content problem. Given the widespread recognition[1] of the social harms caused by the online dissemination of harmful content, and especially given this potential conflict of interest, greater participation from regulatory bodies is needed to ensure that progress is made.
My view is that a co-regulatory approach is most appropriate for tackling this problem, calling both governments and companies into action. The benefit of this approach is that it harnesses the expertise and insight of companies – who control the data, content, and CRE algorithms at the heart of the problem – while also ensuring effective transparency and accountability, as democratic governments set the guardrails and verify that reasonable efforts are being made. More extreme approaches – strict rule-based regulation on the one hand, and pure self-regulation on the other – have both failed to make inroads into the problems with CREs today[2].
In a co-regulatory framework, access to the relevant data by privileged third parties (governments, auditors, academics) is essential in order to evaluate the progress companies are making. We do not set out a vision of who this auditor might be and under exactly what circumstances the data should be provided, but assume that effective public and private law and basic constitutional safeguards are in place to prevent abuse of power by the auditors.
We focus here on the data needed to measure the extent of the problem and how much progress is being made (in a follow-up article, we will focus on the data needed to ensure that reasonable efforts are being made and that conflicts of interest are not hindering progress). Conceptually, this is simple: we need to measure the prevalence of harmful content on these platforms, and how much of it is being exposed to users, over time.
But there are many subtleties. We must be clear on the operational definitions of harmful content, which evolve as new laws and policies are written. We must understand how much content the site hosts, which may change constantly. We must have clear documentation of the methods used to identify harmful content on the site, whether human review or machine learning models. Then, based on the output of these methods, we need the identified rates of harmful content. It is important to note that human review of only a small (appropriately chosen) sample of a site’s content can allow us to infer the overall rates of harmful content on the platform with reasonable accuracy[3].
As we have discussed previously, harmful content is inherently subjective with no single concrete definition. We can consider various definitions to operationalize the concept, but they will carry their own limitations and biases. For example, we can consider “illegal content”. In countries where there is less emphasis on freedom of speech than in the US, much of what we would consider harmful content could well be illegal content. However, judicial review is generally needed to establish whether the material is illegal. As such, “illegal content” is not a practical operational definition.
Other definitions are created by the companies operating the online platforms. Internet companies have a terms of service (ToS) document that spells out generally what content they allow on their services, although the definitions may still be subject to some interpretation. Content that violates the ToS can be referred to as “disallowed content”.
Many such companies (especially the large ones) employ contractors to evaluate and rate their content. In addition to the ToS, they provide written rating policies (like Google’s Search Quality Rating Guidelines) that clearly define particular categories of content. For example, YouTube refers publicly to “borderline content” and claims specific numerical reductions in views of this content; for such claims to be meaningful, the company must have written a concrete definition that allows content to be classified as “borderline”. There may be multiple policies, and each policy may identify multiple categories of content, including multiple rating scales on metrics such as quality, accuracy, or trustworthiness.
Finally, we can also consider user-flagged content. Most online platforms provide a mechanism for users to flag content that they consider objectionable. Of course, users may have many reasons to do that, so the rate of flagged content has to be interpreted with care. Often, flagged content is prioritized for rating by employees or contractors.
These categories are not completely independent. Some users may flag content simply because they think it violates the ToS; the ToS will probably reflect legal requirements; and if certain types of content are frequently flagged, they may be specifically called out in the ToS or another rating policy. Ultimately, no single definition will be perfect – what is important is that a reasonable definition is operationalized to the point that content can be objectively determined to be harmful or not. The existence of such a definition should be a requirement for all but the smallest companies. They should then reasonably be expected to report on:
- Rates of removal of content due to reports or findings that it is illegal or disallowed. This should include the grounds for removal, who requested the removal, and any review or analysis used to verify the claim. Beyond measuring the actual rate of illegal content on the site, this can shine a light on censorship: companies often take down content that is flagged as illegal by a government authority without waiting for a court assessment (see here for a discussion of this issue and page 5 of this document for some data and analysis). This kind of data is also valuable for understanding the impact of changing regulations.
- Rates of flagged content.
- Rates of content in any categories that the company has the capacity to assess, either through policies (or “rating guides”) used for human review or through machine learning models.
This raises the question of how disallowed content is identified. If a piece of content is reviewed and found to be disallowed, presumably it would be immediately removed from the service. However, typically it is only possible to review a small proportion of a service’s content. Imagine a video-sharing site that hosts 100 000 videos. Perhaps the company hires contractors to assess a random 1000 of those videos – they find that 40 of those 1000 videos are disallowed by the ToS. Because the 1000 videos reviewed were a random sample of the 100 000 on the platform, we can estimate that about 4% (40 out of 1000) of the videos on the site would be disallowed if they were reviewed. We have only needed to review 1000 of the 100 000 videos, but using a statistical method known as a “confidence interval for a proportion”[4] we can report that we are 95% confident that the true rate of disallowed content on the platform is between 3.0% and 5.4%.
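For concreteness, here is a minimal sketch of that calculation in Python, using the Wilson score interval (one common form of confidence interval for a proportion) with the made-up numbers from the example above; as footnote [4] notes, more sophisticated methods exist for harder cases.

```python
# Minimal sketch: 40 of 1000 randomly sampled videos were found to be disallowed.
# We want a 95% confidence interval for the true rate across the whole platform,
# computed here with the Wilson score interval.
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z=1.96 gives ~95% coverage)."""
    p_hat = successes / n
    center = p_hat + z * z / (2 * n)
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (center - margin) / denom, (center + margin) / denom

low, high = wilson_interval(40, 1000)
print(f"Point estimate: 4.0%; 95% CI: {low:.1%} to {high:.1%}")  # roughly 3.0% to 5.4%
```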
Additionally, many online platforms will make use of statistical models to classify their content. Such models need training data, so as in our example above, some random sample of the service’s content will be classified by contractors according to a written guide produced by the company (perhaps as “good” / “borderline” / “disallowed” or, in more sophisticated cases, there may be categories for individual types of problematic content, such as “conspiracy theory”, “hate speech”, etc.). The statistical model can then learn to predict the category of any other piece of content on the service.
These statistical models have limited effectiveness for filtering. The model predictions carry uncertainty and can contain errors or bias. For example, when applied in an automated content recognition setting, the model might state that a particular video has a “72% chance of being disallowed”. That is probably not sufficient grounds for deleting the content preemptively, although content that the model predicts is highly likely to be problematic may be flagged for further review or suppressed for more sensitive audiences (children, etc.). However, the models are quite effective at estimating rates of harmful content. Thanks to a statistical result known as the law of large numbers, even if the model is wrong about many individual pieces of content, its aggregate estimate of how much content is harmful overall is likely to be quite accurate (provided its predicted probabilities are reasonably well calibrated). This provides an excellent measure of the overall magnitude of a service’s harmful content problem.
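To make the mechanics concrete, here is a minimal sketch (with entirely made-up texts and labels) of training a simple classifier on a human-rated sample and then estimating the overall prevalence of disallowed content by averaging the model’s predicted probabilities over the full corpus. Real systems are far more sophisticated, and the estimate is only trustworthy if the probabilities are reasonably well calibrated.

```python
# Minimal sketch: train a classifier on a human-rated sample, then estimate the
# overall rate of disallowed content by averaging predicted probabilities.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-rated sample (1 = disallowed under the ToS).
rated_texts = [
    "cooking tutorial for beginners",
    "video promoting a dangerous miracle cure",
    "travel vlog in the mountains",
    "clip inciting violence against a minority group",
]
rated_labels = [0, 1, 0, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(rated_texts, rated_labels)

# The full (unrated) corpus of content on the service.
all_texts = [
    "guitar lesson for kids",
    "daily news roundup",
    "video promoting a dangerous miracle cure",
    "travel vlog in the mountains",
]

# Probability that each item is disallowed, according to the model.
scores = model.predict_proba(all_texts)[:, 1]

# If the probabilities are well calibrated, their average estimates the overall
# prevalence of disallowed content, even though individual predictions may be wrong.
print(f"Estimated rate of disallowed content: {scores.mean():.1%}")
```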
We have so far remained nonspecific about what harmful content is. We suggest that various categories should be reported, such as disallowed, user-flagged, illegal, etc.; however, not all harmful content is equal: exposure to child sexual abuse material (CSAM) is likely to be considered much worse than exposure to a conspiracy theory. We do not set out a full taxonomy of harmful content here (although that would be a worthwhile endeavour), but one can imagine defining various categories such as CSAM, conspiracy theories, medical misinformation, etc. Within each of these categories there might be different tiers of material; the highest tier of conspiracy theories, for example, might be those that could lead to violence against a particular group.
With this taxonomy in place, one could calculate many different harmful content rates: the rate of harmful content of any kind, the rate for a particular category or set of categories, or the rate of harmful content in the highest (or top two) tiers, to give some examples. Additional categories can be defined as needed: for example, we may define a category of content that perpetuates racial discrimination, another that advocates violence, and another that provides misinformation related to an election.
We must also consider that there are many ways of measuring rates of content in any category. Take, for example, a video sharing site. We might care about the proportion of videos that are harmful. But longer videos may be more likely to be harmful, in which case we might care about the proportion of hours of video that are harmful. Next, it may not matter that the site hosts harmful content if no one is watching it, so we might care about the proportion of videos viewed or hours of video viewed that are harmful. We might instead care about the proportion of the service’s users that view at least one harmful video in a given month. Finally, we may care about videos that are only “impressed”, meaning that the title, description, and perhaps first frame are shown on the screen, but the video is never played. Generally speaking, there are many metrics we can use to measure rates of bad content. They all involve a “numerator” (how much harmful content) and a “denominator” (how much content or how many users in total). For example, we might have a numerator of “hours of harmful content watched” and a denominator of “total hours of content watched”. Alternatively, we might have “users that watched at least one harmful video in February 2020” and “total users in February 2020”.
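As a concrete illustration of how the choice of numerator and denominator changes the metric, here is a minimal sketch over a hypothetical view log (the schema and field names are invented for this example).

```python
# Hypothetical view log: one record per view, with the user, hours watched,
# and whether the video has been classified as harmful.
views = [
    {"user": "u1", "video": "v1", "hours": 0.5, "harmful": False},
    {"user": "u1", "video": "v2", "hours": 1.2, "harmful": True},
    {"user": "u2", "video": "v1", "hours": 0.3, "harmful": False},
    {"user": "u3", "video": "v3", "hours": 2.0, "harmful": False},
]

# Metric 1: share of watch time spent on harmful content.
total_hours = sum(v["hours"] for v in views)
harmful_hours = sum(v["hours"] for v in views if v["harmful"])
print(f"Share of watch time on harmful content: {harmful_hours / total_hours:.1%}")

# Metric 2: share of users who watched at least one harmful video.
all_users = {v["user"] for v in views}
exposed_users = {v["user"] for v in views if v["harmful"]}
print(f"Share of users exposed to harmful content: {len(exposed_users) / len(all_users):.1%}")
```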
We now describe, generally, what data these companies might be compelled to make available to auditors. A technical report specifying the details of this data could be written, but we do not take that on here.
Firstly, we need concrete definitions. Every company should have, as a minimum:
- A ToS document that spells out what content is allowed on the platform.
- A mechanism for users to flag content that they consider problematic – at the very simplest, this might be just an email address that users can send reports to, but typically should be an in-product user interface affordance such as a button close to the content itself. A document should be provided explaining the functioning of this mechanism.
- A policy describing how content can be removed from the site, whether in response to claims from governments or other third parties that it is illegal or disallowed, or for any other reason.
Many companies will also define additional content categories, and this may be considered mandatory for larger platforms. These may include “borderline content” that does not strictly violate the ToS but may still be considered harmful, different types of ToS violations, or different content themes. Documents defining these categories should also be provided.
As discussed above, employees or contractors will typically review and rate content. This should be mandatory for all but the smallest platforms, with clear guidelines and instructions provided to the reviewers. Additionally, reporting should cover the types of reviewers (contractors, speciality employees, other employees, etc.), the cultures and languages they represent, the number of reviewers, and the time spent on rating.
It is also common that statistical models are used to identify harmful content. This should be a requirement for platforms above a certain size. The performance characteristics of the model should be shared.
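As an illustration of what such performance characteristics could include, here is a minimal sketch with made-up numbers: precision, recall, and ROC AUC computed against a human-rated validation sample. A fuller report would also cover calibration and breakdowns by language, region, and content category.

```python
# Minimal sketch of model performance reporting on a human-rated validation sample.
from sklearn.metrics import precision_score, recall_score, roc_auc_score

human_labels = [0, 0, 1, 1, 0, 1, 0, 1, 0, 0]                      # 1 = rated harmful by reviewers
model_scores = [0.1, 0.3, 0.8, 0.6, 0.2, 0.9, 0.4, 0.3, 0.1, 0.2]  # model's predicted probabilities
model_labels = [1 if s >= 0.5 else 0 for s in model_scores]        # decisions at a 0.5 threshold

print("Precision:", precision_score(human_labels, model_labels))  # share of flagged items that are truly harmful
print("Recall:   ", recall_score(human_labels, model_labels))     # share of truly harmful items that get flagged
print("ROC AUC:  ", roc_auc_score(human_labels, model_scores))    # ranking quality across all thresholds
```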
Each document and its change history should be provided, as changing definitions can make rates of harmful content appear to vary over time when in reality only the definition has changed.
In order to measure rates of harmful content and to contextualize any findings, it is necessary to report how many users the platform has and how much content they view or consume. These measures should be reported over the full history of the platform.
Then, data on the presence of harmful content is needed. This should include the results of any human rating of content as well as the output of any models designed to predict content ratings. In order to support validation of content ratings, the ratings (from both human review and model predictions) should be provided for some reasonable sample of content so that a third party can evaluate the accuracy of the ratings. Additionally, there should be a full log of any content removed based on requests from governments or any other parties.
Additionally, it should be possible to restrict all of this data to particular geographical or linguistic subsets of the site – for example, to compare the rate of bad content between English and non-English content, or between the USA and Canada. If the site collects or infers demographics such as age or gender, restriction to various demographic groups should also be supported.
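A minimal sketch of such a breakdown, assuming a hypothetical rated sample with a language field (pandas is used here purely for convenience):

```python
# Hypothetical rated sample with a language label per item.
import pandas as pd

rated = pd.DataFrame({
    "video":    ["v1", "v2", "v3", "v4", "v5", "v6"],
    "language": ["en", "en", "en", "es", "es", "fr"],
    "harmful":  [False, True, False, True, False, False],
})

# Rate of harmful content per language; the same pattern applies to country
# or demographic breakdowns.
print(rated.groupby("language")["harmful"].mean())
```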
To summarize, it is quite reasonable to expect that digital platform companies know the overall extent of their problem with harmful content. By sharing clear definitions, policies for assessment, and data about usage and identified harmful content, greater transparency can be achieved. Then, in collaboration with regulators and researchers, progress towards a solution can be possible.
[1] See the links in the first paragraph of my previous article.
[2] See this report for an example of a strict approach being ineffective. The fact that this is still such a problem today makes it clear that self-regulation has not been effective.
[3] Facebook discusses their methods to do this here: https://github.com/facebookincubator/ml_sampler
[4] Note that this method is probably not effective in many relevant cases, but that there are more sophisticated methods that are.