How do we trust: does social media level the field between fact and fiction?

We have recently seen many examples of the danger of misinformation distributed on social media.  The COVID-19 pandemic has been accompanied by a misinformation pandemic, including conspiracy theories questioning the existence of this deadly disease, anti-mask propaganda, and claims of false cures.  One study shows that at least 800 people died in the first three months of this year due to misinformation about COVID-19 cures, to say nothing of how many COVID-19 deaths could have been prevented if the flow of misinformation discouraging proper mask use and social distancing were staunched.  Similarly, the US election has been capturing the world’s attention with a backdrop of all sorts of bizarre falsifications and illegitimate claims.  The “Stop the steal” Facebook group was shut down quickly, but only after, as the New York Times reports, it “had amassed more than 320,000 users — at one point gaining 100 new members every 10 seconds.”  Many similar groups have popped up since.

While this problem, as we’ve discussed previously, is the consequence of a complex interaction of technological and social factors, a key question is how people decide what content to trust.  In a talk at the Mozilla Emerging Technology Speakers Series, Jeff Hancock, founding director of the Stanford Social Media Lab, presents a very useful model, describing how social media may be creating a shift in how we trust.  Research from the Media Insight Project suggests that this shift may explain why disinformation seems to compete so effectively with the truth on social media.  This is critical in our present media environment, in which, according to the Pew Research Center, more than a quarter of US adults get news from YouTube.

Hancock describes three types of trust in his talk:

  • Individual trust exists between people based on their personal experiences of each other.
  • Distributed trust is the social network of trust – those you trust and those trusted by those you trust.
  • Institutional trust is based on an institution’s role in society – for example, trust in established media, government, academia, etc.

Critically, content on a person's social media feed mostly comes from sources in which that person has placed distributed trust.  Hancock's argument has two parts: that social media is strengthening distributed trust, and that this increase in distributed trust comes at the expense of institutional trust.  Based on Hancock's theory, we can hypothesize that this redistribution of trust from social institutions to social networks damages social cohesion and erodes the importance of shared truth in public discourse.

Social media is fundamentally built on the concept of distributed trust, in that most or all content presented to a user is produced or shared by the user’s social network.  It is, thus, quite plausible that when users browse social media for news or other information, distributed trust will dominate.  Related research from the Media Insight Project provides some additional insight.  Referring to news content on social media, the research reports that “how much [users] trust the content is determined less by who creates the news than by who shares it”.  In this particular study, the people sharing the content were public figures rather than members of the users’ social network, but, assuming that the finding generalizes, the insight that trust depends primarily on the person sharing the content is critical.

In a sense, social media creates a sort of context collapse, in which the author of content is overpowered by the identity of the person or organization that shared the content.  Prior to social media, health advice for coping with a pandemic might have come in the form of a professional media campaign from a public health authority, while claims that the entire pandemic is just a conspiracy were more likely to be seen on hastily scrawled cardboard signs at a rally.  This contextualization of messages is highly salient and creates useful signals for assessing trustworthiness.  On social media, however, the two messages can appear as identical blocks in a news feed, each with the name of a trusted member of the user's social network attached.  This collapse of context can strip away critical signals of institutional trust, denying institutionally generated content the advantage it might otherwise have had.

There are other reasons why a social-media-driven growth of distributed trust might erode institutional trust.  Hancock points out that social media amplifies criticism of institutions: an institution's errors in judgement can receive far more publicity than they would have before.  This can compound the context collapse described above.  During the early days of the internet, there was a graph making the rounds trumpeting how the internet was democratizing media, showing that the diversity of sources people got their news from had exploded.  I can't find the graph now, but it showed something like this: where previously 95% of media consumption was concentrated in just seven media companies, with the internet the top 100 media companies accounted for only 80% of media consumption.

This was seen as a victory, and perhaps in some ways it was.  But perhaps it has also been carried too far.  It seems fair to argue that a functioning society requires some level of shared beliefs and values.  As institutional trust becomes weaker and the choice of which stories one listens to becomes more diverse, there is a breakdown in shared beliefs and values – this may be the root of the social challenges we are dealing with today.  Greater respect for and trust in institutions may be a critical antidote to the filter-bubble news environment we now live in.

If social media has driven us away from institutional trust and towards distributed trust, it seems plausible that the decreased value placed on shared truth could lead to a decay in the importance of truth itself.  Although our institutions are certainly fallible and will sometimes spread falsehoods that must be challenged, their overall role in our society is to help separate truth from falsehood: the media through fact-checking, academia through the scientific method, and government through political and legal processes.  As institutional trust weakens, perhaps our public discourse comes adrift, floating further and further from the grounding of truth.  Of course, this is highly dependent on local context and culture.  In regions with weaker and less trustworthy institutions, a move to greater distributed trust has actually had a very positive social impact, allowing the dissemination of truth and free discussion critical of the very institutions we speak of.  Especially on a global scale, the point is not that institutions must overpower social networks, just that an appropriate balance must be struck.

In any case, both the erosion of shared values and beliefs and the weakened respect for truth have the potential to be destructive to our societies.  So what can we do?  If social media is, in some cases, weakening the institutional trust that helps ensure respect for truth and social cohesion, perhaps social media is beginning to take on part of the role of the institutions it is weakening – an arbiter of shared values and beliefs.  This is a role that cannot be entrusted to a for-profit corporation, at least not in our current regulatory environment.  If social media is to be allowed to continue to play such an important role in our society, it must be treated as an institution and either brought under public control or properly regulated.  The problem is hard, but there are potential technical solutions that could be explored if the incentives were correct.  For example, the rating guidelines that inform much of the data Google and YouTube use to train their models quite explicitly work to assess the authoritativeness of information.  Proper use of this data to, perhaps, "recontextualize" content and differentiate more trustworthy content from the rest could make a difference.  Ultimately, however, no algorithm can manufacture trustworthiness – instead, technology must support our social and institutional processes for building shared truth.
