Much has been said about how digital products like Facebook and YouTube threaten our psychological well-being, our democracies, and even the very fabric of our society. But it is difficult to characterize this threat concretely. Yuval Noah Harari makes an important contribution in “The myth of freedom”, arguing that most or all of the decisions a person makes are determined by external factors – that free will is a myth. He also argues that machine learning (like that powering digital platforms) can come to know us so well that it becomes an extremely effective manipulator, steering our actions and beliefs.
I believe that Harari is right that the threat from digital products stems largely from their extraordinary effectiveness in manipulating our behaviour. But this conclusion does not hinge on accepting the contentious proposition that free will is a myth. Instead, we can turn to statistics.
The critical argument is that manipulation is effective (and important) at scale, not at the individual level. If I wish to manipulate an election, there are two reasons why simply trying to convince a single individual to vote as I wish is ineffective:
- Changing a single vote is unlikely to affect the outcome of the election.
- I may fail to convince that individual to vote as I wish, possibly because he or she has free will.
However, if I attempt to convince many people, I can overcome both of these limitations. To see why, let’s take some philosophical liberty and model people as coins. This is an oversimplification, but let’s say that, in a two-party system, a person’s vote in an election is a coin flip that determines whether that person votes for the “heads party” or the “tails party”. The coin might be weighted. Many people are represented by a coin that is virtually certain to land on one particular side. But many voters are undecided or are open to changing their minds. So one person’s coin might have a 60% chance of landing on the heads party and a 40% chance of landing on the tails party. The details don’t matter, so for simplicity let’s say there is a population of “swing voters” who are modeled as fair coins: 50/50.
Let’s say that if I attempt to manipulate one of those swing voters, I can change the coin so it is now 52% likely to land heads and 48% likely to land tails. This doesn’t seem like a very effective manipulation and, intuitively, does not seem to violate free will. I still don’t know much about how that person is going to vote – it’s still very close to totally random.[1] However, if I can manipulate 50,000 people in this way, then even though each of them still “has free will” and may vote one way or the other, statistical law[2] tells me I can be 99.9996% sure that at least 51% of them will vote the way I wanted. Without my manipulation, this outcome would have had only a 0.00038% chance of happening.
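For readers who want to check the arithmetic, here is a minimal sketch of that calculation in Python (the library choice, variable names, and use of SciPy are my own; the figures – 50,000 voters, a 52% nudge, and a 51% threshold – come from the paragraph above):

```python
# Sketch of the swing-voter calculation, assuming SciPy is available.
from scipy.stats import binom

n = 50_000                  # swing voters I manage to reach
threshold = int(0.51 * n)   # I want at least 51% of them to vote "heads"

# P(at least `threshold` heads) = survival function evaluated at threshold - 1
p_with_nudge = binom.sf(threshold - 1, n, 0.52)   # coins nudged to 52% heads
p_without = binom.sf(threshold - 1, n, 0.50)      # fair coins, no manipulation

print(f"With a 52% nudge:     {p_with_nudge:.6%}")  # roughly 99.9996%
print(f"Without manipulation: {p_without:.6%}")     # roughly 0.0004%
```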
The same principle applies widely: if I advertise a product, the probability that any one individual buys it is very small, but if enough people see the ad, I can be confident I’ll make a lot of sales; if I try to manipulate people into spending more time in my app, each individual still makes a personal choice, but I can rest assured that the total time people spend will go up.
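To make the advertising version concrete, here is a small sketch with made-up numbers (the 0.5% per-viewer purchase probability is my own assumption, not a figure from the text). It shows how the relative uncertainty in total sales shrinks as the audience grows, even though each individual purchase remains nearly unpredictable:

```python
# Illustration of predictability at scale, with a hypothetical conversion rate.
import math

p_buy = 0.005  # hypothetical probability that any one viewer buys

for audience in (1_000, 100_000, 10_000_000):
    expected_sales = audience * p_buy
    std_dev = math.sqrt(audience * p_buy * (1 - p_buy))
    print(f"audience={audience:>10,}  expected sales={expected_sales:>8,.0f}  "
          f"relative spread={std_dev / expected_sales:.1%}")
```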
To begin solving these problems, we need to understand that even though an individual has free will and is difficult to manipulate, people as a population can be manipulated effectively – and the scale of the internet and the power of machine learning provide an ideal system for manipulation at scale. These methods are now being used with insufficient oversight in ways that may harm our well-being, our political systems, and our social fabric. The solutions are not yet clear but, whatever they turn out to be, the starting point is to recognize our own vulnerability.
[1] Using the popular understanding of “totally random”, which really should be phrased “uniformly distributed”.
[2] From the cumulative distribution function of a binomial random variable.