The Problem With Growth Metrics

The conversation around data ethics often centers on issues of privacy and ownership of personal data.  However, there is increasing concern about how our digital services may be manipulating or addicting us.  In this post, we focus on the problem of addiction and how a standard data science practice – optimization for growth metrics – may be contributing to it.

Metrics are critical to data science.  From the multitude of things that we can measure, we must develop a small set of metrics that show how well our product is doing.  There are quality metrics, which try to measure the quality of a product, usually from the user’s perspective.  A simple example would be an app that, after the user has been using it for some time, asks the user to report how satisfied he or she is with the app so far.  The metric might be the proportion of users that say they are “very satisfied” with the product.
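
To make the arithmetic concrete, here is a minimal sketch in Python – the survey responses and function name are hypothetical – of how such a quality metric might be computed:

```python
# Minimal sketch of a survey-based quality metric: the share of
# respondents who report being "very satisfied" with the product.
# The survey responses below are hypothetical example data.

survey_responses = [
    "very satisfied", "satisfied", "very satisfied",
    "neutral", "dissatisfied", "very satisfied",
]

def satisfaction_metric(responses):
    """Proportion of respondents who answered 'very satisfied'."""
    if not responses:
        return 0.0
    return sum(r == "very satisfied" for r in responses) / len(responses)

print(satisfaction_metric(survey_responses))  # 0.5
```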

Optimizing for a metric means trying to find a way to increase it.  A product experiment might test two variations of a user interface or product behavior, and the variation with the higher metric will be chosen for the next product release.
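
As a sketch of what that selection step looks like – the variant names and values here are invented – the decision itself is nothing more than picking the larger number:

```python
# Sketch of an A/B experiment readout: each variant's metric is
# computed over its user group, and the higher-scoring variant
# is chosen for release.  The values are illustrative, not real data.

metrics = {
    "variant_a": 0.42,  # e.g. proportion of "very satisfied" users
    "variant_b": 0.47,
}

winner = max(metrics, key=metrics.get)
print(f"Ship {winner} (metric = {metrics[winner]:.2f})")
```

A real experiment would also check that the difference is statistically significant before shipping, but the core loop is exactly this simple: measure, compare, keep the winner.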

In contrast to quality metrics, growth metrics measure the frequency and duration of product use – they are called “growth” metrics because, over time, they measure product growth.  A typical growth metric is “daily active users” (DAU) of a product.  This counts each user of your product once per day.  In other words, if ten people use the product today, they count as ten DAUs – it doesn’t matter whether each of them uses the product once today or five times.  Another typical growth metric is “time-in-app”, which counts the total amount of time that each user spends in an app.
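
To illustrate the definitions, here is a minimal sketch – the event log is made up – of how DAU and time-in-app might be computed from raw usage events:

```python
# Sketch of computing two growth metrics from a usage log.
# Each event is (user_id, date, session_minutes); the data is made up.

events = [
    ("alice", "2018-10-09", 12),
    ("alice", "2018-10-09", 30),  # a second session the same day
    ("bob",   "2018-10-09", 5),
    ("alice", "2018-10-10", 8),
]

# DAU: each user counts once per day, however many sessions they had.
users_by_day = {}
for user, date, _minutes in events:
    users_by_day.setdefault(date, set()).add(user)
dau = {date: len(users) for date, users in users_by_day.items()}
print(dau)  # {'2018-10-09': 2, '2018-10-10': 1}

# Time-in-app: total minutes each user spent across all sessions.
time_in_app = {}
for user, _date, minutes in events:
    time_in_app[user] = time_in_app.get(user, 0) + minutes
print(time_in_app)  # {'alice': 50, 'bob': 5}
```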

These growth metrics seem reasonable.  It can be argued that optimizing for them is ethical – that it is in the user’s interest: “if the user didn’t want to use the product, he or she wouldn’t be using it”, or “the user would stop using the app if he or she wasn’t enjoying it”.

But this justification assumes that users have perfect self-control and know what’s best for themselves.  We can imagine tobacco companies making a similar argument while optimizing their tobacco recipe for “daily active smokers” to boost cigarette sales [1].

Optimizing for these sorts of metrics can create addiction.  One of the key principles of ethical data science is the recognition that we can influence people (even unintentionally) to take actions against their own best interest.  To continue our analogy, human physiology is vulnerable to certain chemicals in tobacco that can create addiction and cause a person to smoke regularly, even if that person knows that he or she would be better off quitting.  Similarly, human psychology is vulnerable to certain user interface designs, interaction patterns, and digital behaviors that can also create addiction or, more generally, shape a person’s behavior and habits in ways that may be unhealthy and that the person may even recognize as unhealthy.  Of course, there is more evidence for the harm of tobacco addiction than there is for the harm of technology addiction, but that is just a matter of time.

One of the great powers of data science techniques is that, even without any understanding of the human mind, they can find vulnerabilities in our psychology – the techniques can explore many different designs and behaviors to find the “most effective”.  With metrics like DAU or time-in-app, “most effective” can equate to “most addictive” – whether or not that is the intent of the product team.
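
To see how little understanding is required, consider a toy epsilon-greedy optimizer – the variant names and their “return rates” are entirely invented – that simply steers traffic toward whichever design produces the most return visits:

```python
import random

# Toy epsilon-greedy search over UI variants.  The "reward" stands in
# for a growth signal such as a return visit the next day.  The true
# return rates are invented for the sketch; the optimizer never sees
# them directly - it only observes rewards.

TRUE_RETURN_RATE = {"calm_ui": 0.30, "infinite_scroll": 0.45, "streaks": 0.60}

counts = {v: 0 for v in TRUE_RETURN_RATE}
values = {v: 0.0 for v in TRUE_RETURN_RATE}
epsilon = 0.1  # fraction of traffic spent exploring

random.seed(0)
for _ in range(10_000):
    if random.random() < epsilon:
        variant = random.choice(list(TRUE_RETURN_RATE))  # explore
    else:
        variant = max(values, key=values.get)            # exploit
    reward = 1.0 if random.random() < TRUE_RETURN_RATE[variant] else 0.0
    counts[variant] += 1
    # Incremental average of the observed reward for this variant.
    values[variant] += (reward - values[variant]) / counts[variant]

print(max(values, key=values.get))  # typically "streaks"
```

Nothing in this loop knows why “streaks” works on people; it only knows that the growth metric goes up when that variant is shown.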

Whether creating an addictive product is intrinsically unethical is debatable.  However, when a user’s own data is being used to make a product addictive, and the user has not consented to having his or her data used for this purpose, the ethical problem is clear: nonconsensual use of personal data violates accepted ethical norms, and when that use may harm the user, it becomes very difficult to justify.

Many companies’ data policies specify that data may be used to “personalize and improve [their] Products” [2].  Following the argument that “if the user didn’t want to use the product, he or she wouldn’t be using it”, they can justify promoting addiction as improving the product.  As I will discuss in a future post, I feel that data policies need more clarity about whose benefit the product is being improved for.  If an “improvement” to a product only increases the amount of time users spend using it, without a commensurate benefit to the user, the improvement is entirely to the benefit of the owners of the product, not its users.

In another future post, I will consider how we can design metrics that truly reflect the user’s best interest.


  1. Thanks to Istvan Lam, CEO of Tresorit, for the analogy.
  2. Example from Facebook’s policy as of 2018-10-10.

 
