Image: Stanford scholars are tracking trends in dark patterns – the website designs that can lead us to buy something we don't really want or make it difficult to unsubscribe from various services. (iStock/pixdeluxe)

Many companies routinely rely on design features that deceive, coerce, or manipulate us online. They ask us to sync our contact list or allow cookie tracking while hiding options that would enable us to decline; they convince us to buy things by pitching a product as available in limited supply or “for a limited time only” when that is not actually the case; they sign us up for subscription services and then make it extremely difficult for us to unsubscribe. All of these are examples of what have come to be known as “dark patterns” – user interface designs that benefit online companies at our expense.

To better understand the universe of dark patterns, Stanford’s Digital Civil Society Lab (DCSL) now hosts a Tip Line where individuals can submit their sightings of dark patterns online. Lucy Bernholz, the director of DCSL, and Jennifer King, Privacy and Data Policy Fellow at the Stanford Institute for Human-Centered Artificial Intelligence, are coordinating the project. Here, King describes the increased use of dark patterns, plans for the Tip Line, and her hopes for future regulation of dark patterns.

Have dark patterns become more prevalent?

Having watched this space for the last decade, I would say the use of dark patterns has increased. It’s hard to pin down how many companies use them, but I have observed their use move from the fringe to the mainstream. Companies playing at the edge of being outright deceptive or fraudulent were the ones that pioneered the use of dark patterns. These are companies that have no business model without the dark pattern.

But over time, the use of dark patterns has gone more mainstream. We now see big platforms and general consumer companies relying on dark patterns in routine ways, such as making it hard to cancel or using a countdown timer to make it seem as if a purchase opportunity is about to disappear, or making claims – often disconnected from reality – that “we sold 900 of these today.” These companies try to make you feel that you’re looking at a scarce resource when in fact you are not. That type of online sales pressure didn’t exist on most e-commerce sites until very recently, but now suddenly, even the Gap, which I consider to be a fairly mainstream clothing site, has instituted those little prompts saying “25 people are looking at this now.” And in some cases these claims are completely fictional: They’re not connected to anything. That’s especially true of countdown timers: The code is literally just JavaScript that will reset every time you reload. 
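To illustrate the point, a fake countdown timer of this kind can be as simple as the sketch below. It is purely illustrative, not drawn from any specific site, and assumes a hypothetical page element with the id "offer-timer"; nothing in it reads real inventory or deadline data, so reloading the page simply restarts the clock.

```typescript
// Illustrative sketch of a client-side-only "countdown timer" dark pattern.
// The deadline is invented from scratch on every page load, so it is not
// connected to any real offer or inventory; reloading resets the countdown.
const FAKE_WINDOW_MS = 15 * 60 * 1000;        // always "15 minutes left"
const deadline = Date.now() + FAKE_WINDOW_MS; // recreated on each load

function renderCountdown(): void {
  const remainingMs = Math.max(0, deadline - Date.now());
  const minutes = Math.floor(remainingMs / 60_000);
  const seconds = Math.floor((remainingMs % 60_000) / 1000);
  // Assumes the page contains something like <span id="offer-timer"></span>.
  const el = document.getElementById("offer-timer");
  if (el) {
    el.textContent = `Offer ends in ${minutes}:${String(seconds).padStart(2, "0")}`;
  }
}

setInterval(renderCountdown, 1000);
```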

The bigger and more regulated a company is, the more likely it is that these scarcity claims are connected to some real infrastructure. But when they aren’t, these are illegitimate, high-pressure, deceptive sales tactics that the Federal Trade Commission or states’ attorneys general should police.

Do dark patterns affect vulnerable populations differently?

Dark patterns, by their nature, are ads or designs that try to pass themselves off as legitimate when in fact they are taking advantage of human weaknesses. They count on a person’s inability to deal with a lot of complicated formatting and information on a web page as a way to get them to purchase something, or sign up for a subscription, or consent to a privacy policy. Some of these are patterns that would seem shady to a digital native or a person with at least a high school education, but might successfully trick a person who is older and less digitally proficient, or less educated, or a non-native English speaker.

How can regulators distinguish between harmful dark patterns and something more innocuous?

Drawing the line between dark and more “gray” or borderline patterns is quite honestly the core of the challenge of regulation. It’s not a simple thing to do because context matters. In instances where there are financial interests at stake, or the company’s behavior is particularly egregious, there might be some low-hanging fruit that could be easily regulated. In fact, right now there are a few things the FTC could issue rules or guidance about. For example, they could state that you can’t auto-renew subscriptions without giving people timely notice; or that you can’t slip purchases into people’s carts online without them taking some affirmative action to include them. They could outlaw what we call negative option continuity plans, where you get signed up to pay monthly for a service or subscription without even realizing you have done it. They could require that the option to cancel a subscription or close an account must be as easy to locate and initiate as signing up for the service in the first place. These are some examples of obvious dark patterns that could be reined in now.

To regulate patterns that aren’t quite so blatant, we need to convene experts in the field and then work through clear examples of what’s permissible persuasion and what crosses the line to become bullying, coercion, or manipulation.

The Digital Civil Society Lab at Stanford is now hosting the Dark Patterns Tip Line, where people can report dark patterns they find online. What do you hope to accomplish with the Tip Line?

We have several ideas for the Dark Patterns Tip Line as we take it over from Consumer Reports, which has been shepherding it since it launched earlier this year. One is to continue it as a public resource not just to educate the public about dark patterns, but also to expand the reach of the Tip Line’s data collection so that it includes more types of dark patterns, particularly those that might be harming vulnerable communities or populations. To achieve that goal, we plan on reaching out to advocacy and civil society organizations to get them involved with identifying and submitting dark patterns. After we’ve received more submissions, perhaps in about a year, we will also analyze the data and issue a report on what the data collection efforts have yielded.

Second, we’re hoping to share the data we collect with researchers and policymakers. For independent researchers, collecting this type of data can be quite hard. And policymakers need it as well if they are going to understand the problem, comprehend how widespread it is, and decide to take appropriate action.

And third, we’re looking to develop and teach an undergraduate course on dark patterns this spring. It will be what we call a Policy Lab, in which students will learn some of the building blocks of dark patterns in the areas of communications, human-computer interaction, and cognitive theory. The students will also look at examples of dark patterns submitted to the Tip Line, decompose what we think makes them dark, and then try their hand at finding and classifying new ones in the wild. They may also draft recommendations for policymakers based on their findings. These would be some of the first students to be trained in this area, which is important because this type of expertise does not currently exist in regulatory agencies like the FTC.

How do machine learning and AI relate to dark patterns? 

We are at the beginning of seeing AI influence this area. Certainly, companies can and do use machine learning models to generate and test thousands of versions of ads or user interfaces, a practice known as “A/B testing.” And since those models are typically optimizing for clicks or for attention, it’s possible or even likely that what they generate will be dark by design: The machine learning models will produce ad designs that are coercive because those types of ad designs in fact get people to buy.
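As a rough illustration of what “optimizing for clicks” means in practice, the sketch below implements a simple epsilon-greedy selection over ad variants. The variant names and the feedback hook are hypothetical, and real systems use far more sophisticated models, but the objective is the same: whichever design earns the most clicks gets shown more, regardless of whether it is manipulative.

```typescript
// Minimal epsilon-greedy sketch: serve mostly the variant with the best
// observed click-through rate, and occasionally explore the others.
interface Variant {
  id: string;          // hypothetical designs, e.g. a fake-scarcity banner
  impressions: number;
  clicks: number;
}

const variants: Variant[] = [
  { id: "plain", impressions: 0, clicks: 0 },
  { id: "countdown-timer", impressions: 0, clicks: 0 },
  { id: "only-3-left", impressions: 0, clicks: 0 },
];

const EPSILON = 0.1; // explore 10% of the time, exploit otherwise

function clickRate(v: Variant): number {
  return v.impressions === 0 ? 0 : v.clicks / v.impressions;
}

function chooseVariant(): Variant {
  if (Math.random() < EPSILON) {
    return variants[Math.floor(Math.random() * variants.length)];
  }
  // Exploit: the variant with the highest click-through rate so far wins,
  // whether or not its design is coercive.
  return variants.reduce((best, v) => (clickRate(v) > clickRate(best) ? v : best));
}

function recordImpression(v: Variant, clicked: boolean): void {
  v.impressions += 1;
  if (clicked) v.clicks += 1;
}

// Example usage: serve a variant, then record whether the user clicked.
const shown = chooseVariant();
recordImpression(shown, Math.random() < 0.05);
```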

Most famously, we saw this strategy used in the presidential election, particularly by the Trump campaign, where on Facebook they were testing something like 4,000 iterations of a campaign ad to determine, at scale, which versions got the most clicks. And many of us in the research community are concerned that models trained to maximize click rates will inherently generate dark patterns.

I’m also concerned about the use of AI algorithms to deliver content. For example, on YouTube the default setting is autoplay, meaning an algorithm automatically plays the next video and will endlessly serve you more and more content. And the problem is not just that it’s serving you videos that might be related to what you just watched. It’s that the algorithms have been optimized to keep you engaged, serving up more and more outrageous, attention-getting content. This is how your kid starts by watching a video on airplanes and two hops later is viewing videos about conspiracy theories.

Autoplay was called out as a dark pattern by the drafters of the DETOUR Act, legislation proposed in 2019. So, even though autoplay is a design element that can be switched on and off, having it on by default is harmful because it coerces us into watching content we weren’t looking for – including mis- and disinformation – and wastes our time.

What kinds of regulations already apply to dark patterns?

The Federal Trade Commission already has broad powers to regulate deceptive business practices and could right now take action to outlaw some very specific marketing practices online – the worst of the worst, essentially. And various states’ attorneys general also go after companies where the site design or the interaction design is outright deceptive. But it’s unclear whether existing laws support FTC or state action against more subtle uses of dark patterns to coerce and manipulate users online.

In California, we have two laws that address dark patterns, but only with respect to privacy. So, for example, the CCPA [California Consumer Privacy Act], which went into effect in 2020, focuses on users’ right to opt out of the sale of their personal information. When companies design an opt-out interface, they can’t use a dark pattern to make opting out difficult or impossible. And, starting in 2023, the CPRA [California Privacy Rights Act] will bar the use of dark patterns when users are asked to consent to information sharing. These laws are a start, but they are also very narrow. 

What types of additional policies do you think will be effective in preventing and mitigating the harmful effects of dark patterns?

We need policies on multiple fronts. In addition to regulating the low-hanging fruit, we should see broad guidance from the FTC that makes companies tread far more carefully. We should look beyond e-commerce into areas like online gaming and products and content aimed at kids. And then, more broadly, we need policies on online consent practices in general, targeting how third parties collect your information online and in mobile apps.

We’ll also be debating and defining the point at which algorithms go beyond delivering a desired search result to delivering something that manipulates you. This is a hard issue because the problem is not the algorithm itself; it’s what the algorithm is being optimized to do. Algorithms that are optimized to engage you will be one of the most contentious areas for regulation.
