We believe you deserve transparency and control over how you’re being manipulated. Attention Settings demonstrates practical pathways for regulating attention-extracting products to increase consumer agency.
The attention economy uses persuasive design techniques to exploit human vulnerabilities and extract our attention. Because of the advertising-based business model and the limited nature of human attention, these companies will continue to compete to get better at hijacking our brains.
However, for policy-makers, political (governments) and ecosystemic (Apple and Google) alike, there isn't a clear path forward when it comes to regulating the design of these products.
The Digital Services Act recently introduced by the EU includes an intention to ban ‘dark patterns’:
“Under new rules, ‘dark patterns’ are prohibited. Providers of online platforms will be required not to design, organise or operate their online interfaces in a way that deceives, manipulates or otherwise materially distorts or impairs the ability of users of their services to make free and informed decisions.”
However, product design is full of gray areas; it isn't black and white. The problem of manipulation and attention extraction goes far beyond dark patterns. Addictive loops are at the core of the user experience of the most popular social media platforms. The lines are blurry.
That’s where Attention Settings come in.
The basic idea is to give users awareness of how they're being manipulated and tools to selectively opt out of the most engaging features — to reclaim their attentional, cognitive, and behavioral sovereignty.
This is roughly what it could look like:
Products with over 10 million users could be required to provide a) transparency about the techniques they use to increase engagement and b) the option to opt out of these techniques and features — as a condition for passing App Review.
If a product uses persuasive design, every one of these techniques and features should be listed in the App Store nutrition card. The Attention Settings then provide the user with the option to opt out. There could be an educational element to inform users about how these techniques work and why they might want to consider opting out.
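To make this concrete, here is a minimal sketch of how such a declare-and-opt-out model could work. Everything in it (the technique names, the `AttentionSettings` type) is a hypothetical illustration, not a real App Store or Apple API.

```typescript
// Hypothetical model of an app's "attention nutrition card": every
// engagement technique the app declares, each with an opt-out toggle.
// All names here are illustrative assumptions, not a real platform API.

type PersuasiveTechnique =
  | "infiniteScroll"
  | "autoplay"
  | "algorithmicFeed"
  | "suggestedContent"
  | "shortFormVideo";

class AttentionSettings {
  private optedOut = new Set<PersuasiveTechnique>();

  constructor(private declared: Set<PersuasiveTechnique>) {}

  // Users can only opt out of techniques the app actually declares.
  optOut(technique: PersuasiveTechnique): void {
    if (this.declared.has(technique)) this.optedOut.add(technique);
  }

  // The app checks this before enabling an engagement feature.
  isEnabled(technique: PersuasiveTechnique): boolean {
    return this.declared.has(technique) && !this.optedOut.has(technique);
  }
}

// Example: a video app declares three techniques; the user disables autoplay.
const settings = new AttentionSettings(
  new Set<PersuasiveTechnique>(["autoplay", "algorithmicFeed", "suggestedContent"])
);
settings.optOut("autoplay");
console.log(settings.isEnabled("autoplay")); // false
console.log(settings.isEnabled("algorithmicFeed")); // true
```

The point of the sketch is the shape of the contract: the app declares what it uses, the platform surfaces the list, and every feature gate goes through a user-controlled switch.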
Here are some examples of what these changes might look like in different apps:
For Facebook, choosing a chronological feed, hiding all short-form videos like Reels, and disabling suggested content would make all the difference. The one on the right is a kind of Facebook I would enjoy using. The one on the left I simply refuse to have on my phone.
When I open YouTube, I mostly have an intention for what I want to do. Like, maybe I want to play a song, show a specific video to a friend, or learn how to make a perfect poached egg.
But then YouTube shoves an auto-playing ad in my face and shows me an infinite feed of videos and shorts that are probably really fun to watch, but that will also cost me 2 hours that I didn’t mean to spend. With 70% of watch-time coming from videos suggested by YouTube’s algorithm, it’s not about the one video you consciously signed up for.
With Attention Settings, you could turn off suggested videos, prevent auto-play, and remove the Shorts tab.
Just like with Facebook, time on Instagram is much more worthwhile when you get to see what your friends have been up to than when you’re watching a bunch of ‘entertaining’ videos.
With a chronological feed, “Hide Explore Section”, and “Hide Suggested Content”, it might be viable again for me to have Instagram on my phone and occasionally check what the ~100 people I follow have been up to.
There’s some genuinely wholesome content on TikTok. At the same time, the algorithm underlying the For You Page is incredibly good at keeping us scrolling and watching. Attention Settings could help users see the content they want to see, without getting sucked into binge-watching sessions.
These are merely examples of the kinds of things that could be done and would be helpful. Eventually, a list should be put together by a consortium of independent organizations that track the leading edge of this space.
The basic principle is this: If you use design techniques that increase user engagement, you need to provide the user with the option to opt out.
The power asymmetry between the design teams at Facebook and the average user is enormous. They have weapons-grade behavior modification technology aimed at isolated individuals who don’t know what’s going on.
Their business model incentivizes these companies to maximize engagement at all costs. That means they can't be trusted to honor their duty of care to act in the users' best interests.
In order to level the playing field and re-establish the symmetry of power, the user needs to understand what’s going on and have control to turn these things off selectively.
These companies have nation-state scale and utilities-grade network effects and lock-in. Voting with your feet doesn’t apply — you can’t just leave. Right now, that means you’re forced to take whatever they offer, regardless of how manipulative it is, how badly it undermines your mental health, etc.
When you’re forced to stay, you should at least be empowered to stay on your terms — not just when it comes to privacy, but especially when it comes to attention and manipulation.
It's a very audacious proposal, and I hear your doubts:
- This interferes with 3rd party products in unprecedented ways
- Legal and political pushback is almost inevitable
- On what basis could we possibly do this?
- This will overwhelm users
These are valid concerns, yet I ask you to put them aside for a minute. If I addressed all your doubts and we were able to implement this on a watertight legal basis, at a user-friendly level of simplicity: would it be the right thing to do?
I believe so. And it's possible to a) create this basis of legitimacy and b) make it simple. There are two fundamental questions to answer:
1: What is classified as engagement-maximizing, and by whom?
Who is to say that a particular feature is attention-maximizing? Where is the line between good UX and manipulation? I believe this is a relatively straightforward question to answer. A consortium of independent organizations, informed by foundational research (funded by Apple or the legislators), could surely come up with practical definitions.
2: What might be a sufficient basis of legal legitimacy?
The surveillance-industrial complex will push back on this, and there are plenty of directions they could take. On what basis could Apple or the EU possibly set these requirements?
Two frames seem especially promising as a basis of legitimacy:
Frame 1: Accessibility

Instead of framing engagement-maximizing products as "bad for everyone", there is a reasonably straightforward argument that problematic smartphone use and social media addiction are handicapping a minority of the population.
From poor self-control to mental health issues and precarious life circumstances, regardless of what causes someone to be more vulnerable to manipulative design and cheap dopamine, the vicious feedback loops of compulsive behavior and increased addiction can have catastrophic consequences on people's wellbeing and productivity.
Not providing the comprehensive tooling that neurodiverse and otherwise disadvantaged people need to use these products safely and effectively means neglecting some of our most vulnerable populations.
Frame 2: Undue Influence

Persuasive technologies, especially social media, exert undue influence — a severe and systematic form of manipulation that is legally recognized in other contexts. Platform providers like Apple have a fiduciary responsibility to protect their customers from such undue influence.
Based on the technical definitions of undue influence provided by psychologists, persuasive technologies have crossed the threshold from persuasion to coercion, and thus need to be classified as undue influence.
I suggest investing significant resources in constructing a solid legal specification of Undue Influence in the context of persuasive tech. With recursive loops of red-teaming and risk analysis, it should be possible to reach a point where Apple or policymakers could confidently use this as a legal basis for Attention Settings.
Facebook et al. will complain, and legal pushback is likely. Still, they would have to make a case against Undue Influence, in democratic courts, under the eye of legislators and a public that has been waiting for a comprehensive articulation of and response to these companies' perverted practices.
Four qualities of persuasive technology stand out as foundations on which legal experts could construct a definitive argument:
- Privileged Information: With sophisticated psychological profiles, thousands of data points, and years of meta-data on every user, social media companies have a comprehensive set of privileged information.
- Statistical Influenceability: If a particular feature or design change influences real-world behaviors with statistical reliability, that's a sound basis for undue influence.
- Asymmetry of Power: Facebook’s data and its ability to use it to modify user behavior are orders of magnitude greater than the average user’s awareness, understanding, and ability to protect themselves from that manipulation.
- Lack of Informed Consent: Most users have no clue that an algorithm — let alone weapons-grade behavioral modification technology — is playing against them. Facebook's research shows that they can shape emotions and behavior without triggering the users' awareness.
The cultural, ethical, and philosophical case for limbic capitalism's immorality has already been made extensively. What is undue influence, if not ruthless attention capture and behavior modification, motivated by advertising revenue?
It is a worthwhile cause. This is an opportunity to create a cultural inflection point, to take a stand on the right side of history.
It’s also an opportunity to directly improve the user experience of hundreds of millions of people, helping them get less distracted from the lives they want to live.
But why stop at Attention Settings? There are countless opportunities for minimizing unintended social media usage. Here are our best ideas for how to help people be more intentional in their technology usage.
Mindful UX Interventions
iOS could provide users with the option to enjoy a mindful moment before opening distracting apps, and offer Siri Suggestions with activities that might be more attractive than mindless scrolling.
While you can already get this experience with apps like one sec or Potential, it takes some setup, and the UX is less polished than what Apple could build. Even better, Apple could provide an AttentionKit and empower developers to trigger modals like this on top of other apps, based, for example, on data from the ScreenTime API.
Three possible contexts for these interventions: App Opening Modal, App Overlay, and Time Limit Suggestions.
That way, the ingenuity of the developer ecosystem could be applied to helping people allocate their attention and make better choices in mindless moments. Needless to say, this would be a much better user experience than repeatedly unlocking 15 more minutes of Screen Time…
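As a sketch of what an AttentionKit might let developers express, here is one way the three intervention contexts above could map onto a simple decision rule. Every type, name, and threshold here is a hypothetical assumption for illustration; no such API exists today.

```typescript
// Hypothetical AttentionKit sketch: pick one of the three intervention
// contexts (opening modal, overlay, time-limit suggestion) based on
// made-up session data of the kind a ScreenTime API might provide.

type Intervention =
  | { kind: "openingModal" } // mindful moment when the app opens
  | { kind: "overlay" } // shown on top of the running app
  | { kind: "timeLimitSuggestion"; minutes: number };

interface SessionContext {
  appJustOpened: boolean;
  minutesInSession: number;
  dailyMinutesSoFar: number; // hypothetical: supplied by Screen Time data
}

// Decide which intervention (if any) fits the current moment.
// Returns null when no intervention is warranted.
function pickIntervention(ctx: SessionContext): Intervention | null {
  if (ctx.appJustOpened) return { kind: "openingModal" };
  if (ctx.dailyMinutesSoFar >= 60) return { kind: "timeLimitSuggestion", minutes: 60 };
  if (ctx.minutesInSession >= 20) return { kind: "overlay" };
  return null;
}

// Example: 25 minutes into a session picks the overlay intervention.
const choice = pickIntervention({ appJustOpened: false, minutesInSession: 25, dailyMinutesSoFar: 30 });
console.log(choice);
```

A real AttentionKit would of course let each humane-tech app supply its own rules; the sketch only shows where developer logic could plug in.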
We put together a whole library of humane tech products, concepts, and ideas — but here are three ideas we want to highlight:
Give Researchers Access to Screen Time Data
We need a better scientific understanding of problematic smartphone use and of which interventions are effective. This is a no-brainer, and it’s a shame it hasn’t been done already.
ScreenTime API and AttentionKit
Apple could share the current Focus mode, the Screen Time schedule, and events (e.g., the user has been on their phone for 20 minutes) via the ScreenTime API.
Based on that, apps like YouTube could, for example, suggest focus music during a work focus, workout videos during a fitness focus, and calming videos when winding down.
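A minimal sketch of what that could look like, assuming a hypothetical `FocusMode` value exposed by such an API (both the type and the content mapping are made up for illustration):

```typescript
// Hypothetical: if a ScreenTime API exposed the user's current Focus mode,
// a video app could tailor its suggestions accordingly. Neither the
// FocusMode type nor the mapping below is a real API.

type FocusMode = "work" | "fitness" | "sleep" | "personal";

// Map the current Focus mode to a suggested content category.
function suggestedCategory(focus: FocusMode): string {
  switch (focus) {
    case "work":
      return "Focus music";
    case "fitness":
      return "Workout videos";
    case "sleep":
      return "Calming wind-down videos";
    case "personal":
      return "Latest videos from channels you follow";
  }
}

console.log(suggestedCategory("work")); // Focus music
```

The interesting design question is who controls the mapping; ideally the user could override it, in the same spirit as the Attention Settings themselves.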
The greater opportunity here is to empower developers to design experiences that are more respectful of people’s attention and intentions — and as a result, provide a more wholesome and delightful experience of Apple products.
This could enable a whole ecosystem of apps that specialize in helping people direct their attention and make better choices. With purpose-built tooling and ScreenTime data, apps like one sec and Potential could build even more elegant and empowering experiences.
Persuasion-Free Messenger Apps

Very large platforms like Facebook and TikTok represent the primary social environment for a whole generation of teenagers. Their current design is a serious threat to this generation’s mental health, yet opting out of the service entirely comes at great social cost and exclusion.
These very large platforms (over 50 million users) should be required to offer dedicated messenger apps that don’t include any persuasive design techniques. That way, social participation remains possible without having to sacrifice one’s mental health.
This concept doesn’t solve the root problem of misaligned incentives due to the advertising-based business model. But it levels the playing field in a really important way: It gives people control over their experience.
Attention Settings are a practical pathway for policymakers and ecosystem providers to give users the tools they need to make these products work as intended.
Attention Settings have teeth because they allow users to interact with these products on their own terms. They don’t need to accept the constant lure of unhealthy doom-scrolling and binge-watching. They get the opportunity to exercise their attention and agency freely — without having to lock themselves out of the public sphere.
A concept designed by Potential.