
When Politicians Play Web Designers

Privacy problems are age-old. The extravagant Sun King, Louis XIV of France, popularized the envelope, using it to protect his letters from the prying eyes of chambermaids and shopkeepers. In the colonial era, adhesive envelopes and patterned linings helped hide the contents of commonly intercepted mail. Along with increased regulation, these efforts created more friction against snooping and made privacy more tangible.

Stephanie Thien Hang Nguyen is a research scientist at MIT Media Lab focusing on data privacy, design, and tech policies that impact marginalized populations. She previously led privacy and user experience design projects with the National Institutes of Health and Johns Hopkins’ Precision Medicine team. Lea Kissner is the chief privacy officer of Humu. She was previously the global lead of privacy technology at Google, and she earned a PhD in computer science (with a focus on cryptography) at Carnegie Mellon University.

Centuries later, invisible, untouchable, and omnipresent information about us now spreads across databases, from internet browsers to doctors’ offices. Just as the envelope was a design solution intended to prevent people from reading each other’s mail, the creators of data systems have turned to design to solve privacy challenges.

Politicians, however, have turned to regulation. Many regulatory proposals have focused on suppressing dark patterns: design tricks that push users to do things they didn’t intend to, like subscribing to newsletters or paying for extra services. In April, Senators Mark Warner of Virginia and Deb Fischer of Nebraska introduced a bill to ban many of these features, such as LinkedIn’s “Add connection” button (which harvests email addresses and grows LinkedIn’s member base) and Venmo’s default public setting. In July, another senator, Josh Hawley of Missouri, introduced legislation to ban “addictive” and “deceptive” features like Instagram and Facebook’s infinite scroll and YouTube’s autoplay.

The term “dark patterns” does draw attention to organizations with malicious intent that hoover up user data, counter to their users’ needs. Most of these regulatory proposals, however, fail to recognize that dark patterns are only a subset of opinionated design, which guides users toward a goal in a particular way. Most good design is opinionated. Whether a particular design is good or bad depends entirely on whether one agrees with the goal, rather than on the techniques employed. For the sake of users, politicians, researchers, and technology companies alike must remember that design cannot be reduced to binary categorizations of dark or light; there is nuance.

Consider, for example, designs that require multiple difficult-to-find clicks. They can make it exceedingly difficult to cancel a subscription, but they can also better protect people from online threats like phishing and malware. Research shows that when malicious sites are easier to access, people access them and get hacked in droves. Spam filtering is also opinionated, with beneficial goals at both the individual and the societal level: filters protect individuals from scammy content and disincentivize society at large from producing scammy material. If, however, an app store were to filter out all competing apps, people would cry foul on antitrust grounds. Same techniques, very different results.

There is more nuance than just assessing whether a feature pushes a user toward a certain outcome. There is a constant trade-off between user empowerment and user ease. How can companies inform users of important services (signing up for health care or paying student loans) without unintentionally creating barriers by overwhelming them with information? And how might we do this while taking into account the unquantifiable, context-dependent messiness of culture and societal inequities?

By banning features without considering the context in which they are used, we may inadvertently limit the designer’s toolbox for creating privacy-protecting design. Hawley has proposed requiring “conspicuous pop-ups to a user not less than once every 30 minutes that the user spends on those platforms.” But this will only provoke warning fatigue and mindless clickthroughs. We know that people quickly learn to click through these pop-ups without registering their message, making them useless, annoying, and an excellent way to detract from communication about other matters like security. Rather than focus solely on whether a design pattern is good or evil, we should examine whether outcomes meet user privacy needs in context. We need to measure user success: Does the outcome a user gets match what they set out to achieve? Does the design help people live the way they want?

Even more ambitiously, we should measure people’s satisfaction and happiness with a system in both the short term and the long term. Do they have appropriate options? Can they use those options with the proper amount of friction? Sometimes friction is appropriate. Putting a warning in front of a dangerous action (like deleting your email account or transferring a large amount of money) can help users pay appropriate heed to the risks. Making it too easy to factory-reset a phone or delete an account can be worse than making it too hard, leading users to accidentally lose important data like baby pictures and love letters.
