Why Netflix Features Black Actors in Promos to Black Users

Last week, Netflix users raised concerns that the company was targeting African American users by race in the way it promoted films—highlighting black characters who sometimes had only minor roles in a movie.

The debate began after Stacia L. Brown, creator of the podcast Charm City, tweeted a screenshot of the promotion she was shown for Like Father, featuring two black characters, Leonard Ouzts and Blaire Brooks, who had “20 lines between them, tops,” rather than the movie’s famous white stars, Kristen Bell and Kelsey Grammer. Brown, who is black, posted a handful of other examples where Netflix highlighted black actors, presumably to entice her to watch, even though the films’ casts were predominantly white.

In response, Netflix issued a carefully worded statement emphasizing that the company does not track demographic data about its users. “Reports that we look at demographics when personalizing artwork are untrue,” the company said. “We don’t ask members for their race, gender, or ethnicity so we cannot use this information to personalize their individual Netflix experience. The only information we use is a member’s viewing history.” The company added that the personalized posters are the product of a machine-learning algorithm that it introduced last year.

In other words, Netflix cares about keeping you hooked, rather than your race. Yet the focus on explicit questions about race is something of a dodge, allowing the company to distance itself from an outcome that researchers say was easily predictable. “If you personalize based on viewing history, targeting by race/gender/ethnicity is a natural emergent effect,” Princeton professor Arvind Narayanan tweeted in response to Netflix’s statement. “But a narrowly worded denial allows companies to deflect concerns.”

The company’s effort to optimize every aspect of the service, down to its thumbnail promotional images, was on a collision course with racial and ethnic identity. A data-tracking operation as sophisticated as Netflix’s knows that some viewers gravitate toward content that reflects their own race, gender, or sexuality, so it could have anticipated that artwork chosen from viewing history would come to mirror those preferences. Users might appreciate suggested categories like “Movies with a strong female lead,” but hyper-targeting thumbnails was bound to reproduce those same demographic patterns in the artwork itself.
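To see how that happens mechanically, here is a minimal sketch, in Python, of history-based artwork selection. Everything in it is hypothetical: the variant names, the toy clustering, and the greedy click-rate rule are stand-ins, not Netflix’s undisclosed system. The point is only that no demographic field appears anywhere in the model.

```python
from collections import defaultdict

# Hypothetical artwork variants for a single title, each highlighting
# a different subset of the cast.
VARIANTS = ["bell_grammer", "ouzts_brooks"]

# Click statistics per (history cluster, variant), learned online.
# No race, gender, or ethnicity field exists anywhere below.
clicks = defaultdict(int)
impressions = defaultdict(int)

def history_cluster(viewing_history):
    """Bucket a user purely by what they watched (toy clustering)."""
    return frozenset(viewing_history)

def pick_artwork(viewing_history):
    """Serve the variant with the best observed click rate among users
    with this viewing history (greedy; a real system would also explore)."""
    cluster = history_cluster(viewing_history)
    def rate(variant):
        shown = impressions[(cluster, variant)]
        return clicks[(cluster, variant)] / shown if shown else 0.5
    return max(VARIANTS, key=rate)

def record(viewing_history, variant, clicked):
    """Update the click statistics after each impression."""
    cluster = history_cluster(viewing_history)
    impressions[(cluster, variant)] += 1
    if clicked:
        clicks[(cluster, variant)] += 1
```

If viewers whose histories lean toward titles with black leads click more often on the ouzts_brooks variant, the rule keeps serving it to everyone with a similar history. Race is never an input, yet the output is segmented by race wherever viewing history correlates with it, which is the “natural emergent effect” Narayanan describes.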

The algorithm may have been testing seemingly innocuous variables, such as whether minor movie characters could entice viewers. But it applied the formula to a repository of content that reflects bias in Hollywood, where people of color are offered fewer and less prominent parts. Highlighting minor black characters in a predominantly white movie such as Like Father left Netflix users like Brown feeling manipulated.

Did Netflix anticipate this outcome? The company’s response to WIRED skirted the question: “We are constantly testing different imagery to better understand what is actually helpful to members in deciding what to watch. The goal of all testing is to learn from our members and continuously improve the experience we are delivering,” a company spokesperson said by email.

Why bother customizing down to the thumbnail? “We have been personalizing imagery on the service for many years,” the spokesperson added. “About a year ago, we began personalizing imagery by member as we saw it helped members cut down on browsing time and more quickly find stories they wanted to watch. In general, all of our service updates and feature[s] are designed around helping members more quickly find a title they would enjoy watching.”

The spokesperson would not elaborate on what aspects of our viewing habits are used for personalized imagery. “We don’t go into depth on this topic as much of it is proprietary,” the spokesperson wrote.

Whether Netflix’s profiling was intentional or not, Georgetown law professor Anupam Chander thinks the company owes users more transparency. “It’s so predictable that the algorithm is going to get it wrong,” he says. “Black people have so few actual speaking parts, trying to promote a movie to me as a person of color might pull out the side character who is killed in the first 10 minutes.”

Chander adds that Netflix is missing an opportunity to educate its users. “The worry here is manipulation, and the way to avoid being manipulated is to be an educated consumer. The companies need to educate us about how their products and their algorithms work.” Chander considers himself a savvy consumer, but until Tuesday, he didn’t know that the thumbnails Netflix serves him are just as personalized as its movie selection.

Selena Silva, a research assistant at the University of California, Davis, who co-authored a recent paper on racially biased outcomes, also sees room for more candor from Netflix. Algorithmic decision-making can have dangerous consequences for black and Hispanic people when it is used in areas like criminal justice and predictive policing. In those cases, too, the technologists behind the algorithms may not explicitly ask about race; there are plenty of proxies, such as high school or zip code, that are closely correlated with race.
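A toy example with fully synthetic data makes the proxy effect concrete. In the sketch below, the “model” sees only a zip code, never race; the race field is kept solely so the predictions can be audited afterward. The records, zip codes, and majority-vote rule are all fabricated for illustration.

```python
from collections import Counter, defaultdict

# Fully synthetic records: (zip_code, race, clicked). The model below
# never reads the race field; it exists only for the audit step.
DATA = [
    ("20001", "black", True),  ("20001", "black", True),
    ("20001", "white", True),  ("20001", "black", False),
    ("20815", "white", False), ("20815", "white", False),
    ("20815", "black", False), ("20815", "white", True),
]

# "Train": predict the majority outcome per zip code (no race feature).
votes = defaultdict(Counter)
for zip_code, _race, clicked in DATA:
    votes[zip_code][clicked] += 1
model = {z: counts.most_common(1)[0][0] for z, counts in votes.items()}

# Audit: group the model's predictions by the withheld race field.
by_race = defaultdict(Counter)
for zip_code, race, _clicked in DATA:
    by_race[race][model[zip_code]] += 1

print(model)                            # {'20001': True, '20815': False}
print({r: dict(c) for r, c in by_race.items()})
# {'black': {True: 3, False: 1}, 'white': {True: 1, False: 3}}
```

Because zip code and race are correlated in the synthetic data, the predictions split along racial lines even though race was withheld, which is why a denial worded as “we don’t ask members for their race” does not settle the question.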

In those arenas, there is no visibility, whereas “Netflix could easily explain everything that’s happening, if it’s making large populations uncomfortable,” Silva says. “When it’s something as trivial as artwork being shown to advertise a movie, in the grand scheme of things, it doesn’t need to be hidden.”

