YouTube Will Crack Down on Toxic Videos, But It Won’t Be Easy

YouTube is trying to reduce the spread of toxic videos on the platform by limiting how often they appear in users’ recommendations. The company announced the shift in a blog post on Friday, writing that it would begin cracking down on so-called “borderline content” that comes close to violating its community guidelines without quite crossing the line.

“We’ll begin reducing recommendations of borderline content and content that could misinform users in harmful ways—such as videos promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11,” the company wrote. These are just a few examples of the broad array of videos that might be targeted by the new policy. According to the post, the shift should affect less than 1 percent of all videos on the platform.

Social media companies have come under heavy criticism for their role in the spread of misinformation and extremism online, rewarding such content—and the engagement it gets—by pushing it to more users. In November, Facebook announced plans to reduce the visibility of sensational and provocative posts in News Feed, regardless of whether they explicitly violate the company’s policies. A YouTube spokesperson told WIRED the company has been working on its latest policy shift for about a year, saying it has nothing to do with the similar change at Facebook. The spokesperson stressed that Friday’s announcement is still in its earliest stages, and the company may not catch all of the borderline content immediately.

Over the past year, YouTube has spent substantial resources on trying to clean up its platform. It’s invested in news organizations and committed to promoting only “authoritative” news outlets on its homepage during breaking news events. It’s partnered with sites like Wikipedia to fact-check common conspiracy theories, and it’s even spent millions of dollars sponsoring video creators who promote social good.

The problem is, YouTube’s recommendation algorithm has been trained over the years to give users more of what it thinks they want. So if a user happens to watch a lot of far-right conspiracy theories, the algorithm is likely to lead them down a dark path to even more of them. Last year, Jonathan Albright, director of research at Columbia University’s Tow Center for Digital Journalism, documented how a search for “crisis actors” after the Parkland, Florida, shooting led him to a network of 9,000 conspiracy videos. A recent BuzzFeed story showed how even innocuous videos often lead to recommendations of increasingly extreme content.
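
To make that dynamic concrete, here is a deliberately simplified sketch of a watch-history-driven recommender, written in Python. It is not YouTube’s system; the tags, titles, and scoring function are all hypothetical. It only shows how ranking candidates by similarity to past viewing keeps serving more of whatever a user has already binged.

```python
# Toy illustration of a watch-history-driven recommender (NOT YouTube's
# actual system): each video is a bag of hypothetical topic tags, and
# candidates are ranked by cosine similarity to the user's watch history.
from collections import Counter
import math

def topic_vector(tags):
    """Represent a video as a bag-of-tags count vector (made-up features)."""
    return Counter(tags)

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(watch_history, candidates, top_n=2):
    # The user profile is just the sum of tag counts over everything watched.
    profile = Counter()
    for tags in watch_history:
        profile.update(tags)
    # Rank candidates by similarity to that profile: the more conspiracy
    # content in the history, the higher similar content scores.
    scored = [(cosine(profile, topic_vector(tags)), title)
              for title, tags in candidates]
    return [title for _, title in sorted(scored, reverse=True)[:top_n]]

history = [["conspiracy", "crisis-actors"], ["conspiracy", "flat-earth"]]
candidates = [
    ("Another flat-earth 'proof'", ["conspiracy", "flat-earth"]),
    ("Baking sourdough at home", ["cooking", "baking"]),
    ("9/11 'truther' compilation", ["conspiracy", "9-11"]),
]
print(recommend(history, candidates))  # the two conspiracy videos rank first
```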

With this shift, YouTube is hoping to throw people off that trail by removing problematic content from recommendations. But implementing such a policy is easier said than done. The YouTube spokesperson says it will require human video raters around the world to answer a series of questions about videos they watch to determine whether they qualify as borderline content. Their answers will be used to train YouTube’s algorithms to detect such content in the future. YouTube’s parent company, Google, uses similar processes to assess the relevance of search results.
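
The general pattern the spokesperson describes, labeled examples from human raters used to train a classifier that flags similar videos, can be sketched in a few lines of Python. The example below rests entirely on assumptions: it uses scikit-learn, made-up rater labels, and raw title text as the only feature, none of which reflects YouTube’s actual pipeline.

```python
# Minimal sketch of "rater labels train a borderline-content classifier."
# Everything here (features, labels, library choice) is an assumption for
# illustration; it is not YouTube's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical rater output: each video's title plus a yes/no borderline
# verdict distilled from the rater questionnaire.
rated_videos = [
    ("miracle cure doctors don't want you to know", 1),
    ("the earth is flat and NASA hides it", 1),
    ("how to repot a houseplant", 0),
    ("highlights from last night's game", 0),
]
texts, labels = zip(*rated_videos)

# Fit a simple text classifier on the rater labels. In practice the signals
# would be far richer than title text, as the spokesperson notes below.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new video: a high probability would demote it in recommendations
# rather than remove it from the site.
prob = model.predict_proba(["this one weird cure heals every illness"])[0][1]
print(f"borderline probability: {prob:.2f}")
```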

It’s unclear what signals the human raters and the machines will analyze to determine which videos constitute borderline content. The spokesperson, who asked not to be named, declined to share additional details, except to say that the system will look at more than just the language in a given video’s title and description.

For as much as these changes stand to improve platforms like Facebook and YouTube, instituting them will no doubt invite new waves of public criticism. People are already quick to claim that tech giants are corrupted by partisan bias and are practicing viewpoint censorship. And that’s in an environment where both YouTube and Facebook have published their community guidelines for all to see. They’ve drawn bright lines about what is and isn’t acceptable behavior on their platforms, and have still been accused of fickle enforcement. Now both companies are, in a way, blurring those lines, penalizing content that hasn’t crossed them.

YouTube will not take these videos off the site altogether, and they’ll still be available in search results. The shift also won’t stop, say, a September 11 truther from subscribing to a channel that only spreads conspiracies. “We think this change strikes a balance between maintaining a platform for free speech and living up to our responsibility to users,” the blog post read.

In other words, YouTube, like Facebook before it, is trying to appease both sides of the censorship debate. It’s guaranteeing people the right to post their videos—it’s just not guaranteeing them an audience.

