In a move months in the making, Facebook announced Wednesday that beginning next week, it will take down posts supporting both white nationalism and white separatism, including on Instagram. It’s an evolution for the social network, whose Community Standards previously prohibited only white supremacist content while allowing posts that advocated for ideologies like racial segregation.
The odyssey to this moment started last May, when Motherboard published excerpts of leaked internal training documents for Facebook moderators that outlined the platform’s stance on white nationalism, white separatism, and white supremacy. In short, Facebook banned white supremacist content, but allowed white separatist and white nationalist content because it “doesn’t seem to be always associated with racism (at least not explicitly.)” The company later argued it couldn’t institute a global rule forbidding white nationalism and separatism because it would inadvertently ensnare other, legitimate movements like “black separatist groups, and the Zionist movement, and the Basque movement.”
This prompted civil rights organizations, lawyers, and historians to push back on Facebook’s notion that there is a legitimate distinction between white nationalist and white supremacist ideologies. In a September letter to the company, the Lawyers’ Committee for Civil Rights Under Law and other groups wrote that Facebook didn’t “discuss nationalism and separatism as neutral, general concepts. Instead, the training materials focus explicitly on white nationalism and white separatism—specific movements focused on the continued supremacy (politically, socially, and/or economically) of white people over other racial and ethnic groups.” Later that same month, Facebook said it was reexamining its white nationalism policy.
And now the formal decision to alter that policy has been made, as Motherboard first reported. Starting next week, when US users try to search for or post this content, they will instead be directed to a nonprofit that works to help people leave hate groups.
The move comes two weeks after a terrorist attack on two mosques in Christchurch, New Zealand, that killed 50 people was livestreamed on Facebook. The man charged with the shooting is reportedly linked to a white nationalist group. In a blog post published Wednesday, Facebook said that after three months of talking with civil society and academics, it agrees that “white nationalism and separatism cannot be meaningfully separated from white supremacy and organized hate groups.” Starting next week, the company’s content moderators will remove posts that include explicit phrases like “I am a proud white nationalist.”
A Facebook spokesperson said the new policy won’t immediately extend to less explicit or overt white nationalist and separatist sentiments. Becca Lewis, an affiliate researcher at Data & Society and the author of a recent study about far-right content on YouTube, says that’s troubling. She notes that people and groups advocating for white nationalist ideas online don’t always use explicit language to describe their beliefs. It’s not easy for automated systems to detect things like hate speech, and Facebook has also left the line between outright white nationalism and implied white nationalism vague. “It’s always tricky to implement these [policies] in a meaningful way,” says Lewis. “I’m cautiously optimistic about the impact that it can have.”
Sarah T. Roberts, a professor at UCLA who studies content moderation, says the details of how Facebook implements its new policy will be key to whether it’s effective. She notes it’s crucial that Facebook’s content moderators “have the bandwidth and have the space” to make nuanced judgment calls about white nationalist and separatist content. An investigation from The Verge published last month found that Facebook moderators in the US may sometimes have less than 30 seconds to decide whether a post should remain on the platform.
Starting next week, Facebook users in the US who try to post or search for white nationalist or separatist content will instead be greeted with a pop-up directing them to the website of Life After Hate, a nonprofit founded in 2011 by former extremists that provides education and support to people looking to leave hate groups. (In 2016, Google took a similar tack: users who searched for content related to ISIS were shown YouTube videos debunking the terrorist group’s ideology.) “Online radicalization is a process, not an outcome,” Life After Hate said in a statement. “Our goal is to insert ourselves in that continuum, so that our voice is there for people to consider as they explore extremist ideologies online.”
It’s not clear how Life After Hate will handle an influx of people arriving from the largest social network in the world. The nonprofit lists just six staff members on its website. Shortly before President Obama left office, his administration awarded the Chicago-based nonprofit a $400,000 federal grant, but the Trump administration later rescinded it.
Facebook’s policy change comes as tech companies are facing increased pressure to curb the spread of white supremacist content on their platforms after they struggled to stop the Christchurch shooter’s livestreamed video from going viral. While online platforms have poured resources into stopping terrorist groups like ISIS and Al Qaeda from using their sites, white supremacist groups have historically been treated differently.
During a 2018 Senate hearing about terrorism and social media, for instance, policy executives from YouTube, Facebook, and Twitter boasted about their ability to take down posts from terrorist groups like ISIS with artificial intelligence, but made little mention of similar efforts to combat white supremacists. “We believe there’s a lot of content generated from white nationalist groups generally that would violate” tech platform guidelines, but “it takes a lot on the part of advocacy groups to see some action,” Madihha Ahussain, special counsel for anti-Muslim bigotry at the group Muslim Advocates, told WIRED earlier this month.
In a statement, Rashad Robinson, president of the civil rights group Color of Change, said Facebook’s move should encourage other platforms to “act urgently to stem the growth of white nationalist ideologies.” Twitter and YouTube did not immediately comment on the record about whether their platforms explicitly ban white nationalism or separatism. Twitter already prohibits accounts that affiliate with groups promoting violence against civilians, and YouTube similarly bans videos that incite violence. But Facebook appears to be the first major platform to take a stance against white nationalism and separatism specifically.
Updated 3-29-19, 5:20 PM EDT: This story was updated to correct the name of the organization Color of Change.