Earlier this month, I was on the phone with Ryan Fox, cofounder of New Knowledge, a cybersecurity firm that tracks Russian-linked influence operations online. The so-called Yellow Vest protests had spread across France, and we were talking about the role disinformation played in #giletsjaunes, the galvanizing French hashtag for the protests. Conversations like these are a regular part of my job and usually focus on the quantifiable aspects of social media manipulation campaigns—volume of posts, follower count, common keywords, signs of inauthenticity, that sort of thing. But something else crept into our discussion, an immeasurable notion so distracting and polarizing for most in the disinformation research community that I learned long ago to stop bringing it up: What is the impact of these disinformation campaigns?
While I didn’t ask this question of Fox, he addressed it as though I had: “We get this question a lot: Did they cause this? [Meaning, the gilets jaunes protests.] Did they make it worse? They’re pouring fuel on the fire, yes. They are successful at exacerbating the narrative. But I don’t know what the world would look like had they not done it.”
Oft asked and rarely satisfactorily answered, the question of impact is the disinformation research community’s white whale. You can measure reach, you can measure engagement, but there’s no simple data point to tell you how one coordinated influence campaign affected an event or someone’s outlook on a particular issue.
There has never been a more exciting or high-stakes time to study or report on social media manipulation, yet therein lies the issue. It’s difficult to balance the urge to report complicated and impressive analyses of large swaths of data from propaganda-pushing networks with the responsibility to hedge your findings behind the seemingly nullifying admission that there is no way to truly understand the actual effect of these actions. The difficulty is compounded by the fact that much of the discourse on the subject is plagued by inaccuracies and exaggerations, often the result of media efforts to simplify pages of nuanced research into something that fits in a headline. Coordinated influence campaigns are reduced to “bots” and “trolls,” despite the fact that those are rarely, if ever, accurate descriptions of what’s actually going on.
The internet has always been awash with misinformation and hate, but never has it felt so inescapable and overwhelming as it did this year. From Facebook’s role in fanning the flames of ethnic cleansing in Myanmar to the rise of QAnon to the so-called migrant caravan to the influence campaign conducted by the Kremlin’s Internet Research Agency (IRA), 2018 was a rough year to be online, regardless of the strength of your media literacy skills.
It has become increasingly difficult to parse the real from the fake, and even harder to determine the effect of it all. On December 17, cybersecurity firm New Knowledge released a report on the IRA’s campaign to sow division and influence American voters on Twitter, Facebook, and other platforms. It’s one of the most thorough analyses of the IRA’s misdeeds to take place outside of the companies themselves. At the behest of the Senate Intelligence Committee, New Knowledge reviewed more than 61,500 unique Facebook posts, 10.4 million tweets, 1,100 YouTube videos, and 116,000 Instagram posts, all published between 2015 and 2017. But even with that mountain of data, the researchers were unable to reach concrete conclusions about impact.
“It is impossible to gauge the full impact that the IRA’s influence operations had without further information from the platforms,” the authors wrote. New Knowledge said that Facebook, Twitter, and Google could provide an assessment of what users who were targeted by the IRA thought of the content they were exposed to.
It’s a significant ask, but the researchers say the platforms could do it by studying the activities of the victims of information warfare rather than the perpetrators, and asking: What were users saying in the comments of voter suppression attempts on Instagram? What conversations were happening between IRA members and users in DMs? Where did users go on the platform, and what did they do after being exposed to IRA content? But the platforms failed to turn any of this information over. This is particularly problematic, the researchers said, because “foreign manipulation of American elections on social platforms will continue to be an ongoing, chronic problem,” and by keeping people in the dark about the effectiveness of old tactics—which have almost certainly been improved upon in the years since—platforms leave users vulnerable to any future attempts.
This is far from the first time that platforms’ attempts at transparency have left researchers wanting. When Twitter released a trove of more than 9 million tweets posted by accounts associated with the IRA and Iranian propaganda efforts back in October, many members of the research community found the data dump lacking most of the information necessary to speak to present and future threats, much less to derive impact. Tweets, posts, and stories don’t exist in a vacuum, and they can’t be effectively analyzed in a vacuum. The researchers I’ve spoken with recently have been grappling with the ramifications of a dearth of data on impact for much of the past year. They have more tools to analyze the way we interact online than ever before, and more cooperation from the platforms themselves than they ever thought possible, yet they still lack some of the most crucial bits of information. More often than not, the information provided by companies like Twitter and Facebook in their high-profile data dumps is nothing new to any platform researcher worth their salt. Third-party users and academics can collect most of the public-facing information—like retweets, likes, follower count, friends, and total views—but what they can’t access are the internal metrics: the DMs, the fake likes purchased, the likelihood of engagement gaming, and so on.
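To make that asymmetry concrete, here is a minimal sketch of the kind of public-facing data an outside researcher can pull for a single tweet through Twitter’s standard API, using the Tweepy library. The credentials and tweet ID are placeholders for illustration, and nothing in the response resembles the internal signals the New Knowledge researchers asked the platforms for.

```python
import tweepy

# Placeholder credentials; a real analysis would use the researcher's own keys.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Fetch one public tweet by ID (hypothetical ID, for illustration only).
status = api.get_status(1234567890, tweet_mode="extended")

# Public-facing metrics: anyone with API access can see these.
print("retweets: ", status.retweet_count)
print("likes:    ", status.favorite_count)
print("followers:", status.user.followers_count)
print("friends:  ", status.user.friends_count)

# What never shows up here is what the report asks the platforms for:
# DMs, purchased engagement, impression counts, and what the people
# exposed to the tweet went on to do next.
```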
In the coming year, we—meaning not just journalists and researchers but everyday social media users—have got to do better. Or at least try to. We need to reckon with the fact that there are no easily available means to determine the efficacy of such actions online, and we must derive new ways of conveying their newsworthiness and consequence. If we can’t parse the impact of all of this through traditional means, those who are waging these information wars likely can’t either. What else are they gaining from it?
So long as we continue to hide behind vague language and half-measures, we lose out on the opportunity to demand the information and tools necessary to understand this nightmarish new world we live in. We shouldn’t continue to be placated by simple announcements that a particular company has wiped its platform clean of some genre of “bad actor,” but rather demand a comprehensive analysis of the effects of the disinformation it spread. That means researchers need access to live pages and posts, and analytics beyond what they can get themselves from tinkering with the API. For users, the simplest (albeit most depressing) way to suss out false information in a world where even the most innocuous of accounts could be playing the long con to take advantage of your hard-earned trust is to assume that everything is probably false until proven true. This is the internet, after all.