Facebook’s Reason for Banning Researchers Doesn’t Hold Up

The company says privacy concerns forced it to block access for a team of academics. Whose privacy, exactly?

When Facebook said Tuesday that it was suspending the accounts of a team of NYU researchers, it made it seem like the company’s hands were tied. The team had been crowdsourcing data on political ad targeting via a browser extension, something Facebook had repeatedly warned them was not allowed.

“For months, we’ve attempted to work with New York University to provide three of their researchers the precise access they’ve asked for in a privacy-protected way,” wrote Mike Clark, Facebook’s product management director, in a blog post. “We took these actions to stop unauthorized scraping and protect people’s privacy in line with our privacy program under the FTC Order.”

Clark was referring to the consent decree imposed by the Federal Trade Commission in 2019, along with a $5 billion fine for privacy violations. You can understand the company’s predicament. If researchers want one thing, but a powerful federal regulator requires something else, the regulator is going to win.

Except Facebook wasn’t in that predicament, because the consent decree doesn’t prohibit what the researchers have been doing. Perhaps the company acted not to stay in the government’s good graces but because it doesn’t want the public to learn one of its most closely guarded secrets: who gets shown which ads, and why.

The FTC’s punishment grew out of the Cambridge Analytica scandal. In that case, nominally academic researchers got access to Facebook user data, and data about their friends, directly from Facebook. That data infamously ended up in the hands of Cambridge Analytica, which used it to microtarget on behalf of Donald Trump’s 2016 campaign.

The NYU project, the Ad Observer, works very differently. Facebook doesn’t give it access to data. Rather, it’s a browser extension. When a user downloads the extension, they agree to send the ads they see, including the information in the “Why am I seeing this ad?” widget, to the researchers. The researchers then infer which political ads are being targeted at which groups of users—data that Facebook doesn’t publicize.
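The collection step the researchers describe can be sketched as a short, hypothetical function. Everything here — the field names, the `extractAdRecord` helper, the shape of the parsed post — is an illustrative assumption, not the Ad Observer’s actual code; the point is only that such an extension can capture the ad frame and targeting hints while deliberately dropping data about other users:

```javascript
// Hypothetical sketch of an Ad Observer-style collection step.
// Field names and structure are illustrative assumptions, not NYU's code.
// Given a parsed post, keep only the ad frame: the ad text, the advertiser,
// and the "Why am I seeing this ad?" targeting hints. Comments, reactions,
// and friend identities are intentionally never copied into the record.
function extractAdRecord(post) {
  if (!post.isSponsored) return null; // organic posts are never collected
  return {
    adText: post.adText,
    advertiser: post.advertiser,
    // hints from the "Why am I seeing this ad?" widget
    targetingHints: post.whyAmISeeingThis || [],
    // NOTE: post.comments and post.reactions are deliberately omitted,
    // so no data about other users leaves the browser
  };
}
```

Under these assumptions, a post carrying friends’ comments would yield a record containing only the ad itself, which is the distinction Edelson draws below.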

Does that arrangement violate the consent decree? Two sections of the order could conceivably apply. Section 2 requires Facebook to get a user’s consent before sharing their data with someone else. Since it is users, not Facebook, who agree to share their data with the Ad Observer, that section doesn’t apply.

When Facebook shares data with outsiders, it “has certain obligations to police that data-sharing relationship,” says Jonathan Mayer, a professor of computer science and public affairs at Princeton. “But there’s nothing in the order about if a user wants to go off and tell a third party what they saw on Facebook.”

Joe Osborne, a Facebook spokesperson, acknowledges that the consent decree didn’t force Facebook to suspend the researchers’ accounts. Rather, he says, Section 7 of the decree requires Facebook to implement a “comprehensive privacy program” that “protects the privacy, confidentiality, and integrity” of user data. It’s Facebook’s privacy program, not the consent decree itself, that prohibits what the Ad Observer team has been doing. Specifically, Osborne says, the researchers repeatedly violated a section of Facebook’s terms of service that provides, “You may not access or collect data from our Products using automated means (without our prior permission).” The blog post announcing the account bans mentions scraping 10 times.

Laura Edelson, a PhD candidate at NYU and cocreator of the Ad Observer, rejects the suggestion that the tool is an automated scraper at all.

“Scraping is when I write a program to automatically scroll through a website and have the computer drive how the browser works and what’s downloaded,” she says. “That’s just not how our extension works. Our extension rides along with the user, and we only collect data for ads that are shown to the user.”

Bennett Cyphers, a technologist at the Electronic Frontier Foundation, agrees. “There’s not really a good, consistent definition of scraping,” he says, but the term is an odd fit when users are choosing to document and share their personal experiences on a platform. “That just seems like it’s not something that Facebook is able to control. Unless they’re saying it’s against the terms of service for the user to be taking notes on their interactions with Facebook in any way.”

Ultimately, whether the extension is really “automated” is sort of beside the point, because Facebook could always change its own policy—or, under the existing policy, could simply give the researchers permission. So the more important question is whether the Ad Observer in fact violates anyone’s privacy. Osborne, the Facebook spokesperson, says that when the extension passes along an ad, it could be exposing information about other users who didn’t consent to sharing their data. If I have the extension installed, for instance, it could be sharing the identity of my friends who liked or commented on an ad.

Edelson agrees that this would be a privacy problem. But, she says, it’s simply not how the Ad Observer works. It only looks at the information inside the frame of the ad, not the comments or reactions below it.

Neither Facebook nor its users should be expected to take Edelson’s word for it, which is why the NYU team made all the code open source. Mozilla, the privacy-focused company behind the Firefox browser, reviewed the code twice before recommending it to its users. According to Marshall Erwin, Mozilla’s chief security officer, “it does not collect personal posts or information about your friends. And it does not compile a user profile on its servers.” Facebook’s claims about privacy problems, he wrote in a blog post, “simply do not hold water.”

So if the Ad Observer isn’t sharing information from other users, whose privacy is at stake? As Issie Lapowsky reported for Protocol in March, Facebook’s biggest concern may be the advertisers themselves. The company seems to believe that a person or business that pays to target users with ads on Facebook is entitled to a degree of secrecy about it. After all, Facebook could make this whole controversy go away by simply making the data on how ads are targeted public, which would eliminate the need for workarounds like the Ad Observer.

Osborne points out that Facebook invited Edelson and her team to participate in the Facebook Open Research and Transparency initiative, which lets researchers access some data about political ad targeting. But Edelson says that project only includes data from the three months before the November 2020 election, meaning it isn’t an ongoing solution, and omits the large number of ads seen by fewer than 100 people.

Perhaps the strangest wrinkle to the ongoing tussle between Facebook and the NYU researchers is that the company hasn’t actually shut down the Ad Observer project. By suspending the accounts of Edelson and her colleagues for repeat violations of its terms of service, Facebook has made it impossible for them to continue a different project—the Ad Observatory, not Observer—that helps journalists and academics analyze political ad data the platform shares directly. (“Don’t let engineers name things,” Edelson concedes.) But it does nothing to the Ad Observer project itself, because shutting down the researchers’ accounts doesn’t stop people from sharing information using the browser extension. To Edelson, that feels punitive.

“Their beef is with Ad Observer, but they’re cutting off Ad Observatory,” she says. “There are certainly things that Facebook could have done that would have either stopped or severely impeded Ad Observer. But they didn’t do any of those things. This is stopping our other work that isn’t connected to that.”

Transparency and privacy are two important goals that at times are in tension. Facebook would like you to believe this is one of those times. But the real tension may be between Facebook’s public commitment to transparency and some other, unstated values that it prefers to keep, well—private.

Updated, 8-4-21, 10:05pm ET: An earlier version of this article incorrectly said the Facebook Open Research and Transparency initiative only includes data on ad buys in excess of $100.
