
Congress Takes Aim at the Algorithms

It wasn’t long ago that congressional hearings about Section 230 got bogged down in dismal exchanges about individual content moderation decisions: Why did you leave this up? Why did you take that down? A new crop of bills suggests that lawmakers have gotten a bit more sophisticated.

At a hearing on Wednesday, the House Energy and Commerce Committee discussed several proposals to strip tech companies of legal immunity for algorithmically recommended content. Currently, Section 230 of the Communications Decency Act generally prevents online platforms from being sued over user-generated content. The new bills would, in various ways, revise Section 230 so it doesn’t apply when algorithms are involved.

Content moderation, on its own, is a sucker’s game. Thanks in part to the testimony of Frances Haugen, the Facebook whistleblower, even Congress understands that when it comes to massive social platforms like Facebook, Instagram, or YouTube, the root of many problems is the use of ranking algorithms designed to maximize engagement. A system optimized for engagement rather than quality is one that supercharges the reach of plagiarists, trolls, and misleading, hyper-partisan outrage bait.
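What does "engagement-based ranking" actually mean? In rough terms, a platform's models predict how likely you are to react to, comment on, or reshare each candidate post, and the feed is sorted by a weighted sum of those predictions. Here is a minimal, hypothetical sketch of that idea; the signal names and weights are illustrative, not any real platform's formula.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    # Model-predicted probabilities that this particular user will
    # react to, comment on, or reshare the post.
    p_react: float
    p_comment: float
    p_reshare: float

# Hypothetical weights: comments and reshares "count" more because they
# tend to generate further activity. Real platforms tune such weights constantly.
WEIGHTS = {"react": 1.0, "comment": 15.0, "reshare": 30.0}

def engagement_score(post: Post) -> float:
    """Score a post by predicted engagement, with no term for quality or accuracy."""
    return (
        WEIGHTS["react"] * post.p_react
        + WEIGHTS["comment"] * post.p_comment
        + WEIGHTS["reshare"] * post.p_reshare
    )

def rank_feed(candidates: list[Post]) -> list[Post]:
    """Order the feed so the posts most likely to provoke interaction come first."""
    return sorted(candidates, key=engagement_score, reverse=True)
```

Nothing in that objective asks whether a post is true, original, or civil; whatever is predicted to provoke the strongest reaction rises to the top.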

The goal of the new Section 230 bills is to give platforms a reason to change their business models. As Haugen put it in her Senate testimony in October, “If we reformed 230 to make Facebook responsible for the consequences of their intentional ranking decisions, I think they would get rid of engagement-based ranking.”

Why use Section 230 reform to get platforms to stop designing for engagement? In part, because it’s one of the few leverage points Congress has. Tech platforms that host user content love Section 230 and would hate to lose its protections. That makes it an appealing vehicle for trying to extract behavior changes from those companies. Nice immunity you got there—shame if anything happened to it.

“Liability is just a means to an end—the objective is to incentivize changes to the algorithm,” Congressman Tom Malinowski, a New Jersey Democrat who introduced one of the bills, told me. “The premise of the bill is that without the incentives created by liability, they’re not likely to make those changes on their own, but that they do know how to make things better and would do so if there’s sufficient pressure.”

There’s a certain conceptual elegance to trying to reform Section 230 in this way. The underlying logic of the law is that internet users should bear the responsibility for what they say and do online—not the platforms that host the content. But when the law was passed, in 1996, the world had not yet seen the rise of personalized recommendation systems tailored to keep users maximally engaged. To the extent that platforms are deciding what to promote, rather than acting as neutral conduits, it seems like a simple matter of fairness to say they should face legal responsibility for what they, or their automated systems, choose to show users.

In practice, however, attaching legal liability to algorithmic amplification is anything but elegant. For one thing, there are all sorts of tricky definitional, even philosophical, questions.

“I agree in principle that there should be liability, but I don’t think we’ve found the right set of terms to describe the processes we’re concerned about,” said Jonathan Stray, a visiting scholar at the Berkeley Center for Human-Compatible AI who studies recommendation algorithms. “What’s amplification, what’s enhancement, what’s personalization, what’s recommendation?”

New Jersey Democrat Frank Pallone’s Justice Against Malicious Algorithms Act, for example, would withdraw immunity when a platform “knew or should have known” that it was making a “personalized recommendation” to a user. But what counts as personalized? According to the bill, it’s using “information specific to an individual” to enhance the prominence of certain material over other material. That’s not a bad definition. But, on its face, it would seem to say that any platform that doesn’t show everyone the exact same thing would lose Section 230 protections. Even showing someone posts by people they follow arguably relies on information specific to that person.
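To see how expansive that definition could be, consider a bare-bones, hypothetical feed that does nothing but show posts from accounts a user follows, newest first. Even this sketch consumes "information specific to an individual," namely the follow list.

```python
def following_feed(posts: list[dict], follows: set[str]) -> list[dict]:
    """Show only posts from accounts this user follows, newest first.

    There is no engagement prediction and no opaque model here, yet the output
    still depends on data specific to one individual: their follow list. Read
    literally, even this could count as a "personalized recommendation."
    """
    mine = [p for p in posts if p["author"] in follows]
    return sorted(mine, key=lambda p: p["created_at"], reverse=True)
```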

Malinowski’s bill, the Protecting Americans from Dangerous Algorithms Act, would take away Section 230 immunity for claims invoking certain civil rights and terrorism-related statutes if a platform “used an algorithm, model, or other computational process to rank, order, promote, recommend, amplify, or similarly alter the delivery or display of information.” It contains exceptions, however, for algorithms that are “obvious, understandable, and transparent to a reasonable user,” and lists some examples that would fit the bill, including reverse chronological feeds and ranking by popularity or user reviews.

There’s a great deal of sense to that. One problem with engagement-based algorithms is their opacity: Users have little insight into how their personal data is being used to target them with content a platform predicts they’ll interact with. But Stray pointed out that distinguishing between good and bad algorithms isn’t so easy. Ranking by user reviews or up-voting/down-voting, for example, is crappy on its own. You wouldn’t want a post with a single up-vote or five-star review to shoot to the top of the list. A standard way to fix that, Stray explained, is to calculate the statistical margin of error for a given piece of content and rank it according to the bottom of that range. Is that technique—which took Stray several minutes to explain to me—obvious and transparent? What about something as basic as a spam filter?
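For the curious, here is a rough sketch of the kind of fix Stray described, using the widely cited Wilson score lower bound for an up-vote ratio. The exact formula any given platform uses is an assumption on my part; the point is how non-obvious even "simple" ranking gets.

```python
import math

def wilson_lower_bound(upvotes: int, total_votes: int, z: float = 1.96) -> float:
    """Lower bound of the 95% Wilson confidence interval for the up-vote ratio.

    Ranking by this bound, rather than by the raw average, keeps an item with a
    single up-vote or one five-star review from shooting to the top of the list.
    """
    if total_votes == 0:
        return 0.0
    phat = upvotes / total_votes
    denom = 1 + z * z / total_votes
    center = phat + z * z / (2 * total_votes)
    spread = z * math.sqrt(
        (phat * (1 - phat) + z * z / (4 * total_votes)) / total_votes
    )
    return (center - spread) / denom

# A lone up-vote ranks low; a well-evidenced 90% rating ranks high,
# even though its raw ratio (0.90) is lower than the lone vote's (1.00).
print(round(wilson_lower_bound(1, 1), 2))     # ~0.21
print(round(wilson_lower_bound(90, 100), 2))  # ~0.83
```

Whether something like that counts as "obvious, understandable, and transparent to a reasonable user" is exactly the line-drawing problem Stray is pointing to.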

“It’s not clear to me whether the intent of excluding systems that are ‘simple’ enough would in fact exclude any system that is actually practical,” Stray said. “My suspicion is, probably not.”

In other words, a bill that took away Section 230 immunity with respect to algorithmic recommendation might end up looking the same as a straight-up repeal, at least as far as social media platforms are concerned. Jeff Kosseff, the author of the definitive book about Section 230, The Twenty-Six Words That Created the Internet, pointed out that internet companies have many legal defenses to fall back on, including the First Amendment, even without the law’s protection. If the statute gets riddled with enough exceptions, and exceptions to exceptions, those companies might decide there are easier ways to defend themselves in court.

This points to a weird quirk of Section 230 debates: Both supporters and critics strenuously argue that changing the law wouldn’t be that big a deal. Taking away Section 230 doesn’t mean a company is automatically liable, after all; it just means it loses a form of immunity. Cases would still be hard to win against a platform just for hosting someone else’s speech, because there would be all kinds of questions of causality and responsibility. To reformers, this means no one should panic about a post-230 world. As Carrie Goldberg, a prominent critic of the law, put it in her testimony at Wednesday’s hearing, “Fears that tech companies will be overwhelmed with litigation are unfounded and, frankly, reveal the fearmonger’s unfamiliarity with how litigation works.” But the law’s champions flip that argument on its head: If plaintiffs will mostly lose even absent Section 230, then rolling it back will invite frivolous lawsuits that serve only to drain a company’s resources.

There’s a similar ambiguity at play with the proposals focused specifically on algorithms. As Mary Anne Franks, another Section 230 critic who testified at the hearing, put it in an email, the bills would be “both over- and under-inclusive.” On the one hand, they could end up as a de facto repeal of the law for all large platforms that host and recommend user-generated content, not just the likes of Facebook and Instagram. (The bills exempt small platforms.) On the other hand, they would have no effect on some of the worst scofflaws currently benefiting from Section 230 immunity. Sites like The Dirty or She’s a Homewrecker, which publish cruel, potentially defamatory user-submitted gossip posts about private citizens, can do plenty of damage without any personalized algorithms. So can a site that facilitates illegal gun sales.

So even if Congress’s goal is to incentivize companies to change their algorithms, targeting the algorithms directly might not be the best way to go about it. Franks proposes something both simpler and more sweeping: that Section 230 not apply to any company that “manifests deliberate indifference to unlawful material or conduct.” Her collaborator Danielle Citron has argued that companies should have to prove they took reasonable steps to prevent a certain type of harm before being granted immunity. If something like that became law, engagement-based algorithms wouldn’t go away—but the change could still be significant. The Facebook Papers revealed by Haugen, for example, show that Facebook very recently had little or no content-moderation infrastructure in regions like the Middle East and Africa, where hundreds of millions of its users live. Currently Section 230 largely protects US companies even in foreign markets. But imagine if someone who was defamed or targeted for harassment by an Instagram post in Afghanistan, where as of 2020 Facebook hadn’t even fully translated its forms for reporting hate speech, could sue under an “indifference” standard. The company would suddenly have a much stronger incentive to make sure its algorithms aren’t favoring material that could land it in court.

It’s good to see Congress begin to focus on the design questions at the root of many of social media’s problems. It’s also understandable that lawmakers haven’t yet arrived at an ideal fix. Building an effective algorithm is complicated. So is writing a good law.

