Congress Takes Aim at the Algorithms
It wasn’t long ago that congressional hearings about Section 230 got bogged down in dismal exchanges about individual content moderation decisions: Why did you leave this up? Why did you take that down? A new crop of bills suggests that lawmakers have gotten a bit more sophisticated.
At a hearing on Wednesday, the House Energy and Commerce Committee discussed several proposals to strip tech companies of legal immunity for algorithmically recommended content. Currently, Section 230 of the Communications Decency Act generally prevents online platforms from being sued over user-generated content. The new bills would, in various ways, revise Section 230 so that it doesn’t apply when algorithms are involved.
Content moderation, on its own, is a sucker’s game. Thanks in part to the testimony of Frances Haugen, the Facebook whistleblower, even Congress understands that when it comes to massive social platforms like Facebook, Instagram, or YouTube, the root of many problems is the use of ranking algorithms designed to maximize engagement. A system optimized for engagement rather than quality is one that supercharges the reach of plagiarists, trolls, and misleading, hyper-partisan outrage bait.
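To make the distinction concrete, here is a minimal, purely illustrative sketch of what “engagement-based ranking” means in practice: the feed is simply sorted by a predicted-engagement score, so provocative content outranks higher-quality but less clicky posts. The posts, scores, and field names below are invented for illustration and don’t describe any real platform’s system.

```python
# Hypothetical posts with made-up scores. "predicted_engagement" stands in
# for the kind of click/comment/share prediction an engagement-optimized
# system might use; "quality" stands in for some other measure of value.
posts = [
    {"title": "Careful policy analysis", "predicted_engagement": 0.12, "quality": 0.9},
    {"title": "Hyper-partisan outrage bait", "predicted_engagement": 0.85, "quality": 0.2},
    {"title": "Local news report", "predicted_engagement": 0.30, "quality": 0.7},
]

# Engagement-optimized feed: rank solely by predicted engagement.
engagement_feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

# The same sort keyed on the quality score instead, for contrast.
quality_feed = sorted(posts, key=lambda p: p["quality"], reverse=True)

print([p["title"] for p in engagement_feed])
print([p["title"] for p in quality_feed])
```

Under the engagement sort, the outrage-bait post rises to the top of the feed even though it scores lowest on quality, which is the dynamic the paragraph above describes.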
The goal of the new Section 230 bills is to give platforms a reason to change their business models. As Haugen put it in her Senate testimony in October, “If we reformed 230 to make Facebook responsible for the consequences of their intentional ranking decisions, I think they would get rid of engagement-based ranking.”
Why use Section 230 reform to get platforms to stop designing for engagement? In part, because it’s one of the few points of leverage Congress has. Tech platforms that host user content love Section 230 and would hate to lose its protections. That makes it an appealing vehicle for extracting behavior changes from those companies. Nice immunity you got there—shame if anything happened to it.
“Liability is just a means to an end—the objective is to incentivize changes to the algorithm,” Congressman Tom Malinowski, a New Jersey Democrat who introduced one of the bills, told me. “The premise of the bill is that without the incentives created by liability, they’re not likely to make those changes on their own, but that they do know how to make things better and would do so if there’s sufficient pressure.”
There’s a certain conceptual elegance to trying to reform Section 230 in this way. The underlying logic of the law is that internet users should bear the responsibility for what they say and do online—not the platforms that host the content. But when the law was passed, in 1996, the world had not yet seen the rise of personalized recommendation systems tailored to keep users maximally engaged. To the extent that platforms are deciding what to promote, rather than acting as neutral conduits, it seems like a simple matter of fairness to say they should face legal responsibility for what they, or their automated systems, choose to show users.
In practice, however, attaching legal liability to algorithmic amplification is anything but elegant. For one thing, there are all sorts of tricky definitional, even philosophical, questions.
via Wired https://ift.tt/2uc60ci
December 2, 2021 at 10:42AM