Twitter’s Former Head Of Trust & Safety Explains Why, For All His Billions, Elon Musk Can’t Magically Decide How Twitter Will Work


There are a variety of myths about how the world works that get people really screwed up when they make big bets on trying to “fix” things. I think Elon Musk has fallen prey to a few of them in how he’s trying to run Twitter. First, he falsely believes (as was the widespread myth among many, especially in right wing circles) that Twitter’s content moderation/trust & safety efforts were driven mainly by extreme “woke” employees who were seeking to silence opinions and viewpoints they disagreed with. As we’ve discussed repeatedly, that’s never been the case. Twitter took a far more free speech-supporting position than any other major site, and its trust & safety decisions were made based on what the company believed was actually best for the site, which meant trying to minimize hate and harassment, since both drive users and advertisers away.

A second big myth that Musk seems to have bought into is the idea that this is all a technical, rather than human, problem, and that he can just “hardcore” nerd harder his way through these challenges. But that’s not true either. Yes, Twitter had some pretty sophisticated technology, mainly around scaling a massive many-to-many messaging system. But the real “product” innovation at Twitter was the human element, and the community that built up around it. And that community often required the kind of moderation policies Twitter put in place; without them, the site would not have been nearly as useful or valuable to so many.

The key point in both of these things, though, is that the moderation policies were not driven by the ideological viewpoints of the company’s employees, but by what actually worked best for Twitter. It’s unclear whether Musk is realizing this as he speedruns the content moderation learning curve (such as announcing that his first big “innovation” in content moderation is “shadowbanning” — one of the main things his fans were sure he’d get rid of).

Either way, there’s a much larger point here. Elon bought Twitter because he felt that the current management was doing a terrible job with content moderation (among other things) and he thought it was obvious to him how to do it better. But he’s quickly learning that there are reasons that fences are put where they are, and you might not want to tear them down so quickly.

Anyway, for the first two weeks of his reign, Musk came to rely on Yoel Roth, a longtime Twitter employee who was running trust & safety. Roth knows the challenges of trust & safety better than just about anyone, and it seemed like a good thing that Musk appeared to trust him. However, Roth resigned, and has now published an op-ed in the NY Times that does an interesting job highlighting how you can’t actually just show up, run a website like Twitter however you want, and expect to stay in business. As Roth notes:

The truth is that even Elon Musk’s brand of radical transformation has unavoidable limits.

Roth then highlights three major outside forces that are well beyond Musk’s ability to control, even as he controls Twitter itself: advertisers, governments, and app store operators (i.e., Google and Apple). Each one has some interesting elements to them. We may not like the fact that any of these outside forces have so much power over Twitter (and I could make arguments for why all three are problematic), but the actual reality is that they do.

Advertisers (at least for now) remain critical to the business:

Advertisers have played the most direct role thus far in moderating Mr. Musk’s free speech ambitions. As long as 90 percent of the company’s revenue comes from ads (as was the case when Mr. Musk bought the company), Twitter has little choice but to operate in a way that won’t imperil the revenue streams that keep the lights on. This has already proved to be challenging.

Almost immediately upon the acquisition’s close, a wave of racist and antisemitic trolling emerged on Twitter. Wary marketers, including those at General Mills, Audi and Pfizer, slowed down or paused ad spending on the platform, kicking off a crisis within the company to protect precious ad revenue.

While Musk has whined about “activists” causing the advertisers to leave, and even implied in tweets that he might file lawsuits against the activists or “name and shame” the advertisers, this is simply the free market (and free speech!) at work. Businesses want to avoid brand risk, and especially when the returns from advertising on Twitter are low, why bother?

The reality of most social media is that this business (not social) reality has driven much of the decision making behind trust & safety teams and how they operate. And, to be clear, advertisers rarely pressure trust & safety teams directly to moderate content. In hours upon hours of interviews I’ve done with trust & safety professionals, there are very, very, very few examples of any direct pressure from advertisers to make changes. But there’s a natural (and common sense) understanding that for advertisers to want to put their money into your site, they have to feel that the site is trustworthy and that they won’t get burned.

Next up: regulators. We’ve been writing a bit about this of late as well. And Roth highlights the realities at play:

But even if Mr. Musk is able to free Twitter from the influence of powerful advertisers, his path to unfettered speech is still not clear. Twitter remains bound by the laws and regulations of the countries in which it operates. Amid the spike in racial slurs on Twitter in the days after the acquisition, the European Union’s chief platform regulator took to the site to remind Mr. Musk that, in Europe, an unmoderated free-for-all won’t fly. In the United States, members of Congress and the Federal Trade Commission have raised concerns about the company’s recent actions. And outside of the United States and the European Union, the situation becomes even more complex: Mr. Musk’s principle of keying Twitter’s policies on local laws could push the company to censor speech it has been loath to restrict in the past, including political dissent.

Regulators have significant tools at their disposal to enforce their will on Twitter and on Mr. Musk. Penalties for noncompliance with Europe’s Digital Services Act could total as much as 6 percent of the company’s annual revenue. In the United States, the F.T.C. has shown an increasing willingness to exact significant fines for noncompliance with their orders (like a blockbuster $5 billion fine imposed on Facebook in 2019). In other key markets for Twitter, such as India, in-country staff work with the looming threat of personal intimidation and arrest if their employers fail to comply with local directives. Even a Musk-led Twitter will struggle to shrug off these constraints.

While I think I’ve been pretty clear that I’m not at all comfortable with much of this regulatory oversight, especially when it touches on editorial discretion and speech issues (and when it appears to be about retaliation), it is a reality now. And Musk can’t just wish it away. Especially over in the EU, where they’ve built this entirely ridiculous structure for regulating content moderation issues online.

Finally, perhaps just as problematic is the power of Apple and Google as the gatekeepers to get on phones:

There is one more source of power on the web — one that most people don’t think much about, but which may be the most significant check on unrestrained speech on the mainstream internet: the app stores operated by Google and Apple.

While Twitter has been publicly tight-lipped about how many people use the company’s mobile apps (rather than visiting Twitter.com on a browser), the company’s 2021 annual report didn’t mince words: “Our release of new products … is dependent upon and can be impacted by digital storefront operators” that decide the guidelines and enforce them, it reads in part. “Such review processes can be difficult to predict and certain decisions may harm our business.”

“May harm our business” is an understatement. Failure to adhere to Apple and Google’s guidelines would be catastrophic, risking Twitter’s expulsion from their app stores and making it more difficult for billions of potential users to access Twitter’s services. This gives Apple and Google enormous power to shape the decisions Twitter makes.

Roth notes that Apple and Google take this role seriously, even if in patently ridiculous ways:

In my time at Twitter, representatives of the app stores regularly raised concerns about content available on our platform. On one occasion, a member of an app review team contacted Twitter, saying with consternation that he had searched for “#boobs” in the Twitter app and was presented with … exactly what you’d expect. Another time, on the eve of a major feature release, a reviewer sent screenshots of several days-old tweets containing an English-language racial slur, asking Twitter representatives whether they should be permitted to appear on the service.

Just as an aside, um, who the fuck searches for “#boobs” and what is wrong with them?

Anyway, much of this makes me… uncomfortable. We had some discussion about this when Parler was yanked from the app stores. It’s more troubling for Apple/iOS than Google, because Android does allow sideloading, even if it keeps making it more difficult. But, in the end, even if you’re not in the app store, you can access the services via the web (even on mobile). It would be nice if mobile app stores were more open, and there were more competition.

But, again, this is a current market reality.

All three of these outside forces could change over time. And Musk could take steps to help change them. But none will change quickly, and many of Musk’s actions over the last couple of weeks actually make it more difficult to avoid these issues, rather than less.

There are, of course, other outside forces at play as well, including a big one: users. If you don’t make your website welcoming, people will go elsewhere. No one signs on to a website looking to be harassed, abused, and yelled at.

Roth notes that a key reason he left Twitter was that Musk’s view of trust & safety did not seem to be driven by principles or carefully developed policies. Rather, it seemed focused on Musk’s whims of the day:

It’s this very lack of legitimacy that Mr. Musk, correctly, points to when he calls for greater free speech, and for the establishment of a “content moderation council” to guide the company’s policies — an idea Google and Apple would be right to borrow for the governance of their app stores. But even as he criticizes the capriciousness of platform policies, he perpetuates this same lack of legitimacy through his impulsive changes and tweet-length pronouncements about Twitter’s rules. In appointing himself “Chief Twit,” Mr. Musk has made clear that at the end of the day, he’ll be the one calling the shots.

It was for this reason that I ultimately chose to leave the company: A Twitter whose policies are defined by unilateral edict has little need for a trust and safety function dedicated to its principled development.

Of course, there’s quite a lot of irony here. One of the reasons Musk insisted he needed to take over was the false belief that the earlier trust & safety policies were driven by ideology, and that he needed to come in and set forth some basic principles to make it “fairer.” Yet the reality, as we see it, is that the old system was driven by thought-out policies with processes to enforce them. Not always policies you or I might agree with, and not always enforced all that well, in part because it’s impossible to do it well, but there were policies and there were processes.

And now Musk is basically doing it all by edict… or random polls.

Thus, we’re in this funny(ish) state whereby everything that Musk and his fans insisted was true was not… but now that Musk is in charge, he’s implementing things in exactly the way he wrongly believed they were implemented before, and railed against.

Oh, and on that note, Roth drops this little tidbit in the middle of the article:

In response, Mr. Musk empowered my team to move more aggressively to remove hate speech across the platform — censoring more content, not less.

Huh. Look at that. Meanwhile, people are still yelling at me when I point out that the previous regime was more supportive of free speech than anyone realizes.

via Techdirt https://ift.tt/USKvF0B

November 21, 2022 at 12:28PM
