
Musk's 'Twitter Files' offer insight into moderation

Twitter's new owner, Elon Musk, is feverishly promoting his “Twitter Files”: curated internal company communications, painstakingly tweeted out by sympathetic intermediaries. But Musk's evident conviction that he has unleashed some partisan kraken is misguided: far from revealing a conspiracy or systemic abuse, the files are a valuable glimpse behind the curtain of moderation at scale, hinting at the Sisyphean labors undertaken by every social media platform.

In Greek mythology, Sisyphus was the founder and king of Ephyra, later known as Corinth, receiving the throne from Medea. He was the archetype of the impious king and is remembered for his exemplary punishment: he was condemned to push a boulder up a mountain, only for it to roll back down before he reached the top, over and over again, the very image of a frustrating and absurd task.

For a decade, companies like Twitter, YouTube, and Facebook have performed an elaborate dance to keep the details of their moderation processes out of the hands of bad actors, regulators, and the press alike.

Revealing too much would open their processes to abuse by spammers and scammers (who do, in fact, take advantage of every leaked or published detail), while revealing too little leads to damaging reports and rumors as they lose control of the narrative. In the meantime, they must be ready to justify and document their methods or risk censure and fines from government bodies.

The result is that, although everyone knows a little about exactly how these companies inspect, filter, and organize the content posted on their platforms, it is just enough to be sure that what we are seeing is only the tip of the iceberg.

Sometimes there are exposés of methods we suspected: hourly contractors clicking through violent and sexual imagery, an abominable but seemingly necessary industry. Sometimes the companies overreach, as with repeated claims that AI is revolutionizing moderation, followed by reports that AI systems for this purpose are inscrutable and unreliable.

What almost never happens (companies typically don't do this unless they are forced to) is that the actual content moderation tools and processes at scale are exposed without a filter. And that is what Musk has done, perhaps to his own peril, but surely to the great interest of anyone who has ever wondered what moderators do, say, and click when making decisions that could affect millions.

Pay no attention to the honest, complex conversation

The email threads, Slack conversations, and screenshots posted over the past week provide insight into this important and little-understood process. What we see is some of the raw material, and it is not the partisan illuminati some expected, though it is clear from its highly selective presentation that this is what we are meant to perceive.

Beyond this, the people involved are by turns cautious and confident, practical and philosophical, outspoken and accommodating, showing that the choice to limit or ban is not made arbitrarily but according to an evolving consensus of opposing viewpoints.

In the pre-election decision to temporarily restrict the Hunter Biden laptop story, probably the most contentious moderation decision of recent years after the Trump ban, there is no partisanship or conspiracy hinted at by the document dump.

Robert Hunter Biden is an American lawyer and the son of US President Joe Biden. He served on the board of Burisma Holdings, a major Ukrainian natural gas producer, from 2014 to 2019, and his business dealings in Ukraine were the subject of an investigation published by the American newspaper New York Post. Then-President Donald Trump's alleged attempt to pressure the Ukrainian government into investigating Joe Biden and Hunter Biden by withholding US aid to Ukraine triggered an impeachment inquiry in September 2019.

Instead, we find serious and thoughtful people trying to reconcile conflicting and inadequate definitions and policies: What constitutes “hacked” material? How confident are we in this or that assessment? What is a proportionate response? How should we communicate it, to whom, and when? What are the consequences if we limit it, or if we don't? What precedents do we set or break?

The answers to these questions are not entirely obvious, and they are the sort of thing usually settled over months of research and discussion, or even in court (legal precedent affects legal language and repercussions). And they had to be settled fast, before the situation spun out of control one way or another. Dissent from within and without (from a US Representative, no less, ironically named in the thread alongside Jack Dorsey in violation of the same policy) was honestly considered and integrated.

“This is an emerging situation where the facts remain unclear,” said former head of Trust and Safety Yoel Roth. “We're erring on the side of including a warning and preventing this content from being amplified.”

Some question the decision. Some question the facts as they have been presented. Others say it is not supported by their reading of the policy. One says they need to make the ad hoc basis and scope of the action very clear, as it will obviously be scrutinized as partisan. Deputy General Counsel Jim Baker asks for more information, but says caution is warranted. There is no clear precedent; the facts are at this point absent or unverified; some of the material is clearly non-consensual nude images.

“I think Twitter itself should restrict what it recommends or puts in trending news, and its policy against QAnon groups is a good one,” concedes Representative Ro Khanna, while arguing that the action in question is a step too far. “It's a difficult balance.”

Neither the public nor the press have been privy to these conversations, and the truth is that we are as curious as anyone. It would be wrong to call the published materials a complete or even accurate representation of the whole process (they are blatantly, if ineffectively, cherry-picked to fit a narrative), but even so we are better informed than before.

Moderation tools

Even more directly revealing was the following thread, which contained screenshots of actual moderation tools used by Twitter employees. While the thread falsely attempts to equate the use of these tools with shadow banning, the screenshots don't show nefarious activity, nor do they need to in order to be interesting.

Image credits: Twitter

On the contrary, what is shown is compelling precisely because it is so mundane, so blandly systematic. These are the various techniques that every social media company has explained, time and again, that it uses, but where before we had them couched in the upbeat diplomatic language of public relations, here they are presented without comment: “Trend Blacklisting”, “High Profile”, “DO NOT ACT”, and the rest.
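
To make the labels concrete, here is a minimal, hypothetical sketch of how per-account visibility flags of this kind might be represented and consulted in one stage of a content pipeline. The flag names, the AccountFlags class, and filter_trend_candidates are illustrative assumptions only; nothing below is drawn from Twitter's actual code or schema.

```python
from dataclasses import dataclass

@dataclass
class AccountFlags:
    """Illustrative per-account visibility flags, loosely echoing the labels
    visible in the published screenshots. All names are hypothetical."""
    trend_blacklist: bool = False   # exclude the account's tweets from trends
    search_blacklist: bool = False  # exclude the account from search suggestions
    high_profile: bool = False      # route any enforcement through extra review
    do_not_act: bool = False        # no action without escalation/consultation

def filter_trend_candidates(candidates, flags_by_account):
    """Drop candidate tweets from flagged accounts before trends are computed.
    In a real system this would be one small stage in a much larger pipeline."""
    kept = []
    for tweet in candidates:
        flags = flags_by_account.get(tweet["account_id"], AccountFlags())
        if flags.trend_blacklist:
            continue  # flagged accounts simply don't feed the trends module
        kept.append(tweet)
    return kept

# Usage: a flagged account's tweets are excluded from trend computation,
# while remaining visible on the account's own timeline (not modeled here).
flags_by_account = {"acct_42": AccountFlags(trend_blacklist=True)}
candidates = [
    {"account_id": "acct_42", "text": "..."},
    {"account_id": "acct_7", "text": "..."},
]
print(len(filter_trend_candidates(candidates, flags_by_account)))  # -> 1
```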

For his part, Yoel Roth explains that actions and policies need to be better aligned, that more research is required, that plans are being made to improve:

“The hypothesis behind much of what we have implemented is that if exposure to, for example, misinformation directly causes harm, we should use remediations that reduce exposure, and limiting the spread/virality of content is a good way to do that … we are going to need to make a stronger case for including this in our repertoire of policy remediations, especially for other policy areas.”

Once again, the content belies the context in which it is presented: these are not the deliberations of a secret liberal cabal attacking its ideological enemies with a ban hammer. It's an enterprise-grade dashboard of the kind you might see for lead tracking, logistics, or accounts, discussed and iterated on by no-nonsense people working within practical constraints and aiming to satisfy multiple stakeholders.

As it should be: Twitter, like other social media platforms, has spent years working to make the moderation process efficient and consistent enough to function at scale. Not only so the platform isn't overrun by bots and spam, but also to comply with legal frameworks such as FTC orders and the GDPR. (Of which the “extensive and unfiltered access” given to outsiders to the pictured tool may well constitute a violation.)

A handful of employees making arbitrary decisions with no rubric or oversight is no way to moderate effectively or to meet such legal requirements; neither, as the resignations of several members of Twitter's Trust and Safety Council suggest, is automation. You need a large network of people cooperating and working according to a standardized system, with clear boundaries and escalation procedures. And that is certainly what the screenshots Musk has caused to be released appear to show.
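
As an illustration of what “clear boundaries and escalation procedures” can mean in practice, here is a small, hypothetical triage rule: low-confidence or high-profile cases are routed to human or policy review instead of being actioned automatically. The thresholds, tiers, and the triage function are invented for this example and are not drawn from the published files.

```python
from enum import Enum

class Route(Enum):
    AUTO_ACTION = "auto_action"              # clear-cut case, handled by automation
    AGENT_REVIEW = "agent_review"            # frontline human moderator
    POLICY_ESCALATION = "policy_escalation"  # senior policy/legal review

def triage(report_confidence: float, is_high_profile: bool,
           sets_new_precedent: bool) -> Route:
    """Hypothetical escalation rule: the less certain or more consequential
    a case is, the further up the chain it goes."""
    if sets_new_precedent or is_high_profile:
        return Route.POLICY_ESCALATION
    if report_confidence >= 0.95:
        return Route.AUTO_ACTION
    return Route.AGENT_REVIEW

# A borderline report on an ordinary account goes to a human moderator;
# anything novel, or involving a prominent account, goes to policy review.
print(triage(0.80, False, False))  # Route.AGENT_REVIEW
print(triage(0.99, True, False))   # Route.POLICY_ESCALATION
```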

What the documents don't show is any kind of systematic bias, which Musk's surrogates hint at but fail to fully corroborate. But whether or not it fits the narrative they want, what's posted is of interest to anyone who thinks these companies should be more forthcoming about their policies. That's a win for transparency, even if Musk's opaque approach achieves it more or less by accident.
