After reviewing these sample deletions, Meta concluded that content moderators had misapplied company policies in 51 cases. In other words, 40 percent of the moderators' deletions were wrong.
If these samples are broadly representative of user content on Meta's platforms, the errors the auditors found are just the tip of the iceberg.
Just this week, Meta's Oversight Board announced that it had received a total of 1.1 million complaints about the company's handling of user content on Facebook and Instagram.
Users' deep dissatisfaction over deleted posts, combined with the moderators' misjudgments, seems to support an argument the US billionaire Elon Musk has made around his recent bid for Twitter: that moderation of speech on platforms should be kept to a minimum.
Musk has publicly stated that one of his reasons for buying Twitter is to remove barriers to online speech (as long as that speech is legal). Recently, however, he has softened his stance, saying things may not be as simple as he thought.
Last month, Musk said he would delete "world-destructive" posts on Twitter, while also using tactics such as limiting the reach of certain tweets and temporarily suspending certain users' accounts.
Last week, Musk told Twitter employees that he would crack down on behavior on the platform that harasses other users.
There are indications that Musk will face the same challenges as Meta when it comes to content moderation.
For Facebook's owner, cyberbullying and harassment are the two areas users complain about most, accounting for one-third of all complaints the Oversight Board has received. Other sources of dissatisfaction include Meta's handling of online hate speech and incitement to violence.
If Musk wants to moderate Twitter's content without drawing mass complaints from users, he may have to take even stricter measures than Meta has.
Meta lets an independent external board review its moderators' deletion decisions after the fact, effectively handing over an important aspect of the platform's user experience.
Meta's approach, of course, has its pros and cons. It keeps the company at arm's length from some disputes over deleted posts; by letting an independent external body handle those disputes, Meta has in effect "outsourced justice and conscience."
This outsourcing has also shown the outside world how difficult it is to resolve disputes over deleted posts: enforcing a company's rigid rules on what users may say is a genuine challenge.
In its first report, the Oversight Board said that in its first 15 months it handled a total of just 20 disputes, and in 14 of them it overturned Meta moderators' decisions to delete posts. That caseload is a drop in the bucket compared with the flood of complaints the board has received.
For Meta, disclosing the details of how certain deletion disputes were handled also has its benefits: it can silence some of the critics who have accused the company of "censoring speech" on social media. After all, content on online platforms is rarely black and white; there are plenty of gray areas.
The independent review board has also caused Meta some discomfort. It has asked Meta to provide more information about its deletion process and to be more transparent with users about deletion decisions. The board also wants a say in content moderation policy for the metaverse Meta is building.
The board's operation lets Meta focus on its core work, while allowing the company to show the outside world that it is responding to external pressure and criticism. Such a board may also quiet another set of voices: those calling for government agencies to regulate user content on online platforms.
A 40 percent error rate, even in a small sample, shows that content review is far from accurate. Human speech is inherently ambiguous, and human reviewers make judgment errors of their own, all of which makes speech moderation policies difficult to enforce.
If Musk succeeds in acquiring Twitter, he may need to learn some lessons from Meta on content moderation and deletion. Then again, Musk has never been afraid of standing at the center of an online controversy; when it comes to policing the speech of users worldwide, the world's richest man may well be looking forward to exerting his personal influence under the media spotlight.