Twitter faces a tough task in moderating voice tweets

As Twitter rolls out a limited test that lets users record audio and attach it to an original tweet, concerns are being raised about how the company will moderate such tweets: tackling hateful, abusive, or racist audio messages requires more effort than using AI to curb disinformation in ordinary text tweets.

One mitigating factor is that audio can only be added to original tweets; users cannot include it in replies or retweets with a comment.

This makes it somewhat easier to identify a person who posts an abusive audio tweet, so that moderators can swing into action to flag or block the tweet or the account.

However, unlike Facebook, which currently has over 15,000 third-party content moderators policing its main app as well as Instagram, Twitter has a small team of human moderators.

In the case of an audio tweet, a moderator has to listen to it to decide whether the voice tweet contains inflammatory or abusive content that needs to be flagged.

Alternatively, AI models could be put to work screening audio tweets, but then how are they supposed to scan voice tweets in various languages?
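One way such a system is commonly structured is to transcribe the audio first and then run a per-language text classifier on the transcript. The sketch below is purely illustrative and is not Twitter's actual pipeline: the transcriber and classifier are placeholder stubs, and the function names and word lists are hypothetical.

```python
# Hypothetical audio-moderation sketch: transcribe, then classify per language.
# transcribe() and is_abusive() are placeholder stubs, not real Twitter APIs.

BANNED_TERMS = {"en": {"slur"}, "hi": {"gaali"}}  # toy per-language word lists

def transcribe(audio_bytes: bytes, language: str) -> str:
    """Stub for speech-to-text; a real system needs a separate ASR model per language."""
    return audio_bytes.decode("utf-8")  # stand-in: pretend the audio is already text

def is_abusive(text: str, language: str) -> bool:
    """Stub toxicity check: flags any banned term known for that language."""
    words = set(text.lower().split())
    return bool(words & BANNED_TERMS.get(language, set()))

def moderate_voice_tweet(audio_bytes: bytes, language: str) -> str:
    """Route a voice tweet: flag for human review if the transcript looks abusive."""
    transcript = transcribe(audio_bytes, language)
    return "flag_for_human_review" if is_abusive(transcript, language) else "allow"
```

The sketch also makes the scaling problem in the question above concrete: every supported language needs its own transcription model and its own abuse lexicon or classifier, and a language missing from the table is simply waved through.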

Even Facebook's moderators make blunders. Tasked with reviewing about three million posts a day, Facebook moderators make about 300,000 mistakes every 24 hours in deciding what should stay online and what should be taken down, according to a new report from New York University's Stern Center for Business and Human Rights.

The number of blunders was derived from a statement made by Facebook CEO Mark Zuckerberg in a November 2018 white paper, in which he admitted that moderators "make the wrong call in more than one out of every 10 cases."
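The arithmetic behind that figure is straightforward: roughly three million reviewed posts a day multiplied by a one-in-ten error rate gives about 300,000 wrong calls:

```python
# Reproducing the NYU Stern report's estimate from Zuckerberg's stated error rate.
posts_reviewed_per_day = 3_000_000
error_rate = 1 / 10  # "wrong call in more than one out of every 10 cases"

mistakes_per_day = int(posts_reviewed_per_day * error_rate)
print(mistakes_per_day)  # 300000
```

Since Zuckerberg said "more than" one in ten, 300,000 is a floor, not a ceiling.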

According to a report in Vice, at a time when online platforms are struggling to remove misinformation and fake content, audio tweets may be “a new mechanism to harass people”.

“As we’ve previously reported, Twitter has far fewer human moderators than other social media giants, so adding such a labor-intensive type of content to moderate seems like it could go poorly,” said the report.

In the case of Facebook, the research found that to clean up the platform effectively, Facebook needs to end the outsourcing of content moderation, double the number of people moderating content daily, and significantly expand fact-checking to debunk misinformation.

Most of these workers are employed by third-party vendors, said the report, adding that the frequently chaotic outsourced environments in which content moderators work impinge on their decision making.

The onus is now on Twitter to sort these things out while voice tweets are still in the testing phase, and to strike a good mix of AI and human moderation to control what people say via voice tweets, before users flood the micro-blogging platform with complaints.
