In a bid to assess racial and gender bias in its artificial intelligence and machine learning systems, Twitter is launching a new initiative called Responsible Machine Learning.
Describing the effort as a long journey still in its early days, Twitter said the initiative will assess any “unintentional harms” caused by its algorithms.
“When Twitter uses ML, it can impact hundreds of millions of Tweets per day and sometimes, the way a system was designed to help could start to behave differently than was intended,” said Jutta Williams and Rumman Chowdhury from Twitter.
“These subtle shifts can then start to impact the people using Twitter and we want to make sure we’re studying those changes and using them to build a better product,” they said in a statement late on Thursday.
Twitter’s ‘Responsible ML’ working group is interdisciplinary and is made up of people from across the company, including technical, research, trust and safety, and product teams.
“Leading this work is our ML Ethics, Transparency and Accountability (META) team: a dedicated group of engineers, researchers, and data scientists collaborating across the company to assess downstream or current unintentional harms in the algorithms we use and to help Twitter prioritize which issues to tackle first,” the company elaborated.
Twitter said it will research and understand the impact of ML decisions, and conduct in-depth analyses and studies to assess potential harms in the algorithms it uses.
Initial tasks include a gender and racial bias analysis of its image-cropping (saliency) algorithm, a fairness assessment of its Home timeline recommendations across racial subgroups, and an analysis of content recommendations for different political ideologies across seven countries.
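Twitter has not published the methodology behind these assessments. As a minimal, hypothetical sketch, a subgroup fairness check often starts by comparing an algorithm’s selection rates across demographic groups; the data and function name below are illustrative, not Twitter’s actual code:

```python
import numpy as np

def selection_rates(decisions, groups):
    """Share of positive outcomes (e.g., an image crop centered on a
    face, or a Tweet surfaced in a timeline) per demographic subgroup."""
    return {g: decisions[groups == g].mean() for g in np.unique(groups)}

# Hypothetical data: 1 = the algorithm selected the item, 0 = it did not.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
groups = np.array(["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"])

rates = selection_rates(decisions, groups)
disparity = max(rates.values()) - min(rates.values())
print(rates)      # per-group selection rates
print(disparity)  # demographic-parity gap; a large gap flags potential bias
```

A gap in selection rates between groups is one of the simplest signals that a system may be treating subgroups differently, though a full audit of the kind Twitter describes would examine many more metrics.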
“The most impactful applications of responsible ML will come from how we apply our learnings to build a better Twitter,” the company said.
This may result in product changes, such as removing an algorithm or giving people more control over the images they Tweet.
Twitter said it is also building explainable ML solutions so people can better understand its algorithms, what informs them, and how they impact what they see on the platform.
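Twitter has not said which techniques these explainability solutions will use. As a generic, hedged illustration, permutation importance is one widely used way to surface what informs a model: shuffle one input signal at a time and measure how much the model’s performance drops. The model and data here are placeholders, not Twitter’s systems:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Placeholder model and data standing in for a recommendation system.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature 10 times; the mean accuracy drop is its importance.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: {score:.3f}")  # higher = more influence on the output
```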