Twitter scraps image-cropping algorithm after finding it is biased
Twitter’s image-cropping algorithm is biased against black people and men, according to new research released by the social media giant.
Calling it an example of ‘algorithmic bias’, the social media giant added that ‘how to crop an image is a decision best made by people’.
The study, conducted by three of Twitter’s machine learning researchers, followed criticism of the feature last year, when image previews appeared to exclude black faces.
It found an 8% difference from equality in favour of women, and a 4% bias toward white individuals.
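The ‘difference from equality’ reported here is a standard fairness measure: in paired comparisons, a perfectly unbiased crop would favour each group 50% of the time, and the metric reports how far the observed rate strays from that. The sketch below is a minimal illustration of that idea; the function name and the sample numbers are assumptions for demonstration, not Twitter’s code or the study’s raw data.

```python
# Illustrative sketch of a parity-difference metric for paired crop
# comparisons. All names and numbers here are hypothetical.

def parity_difference(favoured_counts: dict) -> float:
    """Return how far the observed favouring rate is from the 50%
    expected under demographic parity, for a two-group comparison.

    favoured_counts maps each of two group labels to the number of
    paired images in which that group's face was kept by the crop.
    """
    groups = list(favoured_counts)
    assert len(groups) == 2, "parity is defined here for paired groups"
    total = sum(favoured_counts.values())
    rate = favoured_counts[groups[0]] / total
    return rate - 0.5  # positive means the first group was favoured

# Hypothetical example: out of 1,000 woman-vs-man image pairs,
# the crop kept the woman's face 580 times.
print(round(parity_difference({"women": 580, "men": 420}), 2))  # 0.08
```

On these made-up counts the metric comes out at 0.08, i.e. an 8% departure from equality in favour of the first group, matching the scale of the figures the study reports.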
While the paper said there were several reasons for the discrepancy, like image backgrounds and eye colour, the authors admitted there was no excusing the error.
The researchers wrote: ‘Machine learning based cropping is fundamentally flawed because it removes user agency and restricts user’s expression of their own identity and values, instead imposing a normative gaze about which part of the image is considered the most interesting’.
Twitter has recently started showing full photos with no cropping in an effort to reduce bias on the platform.
‘We considered the trade-offs between the speed and consistency of automated cropping with the potential risks we saw in this research,’ Rumman Chowdhury, director of software engineering at Twitter, said on Wednesday.
‘One of our conclusions is that not everything on Twitter is a good candidate for an algorithm, and in this case, how to crop an image is a decision best made by people.’
As well as looking at skin colour and gender, the study examined whether crops favoured women’s bodies over their heads, replicating the ‘male gaze’, but found no clear evidence of this.
The paper argued that the findings are another example of algorithmic bias in facial recognition and text analysis.
There is a long history of demographic bias in artificial intelligence systems, for a variety of reasons: a largely white, male technology workforce designing the systems, and training datasets skewed towards white and male demographics, such as text collected from the social news site Reddit.
In 2018, Microsoft and MIT found that facial analysis systems misidentify people of colour more often than white people. In the same year, Amazon scrapped an AI recruiting tool that was biased against women.
Google, one of the world’s biggest designers of AI algorithms, came under fire last year for firing a researcher, Timnit Gebru, who had assessed algorithmic bias within the company.
The firing led to further resignations and public pressure. As part of Google’s annual developer conference, held this week, the company announced features that would aim to make its tech more inclusive, like a less gendered ‘assistant writing’ tool and camera tech that’s better at taking pictures of black faces.
Twitter said that it is still working on improving its image-cropping techniques, both for photos displayed on the website and for tweets containing multiple photos.