Using face-altering apps on photos is common nowadays, but FaceApp continues to miss the mark with its filters. The app just launched in January and is again under fire for producing tone-deaf technology deemed racist by users.
FaceApp alters or perfects selfies using artificial intelligence. It became widely popular for allowing users to radically transform a face to make it smile, look younger or older or even change gender. However, its newest update on Wednesday included “ethnic filters”: “Black,” “Caucasian,” “Asian” and “Indian.”
Social media users criticized the filters, likening the Black and Asian filters to the racist stereotypes perpetuated by blackface and yellowface. The reaction on Twitter was swift:
Everyone loves FaceApp, the phone app that adds smiles and wrinkles to your friends’ faces!
We regret to inform you that FaceApp is racist pic.twitter.com/2tRSlcfWdc
Jennifer Unkle (@jbu3) August 9, 2017
Are you kidding me, FaceApp! Blackface/yellowface! Your company is run by idiots! pic.twitter.com/nuurAffGEG
Magnus Tonning Riise (@MagnusRiise) August 9, 2017
TechCrunch writer Lucas Matney demonstrated the filters using photos of President Donald Trump, Vice President Mike Pence and former President Barack Obama.
Wow… FaceApp really setting the bar for racist AR with its awful new update that includes Black, Indian and Asian “race filters” pic.twitter.com/Lo5kmLvoI9
Lucas Matney (@lucasmtny) August 9, 2017
The app’s creator is Yaroslav Goncharov, CEO of the Russian app development company Wireless Lab. Goncharov, a former Microsoft and Yandex engineer, said in a statement Wednesday that “the new controversial filters will be removed in the next few hours.”
He added, “The ethnicity change filters have been designed to be equal in all aspects. They don’t have any positive or negative connotations associated with them. They are even represented by the same icon. In addition to that, the list of those filters is shuffled for every photo, so each user sees them in a different order.”
The filters are no longer available. But this isn’t the first time Goncharov has been in the hot seat.
In April, the app’s “hot” filter, said to make one look more attractive, automatically lightened people’s skin. The filter came under fire for perpetuating the age-old association of pale skin with beauty.
Goncharov apologized, saying it was an “unfortunate side-effect of the underlying neural network caused by the training set bias.”
Basically, a diverse data set was not used when training the filter to define “hotness,” so the resulting AI reproduced a racial bias. Goncharov said in April that the data set used to train the “hotness” filter was not a public data set but the company’s own.
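To see how a skewed training set produces a skewed filter, consider a deliberately oversimplified sketch. This is purely hypothetical and is not FaceApp’s actual model or data: it reduces each face to a single skin-tone number, “learns” a beauty prototype as the average of the training faces and then nudges new faces toward that prototype.

```python
# Hypothetical toy model: a "beauty prototype" learned as the
# average of training faces. Each face is reduced to one
# skin-tone value in [0, 1], where 0 = darkest and 1 = lightest.

def train_prototype(training_tones):
    """Learn the 'ideal' tone as the mean of the training set."""
    return sum(training_tones) / len(training_tones)

def apply_hot_filter(tone, prototype, strength=0.5):
    """Nudge a face's tone partway toward the learned prototype."""
    return tone + strength * (prototype - tone)

# A biased training set: nine light faces for every dark one.
biased_set = [0.9] * 9 + [0.2]
prototype = train_prototype(biased_set)  # skews light, about 0.83

# The filter now lightens darker skin, the reported side effect.
lightened = apply_hot_filter(0.2, prototype)

# A balanced training set keeps the prototype neutral,
# so the same face is altered far less.
balanced = apply_hot_filter(0.2, train_prototype([0.9] * 5 + [0.2] * 5))
print(lightened, balanced)
```

The point of the sketch is that nothing in the code mentions race at all: the bias enters entirely through the composition of the training data, which is why Goncharov could call it a “side-effect” of the training set rather than an explicit design choice.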
He noted Wednesday that “algorithmic bias is something that requires ongoing attention,” which means offensive outputs may continue.
“When dealing with such complex algorithms and neural networks, it would be naive and even irresponsible to declare such an issue completely fixed. It is an ongoing process,” he told TechCrunch.
“What I can say is that we are committed to keeping this issue in mind at all phases of our product cycle from creating datasets and training neural networks to quality assurance and processing customer feedback.
“Our style filters are designed to preserve ethnicity origin and our tests show that they do this quite well.”
It seems that Goncharov needs a system-wide update in understanding and implementing customer feedback. Ultimately, the app, which he said has had more than 45 million downloads since it launched, will sink or swim based on consumer opinion, not algorithms.