(Pictured: Twitter’s CEO Jack Dorsey)
Twitter has announced it is conducting in-house research to determine what to do about racists tweeting hateful and violence-provoking views. The company is reportedly trying to figure out whether to ban white nationalist speech from the platform or allow it so that other users can debate white nationalists and try to change their minds.
(Ever tried to change a racist’s mind on Twitter? Pretty impossible.)
But according to a recent Motherboard article, a Twitter staff member said the company already has an algorithm that could address the problem of racists tweeting out hate – but it doesn't want to use it because doing so would also affect Republicans who share many of the same views as white nationalists.
Twitter’s CEO Jack Dorsey was recently speaking at TED2019 when a flood of questions came in asking why it’s so hard for Twitter to get Nazis off the platform.
Dorsey’s nonsensical reply was that Twitter’s policies around violent and extremist groups were based “on conduct, not content.” So as long as Nazis politely state their views that all people of color should be kicked out of the U.S. or enslaved again, that’s fine.
“So, we’re actually looking for conduct. So conduct being using the service to periodically or episodically to harass someone, using hateful imagery that might be associated with the KKK or the American Nazi Party. Those are all things that we act on immediately,” Dorsey said at the event.
“The employee argued that, on a technical level, content from Republican politicians could get swept up by algorithms aggressively removing white supremacist material. Banning politicians wouldn’t be accepted by society as a trade-off for flagging all of the white supremacist propaganda, he argued,” the article reads.
But the research team is shrouded in secrecy. Twitter’s head of trust and safety, legal and public policy, Vijaya Gadde, also spoke to Motherboard for the same article.
Gadde said that external researchers have been hired to look into the “issue,” but she wouldn’t say which researchers were involved, noting that they had also signed non-disclosure agreements.
“We’re working with them specifically on white nationalism and white supremacy and radicalization online and understanding the drivers of those things; what role can a platform like Twitter play in either making that worse or making that better?” Gadde told Motherboard. “Is it the right approach to deplatform these individuals? Is the right approach to try and engage with these individuals? How should we be thinking about this? What actually works?”