On Wednesday, March 17, Facebook announced its most far-reaching efforts ever to remove racist, sexist and other hate-fueled content from its platform.
Brandy Zadrozny of NBC News reported that the social networking giant is “changing the way it recommends groups and will limit the reach of those that break its rules, a move that comes amid scrutiny of the platform’s propensity to push some of its users to extremism.”
Tom Alison, Facebook’s vice president of engineering, said in a post, “it’s important to us that people can discover and engage safely with Facebook groups so that they can connect with others around shared interests and life experiences. That’s why we’ve taken action to curb the spread of harmful content, like hate speech and misinformation, and made it harder for certain groups to operate or be discovered, whether they’re Public or Private.”
Alison continued, saying “We know we have a greater responsibility when we are amplifying or recommending content. As behaviors evolve on our platform, though, we recognize we need to do more.”
According to NBC’s Zadrozny, “Under the new rules, which will apply to its tens of millions of active groups, Facebook will show rule-breaking groups lower in the recommendations bar, making them less discoverable to other users. The more rules a group breaks, the more it will increase restrictions until it is removed completely.”
She added that “Facebook also plans to inform would-be members of rule-violating groups with a pop-up that warns the group ‘allowed posts that violate our Community Standards,’ and suggests a user review the group before joining. For existing group members, it will reduce the reach of rule-breaking groups by giving it lower priority in a user’s general news feed.”
The change will also require administrators and moderators of groups to oversee their communities more strictly, limiting the publication of rule-breaking posts. Members of problematic groups will also be tracked, making it harder for them to start new groups in an attempt to evade punishment and skirt the company’s new policies. Facebook said groups that repeatedly break content policies are now more likely to be removed from the platform entirely.
“The move is the most recent and far-reaching expansion of efforts by the social media giant to mitigate harmful content in groups on its platform,” Zadrozny reported. “In January, CEO Mark Zuckerberg announced Facebook would no longer recommend ‘civic’ and ‘political’ groups, following the Capitol riot. In 2020, it said it would stop recommending ‘health’ groups, and banned militia and QAnon groups entirely.”
But critics of Facebook say these renewed efforts may be too little, too late. The Wall Street Journal reported that, as far back as the 2016 election year, the company’s own internal research showed that the majority of people who joined extremist groups on the platform did so only because the groups had been “recommended” to them by Facebook’s algorithms.
Facebook is also widely seen as a “private echo chamber” where racism, misinformation, propaganda and conspiracy theories have been allowed to flourish unchecked for years. Those concerns came to a head with the spread of anti-mask and COVID-19 misinformation in 2020, conspiracy theories over the results of the 2020 presidential election, and even the January 2021 insurrection attempt at the Capitol.
Facebook said the changes will roll out globally over the coming months.