Under the latest rules, Twitter will require users to delete tweets with dehumanizing language targeting people based on their race, ethnicity or national origin. The company will monitor tweets reported by users, and use automation technology to “proactively” detect potential violations. Twitter has cited research that links dehumanizing language to offline violence.
If you’re surprised that these kinds of comments haven’t been barred until now, you’re likely not alone. Other major platforms, like Facebook, have had hate speech rules of this kind on the books for years, but Twitter has been much slower to make changes. The company first announced it would tackle dehumanizing language in September 2018, following a backlash over its seeming reluctance to ban Alex Jones. In the more than two years since, it has introduced just three updates to the rules (in 2019, for example, Twitter prohibited dehumanizing language targeting religious groups).
Twitter says this pace reflects not reluctance but a desire to get things right. The company notes that it works with outside groups and takes public feedback before implementing changes in order to “expand our understanding of cultural nuances and ensure we are able to enforce our rules consistently.”