Twitter Announces Ban on Language ‘Dehumanizing Religious Groups’

Twitter announced in a statement issued Tuesday that a new protected class is being recognized by the online forum: religious groups. The move comes as one religion has campaigned for decades for a global ban on speech critical of Islam, while other major religions have been content to use public persuasion rather than the jackboot of government or corporate totalitarianism to deal with critics.
Several examples posted by Twitter are already covered under existing rules, so the addition of a protected class makes the move appear aimed more at controlling thought and debate than at maintaining civil behavior online.
Twitter has in recent years forwarded warning notices from the government of Pakistan to critics of Islam and has also suspended accounts for criticism of Islam. This notice makes the policy official; however, the guidelines are ambiguous and will likely have a chilling effect, muting critics.
Twitter said this is the first step and that more groups will be added to the protected list later.
Announcement posted by Twitter: “Today we’re announcing an expansion to this policy which will address dehumanizing language towards religious groups. This is just the first step. Over time we’ll expand the policy to include more groups and update you along the way. Learn more:”

We create our rules to keep people safe on Twitter, and they continuously evolve to reflect the realities of the world we operate within. Our primary focus is on addressing the risks of offline harm, and research* shows that dehumanizing language increases that risk. As a result, after months of conversations and feedback from the public, external experts and our own teams, we’re expanding our rules against hateful conduct to include language that dehumanizes others on the basis of religion.


Starting today, we will require Tweets like these to be removed from Twitter when they’re reported to us:
[Example Tweets shown in Twitter’s announcement]
If reported, Tweets that break this rule sent before today will need to be deleted, but will not directly result in any account suspensions because they were Tweeted before the rule was set.
Why start with religious groups?
Last year, we asked for feedback to ensure we considered a wide range of perspectives and to hear directly from the different communities and cultures who use Twitter around the globe. In two weeks, we received more than 8,000 responses from people located in more than 30 countries.
Some of the most consistent feedback we received included:
Clearer language — Across languages, people believed the proposed change could be improved by providing more details, examples of violations, and explanations for when and how context is considered. We incorporated this feedback when refining this rule, and also made sure that we provided additional detail and clarity across all our rules.
Narrow down what’s considered — Respondents said that “identifiable groups” was too broad, and they should be allowed to engage with political groups, hate groups, and other non-marginalized groups with this type of language. Many people wanted to “call out hate groups in any way, any time, without fear.” In other instances, people wanted to be able to refer to fans, friends and followers in endearing terms, such as “kittens” and “monsters.”
Consistent enforcement — Many people raised concerns about our ability to enforce our rules fairly and consistently, so we developed a longer, more in-depth training process with our teams to make sure they were better informed when reviewing reports. For this update it was especially important to spend time reviewing examples of what could potentially go against this rule, due to the shift we outlined earlier.
Through this feedback, and our discussions with outside experts, we also confirmed that there are additional factors we need to better understand and be able to address before we expand this rule to address language directed at other protected groups, including:
How do we protect conversations people have within marginalized groups, including those using reclaimed terminology?
How do we ensure that our range of enforcement actions take context fully into account, reflect the severity of violations, and are necessary and proportionate?
How can – or should – we factor in considerations as to whether a given protected group has been historically marginalized and/or is currently being targeted into our evaluation of severity of harm?
We’ll continue to build Twitter for the global community it serves and ensure your voices help shape our rules, product, and how we work. As we look to expand the scope of this change, we’ll update you on what we learn and how we address it within our rules. We’ll also continue to provide regular updates on all of the other work we’re doing to make Twitter a safer place for everyone @TwitterSafety.
*Examples of research on the link between dehumanizing language and offline harm:
Dr. Susan Benesch, Dangerous Speech
Nick Haslam and Michelle Stratemeyer, Recent research on dehumanization
###
Hateful conduct: You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.
Hateful imagery and display names: You may not use hateful images or symbols in your profile image or profile header. You also may not use your username, display name, or profile bio to engage in abusive behavior, such as targeted harassment or expressing hate towards a person, group, or protected category.
Rationale
Twitter’s mission is to give everyone the power to create and share ideas and information, and to express their opinions and beliefs without barriers. Free expression is a human right – we believe that everyone has a voice, and the right to use it. Our role is to serve the public conversation, which requires representation of a diverse range of perspectives.
We recognise that if people experience abuse on Twitter, it can jeopardize their ability to express themselves. Research has shown that some groups of people are disproportionately targeted with abuse online. This includes: women; people of color; lesbian, gay, bisexual, transgender, queer, intersex, and asexual individuals; and marginalized and historically underrepresented communities. For those who identify with multiple underrepresented groups, abuse may be more common, more severe in nature, and have a higher impact on those targeted.
We are committed to combating abuse motivated by hatred, prejudice or intolerance, particularly abuse that seeks to silence the voices of those who have been historically marginalized. For this reason, we prohibit behavior that targets individuals with abuse based on protected category.
If you see something on Twitter that you believe violates our hateful conduct policy, please report it to us.
…###
Twitter suspended the account of TGP Associate Editor Cristina Laila last year for criticism of Islam.
Several months later, in December 2018, Twitter forwarded a threatening notice to Laila from Pakistan.
Michelle Malkin has also been sent a warning from Pakistan by Twitter.


For some reason, Tom Lehrer’s “National Brotherhood Week” comes to mind, perhaps because it seems every week is National Brotherhood Week for Twitter Safety.