Twitter is planning to expand its hateful conduct policy to prohibit dehumanizing language; the policy is currently in development, and the company is collecting public feedback before finalizing it.
Dehumanization means depriving a person or group of human qualities. It can be animalistic or mechanistic, such as comparing an individual or group to animals or viruses, or reducing them to their genitalia. Groups are defined as people classed together by shared attributes such as race, sexual orientation, gender, serious disease, religion, political beliefs, location, or social practices.
Our hateful conduct policy is expanding to address dehumanizing language and how it can lead to real-world harm. The Twitter Rules should be easier to understand so we’re trying something new and asking you to be part of the development process. Read more and submit feedback.
— Twitter Safety (@TwitterSafety) September 25, 2018
The development process will rely heavily on feedback from the platform's users, as Twitter wants to capture perspectives from around the globe: different individuals and groups may define dehumanization differently, and those definitions could either complicate or reinforce the hate speech policy. The platform wants to weigh all opinions and turn them into a policy that reflects its community.
The policy would forbid tweets containing violent threats, references to mass murder or violent events, content that incites fear, and racial, homophobic, or sexist slurs.
The consequences of a violation depend on its severity and the violator's previous record; they may range from being asked to remove the tweet to suspension of the account. The context of the tweet would heavily influence the decision, as some tweets may seem offensive in isolation but not within their conversation. Anyone can report a tweet; the target does not necessarily have to do it, but all parties, including the target, would be heard to make sure the context is clear. The number of reports would not determine whether a tweet is removed, but it would help prioritize review.
While this move is praiseworthy and reasonable given the rising hate disguised as trolling across all social media platforms, not just Twitter, the company also needs a stringent and reliable review process that removes only genuinely harmful content, not tweets that reference violence in order to condemn it. A thorough and smooth system would make this commendable move a reality.