Twitter has rolled out its new Safety Mode, which aims to make users feel safer and more comfortable on the platform. The mode can be toggled from the settings menu and screens your replies for harassment and abusive language. “We’ve rolled out features and settings that may help you to feel more comfortable and in control of your experience, and we want to do more to reduce the burden on people dealing with unwelcome interactions,” said Twitter Product Lead Jarrod Doherty in a blog post.
Once enabled, Safety Mode automatically and temporarily blocks accounts when it detects harassment such as insults, hurtful remarks, or repetitive and uninvited replies or mentions. The feature is currently in beta and is rolling out to small feedback groups on iOS, Android, and Twitter.com with English-language settings enabled. To try it out, open your “Privacy and safety” settings and check whether Safety Mode is available to turn on.
Doherty says the system takes your relationship with an account into account, such as whether you follow it and how often the two of you interact. This should keep it from accidentally autoblocking your friends, though we’ve yet to see how well it performs in practice. Doherty added that you’ll be able to review any accounts that get autoblocked and undo a block if it was applied by mistake.
The blog post mentions that the feature was developed in consultation with experts in online safety, mental health, and human rights, including independent members of Twitter’s Trust and Safety Council, such as the freedom-of-expression group Article 19. Doherty added that the move is in line with Twitter’s commitment to the World Wide Web Foundation’s framework for ending online gender-based violence.
Safety Mode, which Twitter announced earlier this year at its Analyst Day, joins the platform’s growing suite of safety features meant to make it less toxic. Other anti-abuse settings include removing your mention from tweets and controlling who can reply to your tweets, a setting you can change even after a tweet has been posted. The platform has also rolled out a prompt that asks users to think twice before posting something potentially harmful or offensive.