Twitter tests a warning message that tells users to rethink offensive replies
Twitter is experimenting with a new moderation tool that will warn users before they post replies that contain what the company says is “harmful” language.
Twitter describes it as a limited experiment, and it’s only going to show up for iOS users. In certain situations, a prompt will pop up giving “you the option to revise your reply before it’s published if it uses language that could be harmful,” reads a message from the official Twitter Support channel.
When things get heated, you may say things you don’t mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.
— Twitter Support (@TwitterSupport) May 5, 2020
The approach isn’t novel. Quite a few other social platforms have used it before, most prominently Instagram. The Facebook-owned app now warns users before they post a caption that “looks similar to others that have been reported.” Prior to that change, Instagram rolled out a similar warning system for comments last summer.
It’s not exactly clear how Twitter is defining harmful language, but the company does have hate speech policies and a broader Twitter Rules document that outlines its stances on everything from threats of violence and terrorism-related content to abuse and harassment. Twitter says it won’t remove something simply because it is offensive: “People are allowed to post content, including potentially inflammatory content, as long as they’re not violating the Twitter Rules,” the company says. But those rule sets give it room to carve out exceptions to its otherwise permissive speech policies.
That said, this new experiment seems less concerned with curbing the more extreme forms of content for which Twitter might normally remove, suspend, or ban users. Instead, it seems more designed to lightly encourage users to avoid unnecessary and inflammatory language that escalates feuds and might lead to suspensions. After all, you can simply ignore Twitter’s warning and post the reply anyway. But perhaps with a little nudge, Twitter thinks at least some users might reconsider.