Many of us have fired off a heated reply in the moment. Now, Twitter wants to appeal to the better nature of even its most ruthless trolls in an effort to tone down its social network.
Starting Thursday, the company will roll out a new prompt for users who are about to send a tweet that its algorithms believe could be “harmful or offensive.” Those trying to send such a message will be asked if they “want to review this before Tweeting,” with options to edit, delete, or send anyway.
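The flow described above can be sketched as a simple decision loop. Everything in this sketch is hypothetical: Twitter has not published its classifier, so `looks_offensive` is a toy word-list stand-in, and `choose` is an invented callback representing the prompt UI.

```python
def looks_offensive(text):
    """Stand-in for Twitter's unpublished classifier: here, a toy word list."""
    flagged_words = {"idiot", "moron"}
    return any(word in text.lower().split() for word in flagged_words)

def handle_tweet(text, choose):
    """Return the tweet to send, or None if the user deletes it.

    `choose` is a hypothetical callback standing in for the prompt UI;
    it receives the draft and returns "edit:<new text>", "delete", or "send".
    """
    while looks_offensive(text):
        decision = choose(text)
        if decision == "send":      # "send anyway" bypasses the prompt
            return text
        if decision == "delete":
            return None
        if decision.startswith("edit:"):
            text = decision[len("edit:"):]  # loop re-checks the revised draft
    return text

# Example: the user revises a flagged draft once, then it passes the check.
print(handle_tweet("you idiot", lambda draft: "edit:I disagree"))  # → I disagree
```

Note that the prompt only ever interposes a choice; as the article stresses, "send anyway" always remains available.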
The feature, which arrives first on iPhones and later on Android devices, was tested over the past year, and the social network says it significantly reduced the amount of abuse.
“These tests ultimately resulted in people sending less offensive replies across the service, and improved behavior on Twitter,” wrote Anita Butler and Alberto Barrilla, the company’s product design director and product manager. “We learned that: If prompted, 34% of people revised their initial reply or decided not to send their reply at all. After being prompted once, people composed, on average, 11% fewer offensive replies in the future. If prompted, people were less likely to receive abusive and harmful replies back.”
The initial tests drew some criticism, Butler and Barrilla acknowledged, because the algorithms that tried to detect offensive language “struggled to capture the nuance in many conversations and often didn’t differentiate between potentially offensive language, sarcasm, and friendly banter.” Users in the tests reported tweets being flagged simply for containing swear words, even in friendly messages to mutual followers.
“If two accounts follow each other and reply to each other often, there’s a greater chance that they have a better understanding of their preferred dialect,” the pair said, explaining how they tuned the system to avoid such mistakes.
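One way to picture the adjustment the pair describe is a relationship check that suppresses the prompt between accounts that mutually follow and frequently reply to each other. Twitter has not described its actual implementation; the threshold, function name, and data structures below are invented purely for illustration.

```python
REPLY_THRESHOLD = 5  # assumed cutoff for "reply to each other often"

def likely_friendly_banter(sender, recipient, follows, reply_counts):
    """Guess whether flagged language is probably banter between friends.

    follows: dict mapping account -> set of accounts they follow.
    reply_counts: dict mapping (a, b) -> number of replies a has sent b.
    """
    mutual = (recipient in follows.get(sender, set())
              and sender in follows.get(recipient, set()))
    frequent = (reply_counts.get((sender, recipient), 0) >= REPLY_THRESHOLD
                and reply_counts.get((recipient, sender), 0) >= REPLY_THRESHOLD)
    return mutual and frequent

# Two accounts that follow each other and trade replies regularly:
follows = {"alice": {"bob"}, "bob": {"alice"}}
replies = {("alice", "bob"): 12, ("bob", "alice"): 9}
print(likely_friendly_banter("alice", "bob", follows, replies))  # → True
```

In a pipeline like the one described above, a `True` result would skip the review prompt even if the language classifier fired, which is exactly the false-positive case (swearing among mutuals) that drew criticism in the early tests.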
Unlike many experiments in AI moderation, the social network can afford to err on the side of caution here, because the penalty for a wrong guess is a simple pop-up, not censorship, an account ban, or worse.
Twitter has been a pioneer in attempts to “nudge” users into better behavior on social networks by adding “friction” to unwanted activities. The company also warns users who are about to retweet an article they haven’t read that the headline “may not tell the full story,” and recommends that they click through to read the article first – but still lets them retweet regardless.
In October and November last year, in an effort to “encourage people to add their own commentary before amplifying content” in the run-up to the US election, the company temporarily changed the retweet button so that it defaulted to a Quote Tweet. Once again, users could ignore the prompt if they wanted to, but Twitter said at the time, “We hope it will encourage everyone to not only consider why they are amplifying a Tweet, but also increase the likelihood that people add their own thoughts, reactions and perspectives to the conversation.”
Others have suggested adding even more friction. Novelist and technologist Robin Sloan, for example, has proposed retweet delays and caps on the number of people who can see any individual message. “Social media platforms should be small, slow and cool,” he wrote in 2019.