In an effort to promote "more informed sharing," Facebook will now warn you to reconsider if you try to share an article without first reading it.
Earlier this week, the social media company revealed that it will begin testing a new feature that prompts users to open and read articles before posting them to their profiles. The idea is not new: Twitter began testing a similar feature in June of last year and rolled it out to all of its users in September.
Facebook's decision is the latest in a series of moves by social networking firms to slow the spread of misinformation and offensive content by encouraging users to take their time and actually read content before posting it. Some social media researchers have long called for prompts like this, in the hope of reducing the number of people who react to a controversial headline without first learning the full context of the story. Clickbait and deliberately provocative headlines have, after all, been on the rise since 2016.
Because these features are new, however, it is unclear how effective they will be, or whether people will simply dismiss the prompts and post news without reading it. And even if someone clicks on an article after being prompted by Facebook, there is no guarantee they will read the whole story. This isn't a full solution, then, but it is a start.
If you try to share an article without opening it, Facebook will show you the following message:
“You’re about to share this article without opening it. Sharing articles without reading them may mean missing key facts.”
Users will then be given the option of opening the article first or continuing to share without reading it. Even if features like this don't completely stop the spread of misleading or polarizing material, there are early indications that they can help people engage more deeply with the news of the day, or at least pause to consider their intentions before reposting.
Twitter also launched a feature last week that encourages users to think twice before tweeting "offensive or hurtful words." And ahead of the 2020 presidential election in the United States, Twitter and Facebook began countering false information on their platforms by labeling politically misleading posts and preventing users from "liking" or replying to them.
Companies that operate social media platforms have a number of levers at their disposal to slow or discourage the spread of harmful content and inflammatory rhetoric. Features like these should help, even if only a little, to curb misinformation online. Let's hope they lead to a better, and safer, digital world.