Facebook is testing pop-up messages telling people to read a link before they share it

Years after popping open a Pandora’s box of bad behavior, social media companies are trying to figure out subtle ways to reshape how people use their platforms.

Following Twitter’s lead, Facebook is trying out a new feature designed to encourage users to read a link before sharing it. The test will reach 6% of Facebook’s Android users globally in a gradual rollout that aims to encourage “informed sharing” of news stories on the platform.

Users can still easily click through to share a given story, but the idea is that by adding friction to the experience, people might rethink their original impulses to share the kind of inflammatory content that currently dominates on the platform.

Starting today, we’re testing a way to promote more informed sharing of news articles. If you go to share a news article link you haven’t opened, we’ll show a prompt encouraging you to open it and read it, before sharing it with others.

— Facebook Newsroom (@fbnewsroom) May 10, 2021

Twitter introduced prompts urging users to read a link before retweeting it last June, and the company quickly found the test feature successful, expanding it to more users.

Facebook began trying out prompts like this last year. Last June, the company rolled out pop-up messages to warn users before they share any content that’s more than 90 days old, in an effort to cut down on misleading stories taken out of their original context.

At the time, Facebook said it was looking at other pop-up prompts to cut down on some kinds of misinformation. A few months later, it rolled out similar pop-up messages noting the date and source of any COVID-19-related links users share.

The approach demonstrates Facebook’s preference for passively nudging people away from misinformation and toward its own verified resources on hot-button issues like COVID-19 and the 2020 election.

While the jury is still out on how much of an impact this kind of gentle behavioral shaping can make on the misinformation epidemic, both Twitter and Facebook have also explored prompts that discourage users from posting abusive comments.

Pop-up messages that give users a sense that their bad behavior is being observed might be where more automated moderation is headed on social platforms. While users would probably be far better served by social media companies scrapping their misinformation- and abuse-ridden platforms and rebuilding them more thoughtfully from the ground up, small behavioral nudges will have to do.