Op-ed

Facebook’s latest “groups” disaster will only make it more toxic

Every single time Facebook could improve, it doubles down on causing more harm.

A person in a Hazmat suit covers the Facebook logo with warning tape.

Facebook is pushing yet another set of new features and policies designed to minimize harm in the homestretch to Election Day while also increasing "community" for users. But these features will do nothing to mitigate existing problems, and they will likely cause new, more widespread harms both to users and to society.

The most recent issue is a frustrating set of changes to the way Facebook handles groups. Last week, Facebook announced yet another way to "help more people find and connect with communities": putting those communities in your face whether you want to see them or not. Both the groups tab and your individual newsfeed will now surface content from groups you are not subscribed to, in the hope that you will engage with the content and with the group.

These changes are new, small inconveniences piled atop frustrating user-experience decisions that Facebook has been making for more than a decade. But they are also the latest example of how Facebook tries to shape every user's experience through black-box algorithms, and of how this approach harms not only individuals but the world at large. At this point, Facebook is working so hard to ignore expert advice on reducing toxicity that it looks as if the company doesn't want to improve in any meaningful way. Its leadership simply doesn't seem to care how much harm the platform causes as long as the money keeps rolling in.

Unpleasant surprise

Facebook groups can be great. When kept to a reasonable size and managed properly, they can be incredibly beneficial, especially when their members might not have the time, resources, and knowledge to put together independently hosted forum solutions. I find private groups helpful for connecting to other parents at my daughter's school, and I have friends who have benefited enormously from groups for cancer survivors and survivors of child loss.

But those are groups that we, the users, sought out and joined. Unsolicited content from groups we never subscribed to is not always welcome. In recent weeks, I noticed posts from groups I am not a member of appearing when I tried to use Facebook's increasingly user-hostile app to engage with the handful of friends-and-family groups I do regularly use. Those out-of-the-blue posts included content from two groups I deliberately left a month earlier because they were making my life worse.

Having that kind of content also appear in your personal newsfeed (which has not yet been rolled out to me) is apparently even worse. "It was creepier than I expected to see 'related discussions' hyped next to a short comments thread between my mom and my brother about her latest post," tech writer Rob Pegoraro (who has occasionally written for Ars) tweeted after experiencing the new feature. (He added that Facebook's obsession with engagement "needs to be shot into the sun," a sentiment with which I agree.)

At the same time, Facebook has introduced a slew of tweaks to the user interface on both Web and mobile that make it significantly harder to sustain high-quality engagement on the platform, particularly in groups. First, all groups now sort by "latest activity" as their default setting rather than by "recent posts." Sorting by "latest activity" steers users toward posts that already have comments, but the comments on each post are then sorted by "top comments," an inscrutable, out-of-sequence muddle that seems to have almost nothing to do with the conversations themselves. Users can still choose to sort by "all comments" or "most recent," but those choices do not stick. Whether by design or by flaw, the choice to sort a group by recent posts isn't sticky either, and you'll need to reselect it every single time you post a comment or navigate between posts.

Meaningful, thoughtful conversation—even in small, serious, well-moderated groups—has become almost impossible to maintain. That, too, drives sniping, bickering, and extremism on a small, conversational scale.

Engagement drives disaster

Facebook's first director of monetization, Tim Kendall, testified to Congress in September that Facebook's growth was purely driven by the pursuit of that vaunted "engagement" metric. He compared the company to Big Tobacco and lamented social media's effect on society.

"The social media services that I and others have built over the past 15 years have served to tear people apart with alarming speed and intensity," Kendall told Congress. "At the very least, we have eroded our collective understanding—at worst, I fear we are pushing ourselves to the brink of a civil war."

Kendall left the company in 2010, but Facebook's senior executives have known for years that the platform rewards extremist, divisive content and drives polarization.

The Wall Street Journal back in May of this year obtained internal documentation showing that company leaders were warned about the issues in a 2018 presentation. "Our algorithms exploit the human brain’s attraction to divisiveness," one slide read. "If left unchecked," the presentation warned, Facebook would feed users "more and more divisive content in an effort to gain user attention and increase time on the platform."

Even worse, the WSJ found that Facebook was well aware that the algorithms used for group recommendations were a huge problem. One internal Facebook researcher in 2016 found "extremist," "racist," and "conspiracy-minded" content in more than one-third of the German groups she examined. According to the WSJ, her presentation to senior leadership found that "64 percent of all extremist group joins are due to our recommendation tools," including the "groups you should join" and "discover" tools. "Our recommendation systems grow the problem," the presentation said.

Facebook in a statement told the WSJ it had come a long way since then. "We've learned a lot since 2016 and are not the same company today," a spokesperson said. But clearly, Facebook hasn't learned enough.

Violent, far-right extremists in the United States rely on Facebook groups as a way to communicate, and Facebook seems to be doing very little to stop them. In June, for example, Facebook said it had removed hundreds of accounts, pages, and groups linked to the far-right, anti-government "boogaloo" movement and would not permit them in the future. Yet in August, a report found that more than 100 new groups had been created since the ban and had "easily evaded" Facebook's efforts to remove them.

USA Today on Friday reported a similar trend in Facebook groups devoted to anti-maskers. Even as more than two dozen known cases of COVID-19 have been tied to an outbreak at the White House, COVID deniers claiming to support President Donald Trump are gathering by the thousands in Facebook groups to castigate any politician or public figure who calls for the wearing of masks.

Bad idea!

Amid the rise of conspiracy theories and extremism in recent years, experts have had a strong and consistent message to social media platforms: you need to nip this in the bud. Instead, by promoting unsolicited group content into users' newsfeeds, Facebook has chosen to amplify the problem.

Speaking about the spread of QAnon, New York Times reporter Sheera Frenkel said last month, "The one idea we hear again and again is for Facebook to stop its automated recommendation systems from suggesting groups supporting QAnon and other conspiracies."

The Anti-Defamation League in August published a study finding not only that hate groups and conspiracy groups are rampant on Facebook, but also that Facebook's recommendation engines still pushed those groups to users.

One week later, The Wall Street Journal reported that membership in QAnon-related groups grew by 600 percent from March through July. "Researchers also say social media make it easy for people to find these posts because their sensational content makes them more likely to be shared by users or recommended by the company's algorithms," the WSJ said at the time.

These recommendations allow extremist content to spread to ordinary social media users who otherwise might not have seen it, making the problem worse. At this point, the failure to heed the advice of academics and experts isn't just careless; it's outrageous.

Facebook does nothing

Facebook's policies put the onus of moderation and judgment on users and group administrators, making them the first set of eyes responsible for content. But when people do file reports, Facebook routinely ignores them.

Many Facebook users have at least one story of a time they flagged dangerous, extreme, or otherwise rule-breaking content to the service only for Facebook to reply that the post in question did not violate its community standards. The company's track record of acting on critical issues is terrible, and its trail of devastating real-world consequences inspires little confidence that it will deal expeditiously with the problems this expansion of group reach is likely to create.

For example, a Facebook "event" posted before the shooting of two people in Kenosha, Wisconsin, was reported 455 times, according to an internal report BuzzFeed News obtained. According to the reports BuzzFeed saw, fully two-thirds of all the complaints Facebook received related to "events" that day were tied to that single Kenosha event—and yet Facebook did nothing. CEO Mark Zuckerberg would later say in a company-wide meeting that the inaction was due to "an operational mistake."

More broadly, a former data scientist for Facebook wrote in a bombshell whistleblower memo earlier this year that she felt she had blood on her hands from Facebook's inaction. "There was so much violating behavior worldwide that it was left to my personal assessment of which cases to further investigate, to file tasks, and escalate for prioritization afterwards,” she wrote, adding that she felt responsible when civil unrest broke out in areas she had not prioritized for investigation.

Facebook's failure to act on one event may have contributed to two deaths in Kenosha. Facebook's failure to act in Myanmar may have contributed to a genocide of the Rohingya people. Facebook's failure to act in 2016 may have allowed foreign actors to interfere on a massive scale in the US presidential election. And Facebook's failure to act in 2020 is allowing people—including the sitting US president—to spread rampant, dangerous misinformation about COVID-19 and the upcoming election.

The consequences of Facebook's failures to take content seriously just keep piling up, and yet the change to promote groups will create even more fertile ground for the spread of extremism and misinformation. Facebook's services are used by more than 2.7 billion people. How many more of Facebook's "operational mistakes" can the world afford?
