
Inside Facebook, Jan. 6 violence fueled anger and regret over missed warning signs

Relief flowed through Facebook in the days after the 2020 presidential election. The company had cracked down on misinformation, foreign interference and hate speech - and employees believed they had largely succeeded in limiting problems that, four years earlier, had brought on perhaps the most serious crisis in Facebook’s scandal-plagued history.

“It was like we could take a victory lap,” said a former employee, one of many who spoke for this story on the condition of anonymity to describe sensitive matters. “There was a lot of the feeling of high-fiving in the office.”

Many who had worked on the election, exhausted from months of unrelenting toil, took leaves of absence or moved on to other jobs. Facebook rolled back many of the dozens of election-season measures that it had used to suppress hateful, deceptive content. A ban the company had imposed on the original Stop the Steal group stopped short of addressing dozens of look-alikes that popped up in what an internal Facebook after-action report called “coordinated” and “meteoric” growth. Meanwhile, the company’s Civic Integrity team was largely disbanded by management that had grown weary of the team’s criticisms of the company, according to former employees.

But the high fives, it soon became clear, were premature.

On Jan. 6, Facebook staffers expressed their horror in internal messages as they watched thousands of Trump supporters - shouting “stop the steal” and bearing the symbols of QAnon, a violent ideology that had spread widely on Facebook before an eventual crackdown - throng the U.S. Capitol. Many bashed their way inside and battled to halt the constitutionally mandated certification of President Joe Biden’s election victory.

Measures of online mayhem surged alarmingly on Facebook, with user reports of “false news” hitting nearly 40,000 per hour, an internal report that day showed. On Facebook-owned Instagram, the account reported most often for inciting violence was @realdonaldtrump - the president’s official account, the report showed.

Facebook has never publicly disclosed what it knows about how its platforms, including Instagram and WhatsApp, helped fuel that day’s mayhem. It rejected its own Oversight Board’s recommendation that it study how its policies contributed to the violence and has yet to fully comply with requests for data from the congressional commission investigating the events.


But thousands of pages of internal company documents disclosed to the Securities and Exchange Commission by the whistleblower Frances Haugen offer important new evidence of Facebook’s role in the events. This story is based on those documents, as well as on others independently obtained by The Washington Post, and on interviews with current and former Facebook employees. The documents include outraged posts on Workplace, an internal message system.

“This is not a new problem,” one unnamed employee fumed on Workplace on Jan. 6. “We have been watching this behavior from politicians like Trump, and the - at best - wishy washy actions of company leadership, for years now. We have been reading the [farewell] posts from trusted, experienced and loved colleagues who write that they simply cannot conscience working for a company that does not do more to mitigate the negative effects on its platform.”

The SEC documents, which were provided to Congress in redacted form by Haugen’s legal counsel and reviewed by The Post and other news organizations, suggest that Facebook moved too quickly after the election to lift measures that had helped suppress some election-related misinformation.

The rushed effort to restore them on Jan. 6 was not enough to stop the surge of hateful, violent posts, documents show. A company after-action report concluded that in the weeks after the election, the company did not act forcefully enough against the Stop the Steal movement that was pushed by Trump’s political allies, even as its presence exploded across the platform.

The documents also provide ample evidence that the company’s internal research over several years had identified ways to diminish the spread of political polarization, conspiracy theories and incitements to violence but that, in many instances, executives had declined to implement those steps.

Facebook officials counter that they planned exhaustively for the election and its aftermath, even anticipating the potential for post-election violence, and always expected the challenges to last through Inauguration Day on Jan. 20.

They acknowledge rolling back some protective measures in December but said they did so only when measures of problematic content had declined markedly, and that numerous other effective measures remained in place through Jan. 6 and beyond. They laid the blame for the Capitol siege on broader political forces and content that flowed more freely on other online platforms.

“We spent more than two years preparing for the 2020 election with massive investments, more than 40 teams across the company, and over 35,000 (now 40,000) people working on safety and security,” said spokeswoman Dani Lever. “In phasing in and then adjusting additional measures before, during and after the election, we took into account specific on-platform signals and information from our ongoing, regular engagement with law enforcement. When those signals changed, so did the measures.

“It is wrong to claim that these steps were the reason for January 6 - the measures we did need remained in place well into February, and some like not recommending new, civic, or political groups remain in place to this day,” she added. “These were all part of a much longer and larger strategy to protect the election on our platform - and we are proud of that work.”

But many employees with a close-up view of the company’s action before and during Jan. 6 emerged from the experience angry and demanding answers. Both feelings were well represented in posts on Workplace and echoed in interviews with The Post.

“Never forget the day Trump rode down the escalator in 2015, called for a ban on Muslims entering the US, [and] we determined that it violated our policies, and yet we explicitly overrode the policy and didn’t take the video down,” one wrote bitterly. “There is a straight line that can be drawn from that day to today, one of the darkest days in the history of democracy and self-governance. Would it have made a difference in the end? We can never know, but history will not judge us kindly.”


The documents and interviews with former employees make clear that Facebook has deep, highly precise knowledge about how its users are affected by what appears on its sites. Facebook relentlessly measures an astonishing array of data points, including the frequency, reach and sources of falsehoods and hateful content and often implements measures to suppress both.

The company exhaustively studies potential policy changes for their impacts on user growth and other factors key to corporate profits, such as engagement, the extent of sharing and other reactions. Public relations and political impacts also are carefully weighed - to the point that potentially flattering and unflattering news headlines about the company are sketched out for review. The documents show that Facebook has declined to deploy some mitigation tactics when chief executive Mark Zuckerberg has objected on the grounds that they would cause too many “false positives” or might stop people from engaging with its platforms.

The documents report, for example, that Facebook research, based on data from 2019, found that misinformation shared by politicians was more damaging than that coming from ordinary users. Yet the company maintained a policy that year that explicitly allowed political leaders to lie without facing the possibility of fact checks.

That same year, a report titled “Carol’s Journey to QAnon” examined how Facebook’s recommendation algorithms affected the feed of an experimental account supposedly representing a conservative mother in North Carolina. The report found that rapid polarization was an entrenched feature of how the platform operated. The first QAnon page landed in the conservative user’s feed in just five days.

“The content in this account (followed primarily via various recommendation systems!) devolved to a quite troubling, polarizing state in an extremely short period of time,” concluded the research, which was among the documents Haugen shared with the SEC but had also been previously reported by Vice News. Still, Facebook allowed QAnon to operate on its site largely unchecked for another 13 months before its first attempted crackdown, which proved so ineffective that the company imposed new sanctions two months later.


Company research also revealed in an undated document that XCheck - the “cross-check” program created to prevent “pr fires” by imposing an extra layer of oversight when the accounts of politicians and other users with large followings faced enforcement action - had devolved into a widely abused “white list” that effectively placed the powerful largely beyond the reach of company policies.

The internal analysis called the program a “breach of trust,” saying bluntly that the company did “not actually do what we say we do publicly.” But the report itself remained undisclosed until it was revealed by the Wall Street Journal last month. On Thursday, Facebook’s own Oversight Board blasted the company for failing to disclose enough information for it to evaluate XCheck. In a statement, the board said “Facebook admitted it should not have said that cross-check only applied to a ‘small number of decisions.’”

Even when Facebook did deploy tools to prevent harm, its own studies found those tools sometimes failed to operate as promised. After Facebook decided this year to stop recommending political groups entirely, according to the documents, a third of the listed civic and political groups continued to be seen by users over a period from just before the election through mid-January.

Stop the Steal, which became a rallying cry for those who attacked the Capitol on Jan. 6, was particularly problematic, the documents reveal. Facebook moved quickly to ban the main Stop the Steal group - and publicly announced the move as one of its “exceptional measures” for “this period of heightened tension” - after determining that it was surging with hateful comments, white supremacy and incitements to violence. But other groups touting the slogan “Stop the Steal” began to experience “meteoric growth,” one internal document reported. The ranks of one such group were artificially bolstered by 137 “super inviters” who each had invited more than 500 people.

The report called the company’s response “piecemeal” as Facebook officials struggled to determine whether these new groups were a “coordinated effort to delegitimize the election” or were “protected free expression from users who were confused or afraid or deserved our empathy.”

The answer became obvious in hindsight, prompting an urgent internal effort “to learn what we can about the growth of the election delegitimizing movements that grew, spread conspiracy, and helped incite the Capitol insurrection.”

- - -

Facebook’s sprawling effort to protect the 2020 election was years in the making, with many measures given trial runs in the 2018 U.S. midterm congressional elections and also during votes in other countries, current and former employees say.


Facebook’s tool kit was extensive, including banners and labels carrying authoritative information as well as measures such as dialing down the virality of problematic content through tweaks to the algorithms that determine what users see. Routine spreaders of false or hateful content, for example, might find their posts mysteriously reaching smaller audiences. Those seeking to suppress voter turnout might find themselves banned, company documents detailing these measures show.

Facebook also created a command center - initially called a “war room” when the idea debuted in 2018 - that was staffed around the clock to monitor and check abuses before they could spread widely.

This approach was shaped by the intense political backlash Facebook experienced after the discovery that Russian operatives had spread disinformation designed to polarize the American populace and help elect Trump in 2016. Scholars still debate how effective the Russian interference on Facebook was, but the ensuing scandal engulfed the company, prompting harsh congressional scrutiny and broad public anger.

The controversy, which continued for many months, prompted public apologies by Zuckerberg and other Facebook officials, and, former employees say, it strengthened the hands of reformers within the company. They pushed for research to develop teams and tools that made Facebook better at controlling malevolent, misleading efforts by foreigners.

The payoff came on Election Day in November. Although some problems did persist - one internal report obtained separately by The Post said that nearly a quarter of Facebook users reported seeing hate speech ahead of the election and that more than half reported seeing content that made them wary of discussing political issues in public - close races in several states and questions about the national outcome made problems at Facebook, at worst, a subplot.

One former executive, speaking on the condition of anonymity, said the overall goal of Facebook’s election strategy “was to make sure Facebook wasn’t the story.”

Many at the company felt that on that score, they had succeeded.

But problems began to arise immediately after the election, as Trump and his allies began touting false claims about his loss. In a Nov. 5 post on Workplace, an employee noted that the top comment appearing below a news story about coming vote counts included a link to a false news report. “Not only do we not do something about combustible election misinformation in comments, we amplify them and give them broader distribution,” the employee wrote.

A few weeks later, the company began rolling back - as had always been planned - some unusually aggressive measures that had helped control toxic speech and misinformation on its platforms, although some others remained in place, officials say. Beyond outright bans and removals, these election-season measures had included what Facebook calls “Soft Actions,” which quietly slowed the spread of problematic content without removing it.

Those measures, detailed in company documents, included requiring administrators of groups with a large number of strikes against them for violent or hateful content to review posts before they went up, freezing comments in those groups, reducing the number of invitations a group administrator could send from 100 to 30, and triggering a special set of algorithms that would automatically reduce the spread of content the software interpreted as possible hate speech.

Several measures also were aimed at groups and pages that, like the ones touting Stop the Steal, sought to “delegitimize” the 2020 election, according to a company document detailing such measures. One barred them from Facebook’s “recommendation engine.” Another prevented existing groups from changing their names to terms that undermined the election’s legitimacy. A third blocked the algorithms’ ability to recommend live video on political topics.

As Facebook lifted these controls after the election, company leaders did one more thing that only a handful of insiders knew was coming: They dismantled the “Civic Integrity” group that included Haugen and others who had spent years working on combating dangerous content on Facebook.


Officials said their goal was to redistribute these roughly 200 employees across the company. But when Haugen and others heard the news, they were aghast. This was a company - as they knew better than most - notorious for its record of facilitating the warping of elections, incitements to political violence and the pushing of voters into self-reinforcing partisan bubbles.

Facebook’s vice president for integrity, Guy Rosen, wrote in a Dec. 2 note to his team, “We’ve made incredible progress in getting problems under control, and many of us looked to the US 2020 elections as a major milestone on this path. I’m so very proud of this team and everything we’ve accomplished.”

He then outlined a new organization in which the responsibilities previously assigned to Civic Integrity were redistributed to several other teams. Rosen portrayed it in his note as a sign of Civic Integrity’s success and maturation. Those on the team and others familiar with internal deliberations at the company did not see it that way.

The issue, said people familiar with internal discussions at Facebook, was that Civic Integrity team members had grown too vocal, with a succession of them airing their concerns in posts on Workplace, visible in many cases to more than 60,000 employees worldwide. Many of these posts became fodder for unflattering news coverage.

“The leadership was not happy with how loud the Civic Integrity team was. They wanted to decentralize the team and the power they had,” said one former company official, speaking on the condition of anonymity. “The way they approached it, it didn’t feel like they were saying, ‘You guys did a great job; thank you for your service.’ It was like, ‘We’re going to split up the family.’”

But whatever public relations problems the Civic Integrity team had created as an intact unit would only grow more serious once the company moved to dissolve it. Haugen and others began considering whether to speak out about what they had learned about their employer.


- - -

As Jan. 6 approached, Facebook began tracking a rise in toxic content on its platforms, according to former employees and a Facebook document that also listed potential “Break The Glass” measures that had been in place during the election but “previously rolled back.” Facebook officials say these measures had been lifted gradually into December, though some continued through February.

Just a few weeks later, many Facebook employees watched, alarmed, as Trump made an incendiary speech to supporters gathered on the White House Ellipse. Soon they were filled with dread as rioters descended on the U.S. Capitol, said former employees, and it became clear that Stop the Steal was no mere Facebook group showing worrying signs of political extremism.

As violence engulfed the Capitol, the policy team based in Washington as well as data scientists in the company’s integrity division began preparing for unprecedented measures against Trump, including what would become an initial 24-hour ban of the president. The Washington policy office, whose leadership ranks were dominated by Republicans, had, since Trump announced his candidacy in 2015, played a key role in adapting company rules for the Trump era - creating exceptions, for example, that allowed hate speech from a political leader so long as it was deemed “newsworthy.”

But his central role in inciting violence at the Capitol had become impossible to ignore, former employees said. Zuckerberg announced the temporary ban on Trump later that day.

Policymakers in the Washington office also worried that some of their friends might be trapped inside the Capitol, in personal danger while simply doing their highly political jobs. When Zuckerberg sent a companywide note expressing relief that no employee was at risk, several in the Washington office bristled. How did he know? They soon concluded that he did not, and could not, have known yet, said several people familiar with the incident.

Another top executive who became the target of employee discontent that day was Chief Technology Officer Mike Schroepfer, a well-liked executive seen as being close to Zuckerberg.

“I’m saddened [by] the attack on the most fundamental part of America: the peaceful transfer of power,” Schroepfer wrote on Workplace, apparently in response to a post from Zuckerberg calling Jan. 6 “a dark moment in our nation’s history.”

Schroepfer added, “Hang in there everyone as we figure out the best ways to support our teams and manage discourse on our platform to allow for peaceful discussion and organization but not calls for violence.”

That sentiment was met with rage.

“I’m struggling to match my values with my employment here,” wrote one employee on Workplace whose name was redacted in the copy shared with The Post. “I came here hoping to affect change and improve society, but all I’ve seen is atrophy and abdication of responsibility. I’m tired of platitudes; I want action items. We’re not a neutral entity.”

Another employee, whose name also was redacted from the documents, recalled the anger in June 2020, after Black Lives Matter protesters had been forcibly cleared from a park next to the White House before Trump walked through for a photo opportunity during which he denounced the demonstrations.

“Employees have been crying out for months to start treating high-level political figures the same way we treat each other on the platform. That’s all we’re asking for,” the employee wrote. “Last we spoke, innocent protesters were tear-gassed under the orders of a political figure whose message was amplified. Today, a coup was attempted against the United States. I hope circumstances aren’t even more dire next time we speak.”

Facebook finally took its most aggressive action yet the day after the Capitol attack, banning Trump “indefinitely” from the platform for inciting violence. The many exceptions Facebook had made to allow for the president’s rule-bending behavior had reached their limit.

But in the days after the insurrection, Facebook went out of its way to play down the company’s role in what happened on Jan. 6. Chief Operating Officer Sheryl Sandberg, appearing at a live event, made a point of saying that the events that day were largely organized on other platforms “that don’t have our abilities to stop hate.”

Sandberg’s comments struck some people within Facebook as tone-deaf, as if company leaders were attempting a victory lap in the wake of a historic tragedy.

“That was the last straw for me,” said one former executive, who quit shortly after Sandberg’s statement. “We didn’t know the full extent, but we knew it was disingenuous.”

- - -

The Washington Post’s Tom Hamburger contributed to this report.
