New Zealand massacre demonstrates how social media is used to spread violence

Friday’s slaughter in two New Zealand mosques, like mass shootings before it, had its seeds in one of the darkest corners of the internet, a chat room where anonymous people appeared to talk openly about the attack before, during and after it happened. But technology played a more visible - and arguably more troubling - role in publicizing the violence itself and, by extension, the hate-filled ideology behind it.

And yet again, the biggest players in America's rich, massive and sophisticated technology industry - YouTube, Twitter and Facebook - failed to rapidly quell this spread as it metastasized across platforms, bringing horrific images to internet users in a worldwide, dystopian video loop. The alleged shooter also released a manifesto denouncing Muslims and immigrants, police said.

The New Zealand massacre video, which appeared to have been recorded with a GoPro helmet camera, was announced on the fringe chat room 8chan, live-streamed on Facebook, reposted on Twitter and YouTube and discussed on Reddit. Users on 8chan - known for its politically extreme and often-hateful commentary - watched in real time, cheering or expressing horror. They traded links to the alleged shooter’s hate-filled postings and to mirrors of his videos, while encouraging each other to download copies before they were taken offline.

Hours after the shooting, the social-media giants Facebook, Twitter and YouTube were still hosting versions of the shooting video, even as New Zealand authorities said they were calling for it to be taken down.

YouTube tweeted Friday morning, "Our hearts are broken over today's terrible tragedy in New Zealand. Please know we are working vigilantly to remove any violent footage."

When a shooting video gets uploaded to a social-media site, the site can compute a digital fingerprint of it, known as a hash, and add that fingerprint to an automatic blacklist so copies are blocked when they get posted again. The years-old algorithmic technique, first popularized as a tactic to combat the spread of child pornography, has since been used to automatically block copyrighted material, porn and other content that violates the social-media sites’ rules.

But the algorithms remain critically flawed, experts say. Uploaders can sidestep them by altering clips in small ways, such as adding a watermark, distorting the music, or changing the video's size, editing or speed. Several of the shooting videos reposted to YouTube appeared to have such alterations, though it's unclear whether those changes contributed to their remaining online.
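To make that limitation concrete, here is a minimal, purely illustrative sketch in Python - not any platform's actual system. It assumes a video can be reduced to raw bytes and tiny grayscale frames, and contrasts an exact cryptographic hash, which any one-byte tweak defeats, with a crude perceptual hash compared by bit distance.

```python
# Illustrative sketch only; real platforms use far more robust video and
# audio fingerprinting. All data and values here are hypothetical.
import hashlib

def exact_hash(video_bytes: bytes) -> str:
    """Exact cryptographic fingerprint: any single-byte change alters it."""
    return hashlib.sha256(video_bytes).hexdigest()

def average_hash(frame):
    """Toy perceptual hash: one bit per pixel, set if brighter than the mean.
    Visually similar frames tend to produce similar bit patterns."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    """Count of differing bits between two perceptual hashes."""
    return bin(a ^ b).count("1")

# A blacklist of known-bad exact fingerprints.
blacklist = {exact_hash(b"original upload")}

# A re-encoded copy with a single extra byte (say, a watermark) slips past
# the exact-match check...
altered = b"original upload" + b"\x01"
print(exact_hash(altered) in blacklist)  # False

# ...which is why moderation systems lean on perceptual hashes compared by
# distance rather than by equality.
frame_original = [[10, 12], [200, 210]]   # frame from the original clip
frame_reposted = [[11, 12], [199, 211]]   # same frame after light re-encoding
print(hamming_distance(average_hash(frame_original),
                       average_hash(frame_reposted)) <= 1)  # True
```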

Friday's massacre in New Zealand is the third time Facebook has been used to broadcast video of a murder. In 2015, a gunman uploaded smartphone video of himself shooting two television journalists from a station in Roanoke, Va. In 2017, a gunman posted video of his fatal shooting of a bystander in Cleveland, then went on Facebook Live to talk about the killing.

"Shock videos - especially with graphic first-person footage - is where reality television meets violent gaming culture meets attention-amplification algorithms," said Jonathan Albright, research director at the Tow Center for Digital Journalism at Columbia University. "The modern internet has been designed engagement-first, and this works in opposition to quickly halting the spread of harmful material and ideas - especially for sensational ultra violent terrorism footage."

Facebook and YouTube have said artificial-intelligence algorithms will help them patrol the onslaught of content posted on their platforms every minute and that early successes have helped crack down on explicit video and terrorist propaganda. Both companies in recent years also have made major new investments in human and automated systems for detecting and removing problematic content, collectively hiring tens of thousands of new employees to help.

But critics have said that even powerful automated systems lack the context or precision to properly assess many types of inappropriate content, such as hate speech and violent videos. Critics also have questioned how highly the companies have prioritized content moderation, as opposed to algorithms for search, discovery or advertising optimization - all operations that contribute large chunks of the platforms' revenue.

Live video is also a challenge to police, because today's content-monitoring algorithms aren't yet smart enough to automatically recognize violence when they see it.

Facebook removed the original video of the New Zealand shooting after about an hour. YouTube began attempting to surface news sources when people searched for it. On YouTube, some of the videos were presented as "inappropriate content" available only for users who had logged in, said they were adults and consented to watch it. But some clones of the video were available without those restrictions.

The rapid spread of shooting videos is likely to increase pressure on Capitol Hill to crack down on technology companies, which have largely evaded significant regulation in the past two decades. The companies generally have been shielded by broad protections for social-media sites that host content uploaded by someone else.

Live video has been one of the tech giants' biggest growth drivers. In 2016, when Facebook chief Mark Zuckerberg announced an expansion of live video, he said they built it to "support whatever the most personal and emotional and raw and visceral ways people want to communicate are as time goes on." But it has also lured in bad actors who want to use the full force of that technical infrastructure to propel violent videos and hate speech around the world.

With live-streaming, "the potential for profit and notoriety are astronomical, and that can incentivize certain types of behavior," said Joan Donovan, director of the Technology and Social Change Research Project at Harvard University's Shorenstein Center. "The point isn't to gain attention to the violence, the point is to gain attention to the ideology."

The companies, she said, have little motive to police content because fast and easy sharing helps boost users, views and advertising revenue. But content moderation is also incredibly expensive and carries with it the potential for politicization, including current arguments from some conservatives and liberals that their posts are being unfairly silenced.

The tech companies "have a content-moderation problem that is fundamentally beyond the scale that they know how to deal with," said Becca Lewis, a researcher at Stanford University and the think tank Data & Society. "The financial incentives are in play to keep content first and monetization first. Any dealing with the negative consequences coming from that is reactive."

Alice Marwick, a communication professor at the University of North Carolina at Chapel Hill, said, "If you are going to reap the advantages of having a massive platform - of tremendous ad revenues, huge user bases and incredible political influence - then you also have a responsibility to govern that platform."

The New Zealand Department of Internal Affairs said Friday that people who share the video online "are likely to be committing an offence" since "the video is likely to be objectionable content under New Zealand law."

"The content of the video is disturbing and will be harmful for people to see. This is a very real tragedy with real victims and we strongly encourage people to not share or view the video," the agency said in a statement.

The DIA said it is working with social media platforms to remove the clips and urged the public to report objectionable content if they come across it. The agency also acknowledged the issue of auto-play on social media and on the websites of traditional news outlets, in which people may see disturbing content without choosing to do so.

"We are aware that people may have unsuspectingly viewed the video on social media platforms thinking it is a media article, so please be vigilant of images that yourself and those around you are viewing, particularly our young people."

The agency included helpline contact information for people struggling with the graphic images.

The New Zealand shooting highlights how social media companies continue to grapple with breaking news events, and raises questions about the effectiveness of their safeguards, which are designed to curb abusive content and incitements to violence.

Twitter said it had suspended the account of one of the suspects and was working to remove the video from its network; both violate its policies.

"New Zealand Police alerted us to a video on Facebook shortly after the livestream commenced and we quickly removed both the shooter's Facebook and Instagram accounts and the video," said Mia Garlick, a Facebook spokeswoman. "We're also removing any praise or support for the crime and the shooter or shooters as soon as we're aware. We will continue working directly with New Zealand Police as their response and investigation continues."

Reddit, in a statement, said that it was "actively monitoring the situation in Christchurch, New Zealand. Any content containing links to the video stream are being removed in accordance with our site-wide policy."

Google, which owns YouTube, said in a statement, “Our hearts go out to the victims of this terrible tragedy. Shocking, violent and graphic content has no place on our platforms, and is removed as soon as we become aware of it. As with any major tragedy, we will work cooperatively with the authorities.”
