In an apparent effort to ensure their heinous actions would "go viral," a shooter who murdered at least 49 people in attacks on two mosques in Christchurch, New Zealand, on Friday live-streamed footage of the assault online, leaving Facebook, YouTube and other social media companies scrambling to block and delete the footage even as other copies continued to spread like a virus.

The original Facebook Live broadcast was eventually taken down, but not before its 17-minute runtime had been viewed, replayed and downloaded by users. Copies of that footage quickly proliferated to other platforms, like YouTube, Twitter, Instagram and Reddit, and back to Facebook itself. Even as the platforms worked to take some copies down, other versions were re-uploaded elsewhere.

The episode underscored social media companies' Sisyphean struggle to police violent content posted on their platforms. "It becomes essentially like a game of whack-a-mole," says Tony Lemieux, professor of global studies and communication at Georgia State University.

Facebook, YouTube and other social media companies have two main ways of checking content uploaded to their platforms. First, there's content recognition technology, which uses artificial intelligence to compare newly uploaded footage to known illicit material.

The gunman filmed and shared the attacks using a mobile phone app called LIVE4, which allows users to broadcast directly to Facebook from personal body cameras, according to the app's developer and a Reuters review of videos available online. The app is usually used to share videos of extreme sports and live music, but on Friday the footage recreated the carnage of a computer game, showing the attacker's first-person view as he drove to one mosque, entered it and began shooting randomly at people inside.

Alex Zhukov, founder and chief technology officer of LIVE4 developer VideoGorillas, said the LIVE4 service transmitted footage directly to Facebook and his company did not have the ability to review it first. "The stream is not analysed, stored or processed by LIVE4 in any way; we have no ability (even if we wanted to) to look at the live streams as they are happening or after it's completed," he said in written comments to Reuters. "The responsibility for content of the stream lies completely and solely on the person who initiated the stream." He said the company condemned "the actions of these horrible persons and their disgusting use of our app for these purposes. We will do whatever is humanly possible for it to never happen again."

New Zealand's Department of Internal Affairs said people posting the video online risked breaking the law. "The content of the video is disturbing and will be harmful for people to see," the department said. "We are working with social media platforms, who are actively removing this content as soon as they are made aware of an instance of it being posted."

Facebook, Twitter, Alphabet Inc and other social media companies have previously acknowledged the challenges they face policing content on their platforms. The shootings in New Zealand show how the services they offer can be exploited by extremist groups, said Lucinda Creighton, senior advisor to the Counter Extremism Project. She said the attacks were shown live on Facebook for 17 minutes before being stopped. "Extremists will always look for ways to utilise communications tools to spread hateful ideologies and violence," she said. "Platforms can't prevent that, but much more can be done by platforms to prevent such content from gaining a foothold and spreading."
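The content recognition technology mentioned earlier, which compares new uploads against fingerprints of known illicit material, can be sketched in miniature as follows. This is an illustration only: the function names and the fingerprint set are hypothetical, and it uses an exact cryptographic hash for simplicity. Production systems (PhotoDNA-style perceptual hashing, for example) instead use fingerprints that still match after footage has been re-encoded, cropped or watermarked, which is exactly why exact-hash matching alone loses the "whack-a-mole" game described above.

```python
# Minimal sketch, not any platform's actual system: match an upload's
# fingerprint against a database of fingerprints of known material.
import hashlib

# Hypothetical database of known-content fingerprints. This entry is the
# SHA-256 digest of the bytes b"test", used purely for demonstration.
KNOWN_FINGERPRINTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint; a stand-in for a perceptual hash."""
    return hashlib.sha256(data).hexdigest()

def should_block(upload: bytes) -> bool:
    """Flag the upload if its fingerprint matches known material."""
    return fingerprint(upload) in KNOWN_FINGERPRINTS

print(should_block(b"test"))      # matches the demo entry -> True
print(should_block(b"re-encoded"))  # any changed bytes -> False
```

The second call shows the weakness the article describes: change a single byte, via re-encoding or trimming, and an exact hash no longer matches, so re-uploads slip through until a new fingerprint is added.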