Are social media firms deleting evidence of war crimes in Ukraine?

IRPIN, UKRAINE - Mar. 09, 2022: Thousands of residents of Irpin abandon their homes and evacuate as Russian troops bomb the city. Image by palinchak | Bigstockphoto

ITEM: Social media platforms are working overtime to police content related to the Russian invasion of Ukraine, from disinformation to graphic videos of the violence and its grisly aftermath. However, the latter activity may also be inadvertently deleting evidence of war crimes.

Social media companies have been under tremendous regulatory pressure to combat disinformation and fake news for some time now, whether that means actual fake news or whatever governments that want to control social media content say is fake news.

The removal of graphic content is theoretically less controversial, as most if not all social media companies prohibit violent content in their T&C policies. For example, TikTok says it has been vigorously scrubbing its platform of videos that are “gratuitously shocking, graphic, sadistic or gruesome.”

On the other hand, this raises issues when it comes to video content that has some kind of value, either as news journalism (be it citizen journalism or a major news outlet) or as evidence of a crime. In the case of the war in Ukraine, some of the videos being uploaded to TikTok, Twitter, Facebook, YouTube and other platforms could potentially be used to prosecute war crimes.

However, this means that when social media platforms remove violent content, they’re also removing that evidence. This is a problem because there’s no real way for independent researchers to know how much content social media companies are removing, or whether any of the removed posts might constitute criminal evidence, reports the BBC:

“TikTok is not as transparent as some of the other companies – and none of them are very transparent,” Witness programme director Sam Gregory says.

“You don’t know what was not visible and taken down because it was graphic, but potentially evidence. 

“There’s a huge issue here.”

It’s not even a new issue – the same questions came up five years ago when human rights groups and war crimes investigators accused Facebook and YouTube of deleting evidence of war crimes in Syria and Myanmar.

Since then, NGOs have been calling for the creation of a centralized digital locker where deleted user-generated content originating from war zones can be stored and accessed by NGOs and investigators.

But social media companies have resisted such calls because they don’t want outsiders to see how their moderation policies work. Instead, investigators have to jump through all kinds of hoops to try to obtain evidence – the hoops are different for every company, and some companies even strip out metadata that could otherwise verify where and when the content was captured.

According to the BBC, which asked four big social media players about this, Meta (Facebook’s owner) said it is “exploring ways to preserve this type and other types of content when we remove it.”

TikTok had no comment, and Google and Twitter didn’t respond.

In a way, it’s a damned-if-you-do, damned-if-you-don’t scenario for social media companies – they’re getting hammered both for not doing enough to remove disinformation and fake news that is having a demonstrably damaging effect, and for taking down content that violates their T&Cs but may be evidence of a crime. There’s also an admitted double-edged sword here – some governments may use the same argument to demand that social media companies preserve and hand over “evidence” that can be used to prosecute their political enemies.

There’s no easy answer. As usual.

Full story here.

See also: this Technology Review report on amateur online sleuths who are at work sifting through war-related content to authenticate evidence of war crimes.
