Deepfake porn and other ways AI can ruin your life

Image credit: Lisa A / Shutterstock.com

Awful people are using AI tools to create deepfake porn videos to humiliate women. Who (or what) is going to stop them?

2019 is going to be a big year for artificial intelligence – and not necessarily in a good way. Unless you’re totally cool with having your face grafted seamlessly into a hardcore porn video that then goes viral.

Because that is a thing, and has been for some time: the use of AI tools (mainly from Google) to develop apps capable of creating “deepfake” porn videos that are then posted on porn sites like PornHub, or on Reddit, 8chan and other online communities. Motherboard reported on this a year ago – some Reddit users provide free tools that automate the face-swapping process, as well as tips and tricks for first-time users. With modest technical know-how and a decent library of images of the victim’s face, anyone can make a deepfake porn video. And if you can’t do it yourself, you can pay someone a modest fee to do it for you.

At the time the Motherboard report came out, the quality of deepfake porn ranged from quite convincing to fairly obviously fake. But experts warned the quality would only get better as AI technology matured – and that’s exactly what’s happened, according to a Washington Post report at the end of last month.

What’s worse, the WaPo report adds, while early deepfakes focused mainly on celebrities like Gal Gadot, Scarlett Johansson and Taylor Swift, the targets are increasingly women who aren’t public figures – they’re co-workers, classmates and girlfriends (or ex-girlfriends) of the creators, who distribute these videos with the objective of humiliation, harassment and abuse.

Media critic Anita Sarkeesian – herself a victim of deepfake porn – told WaPo that such videos are horrifying enough for celebrities, but the effect on private citizens is even worse:

“For folks who don’t have a high profile, or don’t have any profile at all, this can hurt your job prospects, your interpersonal relationships, your reputation, your mental health,” Sarkeesian said. “It’s used as a weapon to silence women, degrade women, show power over women, reducing us to sex objects. This isn’t just a fun-and-games thing. This can destroy lives.”

And that’s just in the US, which has a relatively liberal social attitude towards sex and porn. In countries with more conservative attitudes, the impact could be even greater.

For now, victims of deepfake porn have limited ways to fight back. Deepfakes haven’t yet been put to the legal test – laws covering libel, defamation, fraud and even identity theft may help, but only if the video meets the specific criteria those laws spell out. Also, many deepfake creators are anonymous, so even if you could charge them, you’d have to find them first.

Regulation might help, but many regulators don’t seem to understand the underlying technology, and are likely to come up with laws that are either ineffective or swiftly outdated as the technology evolves. And for all the talk of AI ethics, tech companies like Google have been reluctant to impose restrictions on their AI tools, fearing it would hinder innovation for positive uses of the technology.

Fighting AI with AI

Ironically, the strongest tool to combat AI-assisted deepfake porn may be AI itself. According to a separate WaPo report, DARPA is funding developers to come up with automated “media forensics” tools that can detect fake videos, while start-ups such as Truepic (which verifies the authenticity of digital photographs) are working on AI-powered techniques to spot giveaways (such as the lack of blood pulsing in the person’s forehead).

But as with cyber security, deepfake porn is an arms race – as the technology for spotting fakes improves, so does the ability of deepfake creators to sidestep those detection techniques. And of course the underlying AI technology enabling all this in the first place is also evolving and improving.

If you think the risk of deepfake porn is overstated, remember too that all of this is happening in a broader context that goes way beyond porn – deepfake tech can be (and has been) applied to political propaganda and disinformation, which is exacerbating the problem of fake news on social media sites. As we saw at last year’s RISE conference, the technology to impersonate real people in real time with photorealistic digital avatars and natural spoken-language capabilities exists today, which increases the risk of identity theft and fraud.

Wired assures us that 2019 won’t be all doom and gloom on the AI front – we can expect to see more transparency, accountability and ethics. Let’s hope so – we can talk all we want about the wonders and benefits of the digital economy, but it won’t be worth a dime if we can’t trust what we see, distinguish the fake from the real or protect ourselves from fraud, manipulation, blackmail and humiliation.

BONUS TRACK: For extra AI creepiness, check out this research paper from Nvidia that proposes an alternative generator architecture for generative adversarial networks, capable of creating photos of fake human faces that look entirely real [PDF].
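For the curious, the adversarial idea behind that paper can be shown in miniature. The sketch below is a toy illustration (my own, not Nvidia’s method): a one-line “generator” tries to produce numbers that look like they came from a real distribution, while a logistic “discriminator” tries to tell real from fake. Real GANs use deep convolutional networks, but the tug-of-war training loop is the same.

```python
# Toy GAN in NumPy: generator g(z) = w_g*z + b_g tries to mimic samples
# from N(4, 1.25); discriminator D(x) = sigmoid(w_d*x + b_d) tries to
# tell real samples from generated ones. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w_g, b_g = rng.normal(), rng.normal()  # generator parameters
w_d, b_d = rng.normal(), rng.normal()  # discriminator parameters
lr = 0.01

for step in range(5000):
    x_real = rng.normal(4.0, 1.25, size=32)   # "real" data batch
    z = rng.uniform(-1, 1, size=32)           # generator noise
    x_fake = w_g * z + b_g

    # Discriminator step: ascend on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w_d * x_real + b_d)
    d_fake = sigmoid(w_d * x_fake + b_d)
    grad_w_d = np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake)
    grad_b_d = np.mean(1 - d_real) - np.mean(d_fake)
    w_d += lr * grad_w_d
    b_d += lr * grad_b_d

    # Generator step: ascend on log D(fake) (non-saturating loss),
    # i.e. make fakes look more real to the current discriminator.
    x_fake = w_g * z + b_g
    d_fake = sigmoid(w_d * x_fake + b_d)
    grad_x = (1 - d_fake) * w_d
    w_g += lr * np.mean(grad_x * z)
    b_g += lr * np.mean(grad_x)

# In a successful run, the generated samples' mean drifts toward the
# real data's mean of 4 as the two models push against each other.
samples = w_g * rng.uniform(-1, 1, size=1000) + b_g
```

The arms-race framing in the article maps directly onto this loop: the discriminator is the “media forensics” side, and the generator is the faker learning to sidestep it.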
