Digital Domain CEO Daniel Seah joked onstage that digital avatar technology could make fake news really fake. But somehow that joke isn’t funny anymore. We need to talk about why.
At the RISE conference in Hong Kong earlier this month, by far the most fascinating – and most disturbing – presentation came from the CEO of a Hollywood special effects company.
Daniel Seah – who is executive director and CEO of Digital Domain, the digital effects company co-founded by film director James Cameron (Titanic, Avatar, the first two Terminator movies, etc) – gave a talk about “virtual humans”, which is just what it sounds like: computer-generated humans that look, move and talk like the real thing.
If you’ve seen movies like The Curious Case of Benjamin Button or Tron: Legacy, you’ve seen this technology in action already – it’s what made Brad Pitt age in reverse, and made Jeff Bridges look the same age he was in the first Tron film in 1982.
The technology basically works by scanning the actor’s face in detail and then animating it to match the real actor’s movements. The process has become so sophisticated that it no longer requires motion-capture gear or drawing dots on the person’s face to map it accurately – all you need now is a single iPhone camera, Seah said. Once the person has been scanned, the resulting digital avatar can be operated by pretty much anyone who has access to it. It can even be operated by the computer itself – and in real time.
“If I ever have the pleasure to invite any of you to our LA studio, then if you guys allow me to scan you for more than three hours, then we can duplicate you,” Seah said. “We can make you speak things you have never ever said before, we can make you to do anything that we want to. So we can be very dangerous.”
To summarize, then: Digital Domain can create a photorealistic digital version of you that can be operated in real time by you, someone else, or a computer armed with machine learning and a real-time engine.
Seah said Digital Domain has several potential apps in mind (besides film effects), from avatars for virtual reality environments to a social media app that could turn Siri and Alexa into a “real” person of your choice, from Brad Pitt and Angelina Jolie to your high school crush or even a deceased loved one (yikes!).
And then there’s this: “In the future … you know when President Trump says ‘don’t produce fake news’? We can produce fake news now,” Seah said. “Definitely.”
Which in itself is a rather unnerving proposition in an age where fake news and false information are a major problem on social media. It’s even more unnerving considering that the day after Seah’s presentation, Sogou CEO Wang Xiaochuan took the stage at RISE to talk about the next frontier in AI – which happened to include a demo of an AI-powered virtual human presenting the news.
Meanwhile, we already know that it’s possible to produce a fake video of a real person saying something he/she didn’t say in real life. Last year, researchers at the University of Washington created an AI-generated fake video of President Barack Obama. No face scanning was required – they trained the system on 14 hours of footage of President Obama, and it generated video of him lip-synced to an audio track of his voice.
Watch this [via BBC News]:
You see where this is going. We’re looking at the prospect of virtual humans passing themselves off as real people – even though the real person represented by the avatar may not be the one behind the controls. Whether it’s done for political propaganda, misinformation, fraud or a simple prank, the potential for abuse here is massive.
Which is why the thing that bothered me the most about Seah’s fake news joke was when he followed it up by saying, “Let’s leave the ethical issues aside for a moment …”
And I sat there thinking, “Well, no, let’s not leave that aside – let’s talk about that right now.”
To be fair, Seah did appear on a panel in a side track later that day that talked about the ethical questions of AI, which I was unable to attend, so perhaps he covered it there. (Also, here he is talking about it on CNBC that same week.)
The problem for me is that I’ve repeatedly encountered this attitude regarding the ethical ramifications of AI and other disruptive technologies: “let’s leave that aside for the moment.”
I understand the need for tech companies to focus on tech development, use cases and making money before they start worrying about how people might use their products for bad things. The thing is, Facebook took that attitude, and look where we are now.
Also, the problem with putting ethics aside until later is that it’s easy to keep putting it aside, again and again, until “later” becomes “too late”. Tech development doesn’t necessarily have to wait until ethical issues are resolved first, but that discussion does need to happen in parallel. And it ought to be more at the forefront than it is.