ITEM: If you think fake news on social media is a problem now, just wait until AI starts writing it. And you won’t have to wait long.
At last week’s EmTech Digital event in San Francisco organized by MIT Technology Review, Sean Gourley – founder and CEO of Primer – said onstage that AI technology will enable “very effective” automation of fake news stories.
He should know – his company’s software, which mines data sources and automatically generates reports for clients (including the US CIA), could not only generate convincing fake stories automatically but also tailor each story to individual readers based on their interests, Technology Review reports:
“I can generate a million stories, see which ones get the most traction, double down on those,” Gourley said.
According to Gourley, dissemination of fake news today is relatively crude – stories are still written by humans, and the only part that’s automated is posting them randomly on social media. Adding AI to the mix means that the stories can be written automatically (and convincingly), but the real impact will come from AI’s ability to target the right audience, understand network dynamics and judge which content is more popular, he said:
“Where you inject information is going to have a massive impact on how it spreads and diffuses,” Gourley said. He went on to suggest that a platform like Facebook may be inherently flawed for sharing news. “All we’ve seen at the moment is primitive, and it’s had a profound impact, and more is coming,” he said.
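The “generate a million stories, see which ones get the most traction, double down on those” loop Gourley describes is, at heart, a multi-armed bandit. Here is a minimal sketch of that dynamic – all the headline names, click probabilities, and parameters are invented for illustration, not drawn from Primer’s actual system:

```python
import random

def pick_variant(stats, epsilon=0.1):
    """Epsilon-greedy choice: usually show the variant with the best
    observed engagement rate, occasionally explore a random one."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    return max(stats, key=lambda v: stats[v]["clicks"] / max(stats[v]["shown"], 1))

def run_campaign(click_prob, rounds=10_000, epsilon=0.1, seed=0):
    """Simulate the generate/measure/double-down loop: each round, pick a
    story variant, record whether it drew a click, and let the picker
    concentrate impressions on whatever performs best."""
    random.seed(seed)
    stats = {v: {"shown": 0, "clicks": 0} for v in click_prob}
    for _ in range(rounds):
        v = pick_variant(stats, epsilon)
        stats[v]["shown"] += 1
        if random.random() < click_prob[v]:
            stats[v]["clicks"] += 1
    return stats

# Three hypothetical headline variants with different (hidden) appeal.
click_prob = {"headline_a": 0.02, "headline_b": 0.20, "headline_c": 0.01}
stats = run_campaign(click_prob)
winner = max(stats, key=lambda v: stats[v]["shown"])
print(winner, stats[winner])
```

The point of the sketch is how little machinery “doubling down” requires: after a few hundred trials, nearly all impressions flow to whichever variant the audience rewards, with no human judging the content at all.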
And what’s coming isn’t just text-based stories. According to Wired, AI is becoming so capable of imitating human speech that fake news could soon include fake soundbites:
By 2018, a nefarious actor may easily be able to create a good enough vocal impersonation to trick, confuse, enrage or mobilise the public. Most citizens around the world will be simply unable to discern the difference between a fake Trump or Putin soundbite and the real thing.
Meanwhile, AI can also generate fake videos, reports Futurism:
If widely adopted, our ability to trust any video or image based solely on what our eyes tell us would be greatly diminished. Video evidence could become inadmissible in court, and fake news could become even more prevalent online as real videos become indistinguishable from those generated by AI.
To be clear, both of the above stories point out the positive uses of these capabilities. So arguably the problem isn’t AI tech so much as human deviousness and gullibility. Think of it this way: we’re already at a point where lots of people believe any fool meme or outrageous news story that pops up in their social media feed as long as it conforms with their sociopolitical worldview, no matter how shoddily it’s put together. One could argue that adding AI-level production values into the mix won’t make a huge difference in terms of people falling for this stuff.
Even so, once AI technology gives devious people the power to blur the line between what is real and what is fake, the results could be very disruptive in an increasingly digital society in which almost every aspect of our daily interactions relies on connectivity and digital platforms. The current dangers involving security, data mining and privacy already have the potential to undermine trust in that ecosystem – the ability to seamlessly plant false information in front of the right audience could undermine that trust even further.
The good news is that AI tech can be used to combat fake news as well as create it. It will likely turn into something of an arms race, as in cybersecurity, with social media and other services deploying the fake-news equivalent of spam filters.
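To make the spam-filter comparison concrete, here is a toy naive Bayes classifier of the kind spam filters have long used, repurposed for headlines. This is a deliberately simplistic sketch – the training sentences are invented, and a real detector would need far more data and far richer signals than bare word counts:

```python
import math
from collections import Counter

def train(labeled_docs):
    """Count words per label, spam-filter style:
    P(label | words) is proportional to P(label) * product of P(word | label)."""
    counts = {"real": Counter(), "fake": Counter()}
    totals = {"real": 0, "fake": 0}
    for text, label in labeled_docs:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Score each label in log space with add-one smoothing
    and return the more likely one."""
    vocab = set(counts["real"]) | set(counts["fake"])
    scores = {}
    for label in counts:
        score = math.log(totals[label] / sum(totals.values()))  # prior
        denom = sum(counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

# Tiny invented training set.
docs = [
    ("officials confirm budget report released today", "real"),
    ("senate committee publishes annual budget figures", "real"),
    ("shocking secret miracle cure they hide from you", "fake"),
    ("you won't believe this shocking celebrity secret", "fake"),
]
counts, totals = train(docs)
print(classify("shocking secret exposed", counts, totals))
```

The same statistical arms race that played out with email spam applies here: filters learn the telltale vocabulary, and the generators learn to avoid it.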
But again, the problem is ultimately humans, notes PopSci:
Political operatives and partisan readers often don’t care if an article is intentionally wrong. As long as it supports their agenda—or just makes them snicker—they’ll share it. According to the 2017 Princeton, Dartmouth, and Exeter study, people who consumed fake news also consumed so-called hard news—and politically knowledgeable consumers were actually more likely to look at the fake stuff. In other words, it’s not like readers don’t know the difference. The media should not underestimate their desire to click on such catnip.
So, AI may be able to identify and flag fake news and content, but that can only do so much in an age where many people already live in a mindset where “real news” is whatever they already believe and “fake news” is anything that contradicts or criticizes those beliefs and opinions – including independent fact-checking sites. I can’t imagine such people accepting that a news story they just forwarded is fake simply because some AI software told them so.