Fake news isn’t just dumb Facebook memes: it’s a business model, and just one aspect of a broader trend of public opinion manipulation, says a new Trend Micro report.
Thanks primarily to last year’s US presidential election, we hear a lot now about fake news – not the kind that Donald Trump typically refers to (i.e. any news story that criticizes him in any way whatsoever), but literally false stories dressed up as actual news, whether for satirical purposes, pranks, political propaganda or disinformation.
There’s been a lot of debate over the effectiveness of AI to spot and flag fake news. What’s interesting is that there’s a lot more to fake news than political trolls posting angry memes and alternative facts about the opposition, or your angry political friend getting fooled by an Onion article.
According to a new paper from Trend Micro, fake news is just one aspect of a much broader trend of public opinion manipulation and “cyber propaganda” by various groups engaged in – for lack of a better term – information wars.
For example, Trend Micro says in this blog post, fake news can be used to discredit or silence a journalist:
The group can mount a four-week fake news campaign to defame the journalist using a range of services available in online marketplaces. Fake news unfavorable to the journalist can be commissioned once per week, promoted with 50,000 retweets and likes as well as 100,000 visits – all for $2,700 per week.
The group can also buy comments to create an illusion of believability. The purchase can start with 500 comments, 400 of which can be positive, 80 neutral, and 20 negative. Spending $1,000 for this kind of service will translate to 4,000 comments. After establishing an imagined credibility, an attacker can launch his smear campaign against his target. Poisoning a Twitter account with 200,000 bot followers costs $240. Ordering a total of 12,000 comments with most bearing negative sentiment and references/links to fake stories against the journalist will cost around $3,000. Dislikes and negative comments on a journalist’s article, and promoting them with 10,000 retweets or likes and 25,000 visits, can cost $20,400 in the marketplaces we have seen.
Note that all of this can be done via paid services. Trend Micro estimates you can sway public opinion against a journalist – or at least drown him/her out – for around $55,000 in total.
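For a sense of scale, the line items quoted above can be tallied directly. This is a minimal sketch: the prices come from the Trend Micro figures excerpted here, the item labels are mine, and the full ~$55,000 estimate includes further services not itemized in the excerpt.

```python
# Hypothetical cost tally for the four-week smear campaign described above.
# Prices are the ones quoted from the Trend Micro report; labels are
# paraphrased for readability.
campaign_items = {
    "weekly fake stories, promoted (4 weeks @ $2,700)": 4 * 2_700,
    "initial batch of 4,000 purchased comments": 1_000,
    "200,000 bot followers on Twitter": 240,
    "12,000 mostly negative comments with links to fake stories": 3_000,
    "dislikes/negative comments promoted with retweets and visits": 20_400,
}

itemized_total = sum(campaign_items.values())
print(f"Itemized services quoted here: ${itemized_total:,}")
```

The items excerpted in this article come to $35,440; the gap between that and Trend Micro’s ~$55,000 total is covered by additional services described in the full report.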
Note too that while the above example is hypothetical, the scenario isn’t – it happens regularly in Mexico, for instance.
It’s fascinating stuff – and also a bit scary, even if you don’t happen to be a journalist.
Trend Micro has written all this up in a new research paper, “The Fake News Machine: How Propagandists Abuse the Internet and Manipulate the Public”, which explores Chinese, Russian, Arabic/Middle Eastern, and English online marketplaces and services used to help carry out fake news campaigns:
Everything from social media promotions, creation of fake comments, and even online vote manipulation are sold at very reasonable prices. Surprisingly, we found that fake news campaigns aren’t always the handiwork of autonomous bots, but can also be carried out by real people via large, crowdsourcing programs.
The paper also illustrates how a fake news or public opinion manipulation campaign can be structured and run efficiently using the “Public Opinion Cycle”, where each stage is backed up by a range of available online services:
This structure is based on the famous Cyber Kill Chain from Lockheed Martin, but applied to opinion manipulation. Case in point: a key story is prepared with secondary side stories planted during the weaponization phase, before online services are utilized for mass delivery.
One takeaway is that the fake news problem is classic tip-of-the-iceberg. Ever since the days of chain email hoaxes, we’ve been told not to believe everything we read on the internet. Yet people still do.
And yes, some of it is silly stuff like that story about NASA running a secret child slave colony on Mars. But more and more of it is becoming sophisticated enough that it will pass at face value – which is all it needs to do in this age where many people not only don’t read what they see on the internet with a critical eye, but also actively tailor their news intake to conform to the particular hyperpartisan echo chamber they subscribe to.
This is arguably the bigger problem with fake news – there’s a large and entirely too willing audience for it. AI and best practices to spot fake news campaigns are all very good, but will only do so much to address the problem. Never underestimate the scope of human gullibility and the willingness of other humans to exploit it.