ITEM: Researchers claim they have developed an AI tool that can fact-check online news articles to see if they are fake news stories or legitimate ones.
The tool, developed by researchers at the University of Waterloo, uses a technique called “stance detection”, in which a deep-learning AI algorithm estimates the perspective (or stance) of two pieces of text relative to a topic, claim or issue. The objective is to rate the accuracy of a post or article’s claims by comparing them with other posts and stories on the same subject.
For example, to fact-check a story headlined “Macau denies entry to Hong Kong media ahead of President Xi visit”, the AI tool would compare the headline to the body text to see if the story matches the headline, then compare the story to other news stories on the same topic to see if they generally match up. The algorithm could also take into account the reliability of the sites hosting the story by determining if the story is hosted on sites that specialize in satire or conspiracy theories, but doesn’t appear on professional news sites that would be likely to cover such a story if it were true.
According to the Waterloo research team, their AI algorithms were “shown tens of thousands of claims paired with stories that either supported or didn’t support them. Over time, the system learned to determine support or non-support itself when shown new claim-story pairs.”
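The claim-story comparison described above can be illustrated with a toy sketch. The Waterloo system is a deep-learning model trained on tens of thousands of labeled pairs; the version below is only a stand-in that thresholds word overlap between a claim and a story, and all names and the threshold value are illustrative assumptions, not the researchers’ method.

```python
import re

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def stance(claim, story, threshold=0.5):
    """Toy stand-in for a learned stance classifier: call it
    'supports' if most of the claim's words also appear in the
    story, else 'does not support'."""
    c, s = tokens(claim), tokens(story)
    overlap = len(c & s) / len(c) if c else 0.0
    return "supports" if overlap >= threshold else "does not support"

claim = "Macau denies entry to Hong Kong media ahead of President Xi visit"
story = ("Hong Kong journalists said Macau denied them entry "
         "ahead of a visit by President Xi.")
print(stance(claim, story))  # prints "supports"
```

A real stance-detection model replaces the overlap threshold with a decision boundary learned from those labeled claim-story pairs, which is what lets it handle paraphrase and contradiction rather than mere word matching.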
Given a claim in one post or story, plus other posts and stories on the same subject collected for comparison, the system correctly determines whether they support it nine times out of 10.
That sets a new accuracy benchmark on a large dataset created for a 2017 scientific competition called the Fake News Challenge.
The eventual goal is to develop a fully automated stance-detection system. In the meantime, Waterloo’s AI algorithm could be used as a screening tool by fact-checkers at social media and news organizations, said Alexander Wong, professor of systems design engineering at Waterloo and a founding member of the Waterloo Artificial Intelligence Institute.
“It augments their capabilities and flags information that doesn’t look quite right for verification,” Wong said in a statement. “It isn’t designed to replace people, but to help them fact-check faster and more reliably.”
Wong’s team is quick to point out that stance-detection tools aren’t a total cure for fake news and disinformation, as the problem is larger than fabricated stories masquerading (or being passed off) as legitimate news. False information comes in other forms, such as viral memes on social media, trollbots pretending to support a particular viewpoint or (more recently) deepfake videos.
In other words, stance detection is a specific tool for a specific problem that is a subset of the overall fake news landscape.
That said, it’s unclear how well stance detection might work for certain kinds of stories. For example, if a politician says on a TV talk show that an opponent accepted bribes five years ago, and ten reliable news sites report that claim, the algorithm would presumably verify only that the politician really did make the claim, not whether the claim itself is true.
It’s also unclear how big a problem disinformation disguised as real news is in the fake news pantheon compared to the aforementioned trollbots and memes, as well as politicians and pundits who circulate or repeat the claims in memes as though they were true.
Also, as we’ve mentioned before, a major element of the fake news problem is the target audience. Between things like confirmation bias and good old gullibility, many people are susceptible to stories that tell them what they want to hear – and there are far too many bad actors willing to give it to them.
On the other hand, every little bit helps. Certainly social media content moderators need all the help they can get to flag fake news stories.
As for news organizations, in theory they should be fact-checking stories before publishing them anyway. The reality, however, is that in the digital news arena the tendency is to post fast and post first to maximize clicks, which doesn’t leave much time for fact-checking – something smaller sites often lack the manpower or budget for in any case. Which is why even big-name news organizations sometimes get suckered by a viral story that turns out to be a hoax.
So any tool that helps speed up the fact-checking process is welcome. It arguably can’t hurt. Let’s hope the Waterloo algorithm and other stance-detection algorithms work as well in the real world as they do in a lab with a limited dataset.