Kids don’t trust Alexa – great, but it’s no cure for fake news

Image credit: Wang Shao Chun

ITEM: Kids don’t believe everything digital assistants like Alexa and Siri tell them, according to recent research, which suggests today’s digital-native kids could grow up inoculated against fake news and disinformation on the internet. However, it’s not nearly that simple.

Various university researchers have been studying how children interact with the virtual assistants behind smart speakers and smartphones. Many kids ask Alexa all kinds of questions – what color is my shirt, is there such a thing as unicorns, etc. And while it’s all cute, researcher Judith Danovitch of the University of Louisville in the US says they’re not just asking questions for fun – they’re testing what Alexa does and doesn’t know, and assessing how trustworthy Alexa is as a source of information.

According to Technology Review, Danovitch and two other researchers conducted a study of kids aged five to eight to see who they trusted more to answer their questions correctly – a virtual assistant, a teacher, or their peers. Results: kids are more likely to trust the teacher (even if the teacher gives the wrong answer) or their peers (even though their knowledge level is roughly the same).

This study follows several similar studies finding that kids don’t unquestioningly believe everything virtual assistants tell them, and often interact with them to work out what kind of authority they have to answer questions knowledgeably. Interestingly, part of this appears to be because virtual assistants are programmed to say “I don’t know” when asked questions about unicorns and Santa Claus – which is honest, but the point is that kids are more likely to trust people who can provide some kind of answer, even if it’s made up.

Danovitch told TR that we can extend this to the internet in general – Alexa et al are basically internet interfaces, and even when it comes to web browsers and apps, kids don’t take everything at face value:

“Kids are paying attention,” Danovitch says. “They’re keeping track of who knows what they’re talking about and who doesn’t. Kids don’t just blindly believe every answer they get. And we look at the internet or computer programs; they don’t blindly believe those either.”

Which certainly doesn’t reflect well on grown-ups in an age when memes, trollbots and fake news plague social media. One implication is that whatever natural scepticism we have as children fades as we get older. Another is that if the digital-native generation is growing up trusting humans more than Alexa, that’s potentially encouraging news – if our generation is getting suckered by false internet content, at least our kids won’t be.

On the other hand, I suspect it’s more complicated than that.

For a start, the research studies themselves aren’t making that grand a conclusion. These studies focus on trust, not truth.

For example, the Danovitch study notes that kids trust teachers to give reliable answers, even if the answer isn’t actually correct (intentionally or mistakenly). In other words, it’s not that hard to fool a child with wrong information, especially if they trust the source.

This is key, because one of the main reasons fake news is able to proliferate, and why so many adults believe it, is that they trust the source, whether it’s a government official from the political party they support, or a news site that exclusively and consistently supports their worldview, or their favorite relative who forwards chain emails or political memes to them.

In theory, nurturing the scepticism that children in these studies display would go a long way toward undermining the perceived trustworthiness – and thus the effectiveness – of online disinformation, and it should be encouraged in any case.

The challenge is that people have a tendency to trust online sources that provide information affirming their worldview. Earlier this year, An Xiao Mina explained in her book Memes to Movements: How the World’s Most Viral Media Is Changing Social Protest and Power that the internet in 2019 serves less as an information superhighway and more as an “affirmation superhighway” that reinforces people’s political beliefs and identities:

Memes play a key role in this problem, as they are more frequently emotive in nature, giving people a place to express themselves and their values. Sometimes this can help marginalized perspectives find voice and access; other times, this serves to marginalize viewpoints even further. Any attempts to circulate useful facts and figures can fail quickly, because networks can be so easily isolated online, and besides, the people making up those networks have different value systems that don’t always accept the presentation of evidence.

In short, fake news, memes and disinformation have been weaponized to exploit and amplify people’s need to connect with others who think like they do and affirm their beliefs and opinions. That’s why proposed solutions like AI filters and blockchain can only go so far in the war on fake news – like with cybersecurity, human nature is typically the weakest link.

So while today’s digital kids may know enough not to trust Alexa and Siri, that innate scepticism by itself won’t be enough to help them navigate the affirmation superhighway of fake news. It’s going to take a combination of tech filters, sensible regulation and continuous education to do that.
