LaMDA isn’t sentient – actually, it’s not even close


Another disagreement between Google and one of its researchers has sparked a big debate on whether Google’s AI language model LaMDA has become sentient. But all of the evidence that I can see points to the contrary and suggests that AI remains as dumb as ever.

This time, the researcher in question is Blake Lemoine, who has been put on administrative leave by Google for violating his confidentiality agreements when he shared his views about LaMDA externally.

Lemoine’s view is that LaMDA is sentient like a human, and his failure to convince his superiors of this claim led him to share his findings with people outside of Google. It is only for sharing his findings with outsiders that Google is censuring Lemoine, and on that basis, I don’t think he has a leg to stand on.

Lemoine states that, in his view, LaMDA has reached a state of consciousness that would allow it to think and act (if it had a body) like a human being. This is the holy grail of AI and, not surprisingly, his claim has hit a wall of scepticism and sharply divided opinion.

I think that LaMDA is nowhere close to being sentient for the following reasons.

1. Three goals of AI

RFM Research has identified three key goals in AI research which, if perfectly solved, could pave the way for AI to become sentient. RFM proposed these goals in 2018 and has been monitoring progress against them ever since. Progress has been extremely slow (as one would expect in a scientific undertaking of this nature), and there is no evidence that these goals have suddenly been solved. Hence, I doubt that LaMDA is sentient; instead, it has been trained on so many data points that it can project the illusion of sentience.

2. Human weaknesses

Specifically, anthropomorphisation: the well-documented human tendency to attribute human traits to non-human entities, which Melanie Mitchell describes in her book on AI, Artificial Intelligence: A Guide for Thinking Humans. It is quite possible that this predisposition influenced Blake Lemoine’s decision-making and helped lead him to this (in my view) erroneous conclusion.

3. Evidence

All of the evidence points in the non-sentient direction, with only feelings, experiences and opinions supporting the sentient side of the debate.

For example, when Lemoine demonstrated his finding to the Washington Post, he had to coach the reporter on how to phrase her questions in order to get human-like responses. I consider this to be evidence that LaMDA is not sentient, because problems like this do not arise when speaking to other human beings.

Furthermore, RFM Research has found evidence that GPT-3 (OpenAI’s language model) has no ability to generalise at all, despite giving the impression of being able to do so.

Buried in OpenAI’s research paper (see Fig. 3.10) is clear evidence that GPT-3 cannot generalise. OpenAI cited the fact that its model could do two-operation basic arithmetic as a sign that it could generalise from language to mathematics, yet GPT-3 could not manage three operations even with single-digit numbers.

In my opinion, this indicates that the answers to the two-operation problems were buried somewhere in GPT-3’s vast training data, where the researchers had been unable to find them but the 175-billion-parameter model could. Three-operation sums will be much rarer in that data and so far less likely to have their answers hidden there.

Therefore, I conclude that GPT-3 has no understanding of maths: a five-year-old can make the cognitive leap from two operations to three without difficulty, but somehow GPT-3 could not. I suspect that if one ran the same test on LaMDA, one would get a similar result.
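
For the curious, here is a minimal sketch of how one might run such a probe oneself. It is illustrative only: it assumes the legacy openai Python client (pre-1.0) and uses a GPT-3-era completion model as a stand-in, whereas OpenAI’s paper used its own evaluation harness on the original GPT-3 model.

```python
# A rough probe of arithmetic generalisation: measure accuracy on
# two-operation vs three-operation single-digit sums.
# Assumptions: the legacy openai Python client (openai<1.0) and a
# GPT-3-era completion model; swap in whatever model you can access.
import random
import re

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def make_problem(n_ops):
    """Build a random single-digit expression with n_ops operators."""
    nums = [random.randint(1, 9) for _ in range(n_ops + 1)]
    ops = [random.choice("+-*") for _ in range(n_ops)]
    expr = str(nums[0])
    for op, num in zip(ops, nums[1:]):
        expr += f" {op} {num}"
    # expr contains only digits, spaces and + - *, so eval is safe here
    # and applies normal operator precedence.
    return expr, eval(expr)

def ask_model(expr):
    resp = openai.Completion.create(
        model="text-davinci-003",  # stand-in model name
        prompt=f"Q: What is {expr}?\nA:",
        max_tokens=8,
        temperature=0,
    )
    return resp["choices"][0]["text"]

def accuracy(n_ops, trials=50):
    correct = 0
    for _ in range(trials):
        expr, answer = make_problem(n_ops)
        reply = ask_model(expr)
        match = re.search(r"-?\d+", reply)
        if match and int(match.group()) == answer:
            correct += 1
    return correct / trials

print("two-operation accuracy:  ", accuracy(2))
print("three-operation accuracy:", accuracy(3))
```

If the reading of Fig. 3.10 above is correct, one would expect accuracy to collapse between the two-operation and three-operation runs rather than degrade gracefully.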

Infinite monkeys

Consequently, I think that all of the evidence points towards another iteration of the Infinite Monkey Problem: the observation that a monkey hitting typewriter keys at random for long enough will eventually produce the complete works of Shakespeare.

These AI models have more parameters than a human can conceive of and access to huge amounts of computational power. So, in effect, there are billions and billions of monkeys all hammering away at typewriters, and that, combined with all of the information available on the internet, is enough to weave the illusion of sentience.
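
To put rough numbers on the analogy (my own back-of-the-envelope illustration, not RFM’s): even a very short phrase is astronomically unlikely to be typed by chance in a single attempt, and it is only sheer scale of attempts that makes success conceivable.

```python
# Back-of-the-envelope numbers for the Infinite Monkey Problem:
# how unlikely is it to type a short phrase by hitting keys at random,
# and how many attempts does meaningful success require?
import math

KEYS = 27  # 26 letters plus the space bar (illustrative assumption)
target = "to be or not to be"

# Probability that one run of len(target) random keystrokes matches exactly.
p = (1 / KEYS) ** len(target)
print(f"chance per attempt: {p:.3e}")

# Expected number of attempts before the first exact match is 1 / p.
print(f"expected attempts:  {1 / p:.3e}")

# Attempts needed for a 50% chance of at least one match:
# solve 1 - (1 - p)**n = 0.5 for n. Use log1p to avoid rounding
# 1 - p to exactly 1.0 in floating point.
n = math.log(0.5) / math.log1p(-p)
print(f"attempts for a 50% chance: {n:.3e}")
```

The point of the exercise is that nothing in it requires understanding, only brute-force scale.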

Hence, I do not think that Skynet has been created, nor do I think we are about to enter into a cataclysmic war with machines that results in humankind being enslaved. AI remains very good at solving problems where the task at hand is both finite and stable, and pretty bad at everything else – which is why computers still struggle with walking on legs and driving cars on open roads.

It is to the application of AI to these tasks that resources and time should be allocated, leaving algorithms like LaMDA, GPT-3 and so on as interesting research projects and little more.
