Meta’s Galactica shows just how dumb AI can be

Bears in space! Image by Popmarleo | Bigstockphoto

The failure of Meta’s scientific language model Galactica to survive more than three days online is yet another sign that, when it comes to AI, the machines are still far too stupid to do anything they have not been explicitly shown before.

Galactica is a language model created by Meta, designed to help scientists search the existing literature for work relevant to their own research. It was trained on 48 million examples drawn from scientific articles, textbooks, websites and encyclopaedias, and was promoted as a time-saving tool for searching and summarising that literature.
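Meta also released the model weights, and the checkpoints remained downloadable from Hugging Face even after the public demo was pulled. As a rough sketch of how a researcher would query one of them, assuming the transformers library and the facebook/galactica-1.3b checkpoint (the prompt follows the citation-prediction pattern from the model’s documentation):

# A minimal sketch of querying a Galactica checkpoint locally,
# assuming the Hugging Face `transformers` library and Meta's
# facebook/galactica-1.3b weights. [START_REF] is one of Galactica's
# special tokens, used here to prompt the model for a citation.
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-1.3b")

prompt = "The Transformer architecture [START_REF]"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# The model completes the prompt with a plausible-looking reference;
# whether that reference actually exists is exactly the problem.
outputs = model.generate(input_ids, max_new_tokens=60)
print(tokenizer.decode(outputs[0]))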

As a one-time PhD student who spent weeks sifting through printed articles in scientific libraries, I can completely understand what a fantastic tool this could be. However, to be useful, it has to be accurate – because bad scientific data is far worse than no data at all.

Bears in space

This, of course, is where Meta’s Galactica fell over. Experts in various scientific fields found that Galactica gave “wrong or biased information that sounded right” and, in some cases, simply made up fake data.

Galactica even produced wiki articles on the history of bears in space. Something that silly is easy to spot, but the same failure mode makes it highly likely that fabricated data will also appear in fields like quantum mechanics or game theory, where it would be much harder to detect.

Also troubling was the fact that Meta’s own chief scientist, the renowned AI researcher Yann LeCun, failed to admit Galactica’s failure and instead seemed to blame the scientific community for abusing it.

Why Galactica failed

The reality of AI, and the reason why Galactica failed, is very simple to understand.

Deep learning models are simply sophisticated pattern-matching systems with no causal understanding of what they are doing. This makes them “brittle”: if the dataset is not fully defined, or if something changes, the model breaks.
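To see this concretely, consider the minimal sketch below, which assumes the Hugging Face transformers library; the small GPT-2 checkpoint stands in only because it is freely downloadable, not because it matches Galactica’s scale. A generative language model completes a false premise just as fluently as a true one, because it predicts likely next words rather than checking facts.

# A minimal sketch: a generative language model completes a false
# premise as fluently as a true one. Assumes the Hugging Face
# `transformers` library; GPT-2 stands in for Galactica only because
# it is small and freely available.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompts = [
    "Water boils at 100 degrees Celsius because",  # true premise
    "The first bear was sent into space in",       # false premise
]
for prompt in prompts:
    out = generator(prompt, max_new_tokens=30, do_sample=False)
    # The model produces confident text either way; nothing in the
    # architecture verifies the claim against reality.
    print(out[0]["generated_text"], "\n")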

The practical upshot is that deep learning can often be taught to perform a task better than humans when the task at hand has a finite and stable dataset. Playing games, dynamically tuning antennas or scanning specific types of medical biopsy are good use cases precisely because the dataset meets these criteria.

However, tasks like holding a conversation, driving, or cataloguing and understanding the body of scientific knowledge do not fit these criteria, which is why models built to do them always disappoint. Put another way, there is a reason why Alexa, Google, Siri and so on still have little use beyond setting timers and playing music, and there is little prospect of this changing anytime soon.

Another black eye for Meta

Galactica is another black eye for Meta, underpinning RFM’s long-held view that AI is one of Meta’s biggest weaknesses and indicating that the company still has some way to go to fix it.

This is a major problem because Meta has struggled badly with automated moderation of content on its social media properties, and it needs to get this right in order to reduce expenses. That is more urgent than ever with a 13% headcount reduction being pushed through and a further heavy decline in EPS likely in 2023.

Hence, I think this is a sign that large operational savings from automation remain as elusive as ever, leaving me increasingly pessimistic in the short term. With 2023 EPS possibly coming in at around $5.20, Meta becomes interesting at $70 per share, raising the possibility of further heavy declines in the share price should the environment continue to worsen.
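For what it is worth, the multiple implied by those numbers is easy to check; the back-of-the-envelope sketch below uses only the $5.20 EPS estimate and the $70 entry price from the paragraph above.

# Back-of-the-envelope check of the valuation arithmetic above.
# Only the EPS estimate and the entry price come from the text;
# the implied multiple is simply derived from them.
eps_2023 = 5.20      # estimated 2023 earnings per share, USD
entry_price = 70.00  # price at which Meta "becomes interesting", USD

implied_pe = entry_price / eps_2023
print(f"Implied P/E at $70: {implied_pe:.1f}x")  # roughly 13.5x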

For now, I would continue to look elsewhere.
