The success of Meta’s Diplomacy algorithm is further evidence that the way to begin addressing the more difficult problems of AI is to combine the disciplines already in use as opposed to just throwing infinite data and infinite computing power at the problem.
Meta has produced an algorithm called Cicero to play the online version of Diplomacy. The fact that it fared well against human players is a sign of progress in building AI that can successfully complete more complex tasks.
This is in stark contrast to its recently published scientific language model, Galactica, which relied entirely on deep learning and was, in my opinion, a galactic failure.
Diplomacy is a complicated game that requires strategy, alliance forming, negotiation, persuasion, deception and even threats in order to win by occupying the most valuable territories on the board. Each turn allows an unspecified amount of time for one-to-one discussion between players; when everyone is ready, all players make their moves simultaneously.
This is where Diplomacy differs from other games like Go or Chess – players need to deal with the unstructured environment of dialogue, and also interpret the intentions of other players.
The difficulties with Diplomacy
Deep learning models are simply sophisticated pattern-matching systems and as such, they have no causal understanding of what it is that they are doing. This makes these models “brittle” – should the dataset not be fully defined, or should something change, then the model will break.
The practical upshot is that deep learning can often be taught to perform the task better than humans where the task at hand has a finite and stable dataset.
Playing games, tuning antennas dynamically or scanning specific medical biopsies are very good use cases where the dataset meets these criteria. However, tasks like conversation and driving do not fit these criteria. That’s why when models are built to do these things, they always disappoint.
This is where the difficulties with Diplomacy immediately become apparent, because human interaction is a crucial part of the game and is something that deep learning really struggles with (as Meta’s own Galactica algorithm demonstrates).
Why Cicero works and Galactica doesn’t
The reason why Cicero works and Galactica does not is that Cicero is far more than just a massive amorphous neural network. It is made up of an intricate series of elements that allow it to play the game. These elements include a natural language model, a filter to stop it talking nonsense and a series of software modules that assess the board, calculate outcomes and plan the actions it is going to take.
It is a combination of deep learning for specific tasks (in this case, language) and rules-based software to do the reasoning, which is what practitioners refer to as neurosymbolic AI. This combination is precisely what RFM Research thinks will address the limitations of deep learning.
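The division of labour described above can be sketched in very rough form. The following is a hypothetical illustration, not Meta's actual code: the function names (`neural_language_model`, `nonsense_filter`, `symbolic_planner`) and the toy consistency check are my own inventions, and the real components are vastly more sophisticated. The point is the structure: a neural network generates candidate dialogue, a filter rejects messages inconsistent with the plan, and rules-based software does the planning.

```python
# Hypothetical sketch of a Cicero-style neurosymbolic loop.
# All names and logic here are illustrative assumptions, not
# descriptions of Meta's implementation.

def neural_language_model(board_state, dialogue_history):
    """Deep-learning component: proposes candidate messages."""
    # In a real system this would be a large language model.
    return ["I will support you into Munich", "Let us attack Russia"]

def is_consistent(message, planned_moves):
    # Toy consistency check: the message must mention a planned target.
    return any(target in message for target in planned_moves)

def nonsense_filter(candidates, planned_moves):
    """Filter component: drop messages inconsistent with the plan."""
    return [m for m in candidates if is_consistent(m, planned_moves)]

def symbolic_planner(board_state):
    """Rules-based component: assesses the board and plans moves."""
    # A real planner would search over joint actions; here we stub it.
    return ["Munich"]

def play_turn(board_state, dialogue_history):
    moves = symbolic_planner(board_state)
    candidates = neural_language_model(board_state, dialogue_history)
    messages = nonsense_filter(candidates, moves)
    return moves, messages

moves, messages = play_turn({}, [])
```

Note that the planner, not the language model, decides the moves; the language model only talks about them, and the filter keeps the talk consistent with the plan.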
Cicero can score in the top 10% of players (both amateur and professional) in the blitz version of the game (five minutes of negotiation time), which, given the open-ended nature of the dataset, is an impressive achievement.
This provides solid evidence that, until the fundamental weaknesses of neural networks can be fixed, this combinatorial approach is the one most likely to yield the best results.
RFM refers to this as the Jigsaw Theory, which postulates that a series of neural networks all working on very specific tasks can between them describe the environment well enough that software is able to make the right planning decisions. This is how RFM Research is forecasting that autonomous driving will be mature enough to begin proper commercial deployments in 2028, a target it has held since 2017.
It is striking that this success comes hot on the heels of such a spectacular failure and, in my opinion, it demonstrates just how much better suited the neurosymbolic approach is to complex problems. It also comes just as the premier event in the AI industry (the NeurIPS conference) gets underway for 2022, and I expect it to be widely discussed there.
To be sure, I don’t see this as a significant breakthrough towards one of the key goals of AI, which is to create algorithms that can generalise. But it is a demonstration that this workaround approach can deliver economically viable results.
I expect to hear a lot more on neurosymbolic AI going forward, and continue to think that this approach will provide a workable solution for autonomous driving.