Google is proving the fewer humans involved in AI training, the better

Image credit: maxuser / Shutterstock.com

The first results from Google’s AutoML AI project are beginning to surface, and they suggest once again that machines may end up being better coders than humans.

AutoML was announced at Google I/O in May 2017 and failed to attract much attention, mainly, I suspect, because most commentators did not grasp the significance of the concept. AutoML is a neural network that is capable of selecting the best from a large group of neural networks that are all being trained for a specific task.

This is potentially a hugely important development, as it marks a step forward in the quest to enable the machines to build their own AI models. Building models today is still a massively time and processor-intensive task that is mostly done manually and is very expensive.

If machines can build and train their own models, a whole new range of possibilities is opened up in terms of speed of development as well as the scope of tasks that AI can be asked to perform. RFM has highlighted automated model building as one of the major challenges of AI, and if Google is starting to make progress here, it represents a further distancing of Google from its competitors when it comes to AI.

In the months since launch, AutoML has been used to build and manage a computer vision algorithm called NASNet. AutoML has applied reinforcement learning to NASNet to improve its ability to recognize objects in video streams in real time. When tested on industry-standard benchmarks, NASNet outperformed every other system available, albeit only marginally ahead of the best of the rest.
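To make the idea concrete, here is a minimal toy sketch of the reinforcement-learning-style search loop described above. This is not Google's AutoML code; the layer choices, the synthetic scoring function, and the update rule are all illustrative assumptions. The point is only the shape of the loop: a controller samples candidate architectures, each candidate is scored, and the controller's sampling preferences are nudged toward what scored well.

```python
import random

# Toy illustration (NOT Google's actual AutoML): a controller samples
# candidate architectures, scores them, and reinforces the choices that
# led to higher scores.

LAYER_CHOICES = ["conv3x3", "conv5x5", "maxpool", "identity"]  # hypothetical menu

def sample_architecture(prefs, depth=4):
    """Sample one candidate architecture from the controller's preferences."""
    weights = [prefs[c] for c in LAYER_CHOICES]
    return [random.choices(LAYER_CHOICES, weights=weights)[0] for _ in range(depth)]

def evaluate(arch):
    """Stand-in for training + validation. A real system would train the
    candidate network; here a synthetic score favours conv3x3 layers."""
    return sum(1.0 if layer == "conv3x3" else 0.2 for layer in arch) / len(arch)

def search(iterations=200, lr=0.1, seed=0):
    random.seed(seed)
    prefs = {c: 1.0 for c in LAYER_CHOICES}  # start with a uniform controller
    best_arch, best_score = None, -1.0
    for _ in range(iterations):
        arch = sample_architecture(prefs)
        score = evaluate(arch)
        for layer in arch:            # reinforce the sampled choices by the reward
            prefs[layer] += lr * score
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best_arch, best_score = search()
print(best_arch, round(best_score, 2))
```

In a real system the evaluation step is where the enormous compute cost lives, since each candidate must actually be trained; the controller logic itself is comparatively cheap.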

I think that this is significant because it is another example of an algorithm performing better when humans are absent from the training process than when they are involved.

The previous example is AlphaGo Zero. I see this as a step forward in addressing RFM’s three big challenges of AI, but there remains a very long way to go.

These challenges are:

First: the ability to train AIs using much less data than today.

Second: the creation of an AI that can take what it has learned from one task and apply it to another.

Third: the creation of AI that can build its own models rather than relying on humans to do it.

When I look at the progress that has been made over the last year in AI, I think that Google has continued to distance itself from its competition. Facebook has made some improvements around computer vision, but its overall AI remains so weak that it is being forced to hire 10,000 more humans because its machines are not up to the task.

Consequently, I continue to see Google out in front, followed by Baidu and Yandex, with Microsoft, Apple and Amazon making up the middle ground.

Facebook remains at the back of the pack, and its financial performance next year is going to be hit by its inability to harness machine power.

For those looking to invest in AI excellence, Baidu is the place to look: its search business and valuation have been hard hit by Chinese regulation but are now starting to recover. Baidu represents one of the cheapest ways to invest in AI available.

This article was first published on Radio Free Mobile.

Richard Windsor

Dr Richard Windsor is the founder of Radio Free Mobile which is an independent research provider. The research helps clients to understand and evaluate the players in the digital ecosystem and presents a unique perspective on how all the pieces fit together in an easy to read and digest way. The product is available on a subscription basis and counts members of the handset, telecom carrier, Internet, semiconductor and financial industries as its subscribers. RFM is the land of the one man band, meaning that Dr. W. also makes the tea.
