ITEM: Neural networks may be groundbreaking, but they’re slow learners and don’t scale well: eventually they get bogged down by all the math. Some researchers think the solution may lie in physical neural networks that use the physical world itself to do the math.
To explain: digital neural networks use artificial neurons (nodes that store values) and algorithms to think and learn. Compared to the human brain, though, the process is inefficient in terms of energy consumption, and the learning rate is much slower. Stuffing more neurons into a neural network gives it more ability to do human-like things such as pattern recognition and even essay writing, but it takes an enormous amount of computation: the biggest neural networks today have to crunch more than half a trillion numbers, and that amount only grows as more neurons are added.
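To get a feel for why the math piles up, here's a minimal Python sketch (the layer sizes are illustrative, not from the article) counting the multiply-add operations in a fully connected network. Connecting every neuron in one layer to every neuron in the next means the work grows much faster than the neuron count does:

```python
def layer_flops(n_in, n_out):
    """Multiply-add operations for one fully connected layer."""
    return n_in * n_out

def network_flops(layer_sizes):
    """Total multiply-adds for one forward pass through the network."""
    return sum(layer_flops(a, b) for a, b in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical sizes: doubling the neurons per layer quadruples the work.
print(network_flops([1000, 1000, 1000]))  # 2,000,000 multiply-adds
print(network_flops([2000, 2000, 2000]))  # 8,000,000 multiply-adds
```

Every one of those multiply-adds is a number a digital computer has to crunch explicitly; the physical networks described below aim to let light, sound, or voltage do that arithmetic implicitly.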
According to Quanta, a number of physicists are working on neural network models that essentially offload the computational part to physical networks that harness sound vibrations, laser light and voltages to do the number-crunching.
Peter McMahon, a physicist-engineer at Cornell University leading a research team building a physical neural network that uses sound vibrations, tells Quanta that many physical systems can do certain computations way more efficiently or faster than a computer, and offers wind tunnels as an example:
When engineers design a plane, they might digitize the blueprints and spend hours on a supercomputer simulating how air flows around the wings. Or they can stick the vehicle in a wind tunnel and see if it flies. From a computational perspective, the wind tunnel instantly “calculates” how wings interact with air.
This doesn’t mean that we’ll be turning wind tunnels into computers. The idea is to exploit the physical properties of things like light and sound to create circuits that make the same kinds of calculations that neural networks make without literally doing the math part.
Early experiments are promising, although they’re naturally nowhere close to outperforming an existing digital neural network. One key challenge is that physical neural networks have to be able both to think (i.e. classify an image as a dog or a flower) and to learn (i.e. improve until it classifies correctly). Researchers are exploring different methods to achieve both, but they say the results so far are good enough to prove the concept is feasible.
It’s early days for physical neural networks, of course, but it’s also early days for neural networks in general. And if the physicists are right, digital neural networks will eventually hit a computational wall, becoming too slow and inefficient to be of practical use, while physical neural networks will keep naturally doing what they do.
Lots of explanatory tech details here.