EmTech Asia roundup: data center power savings, better batteries and space lasers

At MIT’s EmTech Asia 2017 last week, much of the focus was on pure science, but there were also several tracks that provided a preview of new ideas and advances in the ICT space, from the role of artificial intelligence in services to different ways of thinking about smart cities. Even tech-heavy presentations on memory management, floating point numbers in computing and meta-material nano-lattices were strikingly relevant to comms services in terms of power consumption in data centers and improved capacity for device batteries. And then there’s the idea for a LEO satellite network using lasers to deliver gigabit connectivity to everyone on earth.

Memory management eats up more power than compute

The EmTech Asia computing track presented ways to dramatically reduce power consumption. Computing in a post-Moore's law world, the speakers argued, calls for radical redesigns rather than the incremental steps we have had for the past 50 or so years.

Thomas Schomers, the 20-year-old CEO of Rex Computing, said that Gordon Moore observed a trend in which ever more transistors could be fitted onto a chip without increasing its cost. While this has held true for over 50 years, today the increase in transistors is not yielding a corresponding increase in performance, because of design choices in memory systems made in the early days of computing.

Thomas Schomers, CEO of Rex Computing

One of those design choices, which IBM came up with in 1968, was to use the free extra transistors to take the burden of memory management off programmers. This led to the memory management unit (MMU) and later the cache, but both come at a cost.

Today, compute performance has hit a memory wall, and ever more elaborate multi-tier cache designs have been created to cope. However, the energy costs of the cache hierarchy are staggering. On Intel's Sandy Bridge CPUs, it takes only 10 picojoules (pJ) to perform an operation but 4,200pJ to move the data to the compute core – around 420 times as much energy to move the data as to use it. People assume this is unavoidable because the data comes from off-chip DRAM, but in fact 2,520pJ of that 4,200pJ is consumed on-chip, and only 1,680pJ off-chip.

Schomers founded Rex Computing to build a radically new CPU from the ground up, one that does away with the legacy MMU and cache and instead gives programmers a memory scratchpad to control directly. Cutting out the MMU and cache management leads to dramatic savings in power consumption, he says. The CPU is massively parallel, with a "multiple instruction, multiple data" (MIMD) design. The way programmers work will need to change, and the company offers simulators to show developers how the scratchpad memory works and how to program for it. The company has recently produced a working 16-core prototype with its $1.25 million in seed funding.
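
The contrast with today's cache-managed CPUs is easiest to see in code. The sketch below is illustrative only – the scratchpad_* calls are hypothetical placeholders, not Rex Computing's actual API, and are emulated here with ordinary malloc/memcpy so the example runs on any machine:

```c
/* Minimal sketch: implicit cache use vs. an explicit, programmer-managed
 * scratchpad. The scratchpad_* primitives are hypothetical and emulated
 * on a normal host for the sake of a runnable example. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TILE 256

/* Hypothetical scratchpad primitives, emulated with heap memory. */
static void *scratchpad_alloc(size_t bytes) { return malloc(bytes); }
static void  scratchpad_copy_in(void *spad, const void *dram, size_t bytes) {
    memcpy(spad, dram, bytes);   /* on real hardware: an explicit on-chip move */
}

/* Conventional CPU: the cache hierarchy decides what stays on-chip. */
static double sum_cached(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];               /* cache misses and evictions are invisible here */
    return s;
}

/* Scratchpad style: the program stages each tile on-chip explicitly. */
static double sum_scratchpad(const double *a, size_t n) {
    double *tile = scratchpad_alloc(TILE * sizeof *tile);
    double s = 0.0;
    for (size_t i = 0; i < n; i += TILE) {
        size_t len = (n - i < TILE) ? (n - i) : TILE;
        scratchpad_copy_in(tile, a + i, len * sizeof *tile);
        for (size_t j = 0; j < len; j++)
            s += tile[j];        /* every access hits the fast local memory */
    }
    free(tile);
    return s;
}

int main(void) {
    enum { N = 1000 };
    double a[N];
    for (size_t i = 0; i < N; i++) a[i] = (double)i;
    printf("cached:     %.0f\n", sum_cached(a, N));
    printf("scratchpad: %.0f\n", sum_scratchpad(a, N));
    return 0;
}
```

The point of the second version is that data movement becomes an explicit, visible step the programmer (or compiler) schedules, rather than something a power-hungry cache hierarchy guesses at.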

More power savings: change 64-bit floating point to 32-bit unum

Another way to dramatically reduce power consumption in compute workloads is to rethink the 103-year-old concept of floating point numbers in computing.

John Gustafson, former chief product architect at AMD and former director of Intel Labs

John Gustafson, former chief product architect at AMD and former director of Intel Labs, spoke of his work on a new way of representing floating point numbers in computing called the unum.

People think that computers do not make mistakes – but they do so all the time, and nowhere is it clearer than in floating point. Typing “1 divided by 3” in a calculator gives 0.33333333, but multiplying that by 3 does not yield 1 but 0.9999999.
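
The calculator behaviour comes from truncating decimal digits, but binary floating point has the same underlying problem: most decimal fractions have no exact binary representation, so small rounding errors creep in and compound. A quick self-contained illustration (my own example, not Gustafson's):

```c
#include <stdio.h>

int main(void) {
    /* 0.1 and 0.2 have no exact binary representation, so their sum
       misses 0.3 by a tiny rounding error. */
    double a = 0.1 + 0.2;
    printf("0.1 + 0.2        = %.17g\n", a);        /* 0.30000000000000004 */
    printf("equals 0.3?        %s\n", a == 0.3 ? "yes" : "no");

    /* The error compounds when the operation is repeated. */
    double t = 0.0;
    for (int i = 0; i < 1000000; i++)
        t += 0.1;
    printf("0.1 summed 1e6 x = %.17g\n", t);        /* not exactly 100000 */
    return 0;
}
```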

These small rounding errors add up over time. During the Gulf War, for example, a rounding error in the Patriot missile defense system's clock arithmetic meant that after 100 hours of operation, its timing was off by about 0.34 seconds. A Scud missile travels more than half a kilometer in that time – and that error cost the lives of 28 soldiers.
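
The arithmetic behind that failure is well documented: the system converted an integer count of tenths of a second into seconds by multiplying by 0.1 held in a 24-bit fixed-point register, and 0.1 has no finite binary expansion. A rough reconstruction, using the commonly cited 23-bit fraction and an approximate Scud speed as assumptions:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* 0.1 truncated to 23 binary fraction bits, as in the Patriot's
       24-bit fixed-point time register (commonly cited reconstruction). */
    double stored_tenth = floor(0.1 * 8388608.0) / 8388608.0;   /* 2^23 */
    double per_tick_err = 0.1 - stored_tenth;                   /* ~9.5e-8 s */

    long   ticks = 100L * 3600L * 10L;      /* tenths of a second in 100 hours */
    double drift = per_tick_err * ticks;    /* ~0.34 s */

    double scud_m_per_s = 1676.0;           /* approximate Scud speed */
    printf("error per tick : %.3g s\n", per_tick_err);
    printf("drift, 100 hrs : %.2f s\n", drift);
    printf("Scud travel    : ~%.0f m in that time\n", drift * scud_m_per_s);
    return 0;
}
```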

The question is one of precision versus accuracy. Precision is the number of digits available to store a value; accuracy is the number of those digits that are actually correct. Put another way, precision is the means – accuracy is the goal.

A conventional floating point number consists of a sign, significand and exponent. The problem is that as a floating point format's dynamic range increases, its precision suffers. Gustafson's solution, the unum, has six fields – sign, exponent, fraction, ubit, exponent size and fraction size – and in his example occupies 29 bits instead of the 80 bits of extended-precision floating point. Bits can be added to increase dynamic range or resolution as required.
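
To make those six fields concrete, here is a C bit-field sketch. It is an illustration only, not Gustafson's reference implementation, and the field widths are arbitrary assumptions; in a real unum the exponent and fraction widths vary from value to value, which is precisely what the two size fields record.

```c
#include <stdio.h>

/* Illustrative layout only: field widths are assumed for the sketch. */
typedef struct {
    unsigned sign      : 1;   /* sign of the value                            */
    unsigned exponent  : 8;   /* exponent bits actually used (variable width) */
    unsigned fraction  : 16;  /* fraction bits actually used (variable width) */
    unsigned ubit      : 1;   /* 1 = inexact: the true value lies in the open
                                 interval beyond the fraction shown           */
    unsigned exp_size  : 3;   /* how many exponent bits this unum carries     */
    unsigned frac_size : 4;   /* how many fraction bits this unum carries     */
} unum_sketch;

int main(void) {
    printf("fields in this sketch total %d bits\n", 1 + 8 + 16 + 1 + 3 + 4);
    return 0;
}
```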

He showed how calculations with 32-bit unums were much more accurate than with 80-bit floating point, with one case yielding just three decimal digits of accuracy with floating point against six with unums.

This translates into immense power, energy and storage savings as well. The energy needed to transfer 64 bits from memory to the compute core is 4,200pJ – by using 32-bit unums, that is halved to 2,100pJ.

Gustafson noted that the total energy cost of going from 32-bit to 64-bit values can be as much as 30x greater. He has written drop-in unum libraries to replace floating point, released under the MIT open-source license, and spoke of how the unum could one day be implemented in silicon in the compute core itself.

Nano-lattice batteries

Elsewhere, speakers at EmTech Asia focused on exotic meta-material nano-lattices that had the potential to revolutionize battery capacity.

Julia Greer, Professor of Materials Science and Mechanics at Caltech, spoke of how nano-architecture is now creating many new possibilities in materials science, producing materials lighter and stronger than was imaginable until recently.

Julia Greer, Professor of Materials Science and Mechanics at Caltech

Greer’s work centers on nanoscale lattices of materials. At the nanoscale, some materials get stronger, some get weaker, and some stop breaking altogether. One ceramic became extremely strong when the walls of its tubes were 50 nanometers thick – but at 10nm, instead of getting weaker, it turned into a springy sponge that could collapse and bounce back into shape.

Greer said that replacing graphitic carbon anodes with microlattice silicon ones would allow the anode to expand and shrink by 200% without shattering, opening the door to much more efficient battery designs.

Zhi Wei-seh, research scientist at the Institute of Materials Research and Engineering, A*STAR, spoke of his work on lithium-sulfur batteries with 5x the capacity of today's cells. Current lithium-sulfur batteries have short cycle lives because sulfur leaks into the electrolyte. Zhi addressed that problem by creating a nano titanium dioxide yolk-shell structure around the sulfur cathode material, leaving space for it to expand. Encapsulation with graphene could further increase conduction dramatically, allowing the battery to charge and discharge in 15 minutes or less with 3x the capacity of today's lithium-ion batteries.

Gigabit connections for everyone with LEO free-space optics

Meanwhile, over in the Space 4.0 track, Rohit Jha, engineer and CEO at Transcelestial, shared his vision of a low-earth-orbit network of free-space lasers to move far more data, far faster – and without having to compete for scarce radio spectrum.

Transcelestial is developing free-space lasers with microradian pointing accuracy – enough precision, in principle, for someone in San Francisco to hit a picture frame in New York.
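
As a rough order-of-magnitude check (the ~4,100km San Francisco–New York distance is my assumption, not a figure from the talk), the small-angle approximation says a one-microradian pointing error translates into an offset of about four meters at that range – so picture-frame precision implies sub-microradian pointing:

```c
#include <stdio.h>

int main(void) {
    /* Small-angle approximation: offset ~= angle (rad) * distance (m).
       The San Francisco-to-New York distance is an assumed round figure. */
    double angle_rad  = 1e-6;     /* one microradian            */
    double distance_m = 4.1e6;    /* ~4,100 km, SF to New York  */
    printf("offset at that range: ~%.1f m\n", angle_rad * distance_m);
    return 0;
}
```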

Today’s state-of-the-art radio broadband satellite, Viasat-2, offers 140 Gbps of combined bandwidth, but that translates to only 15 Mbps per user. Jha’s idea is to deploy a LEO constellation of satellites with free-space lasers that could offer every user on the planet gigabit-class connections – and do so with much, much lower latency than anything available before. In simulations, such a constellation has been shown to cut the latency between Tokyo and New York by up to 100ms – which is potentially worth a lot to high-frequency traders in stock markets.
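
A back-of-envelope check of that latency figure, with every input an assumption for illustration: a Tokyo–New York great-circle distance of roughly 10,850km, light in fiber travelling at about c/1.47, a terrestrial route some 40% longer than the great circle, and the extra climb to and from low earth orbit ignored:

```c
#include <stdio.h>

int main(void) {
    const double c_km_s      = 299792.0;        /* speed of light in vacuum    */
    const double gc_km       = 10850.0;         /* Tokyo-New York great circle */
    const double fiber_km    = gc_km * 1.4;     /* assumed routing detour      */
    const double fiber_speed = c_km_s / 1.47;   /* light is slower in glass    */

    double rtt_fiber  = 2.0 * fiber_km / fiber_speed * 1000.0;   /* ms */
    double rtt_vacuum = 2.0 * gc_km / c_km_s * 1000.0;           /* ms */

    printf("idealized fiber RTT : ~%.0f ms\n", rtt_fiber);
    printf("vacuum-speed RTT    : ~%.0f ms\n", rtt_vacuum);
    printf("potential saving    : ~%.0f ms\n", rtt_fiber - rtt_vacuum);
    return 0;
}
```

The resulting gap of several tens of milliseconds is in the same ballpark as the simulated saving, with real route lengths and constellation geometry shifting it in either direction.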
