Speed and scale will kill your IoT network if you get them wrong


The Internet of Things is all about connecting things, simply because we can. If that observation were true, we could simply string cable from one thing to the next and declare victory. Job done. Sadly, it is not the case. Two elements of the IoT data deluge confound the rollout of those transformative implementations futurists are so keen to describe (think Blade Runner 2049): speed and scale.

Speed first. Driving a car around a race track at 30kph is a leisurely and (frankly) moderately boring undertaking. There is plenty of space to take the turns, the straights stretch to the horizon, and the challenge level is barely one out of ten.

Ramping up to 300kph is a whole new level of experience. The environment is identical: the curves and runoffs are the same size, placed in the same orientation. But everything happens not just ten times faster – it seems to happen at a pace that is impossible to digest. The car, for one thing, is straining for grip, for cornering, for acceleration and braking. Changes in the track come up so much faster – the braking markers before each corner flash by rather than floating leisurely across the windshield.

When the time available to make business decisions is measured in days, weeks or months, there is time to consider, to digest data, to formulate and discard proposals – the equivalent of the 30kph drive. In the digitally enabled economies we are entering, customers, markets and competitors demand the 300kph response: how will pricing change in response to a competitor, how quickly can quotes and solutions be delivered? All of this happens so much faster that there is barely time for conscious thought – the response must become instinctive, must happen with clarity and efficiency even as tremendous forces bend and break components of the overall structure, whether it is a race car or a business.

A similar analogy is the SR-71 Blackbird, a spy plane from the sixties that flew at the edge of space at speeds over 3,500kph. In flight, surfaces of the plane reached temperatures of over 800ºC, getting air to the engines was a constant challenge, and the aircraft needed refueling roughly every two hours of flight.

This is the world of an economy at speed – decisions based on instinct under incredible pressure.

Data makes this speed possible – not just small amounts of data but ever more granular collections. Where are the customers walking / looking / pausing now? The race car driver and the spy plane pilot both rely on automation and data to make minute-by-minute, second-by-second decisions.

Bringing us to the second problem: scale.

Speed is fine and challenging when a single car is on a track or a single plane flies at 80,000 feet. Multiply the problem by adding another 20, 30, 40 cars to the race track, or consider a group of SR-71s flying in tight formation at Mach 3. With each component delivering maximum data to the decision-making process, dealing with the scale of multiple devices becomes even more challenging. Tens of devices, each delivering multiple pieces of data per second, is manageable in an IoT environment – a typical proof-of-concept. Scale up to tens of thousands of devices and things get far more interesting from a network, storage and analytics perspective. There is simply too much of everything, and things start to break as scale pushes the digital enterprise to the edge of space, to the limits of traction on the race track.
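To see why the jump from a proof-of-concept to production scale hurts, here is a minimal sketch of the arithmetic. The per-device figures (one 1 KB reading per second) are illustrative assumptions, not numbers from the article:

```python
# Rough ingest-rate sketch. The per-device message size and rate are
# illustrative assumptions, not figures from the article.
def ingest_rate_mb_per_s(devices, msgs_per_s=1, msg_kb=1.0):
    """Aggregate inbound data rate in MB/s for a fleet of devices."""
    return devices * msgs_per_s * msg_kb / 1000

for fleet in (10, 10_000, 100_000):
    print(f"{fleet:>9,} devices -> {ingest_rate_mb_per_s(fleet):,.2f} MB/s sustained")
```

Ten devices trickle in kilobytes per second; tens of thousands demand tens to hundreds of megabytes per second, sustained around the clock – and that is before storage and analytics enter the picture.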

Intel estimates that a single connected car will produce around 4TB of data per day – enough to fill a hard drive the size of a cell phone. The Sydney Harbour Tunnel reports almost 90,000 vehicles passing through it each day. If each of those vehicles generates 4TB of data per day, the Sydney Harbour Tunnel vehicles alone would produce 360,000TB of data per day. Optus in Australia recently announced plans to roll out a 4.5G network capable of theoretical speeds of 1 Gbps. At that speed a single link moves 125MB of data per second, which means transferring the daily 360,000TB load from the Sydney Harbour Tunnel vehicles would take more than 90 years. Admittedly, slicing and sharing of network capacity would increase the total throughput, and some of the data would be staged on the vehicle and at the roadside. However, there is still a serious scale problem with even a fraction of that data.
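A quick back-of-envelope check of those figures, assuming decimal units (1 TB = 1,000,000 MB); all inputs are the article's own estimates:

```python
# Back-of-envelope check of the figures above, using decimal units
# (1 TB = 1,000,000 MB). All inputs are the article's own estimates.
CAR_TB_PER_DAY = 4          # Intel's estimate per connected car
VEHICLES_PER_DAY = 90_000   # Sydney Harbour Tunnel daily traffic
LINK_GBPS = 1               # Optus 4.5G theoretical peak

daily_load_tb = CAR_TB_PER_DAY * VEHICLES_PER_DAY   # 360,000 TB
daily_load_mb = daily_load_tb * 1_000_000           # TB -> MB
link_mb_per_s = LINK_GBPS * 1000 / 8                # 1 Gbps = 125 MB/s

years = daily_load_mb / link_mb_per_s / (60 * 60 * 24 * 365)
print(f"Daily load: {daily_load_tb:,} TB")
print(f"Single-link transfer time: ~{years:.0f} years")   # roughly 91 years
```

The decimal-units calculation lands at roughly 91 years for a single 1 Gbps link – the same order of magnitude as the figure above, and the point stands regardless of rounding.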

Scale and speed are the two killers for anyone hoping to feast at the IoT buffet. Building the right infrastructure to ingest massive amounts of data from a variety of sources is the first stage in building connected devices; the second is analyzing that data at a speed that drives the business forward. Strap on your helmet – it’s going to be a wild ride.

Written by Hugh Ujhazy, associate vice president of IoT & Telecoms at IDC | Originally published on LinkedIn
