While there has been much talk in China and Hong Kong about the “edge revolution”, the reality is that the migration to edge computing has been more of an evolution. Locally, edge computing has been around since the time of the handover, with the advent of content delivery networks that deployed servers closer to users to reduce latency, improve performance and lower network costs.
The “edge” is normally defined as the outside limit or the place where a change of status or control is encountered. For companies, this means the edge of an organisation’s IT architecture where data is exchanged with users, partners and other systems. However, this is not a static concept and firms are continually looking to expand their use of edge computing, often in a controlled and managed way. So, the legacy edge is slowly giving way to the purpose-built edge.
A survey conducted in 2019 found that more than half of respondents with edge sites expected to double the number of those sites by 2025, and one in five predicted an increase of 400% or more. A new survey, just out, supports this view.
Survey participants expect the edge component of total compute to increase from 21% to 27% over the next four years and the public cloud share — which increasingly includes cloud resources at the edge — to grow from 19% to 25%. Not surprisingly, this mirrors a continued shift away from centralised, on-premises computing, which is projected to decline from 45% of total compute to 35%.
Dig a little deeper and there appears to be not just a changing compute profile and growing edge component, but some significant changes to the individual edge sites. In short, they are getting bigger, and they are consuming more power. According to the survey, 42% of all edge sites have at least five racks, with 13% housing more than 20 and 14% requiring more than 200 kilowatts (kW).
The need to reduce latency
The need to reduce latency or minimise bandwidth consumption by pushing computing closer to the user drove the initial shift to the edge, and it is still driving these changes. The race to the edge, however, has produced an inconsistent approach to deployments: these are increasingly sophisticated sites performing high-volume, advanced computing, yet too often they are stitched together almost as an afterthought.
The edge’s core value lies in lower latency and reduced bandwidth costs: by processing data close to where it is generated, applications such as AI, IoT, branch offices and micro data centres can act on near real-time insights while sending less traffic to the cloud. Edge computing also allows organisations to keep sensitive data and computations within local area networks and corporate firewalls, processing data at the location where it was collected. This reduces exposure to cybersecurity attacks and improves compliance with strict and changing data laws.
Additionally, compared to traditional data centres, edge computing allows for greater portability, more rapid deployment, higher energy efficiency and simpler IT management.
Types of deployments
Edge deployments can be broken down into four categories, namely, Device Edge, Micro Edge, Distributed Edge Data Centre, and Regional Edge Data Centre.
Device Edge: The compute is at the end-device itself, either built into the device or in a standalone form that is directly attached to the device, such as AR/VR devices or smart traffic lights.
Micro Edge: A small, standalone solution that can range in size from one or two servers up to four racks. It can be deployed at the enterprise’s own site, or could be deployed at a telco site, with common cases including real-time inventory management and network closets in educational facilities.
Distributed Edge Data Centre: This could be within an on-premises data centre (either a pre-existing enterprise data centre or network room, or a new standalone facility). It also could be a small, distributed data centre or colocation facility located on the telco network or at a regional site. Distributed Edge Data Centres are currently common in manufacturing, telecommunications, healthcare and smart city applications.
Regional Edge Data Centre: A data centre facility located outside core data centre hubs. As this is typically a facility that is purpose-built to host compute infrastructure, it shares many features of hyperscale data centres: it is environmentally conditioned and controlled, with high security and high reliability. This model is common for retail applications and serves as an intermediary data processing site.
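As a rough illustration, the four archetypes above can be distinguished by site size alone. The sketch below is a hypothetical heuristic, not part of any survey methodology: the four-rack boundary follows the Micro Edge description, and the 20-rack line echoes the survey's size bands.

```python
from dataclasses import dataclass

@dataclass
class EdgeSite:
    racks: int  # 0 means compute is embedded in, or attached to, the end device

def classify(site: EdgeSite) -> str:
    """Map a site onto the four edge archetypes using rack count alone.
    Boundaries are illustrative, taken from the size indicators above."""
    if site.racks == 0:
        return "Device Edge"
    if site.racks <= 4:
        return "Micro Edge"
    if site.racks <= 20:
        return "Distributed Edge Data Centre"
    return "Regional Edge Data Centre"
```

In practice, location, tenancy and use case would weigh into the classification as much as rack count, but the size bands give a quick first cut.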
Using these models as a starting point, it is possible to outline the infrastructure needed to support any proposed edge site. This is critical to bringing the resiliency of those sites in line with their increasing criticality, a gap that is dangerously wide today.
How wide? About half of those responding to the latest survey say their edge sites have a level of resiliency consistent with the Uptime Institute’s Tier I or II classification, the lowest levels of resiliency. Likewise, more than 90% of sites are drawing at least 2 kW of power, the threshold at which dedicated IT cooling is recommended, yet only 39% are using purpose-built IT cooling systems. These conditions compound the risk at edge locations where on-site technical expertise is already limited or non-existent.
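The 2 kW cooling threshold lends itself to a simple audit check. The sketch below flags sites above the threshold that lack purpose-built cooling; the dictionary keys are illustrative, not a real monitoring schema.

```python
def needs_dedicated_cooling(it_load_kw: float) -> bool:
    """Loads at or above 2 kW call for purpose-built IT cooling
    rather than ambient or comfort cooling (the threshold cited above)."""
    return it_load_kw >= 2.0

def cooling_gaps(sites: list[dict]) -> list[str]:
    """Return the names of sites over the threshold without dedicated cooling."""
    return [
        s["name"]
        for s in sites
        if needs_dedicated_cooling(s["load_kw"]) and not s["dedicated_cooling"]
    ]
```

Run against a site inventory, this kind of check surfaces exactly the resiliency gap the survey describes: high-density sites cooled as if they were office space.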
Our models consider use case, location, number of racks, power requirements and availability, number of tenants, external environment, passive infrastructure, edge infrastructure provider, and number of sites to be deployed among many other factors to categorise a potential deployment. At that point, we can match it to a standardised infrastructure that can be tailored to meet the operator’s specific needs.
Once we understand, first, the IT functionality and characteristics each site must support; second, the physical footprint of the edge network; and third, the infrastructure attributes required for each deployment, we can configure, build and deploy exactly what is needed, faster and more efficiently, while minimising time on site for installation and service. It is a major leap forward in edge design and deployment.
Today’s edge is more sophisticated, critical, and complex than ever before. By applying a systematic approach to site analysis, it’s possible to introduce customised standardisation to the edge. This will reduce deployment times and costs, increase efficiency and uptime, and deliver customers (and their customers) the seamless network experience they expect.
By Lawrence Tam, Technical Director at Vertiv
Related article: From Cloud to Edge – how Edge Computing will power the age of data