Whether typical end users know it or not, they are well and truly entrenched in the cloud computing era. Despite our PCs having more memory, drive space, and CPU horsepower than ever, we increasingly use them to access centralised services such as Dropbox, Gmail, Office 365, and Slack.
Smart devices like Amazon Echo, Google Chromecast, and Apple TV wouldn’t be possible without the content and intelligence delivered by the power and scale of the cloud. From infrastructure and hosting to machine learning and compute power, the majority of companies in the world today rely on an oligopoly of cloud providers: Amazon AWS, Microsoft Azure, Google GCP, and IBM Cloud.
Amazon still dominates the public cloud space with 49% market share, but Azure and GCP continue their acceleration as we begin to dig into the real work – legacy enterprise modernisation. And while the path to cloud-centric and hybrid architectures is relatively well-defined at this point, there’s another option seeing renewed promotion from public cloud vendors. We’re increasingly encouraged to explore “cloud” computing at the “edge.”
Edge computing is a surprisingly loose term, describing technology that supports computing, data collection, and actuation of physical systems close to where data is produced or control is required. This is in some ways counterintuitive, and the opposite of cloud vendors’ calls to concentrate operations in a handful of huge data centres.
But recent service additions make it easier to deploy and manage applications at the edge, enabling edge’s primary benefits, like overcoming physics-based latency limits and reducing bandwidth into the cloud. Perhaps nowhere is the push to edge greater than with the Internet of Things (IoT).
The chicken and the egg
Technology succeeds or fails by how it answers the age-old question of the chicken or the egg. When new technologies emerge, there’s an immediate battle fuelled by a warehouse of technical debt and the incomplete features of the new tech. Typically, a new solution drives change from the outside in, pushing new requirements onto infrastructure components and increasing overall complexity as a by-product of supporting niche applications. The concept of pushing processing to the network’s fringe – the edge – has been around for years, but supporting knowledge generation and remote decision-making in billions of devices won’t be possible with cloud and on-premises systems alone.
Pre-processing newly collected data while supporting billions of simultaneously connecting devices is a real need, and the race is on to see which public cloud provider’s technology will power remote processing and simplify the networks between cloud, edge, and on-premises systems. Computing boundaries are expected to be pushed even further next year to meet the needs of the IoT’s demanding applications. In fact, new research has shown that Europe has been leading the way in industrial IoT, with implementations three times more extensive than in the U.S.
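As a rough illustration of that pre-processing payoff, here’s a minimal sketch – the device name, sample rate, and payload shape are hypothetical, not any vendor’s SDK – of an edge gateway collapsing a second of raw telemetry into a single summary record before sending it upstream:

```python
import json
import statistics

def summarise_readings(readings):
    """Collapse a batch of raw sensor samples into one summary record.

    Shipping only the summary upstream cuts per-device bandwidth sharply
    compared with forwarding every raw sample to the cloud.
    """
    values = [r["value"] for r in readings]
    return {
        "device_id": readings[0]["device_id"],
        "count": len(values),
        "min": min(values),
        "max": max(values),
        "mean": round(statistics.mean(values), 2),
    }

# One second of 100 Hz telemetry from a hypothetical sensor
raw = [{"device_id": "sensor-42", "value": 20.0 + i * 0.01} for i in range(100)]
summary = summarise_readings(raw)

raw_bytes = len(json.dumps(raw))
summary_bytes = len(json.dumps(summary))
print(f"raw payload: {raw_bytes} bytes, summary: {summary_bytes} bytes")
```

Even this trivial aggregation shrinks the upstream payload by more than an order of magnitude, which is the core bandwidth argument for doing the first pass of processing at the edge.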
Edge computing does come with a pain point that must be considered. Distributing infrastructure is exactly the opposite of what’s driving cloud adoption. A bigger attack surface, more real estate, increased complexity, and increased management are all unwelcome compromises.
To ease these issues, tools such as Amazon Greengrass, Azure Stack, and other function-as-a-service (FaaS) offerings are relieving much of this pain while making it possible to get on with the real work needed at the edge. The difference? They finally provide easier deployment wherever it’s needed, not just in more cost-controlled main data centres. There’s an increasingly data-driven democratisation at work that rewards vendors who deliver simplified tooling and improved support. With some edge solutions, dev teams can get up and running in a few hours and, better still, iterate quickly to take full advantage of their new edge resources.
The edge of glory
Migrating towards the edge starts with identifying each individual business’s use case. Each organisation will have its own unique requirements that need to be taken into account, and the surrounding infrastructure is often a main consideration: in capital cities it’s reliable and robust, whereas in more rural areas, with fewer users, the infrastructure required can be quite different.
The approach most likely to meet the needs of these different location scenarios is hybrid-edge. With hybrid, workloads connected at the neighbourhood level can be delivered closer to the mobile network, effectively running in a distributed co-location facility on the mobile operator’s side. And with the most recent toolsets, workloads are managed through the cloud provider, resulting in unified management interfaces that put edge computing resources neatly alongside cloud and on-premises systems.
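The placement decision behind hybrid-edge can be sketched as a toy routing rule. The site names and round-trip estimates below are hypothetical, not any operator’s real figures; the idea is simply to run each workload at the most central site that still meets its latency budget:

```python
# Hypothetical round-trip estimates (ms) for three tiers of a hybrid-edge setup
SITES = {
    "edge-neighbourhood": 10,   # co-located with the mobile network
    "regional-cloud": 45,
    "central-cloud": 120,
}

def place_workload(latency_budget_ms):
    """Pick the most central (cheapest) site that still meets the budget."""
    for site in ("central-cloud", "regional-cloud", "edge-neighbourhood"):
        if SITES[site] <= latency_budget_ms:
            return site
    return None  # no site can satisfy the budget

print(place_workload(200))  # central-cloud
print(place_workload(50))   # regional-cloud
print(place_workload(15))   # edge-neighbourhood
```

In practice the unified management planes mentioned above make this kind of policy declarative rather than hand-coded, but the trade-off being expressed is the same.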
Pushed over the edge
No matter what, an edge computing strategy will need to factor in compliance and security, even more so than centralised systems. Edge doesn’t necessarily imply more security investment, but it does demand shifting resources to planning and observation, eliminating reactionary post-deployment remediation wherever possible. Proper documentation should provide a clear checklist of what is required, along with verification plans to ensure policies are consistently adopted.
Edge also rewards planning in another way: homogenisation. IT pros who create uniform hardware and software profiles for their edge systems find that with fewer differences to manage, the whole process is a lot easier. Commonality is a significant accelerator for edge adoption, just as it has been for cloud.
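One way to make that homogenisation checkable is a simple drift report comparing each node against a single “golden” profile. The version strings below are purely illustrative:

```python
# A minimal sketch of profile-drift detection: every edge node's reported
# software versions are compared against one golden profile (hypothetical values).
GOLDEN = {"os": "ubuntu-22.04", "agent": "2.3.1", "runtime": "python3.11"}

def drift(node_profile):
    """Return the keys where a node differs, as (expected, actual) pairs."""
    return {
        key: (GOLDEN[key], node_profile.get(key))
        for key in GOLDEN
        if node_profile.get(key) != GOLDEN[key]
    }

fleet = {
    "node-a": {"os": "ubuntu-22.04", "agent": "2.3.1", "runtime": "python3.11"},
    "node-b": {"os": "ubuntu-22.04", "agent": "2.2.9", "runtime": "python3.11"},
}
for name, profile in fleet.items():
    print(name, drift(profile))
```

An empty report means a node matches the golden profile; anything else is a candidate for remediation before it becomes one more snowflake to manage.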
IT pros shouldn’t be afraid to ask tough questions of vendors and leadership alike:
- Are we using a consistent platform?
- What user experiences are we supporting, and is it a good fit for edge?
- How can we maintain a reliable change pipeline?
- How will the recovery process differ vs. monolithic or cloud-based applications?
- How will our business be affected in the event of a failure?
The right network, systems, cloud management, and monitoring tools will include distributed APM, SIEM, logging, and infrastructure monitoring to answer the questions above, while ensuring optimisation and protection across each environment. With real, not proof-of-concept, edge, ensuring service quality is more important than ever because of the relative intimacy of IoT. Car manufacturers are judged by the mechanical sounds of doors, shifters, and switches, and IoT will extend that assessment to countless newly connected things we touch, talk to, and watch. It’s not good enough to expect users to sit for 2,000 milliseconds while the HVAC system decides whether occupancy rules are sufficiently satisfied to flip on the lights. IoT will be compared directly with the non-intelligent systems it augments, and the brands with the most seamless experiences stand to win.
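The latency arithmetic behind that 2,000-millisecond complaint can be made explicit. The figures here are hypothetical, with roughly 200 ms standing in for what users perceive as “instant”:

```python
# A back-of-the-envelope latency budget for an interactive IoT control.
# All figures are illustrative, not measured.
PERCEIVED_INSTANT_MS = 200

def within_budget(network_rtt_ms, processing_ms, budget_ms=PERCEIVED_INSTANT_MS):
    """True if the end-to-end response time fits the interactive budget."""
    return network_rtt_ms + processing_ms <= budget_ms

# Neighbourhood edge site: short hop, quick rule evaluation
edge_path = within_budget(network_rtt_ms=10, processing_ms=40)      # True

# The article's 2,000 ms case: distant round trip plus slow decision
cloud_path = within_budget(network_rtt_ms=150, processing_ms=1850)  # False

print(edge_path, cloud_path)
```

The point is that no amount of back-end optimisation rescues a path whose network round trip alone consumes most of the budget, which is why interactive controls are natural edge candidates.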
While single points of failure at the edge – like bricking millions of IoT devices – are less likely with distributed resources, large failures could cause not only thousands of pounds in repairs and lost revenue but also tarnish a company’s reputation. Further, deploying a new, larger attack surface at a time when data privacy and security are at the top of customers’ agendas requires extra attention. IT pros need to make sure the entire team, from the director to the support staff, takes part in the testing and recovery phases. Even the best security infrastructure can fail when teams assume existing security approaches won’t require adjustment to support edge.
Brave the edge
It’s easy for IT pros to be nervous about folding edge computing into their hybrid operations, and some believe it might be more hassle than it’s worth. But looking at the use cases of organisations that have started deploying systems somewhere between their data centres and the public cloud may reveal an opportunity or two IT pros hadn’t yet considered.
What was once considered too complex, too theoretical, or just not enterprise-ready may finally be set for prime time. Businesses should experiment, because they are likely to discover value and services that could help them outshine the competition. Even better, it’s increasingly possible to do so without building everything from scratch, with many vendors providing (nearly) ready-to-go demonstration gear and a newfound willingness to give prospects enough time to learn hands-on.
While on-premises data centres and the cloud will continue to do most of the heavy lifting – especially the cloud for machine learning – edge will allow some companies to create new and uniquely engaging customer experiences. We discovered that hybrid IT wasn’t doom and gloom, and with a bit more learning and work, for the most part, we adapted. Likewise, there’s a good chance that edge computing, properly applied, might allow you to delight users in ways that set your business apart.
Data collection and knowledge generation close to users isn’t anything new, but new edge management and monitoring tech are finally making it feasible outside the lab. The key is to embrace and experiment, facilitated by platforms and services that allow admins to tame compute and data resources wherever they are, especially in new locations employed to defy the latency limits of physics.