
How Application Management Needs are Driving Edge Computing

Last month, Scale Computing’s CEO and co-founder Jeff Ready joined Rob High, IBM Fellow and VP/CTO of IBM Network and Edge Computing, for a video meetup with Spiceworks.

This past summer, Scale Computing and IBM announced a collaboration to help organizations adopt an edge computing strategy that enables them to move data and applications seamlessly across hybrid cloud environments, from private data centers to the edge.

In this informative and wide-ranging conversation, Jeff and Rob explore the trends driving the edge computing market: the proliferation of connected devices generating voluminous amounts of data, the need for greater application resiliency, and compliance with an ever-evolving regulatory environment. It’s no longer a question of ‘if’ edge computing will transform how we work and live, but ‘when.’

What follows are some of the highlights from their conversation. You can watch the video meetup in its entirety here.

What impact has the abrupt shift to remote work had on the edge computing market?

Jeff Ready (JR): First, it’s probably worth defining what we mean by edge computing. We can sum it up as any place outside the data center where you’re going to run a mission-critical application; the edge simply means ‘not in the data center.’ What’s happened through the pandemic is that all of a sudden you have to run these applications in all sorts of different places.

The big challenge here is that the ‘edge,’ by that definition, has some fundamental differences from the data center, where you have redundant internet connectivity and reliable power, and where someone can walk into a room and fix things relatively quickly when something breaks. But what if I have to do that same task across 500 locations, and those locations are only online sometimes? This problem of horizontal scalability, in which you have to replicate infrastructure tasks across a lot of locations, is a serious issue, and it’s an area where we’re seeing a lot of very interesting use cases, especially in industries like manufacturing, where industrial robots are generating tons of data.
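To make that fan-out problem concrete, here is a minimal sketch in Python of what “do the same task across 500 sometimes-online locations” looks like if you have to script it yourself. The site names, health check, and update task are hypothetical stand-ins, and connectivity is simulated:

```python
import random
import time

SITES = [f"store-{n:03d}" for n in range(500)]   # hypothetical site IDs

def site_is_online(site: str) -> bool:
    # Stand-in for a real health check (heartbeat, ping, management API).
    # Simulates sites that are reachable only about 30% of the time.
    return random.random() < 0.3

def apply_update(site: str) -> None:
    # Stand-in for the actual infrastructure task: push a config file,
    # update an application, rotate a certificate, and so on.
    pass

def fan_out(task_name: str) -> None:
    pending = set(SITES)
    while pending:
        for site in list(pending):
            if site_is_online(site):
                apply_update(site)
                pending.discard(site)
        print(f"{task_name}: {len(pending)} sites still pending")
        if pending:
            time.sleep(1)   # back off, then retry the offline sites

fan_out("rollout-v2")
```

Even this toy version has to track per-site state and retry indefinitely; centralized edge management platforms exist precisely to take that burden off the operator.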

Gartner says that today less than 10% of all data is generated at the edge, that is, outside of a data center, but over the next four years they expect 75% of data to be generated at edge locations. That’s a radical shift, and it’s the big wave that’s coming.

Rob High (RH): Much of what we’ve been talking about lies within the context of knowledge workers, for whom the place of work has traditionally been the office. However, the vast majority of businesses are not about housing knowledge workers; they’re about running factories, retail stores, and distribution centers. These businesses are fundamentally physical. So when we think about the edge, we ought to be thinking about those kinds of places at least as much as, if not more than, remote office workers.

There’s a tremendous amount of data being generated at these locations, and all of that data is being used to make decisions. The question becomes: how much data is being generated, and how much of it do we have to transmit across the network? What’s the cost of that transfer? The latency? What privacy issues does it expose the data to? All of these are places where there’s an opportunity not only to take advantage of the increased volume of data but to process it locally, so we can make better and faster decisions.
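Some rough, purely illustrative arithmetic shows why those questions matter. Assume a single site with 30 cameras, each producing a modest 4 Mbit/s compressed video stream (both numbers are assumptions invented for this example):

```python
cameras = 30                  # assumed camera count at one site
mbps_per_camera = 4           # assumed compressed video bitrate, Mbit/s

total_mbps = cameras * mbps_per_camera               # 120 Mbit/s, around the clock
mb_per_sec = total_mbps / 8                          # 15 MB/s
tb_per_month = mb_per_sec * 86_400 * 30 / 1_000_000  # MB/s -> TB over 30 days

print(f"{tb_per_month:.1f} TB/month from a single site")   # ~38.9 TB/month
```

Backhauling nearly 39 TB a month from every site gets expensive fast. Analyzing the video on-site and shipping only events or summaries upstream can shrink that volume by orders of magnitude.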

Since the cloud is everywhere, why not just go full cloud?

JR: There are a number of reasons why some of these applications are running out at the edge. On a practical level, it just makes more sense. Think of a point-of-sale system in a retail store. You could run it in a cloud, but in most retail stores the internet is one of the least reliable components in the environment. The point-of-sale system is obviously critical, and it’s often linked to an EBT system, the US food stamp system. If both systems go down, there are two compounding problems.

If cash registers are running slow, people will abandon their shopping carts, which is bad in its own right. If there are refrigerated items in an abandoned cart, by law they can’t be put back on the shelves, and those are typically the most expensive items. The other thing is that if the EBT system goes down, by federal law in the US the food is now free, so they’re losing money there as well. An hour of downtime across a chain’s stores can quickly add up to hundreds of thousands of dollars in lost revenue.
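The usual answer to this failure mode is a store-and-forward design: the register commits every sale to on-site storage, and a background task forwards transactions upstream whenever the WAN is available. Here is a minimal sketch in Python with SQLite; the schema, connectivity check, and upstream call are illustrative stand-ins, not any vendor’s actual implementation:

```python
import json
import sqlite3

db = sqlite3.connect("pos_local.db")
db.execute("""CREATE TABLE IF NOT EXISTS txns
              (id INTEGER PRIMARY KEY, body TEXT, synced INTEGER DEFAULT 0)""")

def ring_up(sale: dict) -> None:
    # The sale commits locally, so checkout keeps working with the WAN down.
    db.execute("INSERT INTO txns (body) VALUES (?)", (json.dumps(sale),))
    db.commit()

def wan_is_up() -> bool:
    return False          # stand-in for a real connectivity check

def forward_upstream(body: str) -> None:
    pass                  # stand-in for an HTTPS POST to the central system

def sync_pending() -> None:
    # Runs periodically; drains the local queue whenever the WAN is up.
    if not wan_is_up():
        return
    rows = db.execute("SELECT id, body FROM txns WHERE synced = 0").fetchall()
    for txn_id, body in rows:
        forward_upstream(body)
        db.execute("UPDATE txns SET synced = 1 WHERE id = ?", (txn_id,))
    db.commit()

ring_up({"items": [{"sku": "MILK-1GAL", "qty": 1}], "total": 3.49})
sync_pending()
```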

Then there’s the issue of latency, which comes down to the physics problem of moving packets of data to a data center 2,000 miles away. Until we figure out how to go faster than the speed of light, the only solution is to move the decision-making closer. Finally, there’s the issue of data privacy regulations, which we haven’t seen as much of here in the US as in Europe, but which will likely become more of an issue in the near future. For instance, there was recently a story in the news in Australia about a convenience store with a survey kiosk that took a picture of you at the beginning and end of the survey, to help the retailer gauge consumers’ facial expressions. They then sent those images to the Azure cloud for processing, but that was a big no-no: sending images containing personal data to the cloud is against the law there.
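To put a number on that physics problem: even over ideal fiber, where light travels at roughly two-thirds of its vacuum speed, a 2,000-mile path has a hard latency floor. A quick back-of-the-envelope calculation:

```python
MILES_TO_KM = 1.60934
KM_PER_MS_IN_FIBER = 299_792.458 * (2 / 3) / 1000   # ~200 km per millisecond

one_way_km = 2000 * MILES_TO_KM                     # ~3,219 km
one_way_ms = one_way_km / KM_PER_MS_IN_FIBER        # ~16 ms
round_trip_ms = 2 * one_way_ms                      # ~32 ms

print(f"Best-case round trip: {round_trip_ms:.1f} ms")
```

And that is only the floor: real routes are not straight lines, and every hop adds queuing and processing delay. Compute on-site shrinks that floor to effectively zero.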

We’re moving to a true hybrid world. In this context, hybrid simply means running each application where it makes the most sense to run it; whether that’s in the cloud, at the edge, or in a traditional data center shouldn’t really matter.

RH: It’s important to remember that the edge is not just one thing. There are multiple potential tiers where you can locate compute, which might be a server in a retail store or on the factory floor. Most IoT equipment these days includes some kind of general-purpose compute embedded in the device itself; we’re seeing this with everything from cameras to industrial robots.

That becomes important to think about because, on the other end, you’ve got a number of metro hosting environments: basically, data centers located in the metropolitan areas where the majority of businesses and users live. Those sit in between the device and the central data center, acting as an edge to the data center. So now we can go back to the line of business, understand the application requirements, and make choices about where it makes the most sense to place each application, considering the trade-offs of latency, network throughput, resiliency, and privacy that they might care about. It’s not going to be a one-size-fits-all approach.
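As a toy illustration of that placement decision, here is a sketch of a rule that picks a tier from an application’s requirements. The tiers, latency thresholds, and field names are assumptions invented for this example, not anyone’s product logic:

```python
def place(app: dict) -> str:
    # Privacy or regulatory constraints trump everything else.
    if app.get("data_must_stay_on_site"):
        return "on-site edge"
    if app["max_latency_ms"] < 10:      # control loops, safety systems
        return "on-site edge"
    if app["max_latency_ms"] < 40:      # interactive, near-real-time work
        return "metro data center"
    return "cloud"                      # batch analytics, reporting

print(place({"name": "robot-control", "max_latency_ms": 5}))        # on-site edge
print(place({"name": "video-analytics", "max_latency_ms": 30}))     # metro data center
print(place({"name": "sales-forecast", "max_latency_ms": 5000}))    # cloud
```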

Can you tell us about the partnership between Scale Computing and IBM? How will combining your solutions help organizations?

JR: The magic of the Scale Computing platform is in its self-healing capabilities. The challenge with edge and on-premises computing often comes down to manageability. The Scale Computing platform lets you manage thousands of sites just as easily as a single site, all through a centralized portal. You can see exactly what’s going on, deploy an application to multiple sites at once, update the application, or spin up new locations. Take, for example, the grocery store chain I was talking about earlier. They don’t have to send a tech on-site to deploy a new cluster. Someone can literally just plug it in, and it will automatically reach back out to the management portal, download configuration files and applications, and report back when it’s done. Our goal is to simplify management while maintaining that high availability.
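For illustration, here is roughly what that zero-touch, phone-home flow looks like when sketched out. The portal URL, endpoints, and payloads below are hypothetical; this shows the shape of the pattern, not Scale Computing’s actual protocol:

```python
import json
import urllib.request

PORTAL = "https://portal.example.com"   # hypothetical management portal
DEVICE_ID = "cluster-store-0417"        # hypothetical device identity

def fetch_json(path: str) -> dict:
    with urllib.request.urlopen(f"{PORTAL}{path}") as resp:
        return json.load(resp)

def post_json(path: str, payload: dict) -> None:
    req = urllib.request.Request(
        f"{PORTAL}{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def first_boot() -> None:
    # 1. Reach back out to the management portal and identify ourselves.
    config = fetch_json(f"/api/devices/{DEVICE_ID}/config")
    # 2. Apply the downloaded configuration, then pull assigned applications.
    apply_config(config)
    for app in config.get("applications", []):
        deploy(app)
    # 3. Report back when done, so the portal shows the site as live.
    post_json(f"/api/devices/{DEVICE_ID}/status", {"state": "ready"})

def apply_config(config: dict) -> None: ...   # stand-in for local setup
def deploy(app: dict) -> None: ...            # stand-in for app deployment
```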

IBM Edge Application Manager is the tool that allows you to manage these applications in the cloud, whether it’s a Kubernetes app or a legacy virtual machine, and deploy them to the location of your choice, whether that’s on-premises, on AWS, or on IBM Cloud.
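Conceptually, that model is “describe the workload once, then choose where it runs.” The sketch below is a toy Python rendering of that idea under assumed names; it is not IBM Edge Application Manager’s actual API or data model:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    kind: str        # "container" or "vm"
    artifact: str    # container image reference or VM disk image

@dataclass
class Target:
    name: str
    location: str              # "on-prem", "aws", "ibm-cloud", ...
    supports: tuple[str, ...]  # workload kinds this target can run

def deploy(workload: Workload, targets: list[Target]) -> None:
    # Define once, place anywhere: the workload definition never changes,
    # only the list of targets it is matched against.
    for t in targets:
        if workload.kind in t.supports:
            print(f"deploying {workload.name} ({workload.kind}) "
                  f"to {t.name} [{t.location}]")
        else:
            print(f"skipping {t.name}: cannot run {workload.kind}")

pos = Workload("pos-app", "container", "registry.example.com/pos:1.4")
deploy(pos, [
    Target("store-0417", "on-prem", ("container", "vm")),
    Target("cloud-east", "ibm-cloud", ("container",)),
])
```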

RH: The beauty of this partnership is that we share a common understanding of the edge marketplace and its needs, particularly the need to get the right software to the right place at the right time. Scale Computing has been working on this for VM-based applications, and we’ve been concentrating on the same problem for containerized applications. We brought those two things together, and now, on the Scale Computing platform, you can do both: manage your VM-based applications and your containerized applications from a single, centralized control point. There’s no need for IT specialists to be present at a remote location to manage this process.

Any parting thoughts?

JR: I think there is a natural inclination to think that edge computing is only suitable for a large enterprise or some big deployment. And that is just not the case. It certainly applies there, right? I mean, an 8,000-store deployment is one thing. But then I’ve got a manufacturing customer with just a single location: a large factory with about a dozen different edge computing deployments. There are a lot more use cases out there than you might naturally think of.

RH: The cost of delaying automation far exceeds the cost of just putting it in place, even for the first deployment: getting to know it from day one, and organizing your practices and processes around using the automation system to manage these edge environments.

About Version 2
Version 2 is one of the most dynamic IT companies in Asia. The company develops and distributes IT products for Internet and IP-based networks, including communication systems, Internet software, security, network, and media products. Through an extensive network of channels, points of sale, resellers, and partner companies, Version 2 offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum that includes Global 1000 enterprises, regional listed companies, public utilities, government, a vast number of successful SMEs, and consumers in various Asian cities.

About Scale Computing 
Scale Computing is a leader in edge computing, virtualization, and hyperconverged solutions. Scale Computing HC3 software eliminates the need for traditional virtualization software, disaster recovery software, servers, and shared storage, replacing these with a fully integrated, highly available system for running applications. Using patented HyperCore™ technology, the HC3 self-healing platform automatically identifies, mitigates, and corrects infrastructure problems in real-time, enabling applications to achieve maximum uptime. When ease-of-use, high availability, and TCO matter, Scale Computing HC3 is the ideal infrastructure platform. Read what our customers have to say on Gartner Peer Insights, Spiceworks, TechValidate and TrustRadius.
