Friday, December 13, 2024

Is Edge Computing a Good Idea?


There is a huge push right now to drive the concept of Edge Computing for IoT devices. The idea is that the more computing power you put in the edge device, the less data, in theory, needs to flow back from it.
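
In rough terms, the argument looks something like the hypothetical sketch below (Python, with made-up sensor values, field names, and threshold): a smarter edge device forwards a small local summary instead of streaming every raw reading upstream.

# Hypothetical illustration of the data-reduction argument for edge computing.
# Values, field names, and the alert threshold are invented for the example.

raw_readings = [21.4, 21.5, 21.7, 35.9, 21.6, 21.5]  # one minute of temperature samples

# "Dumb" device: forward everything upstream.
upstream_raw = raw_readings  # six values per minute, per sensor

# Edge-computing device: process locally, forward only a small aggregate.
upstream_summary = {
    "avg": sum(raw_readings) / len(raw_readings),
    "max": max(raw_readings),
    "alert": max(raw_readings) > 30,  # made-up threshold
}

print(upstream_summary)  # one small record instead of the full stream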

Now, this made a lot of sense before 5G, Wi-Fi 6, and Millimeter Wave, all of which can provide wireless connections that rival wired ones. Those technologies are certainly forcing a change in where you put data centers, because congestion on the back end will force a move away from huge centralized data centers to a more distributed networking topology.

But I question the need for edge computing right at a time when the PC industry is moving from computing at the edge to computing in the cloud as more economical, more reliable, and more secure.

While I don’t think Edge Computing is a bad idea, I also don’t think it makes sense for every deployment. And the benefits the PC industry is beginning to embrace by moving to the cloud would also apply to IoT devices.

Let’s talk about that this week.

A Little History

We tend to be all or nothing when it comes to technology waves. When the PC came out, it was going to take over the world, and IBM got so excited they almost killed their mainframe business, which remains very profitable for them nearly 40 years later.

When Apple brought out the iPad, we concluded the PC was dead, even though, years later, the PC market looks pretty healthy and the tablet market has almost ceased to be interesting. It is somewhat ironic that Steve Jobs, seen as the father of the tablet wave, actually seemed to predict this outcome years earlier.

What we should have learned is that most things aren’t mutually exclusive. Mainframes can still work alongside PCs, and tablets can supplement PCs without needing to replace them. And, I expect, we’ll also discover that while edge computing may make sense for some things, centralized resources will be the more easily secured option.

The Security Speed Bump To Edge Computing

Now, if it weren’t for the fact that we now have state-level players trying to breach networks, I’d be less concerned with the idea that Edge Computing is going to dominate IoT.

But as soon as you consider that you’ll have to secure these devices individually, the desire to shift computing performance to the edge should decline substantially. This goes back to the idea that if you centralize the compute power, you can reduce the related complexity and shrink the potential attack surface. You can’t use malware to compromise a device that can’t run it.

But as soon as you put compute power into a device that regularly needs patching and updating with complex code, an attacker has an added opportunity to compromise it, and with IoT we aren’t talking about small numbers of devices.

Biased Data

The other problem with Edge Computing is the increased likelihood of introducing bias. Edge computing limits the data flowing to the central resource where the Machine Learning (ML) or Deep Learning (DL) AI will try to make sense of it, and what gets filtered out at the edge is decided by policy.

A few years back, I was brought in to review an Amazon case where they were trying to determine why some customers were consistently receiving their packages very late. Amazon is one of the best-instrumented companies in the world. They brought in a forensic specialty firm, which found that when the tracking systems were set up, it was assumed that no distributions from the central distribution centers would go directly to end customers.

So they didn’t capture the related data because they thought that data set would be zero. However, a manager thought he was helping by allowing distributions from these centers directly to end customers, and it was those distributions that were causing the problem.

Assuming managers or employees will follow policy is generally going to end badly, even with small companies. It is almost certain to be a problem with large companies, which is why the data flowing into the analytics systems can’t be limited as it was in this case. Because, if it is, you not only won’t see problems; when you do see one, there is a high probability you won’t easily be able (as was the case here) to see the cause.
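
To make that mechanism concrete, here is a minimal, hypothetical sketch (Python, with made-up record fields and names; it is not Amazon’s or anyone else’s actual pipeline) of how a policy assumption baked into an edge filter can drop exactly the records that would reveal a problem.

# Hypothetical edge-side forwarding filter. The policy encodes an assumption --
# "central distribution centers never ship directly to customers" -- so records
# that violate it are silently dropped, and the central analytics layer never
# learns the assumption was wrong.

shipments = [
    {"origin": "regional_dc_7", "dest": "customer", "days_in_transit": 2},
    {"origin": "central_dc_1", "dest": "customer", "days_in_transit": 9},  # chronically late
    {"origin": "central_dc_1", "dest": "regional_dc_7", "days_in_transit": 1},
]

def edge_forwarding_policy(record):
    """Forward only records the policy considers 'expected' traffic."""
    # Assumption baked in at setup time: central DCs only feed regional DCs.
    if record["origin"].startswith("central_dc") and record["dest"] == "customer":
        return False  # dropped at the edge -- this is the blind spot
    return True

forwarded = [r for r in shipments if edge_forwarding_policy(r)]

# Central analytics only ever sees the forwarded subset, so the late
# central-DC-to-customer shipment is invisible to it.
print("records the central model trains on:", forwarded)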

So, for two reasons, security and ensuring the integrity of the data stream, I think Edge Computing should be the exception and not the rule.

Lemmings Aren’t Smart

This constant problem we have of getting excited about a trend without thinking it through makes us less safe. Edge Computing, like all big technology movements, does make sense for some things, but we shouldn’t blindly adopt it.

Trading one problem, data traffic, for two bigger ones (vulnerability and data corruption) may make sense if you have a huge congestion problem, but with higher-bandwidth capabilities hitting the market, that driver should be waning, not waxing.

What I’m saying is that with any solution, you start with the problem and then craft the solution around it; you don’t run out and try to force-fit a solution to every problem because that will end badly. Yet, that is what we most often seem to do. Maybe this time we should think about not doing that where it doesn’t make sense.

Just remember, we don’t exactly think of lemmings as being smart.

 
