How Is AI Being Applied at the Edge?


By: Craig Matsumoto


(This Tech Primer is sponsored by ZEDEDA.)

The current discussion around AI tends to focus on big numbers: gigawatt-scale datacenters and hundreds of thousands of GPUs. But for many enterprises, AI is more important as a ground-level presence, in the form of edge inference.

Inference is where AI delivers its value. It's the process whereby users present new data that the model digests to form conclusions or make predictions.

Inferencing can take place in those gigawatt-scale halls, of course, but enterprises will also need AI in smaller doses, close to the sources of data. This has been happening for years with machine learning. Consider computer vision, where cameras and software detect faulty products on a manufacturing line.

For some industries, this edge is even more ambitious, with computing deployed to many independent locations, often remote, such as oil rigs or retail stores. This distributed edge can cover thousands of real-world devices, each with limited space and power for computing capacity (think IoT) and often residing in harsh environments. It's a milieu very different from the giant liquid-cooled datacenters used for AI training.

This tech primer examines why AI is important to the distributed edge, how it can be delivered there effectively, and what kinds of use cases are emerging from enterprises.

Data Everywhere

The explosion of available data has given rise to the need for edge computing. Sophisticated data processing at the edge is required to make sense of the volume and variety of data produced by sensors such as cameras, microphones, and more. AI inferencing can extract richer meaning from this data and mine it quickly for signs of trouble.

To do this, however, it makes sense to run inferencing as close as possible to the data, rather than transporting it to an enterprise datacenter or the public cloud. This is especially true for situations requiring real-time action, where the latency to the cloud can't be tolerated.

Even in more forgiving settings, the cost and complication of moving data around might not be worth it. Network transport costs can add up, especially considering the amounts of data that the edge can generate; this is true even with new satellite networks like Starlink. By using AI to summarize events of interest (out-of-band measurements, for instance), only exceptions need to be sent to the cloud.
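The exception-forwarding idea can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation; the band limits and sensor readings are invented for the example.

```python
# Hypothetical sketch: filter sensor readings at the edge and forward
# only out-of-band measurements to the cloud. Thresholds and data are
# illustrative.

def filter_exceptions(readings, low, high):
    """Return only the readings outside the expected [low, high] band."""
    return [r for r in readings if r < low or r > high]

temps = [71.2, 70.8, 95.4, 69.9, 14.1]  # e.g., temperature samples at one site
to_cloud = filter_exceptions(temps, low=60.0, high=90.0)
print(to_cloud)  # only the anomalies leave the site: [95.4, 14.1]
```

In practice the "band" would come from a learned model rather than fixed thresholds, but the economics are the same: most readings never cross the network.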

Moreover, not all edge data needs to be retained for long periods; often it's useful only for short-term decision making. If data is useful only ephemerally, why not act on it locally?

Finally, remote environments such as oil rigs often don't have a choice. Network connectivity might be too sporadic to rely upon when every second matters in preventing a disaster. The better strategy is to keep the work as local as possible, so decisions can be made quickly.

Bringing AI to the Edge

Deploying an AI application to the distributed edge presents some practical challenges, however. A 1,000-site deployment is difficult enough to simply execute, in terms of placing compute devices and turning them on. Once the software is deployed, keeping those sites in sync with new application versions, AI model updates, and security patches is a challenge that will never go away.

Remote locations can also vary greatly. Not all sites will be staffed with IT-trained personnel. Some sites might not be staffed at all.

Enterprises can ease these concerns with a more centralized approach, using an orchestration platform that can deliver AI applications and, ideally, allow untrained personnel to install them. That same platform could be the source for delivering upgrades universally throughout the fleet.

A note on security: The AI model being delivered to the edge is a core piece of intellectual property. It must be protected during provisioning and upgrades, making security a primary concern for whatever orchestration and management platform is in use. Security in situ (physical security) is also a concern. The hardware involved might sit in a location with no physical security, or be small enough to carry away; in either case, the models and data it contains must be secured against that possibility.

Proof Points: Production Use Cases

Still, many enterprises are working through these challenges. The reason is plain: AI can potentially benefit any line of business in every enterprise.

A few enterprise sectors have been quick to recognize those benefits when it comes to a distributed edge setting. Naturally, they are the same sectors that have already recognized the potential of edge computing and are ready to use AI to make further use of the data available.

1. Manufacturing. This is a classic example of machinery becoming smart. AI on edge devices can make predictive maintenance more data-driven by watching for signs of equipment failure. Computer vision, as noted above, has proven valuable in manufacturing and can do more when assisted by AI. Quality control can become sharper by applying AI to camera feeds; for example, some companies are using AI to shift from checking a subset of pieces to checking every piece produced.

AI can address real-time safety and compliance concerns as well, such as detecting when employees aren't using personal protective equipment such as hard hats, gloves, and goggles, or when they step out of designated pathways. As production lines become increasingly automated, they become faster. One prominent manufacturer uses AI to detect when a human is on the line, and slows the line by a factor of 10. All these practices can reduce the number of workplace accidents, improving safety and lowering healthcare expenses for a company.
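The slow-down rule described above reduces to a simple control decision once a detection signal is available. The sketch below is hypothetical: the detector (a vision model in practice) is stubbed out, and the speed values are invented.

```python
# Hypothetical sketch of a human-detection slow-down rule: when a person
# is detected on the line, reduce line speed by a factor of 10.
# In a real system, `human_detected` would come from an on-site vision model.

NOMINAL_SPEED = 100.0  # units per minute, illustrative

def line_speed(human_detected: bool) -> float:
    """Return the target line speed given the current detection state."""
    return NOMINAL_SPEED / 10 if human_detected else NOMINAL_SPEED

print(line_speed(False))  # 100.0
print(line_speed(True))   # 10.0
```

The point of running this logic at the edge is the latency budget: the decision must happen within the detection-to-actuation window, far faster than a round trip to the cloud allows.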

2. Energy. Oil and gas is the poster child for difficult IoT edge environments, given the rugged conditions and lack of internet connectivity. Predictive maintenance here includes not only equipment failure but the subtler task of leak detection, as environmental damage from leaks can cause a shutdown. On the electrical side, smart grid management and power forecasting (think wind speed and sunshine levels) can be especially helpful in reducing outages and predicting output.

3. Transportation is yet another industry where edge computing, and now AI, have aided with predictive maintenance. As with manufacturing, the object of study—the vehicle—is under continual stress, and detecting signs of a pending failure can save the costly delays of a breakdown.

4. Retail. Cameras and computer vision have helped quantify what's happening in a retail space; now AI can help employees take action. AI can flag items that need restocking and can also help with loss prevention, detecting behavioral anomalies indicative of shoplifting in progress. Cameras can similarly watch the queuing at checkout lines to optimize staffing, freeing employees from having to keep one eye open for congestion and improving the overall retail customer experience.

Conclusion

Edge computing provided access to a wealth of data. AI now gives companies more sophisticated ways to act on that information. Managing AI at the edge is a challenge, but the tools already available for edge computing, such as edge orchestration, are also suitable for managing AI applications.

(For information about solutions for applying AI to the edge, please visit ZEDEDA.)