“Edge computing provides vital, actionable insights in real-time”
Edge computing brings data processing and streaming analytics next to the devices producing the data. We spoke with Ramya Ravichandar, VP of Products at FogHorn, about edge computing, strong and weak edge solutions, and what the edge-cloud relationship will look like moving forward.
JAXenter: What is edge computing?
Ramya Ravichandar: Analyzing raw data in the cloud is often expensive and time-intensive due to data transport and processing costs. As a result, organizations frequently resort to down-sampled and/or time-deferred data to balance cost and timeliness, making it easy to miss anomalies in the numbers. Looking ahead, industry analysts expect that by 2025, 75% of enterprise-generated data will be created and processed outside the cloud for these reasons; edge computing is a key enabler of this trend.
Edge computing brings data processing and streaming analytics next to the devices producing the data (think pumps, machines, vehicles, or other local assets). Organizations started implementing edge computing to remedy the latency, security, and bandwidth costs associated with transmitting large amounts of data from centralized data centers to the cloud. Today, edge computing is critical for a wide variety of use cases where real-time capabilities are required, such as worker safety monitoring or autonomous driving. Industries with security concerns or limited access to bandwidth, such as oil and gas, mining, and fleet management, also benefit greatly from edge computing.
JAXenter: How does edge intelligence enhance conventional edge capabilities?
Ramya Ravichandar: There’s a lot of variety in edge computing solutions, and many lack a way to derive actionable insights from the data they collect. Organizations receive this data but often don’t know how to analyze it to increase operational efficiency, so it usually requires further processing, typically in the cloud and with support from experienced data scientists.
Organizations can only make the right data-driven decisions if the data used is correct and suitable for the use case at hand. Edge intelligence builds on the typical data ingestion capabilities common among edge computing platforms with layers of advanced functions, like machine learning (ML) and artificial intelligence (AI). Edge intelligence lifts workloads off the cloud and data centers by providing analytics and actionable insights right at the edge. Rather than merely ingesting and preparing data at the point of its creation, intelligent edge capabilities derive actionable insights from the streaming data and respond to them through real-time alerts to operators and other enterprise systems, or even through closed-loop control of the asset. In short, they don’t rely on a cloud connection for advanced data enrichment and processing.
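As a rough illustration of that local, cloud-independent loop, here is a minimal Python sketch of edge-side streaming analytics: a rolling window over sensor readings raises an alert on the device itself, with no cloud round-trip. The class, field names, and thresholds are all hypothetical assumptions for illustration, not FogHorn's API.

```python
from collections import deque


class EdgeAnalyzer:
    """Toy edge-side stream analyzer (illustrative only): keeps a rolling
    window of sensor readings and raises an alert locally when a new value
    deviates sharply from the recent baseline."""

    def __init__(self, window=5, z_threshold=2.0):
        self.window = deque(maxlen=window)
        self.z_threshold = z_threshold

    def ingest(self, value):
        alert = None
        # Only alert once enough baseline history has accumulated.
        if len(self.window) == self.window.maxlen:
            mean = sum(self.window) / len(self.window)
            var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
            std = var ** 0.5 or 1e-9  # avoid division by zero on a flat signal
            z = abs(value - mean) / std
            if z > self.z_threshold:
                alert = {"alert": "anomaly", "value": value, "z": round(z, 2)}
        self.window.append(value)
        return alert


analyzer = EdgeAnalyzer(window=5, z_threshold=2.0)
readings = [10.0, 10.1, 9.9, 10.0, 10.2, 25.0]  # last reading is a spike
alerts = [a for a in (analyzer.ingest(r) for r in readings) if a]
print(alerts)
```

In a real deployment the alert would feed an operator dashboard or a closed-loop actuator; this sketch only shows that the decision itself needs no external connectivity.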
JAXenter: Why does the difference matter, beyond security and cost savings?
Ramya Ravichandar: When organizations build ML models in the cloud, they assume the model will remain accurate for a certain period of time, because it has been trained on a particular set of data. If new data patterns emerge, or if the model has not been trained on all possible data sets or workflows, the model may be biased and stop providing accurate results. With an intelligent edge, the models can be continuously updated with new, meaningful data, and the learning sets refreshed alongside them.
For example, in a factory, a model can be deployed to detect defects on a part-inspection assembly line or to proactively identify patterns that may lead to defects over time. Often, after a few months, the model’s accuracy diminishes as new data patterns emerge. If the software relies exclusively on traditional analytics, the stale model’s output can be misleading, and the opportunity cost can be significant.
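The drift scenario above can be sketched in a few lines: a toy monitor tracks the deployed model's rolling accuracy against recently labeled parts and flags when it drops below a floor, signalling that retraining on fresh edge data is due. The class, window size, and floor value are illustrative assumptions, not part of any real product.

```python
from collections import deque


class DriftMonitor:
    """Toy drift monitor (illustrative only): tracks a deployed model's
    rolling accuracy on recently labeled samples and flags when it falls
    below a floor, signalling that the model needs retraining."""

    def __init__(self, window=100, floor=0.9):
        self.recent = deque(maxlen=window)  # True = prediction was correct
        self.floor = floor

    def record(self, predicted, actual):
        self.recent.append(predicted == actual)
        accuracy = sum(self.recent) / len(self.recent)
        return accuracy < self.floor  # True means "retrain on fresh data"


monitor = DriftMonitor(window=4, floor=0.75)
# A new defect pattern appears that the stale model keeps misclassifying.
outcomes = [("ok", "ok"), ("ok", "ok"), ("ok", "defect"), ("ok", "defect")]
flags = [monitor.record(p, a) for p, a in outcomes]
print(flags)
```

Once the flag fires, an intelligent edge can fold the mislabeled samples back into the learning set rather than waiting for a periodic cloud retrain.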
Using these capabilities, IoT projects can extract a realistic view of daily operations and work towards a new level of predictability that will dramatically alter the industry landscape as we know it. Organizations can proactively interface with live data streams and act on intelligence at or near the source, leading to increased overall productivity, efficiency, and cost savings.
JAXenter: What will the edge-cloud relationship look like moving forward?
Ramya Ravichandar: By implementing edge-native solutions first, rather than cloud-first ones, organizations can synthesize data locally, run machine learning inference on raw data sets, and deliver enhanced predictive capabilities (versus cloud-heavy, expensive, retroactive insights). By running ‘edgified’ versions of ML models in real time, organizations enable faster responses to real-time events and can act on events of interest at the source. There is a virtuous cycle between the edge and the cloud that enables federated learning and localized insights, ensuring a harmonious interplay that leverages the strengths of each ecosystem.
JAXenter: How can companies differentiate between strong and weak edge solutions?
Ramya Ravichandar: Intelligent edges rely on a hyper-efficient complex event processor (CEP) that cleanses, normalizes, filters, contextualizes, and aligns “dirty” or raw streaming sensor data as it’s produced. Without a CEP, latency is higher and the data remains dirty, making analytics much less accurate and significantly compromising ML models. A CEP enhances data pre- and post-processing, so model size, layers, and the memory needed for execution are often reduced by 10x or more once a model has been prepared for the edge.
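To make the cleanse/normalize/contextualize steps concrete, here is a minimal Python sketch of CEP-style pre-processing over a simple temperature stream. The stage names, value ranges, and unit conversion are illustrative assumptions, not FogHorn's CEP.

```python
# Hypothetical stages mirroring the cleanse -> normalize -> contextualize
# pipeline described above; all names and thresholds are illustrative.

def cleanse(readings):
    # Drop missing samples and physically impossible sensor values.
    return (r for r in readings if r is not None and -50 <= r <= 500)


def normalize(readings):
    # Convert raw Fahrenheit readings to Celsius.
    return ((r - 32) * 5 / 9 for r in readings)


def contextualize(readings, asset_id):
    # Tag each cleaned value with the asset it came from, so downstream
    # analytics and ML models receive aligned, labeled events.
    return ({"asset": asset_id, "temp_c": round(r, 1)} for r in readings)


raw = [212.0, None, 9999.0, 32.0]  # "dirty" stream: a gap and a bad sample
events = list(contextualize(normalize(cleanse(raw)), asset_id="pump-7"))
print(events)
```

Because each stage is a lazy generator, the pipeline processes readings as they arrive rather than in batches, which is the property that keeps latency low in a real CEP.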
Weak edge solutions claim to process data at the edge but rely on sending it back to the cloud for batch or micro-batch processing. Strong edge solutions can run offline and on constrained compute while delivering the same level of accurate insights (or more), with no real dependency on external systems.
JAXenter: What are some of the most common use cases of edge intelligence?
Ramya Ravichandar: Many industrial organizations struggle to reduce the cost of poor quality in terms of scrap, rework, returns, and defects. Computer vision systems have been employed to help monitor and identify quality issues. Unfortunately, existing vision systems produce a large number of false-positive defect notifications, which operators often ignore due to their inadequate success rates. Through real-time ML and AI, edge intelligence enables manufacturers to identify anomalies and patterns more accurately using live video. This allows organizations to detect and address visual product defects more effectively, resulting in higher yield, improved product quality, and lower costs.
Moreover, oil and gas organizations are continually seeking new ways to use sensor data to become more proactive at reducing risk, lowering maintenance costs, and increasing overall uptime. Edge intelligence plays a critical role here by monitoring and performing analytics on streaming data in real time and by responding automatically to issues detected by the sensors. For instance, video analytics with edge intelligence in flare stack monitoring can help reduce flare stack emissions, immediately alert workers, or even shut down operations when emissions fall outside an acceptable range.
In summary, intelligent edge computing provides vital, actionable insights in real time, helping organizations conquer fundamental latency, bandwidth, and security challenges and opening the door to advanced analytics.