Prior to 2019, most IoT systems consisted of ultra-low-power wireless sensor nodes, usually battery powered, providing sensing capabilities.

Their main purpose was to send telemetry data to the cloud for big data processing. As IoT became the new buzzword and market trend, almost every company pursued a proof of concept (PoC). Cloud service providers offered polished dashboards that presented data in attractive graphs to support the PoC. The main goal of a PoC was to convince stakeholders to invest in IoT and demonstrate a return on investment in order to fund larger projects.

As this ecosystem expands, it’s clear that too much data can be sent back and forth through the cloud. This can clog bandwidth pipes and make it difficult for data to get into and out of the cloud fast enough. At best this adds annoying latency; in extreme cases, it can break applications that need guaranteed throughput.

While standards such as 5G and Wi-Fi 6E promise major improvements in bandwidth and transfer speeds, the number of IoT nodes communicating with the cloud has exploded. In addition to the sheer number of devices, costs are also increasing. Early IoT infrastructure and platform investments need to be monetized, and as more nodes are added, the infrastructure needs to be scalable and profitable.

Around 2019, the idea of edge computing became a popular solution. Edge computing enables more advanced processing in local sensor networks. This minimizes the amount of data that needs to go through the gateway to the cloud and back. This directly reduces costs and frees up bandwidth for other nodes when needed. Each node transmits less data, potentially reducing the number of gateways required to collect and transmit data to the cloud.

Another technology trend that is enhancing edge computing is artificial intelligence (AI). Early AI services were mostly cloud-based. As innovations made algorithms more efficient, AI moved rapidly to end nodes, and its use is becoming standard practice. A notable example is the Amazon Alexa voice assistant. Detecting the trigger word “Alexa” and waking the device is a common use of edge AI. In this case, trigger word detection is done locally on the system’s microcontroller (MCU). After a successful trigger, the rest of the command goes through the Wi-Fi network to the cloud, where the most demanding AI processing is done. In this way, wake-up delay is minimized for the best user experience.
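The wake-word pattern described above can be sketched as a gating loop: a cheap always-on detector runs locally on every frame, and only a confirmed trigger invokes the expensive cloud path. This is a minimal illustrative sketch, not a real keyword-spotting model or Alexa API; all names are hypothetical.

```python
# Hypothetical sketch of the edge-AI trigger pattern: a cheap local
# check gates the expensive cloud offload. A real system would run a
# small neural keyword-spotting model instead of an energy threshold.

def local_trigger(frame, threshold=0.5):
    """Stand-in for the on-MCU wake-word detector: a trivial
    mean-energy check over one audio frame."""
    energy = sum(s * s for s in frame) / len(frame)
    return energy > threshold

def process_audio(frames, send_to_cloud):
    """Run the cheap detector on every frame; offload to the
    cloud only when it fires. Returns the number of offloads."""
    sent = 0
    for frame in frames:
        if local_trigger(frame):   # cheap, runs constantly on the MCU
            send_to_cloud(frame)   # expensive, runs only after a trigger
            sent += 1
    return sent
```

The design point is that the always-on path stays within the MCU's power budget, while cloud bandwidth is consumed only for the rare frames that matter.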

In addition to addressing bandwidth and cost issues, edge AI processing brings additional benefits to applications. For example, in predictive maintenance, small sensors can be added to electric motors to measure temperature and vibration. A well-trained AI model can very effectively predict when a motor has experienced, or is about to experience, bearing damage or overload. This early warning is critical for servicing the motor before it fails completely. Such predictive maintenance greatly reduces production line downtime because equipment is repaired proactively, providing huge cost savings with minimal loss of efficiency. As Benjamin Franklin said, “An ounce of prevention is worth a pound of cure”.
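A drastically simplified version of this idea can be shown with a vibration RMS check against a healthy baseline. This is an assumption-laden sketch, not a trained model: real predictive maintenance would learn from vibration spectra and temperature trends, and the factor-of-two threshold here is arbitrary.

```python
# Toy predictive-maintenance check: flag a motor when its vibration
# RMS drifts well above a known healthy baseline. A deployed system
# would use a trained model over richer features, not a fixed factor.

import math

def rms(samples):
    """Root-mean-square of a list of vibration samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def needs_service(vibration, baseline_rms, factor=2.0):
    """Flag the motor when vibration RMS exceeds `factor` times
    the healthy baseline (factor=2.0 is an illustrative choice)."""
    return rms(vibration) > factor * baseline_rms
```

Running this check on the end node means only the occasional "needs service" event is transmitted, rather than a continuous stream of raw vibration samples.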

As more sensors are added, gateways can also be overwhelmed with telemetry data from local sensor networks. In this case, there are two options to alleviate this data and network congestion. More gateways can be added, or more edge processing can be pushed to the end nodes.

The idea of pushing more processing to end nodes (usually sensors) is underway and gaining momentum quickly. End nodes typically operate in the mW range and sleep in the µW range most of the time. End nodes also have limited processing power due to their low power consumption and cost requirements. In other words, they have very limited resources.

For example, a typical sensor node can be controlled by an MCU as simple as an 8-bit processor with 64 kB of flash memory and 8 kB of RAM with a clock speed of about 20 MHz. Alternatively, the MCU could be as complex as an Arm Cortex-M4F processor with 2 MB of flash memory and 512 kB of RAM and a clock speed of around 200 MHz.

Adding edge processing to resource-constrained end-node devices is challenging and requires innovation and optimization at the hardware and software levels. Still, since the end nodes will be in the system anyway, it is economical to add as much edge processing power as possible.

To summarize the evolution of edge processing: end nodes will continue to become smarter, but they must keep respecting their tight cost and power budgets. Edge processing will remain popular, as will cloud processing. Being able to assign functions to the right locations allows the system to be optimized for each application, ensuring the best performance at the lowest cost. Efficiently allocating hardware and software resources is key to balancing these competing goals: the right balance minimizes data transfers to the cloud, minimizes the number of gateways, and puts as much functionality in the sensors or end nodes as possible.

Example of an ultra-low power edge sensor node

Developed by ON Semiconductor, the RSL10 Smart Shot Camera addresses these various challenges with a design that can be used as-is or easily added to applications. The event-triggered, AI-ready imaging platform uses many key components developed by ON Semiconductor and ecosystem partners to provide engineering teams with an easy way to access AI-enabled object detection and recognition capabilities in a low-power format.

The approach uses the small but powerful ARX3A0 CMOS image sensor to capture a single image frame, which is then uploaded to a cloud service for processing. Before being sent, images are processed and compressed by Sunplus’ image signal processor (ISP). With JPEG compression applied, the image data transfers much faster to a gateway or phone over a Bluetooth Low Energy (BLE) link (a companion app is also available). The ISP is a good example of local (end-node) edge processing: images are compressed locally, so less data is sent over the air to the cloud, yielding significant power and network cost savings through reduced airtime.
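The airtime saving from on-node compression can be estimated with back-of-the-envelope arithmetic. All numbers below are illustrative assumptions, not measured figures for this product: a 560 × 560 8-bit monochrome frame, a roughly 10:1 JPEG ratio, and about 100 kB/s of usable BLE application throughput.

```python
# Rough airtime estimate for raw vs. JPEG-compressed image transfer
# over BLE. Every constant here is an assumption for illustration.

RAW_BYTES = 560 * 560          # one uncompressed 8-bit frame (assumed)
COMPRESSION_RATIO = 10         # assumed JPEG compression ratio
BLE_THROUGHPUT = 100_000       # bytes/s of usable throughput (assumed)

def airtime_s(n_bytes, throughput=BLE_THROUGHPUT):
    """Seconds the radio must stay active to move n_bytes."""
    return n_bytes / throughput

raw_time = airtime_s(RAW_BYTES)                        # ~3.1 s on air
jpeg_time = airtime_s(RAW_BYTES // COMPRESSION_RATIO)  # ~0.3 s on air
```

Since radio transmission typically dominates an end node's energy budget, cutting airtime by the compression ratio translates almost directly into battery life.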

The image sensor is designed for ultra-low-power operation, consuming only 3.2 mW while active. It can also be configured to perform some on-sensor preprocessing that further reduces active power, such as defining a region of interest. This allows the sensor to remain in a low-power mode until an object or movement is detected in the region of interest.
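The region-of-interest idea amounts to comparing only a small window of pixels between frames and waking the system when that window changes. The following is a pure-Python stand-in for what the sensor does in hardware; the function name, ROI format, and threshold are all illustrative.

```python
# Illustrative region-of-interest (ROI) gating: compare only the ROI
# pixels of two frames and report motion when the mean absolute
# change exceeds a threshold. The real sensor does this on-chip.

def roi_motion(prev, curr, roi, threshold=10.0):
    """prev/curr: 2D lists of pixel values (row-major).
    roi: (x0, y0, x1, y1), half-open bounds. Returns True when the
    mean absolute pixel change inside the ROI exceeds `threshold`."""
    x0, y0, x1, y1 = roi
    diffs = [
        abs(curr[y][x] - prev[y][x])
        for y in range(y0, y1)
        for x in range(x0, x1)
    ]
    return sum(diffs) / len(diffs) > threshold
```

Because only the ROI pixels are read and compared, the rest of the pipeline can stay asleep until this cheap check fires.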

Further processing and BLE communication are provided by a fully certified RSL10 system-in-package (RSL10 SIP), also from ON Semiconductor. The device offers industry-leading low-power operation and short time to market.

(Figure 1. The RSL10 Smart Shot Camera contains all the components needed for a rapidly deployable edge processing node.)

As shown in Figure 1, the board includes multiple sensors for triggering activities, including motion sensors, accelerometers, and environmental sensors. Once triggered, the board can send the image to a smartphone via BLE; the companion app can then upload it to a cloud service such as Amazon Rekognition. The cloud service implements deep-learning machine vision algorithms; for the RSL10 Smart Shot Camera, it is set up for object detection. After the image is processed, the smartphone app is updated with what the algorithm detected and its confidence. These cloud-based services are very accurate because they are trained on billions of images.
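A companion app consuming such a result might summarize it as label/confidence pairs. The dictionary shape below mirrors Amazon Rekognition's DetectLabels response (a "Labels" list of entries with "Name" and "Confidence" fields), but treat the exact fields as an assumption rather than a full API contract; the helper itself is hypothetical.

```python
# Sketch of how a companion app might summarize an object-detection
# response. The dict shape follows Rekognition's DetectLabels output;
# the summarize_labels helper is illustrative, not a library function.

def summarize_labels(response, min_confidence=80.0):
    """Return (name, confidence) pairs above a confidence cutoff,
    highest confidence first."""
    hits = [
        (label["Name"], label["Confidence"])
        for label in response.get("Labels", [])
        if label["Confidence"] >= min_confidence
    ]
    return sorted(hits, key=lambda h: h[1], reverse=True)
```

Filtering by confidence on the phone keeps the user-facing result short even when the cloud service returns many speculative labels.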

Conclusion

As mentioned earlier, IoT is changing and becoming more optimized for massive and cost-effective scaling. New connectivity technologies are continually being developed to help address power, bandwidth and capacity issues. AI continues to evolve, becoming more capable and efficient, enabling it to move to the edge and even end nodes. IoT is growing and adapting to reflect continued growth and prepare for future growth.

Reviewing Editor: Guo Ting
