Author: STMicroelectronics Asia Pacific Artificial Intelligence Innovation Center

1. Introduction to AI

AI (artificial intelligence) originated at the summer workshop held at Dartmouth College in 1956, where the term "artificial intelligence" was formally proposed for the first time. Breakthroughs in computing power have driven successive waves of AI development, and in recent years the growing availability of big data has brought about the third such wave. In 2015, deep-learning-based algorithms surpassed human accuracy in image recognition for the first time in the ImageNet competition, a major milestone for the field. Following the breakthroughs in computer vision, deep learning has also achieved great success in other research areas such as speech recognition and natural language processing. Today, artificial intelligence shows great potential in all aspects of everyday life.

Based on these stages of development, the main concepts can be roughly explained as follows.

AI: any technology that enables a computer to simulate human behavior.

Machine learning: a subset of artificial intelligence (AI); algorithms and methods whose performance improves by learning from data.

Deep learning: a subset of machine learning (ML); learning algorithms that extract valuable information from large amounts of data using multi-layer structures inspired by the neural networks of the human brain.

2. Deep edge AI: a new force in artificial intelligence

At present, because of its computing-power requirements, artificial intelligence technology is mainly deployed in cloud scenarios. Due to data-transmission latency and other factors, cloud-based solutions may not meet some users' needs for data security, system responsiveness, privacy, and local node power consumption. In centralized AI solutions, embedded devices (smart speakers, wearables, etc.) usually rely on cloud servers for their AI capabilities. In deep edge AI solutions, the embedded devices themselves run AI algorithms locally, providing real-time environment perception, human-computer interaction, decision making and control, and other functions.

Moving inference to the deep edge brings several advantages: better system responsiveness, better protection of user privacy (not all data has to be transmitted to the cloud through multiple systems), and lower connectivity cost and power consumption.

According to ABI Research, global shipments of deep edge AI devices will reach 2.5 billion units by 2030. STMicroelectronics notes that more and more communities and ecosystems are forming around deep edge AI technology, focused on autonomous, low-power, cost-effective embedded solutions. As a main driver of this trend, STMicroelectronics has invested heavily in AI to help developers quickly deploy AI applications on embedded systems based on microcontrollers/microprocessors (the STM32 series) and sensors (MEMS, ToF, ...). STMicroelectronics provides a set of AI tools for the STM32 series and for MEMS sensors with an integrated machine learning core (MLC), which speed up the development cycle and optimize trained AI models (STM32Cube.AI).

As a general-purpose technology, artificial intelligence has achieved remarkable results in many fields. We believe that more and more intelligent terminal devices will have an ever more direct and positive impact on human life.

3. Rapidly deploy AI applications with the STMicroelectronics ecosystem

STMicroelectronics provides an ecosystem of hardware and software that helps developers quickly and easily build a variety of deep edge AI algorithms for sensors and microcontrollers.

Machine learning in the MEMS sensor ecosystem helps designers use AI at the edge for gesture recognition, activity recognition, anomaly detection, and more, through a decision-tree classifier running on the sensor's embedded engine, the machine learning core (MLC).

IoT solution developers can therefore deploy any of our sensors that embed a machine learning core in a rapid prototyping environment and quickly develop ultra-low-power applications using the Unico-GUI tool.

With its built-in low-power design, advanced AI event detection, wake-up logic, and real-time edge computing, the MLC in the sensor greatly reduces the amount of data the system has to transmit and the processing burden on the network.

Developers who decide to build a solution around the machine learning core in the sensor need a new methodology to create their applications.

To create any machine learning algorithm, the starting point is the data and the definition of the classes that describe the problem to be solved. You can follow five steps to create and run an AI application in the sensor. Unico-GUI is a graphical user interface that supports all five steps, including decision tree generation.
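For intuition, the decision tree evaluated by the MLC is conceptually just a chain of threshold comparisons on features computed over a window of sensor samples (mean, variance, peak-to-peak, and so on). The C sketch below is purely illustrative: the features, thresholds, and activity classes are hypothetical, and in a real design the tree is trained offline and loaded into the sensor as a configuration file generated by Unico-GUI rather than compiled as application code.

/* Illustrative only: a tiny decision tree of the kind the MLC evaluates on-sensor.
 * Features, thresholds, and classes below are hypothetical; a real tree is produced
 * by the Unico-GUI workflow and loaded into the sensor as a configuration. */

typedef enum { ACTIVITY_STATIONARY, ACTIVITY_WALKING, ACTIVITY_RUNNING } activity_t;

typedef struct {
    float accel_norm_mean;      /* mean of the acceleration norm over one window     */
    float accel_norm_variance;  /* variance of the acceleration norm over the window */
} features_t;

static activity_t classify(const features_t *f)
{
    if (f->accel_norm_variance < 0.05f)   /* almost no motion energy */
        return ACTIVITY_STATIONARY;
    if (f->accel_norm_mean < 1.5f)        /* moderate motion */
        return ACTIVITY_WALKING;
    return ACTIVITY_RUNNING;              /* strong motion */
}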

To help developers quickly deploy trained AI models on STM32, we have developed an easy-to-use and efficient tool, STM32Cube.AI (also known as X-CUBE-AI). X-CUBE-AI analyzes a trained neural network, converts it into optimized C code, and can run automatic tests on the STM32 target. X-CUBE-AI is a very powerful tool, and more of its capabilities will be introduced in subsequent articles.
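As a rough sketch of how application code typically calls the C code generated by X-CUBE-AI (here assuming the model is named "network"; the exact headers, macros, and function names such as ai_network_create, ai_network_init, and ai_network_run depend on the tool version and project, so treat this as illustrative rather than a drop-in):

/* Illustrative sketch of calling an X-CUBE-AI-generated model named "network"
 * from an STM32 application. Generated header, macro, and function names vary
 * by X-CUBE-AI version; check the code the tool actually emits for your project. */
#include "network.h"        /* generated network API          */
#include "network_data.h"   /* generated weights/activations  */

static ai_handle net = AI_HANDLE_NULL;

AI_ALIGNED(4) static ai_u8    activations[AI_NETWORK_DATA_ACTIVATIONS_SIZE];
AI_ALIGNED(4) static ai_float in_data[AI_NETWORK_IN_1_SIZE];   /* filled with sensor data   */
AI_ALIGNED(4) static ai_float out_data[AI_NETWORK_OUT_1_SIZE]; /* receives the model output */

int ai_boot(void)
{
    ai_error err = ai_network_create(&net, AI_NETWORK_DATA_CONFIG);
    if (err.type != AI_ERROR_NONE)
        return -1;

    const ai_network_params params = AI_NETWORK_PARAMS_INIT(
        AI_NETWORK_DATA_WEIGHTS(ai_network_data_weights_get()),
        AI_NETWORK_DATA_ACTIVATIONS(activations));
    if (!ai_network_init(net, &params))
        return -1;
    return 0;
}

int ai_infer(void)
{
    ai_buffer inputs[AI_NETWORK_IN_NUM]   = AI_NETWORK_IN;
    ai_buffer outputs[AI_NETWORK_OUT_NUM] = AI_NETWORK_OUT;
    inputs[0].data  = AI_HANDLE_PTR(in_data);
    outputs[0].data = AI_HANDLE_PTR(out_data);
    return ai_network_run(net, inputs, outputs);  /* returns the number of batches processed */
}

In the projects generated through STM32CubeMX, an equivalent wrapper is typically already provided, so developers mostly only have to fill the input buffer with sensor data and interpret the model output.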

To show how different AI applications can run directly on STM32 and to speed up development, validation, and deployment for STM32 embedded developers, STMicroelectronics provides many AI applications as references.

Developers can build on these embedded AI application software packages to quickly deploy their own custom models.

More details will be covered in subsequent articles.

The AI development tools and embedded application packages are summarized below.

Where there is STM32, there is deep edge AI.

All STM32 MCUs support AI model deployment. MCUs with lower computing power support machine learning (ML) algorithms; MCUs with higher computing power also support neural network (deep learning, DL) models.

The evaluation boards that can run the application examples are listed below.
