Since AI was introduced at the Dartmouth Conference in 1956, AI research has gone through several boom-and-bust cycles. Through these ups and downs, AI has undeniably made great progress in both theory and practice. Recently in particular, AI technology represented by deep learning has achieved breakthroughs in computer vision, machine learning, natural language processing, and robotics, profoundly changing our lives. In this transformation, laboratory results can move into industry almost immediately, something rare in the history of technology. In May 2016, four departments including the National Development and Reform Commission jointly issued a three-year action implementation plan for "Internet Plus" AI. Premier Li Keqiang also mentioned developing the AI industry in his government work report. The Ministry of Science and Technology's "Science and Technology Innovation 2030 Major Project" added "AI 2.0", elevating AI to a national strategy. Facing this AI upsurge, how should we understand the current situation, assess its progress, and recognize its capabilities and limitations? This article introduces AI from several angles: its core theoretical foundations, some current problems, and possible future directions.

1、 Core foundations of AI

1. Specialized AI algorithms

Deep learning is essentially an autonomous learning system that grew out of traditional pattern recognition. Trained on large amounts of data, a deep learning network automatically discovers patterns in that data and then uses those patterns to make predictions about unseen data. Taking the cat-vs-dog classification task from the Kaggle competition as an example, the steps are roughly: (1) let the computer "see" tens of thousands of images containing cats and dogs; (2) the program builds patterns by classifying and clustering the image data (edges, shapes, colors, distances between shapes, and so on), and enough such patterns yield a final predictive model; (3) run the program on a new image set, comparing each image against the predictive model to decide whether it shows a cat or a dog.
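Below is a minimal sketch of that three-step workflow using Keras; the `data/train` directory layout (one subfolder per class) and the training settings are assumptions for illustration, not a production recipe.

```python
# A minimal cat/dog classifier sketch in Keras.
# Assumption: images live under data/train/cats and data/train/dogs.
import tensorflow as tf
from tensorflow.keras import layers

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(128, 128), batch_size=32)

model = tf.keras.Sequential([
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"),   # low-level patterns: edges, colors
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),   # higher-level patterns: shapes
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),     # probability of "dog" vs "cat"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)    # steps (1)-(2): see images, build patterns
# step (3): model.predict(new_images) classifies an unseen image set
```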

Deep learning algorithms use artificial neural networks that loosely mimic the brain's neural networks to achieve brain-like function. In operation, the algorithm runs many training cycles, improving its predictions on each cycle by narrowing the gap between the model's output and reality, until it converges to a predictive model.
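The toy loop below shows that cycle in its simplest form: a two-parameter model fit by gradient descent on made-up data, shrinking the gap between prediction and reality on every pass.

```python
# Gradient descent on y = w*x + b, illustrating the training cycle.
import numpy as np

X = np.array([[0.0], [1.0], [2.0], [3.0]])   # inputs (synthetic)
y = np.array([[1.0], [3.0], [5.0], [7.0]])   # targets follow y = 2x + 1
w, b = 0.0, 0.0                              # start far from the truth

for cycle in range(1000):
    pred = w * X + b                         # current model's prediction
    error = pred - y                         # the gap to narrow
    w -= 0.01 * 2 * np.mean(error * X)       # step the weight downhill
    b -= 0.01 * 2 * np.mean(error)           # step the bias downhill

print(w, b)  # approaches 2 and 1 as the gap closes
```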

Face recognition in the security industry is a good industrial application of deep learning. A face recognition algorithm trains a model on large amounts of labeled face data and learns to locate the key points of a face automatically. When the algorithm is invoked, the device captures multiple key points and feeds them to the deep learning model; the built-in engine then runs the predictive model to determine who is who.
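As a hedged sketch of such a pipeline, the snippet below uses the open-source `face_recognition` Python library; the image filenames are placeholders, and a production security system would of course use its vendor's own engine.

```python
# Compare a camera frame against an enrolled face (sketch).
import face_recognition

known = face_recognition.load_image_file("employee.jpg")     # enrolled photo
known_encoding = face_recognition.face_encodings(known)[0]   # key-point embedding

frame = face_recognition.load_image_file("camera_frame.jpg") # captured image
for encoding in face_recognition.face_encodings(frame):
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    print("match" if match else "unknown person")
```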

Reinforcement learning is also an autonomous learning system, but it learns mainly through repeated trials: the answer emerges from maximizing reward over a limited set of actions. In other words, it learns by doing and draws its results from practice. It is like learning to ride a bicycle as a child: at first you fall often, but as the falls accumulate you gradually get the knack. That learning process is reinforcement learning. When a computer uses reinforcement learning, it tries different behaviors, learns from feedback whether a behavior produced a better result, and then remembers the behaviors that worked. More precisely, the computer revises its algorithm autonomously over many iterations until it can make correct judgments.

A good example of reinforcement learning is teaching a robot to walk. The robot first takes a big step forward and falls; that fall is the feedback the reinforcement learning system reacts to. Since the feedback is negative, the system keeps adjusting, and after many rounds of negative feedback it concludes that the robot should take small steps, until the robot can walk without falling.
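The tabular Q-learning toy below captures that trial-and-error loop in miniature: a walker on positions 0 to 4 must reach position 4, with the environment and rewards made up for illustration.

```python
# Tabular Q-learning: learn to step forward by trial, error, and reward.
import random

n_states, actions = 5, [0, 1]              # 0: step back, 1: step forward
Q = [[0.0, 0.0] for _ in range(n_states)]  # remembered value of each action

for episode in range(500):
    s = 0
    while s != n_states - 1:
        if random.random() < 0.1:          # occasionally try something new
            a = random.choice(actions)
        else:                              # otherwise use what worked before
            a = max(actions, key=lambda x: Q[s][x])
        s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
        r = 1.0 if s2 == n_states - 1 else -0.01   # reward reaching the goal
        Q[s][a] += 0.1 * (r + 0.9 * max(Q[s2]) - Q[s][a])  # learn from feedback
        s = s2

print([max(row) for row in Q])  # values rise along the path to the goal
```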

Both deep learning and reinforcement learning are autonomous learning systems. The difference is that deep learning learns from a training set and then applies what it has learned to new data, a kind of static learning, whereas reinforcement learning is a process of continuous trial and error, a kind of dynamic learning. In addition, the deep learning and reinforcement learning systems on the market today mostly rely on supervised learning; unlike unsupervised learning, which finds regularities in a data set automatically, supervised learning needs large amounts of labeled training data from which to extract those regularities.

Both deep learning and reinforcement learning are specialized AI algorithms. When facing a specific task (such as Go, classification, or detection), the single task, clear requirements, well-defined application boundary, rich domain knowledge, and relatively simple model allow AI to achieve single-point breakthroughs that surpass human intelligence on that one test: AlphaGo defeated human champions at Go, AI programs have surpassed human performance in large-scale image recognition and face recognition, and AI systems have reached the level of professional physicians in diagnosing skin cancer.

2. Computing power

Besides specialized AI algorithms, the development of computer hardware in recent years underpins AI's progress. One reason AI entered its first trough early in its history was the lack of computing power. Training a deep neural network is, at its core, matrix arithmetic; backpropagation minimizes the loss of the whole network, and these operations parallelize well. NVIDIA GPUs can greatly accelerate deep network training, and more and more traditional IT vendors are building GPU clusters with them. Intel's Xeon chips provide powerful multi-core computing, can be combined in multi-socket servers, and support parallel optimized computation across multi-node clusters; lighter workloads can be completed directly on CPUs. Intel is currently developing chips that combine CPU and FPGA computing.
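The PyTorch sketch below shows the basic shape of this: one large matrix multiplication placed on a GPU when available, falling back to the CPU otherwise.

```python
# Run the core operation of network training, a matrix multiply, on a GPU.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"  # CPU fallback
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b    # the kind of operation backpropagation parallelizes so well
print(device, c.shape)
```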

Dedicated neural network chips have developed rapidly along several technical routes, including FPGA, DSP, ASIC, and ARM extension modules. They offer high speed, high bandwidth, and low power consumption, and are aimed mainly at mobile and embedded systems. Many manufacturers burn basic image processing and object recognition models and algorithms into their chips, which are quickly integrated into embedded devices. Current applications include face recognition, photo classification, image processing, image style transfer, image super-resolution reconstruction, license plate recognition, intelligent security, autonomous driving, and UAV attitude stabilization and position tracking.

3. Data

We are now in an era of data explosion. According to the white paper "Data Age 2025", sponsored by Seagate Technology and released by IDC, the global datasphere will expand to 163 ZB by 2025, roughly ten times the 16.1 ZB generated in 2016; the amount of global data subject to data analysis will grow fiftyfold, to 5.2 ZB; and the amount of analyzed data "touched" by cognitive systems will grow a hundredfold, to 1.4 ZB. This flood of new data has spawned a series of new technologies, and AI has turned data analysis from an uncommon, retrospective practice into a driver of strategic decisions and actions.

2、 Existing problems

1. Data cost

As noted above, applying deep learning networks widely in industry requires large amounts of labeled training data to achieve good results, and labeling that data must be done manually, at huge labor cost. The Internet holds an inexhaustible supply of data, but most of it is unlabeled. Two directions may help:

(1) Unsupervised learning

Compared with supervised learning, unsupervised learning can make full use of this unlabeled data without spending large amounts of human and material resources on annotation, which greatly reduces the cost of training a model; this matters all the more because current deep learning models need huge volumes of data to train.
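The clustering sketch below shows the idea with scikit-learn: k-means groups unlabeled points with zero annotation cost; the two-blob data is synthetic.

```python
# Unsupervised learning: k-means discovers groups in unlabeled data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)),    # unlabeled blob A
               rng.normal(5, 1, (100, 2))])   # unlabeled blob B

labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(labels[:5], labels[-5:])  # the two groups emerge without any labels
```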

(2) Few-shot learning

Machine learning ability still falls far short of human learning ability. A child needs only a few photos of cats to recognize cats accurately, while a deep learning model needs millions of images; today's popular autonomous driving technology needs millions of kilometers of training to reach a satisfactory level, while a person needs only a few thousand kilometers to become a skilled driver. Few-shot learning is in fact closer to the human model of intelligence, and developing it would let AI be applied in many more fields. A major breakthrough in this area was the "Bayesian Program Learning" method proposed in 2015 by three researchers from MIT, New York University, and the University of Toronto, who used it to learn new handwritten characters from a single example.
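The toy below is not Bayesian Program Learning itself, only a much simpler nearest-class-mean illustration of the few-shot idea: with three made-up embedded examples per class, a new sample is assigned to the closest class average.

```python
# Few-shot classification by nearest class mean (embeddings are made up).
import numpy as np

support = {                                   # three examples per class
    "cat": np.array([[1.0, 0.1], [0.9, 0.2], [1.1, 0.0]]),
    "dog": np.array([[0.0, 1.0], [0.2, 0.9], [0.1, 1.1]]),
}
prototypes = {c: v.mean(axis=0) for c, v in support.items()}

query = np.array([0.95, 0.15])                # one new, unseen sample
pred = min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))
print(pred)                                   # -> "cat"
```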

2. Model interpretability

Another problem with AI is the interpretability and stability of machine learning models. Most current machine learning models are "black boxes" that are difficult to understand, and model stability remains an open problem: adding a little white noise to a picture can make a deep learning model produce a disappointing prediction.
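The sketch below probes that sensitivity; the tiny untrained classifier is a stand-in for any real model, so the point is the probe itself, not the network.

```python
# Check whether mild white noise flips a classifier's prediction.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 3), nn.Softmax(dim=1))  # stand-in model
x = torch.randn(1, 8)                      # a "clean" input
noisy = x + 0.5 * torch.randn_like(x)      # the same input plus white noise
print(model(x).argmax(1), model(noisy).argmax(1))  # the two can disagree
```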

3. Model size limits

Current computing power struggles to train very large deep learning models; for example, training a model whose parameters run to gigabytes demands high memory bandwidth, and one reason GPUs suit deep learning training better than CPUs is that GPU memory bandwidth exceeds that of main memory. In addition, large models often overfit the benchmark data rather than extracting more abstract features from the samples, and in real applications a biased deep network can have very serious consequences: a data set for training autonomous driving, for instance, will contain no baby sitting in the middle of the road. Deep neural networks are also very sensitive to standard adversarial attacks, which make imperceptible changes to an image yet change the network's perception of the object. Alan Yuille argues that behind these problems lies combinatorial explosion: the space of real-world images is combinatorially vast, to some extent infinite, and no data set, however large, can capture the complexity of reality.
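The snippet below sketches the fast gradient sign method (FGSM), one standard attack of this kind; the linear classifier and random input are stand-ins for a trained network and a real image.

```python
# FGSM: an imperceptible, gradient-aligned nudge that can flip a prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(8, 3)                     # stand-in classifier
image = torch.randn(1, 8, requires_grad=True)
label = torch.tensor([0])

loss = F.cross_entropy(model(image), label)
loss.backward()                             # gradient of loss w.r.t. the input
adversarial = (image + 0.01 * image.grad.sign()).detach()
print(model(image).argmax(1), model(adversarial).argmax(1))  # may differ
```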

4. Generalization performance

Moving from specialized to general intelligent algorithms is the inevitable trend for the next generation of AI, and a challenge for both research and application; general intelligence is regarded as the jewel in the crown of AI. From the standpoint of goals, general intelligence means improving the generalization ability of neural networks. Researchers have made various efforts toward this: from regularization to dropout to batch normalization, these techniques have eased neural network overfitting to some degree and improved generalization, but they are only tricks and cannot solve the problem at its root. The current approach is transfer learning, which carries knowledge learned in one scenario over to another; for example, a classification model trained on cat and dog images can be transferred to a similar task such as distinguishing eagles from cuckoos. With transfer learning, the relationships a model learns for one kind of data in one training task can also be applied readily to different problems in the same field. Transfer learning relieves some of the pressure for labeled data and is a step toward general AI.
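A minimal PyTorch sketch of that transfer follows: a network pretrained on ImageNet is frozen and only its final layer is retrained for a new two-class task; the eagle/cuckoo head is the assumed new task from the example above.

```python
# Transfer learning: reuse a pretrained network, retrain only the last layer.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")  # knowledge from scene A
for p in model.parameters():
    p.requires_grad = False                       # freeze what was learned
model.fc = nn.Linear(model.fc.in_features, 2)     # new head for scene B
# ...then train only model.fc on the small eagle-vs-cuckoo dataset
```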

3、 Development trends

Although deep learning still falls short in some respects, the scientific community has made gratifying breakthroughs, and AI built on deep learning has profoundly changed people's lives. AI will develop even faster in the future. This article sees four development trends:

1. Accelerated development of AI chips

Even a fast, advanced CPU cannot by itself deliver the speed AI models need; running an AI model requires additional hardware to perform the heavy mathematical computation. Front-end devices in the security industry in particular need smaller, more powerful embedded chips to run better algorithms for real-time tracking, face recognition, and similar applications.

2. Integrated development of AI, edge computing, and the IoT

The continued push of AI to the edge is one of the keys to managing the data flood and an important trend for the Internet of Things. As AI technology develops rapidly, massive data must be extracted and analyzed quickly and effectively, which greatly strengthens the demand for edge computing. In the future, AI, edge computing, and the IoT will integrate ever more closely, especially in video surveillance for the security industry.

3. Interoperability between neural networks

Neural networks are trained within a particular framework, and once a model has been trained and evaluated in one framework it is hard to port to another, which hampers AI development. Interoperability between neural networks will therefore become an important technology in the AI industry.
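The sketch below shows one widely used interoperability route: a trained PyTorch model is exported to the ONNX exchange format, which other frameworks and runtimes can load; the tiny model here is a placeholder.

```python
# Export a PyTorch model to ONNX so other frameworks can consume it.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))
dummy = torch.randn(1, 10)                     # example input for tracing
torch.onnx.export(model, dummy, "model.onnx")  # portable model file
```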

4. Automated AI will become more prominent

One trend that is fundamentally changing AI solutions is automated AI, which lets business analysts and developers efficiently discover machine learning models that solve complex scenarios without formal training in machine learning, so analysts can focus on the business problem itself.
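As a simple stand-in for the idea, the sketch below uses scikit-learn's GridSearchCV to search model configurations automatically, so the analyst states the problem rather than hand-tuning each model; full AutoML systems go much further.

```python
# Automated model search: try configurations, keep the best one.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_)  # the configuration discovered automatically
```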

4、 Conclusion

AI technology has always stood at the forefront of computer technology, and its theory and development will to a great extent determine the direction of computer technology. Many AI research achievements have already changed people's lives profoundly, and in the future AI will develop faster still and exert an even greater influence on how people live, work, and learn.
