Face recognition is an important branch of biometric identification. It entered the primary application stage in the late 1990s, with the technology initially dominated by the United States, Germany, and Japan. In recent years, China's related technologies have developed rapidly. While face recognition brings convenience to daily life, it also carries serious security risks of being attacked or maliciously exploited. Therefore, applications of face recognition technology, and the construction of face databases and recognition models, must be strictly controlled. Like genetic data, face data should be strictly protected through both technical and legislative means.

1. The application status of face recognition technology

Biometric identification technology uses the inherent physiological characteristics of the human body (such as fingerprints, voiceprints, faces, and irises) and behavioral characteristics (such as handwriting, voice, and gait) for personal identification. Compared with other biometric technologies, face recognition offers high accuracy, non-contact operation, and high speed, and its application potential is enormous.

The FBI's Next Generation Identification (NGI) database adds palmprint, iris, face, and other biometric data to its more than 100 million personal fingerprint records, covering more than half of the adult population of the United States. The Australian Department of Immigration and Border Protection has begun to adopt a new "hands-free" entry system, setting up electronic scanning stations that use biometric technology to identify the faces, irises, and fingerprints of inbound travelers, replacing the traditional process of presenting a passport. In recent years, China's face recognition technology has also developed rapidly. As a core technology of the smart society, face recognition is booming and widely deployed, but its potential security risks cannot be underestimated. For example, at the 2017 "3.15" consumer rights gala, with the support of on-site technicians, the host successfully "changed faces" and cracked a "face-swipe login" authentication system using only a selfie of an audience member.

2. Attack methods and risks of face recognition technology

At present, the main risks facing face recognition technology and databases are the illegal collection, leakage, theft, and unlawful trading and use of face data. If these security risks cannot be addressed through technology, policy, and laws and regulations, the "double-edged sword" effect will constrain the development of face recognition technology and become its bottleneck.

Counterfeit authentication against face recognition. A face recognition system can identify the face image captured by a camera, but cannot tell whether that image comes from a real person or from a photo, which makes such systems highly vulnerable to deliberate spoofing attacks. Common methods include stealing photos of a legitimate user's face, stealing videos of the user's face, and fabricating 3D face masks. To counter the fraudulent use of face photos, many face recognition systems have added liveness detection (blinking, opening the mouth, shaking the head, etc.), but attackers can still evade detection using video playback and automated facial animation techniques. With the help of facial keypoint localization and automated facial animation, an attacker can turn a static selfie into a dynamic one and complete the blinking, mouth-opening, and other actions required for face-swipe login.
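To illustrate how simple such liveness checks can be, blink detection is commonly implemented by tracking the eye aspect ratio (EAR) across video frames. The sketch below is a minimal illustration of that heuristic; the landmark ordering and thresholds are assumptions for demonstration, not taken from any specific product.

```python
import math

def eye_aspect_ratio(eye):
    """Compute the EAR from six (x, y) landmarks around one eye,
    ordered p1..p6 as in the common 68-point face-landmark layout."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Two vertical distances (p2-p6, p3-p5) over the horizontal distance (p1-p4).
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

def detect_blink(ear_series, threshold=0.2, min_frames=2):
    """Report a blink if the EAR stays below the threshold for at
    least min_frames consecutive video frames."""
    run = 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
            if run >= min_frames:
                return True
        else:
            run = 0
    return False
```

A check this simple is exactly why replaying a video, or animating a static selfie so the eye landmarks move, is enough to defeat it.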

Attacks on face recognition algorithms. Impersonating a legitimate user requires first obtaining data such as the user's face photos and videos. A domestic security team has demonstrated a more threatening attack that uses face photos of completely different users to bypass the identity recognition system. This class of attack mounts evasion and data-poisoning attacks on deep learning image recognition applications; it does not depend on any specific deep learning model and is effective against current mainstream frameworks such as TensorFlow and Caffe.
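Evasion attacks of this kind typically exploit the model's gradient. As an illustration of the principle only (not the domestic team's actual method), the sketch below applies the well-known Fast Gradient Sign Method (FGSM) to a toy NumPy logistic classifier standing in for a recognition model:

```python
import numpy as np

# Toy stand-in for a face "recognizer": score = sigmoid(w . x + b),
# where a score above 0.5 means "identity match". Weights are random,
# for illustration only.
rng = np.random.default_rng(0)
w = rng.normal(size=64)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, y_true, eps=0.1):
    """Fast Gradient Sign Method: nudge every input feature by +/- eps
    in the direction that increases the logistic loss, keeping the
    result inside the valid pixel range [0, 1]."""
    p = predict(x)
    grad = (p - y_true) * w  # d(logistic loss)/dx for this linear model
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

x = rng.uniform(size=64)      # a "genuine" input
x_adv = fgsm(x, y_true=1.0)   # perturbation that pushes the match score down
```

Although each feature changes by at most `eps`, the perturbations accumulate across dimensions and can flip the model's decision. Deep networks share this gradient structure, which is why such attacks transfer across frameworks rather than depending on any one of them.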

Channels of face data leakage. Face data leaks mainly through three channels. First, improper collection by Internet companies. Various network applications, including social platforms, e-commerce sites, and photo apps, widely collect user face data, and smart cameras capture images of all kinds of people anytime, anywhere. The collection of Chinese netizens' face data by Internet companies, and especially illegal collection by overseas companies, poses a great threat to Chinese users and to national security. Second, uploading and sharing by users. To meet real-name requirements, most Internet companies require users to upload ID information and photos; online payment and wealth-management sites even require users to submit photos of themselves holding their ID cards. In addition, many netizens like to post personal information of all kinds, including photos of themselves, friends, and family members, on social feeds, which has become an important source of face data leakage. Third, face databases being attacked and stolen. Face data collected by companies is stored in corporate databases, and judging from the situation worldwide, the risks of large-scale data leakage and theft are serious, with leakage cases occurring frequently. For example, in 2017 a domestic job-search website was found to have leaked resume data, including users' names, avatars, and schools. In 2018, a data scandal at a US social platform, which continued to ferment across Europe and the United States, exposed the private information, including personal photos and videos, of as many as 50 million users.

The threat of face recognition to the development of AI weapons of mass destruction. By combining the latest artificial intelligence technologies such as face recognition and robotic control, it is technically feasible to build autonomous weapons of mass destruction. Lethal weapons equipped with sensors, cameras, GPS positioning, and other high-tech components can perform face recognition, evade sniper fire, and carry explosives to achieve precise one-strike kills. In the technical chain of autonomous lethal weapons, face recognition is a key link: such weapons generally obtain battlefield information through cameras and radar, identify human bodies and faces in the camera feed, then lock onto and attack the target. Using deep learning and other AI technologies to accurately and automatically identify attack targets is a capability traditional weapons lack, and it is the core technology of autonomous lethal weapons. To achieve accurate target recognition, and especially to distinguish friend from foe, a high-accuracy deep neural network model must be trained, and the training process requires large amounts of real face data. Recent research shows that, after accumulation and training on large amounts of real data, artificial intelligence can reach and even exceed the recognition accuracy of the human eye. Face data is therefore the foundation and core of the development of such lethal weapons. At present, countries including the United States and South Korea are already developing AI killer robots. Once this technology is used on future battlefields or falls into the hands of terrorists, the consequences will be extremely serious, so it is imperative to strengthen the supervision and regulation of face recognition technology and its use.

3. Security precautions for the application of face recognition technology

To address the many risks in the application of face recognition technology, and from the perspective of protecting face data and face models, laws and regulations related to biometric identification should be improved, a face big data center and a multi-factor identification management system should be established, and supervision of technology applications and data protection should be strengthened.

Protect China's face data and face models, and improve laws and regulations related to biometrics. High-quality face data and high-precision face models are the core of autonomous lethal weapons. Protecting the face data and face models of a population, like protecting its gene pool, is a key task in preventing adversaries from developing autonomous lethal weapons targeted at that population. In the collection, storage, transmission, and use of face data, protection and supervision should be strengthened through technical and legislative means, both respecting the scientific development of artificial intelligence and safeguarding the legal, compliant circulation and use of face data resources. First, management institutions and systems for biometric information should be established to ensure the safe and standardized use of face data. A transparent mechanism and a unified organization can constrain practitioners and data operators and protect privacy, while also earning public understanding and support and promoting technological innovation and upgrading, achieving a win-win outcome. It is recommended that the biometric identification working group of the China Automatic Identification Technology Association, established in 2016, be upgraded to a professional committee on biometric identification technology, so as to better provide technical guidance and supervision for industry standards and security. Second, the formulation of laws and regulations on personal information protection should be accelerated, information rights such as the right to control, delete, and be forgotten with respect to personal information should be established, and complaint and relief mechanisms for individuals' information rights should be improved.
In 2017, the Office of the Central Leading Group for Network Security and Informatization, the General Administration of Quality Supervision, Inspection and Quarantine, and the National Information Security Standardization Technical Committee jointly issued the national standard "Information Security Technology — Personal Information Security Specification" (GB/T 35273-2017), which provides a detailed and practical guide for personal information protection in China and came into effect on May 1, 2018. However, this standard is a recommended national standard and has no mandatory legal effect, so it is recommended that legislation be enacted as soon as possible to raise the level of the national response. Third, the primary responsibility for security should be implemented at each stage of the biometric data life cycle, the data rights and responsibilities of government, enterprises, and individuals in the big data era should be clarified, and a lawful, orderly data market should be promoted.

In the security field, standardize the training and circulation of face models, and establish a face big data center. If the legitimacy of face data and models is not guaranteed and emphasized, an underground market for them will inevitably emerge, creating serious risks of data and model leakage. These risks can be addressed through both technical and legal means. Technically, it is recommended that the public security system establish face big data centers at the national and provincial levels and store face data on a physically isolated private network. Algorithm suppliers' models must be trained within the public security network's big data center, and the resulting models must likewise be deployed within the public security private network, so that neither data nor models ever leave it. If a deep learning model needs to circulate across provinces and cities for continued training and enhancement, it may circulate only within the public security private network; algorithm suppliers can rent the data and computing power of the public security big data center to upgrade and update their models. In terms of law and industry norms, it is recommended to specify and distinguish the legal relationships involved in the flow of data and models. A program is written by programmers: its information originates from the programmers themselves, and it is a product created by people. A model, by contrast, is trained from data: its information originates from face big data, and it is a product of further processing of a natural resource. The algorithm supplier's program copyright should therefore be distinguished from the provenance of the model. For any model in use, an authorization certificate from the owner of the training data must be provided, and the circulation of data and models should be required to follow detailed, differentiated legal procedures. Training a model from data does not change the ownership relationship between the data owner and the model.

Be cautious about applying face recognition in important fields, and establish a multi-factor identification management system. Although issues such as the accuracy of biometric identification will gradually improve with technological progress, biometric identity carries insurmountable security risks that may even endanger national security. Large-scale, widespread deployment, especially in important fields, must therefore be cautious, and commercial enterprises cannot be allowed to pursue profits unchecked. Even if a biometric product reaches 99.99% detection accuracy, once it is applied at scale to a user base of hundreds of millions, the number of people affected by misjudgments will be very large, and the resulting harm to personal and property interests will be enormous. On both technical and legal grounds, it is inappropriate to rely on the face as the sole biometric identifier. It is recommended to adopt a multi-factor identification management system that combines face recognition with other identification methods to improve the security, stability, and soundness of the system.
