A new Pew Research Center study found that men appear twice as often as women in news images posted on Facebook, and that the majority of those images depict only men.

Given that about 43% of American adults get news through Facebook, the Pew Research Center used machine vision to measure the gender ratio in news images posted on Facebook by 17 national news outlets from April to June 2018. The algorithm ultimately identified 53,067 people, of whom 33% were women and 67% were men, a wide gap, even though the sex ratio of the U.S. population is roughly balanced.

So, who is “distorting” the picture of the two sexes?

In this issue, we bring together the Pew Research Center’s findings and research by MIT Media Lab scientist Joy Buolamwini on gender imbalance in face recognition, and explore: why, in the eyes of an algorithm, do you sometimes fall into an ambiguous zone between male and female? What prejudices exist beyond gender? And what can we do about it?

Gender imbalance in face recognition

Pew’s report points out that across different types of news posts on Facebook, women’s “presence” in images is consistently lower than men’s. In posts about the economy, only 9% of images show exclusively women, while images showing exclusively men account for 69%. Women appear more often in entertainment news images, but still less often than men overall.

The scarcity of women may seem puzzling, but it partly reflects a broader social reality. For example, in news reports about professional football teams, most of the recognized faces are male; in reports on the U.S. Senate and House of Representatives (where women hold about 25% of seats), women’s faces were, unsurprisingly, recognized far less often than men’s.

Even accounting for such factors, the study reveals some alarming facts: men are more prominent than women in Facebook news images; in groups of two or more people, males outnumber females; and men occupy more of the visual space.

The researchers also measured the size of female and male faces in the images (current technology can only capture the size of the face itself, ignoring hair, jewelry and headgear). The results show that the average male face is about 10% larger than the average female face. In Facebook images, this means male figures deliver greater visual impact to readers.

Specifically, in economy-related posts, female faces are on average 19% smaller than male faces, while in entertainment-related content they are on average 7% larger.
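To make the face-size comparison concrete, here is a minimal sketch of how such a measurement could be made from face-detection output. The bounding boxes and gender labels below are invented for illustration; Pew’s actual pipeline and data are not reproduced here.

```python
# Illustrative sketch: compare average detected-face area by gender.
# The detections below are made-up examples, not Pew's data.

detections = [
    {"gender": "male",   "box": (40, 60, 220, 260)},   # (x1, y1, x2, y2) in pixels
    {"gender": "female", "box": (300, 80, 430, 230)},
    {"gender": "male",   "box": (120, 40, 330, 270)},
    {"gender": "female", "box": (500, 100, 610, 220)},
]

def box_area(box):
    """Area of a face bounding box in square pixels."""
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def mean_area(dets, gender):
    areas = [box_area(d["box"]) for d in dets if d["gender"] == gender]
    return sum(areas) / len(areas)

male_avg = mean_area(detections, "male")
female_avg = mean_area(detections, "female")
print(f"Average male face area:   {male_avg:.0f} px^2")
print(f"Average female face area: {female_avg:.0f} px^2")
print(f"Male faces are {100 * (male_avg / female_avg - 1):.1f}% larger on average")
```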

Machine vision tools such as face recognition are being used ever more widely in law enforcement, advertising and other fields.

Machine learning can greatly improve the efficiency of data processing, but unlike traditional computer programs, which follow a strict series of predetermined steps, machine-learning systems make decisions in ways that are largely implicit and highly dependent on the data used to train them. These characteristics can introduce systematic biases that are harder to understand and predict.

To illustrate this point, the Pew Research Center used a simplified experiment to show how the data used to train an algorithm can introduce hidden biases and unexpected errors into its results. The researchers say that as algorithms play an increasingly important role in human decision-making, it is important to understand their limitations and biases.
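As a rough illustration of how a skewed training set can bias a model, consider the following toy sketch (not Pew’s actual experiment; it assumes NumPy and scikit-learn are available, and the groups, features and label rule are entirely synthetic). A classifier trained on data where one group is heavily under-represented ends up making far more mistakes on that group.

```python
# Toy illustration: a classifier trained on imbalanced data
# makes more mistakes on the under-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, center):
    """Synthetic 2-D features for one demographic group."""
    X = rng.normal(loc=center, scale=1.0, size=(n, 2))
    # The label rule depends on the group's center, so the two groups
    # need different decision boundaries.
    y = (X[:, 0] + X[:, 1] > sum(center)).astype(int)
    return X, y

# Training data: group A is heavily over-represented (900 vs. 100 samples).
Xa, ya = make_group(900, center=(0.0, 0.0))
Xb, yb = make_group(100, center=(3.0, 3.0))
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Evaluation data: equal numbers from both groups.
for name, center in [("group A", (0.0, 0.0)), ("group B", (3.0, 3.0))]:
    Xt, yt = make_group(500, center=center)
    err = 1.0 - model.score(Xt, yt)
    print(f"{name}: error rate {err:.1%}")
```

The model fits the over-represented group well and effectively ignores the other, so the reported error rate for group B is several times higher, even though no one explicitly programmed that disparity.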

What does prejudice bring?

Recently, 26 leading AI researchers, including Turing Award winner Yoshua Bengio, asked Amazon in a public blog post to immediately stop selling its artificial intelligence service, Amazon Rekognition, to police. Anima Anandkumar, a former principal scientist in Amazon’s cloud computing division, and others joined the appeal.

Earlier, Deborah Raji, a researcher at the University of Toronto, and Joy Buolamwini, a researcher at the MIT Media Lab, published research showing that Amazon’s Rekognition had a much higher error rate when detecting the gender of darker-skinned women in images than of lighter-skinned men. Other scholars have supported the findings, but Amazon has disputed the report and its methodology.

Joy Buolamwini led an AI research project called Gender Shades. After studying the facial recognition systems of leading technology companies, she found that all of them performed better at recognizing male faces, and all were more accurate on lighter-skinned faces. The average error rate was 35% for darker-skinned women, 12% for darker-skinned men, 7% for lighter-skinned women, and less than 1% for lighter-skinned men.
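At its core, the Gender Shades analysis disaggregates accuracy by intersectional subgroup instead of reporting a single overall number. A minimal sketch of that bookkeeping might look like the following (the records are invented placeholders, not the study’s data):

```python
# Sketch: compute gender-classification error rates per intersectional
# subgroup (skin tone x gender). Records are invented for illustration.
from collections import defaultdict

records = [
    # (skin_tone, true_gender, predicted_gender)
    ("darker",  "female", "male"),
    ("darker",  "female", "female"),
    ("darker",  "male",   "male"),
    ("lighter", "female", "female"),
    ("lighter", "male",   "male"),
    ("lighter", "male",   "male"),
]

totals = defaultdict(int)
errors = defaultdict(int)
for skin, true_g, pred_g in records:
    key = (skin, true_g)
    totals[key] += 1
    if pred_g != true_g:
        errors[key] += 1

for key in sorted(totals):
    rate = errors[key] / totals[key]
    print(f"{key[0]:>7} {key[1]:>6}: error rate {rate:.0%} ({errors[key]}/{totals[key]})")
```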

What harm might a biased facial recognition system cause?

“Regardless of how accurate it is, facial recognition technology can be abused,” Joy said. Using facial recognition, accurately or not, to analyze people’s identity, face and gender can infringe on their freedom. Inaccurate identification, for example, can leave innocent people wrongly suspected and subjected to unwarranted scrutiny by law enforcement, and this is not a hypothetical scenario. Big Brother Watch, a British non-profit organization, has published a report stressing that the facial recognition technology used by London police misidentifies people at a rate of over 90%. Last summer, British media reported that a young Black man was stopped and searched by police in full public view after facial recognition technology mistook him for a suspect.

A leaked report also showed that IBM has provided law enforcement agencies with technology to search for people in video footage by hair color, skin color and facial features. The news raised concerns that police would use the technology to target specific ethnic groups.

To reduce the time required to search for faces, law enforcement agencies make extensive use of gender classification: if the gender of the face being matched is known, a simple binary split can greatly cut the number of candidate matches to process, as the sketch below illustrates.
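The speed-up comes from pre-filtering the candidate gallery by predicted gender before running the expensive matching step, which roughly halves the search space. The flip side is that a gender misclassification can exclude the true match entirely. Here is a hedged sketch of that logic, with a hypothetical gallery and a dummy similarity function standing in for any real agency’s system:

```python
# Sketch of gender-filtered face search. The gallery, probe and
# similarity function are placeholders for illustration only.

gallery = [
    {"id": "A", "gender": "male",   "embedding": [0.1, 0.9]},
    {"id": "B", "gender": "female", "embedding": [0.8, 0.2]},
    {"id": "C", "gender": "female", "embedding": [0.7, 0.3]},
    {"id": "D", "gender": "male",   "embedding": [0.2, 0.8]},
]

def similarity(a, b):
    """Toy similarity: negative squared distance between embeddings."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def search(probe_embedding, predicted_gender=None):
    # If a gender prediction is supplied, only candidates of that gender
    # are compared -- faster, but a wrong prediction removes the true match.
    candidates = [g for g in gallery
                  if predicted_gender is None or g["gender"] == predicted_gender]
    return max(candidates, key=lambda g: similarity(g["embedding"], probe_embedding))

probe = [0.75, 0.25]                            # actually closest to "B" (female)
print(search(probe))                            # full search finds B
print(search(probe, predicted_gender="male"))   # misclassified probe: B is never considered
```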

When such biased recognition systems are widely deployed in everyday life, the consequences can be even worse.

In her TED talk, Joy shared a story:

Under the same lighting conditions, the facial recognition system could detect only the participants with lighter skin; darker-skinned participants could be detected only when they put on a white mask. “Before an artificial intelligence tool can determine a face’s identity or read its expression, the most basic prerequisite is detecting the face at all. Yet the face recognition system failed repeatedly to detect individuals with dark skin. I could only console myself that the algorithm is not racist; my face is just too black,” Joy said.

Where does the bias come from?

If we compare the accuracy figures claimed by developers with the conclusions of independent researchers, something interesting emerges: the numbers companies publish consistently differ from those measured by independent third parties. What accounts for the gap?

Joy reminds us to pay attention to bias in the benchmark datasets: “When we discuss the accuracy of facial analysis technology, we do so through a series of image or video tests. These image data constitute a benchmark, but not all benchmarks are equal.”

According to Amazon, the company benchmarks the accuracy of its products against data from more than one million faces. But don’t be misled by this seemingly large sample: “We don’t know the detailed demographics of the benchmark data. Without that information, we cannot judge whether racial, gender or skin-color bias is buried in the choice of benchmark.”

Facebook has announced that its face recognition system achieves up to 97% accuracy on a dataset called Labeled Faces in the Wild (LFW), one of the best-known face recognition datasets in the world. But when researchers examined this so-called gold-standard dataset, they found that nearly 77% of the faces in it were male, and more than 80% were white.
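Checking a benchmark for this kind of skew is straightforward once demographic labels exist for it. Below is a small sketch of such an audit; the label records and their fields are hypothetical (LFW itself does not ship with demographic annotations), so this only illustrates the bookkeeping, not any particular dataset’s true composition:

```python
# Sketch: audit the demographic composition of a benchmark dataset.
# `labels` stands in for per-image metadata; the entries are invented.
from collections import Counter

labels = [
    {"gender": "male",   "skin": "lighter"},
    {"gender": "male",   "skin": "lighter"},
    {"gender": "female", "skin": "darker"},
    {"gender": "male",   "skin": "lighter"},
    {"gender": "female", "skin": "lighter"},
]

def proportions(field):
    counts = Counter(row[field] for row in labels)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

print("gender:", proportions("gender"))   # e.g. {'male': 0.6, 'female': 0.4}
print("skin:  ", proportions("skin"))
```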

To eliminate bias as far as possible at the data level, Joy proposed building a more inclusive benchmark dataset. To balance the benchmark by gender, she drew on the ten countries with the highest proportion of women in parliament, led by Rwanda, where women hold more than 60% of seats. Given that Nordic countries and several African countries are well represented on that list, Joy selected three African and three Nordic countries, drawing data on younger, darker-skinned individuals from these countries to balance skin types in the dataset.

Based on this more balanced dataset, they re-evaluated the facial recognition systems of Amazon, Kairos, IBM and Face++. In a study conducted in August 2018, they found that Amazon and Kairos performed well on white male faces, but Amazon’s accuracy on women of color was as low as 68.6%.

Joy said that face recognition in the real world is more complex and difficult than laboratory testing, and the benchmark dataset they built does not fully capture that, “but it’s like a running race: performing well on the benchmark at least ensures you don’t fall over at the start.”

Even on the same benchmark, the accuracy of a facial recognition system can vary. AI is not perfect, and in this context reporting a confidence score alongside each prediction is a useful way to give users more concrete information on which to judge a result.
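In practice this means surfacing the model’s confidence and refusing to act on low-confidence predictions. A minimal sketch of such a policy follows; the threshold value and the prediction records are illustrative assumptions, not any vendor’s recommended settings:

```python
# Sketch: act only on high-confidence predictions; route the rest
# to human review. The 0.99 threshold is an arbitrary illustration.
CONFIDENCE_THRESHOLD = 0.99

predictions = [
    {"face_id": 1, "label": "match",    "confidence": 0.999},
    {"face_id": 2, "label": "match",    "confidence": 0.72},
    {"face_id": 3, "label": "no_match", "confidence": 0.95},
]

for p in predictions:
    if p["confidence"] >= CONFIDENCE_THRESHOLD:
        print(f"face {p['face_id']}: accept '{p['label']}' "
              f"(confidence {p['confidence']:.3f})")
    else:
        print(f"face {p['face_id']}: defer to human review "
              f"(confidence {p['confidence']:.3f} below threshold)")
```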

Face recognition technology is already being used in mass surveillance, the weaponization of artificial intelligence, and a growing range of law enforcement settings. Yet this powerful technology is developing rapidly without adequate oversight.

To curb the abuse of face recognition technology, the Algorithmic Justice League and the Center on Privacy & Technology launched the Safe Face Pledge. So far, many technology companies, including Amazon, have yet to sign it. “According to our research, it would be irresponsible to rashly sell facial recognition systems to law enforcement or government agencies,” Joy said. As a founder of the Algorithmic Justice League, she hopes that more organizations will join the Safe Face Pledge and develop facial analysis technology responsibly and ethically.

After all, behind algorithmic bias lies our own bias.

