Facial Recognition Technology: Racism And Inaccuracy

Facial Recognition, a gateway to racism (Photo by Sheila Scarborough/Flickr)

A recent study revealed that artificial intelligence systems designed to recognize or analyze images of human beings, such as the pedestrian-detection systems in self-driving cars, are more likely to fail on people with darker skin tones, making those cars more likely to hit them.

The study was conducted by researchers from the Georgia Institute of Technology, who found that state-of-the-art object detection systems are less accurate at detecting pedestrians with darker skin.

In the study, eight image recognition systems were tested against a large pool of pedestrian images. The images were classified into two categories, lighter and darker skin color, using the Fitzpatrick skin type scale. The results revealed that the tested systems were five percent less accurate on the darker skin category than on the lighter one, and the gap held even when controlling for time of day and obstructed views.
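To make that comparison concrete, here is a minimal sketch (in Python) of the kind of per-group evaluation the researchers describe. The data structure and field names are assumptions for illustration, not the study's actual code.

```python
# Hypothetical per-pedestrian detection results, grouped by the
# Fitzpatrick-based categories used in the study.
results = [
    {"group": "lighter", "detected": True},
    {"group": "lighter", "detected": True},
    {"group": "darker", "detected": True},
    {"group": "darker", "detected": False},
]

def detection_rate_by_group(results):
    """Return the fraction of pedestrians detected in each group."""
    totals, hits = {}, {}
    for r in results:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        hits[g] = hits.get(g, 0) + int(r["detected"])
    return {g: hits[g] / totals[g] for g in totals}

print(detection_rate_by_group(results))
# e.g. {'lighter': 1.0, 'darker': 0.5} -- the study reported a gap of
# roughly five percentage points once confounders were controlled for.
```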

Experts suggest that two factors contributed to the inaccuracy: too few examples of darker-skinned pedestrians in the data used to develop the technology, and too little weight given to those examples during training. They said the problem could be corrected by adjusting both the data and the algorithms that run object detection systems.
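As a rough illustration of the data-side fix, the sketch below uses PyTorch's WeightedRandomSampler to oversample underrepresented skin-tone groups during training. The group labels and the inverse-frequency weighting scheme are assumptions for illustration; the researchers did not publish this particular recipe.

```python
from collections import Counter

from torch.utils.data import WeightedRandomSampler

# Hypothetical Fitzpatrick group label (1 = lightest, 6 = darkest)
# for each training image; in practice these come from annotation.
group_labels = [1, 1, 2, 2, 2, 3, 5, 6]

counts = Counter(group_labels)
# Inverse-frequency weights: images from rare groups are drawn more
# often, so each group contributes roughly equally per epoch.
weights = [1.0 / counts[g] for g in group_labels]

sampler = WeightedRandomSampler(weights,
                                num_samples=len(group_labels),
                                replacement=True)
# Pass `sampler` to torch.utils.data.DataLoader(..., sampler=sampler)
# so training batches are balanced across skin-tone groups on average.
```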

For the last decade, there has been an ongoing conversation about how technology can be biased against people of color. One reason experts point to is the lack of diversity in the tech industry, as well as in science itself.

In Nigeria, a man trying out a newly installed automatic soap dispenser discovered that its sensor would not recognize his hand, while it had no problem detecting the palm of his white friend.

When reviewing wearables, CNET spoke to Bharat Vasan, the COO of Basis Science, who explained how optical heart-rate monitors can fail people of color:

“The light has to penetrate through several layers…and so the higher the person is on the Fitzpatrick scale (a measure of skin tone), the more difficult it is for light to bounce back,” he explained. “For someone who is very pale in a very brightly-lit setting, the light could get washed out. The skin color issue is something that our technology compensates for. The darker the skin, the brighter the light shines, the lighter [the skin], the less it shines.”
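A minimal sketch of the compensation Vasan describes might look like the following. The mapping from Fitzpatrick type to LED drive level is entirely assumed here, since Basis has not published its actual calibration.

```python
def led_brightness(fitzpatrick_type):
    """Map a Fitzpatrick skin type (1 = palest, 6 = darkest) to a
    relative LED drive level, so darker skin gets a brighter light."""
    if not 1 <= fitzpatrick_type <= 6:
        raise ValueError("Fitzpatrick type must be between 1 and 6")
    # Assumed linear ramp: type 1 -> 0.4, type 6 -> 1.0.
    return 0.4 + (fitzpatrick_type - 1) * (0.6 / 5)

for skin_type in range(1, 7):
    print(skin_type, round(led_brightness(skin_type), 2))
```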

While a misfiring soap dispenser is a relatively harmless glitch, other technologies that employ object recognition can cause real harm when they malfunction or misidentify people.

One case involves wearable fitness trackers and heart rate monitors, which were reported to be less accurate on Black users when tracking heart conditions. Although their makers do not market these products as a substitute for a professional doctor, inaccurate readings can still harm the people who rely on them.

Huge tech companies have also been plagued by reports of inaccuracy in their facial recognition systems. In January, Amazon came under heavy scrutiny after researchers from MIT and the University of Toronto found that its facial analysis software mistakes dark-skinned women for men.

The results showed that Amazon's facial analysis mistook 31% of black women for men, compared with 7% of white women. By contrast, the analysis made essentially no such identification errors on images of men.

The issue was further exacerbated by Amazon's move to sell its facial recognition technology, 'Rekognition,' to law enforcement agencies.

In response, 85 social justice advocates, human rights activists, and religious groups collectively sent a letter to Microsoft, Google, and Amazon asking them not to market their facial recognition software to the government.

Google has said that it will not sell its technology until all racial bias and misidentification issues are addressed, while Microsoft has acknowledged that it is the company's duty to ensure its technology is used responsibly. Amazon, on the other hand, has reportedly given a demonstration of its product to the Immigration and Customs Enforcement agency and will pilot Rekognition with the FBI.

The report has caused a public outcry, and human rights groups say the technology could be used to silence activists and marginalized sectors, especially after a new report emerged saying that the software had falsely matched people, including members of the Congressional Black Caucus, to images in a mugshot database.

The study conducted by MIT and the University of Toronto points out how the biases of scientists can seep into the artificial intelligence they create.

MIT Media Lab researcher Joy Buolamwini said that any technology built for human faces should be examined for bias.

“If you sell one system that has been shown to have bias on human faces, it is doubtful your other face-based products are also completely bias-free,” she wrote.
