
Technology

Facial Recognition Technology: Racism And Inaccuracy


Facial Recognition, a gateway to racism

A recent study revealed that artificial intelligence systems designed to recognize or analyze images of human beings, such as those used in self-driving cars, are more likely to miss people with darker skin tones.

The study was conducted by researchers from the Georgia Institute of Technology, who revealed that state-of-the-art object recognition systems are less accurate at detecting pedestrians with darker skin.

In the study, eight image recognition systems were tested against a large pool of pedestrian images. The images were classified into two categories, lighter and darker skin color, using the Fitzpatrick skin type scale. The results revealed that the accuracy of the tested systems was five percent lower for the darker skin category than for lighter skin tones. The result held true even when controlling for time of day and obstructed views.
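The comparison described above can be sketched in a few lines. This is a minimal illustration with made-up detection records, not the study’s data; the two-way split (Fitzpatrick types 1–3 as “lighter,” 4–6 as “darker”) follows the grouping the study reportedly used.

```python
# Illustrative detection records: (fitzpatrick_type, was_detected).
# These numbers are invented for the sketch, not taken from the study.
detections = [
    (1, True), (2, True), (2, True), (3, False),
    (4, False), (5, True), (5, False), (6, False),
]

def miss_rate(records):
    """Fraction of pedestrians the detector failed to find."""
    misses = sum(1 for _, detected in records if not detected)
    return misses / len(records)

# Types 1-3 grouped as "lighter", types 4-6 as "darker".
lighter = [r for r in detections if r[0] <= 3]
darker = [r for r in detections if r[0] >= 4]

print(f"lighter miss rate: {miss_rate(lighter):.2f}")
print(f"darker miss rate: {miss_rate(darker):.2f}")
```

A real evaluation would also control for confounders such as time of day and occlusion, as the study did, before attributing the gap to skin tone.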

Experts suggest that two factors contributed to the inaccuracy: too few examples of dark-skinned pedestrians used in the development of the technology, and too little emphasis on machine learning from those examples. They said the problem could be reversed by adjusting both the data and the algorithms that run object identification systems.

For the last decade, there has been a conversation about how technology can be biased against people of color. One reason experts point to is the lack of diversity in the tech industry, as well as in science itself.

In Nigeria, a man trying out a newly automated soap dispenser discovered that its object sensor did not recognize his hands, while it had no problem identifying the palm of his white friend.

When reviewing wearables, CNET spoke to Bharat Vasan, the COO of Basis Science, who explained how the heart rate monitors in these devices can fail people of color:

“The light has to penetrate through several layers…and so the higher the person is on the Fitzpatrick scale (a measure of skin tone), the more difficult it is for light to bounce back,” he explained. “For someone who is very pale in a very brightly-lit setting, the light could get washed out. The skin color issue is something that our technology compensates for. The darker the skin, the brighter the light shines, the lighter [the skin], the less it shines.”

While a malfunctioning soap dispenser is a relatively harmless glitch, other technologies that employ object recognition can do real harm when they malfunction or misidentify people.

One case involves wearable fitness trackers and heart rate monitors, which were reported to be less accurate for black people when tracking heart conditions. Although the companies do not market these products as substitutes for a professional medical diagnosis, inaccurate readings can still harm users who rely on them.

Huge tech companies have also been plagued by reports that their facial recognition systems are inaccurate. In January, Amazon came under heavy scrutiny after researchers from MIT and the University of Toronto found that its facial analysis software mistook dark-skinned women for men.

Results showed that Amazon’s facial analysis mistook 31% of black women for men, compared to 7% of white women. The results also revealed that the analysis made essentially no identification errors for men.

The issue was also exacerbated by Amazon’s move to sell its facial recognition technology, “Rekognition,” to law enforcement authorities.

In response, 85 social justice advocates, human rights activists, and religious groups collectively sent a letter to Microsoft, Google, and Amazon asking them not to market their facial recognition software to the government.

Google has said that it will not sell its technology until all racial bias and misidentification issues are addressed, while Microsoft has acknowledged that it is the company’s duty to ensure its technology is used responsibly. Amazon, on the other hand, has reportedly given a demonstration of its product to the Immigration and Customs Enforcement agency and will pilot the use of Rekognition with the FBI.

The report has caused a social outcry, and human rights groups say the technology can be used to silence activists and marginalized sectors, especially since a new report emerged saying that the software falsely matched people, including members of the Congressional Black Caucus, to images in a mugshot database.

The study conducted by MIT and the University of Toronto pointed out how the biases of scientists can seep into the artificial intelligence they create.

MIT Media Lab researcher Joy Buolamwini said that any technology for human faces should be examined for bias.

“If you sell one system that has been shown to have bias on human faces, it is doubtful your other face-based products are also completely bias-free,” she wrote.



Technology

Hackers Can Manipulate Media Files Sent Through WhatsApp And Telegram With A Zero-Day

The vulnerability is dubbed “Media File Jacking.”


Photo: Microsiervos | Flickr | CC BY 2.0

Popular instant messaging apps WhatsApp and Telegram contain an unpatched zero-day vulnerability that threat actors and hackers can exploit to manipulate files shared across the messaging platforms.

Security researchers from Symantec’s Modern OS Security team found a vulnerability that can allow hackers and cybercriminals to manipulate images, audio files, documents, and other data sent from one user to another.

Both WhatsApp and Telegram, along with other instant messaging platforms, use end-to-end encryption, which is meant to make messages safe to send and receive. End-to-end encryption allows only the sender and the receiver to read a message’s contents; even the companies themselves have no human-readable copies of the messages sent.

However, according to the researchers, the vulnerability, dubbed “Media File Jacking,” can bypass the end-to-end encryption in the said apps. It affects WhatsApp on Android by default, and Telegram if certain features are enabled.

“It stems from the lapse in time between when media files received through the apps are written to the disk, and when they are loaded in the apps’ chat user interface (UI) for users to consume. This critical time lapse presents an opportunity for malicious actors to intervene and manipulate media files without the user’s knowledge,” wrote Yair Amit, VP & CTO, Modern OS Security in a blog post together with Alon Gat, a software engineer.

“If the security flaw is exploited, a malicious attacker could misuse and manipulate sensitive information such as personal photos and videos, corporate documents, invoices, and voice memos. Attackers could take advantage of the relations of trust between a sender and a receiver when using these IM apps for personal gain or wreak havoc.”

End-to-end encryption does not make an app immune to threat actors

The researchers said that users of instant messaging platforms are particularly vulnerable here because they assume that apps with end-to-end encryption are automatically immune to hacking. That is definitely not the case, as Symantec’s discovery illustrates.

“As we’ve mentioned in the past, no code is immune to security vulnerabilities. While end-to-end encryption is an effective mechanism to ensure the integrity of communications, it isn’t enough if app-level vulnerabilities exist in the code,” they added.

How the exploit works. Photo: Symantec

The problem comes from how these apps store media files: end-to-end encryption no longer protects a file once it is saved to external storage, where other apps can access and manipulate it. On WhatsApp, media files are stored externally by default; on Telegram, the vulnerability is present only if “Save to Gallery” is enabled.
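The race the researchers describe is a classic time-of-check-to-time-of-use gap. The sketch below, in plain Python rather than Android code, simulates the window with an ordinary file on disk: a hash recorded when the file is “received” no longer matches after another process rewrites it. A hash check of this kind is one way an app could notice the tampering before displaying the file; it is an illustration of the gap, not Symantec’s proof-of-concept.

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    """Return the SHA-256 hex digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

tmpdir = tempfile.mkdtemp()
media_path = os.path.join(tmpdir, "invoice.pdf")

# Step 1: the messaging app writes the received file to shared
# storage and records its hash at receipt time.
with open(media_path, "wb") as f:
    f.write(b"Pay account 1111-2222")
hash_at_receipt = sha256_of(media_path)

# Step 2: in the gap before display, a malicious app with access to
# the same external storage rewrites the file.
with open(media_path, "wb") as f:
    f.write(b"Pay account 9999-8888")

# Step 3: re-hashing before display reveals the tampering.
tampered = sha256_of(media_path) != hash_at_receipt
print("tampered:", tampered)
```

On a real device the defense is stronger still: keeping media in the app’s internal, sandboxed storage removes the shared-access window entirely, which is why the “Save to Gallery”-style settings matter.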

Additionally, the Media File Jacking vulnerability, the researchers said, points to a larger issue: app developers’ insecure use of storage resources.

Impact of the exploit

Researchers from Symantec raised the alarm because malicious actors can use the discovered vulnerability in different ways. By exploiting the zero-day, hackers can alter images in near real time as they are sent from one user to another. In a demo video released by Symantec, the researchers were able to change the faces of two men in an image to that of Nicolas Cage while the picture was being sent from one test account to another.

Furthermore, threat actors can exploit the vulnerability by altering numbers in invoices in a bid to redirect payments to a different bank account. To make matters worse, researchers said the invoice-jacking scheme could also be carried out without a specific target, distributed broadly to look for any invoices to manipulate, affecting multiple victims who use IM apps like WhatsApp to conduct business.

“As in the previous scenario, an app that appears to be legitimate but is, in fact, malicious, watches for PDF invoice files received via WhatsApp, then programmatically swaps the displayed bank account information in the invoice with that of the bad actor. The customer receives the invoice, which they were expecting to begin with, but has no knowledge that it’s been altered. By the time the trick is exposed, the money may be long gone,” the report said.

The vulnerability may also be exploited through audio spoofing, in which an attacker abuses the trust between employees in an organization by programming the manipulated file to mimic the voice of another person.

At the end of the day, Symantec encourages IM users to mitigate possible attacks by disabling the feature that saves media files to external storage.


Technology

Huawei Exec Backtracks: Hongmeng OS Is Not For Smartphones

Liang Hua said the company prefers Android as the OS for its smartphones.


Photo: Kārlis Dambrāns | Flickr | CC BY 2.0

When Chinese smartphone giant Huawei was caught off guard by Google’s revocation of its Android license following the ban imposed by Washington, the smartphone maker insisted it was ready for such a situation and announced that it was developing an alternative operating system called Hongmeng.

However, in an interview, Liang Hua, an executive from the tech superpower, backtracked and said that Hongmeng was developed not as an alternative to Android but for the company’s IoT products instead.

Liang Hua said at a Friday press conference in Shenzhen that the operating system, which was rumored to be 60% faster than Android, was not developed for smartphones and that the company still prefers Android as its “first choice” for a smartphone OS.

“The Hongmeng OS is primarily developed for IoT devices that will reduce latency… In terms of smartphones, we are still using the Android operating system and ecosystem as a “first choice.” We haven’t decided yet if the Hongmeng OS can be developed as a smartphone operating system in the future,” said Liang Hua.

Earlier reports revealed that Huawei has been developing Hongmeng since 2012, testing the new OS on selected devices behind closed doors in a controlled environment. The reports also said that testing was accelerated so the new operating system would be ready for situations such as this one.

Nonetheless, it is still unclear whether Hongmeng will be the official name of the OS coming from Huawei. Experts note that even if Huawei can successfully launch its operating system, the company will still be faced with the challenge of establishing an app ecosystem. It would take Huawei a lot of time to build apps that are compatible with the new operating system.

When Huawei was subjected to a witch hunt by the US government for allegedly aiding the Chinese government’s spying efforts and for being a potential tool of economic sabotage, an executive order was issued against the China-based tech giant that effectively forced U.S. tech companies to sever ties with Huawei.

The ban from Google left Huawei’s future in limbo, leaving users uncertain, especially about security updates for their Huawei and Honor phones, or whether their devices would keep running at all. Following the announcement, Huawei assured users that all phones sold ahead of the ban, as well as those already in stock, would continue receiving Android updates.

Now, Huawei’s backtrack follows the bilateral meeting between Trump and China’s Xi Jinping at the recently concluded G-20 summit in Osaka, where the American president announced that American companies could resume selling their products to Chinese companies.

In a closely watched sit-down, the two presidents agreed to a truce and cease-fire in the long-running trade war between the two superpowers.

“U.S. companies can sell their equipment to Huawei. We’re talking about equipment where there’s no great national security problem with it. I said that’s O.K., that we will keep selling that product, these are American companies that make these products,” Donald Trump said after his meeting with the Chinese president. “That’s very complex, by the way. I’ve agreed to allow them to continue to sell that product so that American companies will continue.”

While this relief is what Huawei had been hoping for from the G-20 meeting, it may prove temporary: negotiations on the matter are bound to continue, and Trump’s ad hoc decision may still be overturned at some point. Nonetheless, for now the Chinese smartphone superpower can breathe easier.

Washington officials are reportedly holding meetings on how to implement Trump’s new orders. However, special attention will have to be given to how to deal with Huawei and its presence on the “entity list,” as the relief does not explicitly remove Huawei from that list.


Technology

This App Uses AI To Track Dogs By Their Unique Nose Prints

Authorities can also use it to monitor “uncivilized dog keeping.”


Photo: Soumyaroop Chatterjee | Flickr | CC BY-ND 2.0

There’s no denying it: facial recognition and biometric identification are everywhere. They are in airports to help passengers board faster, in smartphones to let users unlock their devices, in conservation reserves to track endangered animals, and in law enforcement agencies to help catch criminals.

And the artificial intelligence (AI) that drives facial recognition technology is evolving faster every day. This time, a China-based start-up has developed an AI capable of identifying and recognizing dogs through their nose prints.

Just as human fingerprints are unique to every person, nose prints are unique to every dog. That is why Megvii, a Chinese start-up that also works as an independent surveillance system contractor for the Chinese government, has developed and trained an AI to recognize dogs using their nose prints.

Photo: Megvii

The identification system is available through the Megvii app, and users need to scan their dog’s nose from multiple angles, much like registering fingerprint credentials for a smartphone’s biometric unlock system.

The company says that, unlike previous identification methods such as chip implants, the Megvii nose print identification app is much cheaper and less invasive.

Apps that can identify and recognize animals like dogs aren’t new to the market. An app called Finding Rover uses facial recognition and machine learning to match photos submitted by owners of lost pets against a massive database of shelters and dog homes to recognize and find lost dogs.

Moreover, using nose prints to identify and recognize dogs and other pets isn’t new either. Kennel clubs around the world are known to use nose prints to match lost dogs with shelter dogs. One primitive way to take a nose print is to coat the nose with ink and press it against white cardboard.

What’s new in Megvii’s offering is the method by which nose prints are collected. With the app, coating the dog’s nose is no longer necessary; the AI only needs photos of the nose to locate key identifying markers, creating a unique profile of the dog in the database.

The company claims that despite differences in camera resolution, its identification system can verify a dog’s identity against an existing record with 95% accuracy. It also says the system can identify a dog with “high precision” by checking it against records in a larger database, although the company didn’t elaborate on the accuracy rate in that scenario.
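Megvii hasn’t published how its matching works, but verification systems of this kind are typically built on embeddings: a model maps each nose photo to a vector, and two photos are declared the same dog when their vectors are close enough. The sketch below illustrates that idea with random stand-in vectors instead of a real model; the 128-dimension size and 0.8 similarity threshold are arbitrary choices for the example.

```python
import math
import random

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(probe, enrolled, threshold=0.8):
    """Accept the match only if similarity clears the threshold."""
    return cosine_similarity(probe, enrolled) >= threshold

rng = random.Random(0)
# Embedding stored when the dog's nose was first registered.
enrolled = [rng.gauss(0, 1) for _ in range(128)]
# A new photo of the same dog should land near the stored embedding...
same_dog = [x + rng.gauss(0, 0.1) for x in enrolled]
# ...while a different dog's embedding is essentially independent.
other_dog = [rng.gauss(0, 1) for _ in range(128)]

print(verify(same_dog, enrolled))
print(verify(other_dog, enrolled))
```

The “verify against one record” and “identify against a larger database” claims in the paragraph above correspond to the two standard modes of such a system: a single threshold comparison versus a nearest-neighbor search over all enrolled vectors.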

Aside from identifying lost dogs, Megvii says its app can also be used to track inappropriate pet-owner behavior, letting authorities monitor “uncivilized dog keeping.” In China, actions like walking a dog in public without a leash and not cleaning up after a dog are considered uncivilized and, in several cities, illegal.

Biometrics identification tech application on animals

The advent of facial recognition and biometric identification technology has not only helped pet owners keep track of their beloved pets. The technology has also been used by conservationists in China to track the movement of endangered animals such as the endemic panda population.

A group of researchers from the China Conservation and Research Centre for Giant Pandas has developed an app that can recognize individual pandas using facial recognition technology. The app will draw from more than 120,000 images and video clips of giant pandas to identify the animals living in the wild.

Camera traps in China have captured images and video footage of giant pandas that are often difficult to see in the wild. The photographs and footage, some of the most striking ever taken of pandas and other species in their remote habitat, were captured as part of long-term wildlife monitoring projects set up in panda nature reserves by the Chinese government and WWF.

The new facial recognition app will presumably help conservationists monitor their programs by keeping track of how many pandas are left. It will also provide significant insight into the breeding program that conservationists have been implementing to encourage growth in the panda population.

