Technology

Android Ice Cream Sandwich Face Unlock Hacked By Photo

Everyone is pretty excited about Google’s Android 4.0, Ice Cream Sandwich, which is due to arrive on HTC phones in 2012; we covered the upcoming release in an earlier post. One feature in the Ice Cream Sandwich release has drawn particular attention: the new Face Unlock, which has been widely talked about and touted as a sought-after piece of technology.

Soya Cincau has revealed how easy it is to defeat the Face Unlock technology in Android Ice Cream Sandwich. In a YouTube video, Mr. Cincau demonstrates unlocking a phone with a photo of the face that is enrolled as the device’s owner. The specific phone he used was the Galaxy Nexus.

There have been a lot of questions about the Face Unlock hack and how it actually works. In fact, some people on Twitter accused Mr. Cincau of faking the test by enrolling the photograph itself, rather than his real face, as the owner’s face. The replies Soya Cincau posted on the video are below:

“UPDATE: Just to clarify, the Galaxy Nexus in the video was setup to recognise my real face and not the picture taken using the Galaxy Note. See video link for more info. Apologies for the confusion.

We received a question via Twitter asking if a printed photo can fool the Face Unlock to falsely recognise it as a real face and unlock the device.

So we tested it out. You’ll be surprised as to what happened.”

In his second update, Mr. Cincau notes that other news sites have featured his hack of Face Unlock for Google’s Ice Cream Sandwich.

“UPDATE 2: We’re featured on TheNextWeb and Phandroid! Yet the question still arises on whether I had set up the device to recognise my face or a picture of my face to do this demo.

While some of you think that it is a trick and I had set the Galaxy Nexus up to recognise the picture, I assure you that the device was set up to recognise my face. I have a few people there watching me do the video and if any one of them is watching this video I hope you can confirm that this test is 100% legit.

I would love to do this test again but I don’t have a Galaxy Nexus, it is VERY hard to come by as it is not launched yet, but I urge anyone with a Galaxy Nexus to do the same test. Program the device to recognise YOUR FACE and then try to trick the same device with a similar looking picture, it will work. If anyone does do this test, please tell me so I can link it in this video. Once again people, I know it’s just my words right now but this claim is LEGIT.”

The third update confirms that someone else has duplicated the test, using a photo to defeat Face Unlock. As the final video shows, the hack also works with a printed picture of the owner’s face.

“UPDATE 3: Someone has managed to repeat the same test with similar set up. A Galaxy Nexus was first set up using his face and then repeated with a digital picture of his face on a Galaxy Note. As expected, he was able to unlock with his photo.”

The takeaway from all of this is that if someone’s phone is running Android Ice Cream Sandwich with Face Unlock enabled, a picture of their face may be all you need to get in. Don’t try this at home, unless you really want to break into someone’s phone!
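
Conceptually, the weakness is easy to picture: a face unlock that only compares the camera frame against the enrolled face, with no liveness or depth check, cannot tell a printed or on-screen photo from the real person. Below is a minimal sketch of that kind of naive 2D matching; the `face_embedding` function and the similarity threshold are hypothetical stand-ins for illustration, not Google’s actual implementation.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # hypothetical cutoff, chosen only for illustration


def face_embedding(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a 2D face-embedding model.

    A real system would run a trained model here; this sketch just
    flattens and normalises the pixel values.
    """
    vec = image.astype(np.float64).ravel()
    return vec / (np.linalg.norm(vec) + 1e-9)


def unlock(enrolled_face: np.ndarray, camera_frame: np.ndarray) -> bool:
    """Naive face unlock: compare 2D appearance only, no liveness check.

    A photo of the owner held up to the camera produces an embedding
    very close to the owner's real face, so it clears the same threshold.
    """
    similarity = float(np.dot(face_embedding(enrolled_face),
                              face_embedding(camera_frame)))
    return similarity >= SIMILARITY_THRESHOLD
```

Any real defence has to come from a signal a flat photo cannot reproduce, such as blinking, depth, or motion, and a naive check like this looks at none of them.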

Apps

Google publicly reveals acquisition of homework helper app Socratic

Google publicly disclosed this week that it has acquired the mobile learning app Socratic. However, Google has yet to reveal any details regarding the acquisition.

Socratic is a mobile learning app that aims to help high school and university students with their schoolwork outside the classroom. It guides them through available resources and points them toward the concepts most likely to lead to the correct answer.

Socratic was founded by Shreyans Bhansali and Chris Pedregal back in 2013. The app actually started as a web product. The founders were prompted to create Socratic because they could not find a platform where teachers and students alike could come together as a community and collaborate.

That experience also shaped the site’s goal: to create a space that makes learning reachable and available to all students. Socratic also aimed to become a central space where teachers could share information.

On the Socratic website, students could post questions. These questions were often very detailed, allowing users to help each other by providing answers. The site became a homework helper that connected teachers, mentors, and students.

In 2015, two years after it launched, Socratic had about 500,000 users in its community. In the same year, the education tech company raised $6 million in funding from Omidyar Network, Spark Capital, and Shasta Ventures.

Three years after its introduction, Socratic launched its app version. When the app first came out, it had a Quora-like Q&A platform, but it soon shifted away from user contribution and toward utility.

On the Socratic app, a user could take a photo of a homework problem and upload it to the platform. The user not only got the answer but was also shown the steps for getting it right.

Users could post math problems and be walked through the steps needed to reach the right answer, a feature similar to other math-solving apps available around that time.

What made Socratic different from those apps was that it did not focus solely on math. It also covered other subjects such as literature, science, history, and more.

Before its acquisition by Google last year, Socratic removed the app’s social feature. In June of the same year, it also closed the user-contribution feature on its website and announced that it would focus entirely on the app.

In its quest to make Socratic a tool that can readily help students, Google has worked on improving the app and its features. With the acquisition, Google has revamped and relaunched the app. Socratic is also seen as a way to support Google Assistant technology across different platforms.

Just like the old Socratic app, the latest version covers a wide variety of subjects, and users can turn to it for help with many different topics.

The app offers more than 1,000 subject guides covering high school and higher education topics. The study guides let users review a specific topic through key, highlighted points.

When using the app, it takes a user only two taps to reach the subject matter they need help with. If they want to learn more, the platform links to other resources on the web.

The new Socratic app is now also powered by AI. Apart from still being able to post photos of the problems, the app is now equipped with text and speech recognition. 

Google’s AI uses dedicated algorithms that can tackle complex problems. It can also break concepts down into shorter pieces that are easier for users to digest.
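
Google has not published how any of this works internally, but the flow the article describes (a photo goes in, the text is recognised, and a step-by-step explanation comes back) can be pictured as a small pipeline. The toy sketch below exists only to show that shape; the `recognise_text`, `classify_subject`, and `explain` helpers are hypothetical stand-ins, and the tiny arithmetic solver is not what Socratic actually does.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Explanation:
    subject: str
    answer: str
    steps: List[str]


def recognise_text(photo_bytes: bytes) -> str:
    """Stand-in for the recognition step; a real pipeline would run OCR here."""
    return photo_bytes.decode("utf-8")  # in this toy sketch the "photo" is already text


def classify_subject(question: str) -> str:
    """Toy keyword classifier mapping a question to a subject guide."""
    if any(symbol in question for symbol in "+-*/="):
        return "math"
    return "general"


def explain(question: str, subject: str) -> Explanation:
    """Toy solver for 'a + b' questions; anything else points to a study guide."""
    if subject == "math" and "+" in question:
        a, b = (int(part) for part in question.split("+"))
        steps = [f"Identify the operands: {a} and {b}",
                 f"Add them: {a} + {b} = {a + b}"]
        return Explanation(subject, str(a + b), steps)
    return Explanation(subject, "see subject guide", ["Open the matching study guide"])


def handle_photo(photo_bytes: bytes) -> Explanation:
    """Photo in -> recognised text -> subject guide -> step-by-step explanation."""
    question = recognise_text(photo_bytes)
    return explain(question, classify_subject(question))


print(handle_photo(b"12 + 30").steps)
```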

Socratic by Google was relaunched on iOS on August 15. Android users will have to wait a bit longer: the revamped Android version will be available for download from the Google Play store this fall.


Apps

Twitter to hide unwanted messages with a new filter

Twitter will also hide messages it thinks are offensive so users can decide whether they want to read them.

Photo by Con Karampelas on Unsplash

Twitter announced that it would be testing a new feature designed to protect people from receiving abusive and harassing messages through the microblogging platform’s Direct Messages feature.

Currently, messaging on Twitter is open to everyone – meaning anyone can send a message to anyone else without following them. Messages between accounts that follow each other go straight to the Direct Messages panel, while messages from users you don’t follow (or who don’t follow you) land in the “Message Requests” panel inside the Direct Messages page.

Twitter’s current messaging system is designed to connect more people; however, that openness also invites all kinds of messages, including abuse. The straightforward solution right now is for users to disable Direct Messages from people outside their follow network, but that doesn’t work for people – journalists, doctors, businesses – who need their inboxes open in case legitimate messages come their way.

This is the reason why Twitter is testing a new filter that would move unwanted, abusive, and harassing messages to a different tab in the Direct Messages panel.

“Unwanted messages aren’t fun. So we’re testing a filter in your DM requests to keep those out of sight, out of mind,” reads a post from Twitter’s support team’s account.

Now, instead of lumping all messages into one view, Twitter will filter unwanted and spam messages so that users do not see them automatically. The Message Requests section will still hold messages from people you don’t follow, and below it you’ll find a way to access the newly filtered messages.

The new feature requires users to deliberately tap a “Show” button before they can see and read filtered messages, which, according to Twitter, could include unwanted, harassing, and bullying messages.

Even then, users won’t automatically be able to read every message, as Twitter will also hide messages it thinks could be abusive or harassing. Instead of a preview of the message, the user sees a warning that Twitter has hidden it because it could be harassing or abusive. That way, the user can decide whether to open the message or trash it with the delete button located on the right side of every message.
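
Twitter has not said how the filter is built, but the behaviour described above amounts to a three-way routing decision: messages from people you follow go to the main inbox, messages from strangers go to Message Requests, and messages flagged as likely abusive go behind the extra “Show” step with their previews hidden. A rough sketch of that routing is below; the `looks_abusive` classifier and its blocked-terms list are purely illustrative placeholders, not Twitter’s actual model.

```python
from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class Message:
    sender: str
    text: str
    preview_hidden: bool = False  # True -> client shows a warning instead of the text


@dataclass
class Inbox:
    primary: List[Message] = field(default_factory=list)   # people you follow
    requests: List[Message] = field(default_factory=list)  # strangers, previews visible
    filtered: List[Message] = field(default_factory=list)  # behind the "Show" button


def looks_abusive(text: str) -> bool:
    """Placeholder for Twitter's undisclosed abuse classifier."""
    blocked_terms = {"spam", "scam"}  # illustrative only
    return any(term in text.lower() for term in blocked_terms)


def route(inbox: Inbox, msg: Message, following: Set[str]) -> None:
    """Route an incoming DM according to the behaviour described above."""
    if msg.sender in following:
        inbox.primary.append(msg)     # followed senders: straight to the inbox
    elif looks_abusive(msg.text):
        msg.preview_hidden = True     # show a warning, not the message text
        inbox.filtered.append(msg)    # revealed only after tapping "Show"
    else:
        inbox.requests.append(msg)    # other strangers: Message Requests
```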

The change Twitter is “testing” has the potential to make Direct Messages a workable tool for people who need their inboxes open, and a move that could help stem the proliferation of online abuse and hate speech.

Facebook has long had a similar option to filter messages it deems offensive: messages from people you are not friends with are clustered in the Filtered section of Messenger, and those that appear to be offensive sit just below them.

And Twitter has been very clear that this feature is still in the “testing” phase, which puts into question the slow pace of Twitter’s fight against abuse on its platform. Facebook Messenger has been filtering messages this way since late 2017, and Twitter still isn’t launching this; it’s just “testing” it.

That said, hope is not entirely lost, as the social media giant has also been testing the idea of hiding problematic messages on its platform. Earlier this year, Twitter started rolling out a feature in Canada that lets users “Hide Replies” so that not everyone can see them. Hidden replies are not deleted; they are simply tucked behind the other replies and require an extra tap to read.

The new message filtering system is just one of the changes Twitter is testing to improve the environment on its platform. Aside from the Hide Replies function, Twitter is also developing ways for users to follow a specific topic, which the company announced earlier this week at a press conference. Additionally, Twitter will launch a search tool for the Direct Message inbox so users can easily find messages from specific people or about specific topics, as well as support for iOS Live Photos as GIFs, the ability to reorder photos, and more.


Cybersecurity

Australia ruled that employees can refuse to provide biometric data to employers

It’s a right to say no.

An Australian court has ruled that employees are allowed to refuse to provide biometric data to their employers. The ruling follows a lawsuit filed by Jeremy Lee, who was fired from his previous job for refusing to provide fingerprint samples for the company’s newly installed fingerprint login system.

Jeremy Lee of Queensland, Australia, won a landmark case after being fired from his job at Superior Wood Pty Ltd, a lumber manufacturer, in February 2018 for refusing to provide his fingerprints to sign in and out of work; he argued that he had been unfairly dismissed by the company.

“Mr. Lee objected to the use of the scanners and refused to use them in the course of his employment, as he was concerned about the collection and storage of his personal information by the scanners and Superior Wood,” reads the suit.

“On February 12, 2018, Mr. Lee was issued with a letter of termination dismissing him from his employment on the grounds that he had failed to adhere to Superior Wood’s Site Attendance Policy,” it added.

Lee filed a suit with Australia’s Fair Work Commission in March 2018, arguing that he owns the rights to the biometric data contained in his fingerprints and that, under the country’s privacy laws, he has the right to refuse to provide it to his employer.

“Mr. Lee was employed by Superior Wood as a regular and systematic casual employee. It is not contested, and I so determine that he had a reasonable expectation of continuing employment with Superior Wood on a regular and systematic basis. Mr. Lee’s annual earnings were less than the high-income threshold amount. Mr. Lee is protected from unfair dismissal under s.382 of the Act,” the case file reads.

However, in the first assessment of the case, the commissioner who examined the complaint denied Lee’s suit and sided with Superior Wood.

“I’m not comfortable providing my fingerprints to the scanner so I won’t be doing it at this stage,” said Lee in a testimony.

“I am unwilling to consent to have my fingerprints scanned because I regard my biometric data as personal and private.

If I were to submit to a fingerprint scan time clock, I would be allowing unknown individuals and groups to access my biometric data, the potential trading/acquisition of my biometric data by unknown individuals and groups, indefinitely,” reads Lee’s affidavit.

The rejection did not stop Lee from pursuing his rights: he chose to represent himself in an appeal to the commission in November 2018. The appeal directly challenges the country’s privacy laws and has opened a discussion on biometric data.

Good news came on May 1, 2019, when the commission ruled in favor of Lee’s petition, affirming that he had the right to refuse to provide the company with his biometric data and that his dismissal from his position was unjust.

“We accept Mr. Lee’s submission that once biometric information is digitized, it may be very difficult to contain its use by third parties, including for commercial purposes,” case documents state.

Lee’s case is a first in Australia. While it did not change the law, it opens a new perspective on the ownership of biometric information such as fingerprints and facial recognition data, and it reinterprets how privacy laws apply to data of this kind.

The news about Lee’s case and the Australian ruling comes after Biostar, a popular biometrics service company, suffered a massive data leak that exposed data from enterprises, banks and other financial institutions, and even the Metropolitan Police in the UK.

The researchers, who disclosed the data leak on Wednesday, said that “huge parts of Biostar 2’s database are unprotected and mostly unencrypted.”

More than 27.8 million records, comprising more than 23GB of data, were exposed through the Biostar 2 database. The data belongs to the security and biometrics company’s clients and includes one million fingerprint records, images of users with linked facial recognition data, records of entry to secure areas, confidential employee information, user security levels and clearances, and employees’ personal data such as email and home addresses, as well as their mobile device records.

The data breach highlights how sensitive biometric data is and how massive the implications are when this kind of information is leaked.

“The fact that this biometric data was stored plainly and not in hashed form raises some serious concerns and is unacceptable. Biometrics deserve greater privacy protections than traditional credentials, they’re part of you, and there’s no resetting a fingerprint or face. Once fingerprint and facial recognition data are leaked or stolen, the victim can never undo this breach of privacy. The property that makes biometrics so effective as a means of identification is also its greatest weakness,” said Kelvin Murray, senior threat research analyst for Webroot in an email to Z6Mag.
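
Murray’s reference to “hashed form” points to the standard practice for passwords: store a salted, slow hash instead of the secret itself, so a leaked database does not directly expose the credential. A minimal sketch of that practice, using only Python’s standard library, is below. Note that raw biometric templates generally cannot be protected this simply, because fuzzy matching needs something close to the original data, which is part of why a biometric leak is so hard to recover from.

```python
import hashlib
import hmac
import os
from typing import Tuple

ITERATIONS = 200_000  # slow the hash down to resist brute force


def store_credential(secret: bytes) -> Tuple[bytes, bytes]:
    """Return (salt, digest) for storage instead of the raw secret."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret, salt, ITERATIONS)
    return salt, digest


def verify_credential(secret: bytes, salt: bytes, stored_digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", secret, salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)


salt, digest = store_credential(b"correct horse battery staple")
assert verify_credential(b"correct horse battery staple", salt, digest)
assert not verify_credential(b"wrong guess", salt, digest)
```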
