
Technology

Google Play Store Hosted Government-Owned Malware


More than 20 government-linked malicious apps were recently uncovered after evading the filters Google uses to pinpoint malware and keep problematic apps off Google Play. Hackers working for a surveillance company are suspected of infecting hundreds of people with several malicious Android apps that were hosted on the Google Play Store for months.

The discovery came from a joint study between Security Without Borders, a non-profit organization that often investigates threats against activists and human rights advocates, and Motherboard. The team behind the investigation published its detailed findings and technical report on Friday.

According to Motherboard's report, a new kind of Android malware on the Google Play Store was sold to the Italian government by a company that sells surveillance cameras but was only recently known to produce malware. The apps would "remain available on the Play Store for months and would eventually be re-uploaded."

Tech experts have said that hundreds of innocent users may have been infected by the malware operation because of its poor and faulty targeting. Law enforcement and legal experts have also raised the possibility that such malware is illegal.

“We identified previously unknown spyware apps being successfully uploaded on Google Play store multiple times over the course of over two years. These apps would remain available on the Play Store for months and would eventually be re-uploaded,” the researchers said.

Meet ‘Exodus’

The spyware, named Exodus, tricks targets into installing it by posing as harmless apps that promise promotions and marketing offers from local Italian cellphone providers, or that claim to improve the device's performance.

When alerted by the researchers to the existence of the apps, Google took them down and said that it had found 25 different versions of the spyware over the last two years, dating back to 2016. While Google confirmed that the number of victims is below 1,000, it declined to provide more precise figures on how many people were affected, or any information about the targets.

Exodus was programmed to act in two stages. In the initial stage, the malware would install itself, check the phone number and the device's IMEI (its unique identifying number), and validate whether or not the device was a target. For that apparent purpose, the malware has a function called "CheckValidTarget."
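The researchers did not publish Exodus's source code, but the first-stage gate they describe behaves like an allowlist check against operator-held identifiers. A minimal sketch of that pattern, with all names and values invented for illustration (the real malware queries a Command & Control server rather than a local list):

```python
# Hypothetical sketch of a first-stage target check like "CheckValidTarget".
# The allowlist here is a local set purely for illustration.
TARGET_IMEIS = {"356938035643809"}  # invented example IMEI

def check_valid_target(imei: str) -> bool:
    """Return True only if this device is on the operators' target list."""
    return imei in TARGET_IMEIS

def first_stage(imei: str) -> str:
    # Proper targeting would keep the payload off non-target devices;
    # the researchers found this validation was not actually enforced.
    if check_valid_target(imei):
        return "download second-stage payload"
    return "stay dormant"

print(first_stage("356938035643809"))  # download second-stage payload
print(first_stage("000000000000000"))  # stay dormant
```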

But the researchers' investigation suggests the spyware's verification mechanism does not work properly. "This suggests that the operators of the Command & Control are not enforcing a proper validation of the targets," the report noted. "Additionally, during a period of several days, our infected test devices were never remotely disinfected by the operators."

During Security Without Borders' tests, the malware on the dummy phone used to investigate it gained access to most of the sensitive data on the device, such as audio recordings of the phone's surroundings, phone calls, browsing history, calendar information, geolocation, Facebook Messenger logs, WhatsApp chats, and text messages.

Furthermore, the spyware would open a port and a shell on the device that allowed the operators to send commands to the infected phones. The researchers highlighted that these open shells use no encryption, and the port is open to anyone on the same Wi-Fi network as the target. This means that anyone connected to the network can access and send commands to the infected devices.

“This inevitably leaves the device open not only for further compromise but for data tampering as well.”
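The risk the researchers describe can be shown with a minimal sketch using ordinary Python sockets (not Exodus code): a service bound to all interfaces with no authentication or encryption accepts commands from any host that can reach the port, which on a shared Wi-Fi network means anyone nearby.

```python
import socket
import threading

# Conceptual illustration: a "shell"-like service bound to 0.0.0.0 with no
# authentication or encryption takes commands from any client that connects.
def start_open_service(host="0.0.0.0", port=0):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # 0.0.0.0 = listen on every interface
    srv.listen(1)
    port = srv.getsockname()[1]     # OS-assigned port for the demo

    def handle():
        conn, _addr = srv.accept()  # no check on who is connecting
        cmd = conn.recv(1024)       # command arrives in cleartext
        conn.sendall(b"ran: " + cmd)
        conn.close()
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return port

# Any client on the network can drive the service; here we connect locally.
port = start_open_service()
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"id")
print(client.recv(1024))  # b'ran: id'
```

Because nothing in the exchange identifies or encrypts the caller, a neighboring device on the same network could issue the same commands, which is exactly the "further compromise" and "data tampering" the report warns about.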

Google Play’s app filter is limited

Many have already raised concerns about the limits of Google's filters, which are meant to prevent malware from slipping into its official app marketplace. Both government-sponsored hackers and those working for criminal organizations have been known to upload malicious apps to the Play Store. The new discovery only highlights Google's inability to protect Android users from destructive applications downloaded from the Google Play Store.

Other tech experts found the discovery alarming, but not surprising. Lukas Stefanko, an ESET researcher who specializes in Android malware, said he was not surprised but was alarmed that malware continues to make its way past the Play Store's filtering mechanisms.

“Malware in 2018 and even in 2019 has successfully penetrated Google Play’s security mechanisms. Some improvements are necessary,” Stefanko noted. “Google is not a security company, maybe they should focus more on that.”



Apps

Google publicly reveals acquiring homework helper app, Socratic


Google has publicly disclosed that it has acquired the mobile learning app, Socratic, this week. However, Google has yet to reveal any details regarding the acquisition.

Socratic is a mobile learning app that helps high school and university students with their schoolwork when they are out of the classroom. It supports them with its available resources and points them to the concepts most likely to lead them to the correct answer.

Socratic was founded by Shreyans Bhansali and Chris Pedregal back in 2013. The app actually started as a web product. The founders were prompted to create Socratic as they were not able to find a platform where people — both teachers and students alike — can come together as a community and collaborate.  

This also led to the goal of the site to create a space that made learning reachable and available for and to all students. Socratic was also aimed at becoming a central space where teachers can share information.

On the Socratic website, students were able to post questions. These questions were often very detailed, allowing users to help each other by providing answers. The site became a homework helper that connected teachers, mentors, and students.

In 2015, two years after it launched, Socratic had about 500,000 users in its community. In the same year, the education tech company raised $6 million in funding from Omidyar Network, Spark Capital, and Shasta Ventures.

Three years after its introduction, Socratic launched its app version. When the app first came out, it had a Quora-like Q&A platform. However, it soon evolved into focusing less on user contribution and more on its utility aspect.

On the Socratic app, a user could take a photo of a homework problem and upload it to the platform. The user then received not only the answer but also the steps for getting it right.

Users were able to post math problems and they were taught the necessary steps to get the right answer. This feature was similar to other math solving apps available around that time.

What made Socratic different from those apps was the fact that it did not focus solely on math alone. It also covered other subjects like literature, science, history, and so much more.

Before its acquisition by Google last year, Socratic removed the social feature of the app. In June of the same year, it also closed the user contribution feature on its website and announced that it would be focusing on its app entirely.

In its quest to make Socratic a tool that can easily help students, Google has worked on improving the app and its features, revamping and relaunching it since the acquisition. Socratic is also seen as a technology that could support Google Assistant across different platforms.

Just like with the old Socratic app, the latest version still covers a wide variety of subjects. Users can use the app to help them with different matters.

There are over 1,000 subject guides on the app that would cover high school and higher education topics. The study guides allow its users to study or go over a specific topic through key or highlighted points.

When using the app, it takes the user only two taps to reach the subject matter they need help with. If the user wants to learn more, the platform links to different resources on the web.

The new Socratic app is now also powered by AI. Apart from still being able to post photos of the problems, the app is now equipped with text and speech recognition. 

Google’s AI makes use of dedicated algorithms that can address complex problems. The AI component can also break concepts down and make them shorter and easier for its users.

Socratic by Google was relaunched on iOS on August 15. Android users will have to wait a bit longer: the revamped Android version of Socratic will be available for download from the Google Play Store this coming fall.


Apps

Twitter to hide unwanted messages with a new filter

Twitter will also hide messages it thinks are offensive so users can decide whether they want to read them or not.


Twitter announced that they would be testing a new feature that would essentially protect people from receiving abusive and harassing messages through the microblogging platform’s Direct Messages feature.

Currently, messaging on Twitter can be open to everyone, meaning anyone can send a message to an account without even following it. Messages between accounts that follow each other go directly to the Direct Messages panel, while those from users an account does not follow (or is not followed by) land in the "Message Requests" panel inside the Direct Messages page.

Twitter's messaging system is designed to connect people in more ways; however, that openness also invites all forms of messages, including abuse. The straightforward workaround today is to disable Direct Messages from strangers, but that does not work for people such as journalists, doctors, or businesses, who need to keep their inboxes open in case legitimate messages come their way.

This is the reason why Twitter is testing a new filter that would move unwanted, abusive, and harassing messages to a different tab in the Direct Messages panel.

“Unwanted messages aren’t fun. So we’re testing a filter in your DM requests to keep those out of sight, out of mind,” reads a post from Twitter’s support team’s account.

Now, instead of lumping messages in one view, Twitter is going to filter unwanted and spam messages so that users will not automatically see them. The Message Requests section will include the messages from people you don’t follow, and below that, you’ll find a way to access these newly filtered messages.

The new feature requires users to voluntarily click the "Show" button to see and read filtered messages, which, according to Twitter, could include unwanted, harassing, and bullying messages.

Even after showing filtered messages, users won't automatically be able to read them all, as Twitter will also hide individual messages it thinks could be abusive or harassing. Instead of a preview of the message, the user will see a warning that Twitter hid the specific message because it could be harassing or abusive. That way, the user can decide whether to open the message or trash it directly using the delete button located on the right side of every message.
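The triage Twitter describes can be summarized as a simple routing decision. A conceptual sketch (not Twitter's code; the rule names are invented) of where a message lands:

```python
# Conceptual sketch of the Direct Message triage described above:
# followed senders reach the inbox, strangers go to Message Requests,
# and suspected-abusive messages go to a hidden "filtered" bucket.
def route_message(sender_followed: bool, looks_abusive: bool) -> str:
    if sender_followed:
        return "inbox"
    if looks_abusive:
        return "filtered"          # hidden behind the "Show" button
    return "message_requests"      # visible, but outside the main inbox

print(route_message(True, False))   # inbox
print(route_message(False, True))   # filtered
print(route_message(False, False))  # message_requests
```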

The change Twitter is "testing" has the potential to make Direct Messages a workable tool for people who need their inboxes open, and to help stem the proliferation of online abuse and hate speech.

Facebook has long had a similar option to filter messages it deems offensive: messages from people you are not friends with are clustered in Messenger's Filtered section, and those that appear to be offensive sit just below them.

And Twitter has been very clear that this feature is still in the "testing" phase, which calls into question the company's slow pace in fighting abuse on its platform. Facebook Messenger has been filtering messages this way since late 2017, and Twitter is still not launching this; it is just "testing" it.

Still, hope is not totally lost, as the social media giant has also been testing the idea of hiding problematic messages on its platform. Earlier this year, Twitter started rolling out a feature in Canada that lets users "Hide Replies" so that they are not visible to everyone by default. The hidden replies are not deleted; they are simply tucked behind other replies and require an extra click to read.

The new message filtering system is just one of the changes Twitter is testing right now to improve the environment on its platform. Aside from the Hide Replies function, Twitter is also developing ways for users to follow a specific topic, which the company announced earlier this week in a press conference. Additionally, Twitter will launch a search tool for the Direct Message inbox so that users can easily find messages from specific users or topics, as well as support for iOS Live Photos as GIFs, the ability to reorder photos, and more.


Cybersecurity

Australia ruled that employees can refuse to provide biometric data to employers

It’s a right to say no.


Australia's Fair Work Commission ruled that employees are allowed to refuse to provide biometric data to their employers. The ruling follows a lawsuit Jeremy Lee filed after being fired from his previous job for refusing to provide fingerprint samples for the company's newly installed fingerprint login system.

Jeremy Lee from Queensland, Australia, won a landmark case after he was fired from his job at Superior Wood Pty Ltd, a lumber manufacturer, in February 2018, for refusing to provide his fingerprints to sign in and out of his work, citing that he was unfairly dismissed from the company.

“Mr. Lee objected to the use of the scanners and refused to use them in the course of his employment, as he was concerned about the collection and storage of his personal information by the scanners and Superior Wood,” reads the suit.

“On February 12, 2018, Mr. Lee was issued with a letter of termination dismissing him from his employment on the grounds that he had failed to adhere to Superior Wood’s Site Attendance Policy,” it added.

Lee filed a suit with Australia’s Fair Work Commission in March 2018, saying that he owns the rights to the biometric data that is included in his fingerprints and he has the right to refuse from providing them to his employer under the country’s privacy laws.

“Mr. Lee was employed by Superior Wood as a regular and systematic casual employee. It is not contested, and I so determine that he had a reasonable expectation of continuing employment with Superior Wood on a regular and systematic basis. Mr. Lee’s annual earnings were less than the high-income threshold amount. Mr. Lee is protected from unfair dismissal under s.382 of the Act,” the case file reads.

However, in the first assessment of the case, the commissioner who examined the complaint denied Lee's suit and sided with Superior Wood.

“I’m not comfortable providing my fingerprints to the scanner so I won’t be doing it at this stage,” said Lee in a testimony.

“I am unwilling to consent to have my fingerprints scanned because I regard my biometric data as personal and private.

If I were to submit to a fingerprint scan time clock, I would be allowing unknown individuals and groups to access my biometric data, the potential trading/acquisition of my biometric data by unknown individuals and groups, indefinitely,” reads Lee’s affidavit.

The rejection did not stop Lee from pursuing his rights: he represented himself in an appeal to the commission in November 2018. The appeal directly challenges the country's privacy laws and has opened a discussion on biometric data.

Good news came May 1, 2019, when the commission ruled in favor of Lee’s petition, affirming that he has the right to refuse to provide the company with his biometric data and that his dismissal from his position was unjust.

“We accept Mr. Lee’s submission that once biometric information is digitized, it may be very difficult to contain its use by third parties, including for commercial purposes,” case documents state.

Lee's case is a first in Australia. While it did not change the law, it opens a new perspective on the ownership of biometric information such as fingerprints and facial recognition data, and on how privacy laws will apply to data like these.

The news about Lee's case and the ruling comes after Biostar, a popular biometrics service company, suffered a massive data leak that exposed data from enterprises, banks and other financial institutions, and even the Metropolitan Police in the UK.

The researchers, who disclosed the data leak on Wednesday, said that “huge parts of Biostar 2’s database are unprotected and mostly unencrypted.”

More than 27.8 million records that comprise more than 23GB of data were leaked through the Biostar 2 database. These data belong to all the clients of the security and biometric company and include one million fingerprint records, images of users and linked facial recognition data, records of entry to secure areas, confidential employee information, user security levels and clearances, personal data of employees like emails and home address as well as their mobile device records.

The data breach highlights the value of biometric data and how massive the implications are when this kind of sensitive information is leaked.

“The fact that this biometric data was stored plainly and not in hashed form raises some serious concerns and is unacceptable. Biometrics deserve greater privacy protections than traditional credentials, they’re part of you, and there’s no resetting a fingerprint or face. Once fingerprint and facial recognition data are leaked or stolen, the victim can never undo this breach of privacy. The property that makes biometrics so effective as a means of identification is also its greatest weakness,” said Kelvin Murray, senior threat research analyst for Webroot in an email to Z6Mag.
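As a generic illustration of the distinction Murray draws (not Biostar's system): a value stored in the clear is reusable by anyone who reads the database, while a salted hash supports only a yes/no verification against a fresh sample. A sketch using Python's standard library:

```python
import hashlib
import hmac
import os

# Illustration of "plain" vs "hashed" storage of a sensitive credential.
# A raw value in the database can be copied and replayed; a salted hash
# can only confirm or deny a candidate presented later.
def store_hashed(secret: bytes) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique per record, stored alongside the digest
    digest = hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)
    return salt, digest

def verify(secret: bytes, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = store_hashed(b"fingerprint-template-bytes")
print(verify(b"fingerprint-template-bytes", salt, digest))  # True
print(verify(b"someone-else", salt, digest))                # False
```

Note the caveat: real biometric templates are noisy, so two scans of the same finger never match byte-for-byte; in practice protecting them requires dedicated schemes (e.g., fuzzy extractors or template protection) rather than exact hashing, which is part of why a leak of raw templates is so severe.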
