Protect your data by not doing these things online

Technology

Cybersecurity 101: Online Behaviors That You Should Stop Doing

The average person shares a significant amount of personal and identifiable data with different platforms every time they log in to a social media account, buy something online, sign up for a promotion, or simply browse the internet out of boredom. This data includes names, IP addresses, email addresses, locations, and credit card numbers, and in some cases sensitive information like intimate messages, phone recordings, and social security numbers.

While many online companies promise security to their users, plenty of these supposedly secure platforms still suffer data leaks that expose user data to the prying eyes of cybercriminals. When data is floating out there, there will always be someone fishing for it.

Cyberattacks are everywhere, and even huge corporations like Facebook have been hit in the past. Now more than ever, people need to be vigilant about how they use the internet and handle their data.

Keeping your data safe may be the responsibility of the company you entrust it to, but you also have to be careful with what you share. In many cases, it is the data owner's own recklessness that opens the door to phishing and other dangerous attacks, so prevention is better than cure.

Protecting your data is not hard, and there is plenty you can do to keep it from falling into the hands of malicious hackers. The first step is practicing good online behavior and following basic security protocols.

Nonetheless, many of us take the basic internet don'ts for granted, putting our data at serious risk. Here are some of the behaviors everyone should avoid to maximize the protection of their data.

USING ONE PASSWORD FOR ALL YOUR ACCOUNTS

It is understandable for the average person to have accounts on several social media platforms and websites, and reusing a single, universal password seems beneficial in the short run. Memorizing different passwords is a tough task for anyone, but a universal password maximizes the damage of a cyberattack.

When a hacker gets hold of your universal password, they can easily open all your accounts and download all the data stored in them. Conversely, having a unique password for every account isolates an attack to the one site where the stolen password works.

If you're worried that having multiple passwords means forgetting them, there are apps you can download to manage your passwords in one place. The downside is that if the password manager is compromised, all of your passwords are compromised with it. The trick is to use passwords that are strong but easy to remember, or to keep them in cold storage or some other secure, offline place at home or in your office.
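
If you prefer to generate strong, unique passwords yourself, the short Python sketch below shows one way to do it with the standard library's secrets module. The 20-character length and the character set are illustrative assumptions, not a hard rule.

```python
# Minimal sketch: generate a distinct random password for each account
# using the cryptographically secure 'secrets' module.
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    # One unique password per account keeps a single leak from spreading.
    for account in ("email", "banking", "social-media"):
        print(account, generate_password())
```

Because each account gets its own random string, a password stolen from one site cannot be replayed against the others.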

NOT READING THE TERMS AND CONDITIONS

Terms and Conditions are long, and it's understandable if you don't sit down and read everything written there. But remember that the Terms and Conditions serve as a contract between you and the service you are using, and all of the security and data-handling rules are stipulated there. In most cases, the T&Cs are where a service spells out what kind of data it collects from you and how it will be used. Without this information, which was readily available before you signed up, you may be handing over data you never consented to share, or the service may be selling your data to third parties without your knowledge. It always pays to read the T&Cs.

CONNECTING TO PUBLIC WIFI NETWORKS

Here's the thing: the internet is readily accessible to almost anyone at a low price, so avoid connecting your devices to public WiFi. By joining these networks, you can allow other devices on them, computers and smartphones alike, to see what you are doing on your device, and in some cases to reach files saved in your internal storage. This is especially dangerous if you use public WiFi to submit sensitive information like credit card numbers and social security numbers. Unless the need is truly urgent, do not connect your device to public networks.

SUBMITTING DATA TO NON-SECURED WEBSITES

But wait, how do you know whether a website is secure? Check the URL. The simplest indicator is whether the site uses HTTPS instead of HTTP: HTTPS encrypts the data you send, while plain HTTP transmits it in the clear. By staying vigilant in that way, you can avoid handing data to websites that may use it in ways you never intended.
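
The same check can be automated. The snippet below is a minimal Python sketch that refuses to send a form payload unless the destination URL uses the https scheme; the submit_form function and its payload are hypothetical placeholders for illustration only.

```python
# Minimal sketch: only submit data to URLs that use the https scheme.
from urllib.parse import urlparse

def is_https(url: str) -> bool:
    """Return True only when the URL's scheme is 'https'."""
    return urlparse(url).scheme == "https"

def submit_form(url: str, payload: dict) -> None:
    # Hypothetical helper: refuse insecure destinations before sending anything.
    if not is_https(url):
        raise ValueError(f"Refusing to send data over an insecure connection: {url}")
    # ... send the request here with an HTTP client library ...

print(is_https("https://example.com/checkout"))  # True
print(is_https("http://example.com/checkout"))   # False
```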

CLICKING SUSPICIOUS LINKS AND EMAILS

This is very self-explanatory. If you find a link suspicious, don't click on it. The same goes for emails. If you are told you have won $1 million in a contest you never joined, do not fall for the ruse. Chances are these are phishing links, and they will extract your data before you realize you have been fooled.

IN CONCLUSION: Protecting yourself as much as you can from cyberattacks that could harm you and your finances keeps your peace of mind. Be vigilant and be cautious with the data that you share online.

A consumer tech and cybersecurity journalist who does content marketing while daydreaming about having unlimited coffee for life and getting a pet llama. I also own a cybersecurity blog called Zero Day.

Apps

Google publicly reveals acquiring homework helper app, Socratic

Google Socratic App

Google publicly disclosed this week that it has acquired the mobile learning app Socratic. However, Google has yet to reveal any details regarding the acquisition.

Socratic is a mobile learning app that aims to help high school and university students with their schoolwork when they are out of the classroom. It supports them through its available resources and points them to the concepts most likely to lead them to the correct answer.

Socratic was founded by Shreyans Bhansali and Chris Pedregal back in 2013, and the app actually started as a web product. The founders were prompted to create Socratic because they could not find a platform where people, teachers and students alike, could come together as a community and collaborate.

This also shaped the site's goal of creating a space that made learning reachable and available to all students. Socratic likewise aimed to become a central place where teachers could share information.

On the Socratic website, students were able to post questions. These questions were often very detailed, allowing users to help each other by providing answers. The site became a homework helper that connected teachers, mentors, and students.

In 2015, two years after it launched, Socratic had about 500,000 users in its community. In the same year, the education tech company raised $6 million in funding from Omidyar Network, Spark Capital, and Shasta Ventures.

Three years after its introduction, Socratic launched its app version. When the app first came out, it had a Quora-like Q&A platform, but it soon shifted its focus away from user contributions and toward its utility as a study tool.

On the Socratic app, a user can take a photo of a homework problem and upload it to the platform. The user not only gets the answer but also sees the steps for arriving at it.

Users could post math problems and be taught the necessary steps to reach the right answer, a feature similar to other math-solving apps available around that time.

What made Socratic different from those apps was that it did not focus solely on math. It also covered other subjects like literature, science, history, and much more.

Before its acquisition by Google last year, Socratic removed the app's social feature. In June of the same year, it also closed the user-contribution feature on its website and announced that it would focus entirely on the app.

In its quest to make Socratic a tool that can easily help students, Google has worked on improving the app and its features. With the acquisition, Google has revamped and relaunched the app, and Socratic is also seen as something that can support Google Assistant technology across different platforms.

Just like the old Socratic app, the latest version still covers a wide variety of subjects that users can turn to for help.

There are over 1,000 subject guides on the app covering high school and higher-education topics. The study guides let users review a specific topic through key or highlighted points.

When using the app, it takes only two taps to reach the subject matter the user needs help with. If the user wants to learn more, the platform links to further resources on the web.

The new Socratic app is now also powered by AI. Apart from posting photos of problems, users can now rely on text and speech recognition.

Google's AI makes use of dedicated algorithms that can address complex problems. The AI component can also break concepts down, making them shorter and easier for users to digest.

Socratic by Google was relaunched on iOS last August 15. Android users might have to wait a bit longer: the revamped Android version of Socratic will be available for download from the Google Play store this coming fall.

Apps

Twitter to hide unwanted messages with a new filter

Twitter will also censor messages they think are offensive so users can decide if they want to read it or not.

Photo by Con Karampelas on Unsplash

Twitter announced that it would be testing a new feature designed to protect people from receiving abusive and harassing messages through the microblogging platform's Direct Messages feature.

Currently, messaging on Twitter is open to everyone, meaning anyone can send a message to anyone else without even following them. Messages between accounts that follow each other go straight to the Direct Messages panel, while messages from users an account does not follow (or is not followed by) land in the "Message Requests" panel inside the Direct Messages page.

Twitter's current messaging system is designed to give people more ways to connect; however, it also invites all kinds of messages, including abuse. The straightforward workaround is to disable Direct Messages from people outside your network, but that does not work for people, such as journalists, doctors, or businesses, who need to keep their inboxes open in case legitimate messages come their way.

This is why Twitter is testing a new filter that moves unwanted, abusive, and harassing messages to a different tab in the Direct Messages panel.

“Unwanted messages aren’t fun. So we’re testing a filter in your DM requests to keep those out of sight, out of mind,” reads a post from Twitter’s support team’s account.

Now, instead of lumping all messages into one view, Twitter will filter unwanted and spam messages so that users do not automatically see them. The Message Requests section will still include messages from people you don't follow, and below that, you'll find a way to access the newly filtered messages.

The new feature requires users to click a "Show" button before they can see and read filtered messages, which, according to Twitter, could include unwanted, harassing, and bullying messages.

Even after revealing the filtered messages, users won't automatically be able to read them all, as Twitter will also hide individual messages it thinks could be abusive or harassing. Instead of a preview of the message, the user will see a warning that Twitter hid that specific message because it could be harassing or abusive. That way, the user can decide whether to open the message or trash it directly using the delete button located on the right side of every message.

The change that Twitter is "testing" has the potential to make Direct Messages a workable tool for people who need their inboxes open, and it is a move that could help stop the proliferation of online abuse and hate speech.

Facebook has long had a similar option to filter messages it deems offensive: messages from people you are not friends with are clustered in the Filtered section of Messenger, and those that appear to be offensive sit just below them.

Twitter has been very clear that this feature is still in the "testing" phase, which calls into question the slow pace at which the company is fighting abuse on its platform. Facebook Messenger has been filtering messages this way since late 2017, and Twitter has still not launched its version; it is only "testing" it.

Still, hope is not entirely lost for Twitter, as the social media giant has also been testing the idea of hiding problematic messages on its platform. Earlier this year, Twitter started rolling out a "Hide Replies" feature in Canada that lets users hide replies from public view. Hidden replies are not deleted; they are simply tucked behind other replies and require readers to click through to see them.

The new message-filtering system is just one of the changes Twitter is testing to improve the environment on its platform. Aside from the Hide Replies function, Twitter is also developing ways for users to follow a specific topic, which the company announced earlier this week in a press conference. Additionally, Twitter will launch a search tool for the Direct Message inbox so users can easily find messages from specific people or about specific topics, along with support for iOS Live Photos as GIFs, the ability to reorder photos, and more.

Cybersecurity

Australia ruled that employees can refuse to provide biometric data to employers

It’s a right to say no.

Australian court ruled that employees are allowed not to provide their biometric data

An Australian court has ruled that employees are allowed to refuse to provide biometric data to their employers. The ruling follows a lawsuit filed by Jeremy Lee, who was fired from his previous job for refusing to provide fingerprint samples for the company's newly installed fingerprint login system.

Jeremy Lee of Queensland, Australia, won a landmark case after he was fired from his job at Superior Wood Pty Ltd, a lumber manufacturer, in February 2018 for refusing to provide his fingerprints to sign in and out of work, arguing that he was unfairly dismissed from the company.

“Mr. Lee objected to the use of the scanners and refused to use them in the course of his employment, as he was concerned about the collection and storage of his personal information by the scanners and Superior Wood,” reads the suit.

“On February 12, 2018, Mr. Lee was issued with a letter of termination dismissing him from his employment on the grounds that he had failed to adhere to Superior Wood’s Site Attendance Policy,” it added.

Lee filed a suit with Australia's Fair Work Commission in March 2018, arguing that he owns the rights to the biometric data contained in his fingerprints and that, under the country's privacy laws, he had the right to refuse to provide it to his employer.

“Mr. Lee was employed by Superior Wood as a regular and systematic casual employee. It is not contested, and I so determine that he had a reasonable expectation of continuing employment with Superior Wood on a regular and systematic basis. Mr. Lee’s annual earnings were less than the high-income threshold amount. Mr. Lee is protected from unfair dismissal under s.382 of the Act,” the case file reads.

However, in the first assessment of the case, the commissioner who examined the complaint denied Lee's claim and sided with Superior Wood.

“I’m not comfortable providing my fingerprints to the scanner so I won’t be doing it at this stage,” said Lee in a testimony.

“I am unwilling to consent to have my fingerprints scanned because I regard my biometric data as personal and private.

If I were to submit to a fingerprint scan time clock, I would be allowing unknown individuals and groups to access my biometric data, the potential trading/acquisition of my biometric data by unknown individuals and groups, indefinitely,” reads Lee’s affidavit.

The rejection did not stop Lee from pursuing his rights: he represented himself in an appeal to the commission in November 2018. The appeal directly challenged the country's privacy laws and opened a wider discussion on biometric data.

Good news came on May 1, 2019, when the commission ruled in favor of Lee's petition, affirming that he had the right to refuse to provide the company with his biometric data and that his dismissal was unjust.

“We accept Mr. Lee’s submission that once biometric information is digitized, it may be very difficult to contain its use by third parties, including for commercial purposes,” case documents state.

Lee's case is a first in Australia. While it did not change the law, it opens a new perspective on the ownership of biometric information like fingerprints and facial recognition data, and it reinterprets how privacy laws apply to such data.

The news about Lee's case and the Australian ruling comes after Biostar 2, a popular biometrics service, suffered a massive data leak that exposed data from enterprises, banks and other financial institutions, and even the Metropolitan Police in the UK.

The researchers who disclosed the data leak on Wednesday said that "huge parts of Biostar 2's database are unprotected and mostly unencrypted."

More than 27.8 million records comprising more than 23GB of data were leaked through the Biostar 2 database. The data belongs to the security and biometrics company's clients and includes one million fingerprint records, images of users with linked facial recognition data, records of entry to secure areas, confidential employee information, user security levels and clearances, and personal data of employees such as emails and home addresses, as well as their mobile device records.

The data breach highlights the value of biometric data and how massive the implications are when this kind of sensitive information is leaked.

“The fact that this biometric data was stored plainly and not in hashed form raises some serious concerns and is unacceptable. Biometrics deserve greater privacy protections than traditional credentials, they’re part of you, and there’s no resetting a fingerprint or face. Once fingerprint and facial recognition data are leaked or stolen, the victim can never undo this breach of privacy. The property that makes biometrics so effective as a means of identification is also its greatest weakness,” said Kelvin Murray, senior threat research analyst for Webroot in an email to Z6Mag.
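
To illustrate the point about hashed storage, here is a minimal Python sketch of keeping only a salted hash of a secret rather than the plain value. It uses a password-style credential and illustrative parameters purely to show the principle; real biometric templates require specialized protection schemes beyond simple hashing.

```python
# Minimal sketch: store a salted hash of a secret instead of the secret itself.
# The credential, salt size, and iteration count are illustrative assumptions.
import hashlib
import hmac
import os

def hash_secret(secret: bytes) -> tuple:
    """Return (salt, derived_hash); only these values would be stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)
    return salt, digest

def verify_secret(secret: bytes, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_secret(b"example-credential")
print(verify_secret(b"example-credential", salt, stored))  # True
print(verify_secret(b"wrong-guess", salt, stored))         # False
```

A database that holds only the salt and the derived hash reveals little of direct use if it leaks, whereas a plaintext store like the one described above exposes the secret itself.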
