Technology

Users Are Opting Out From Using Grindr, But Management Doesn’t Seem To Care


Grindr, the famous gay dating app, turned ten years old this Monday. What began as a geosocial networking and online dating application has grown into a campaign-centered app that supports several advocacy causes, strengthening connections within LGBTQ communities.

Using the geolocation capability of mobile devices, found in most smartphones, the app allows users to locate others nearby, displaying a grid of profiles arranged from nearest to farthest away. One tap on a picture gives you the chance at a match, and if luck is on your side, that person might turn out to be your destined partner in life.
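Mechanically, that nearest-first grid is just a sort by great-circle distance. A minimal illustrative sketch in Python (the function and the profile format are hypothetical, not Grindr's actual code):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

def nearest_first(my_lat, my_lon, profiles):
    """Order (name, lat, lon) profiles from nearest to farthest."""
    return sorted(profiles, key=lambda p: haversine_km(my_lat, my_lon, p[1], p[2]))
```

The distances themselves are what later became a privacy liability, as the research discussed below shows.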

Launched on March 25, 2009, Grindr was the first gay geosocial app in the iTunes App Store and has since become the largest and most popular gay mobile community online. It is geared toward gay, bi, trans, and queer people who seek not only affection but also recognition and equality.

Now, a decade after the dating app first launched, it has revolutionized the gay and bisexual community. It has become an avenue for several activists who fought for equality and spread information about LGBT-related issues in different parts of the world.

Turning a decade old is a milestone for the app. Although based in the United States, Grindr quickly gained popularity in nearly 192 countries through word of mouth and various media outlets. Now, as it plans to expand its reach to a wider audience, researchers are taking an in-depth look at how the dating app affects the lives of its many love-seeking users.

According to 2018 research from Time Well Spent, Grindr topped the list of apps that left users feeling unhappy. The study drew on 200,000 iPhone users and measured how much people enjoy using their favorite apps.

The primary objective of the study was to find out which applications left users feeling happiest or unhappiest. The results showed that health and wellness apps ranked first among apps associated with happiness and positive feelings. Grindr, meanwhile, despite becoming a phenomenon in the world of online dating, topped the unhappiness list, with 77 percent of its users saying they felt miserable after using the app.

Despite its advantages, the app has faced many controversies since its launch, which underlie much of the negative feedback gathered from its users. As the app celebrates its anniversary, it is also reasonable to examine its impact on people and how these controversies have shaped users' views and opinions of the app.

Users say the top reason for feeling miserable when using the app is that their personal data is being compromised.

In past years, Grindr has been dogged by issues around pinpointing users' locations. In 2014, it was reported that the app's relative distance measurements could allow people to locate individual users, compromising their privacy. Grindr, by design, uses the mobile device's geolocation to determine users' positions, which enables the service to pair people and create matches.

In 2016, more than two million such detections were reportedly performed to prove the claim. The issue escalated after the LGBTQ community publicly protested that Egyptian police were using Grindr to hunt and torture gay people. The app, in the limelight for what was then its worst controversy, disabled the distance display.

However, Grindr's 'show distance' feature was restored in 2018, again drawing dismay and disappointment from the LGBTQ community. According to Frederick Brian Jay, author of several books on gay hook-up technologies, an attack model called 'colluding trilateration' (trilateration being a technique used in surveying and mapmaking) makes locating any targeted user an easy task without employing any hacking methods at all. In Russia, Grindr has become a source of violence as well as harassment for the gay community, as authorities were provided information on users' locations and whereabouts.
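To see why exposing distances is so risky, consider plain trilateration, the surveying technique the attack builds on: knowing a target's distance from three known points pins down the target's position exactly, with no hacking required. A simplified 2-D sketch (flat-plane geometry; the function and coordinates are illustrative, not from any published attack code):

```python
def trilaterate(p1, d1, p2, d2, p3, d3):
    """Recover an unknown point (x, y) from three anchor points and the
    reported distances to each. Subtracting the circle equations pairwise
    turns the problem into two linear equations."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1  # zero only if the three anchors are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

An attacker who can spoof their own coordinates simply queries the app from three fake positions and feeds the three reported distances into a solver like this one.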

Today, authorities in countries including the United Arab Emirates, Indonesia, Ukraine, Russia, and Egypt reportedly use the app to track and arrest gay men, a grave violation of individuals' data privacy. The strange thing is that Grindr knew about its security flaws but didn't seem to care at all.

Grindr blocked the app in some countries only after management learned of their tracking activities, while other nations continued the same inhumane practice. Blocking the app in these countries is only a band-aid for a bigger problem; it covers the wound temporarily, but it will never resolve the issue at hand.

Grindr has been contacted by various media outlets but has declined to be interviewed on issues such as security and privacy. The trouble is that exposing someone's precise location is alarming. Based on a user's current location, Grindr reveals the whereabouts of other users in the area with a high level of precision. The app's executive team should rightly worry about this major security flaw, because the personal and privacy implications are terrifying: physical harm follows naturally when people have a real-time map of where gay men are.

What makes Grindr's flaws especially bad is its inaction on the matter. Rather than addressing a crucial security flaw to shield the LGBTQ community from risk and possible harm, Grindr continues to rely mainly on band-aids rather than long-term solutions. As it celebrates its anniversary, we hope the dating app will also acknowledge the importance of privacy and security measures.

I've been contributing news since 2010, both online and in print. Aside from Z6Mag, I manage independent news blogs that raise awareness of a diverse range of topics for every reader.


Apps

Google publicly reveals acquiring homework helper app, Socratic


Google has publicly disclosed that it acquired the mobile learning app Socratic this week. However, Google has yet to reveal any details about the acquisition.

Socratic is a mobile learning app that aims to help high school and university students with their schoolwork when they are outside the classroom. It supports them through its available resources and points them toward the concepts likely to lead to the correct answer.

Socratic was founded by Shreyans Bhansali and Chris Pedregal in 2013. The app actually started as a web product. The founders were prompted to create Socratic because they could not find a platform where people — teachers and students alike — could come together as a community and collaborate.

This also shaped the site's goal of creating a space that made learning accessible to all students. Socratic also aimed to become a central space where teachers could share information.

On the Socratic website, students were able to post questions. These questions were often very detailed, allowing users to help each other by providing answers. The site became a homework helper that connected teachers, mentors, and students.

In 2015, two years after launch, Socratic had about 500,000 users in its community. The same year, the education-tech company raised $6 million in funding from Omidyar Network, Spark Capital, and Shasta Ventures.

Three years after its introduction, Socratic launched its app version. When the app first came out, it had a Quora-like Q&A platform, but it soon evolved to focus less on user contributions and more on utility.

On the Socratic app, a user can take a photo of a homework problem and upload it to the platform. The user then not only gets the answer but is also shown the steps for getting it right.

Users could post math problems and be taught the steps needed to reach the right answer. This feature was similar to other math-solving apps available around that time.

What set Socratic apart from those apps was that it did not focus on math alone. It also covered other subjects such as literature, science, history, and much more.

Before its acquisition by Google last year, Socratic removed the app's social feature. In June of the same year, it also closed the user-contribution feature on its website and announced that it would focus entirely on its app.

In its quest to make Socratic a tool that can easily help students, Google has worked on improving the app and its features. With the acquisition, Google has revamped and relaunched the app. Socratic is also seen as a way to support Google Assistant technology across different platforms.

Just like the old Socratic app, the latest version still covers a wide variety of subjects. Users can turn to the app for help with many different topics.

There are over 1,000 subject guides in the app covering high school and higher-education topics. The study guides let users study or review a specific topic through key highlighted points.

When using the app, it takes a user only two taps to reach the subject matter they need help with. If the user wants to learn more, the platform links to related resources on the web.

The new Socratic app is now also powered by AI. Apart from posting photos of problems, users can now rely on the app's text and speech recognition.

Google's AI uses dedicated algorithms that can address complex problems. The AI component can also break concepts down, making them shorter and easier for users to grasp.

Socratic by Google was relaunched on iOS on August 15. Android users might have to wait a bit longer: the revamped Android version of Socratic will be available for download from the Google Play store this fall.


Apps

Twitter to hide unwanted messages with a new filter

Twitter will also hide messages it thinks are offensive so users can decide whether they want to read them.

Photo by Con Karampelas on Unsplash

Twitter announced that it would be testing a new feature designed to protect people from receiving abusive and harassing messages through the microblogging platform's Direct Messages feature.

Currently, messaging on Twitter is open to everyone – anyone can send a message to anyone without following them. Messages between accounts that follow each other go directly to the Direct Messages panel, while those from users an account doesn't follow (or isn't followed by) go to the "Message Requests" panel inside the Direct Messages page.

Twitter's current messaging system is designed to offer more ways to connect people; however, it also invites all kinds of messages, including abuse. The straightforward fix is for users to disable Direct Messages from people outside their follow network; however, that doesn't work for people – journalists, doctors, or businesses – who need their inboxes open in case legitimate messages come their way.

This is the reason why Twitter is testing a new filter that would move unwanted, abusive, and harassing messages to a different tab in the Direct Messages panel.

“Unwanted messages aren’t fun. So we’re testing a filter in your DM requests to keep those out of sight, out of mind,” reads a post from Twitter’s support team’s account.

Now, instead of lumping messages into one view, Twitter will filter unwanted and spam messages so that users do not automatically see them. The Message Requests section will hold messages from people you don't follow, and below it you'll find a way to access the newly filtered messages.

The new feature will require users to voluntarily click a "Show" button to see and read filtered messages, which, according to Twitter, could include unwanted, harassing, and bullying messages.

And even after showing filtered messages, users won't automatically be able to read them all, as Twitter will also hide messages it thinks could be abusive or harassing. Instead of a preview of the message, the user sees a warning that Twitter has hidden the specific message because it may be harassing or abusive. The user can then decide whether to open the message or trash it using the delete button on the right side of each message.
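Putting the pieces of that description together, the routing amounts to three buckets. A hypothetical sketch (none of these names come from Twitter's actual code; the abuse check stands in for whatever classifier Twitter uses):

```python
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    following: set = field(default_factory=set)  # names of accounts this user follows

def route_message(sender: User, recipient: User, looks_abusive: bool) -> str:
    """Route a DM into one of the three views the article describes."""
    if sender.name in recipient.following:
        return "inbox"      # people you follow land in the main DM panel
    if looks_abusive:
        return "filtered"   # hidden behind 'Show', with a warning instead of a preview
    return "requests"       # other strangers go to the Message Requests tab
```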

The change Twitter is "testing" has the potential to make Direct Messages a usable tool for people who need their inboxes open, and a move that could help stop the proliferation of online abuse and hate speech.

Similarly, Facebook has long had an option to filter messages it deems offensive. The process is much the same: messages from people you are not friends with are clustered in the Filtered section of Messenger, and those that appear offensive sit just below them.

And Twitter has been very clear that this feature is still in the "testing" phase, which calls into question Twitter's slow pace in fighting abuse on its platform. Facebook Messenger has filtered messages this way since late 2017, and Twitter still hasn't launched its version; it's just "testing" it.

Hope is not totally lost for Twitter, as the social media giant has also been testing ways of hiding problematic messages on its platform. Earlier this year, Twitter began rolling out a "Hide Replies" feature in Canada that lets users keep certain replies out of everyone's view. The hidden replies are not deleted but are tucked behind the other replies, requiring people to click to read them.

The new message filtering system is just one of the changes Twitter is testing right now to improve the environment on its platform. Aside from the Hide Replies function, Twitter is also developing ways for users to follow a specific topic, which the company announced earlier this week in a press conference. Additionally, Twitter will launch a search tool for the Direct Message inbox so users can easily find messages from specific users or topics, as well as support for iOS Live Photos as GIFs, the ability to reorder photos, and more.


Cybersecurity

Australia ruled that employees can refuse to provide biometric data to employers

It’s a right to say no.


An Australian court has ruled that employees are allowed to refuse to provide biometric data to their employers. The ruling follows a lawsuit filed by Jeremy Lee after he was fired from his previous job for refusing to provide fingerprint samples for the company's newly installed fingerprint login system.

Jeremy Lee of Queensland, Australia, won a landmark case after he was fired from his job at Superior Wood Pty Ltd, a lumber manufacturer, in February 2018 for refusing to provide his fingerprints to sign in and out of work, arguing that he was unfairly dismissed by the company.

“Mr. Lee objected to the use of the scanners and refused to use them in the course of his employment, as he was concerned about the collection and storage of his personal information by the scanners and Superior Wood,” reads the suit.

“On February 12, 2018, Mr. Lee was issued with a letter of termination dismissing him from his employment on the grounds that he had failed to adhere to Superior Wood’s Site Attendance Policy,” it added.

Lee filed a suit with Australia's Fair Work Commission in March 2018, saying that he owns the rights to the biometric data contained in his fingerprints and has the right, under the country's privacy laws, to refuse to provide it to his employer.

“Mr. Lee was employed by Superior Wood as a regular and systematic casual employee. It is not contested, and I so determine that he had a reasonable expectation of continuing employment with Superior Wood on a regular and systematic basis. Mr. Lee’s annual earnings were less than the high-income threshold amount. Mr. Lee is protected from unfair dismissal under s.382 of the Act,” the case file reads.

However, in the first assessment of the case, the commissioner who examined the complaint denied Lee's suit and sided with Superior Wood.

“I’m not comfortable providing my fingerprints to the scanner so I won’t be doing it at this stage,” said Lee in a testimony.

“I am unwilling to consent to have my fingerprints scanned because I regard my biometric data as personal and private.

If I were to submit to a fingerprint scan time clock, I would be allowing unknown individuals and groups to access my biometric data, the potential trading/acquisition of my biometric data by unknown individuals and groups, indefinitely,” reads Lee’s affidavit.

The rejection did not stop Lee from pursuing his rights: he represented himself in an appeal to the commission in November 2018. The appeal directly challenged the country's privacy laws and opened a discussion on biometric data.

Good news came on May 1, 2019, when the commission ruled in favor of Lee's petition, affirming that he had the right to refuse to provide the company with his biometric data and that his dismissal from his position was unjust.

“We accept Mr. Lee’s submission that once biometric information is digitized, it may be very difficult to contain its use by third parties, including for commercial purposes,” case documents state.

Lee's case is a first in Australia. While it did not change the law, it opens a new perspective on the ownership of biometric information such as fingerprints and facial-recognition data, and it reinterprets how privacy laws apply to data of this kind.

The news of Lee's case and the Australian ruling comes after Biostar, a popular biometrics service company, suffered a massive data leak that exposed data from enterprises, banks and other financial institutions, and even the UK's Metropolitan Police.

The researchers, who disclosed the data leak on Wednesday, said that “huge parts of Biostar 2’s database are unprotected and mostly unencrypted.”

More than 27.8 million records comprising over 23GB of data were leaked through the Biostar 2 database. The data belonged to the security and biometrics company's clients and included one million fingerprint records, images of users with linked facial-recognition data, records of entry to secure areas, confidential employee information, user security levels and clearances, and personal employee data such as emails, home addresses, and mobile device records.

The data breach highlights the sensitivity of biometric data and how massive the implications are when this kind of information is leaked.

“The fact that this biometric data was stored plainly and not in hashed form raises some serious concerns and is unacceptable. Biometrics deserve greater privacy protections than traditional credentials, they’re part of you, and there’s no resetting a fingerprint or face. Once fingerprint and facial recognition data are leaked or stolen, the victim can never undo this breach of privacy. The property that makes biometrics so effective as a means of identification is also its greatest weakness,” said Kelvin Murray, senior threat research analyst for Webroot in an email to Z6Mag.
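The plaintext-versus-hashed contrast Murray draws can be illustrated with ordinary credentials. A minimal sketch using Python's standard library, showing salted one-way storage (with the caveat that raw biometric scans vary between readings, so real biometric template protection is more involved than this password-style example):

```python
import hashlib
import hmac
import os

def store_secret(secret: bytes):
    """Keep a salted, deliberately slow one-way hash instead of the raw secret."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret, salt, 200_000)
    return salt, digest

def verify_secret(secret: bytes, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the candidate secret and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", secret, salt, 200_000)
    return hmac.compare_digest(candidate, digest)
```

A database breach then exposes only salts and digests, not the credentials themselves — whereas Biostar 2's records were reportedly readable as-is.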
