
Technology

In-App Appeal Resolves Suspension Faster, Twitter Claims



Twitter users who get suspended for violating its rules and conduct policies can now appeal directly inside the app, a new feature that the company says will resolve appeals much faster than the long wait times of the previous process.

The social media giant unveiled the feature on Tuesday in a tweet that showed a recently suspended user going through the steps of filing an appeal. The goal is to maintain a steady relationship between users and the company while still enforcing Twitter's rules. The feature also aims to treat users more fairly: rather than simply curtailing their right to express themselves, it gives them a secure avenue for recourse that still does not tolerate wrongful behavior.

Under the previous process, after a user tweeted something that was reported or crossed what the company deems acceptable, Twitter moderators decided whether the account deserved suspension. Users who believed Twitter's action was unwarranted had to resort to an online form, and response times varied from a few hours to more than a week depending on the offense.

Twitter claims that its new in-app appeal feature will cut response times by 60 percent. If Twitter decides you broke its rules, you will receive a notification with the content in question, the rule it violated, and a link to its guidelines. You then have a choice: remove the tweet entirely or file an appeal. If you choose to appeal, a write-in box appears that lets you add any context the moderators may have missed, giving you the chance to explain your point or defend your position.
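The flow described above boils down to a simple decision between removal and appeal. The following is a hypothetical sketch; the class and function names are invented for illustration and are not Twitter's actual API.

```python
from dataclasses import dataclass

# Hypothetical model of the notification a suspended user receives:
# the content in question, the rule it violated, and a guidelines link.
@dataclass
class Violation:
    tweet: str
    rule_violated: str
    guidelines_url: str

def respond(violation: Violation, appeal: bool, context: str = "") -> str:
    """Model the user's two options: remove the tweet or file an appeal."""
    if appeal:
        # The write-in box lets the user add context moderators may have missed.
        return f"Appeal filed for rule '{violation.rule_violated}': {context}"
    return "Tweet removed; account restored after review."

v = Violation("example tweet", "hateful conduct", "https://twitter.com/rules")
print(respond(v, appeal=True, context="This was satire."))
```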

The in-app appeal process is part of a larger effort by Twitter to be more transparent about how it handles harmful behavior. Over the past several years, the company has faced persistent criticism over the rampant abuse on its platform, and it has admitted struggling to rein in abusive users who spread violence, bullying, and threats, among other harms.

Although Twitter is finally getting more active in enforcing its terms and policies, some are questioning the appeal feature. Critics argue that the appeal process is another way for Twitter to control its users, especially on sensitive topics like politics, gender issues, and terrorism. They recall that Twitter was once a reliable, robust platform where people freely discussed hot-button issues and gave voice to problems that are often overlooked, and they lament that those days appear to be gone.

Others challenged how moderators detect tweets they deem wrongful or against the rules. The company's terms and policies are broad enough that they ought to be simplified first, so that an average person understands where the line falls between what is acceptable to post and what is not. The problem is that if a person's tweet suddenly gets flagged by Twitter without that person even understanding the terms, he or she may skip the appeal and simply remove the content in question so the account recovers. That in itself is a problem, because one needs to understand the cause of a violation before jumping to the solution.

Twitter is also exploring other tactics, including changing its algorithm to rank the health of conversations and purging the accounts of white nationalists and other hate groups. As Twitter ramps up its enforcement efforts, more benign behavior will inevitably get swept up in the process. The new feature should help innocent users return online quickly, but they still need to understand why enforcement actions were taken.

Twitter's in-app feature is a preventive way to deal with the violence and obscenity that pervade its platform. But if Twitter wants a platform that is both free-speech-friendly and violence-free, it needs to simplify its terms and policies first so everyone can fully understand the limits on their tweets. The appeal process should clarify things, not complicate them, and Twitter should cultivate a culture of users who question and understand the rules rather than tolerate ignorance.

Photo: marek.sotak

I've been contributing news since 2010, both online and print. Aside from Z6Mag, I manage independent news blogs that provide awareness on a diverse list of topics to every reader.

Technology

Instagram Rolls Out Trial Update In Australia

Instagram removes the feature that allows users to see the accumulated likes of every photo.


Photo by Prateek Katyal on Unsplash

As a way to overhaul Instagram in Australia, the popular photo-sharing platform plans to stop showing the total number of likes your photo has accumulated — relatively bad news for most social media influencers or what we call “Instagram models.”

Instagram on Thursday rolled out a trial update that removes the total number of likes on photos and the view counts on videos from user feeds, profiles, and permalink pages. Users can still see the total number of likes on their own photos.

According to a report by Channel 7, the update is mandatory: every device in Australia that has the app will receive it whether users like it or not.

The trial expands a similar change, initially introduced in Canada last May, to Australia, following rollouts in countries such as New Zealand, Japan, Ireland, Italy, and Brazil.

The Guardian noted that Australia was among the first countries chosen for the trial because of its fast-growing, highly engaged community of Instagram users and tech enthusiasts.

The change follows research accusing the platform of becoming a hostile environment that threatens teenagers' mental health.

“The idea is to just really let you focus on the content and the experience of engaging without being worried or feeling pressured over how many likes a post has received,” says Instagram Australia’s Director of Policy.

In today's generation, people seek validation from the Internet. On Instagram, the more red hearts a photo gets, the more people approve of that image of you. At the same time, the photo-based platform has been appreciated for providing a space for self-expression and self-identity.

However, it also cannot be denied that the platform has helped shape unrealistic goals based on two-dimensional, curated content. Recently, it has been associated with high levels of anxiety, depression, bullying, and FOMO, or the "fear of missing out."

A 2017 UK study found that out of five major social networks, Instagram was the most harmful to young people’s mental health. Snapchat followed, with Facebook going third, Twitter fourth, and YouTube fifth.

Last year, the Pew Research Center found that 37 percent of teens felt "pressure" to post content that will get a lot of likes and comments; and this year, research from the American Psychological Association linked digital media use to mental health issues and rising suicide rates among young Americans.

To work around these pressing issues, Instagram has decided to stop showing the number of likes on other people's profiles.

“We want to make sure that people are not feeling like they should like a particular post because it’s getting a lot of likes and that they shouldn’t feel like they [are] sharing solely to get likes,” said Mia Garlick, Facebook's director of policy for Australia and New Zealand.

The company hopes that this change will foster a new environment, where users will get to share content, photos, videos, and “the things you love” without the fear of judgment or the pressure of “accumulating likes.”

“We are now rolling the test out to Australia so we can learn more about how this can benefit people’s experiences on Instagram, and whether this change can help people focus less on likes and more on telling their story,” Garlick adds.

In line with its efforts to mitigate the platform's effects on users' mental health, Instagram has also taken steps to address bullying.

Earlier in July, Instagram released an AI-powered feature that warns users when a comment may be offensive. When a user types out “You are so ugly and stupid,” for example, they will see a notification that asks, “Are you sure you want to post this?”
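As a rough illustration of that nudge flow, here is a minimal keyword-based sketch. Instagram's real feature uses machine-learned models; the word list, function name, and logic below are invented purely for illustration.

```python
from typing import Optional

# Invented word list standing in for a trained offensive-comment classifier.
OFFENSIVE_WORDS = {"ugly", "stupid", "loser"}

def nudge_if_offensive(comment: str) -> Optional[str]:
    """Return the warning prompt if the comment looks offensive, else None."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    if words & OFFENSIVE_WORDS:
        return "Are you sure you want to post this?"
    return None  # comment posts normally, no nudge shown

print(nudge_if_offensive("You are so ugly and stupid"))
# → Are you sure you want to post this?
```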

Instagram hides likes on posts in feeds of Australian users
Source: Channel 7

On the other hand, it remains uncertain how significantly this change will affect Instagram influencers, who primarily earn their income based on the number of likes a post gets, their follower counts, and overall engagement.

Instagram says that the change won’t affect measurement tools for businesses and creators on Instagram, such as likes and engagement metrics. Influencers can still see those numbers and share them, via self-reporting, with brands looking to work with them.

However, in a society where the number of likes attracts new followers and more engagement, it will be interesting to see how the business model changes once those features are removed.


Technology

Another China-Based Server Discovered Open Containing 1TB Of Personal Data

The data appears to come from more than 100 loan-related apps and exposes a trove of sensitive information that researchers attribute to the problematic security practices of China's fintech industry.


Photo: Petter Lagson | Unsplash

Nearly one terabyte of data, including people's SMS and call logs, has been left open on a China-based server for everyone to see.

The unprotected database was discovered by Safety Detectives’ research team led by Anurag Sen, which contained at least 889 gigabytes of data and was growing every day until it was closed. The researchers were not able to determine who owns the database, but they were able to confirm that it originated from China.

According to the researchers, the information in the exposed database came from more than 100 different loan-related applications. They described the database as a “treasure trove of data” containing sensitive information on millions of Chinese citizens.

The most crucial pieces of information that led the researchers to conclude the database includes data from loan-related applications were several credit evaluation reports, which contain loan records and details, risk management data, and real ID numbers, as well as personal information like names, phone numbers, and addresses.

Across 4.6 million unique entries, the researchers also found other data such as:

  • GPS location
  • A detailed list of contacts
  • SMS logs
  • IMSI numbers
  • IMEI numbers
  • Device model/version
  • Stored app data
  • Memory data
  • Operator reports
  • Transaction details
  • Mobile billing invoices
  • Full names
  • Phone numbers
  • Bill amount per month
  • Call log
  • Credit and debit card details
  • Concentrated list of apps on each mobile device
  • Detailed tracking of app behavior
  • Device information
  • Device location
  • Launch & exit times
  • Duration on the content, etc.
  • Passwords hashed with MD5, which can be cracked
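The last item deserves emphasis: MD5 is a fast, unsalted hash function, not encryption, so leaked hashes of common passwords can be recovered by simple lookup. Here is a minimal sketch of that attack; the hash value and password list are made up for the example.

```python
import hashlib

# Pretend this hash came from the leaked database (value invented here).
leaked_hash = hashlib.md5(b"123456").hexdigest()

# An attacker precomputes MD5 digests of common passwords...
common_passwords = ["password", "123456", "qwerty", "letmein"]
lookup = {hashlib.md5(p.encode()).hexdigest(): p for p in common_passwords}

# ...and recovers the plaintext with a single dictionary lookup.
recovered = lookup.get(leaked_hash)
print(recovered)  # → 123456
```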

Furthermore, the amount and type of data discovered by the team inside the previously exposed database led them to conclude that citizens are being tracked in detail.

“Things including a user’s IP address and duration of a given activity, call logs, SMS exchanges (including content of the SMS), and the various apps installed on the devices are all within the scope of data made available by this leak,” reads the report of Safety Detectives penned by Jim Wilson.

The researchers raised many concerns regarding the database they have uncovered. According to them, the database could be used by marketers to “hyper-target” their customers and “fine-tune” their messages to them. Worse, the data could also be used by threat actors to carry out fraud, and “it could also be easily used in either ‘friendly’ government spying or not-so-friendly espionage.”

There is enough data for anyone to completely take over someone's identity without considerable effort. “If this data were to be sold on the Dark Web, it could easily be packaged into a ‘deal’ where an individual’s financial, medical, and personal life are up for grabs,” the researchers warn.

This is not the first time that a database originating from China was discovered to include sensitive personal and financial information of Chinese citizens. Earlier this year, Victor Gevers, a security researcher from GDI.Foundation, found a similar database that contains sensitive information of Chinese citizens that appeared to be coming from servers of the popular payment platform, Alipay.

The database includes transaction details of Alipay users, and Gevers claimed that it is being sold to third parties for a price.

Alipay denied the accusation and offered an alternative explanation for the data that was discovered. The company said it does not sell its users' transaction details; instead, the details could have been willingly uploaded by users through a loan app.

According to the investigation conducted by the company, some Alipay customers submitted their Alipay account names and passwords to a particular online lending platform. Such information was obtained by crawler companies that work with these online lending companies and was then stolen by hackers.

Back in March, Gevers said that the continuous discovery of databases, like what he discovered earlier this year and the one discovered by Safety Detectives recently, highlights the massive problem with China’s fintech industry.

He noted that most financial data leaks happen because sources trust third parties with their data. Most of the time in Fintech, experts see third parties doing machine learning and analytics to generate insight. “Knowing what the Chinese people are spending their money on based on one of the biggest financial institutions has a very high market value in and outside China,” he said.


Technology

24 Million Images Used For Facial Recognition Were Secretly Scraped All Over The Internet

They were taken from people's social media accounts, websites, photo-sharing and online-dating platforms, captured by digital cameras in public places, and pulled from unencrypted communications.


Photo: MegaPixels.cc

If you are wondering what facial recognition systems compare your photo against, the answer is probably a picture of you that was secretly gathered by governments and tech companies to develop facial recognition AI.

A database of millions of images exists and is now being used by facial recognition systems around the world. The images were secretly extracted by the US and Europe from people's social media accounts, websites, photo-sharing and online-dating platforms, captured by digital cameras in public places, and pulled from unencrypted communications. These systems are deployed ubiquitously by police and state intelligence agencies without you knowing that they have a copy of your face on file.

In research published by Megapixels.cc, a research project focused on facial recognition, the researchers identified 24 million non-cooperative, non-consensual photos across 30 publicly available face recognition and face analysis datasets.

Out of these 24 million images, 15 million face images are from Internet search engines, over 5.8 million from Flickr.com, over 2.5 million from the Internet Movie Database (IMDb.com), and nearly 500,000 from CCTV footage.

“All 24 million images were collected without any explicit consent, a type of face image that researchers call ‘in the wild.’ Every image contains at least one face, and many photos contain multiple faces,” reads the study.

The researchers estimated that roughly one million individuals appear across the millions of images found in the datasets. They also found that the majority of the images originated from the USA and China.

Embassy photo found in the dataset. Photo: MegaPixels.cc

However, they noted that among the research papers they analyzed, only 25% of the dataset use originated from the USA, while most of the images were accessed from Chinese IP addresses. They also acknowledged a limitation: the study covered only research papers written in English, which implies that foreign use could be larger than the numbers they found.

The images in the datasets are not only those found in online databases and social media platforms; a considerable number were taken from government databases.

Related: Celebrity Photos, Composite Sketches, And Other Things The Police Feed The Facial Recognition System To Find A Match

For example, out of the 24 million images they have analyzed, at least 8,428 embassy images from at least 42 countries (with most originating from China and US embassies, as earlier mentioned) were found in face recognition and facial analysis datasets. Over 6,000 of the images were from US, British, Italian, and French embassies (mostly US embassies).

“These images were found by cross-referencing Flickr IDs and URLs between datasets to locate 5,667 images in the MegaFace dataset, 389 images in the IBM Diversity in Faces datasets, and 2,372 images in the Who Goes There dataset,” they added.
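The cross-referencing step the researchers describe amounts to intersecting identifier sets between datasets. A minimal sketch of the idea, using invented Flickr IDs:

```python
# IDs below are invented; the real datasets key images by Flickr IDs/URLs.
megaface_ids = {"flickr_101", "flickr_202", "flickr_303"}
who_goes_there_ids = {"flickr_202", "flickr_303", "flickr_404"}

# Images whose Flickr ID appears in both datasets came from the same upload,
# which is how the same embassy photos were located across datasets.
overlap = megaface_ids & who_goes_there_ids
print(sorted(overlap))  # → ['flickr_202', 'flickr_303']
```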

As part of their findings, the researchers said that these images were used for commercial research by Google (US), Microsoft (US), SenseTime (China), Tencent (China), Mitsubishi (Japan), ExpertSystems (Italy), Siren Solution (Ireland), and Paradigma Digital (Spain); and military research by National University of Defense Technology (China).

The facial recognition phenomenon

Facial recognition technology has been the center of public conversation, as well as legislative and regulatory dialogue, in the past few years. The conversation focuses on how law enforcement agencies, government offices, and private businesses use the unregulated technology.

Law enforcement has been very defensive about its use of facial recognition technology in operations, arguing that the technology helps protect citizens from unlawful elements.

Relevant: Ethical Regulation Of ‘Facial Recognition’ Is A Shared Responsibility

However, the other side asserts that facial recognition technology violates people's privacy. Human rights and privacy advocates believe the premise behind facial recognition systems is problematic in itself: law enforcement, big-brother governments, and even businesses with access to the technology can easily track people's movements without their consent.

They fear that facial recognition may grow to be a social enemy instead of a friend, as the regulations governing its use are not enough to protect people's security and privacy.

“Unless we really rein in this technology, there’s a risk that what we enjoy every day — the ability to walk around anonymous, without fearing that you’re being tracked and identified — could be a thing of the past,” said Neema Singh Guliani, the American Civil Liberties Union’s senior legislative counsel.

