Technology

Google AI Team Uses ‘Mannequin Challenge’ Videos To Improve AI Depth Perception

The Google AI team selected 2,000 Mannequin Challenge videos out of the thousands uploaded to the internet.

Photo: Pricilla Du Prez | Unsplash.com

Google AI researchers behind the paper “Learning the Depths of Moving People by Watching Frozen People” received a Best Paper Honorable Mention Award last week at the 2019 Conference on Computer Vision and Pattern Recognition (CVPR), held in California.

The paper used a dataset of uploaded Mannequin Challenge videos to teach robots to perceive 3D space within 2D images. The team was composed of seven researchers: Zhengqi Li, Tali Dekel, Forrester Cole, Richard Tucker, Ce Liu, Bill Freeman, and Noah Snavely.

Robots, unlike humans, cannot readily perceive 3D space when presented with a 2D picture or video. This creates navigation problems for machine-learning systems; the most notable example is a self-driving car that cannot recognize sudden changes on the road, such as crossing pedestrians and cyclists.

Enter the Mannequin Challenge, the social media videos of people striking a pose, seemingly “frozen,” while a camera operator pans through the scene. It turns out those videos are ideal for training AI to perceive 3D space in a 2D video format.

The team combed through thousands of Mannequin Challenge videos on YouTube and picked 2,000 to include in the dataset. The videos were then checked to weed out those that would yield invalid data, such as clips in which someone “unfroze” or clips shot with specialized lenses or filters.

After selecting the valid videos, the team used them to train a neural network to predict the depth of moving objects. Based on a series of tests, they concluded that their method yielded more accurate depth predictions than previous state-of-the-art methods.

Google AI Research’s Methods 

Computer vision is an interdisciplinary field that aims to enable computers and machines to understand digital images and videos. One of its sub-fields is scene reconstruction, in which computers recover a scene’s geometry from 2D image data.

According to Dekel and Cole, the team tackled the computer vision problem with a deep learning-based approach: the more video data fed into the neural network, the better it learns to perceive depth.

To build the training data, the team used Structure from Motion (SfM) and Multi-View Stereo (MVS) techniques to compute depth. These depths were recorded and labeled as “ground truth” for the neural network to learn from.
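
In broad strokes, and as a simplified illustration rather than the team’s actual pipeline, turning SfM/MVS output into “ground truth” depth comes down to triangulating matched pixels once the camera poses are known: a 3D point’s distance along the reference camera’s viewing axis becomes the depth label. A minimal sketch with OpenCV, using made-up camera parameters and pixel correspondences:

```python
import numpy as np
import cv2

# Hypothetical intrinsics and poses recovered by SfM for two frames of a
# "frozen" Mannequin Challenge clip (values are illustrative only).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                   # reference camera
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])   # camera shifted ~10 cm

# Matched pixel coordinates of the same scene points in both frames
# (in practice these come from feature matching across the video).
pts1 = np.array([[320.0, 240.0], [100.0, 50.0]], dtype=np.float64).T  # shape (2, N)
pts2 = np.array([[310.0, 240.0], [ 95.0, 50.0]], dtype=np.float64).T

# Triangulate to homogeneous 3D points, then read off depth (Z in the
# reference camera's frame). These depths act as supervision labels.
pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)
pts3d = (pts4d[:3] / pts4d[3]).T
ground_truth_depth = pts3d[:, 2]
print(ground_truth_depth)   # roughly [5.0, 10.0] meters for this toy setup
```

In the MVS setting this is done densely, for essentially every pixel of the static scene, rather than for a handful of matched points.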

MVS predicts depth for the non-moving elements of a frame, for example the “frozen” people. In real footage, however, the people in a frame move as well, so the SfM step helps improve the accuracy of the predicted depth. Before running SfM, the team computes the video’s optical flow to isolate frames.

Aside from SfM, the team also computed motion parallax and 2D optical flow between an input frame and another frame in the video, allowing the computer to estimate depth distances.
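
As a loose illustration of that idea, not the paper’s exact formulation, dense 2D optical flow between two frames can be computed with OpenCV, and for the static parts of the scene the flow magnitude acts as a parallax cue: for a sideways camera translation, points that barely move between frames are far away. A rough sketch, assuming two grayscale frames and a known focal length and camera baseline:

```python
import numpy as np
import cv2

def parallax_depth(frame1_gray, frame2_gray, focal_px, baseline_m):
    """Crude depth-from-parallax: assumes a purely sideways camera translation
    and a static scene, so flow magnitude behaves like stereo disparity."""
    # Farneback dense optical flow; the numeric arguments are the usual
    # pyramid scale, levels, window size, iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(frame1_gray, frame2_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    disparity = np.linalg.norm(flow, axis=2)     # pixels of apparent motion
    disparity = np.maximum(disparity, 1e-3)      # avoid division by zero
    return focal_px * baseline_m / disparity     # depth in meters

# Toy example (real use: two consecutive frames of the video).
f1 = np.random.randint(0, 255, (240, 320), dtype=np.uint8)
f2 = np.roll(f1, 4, axis=1)                      # fake 4-pixel sideways parallax
depth_map = parallax_depth(f1, f2, focal_px=500.0, baseline_m=0.1)
```

Regions whose flow cannot be explained by camera motion alone, such as people who “unfroze,” are exactly the cases the trained network has to learn to handle.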

After training the neural network on videos of stationary people, the team then chose real-world videos of complex human actions captured by a moving hand-held camera. Drawing on the computations it had learned from the previous videos, the network adjusted and began predicting the depth of the images even when the people in them were moving freely.

The team compared their model’s predicted depth maps with those of other depth-prediction models, including the Deep Ordinal Regression Network for Monocular Depth Estimation (DORN), Chen et al.’s “Single-Image Depth Perception in the Wild,” and the Depth and Motion Network for Learning Monocular Stereo (DeMoN). They found that their model was the most accurate.
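
Such comparisons are typically run on held-out frames by measuring the error between predicted and ground-truth depth maps. As one common example, not necessarily the metric reported in the paper, here is a scale-invariant RMSE in log-depth space, sketched in NumPy:

```python
import numpy as np

def scale_invariant_rmse(pred_depth, gt_depth, eps=1e-6):
    """Scale-invariant RMSE in log space: ignores the global scale ambiguity
    that monocular depth predictions inherently carry. Equals the standard
    deviation of the per-pixel log-depth error."""
    valid = gt_depth > eps                                  # skip pixels without ground truth
    d = np.log(pred_depth[valid] + eps) - np.log(gt_depth[valid] + eps)
    return float(np.sqrt(np.var(d)))

# A prediction that is off by a constant factor scores ~0, as intended.
gt = np.random.uniform(1.0, 10.0, size=(240, 320))
print(scale_invariant_rmse(2.0 * gt, gt))                   # ~0.0
print(scale_invariant_rmse(gt + np.random.rand(240, 320), gt))
```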

The Google AI team’s research helps further advance studies in robotics.

The Mannequin Challenge 

The uploaded videos of the Mannequin Challenge were critical elements of the team’s research. 

Following viral internet memes like Planking and the Ice Bucket Challenge, the Mannequin Challenge trended back in 2016.

The goal of the challenge is to imitate mannequins in various poses while music plays in the background. A single cameraman records the video as the challengers hold their poses for as long as possible. 

Challengers created inventive scenes, often in limited spaces, sometimes building each pose around a general theme. The challenge originated in high schools in the US, and the first video was uploaded to Twitter on October 26, 2016.

Other videos show people doing the challenge at parties, bars, and wide-open spaces like theaters and parks. Some feature varied, complex poses that were especially challenging for the machine-learning system.

Technology

Lenovo Patches Security Flaw Exposing 36TB Of Financial Data In The Wild

The compromised data include sensitive financial information like card numbers and financial records.

Photo: lenovophotolibrary | Flickr | CC BY-ND 2.0

Computer tech giant Lenovo has confirmed a recent breach that exposed more than 36TB of data belonging to users of certain network-attached storage devices, saying that a vulnerability in some of its products “could allow an unauthenticated user to access files on NAS shares via the API.”

Security researchers from Vertical Structure, who made the discovery, said that they found “about 13,000 spreadsheet files indexed, with 36 terabytes of data available. The number of files in the index from scanning totaled to 3,030,106.” Worse, the data include sensitive financial information such as card numbers and financial records.

According to a security notification from Lenovo, the breach affected both Iomega and LenovoEMC NAS products. Vertical Structure was able to track down the source: a legacy Iomega storage product acquired by EMC and co-branded Lenovo-EMC in a joint venture. The researchers added that it is “trivially easy” to exploit the application programming interface (API), allowing attackers to access the data stored on any of several Lenovo-EMC network-attached storage (NAS) devices.

Screenshot of discovered files. Photo: Vertical Structures
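
Lenovo has not published the exact API calls involved, but the general class of problem, an HTTP API on a NAS answering requests without any credentials, can be checked from the outside. A minimal, purely illustrative sketch (the address and endpoint path below are hypothetical, not the real LenovoEMC API) of the kind of check an administrator might run to confirm a patched device now rejects anonymous requests:

```python
import requests

# Hypothetical values: substitute your own device address and a known API path.
NAS_HOST = "https://192.168.1.50"
API_PATH = "/rest/shares"   # placeholder path, not the actual LenovoEMC endpoint

def rejects_anonymous_access(host: str, path: str) -> bool:
    """Return True if the device refuses an API request sent without credentials."""
    resp = requests.get(host + path, timeout=5, verify=False)  # no auth on purpose
    # A patched device should answer 401/403 (or redirect to a login page),
    # never 200 with share contents.
    return resp.status_code in (401, 403)

if __name__ == "__main__":
    print("unauthenticated access blocked:", rejects_anonymous_access(NAS_HOST, API_PATH))
```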

Discovery was verified by WhiteHat Security

Researchers from Vertical Structure said they enlisted WhiteHat Security, a security firm known for patching network-related vulnerabilities, to verify their discovery because “of its world-renowned reputation in helping secure applications, to work together to verify the vulnerability found.”

“Verifying vulnerabilities is a very important step in securing applications, networks, and devices. After all, on an average day, WhiteHat scanners discover hundreds upon hundreds of new potential vulnerabilities,” they added.

After the team notified Lenovo of the vulnerability, they said, the company responded swiftly and took measures to mitigate its impact.

When asked for comment, Simon Whittaker, cybersecurity director at Vertical Structure, said that “this is definitely a huge problem but one which we see every day.”

“Many organizations fear change and are cautious about retiring old devices. If they can’t replace devices, then they should be using threat modeling techniques to consider how better to protect them and ideally removing them from internet access completely,” he added. 

To keep users’ services running, Lenovo brought three of its retired software versions back to life while it patched the vulnerability. The company also pulled old software out of version control to investigate any other potential vulnerabilities, fix them, and release updates.

“High” severity problem

In the security advisory it released, Lenovo rated the vulnerability as “high” severity and advised users to “update to the firmware level (or later) described for your system in the Product Impact section.” If updating is not feasible, “partial protection can be achieved by removing any public shares and using the device only on trusted networks.”

In the advisory, Lenovo lists the products impacted by the flaw. They include:

  • px12-350r and ix12-300r, version 4.0.24.34808
  • HMND (Home Media Network Hard Drive) Cloud Edition, version 3.2.16.30221
  • StorCenter ix2-200, Cloud Edition, version 3.2.16.30221
  • StorCenter ix4-200d, Cloud Edition, version 3.2.16.30221
  • StorCenter ix2-200, version 2.1.50.30227
  • StorCenter ix4-200d, version 2.1.50.30227
  • StorCenter ix4-200rl, version 2.1.50.30227

In the advisory, Lenovo notes that “the information provided in this advisory is provided on an ‘as is’ basis without any warranty or guarantee of any kind” and advises users to “please remain current with updates and advisories from Lenovo regarding your equipment and software” for the most recent information about the problem.

Learning opportunity

As part of its report, Vertical Structure said there is a lot that other tech companies can learn from what happened at Lenovo. The firm characterized Lenovo’s handling of the problem as “professional” and hoped that other companies experiencing similar problems could learn from it.

“Not only did they have a clearly stated vulnerability disclosure policy on their site with contact information, but they responded quickly and worked with WhiteHat and Vertical Structure to understand the nature of the problem and quickly resolve it,” said Vertical Structure.

“In sharing this story, both WhiteHat and Vertical Structure hope companies are inspired to always keep cybersecurity top of mind to keep up with the constant barrage of new vulnerabilities and exposures,” they added.

Technology

This Free Service Detects And Blocks Suspicious Behaviors Of Android Apps

The service is still in beta, but the company promises to release improvements and expand its territorial reach.

Secure-D Index by Upstream

As a smartphone owner, you have a plethora of apps available for download via the Google Play Store or Apple App Store. However, not all of these apps are secure or safe to install on your devices. Some are fake apps posing as legitimate versions of other apps or, worse, carriers of malware that could put your device, or you, in harm’s way.

Amid the risk of threat actors and hackers invading phones and tablets, leading tech company Upstream has launched an online index that screens, catalogs, and blocks suspicious Android apps around the world. The company describes it this way:

“The information on the Secure-D Index, currently in beta, allows anyone to easily find what apps pose a threat to their privacy and pocket, in one place, for free. Data is openly available to the whole mobile industry, from app developers, ad networks and publishers, to media, advertisers and mobile network operators that all fall prey to mobile ad fraud.”

The Secure-D Index is still in beta and currently being tested. Nonetheless, the company promises the platform will help resolve the problem of malicious apps that serve as trojan horses for larger, more destructive attacks on people’s privacy.

Currently, the platform includes an aggregate list of suspected malicious Android apps, and the index grows every day as the platform continues to scan the internet and flag unwanted applications. For each app, the Secure-D Index features pertinent information such as the number of downloads, the market infection rate, and the markets where the app is active.

The data covers 17 regions, with plans to expand that reach shortly. Coverage includes countries such as the US, Russia, India, Germany, South Africa, and Egypt, reaching up to 1.3 billion mobile data subscribers.

The platform currently lists around 1,500 apps, with the malicious apps accounting for an estimated 13.5 billion downloads. It lets users check whether each app is still available on Google Play, has been removed from Google Play, or is distributed through third-party app stores.

Furthermore, each entry also includes data such as the developer’s website, whenever that information is available.
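
Upstream has not published a machine-readable schema for these entries, but the fields described above map naturally onto a simple record per app. A hypothetical sketch of how such an entry might be modeled, with illustrative field names that are not the Secure-D Index’s actual format:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SuspiciousAppEntry:
    """Illustrative record mirroring the fields the article describes."""
    app_name: str
    downloads: int                        # reported total downloads
    market_infection_rate: float          # share of devices affected in a market
    active_markets: List[str]             # markets where the app is active
    removed_from_google_play: bool        # still listed vs. removed
    third_party_distribution: bool        # also spread via third-party stores
    developer_website: Optional[str] = None   # included whenever available

entry = SuspiciousAppEntry(
    app_name="com.example.flashlight",    # made-up example, not a flagged app
    downloads=1_200_000,
    market_infection_rate=0.07,
    active_markets=["BR", "ZA", "EG"],
    removed_from_google_play=True,
    third_party_distribution=True,
)
```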

“Secure-D leads the fight against malware, an ever-growing threat for mobile security worldwide. We believe a crucial part of this fight is awareness, which mobile users and, surprisingly, a large part of the industry lacks,” said Dimitris Maniatis, Head of Secure-D at Upstream.

“At Upstream, we have been steadily and openly sharing Secure-D’s proprietary findings on suspicious and fraudulent apps in an effort to eliminate digital mobile fraud. The publication of these findings through our Secure-D Index highlights the level of awareness we aim to achieve and the transparency we believe is required to more effectively target the shady practices of threat actors that prey on a whole ecosystem.”

According to the company’s press release, the platform is free of charge and open to everyone in the regions where the Index is available. Users can view the previous day’s top 20 most active malware, and can register for free to access full data (global or country-specific), see historical data, or search for a specific app.

In 2018 alone, Secure-D processed over 1.8 billion mobile transactions and detected and blocked over 63,000 malicious apps in 16 countries. The company added that the platform currently blocks an average of 170 malicious applications every day.

Earlier this year, Secure-D reported on the suspicious background activity of 4shared, a popular file-sharing app; Vidmate, a video downloader; and Weather Forecast, an app preinstalled on Alcatel devices. It said all of these apps had previously been available on the Google Play Store and had more than 600,000 downloads before the platform flagged their suspicious behavior. In these three cases alone, the company said, Secure-D detected and blocked nearly 250 million suspicious mobile transactions.

“By providing information on suspicious apps freely to the public via Secure-D Index, Upstream aims to further protect mobile subscribers, operators, and advertisers from the ever-growing threat of mobile ad fraud, whose value is currently estimated at $40 billion,” they added.

Technology

‘User Data Are Not Transferred To Russia,’ Says FaceApp

Photo: charlene mcbride | Flickr | CC BY 2.0

The popular photo-manipulation app FaceApp has taken social media platforms like Facebook, Twitter, and Instagram by storm. And “by storm” means that a lot of people, including celebrities and other famous individuals, have jumped on the bandwagon to see how they would look when they grow old.

There is a lot that is interesting about the app: it can manipulate a photo a user submits to produce a realistic rendering of the face as it ages. It is no surprise that FaceApp has gained popularity among young users around the world.

There is a problem, though: you need to submit your photo to the app. This means that once you provide the chosen selfie, FaceApp has your photo at its disposal. That is why security experts and data privacy advocates have raised concerns about the implications of sending a photo to the app.

One thing that concerns advocates and experts the most is the fact that the company that built and developed the app is from Russia. The app is owned by a Russian company named Wireless Labs and has been downloaded by more than 100 million people via Google Play on the Android platform, and by over 50 million people across other platforms, including Apple’s iOS.

The Russia issue

Many advocates have cited Russia’s human rights record, as well as the heightened surveillance of citizens in the country. The fears of advocates and experts were amplified after the app’s privacy terms and conditions were found to sneakily include a clause that would “grant FaceApp a perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content and any name, username or likeness provided in connection with your User Content in all media formats and channels now known or later developed, without compensation to you [the user or the owner of the photo].”

The app’s terms of use also allow the developers to publish the photos they gather publicly, at their discretion. “When you post or otherwise share User Content on or through our Services, you understand that your User Content and any associated information (such as your [username], location or profile photo) will be visible to the public,” the terms state.

The polarizing opinions about FaceApp have opened a discussion about how carelessly people share their photos on social media platforms and smartphone apps without thoroughly understanding the implications. In an article published by Wired, the magazine argued that what FaceApp is doing is more common than new.

The article said the same thing happens when someone uploads a photo to Facebook or Instagram. Instead of demonizing and singling out FaceApp, it encourages users to be more vigilant about the data they share across all platforms.

FaceApp clarifies

Security experts and advocates, however, still press the idea that FaceApp could be used by the Russian government in its surveillance agenda. FaceApp stands firm in its position that it protects its users’ privacy, saying that it “perform[s] most of the photo processing in the cloud. We only upload a photo selected by a user for editing. We never transfer any other images from the phone to the cloud.”

“We might store an uploaded photo in the cloud. The main reason for that is performance and traffic: we want to make sure that the user doesn’t upload the photo repeatedly for every edit operation. Most images are deleted from our servers within 48 hours from the upload date. We don’t sell or share any user data with any third parties,” they added.

The company also countered claims that it could serve as Russia’s trojan horse, saying that “even though the core R&D team is located in Russia, the user data is not transferred to Russia.”

Furthermore, the company clarified that users are not required to log in to use the app, and that while it asks for permission to access the phone’s camera and photo roll, it only accesses the photos the user selects for editing.

“You can quickly check this with any of network sniffing tools available on the internet,” they said.
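
That suggestion can be taken literally: routing the phone through an intercepting proxy shows exactly which hosts the app contacts and when a photo is uploaded. A small sketch of a mitmproxy addon that logs every outgoing request’s destination and body size (setting up the proxy and installing its certificate on the phone is assumed, and the script is illustrative only):

```python
# Save as log_hosts.py and run with:  mitmdump -s log_hosts.py
from mitmproxy import http


class LogHosts:
    def request(self, flow: http.HTTPFlow) -> None:
        # Log the destination host and the size of any uploaded body, which
        # makes photo uploads (and where they go) easy to spot in the output.
        body_size = len(flow.request.raw_content or b"")
        print(f"{flow.request.method} {flow.request.pretty_host} "
              f"({body_size} bytes uploaded)")


addons = [LogHosts()]
```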
