A Neural Network Tool From Adobe Can Detect Manipulated Images Using Photoshop

Adobe partnered with UC Berkeley to develop a neural network that can detect photo manipulation in Photoshop and "undo" the edit.

Adobe released a new tool to detect photo manipulation. Photo: Adobe

The era of manipulated images being used in crimes and other forms of deception may be closer to its end: collaborative research between Adobe and UC Berkeley has produced trained neural network software that spots photos manipulated in Photoshop.

Adobe Photoshop has become one of the most popular photo manipulation programs, used by creatives in all walks of life. Released in 1990, Photoshop has, Adobe says, "democratized creativity and expression." However, since the technology is meant to be accessible to everyone regardless of intention, there are those who exploit the software's ability to manipulate images, sometimes unlawfully. This kind of deception ranges from people over-editing their photos to misrepresent their physical appearance on social media and dating sites to using manipulated images to incriminate innocent people or serve as evidence in court.

“While we are proud of the impact that Photoshop and Adobe’s other creative tools have made on the world, we also recognize the ethical implications of our technology. Trust in what we see is increasingly important in a world where image editing has become ubiquitous – fake content is a serious and increasingly pressing issue,” Adobe wrote in a press release.

The new software that aims to detect manipulated photos is one of the software giant's efforts to balance out the use of its technology. As the company puts it, it is "firmly committed to finding the most useful and responsible ways to bring new technologies to life – continually exploring using new technologies, such as artificial intelligence (AI), to increase trust and authority in digital media."

It has to be noted that the new software can only detect whether an image was manipulated with Photoshop's "Face Aware Liquify" tool, so living in a world free of deceptive images is still a pipe dream. However, the trained neural network gives people the ability to detect these Photoshopped faces and even provides suggestions on how to revert an image to its original state.
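The article does not spell out how that "revert" suggestion works, but a plausible mechanism for a warping edit like Face Aware Liquify is to predict a per-pixel displacement field and apply the reverse warp. The sketch below is an illustration under that assumption, not Adobe's implementation: it takes a hypothetical flow field (as a detector might output) and undoes the warp with OpenCV's remap.

import cv2
import numpy as np

def unwarp(edited_bgr, flow):
    """Approximately undo a facial warp.

    edited_bgr : HxWx3 uint8 image that was manipulated.
    flow       : HxWx2 float32 array; flow[y, x] = (dx, dy) says where, relative
                 to (x, y), the restored pixel should be sampled from in the
                 edited image. (Hypothetical detector output; the real model's
                 format may differ.)
    """
    h, w = edited_bgr.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # Backward warp: sample the edited image at the displaced coordinates.
    return cv2.remap(edited_bgr, map_x, map_y, cv2.INTER_LINEAR)

A zero flow field returns the image unchanged; the closer the predicted field matches the warp the editor applied, the closer the output gets to the original face.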

Adobe x UC Berkeley

The study that developed the photo manipulation detection software was spearheaded by Adobe researchers Richard Zhang and Oliver Wang, along with their UC Berkeley collaborators, Sheng-Yu Wang, Dr. Andrew Owens, and Professor Alexei A. Efros. According to Adobe, while the technology is still in its early stages, "this collaboration between Adobe Research and UC Berkeley is a step towards democratizing image forensics, the science of uncovering and analyzing changes to digital images."

The focus of the study is on the “Face Aware Liquify” feature in Photoshop because it’s popular for adjusting facial features, including making adjustments to facial expressions. The feature’s effects can be delicate, which made it an interesting test case for detecting both drastic and subtle alterations to faces.

The software was developed by training a Convolutional Neural Network (CNN), a form of deep learning, so that it can recognize altered images of faces. The researchers gathered images from across the internet, edited them, and fed both versions to the neural network so the technology could learn to tell the original images from the manipulated ones.
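Adobe has not published its training code, but the core idea, a CNN trained to separate original face crops from Liquify-warped ones, can be sketched in a few lines. The dataset path, ResNet-18 backbone, and hyperparameters below are illustrative assumptions, not the researchers' actual setup.

import torch
import torch.nn as nn
from torchvision import datasets, transforms, models

# Illustrative layout: one folder per class ("original", "warped") of face crops.
# The research team generated its pairs by scripting Face Aware Liquify edits.
data = datasets.ImageFolder(
    "face_crops/",  # hypothetical dataset path
    transform=transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ]),
)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Any off-the-shelf CNN backbone works for this sketch; replace the final
# layer with a two-class head (original vs. manipulated).
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)  # learn to separate the classes
        loss.backward()
        optimizer.step()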


“We started by showing image pairs (an original and an alteration) to people who knew that one of the faces was altered,” Oliver Wang says. “For this approach to be useful, it should be able to perform significantly better than the human eye at identifying edited faces.”

Computer vs. human eyes

The researchers were particularly interested in how accurately the technology could identify manipulated images compared with how well humans can. When human subjects were shown the image pairs and asked to pick the manipulated one, they identified the manipulation accurately only 53% of the time. Statistically, that is no better than randomly guessing whether an image has been manipulated.


As disappointing as the human test results are, the software offers hope: the trained neural network was able to identify photo manipulation 99% of the time.

“It might sound impossible because there are so many variations of facial geometry possible,” says Professor Alexei A. Efros, UC Berkeley. “But, in this case, because deep learning can look at a combination of low-level image data, such as warping artifacts, as well as higher level cues such as layout, it seems to work.”

While the study is limited to a single software feature and cannot detect other forms of image manipulation, Adobe believes the development of this neural network is a step towards bringing image forensics closer to everybody.

“The journey of democratizing image forensics is just beginning.”

About the Author

Al Restar
A consumer tech and cybersecurity journalist who does content marketing while daydreaming about having unlimited coffee for life and getting a pet llama. I also own a cybersecurity blog called Zero Day.
