New tool to help Asian news agencies spot fake images

Hello everyone. Today we are sharing a translation of an article prepared on the eve of the launch of a new OTUS course, Computer Vision.

Journalists and fact checkers face enormous difficulties in separating reliable information from rapidly spreading misinformation, and this applies to more than just the texts we read. Viral images and memes fill our news feeds and chats, and they often distort context or are outright fakes. In Asia, where there are eight times more social network users than in North America, the problem is far more serious in scale.

There are tools that Asian journalists can use to determine the origin and reliability of news images, but they are relatively old, unreliable, and for the most part available only on desktop computers. That is a real obstacle for fact checkers and journalists in countries where most people connect to the Internet through their mobile phones.
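One common building block of tools that trace the origin of news images is perceptual hashing, which matches a viral image against known originals even after recompression or a brightness shift. Below is a minimal difference-hash (dHash) sketch in NumPy. It illustrates the general technique only, not the algorithm any particular tool actually uses; the block-average "resize" is a simplification that assumes conveniently sized grayscale inputs.

```python
import numpy as np

def dhash(image, hash_size=8):
    """Difference hash: shrink the grayscale image to hash_size x (hash_size+1)
    by block averaging, then record whether each pixel is brighter than its
    left neighbour. The result is a compact fingerprint that survives resizing
    and uniform brightness changes."""
    h, w = image.shape
    rows, cols = hash_size, hash_size + 1
    # Crude block-average resize; crops the edges if dimensions don't divide evenly.
    resized = image[: h - h % rows, : w - w % cols].reshape(
        rows, h // rows, cols, w // cols).mean(axis=(1, 3))
    return (resized[:, 1:] > resized[:, :-1]).flatten()

def hamming(a, b):
    """Number of differing bits; a small distance means 'probably the same image'."""
    return int(np.count_nonzero(a != b))

# A horizontal gradient and a brightened copy of it hash identically,
# while a vertical gradient lands far away in Hamming distance.
horiz = np.tile(np.linspace(0, 255, 72), (64, 1))
vert = np.tile(np.linspace(0, 255, 64).reshape(-1, 1), (1, 72))
```

In practice a newsroom tool would hash incoming images and query the fingerprints against a database of previously seen material, flagging near-duplicates for a human to review.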

Over the past two years, the Google News Initiative has worked with journalists on technology for identifying manipulated images. At the 2018 Trusted Media Summit in Singapore, a team of experts from Google, Storyful and a wide range of news-industry representatives joined forces to develop a new tool optimized for mobile devices and built on advances in artificial intelligence. With the support of the Google News Initiative, the GNI Cloud Program and volunteer engineers from Google, the resulting prototype grew into an application called Source, powered by Storyful.

Now that the application is already in use by journalists across the region, we asked Eamonn Kennedy, Storyful's product director, to tell us a little more about it.

How does Storyful see the problems faced by journalists and fact checkers around the world, and in Asia in particular?


What is Source, and how does it use AI?


Source uses Google's AI to help journalists check the authenticity of images.





What are the plans for Source in 2020?
[Eamonn] More than 130 fact checkers from 17 countries have already worked with the tool. About 30 newsrooms now use Source in their day-to-day work, and for a product that is still evolving that is an encouraging result.

Looking ahead, we listen to fact checkers when thinking about what the next version of the application should be. We know that Source has been used, for example, to examine individual frames of video, which shows the potential to grow the application beyond working with text and images alone. The ultimate goal is to create a “toolbox” of publicly available fact-checking resources, with Source at its center, using Google AI to help journalists around the world.

That is the end of the translation, but we asked the course leader, Arthur Kadurin, to comment on the article:
One of the current “hot” topics in computer vision is adversarial attacks: methods of “deceiving” modern visual recognition algorithms with new, specially crafted images. In recent years, applications that transform photos and videos in special ways (FaceApp, deepfakes, etc.) have drawn wide attention, and one of the key questions is whether we can use neural networks to distinguish real images from manipulated ones. One of the topics of the Computer Vision course is devoted to this question: in the lesson we will analyze modern approaches both to detecting such “deception” with neural networks and to successfully “deceiving” them.
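To make the adversarial-attack idea above concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) against a toy NumPy logistic-regression “classifier”. The model, weights and epsilon are invented for this demo; real attacks target deep vision networks, but the gradient-sign step is the same idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image classifier: logistic regression on flattened
# 8x8 inputs. Small weights keep the probabilities away from saturation.
w = 0.3 * rng.normal(size=64)
b = 0.0

def predict_proba(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: take one step of size eps in the sign of the
    loss gradient with respect to the input, increasing the cross-entropy loss
    for the true label y, then clip back to the valid pixel range [0, 1]."""
    grad = (predict_proba(x) - y) * w  # exact input gradient for logistic regression
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# An input the model classifies as class 1 with high confidence...
x = (np.sign(w) + 1) / 2
# ...and an adversarial copy that differs by at most eps per pixel yet scores lower.
x_adv = fgsm(x, y=1.0, eps=0.25)
```

The perturbation is bounded per pixel, so the adversarial image looks nearly identical to a human while the model's confidence drops; detecting such inputs is exactly the kind of problem discussed in the course lesson.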

Learn more about the course
