FF4ALL

Research Area:
Research Lines:
NRRP
DEIB Role: Partner
Start date: 2024-02-01
Length: 23 months
Project abstract
The creation and diffusion of fake multimedia content have reached unprecedented levels, owing to the availability of artificial intelligence tools that can generate fake content almost indistinguishable from real content with relatively little effort. The project FF4ALL - Detection of Deep Fake Media and Life-Long Media Authentication aims to develop theoretical and practical tools to limit the spread of false or counterfeit content, both through passive analysis techniques that distinguish fake content from real content, and through active techniques, applied when the content is created, that facilitate its subsequent authentication.
Within FF4ALL, Politecnico di Milano is responsible for two specific tasks:
Audio/Video Deepfake
Deepfakes exist for both audio and video, and their proliferation in recent years has raised significant concerns. Video deepfakes involve the use of advanced techniques to manipulate or generate realistic-looking video content (e.g., superimpose one person's face onto another person's body, change facial expressions, etc.). Audio deepfakes use similar technology to manipulate or generate audio content (e.g., change a person's voice, mimic their speaking style, etc.).
However, deepfakes are not limited to video or audio manipulation in isolation: they can involve both modalities together. In this framework, developing joint audio/video deepfake detectors has become an urgent necessity. The main goal of this task is to develop effective and efficient forensic techniques for the analysis of audio/video deepfakes.
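One common way to combine audio and video analysis, sketched below, is late (score-level) fusion: each modality is scored by its own detector, and the scores are merged into a single decision. This is an illustrative minimal example, not the project's actual method; the function names, weights, and threshold are hypothetical.

```python
# Hypothetical sketch of late (score-level) fusion for joint audio/video
# deepfake detection. Scores range from 0 (real) to 1 (fake); the weight
# and threshold values are illustrative, not taken from the project.

def fuse_scores(audio_score: float, video_score: float,
                audio_weight: float = 0.5) -> float:
    """Merge per-modality deepfake scores into one fused score
    via a weighted average."""
    if not 0.0 <= audio_weight <= 1.0:
        raise ValueError("audio_weight must lie in [0, 1]")
    return audio_weight * audio_score + (1.0 - audio_weight) * video_score

def is_fake(audio_score: float, video_score: float,
            threshold: float = 0.5) -> bool:
    """Flag the clip as fake when the fused score exceeds the threshold."""
    return fuse_scores(audio_score, video_score) > threshold
```

A benefit of fusing modalities is that a manipulation visible in only one stream (e.g., a cloned voice over untouched video) can still push the fused score above the decision threshold.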
Advanced Methods for Deepfake Detection
A deepfake is a digitally manipulated video in which artificial intelligence and deep learning techniques are used to create hyper-realistic content that convincingly depicts individuals doing things they never did. The widespread use of deepfake videos is dangerous: they can serve misinformation and disinformation campaigns, enable fraud and scams, and erode trust. It is therefore necessary to develop forensic techniques to detect video deepfakes.
The goal of this task is to develop forensic detectors that use advanced techniques to determine whether a video is a new-generation deepfake. Most video deepfake detectors in the literature rely on frame-by-frame analysis: each frame is processed separately, and the video-level decision is obtained by merging the per-frame detection results. However, to cope with advanced, up-to-date deepfake videos, the analysis must be extended to the video level. Frames should not be analyzed as separate entities but processed together, since deepfake artifacts may be more visible as temporal inconsistencies.
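The contrast between the two strategies above can be sketched as follows. This is a toy illustration, not the project's detector: the per-frame scores are assumed to come from some hypothetical frame-level model, and the temporal cue shown (mean absolute change between consecutive frame scores) is just one simple example of a video-level statistic.

```python
# Toy contrast between a frame-by-frame baseline and a simple video-level
# temporal cue. Frame scores (0 = real, 1 = fake) are assumed to come from
# a hypothetical per-frame detector; all names here are illustrative.

def frame_level_decision(frame_scores: list[float],
                         threshold: float = 0.5) -> bool:
    """Baseline: treat frames independently and average their scores."""
    return sum(frame_scores) / len(frame_scores) > threshold

def temporal_inconsistency(frame_scores: list[float]) -> float:
    """Video-level cue: mean absolute change between consecutive frames.
    Abrupt jumps may reveal artifacts invisible in any single frame."""
    diffs = [abs(b - a) for a, b in zip(frame_scores, frame_scores[1:])]
    return sum(diffs) / len(diffs)
```

A clip whose per-frame scores oscillate strongly can average out to an innocuous value under the baseline, while the temporal statistic still flags the inconsistency, which is the motivation for processing frames jointly.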