Technical and Socio-Technical Responses to Deepfakes
Generously funded by the Monash Data Futures Institute, this is a joint research program between the Faculties of Arts, Law and Information Technology. It examines responses to deepfake technology from technological, criminological, sociological and legal perspectives. Particular concerns raised by the spread of this technology are misinformation and technology-facilitated abuse. AiLECS is leading the technological research component of the project.
Deepfake technology is based on deep learning algorithms. The most prevalent strategies for creating deepfakes employ autoencoders and generative adversarial networks (GANs). These models learn to generate realistic fake data by training on massive volumes of real data. Although the technology has positive applications, the vast majority of its real-world applications are fraudulent, posing threats to personal privacy and, in some cases, national security.
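To make the autoencoder half of this picture concrete, the sketch below trains a minimal linear autoencoder on synthetic data using plain NumPy. Everything here (the toy data, dimensions and learning rate) is invented for illustration; real deepfake pipelines use much deeper convolutional encoders and decoders trained on face imagery, not an 8-dimensional toy problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of 8-dim vectors lying near a 2-dim subspace,
# a stand-in for high-dimensional media with low-dimensional structure.
basis = rng.normal(size=(2, 8))
X = rng.normal(size=(200, 2)) @ basis + 0.05 * rng.normal(size=(200, 8))

# Linear autoencoder: encoder W_e (8 -> 2), decoder W_d (2 -> 8).
W_e = rng.normal(scale=0.1, size=(8, 2))
W_d = rng.normal(scale=0.1, size=(2, 8))

mse0 = float(np.mean((X @ W_e @ W_d - X) ** 2))  # error before training

lr = 0.01
for _ in range(2000):
    Z = X @ W_e           # encode into the 2-dim latent space
    X_hat = Z @ W_d       # decode back to 8 dims
    err = X_hat - X       # reconstruction error
    # Gradient-descent updates for the mean squared reconstruction error
    grad_Wd = Z.T @ err / len(X)
    grad_We = X.T @ (err @ W_d.T) / len(X)
    W_d -= lr * grad_Wd
    W_e -= lr * grad_We

mse = float(np.mean((X @ W_e @ W_d - X) ** 2))
print(mse0, mse)  # reconstruction error should drop substantially
```

In a deepfake pipeline the same principle applies at scale: an encoder compresses a face into a compact latent code, and a decoder trained on a different identity reconstructs it, swapping appearance while preserving pose and expression.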
Unlike typical machine learning models, deep learning algorithms such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) can automatically extract the signature traits (fingerprints) left behind during data manipulation. These fingerprints form distinct patterns that can be used to distinguish GAN-generated deepfakes from authentic media, so deep learning-based detection approaches can improve detection accuracy. This is a rapidly evolving research area. Our project aims to create and test a deepfake detection system using an audio-visual deepfake dataset representing a diversity of people and a variety of algorithmic manipulations.
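As a toy illustration of the fingerprint idea, the sketch below separates smooth synthetic "real" images from "fakes" carrying a periodic checkerboard artifact, a crude stand-in for the upsampling fingerprints that generative models can leave behind. It uses a hand-crafted frequency-domain statistic rather than a learned model, and all data is synthetic; actual detection systems learn such cues with CNNs from real and manipulated media.

```python
import numpy as np

rng = np.random.default_rng(1)

def highfreq_energy(img):
    """Fraction of spectral energy outside a low-frequency disc.

    Periodic manipulation artifacts concentrate energy at high
    spatial frequencies, so this crude statistic separates the two
    toy classes below; real detectors learn such fingerprints with
    CNNs instead of a hand-picked statistic."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    n = img.shape[0]
    c, r = n // 2, n // 4
    y, x = np.ogrid[:n, :n]
    low = (y - c) ** 2 + (x - c) ** 2 <= r ** 2
    return float(spec[~low].sum() / spec.sum())

def smooth_image(n=32):
    """Toy 'real' image: a random mix of low-frequency sinusoids."""
    y, x = np.mgrid[:n, :n] / n
    a, b, c = rng.normal(size=3)
    return a * np.sin(2 * np.pi * x) + b * np.cos(2 * np.pi * y) \
        + c * np.sin(2 * np.pi * (x + y))

real = [smooth_image() for _ in range(20)]

# Toy 'fakes': the same images plus a checkerboard pattern, mimicking
# the high-frequency artifacts of generator upsampling layers.
checker = np.indices((32, 32)).sum(axis=0) % 2
fake = [im + 0.8 * checker for im in real]

real_scores = [highfreq_energy(im) for im in real]
fake_scores = [highfreq_energy(im) for im in fake]
print(max(real_scores), min(fake_scores))
```

The hand-picked statistic and threshold only work because the artifact here is known in advance; the point of CNN-based detectors is that they learn which fingerprint patterns matter directly from labelled real and fake examples.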