Swin-Fake: A Consistency Learning Transformer-Based Deepfake Video Detector

aut.relation.endpage: 3045
aut.relation.issue: 15
aut.relation.journal: Electronics
aut.relation.startpage: 3045
aut.relation.volume: 13
dc.contributor.author: Gong, Liang Yu
dc.contributor.author: Li, Xue Jun
dc.contributor.author: Chong, Peter Han Joo
dc.date.accessioned: 2024-08-14T02:51:01Z
dc.date.available: 2024-08-14T02:51:01Z
dc.date.issued: 2024-08-01
dc.description.abstract: Deepfake has become an emerging technology affecting cyber-security through its illegal applications in recent years. Most deepfake detectors use CNN-based models such as the Xception Network to distinguish real from fake media; however, their cross-dataset performance is not ideal because they currently suffer from over-fitting. This paper therefore proposes a spatial consistency learning method that relieves this issue in three respects. Firstly, we increased the number of data augmentation methods to five, more than in our previous study. Specifically, we sampled several equally spaced frames from each video and randomly applied five different data augmentations to obtain different views of the data, enriching the input variety. Secondly, we chose the Swin Transformer as the feature extractor instead of a CNN-based backbone; rather than using a pretrained backbone only for the downstream task, our approach encodes the data with an end-to-end Swin Transformer, aiming to learn the correlations between different image patches. Finally, we combined this with consistency learning, which can capture more relationships in the data than supervised classification alone. We measured the consistency of video-frame features by computing their cosine distance and applied the traditional cross-entropy loss to regularize this classification objective. Extensive in-dataset and cross-dataset experiments demonstrate that Swin-Fake produces relatively good results on several open-source deepfake datasets, including FaceForensics++, DFDC, Celeb-DF and FaceShifter. Compared with several benchmark models, our approach shows relatively strong robustness in detecting deepfake media.
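The abstract describes combining a cross-entropy classification loss with a consistency term based on the cosine distance between features of different views of the same video. A minimal NumPy sketch of that idea follows; the function names, the weighting factor `alpha`, and the exact way the two terms are combined are illustrative assumptions, not the paper's precise formulation.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cross_entropy(probs, label):
    """Cross-entropy loss for a single sample given predicted class probabilities."""
    return float(-np.log(probs[label]))

def combined_loss(feat_a, feat_b, probs, label, alpha=0.5):
    """Classification loss regularized by a feature-consistency term.

    feat_a, feat_b: features of two augmented views of the same video frame.
    alpha: hypothetical weight balancing the consistency term (an assumption).
    """
    # Consistency term: cosine distance penalizes views whose features diverge.
    consistency = 1.0 - cosine_similarity(feat_a, feat_b)
    return cross_entropy(probs, label) + alpha * consistency
```

When the two views produce identical features, the consistency term vanishes and the loss reduces to plain cross-entropy; divergent features add a penalty proportional to their cosine distance.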
dc.identifier.citation: Electronics, ISSN: 2079-9292 (Print); 2079-9292 (Online), MDPI AG, 13(15), 3045-3045. doi: 10.3390/electronics13153045
dc.identifier.doi: 10.3390/electronics13153045
dc.identifier.issn: 2079-9292
dc.identifier.uri: http://hdl.handle.net/10292/17885
dc.language: en
dc.publisher: MDPI AG
dc.relation.uri: https://www.mdpi.com/2079-9292/13/15/3045
dc.rights: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
dc.rights.accessrights: OpenAccess
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: 40 Engineering
dc.subject: 4009 Electronics, Sensors and Digital Hardware
dc.subject: 0906 Electrical and Electronic Engineering
dc.title: Swin-Fake: A Consistency Learning Transformer-Based Deepfake Video Detector
dc.type: Journal Article
pubs.elements-id: 564702
Files
Original bundle
Name: Gong et al_2024_Swin fake.pdf
Size: 1.04 MB
Format: Adobe Portable Document Format
Description: Journal article