Face Swapping in the Deepfake Era: Technologies, Applications, and Ethical Concerns

Authors

  • Kaige Zhang

DOI:

https://doi.org/10.54097/73b15v54

Keywords:

Face swapping, GHOST, Inswapper, SimSwap.

Abstract

Face-swapping technology has evolved from conventional manual editing, through the emergence of deep-learning-powered deepfakes, to a stage of diversified development, achieving marked improvements in speed, realism, and stability. It is now applied in film, television, entertainment, and virtual makeup, while also giving rise to problems such as misinformation and privacy violations that have drawn attention to ethical and legal constraints. Among the mainstream models, GHOST, a deep-learning framework combining GANs and autoencoders, performs well in high-precision face swapping by extracting and mapping facial features; it preserves detail and remains stable under complex lighting, but has a large parameter count and slow inference. Inswapper, an advanced pre-trained model trained on millions of diverse facial images, excels at 3D facial structure inference and feature disentanglement, enabling realistic identity transfer and flexible integration into various tools, though its outputs can suffer from blurring. SimSwap, a GAN-based framework with a generator and a discriminator, uses an ID Injection Module and a weak feature matching loss to achieve arbitrary identity swapping while preserving target attributes, although it requires considerable computing resources and may produce artifacts at extreme angles.
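The two training signals mentioned for SimSwap can be illustrated with a minimal NumPy sketch: an identity loss that pulls the swapped face's identity embedding (e.g., from an ArcFace-style recognition network) toward the source's, and a "weak" feature matching loss that compares only the last few discriminator feature maps so that target attributes such as pose and expression are kept without forcing pixel-level similarity. The function names and toy tensors here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def identity_loss(src_emb, swap_emb):
    # Cosine-distance identity loss: pushes the swapped face's identity
    # embedding toward the source identity's embedding.
    cos = np.dot(src_emb, swap_emb) / (
        np.linalg.norm(src_emb) * np.linalg.norm(swap_emb))
    return 1.0 - cos

def weak_feature_matching_loss(feats_swap, feats_target, start_layer=2):
    # "Weak" feature matching: match only the deeper discriminator feature
    # maps (from start_layer on), which encode attributes like pose and
    # expression, rather than all layers or raw pixels.
    pairs = list(zip(feats_swap[start_layer:], feats_target[start_layer:]))
    return sum(np.mean(np.abs(a - b)) for a, b in pairs) / len(pairs)

# Toy stand-ins for a 512-d identity embedding and 4 discriminator feature maps.
rng = np.random.default_rng(0)
emb = rng.standard_normal(512)
feats = [rng.standard_normal((4, 4)) for _ in range(4)]

print(identity_loss(emb, emb))                    # identical identities -> ~0
print(weak_feature_matching_loss(feats, feats))   # identical features -> 0.0
```

Both terms are minimized jointly with the usual adversarial loss during training; the sketch only shows how each term behaves on matching versus non-matching inputs.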

References

[1] Du P, Li C, Dong C. Face Swapping for Film and Television Video based on FaceNet and Local Translation Warp. 2022 IEEE/ACIS 22nd International Conference on Computer and Information Science (ICIS). IEEE, 2022: 290-295.

[2] Tuysuz M K, Kılıç A. Analyzing the legal and ethical considerations of deepfake technology. Interdisciplinary Studies in Society, Law, and Politics, 2023, 2(2): 4-10.

[3] Groshev A, Maltseva A, Chesakov D, et al. GHOST—a new face swap approach for image and video domains. IEEE Access, 2022, 10: 83452-83462.

[4] Nirkin Y, Masi I, Tuan A T, et al. On face segmentation, face swapping, and face perception. Proceedings of 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018). IEEE, 2018: 98-105.

[5] Li L, Bao J, Yang H, et al. Advancing high fidelity identity swapping for forgery detection. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020: 5074-5083.

[6] Deng J, Guo J, Xue N, et al. Arcface: Additive angular margin loss for deep face recognition. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019: 4690-4699.

[7] Agarwal A, Sen B, Mukhopadhyay R, et al. FaceOff: A video-to-video face swapping system. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2023: 3495-3504.

[8] Chen R, Chen X, Ni B, et al. Simswap: An efficient framework for high fidelity face swapping. Proceedings of the 28th ACM international conference on multimedia. 2020: 2003-2011.

[9] Blanz V, Scherbaum K, Vetter T, et al. Exchanging faces in images. Computer Graphics Forum. Oxford, UK and Boston, USA: Blackwell Publishing, Inc, 2004, 23(3): 669-676.

[10] Natsume R, Yatagawa T, Morishima S. Fsnet: An identity-aware generative model for image-based face swapping. Asian Conference on Computer Vision. Cham: Springer International Publishing, 2018: 117-132.

[11] Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial networks. Communications of the ACM, 2020, 63(11): 139-144.

[12] Thies J, Zollhofer M, Stamminger M, et al. Face2face: Real-time face capture and reenactment of rgb videos. Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 2387-2395.

Published

29-01-2026

Section

Articles

How to Cite

Zhang, K. (2026). Face Swapping in the Deepfake Era: Technologies, Applications, and Ethical Concerns. Academic Journal of Science and Technology, 19(2), 364-368. https://doi.org/10.54097/73b15v54