An Overview of Generative Adversarial Networks

Authors

  • Xinyue Long
  • Mingchuan Zhang

DOI:

https://doi.org/10.54097/jceim.v10i3.8677

Keywords:

Generative Adversarial Network, Derivative Models, Development Trend

Abstract

The generative adversarial network (GAN), inspired by the two-person zero-sum game in game theory, is one of the most important research hotspots in the field of artificial intelligence. Composed of a generator network and a discriminator network, a GAN is trained by adversarial learning. In this paper, we discuss the development status of GAN. We first introduce the basic idea and training process of GAN in detail, and summarize the architectures and characteristics of GAN derivative models, including the conditional GAN, the deep convolutional DCGAN, WGAN based on the Wasserstein distance, and WGAN-GP based on a gradient penalty. We then introduce specific applications of GAN in the fields of information security, face recognition, and 3D and video technology, and summarize the shortcomings of GAN. Finally, we look forward to the development trend of GAN.
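The adversarial training process summarized above can be illustrated with a minimal sketch: a one-dimensional toy GAN in which an affine generator and a logistic discriminator are updated by alternating gradient steps on the minimax objective of Goodfellow et al. (2014), using the non-saturating generator loss. All names and hyperparameters here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy 1-D GAN sketch (assumed setup): real data ~ N(3, 0.5^2),
# generator G(z) = g_w * z + g_b, discriminator D(x) = sigmoid(d_a * x + d_c).
g_w, g_b = 1.0, 0.0
d_a, d_c = 0.1, 0.0
lr, batch = 0.05, 64

for step in range(2000):
    z = rng.standard_normal(batch)          # noise samples
    real = 3.0 + 0.5 * rng.standard_normal(batch)
    fake = g_w * z + g_b

    # Discriminator: gradient ascent on E[log D(real)] + E[log(1 - D(fake))].
    d_real = sigmoid(d_a * real + d_c)
    d_fake = sigmoid(d_a * fake + d_c)
    d_a += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    d_c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on E[log D(fake)] (non-saturating trick).
    d_fake = sigmoid(d_a * fake + d_c)
    g_grad = (1 - d_fake) * d_a             # dL/dfake, chained through G below
    g_w += lr * np.mean(g_grad * z)
    g_b += lr * np.mean(g_grad)

# After training, generated samples should drift toward the real distribution.
samples = g_w * rng.standard_normal(10000) + g_b
```

In practice both networks are deep neural nets trained with stochastic gradient methods, but the alternating discriminator/generator updates follow the same pattern as this sketch.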

References

S. Zhenguo, Z. Chengsheng, and C. Feixiong, “Review of generative adversarial networks and their applications in power systems,” Proceedings of the CSEE, vol. 43, no. 03, pp. 987–1004, 2023.

Z. Enqi, G. Guanghua, and Z. Chen, “Research progress of generative adversarial network gan,” Application Research of Computers, vol. 38, no. 04, pp. 968–974, 2021.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, Curran Associates, Inc., vol. 27, 2014.

T. Karras, T. Aila, S. Laine, and J. Lehtinen, “Progressive growing of gans for improved quality, stability, and variation,” CoRR, vol. abs/1710.10196, 2017.

D. Pathak, P. Krähenbühl, J. Donahue, T. Darrell, and A. A. Efros, “Context encoders: Feature learning by inpainting,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, pp. 2536–2544.

P. Isola, J. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, pp. 5967–5976.

K. Lin, D. Li, X. He, M. Sun, and Z. Zhang, “Adversarial ranking for language generation,” in Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, pp. 3155–3165.

R. M. Neal, “Markov chain sampling methods for dirichlet process mixture models,” Journal of Computational and Graphical Statistics, vol. 9, no. 02, pp. 229–265, 2012.

N. Guan, D. Tao, Z. Luo, and B. Yuan, “Manifold regularized discriminative nonnegative matrix factorization with fast gradient descent,” IEEE Trans. Image Process., vol. 20, no. 7, pp. 2030–2048, 2011.

M. Mirza and S. Osindero, “Conditional generative adversarial nets,” CoRR, vol. abs/1411.1784, 2014.

A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” in 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, Y. Bengio and Y. LeCun, Eds., 2016.

M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein GAN,” CoRR, vol. abs/1701.07875, 2017.

I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville, “Improved training of wasserstein gans,” in Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, I. Guyon, U. von Luxburg, S. Bengio, H. M. Wallach, R. Fergus, S. V. N. Vishwanathan, and R. Garnett, Eds., 2017, pp. 5767–5777.

A. Triastcyn and B. Faltings, “Generating differentially private datasets using gans,” CoRR, vol. abs/1803.03148, 2018.

C. Huang, P. Kairouz, X. Chen, L. Sankar, and R. Rajagopal, “Generative adversarial privacy,” CoRR, vol. abs/1807.05306, 2018.

L. Frigerio, A. S. de Oliveira, L. Gomez, and P. Duverger, “Differentially private generative adversarial networks for time series, continuous, and discrete open data,” CoRR, vol. abs/1901.02477, 2019.

J. Kim, S. Bu, and S. Cho, “Zero-day malware detection using transferred generative adversarial networks based on deep autoencoders,” Inf. Sci., vol. 460–461, pp. 83–102, 2018.

U. Fiore, A. D. Santis, F. Perla, P. Zanetti, and F. Palmieri, “Using generative adversarial networks for improving classification effectiveness in credit card fraud detection,” Inf. Sci., vol. 479, pp. 448–455, 2019.

C. Yin, Y. Zhu, S. Liu, J. Fei, and H. Zhang, “An enhancing framework for botnet detection using generative adversarial networks,” in 2018 International Conference on Artificial Intelligence and Big Data (ICAIBD), 2018, pp. 228–234.

M. Abadi and D. G. Andersen, “Learning to protect communications with adversarial neural cryptography,” CoRR, vol. abs/1610.06918, 2016.

A. N. Gomez, S. Huang, I. Zhang, B. M. Li, M. Osama, and L. Kaiser, “Unsupervised cipher cracking using discrete gans,” in 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018.

B. Hitaj, P. Gasti, G. Ateniese, and F. Pérez-Cruz, “Passgan: A deep learning approach for password guessing,” CoRR, vol. abs/1709.00440, 2017.

F. Zeng, L. Zou, et al., “Application of conditional GAN deblurring algorithm in face recognition,” Journal of Chinese Computer Systems, vol. 42, no. 12, pp. 2607–2613, 2021.

R. T. Marriott, S. Romdhani, and L. Chen, “A 3d GAN for improved large-pose facial recognition,” in IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pp. 13445–13455.

G. Antipov, M. Baccouche, and J. Dugelay, “Face aging with conditional generative adversarial networks,” in 2017 IEEE International Conference on Image Processing, ICIP 2017, Beijing, China, September 17-20, 2017. IEEE, 2017, pp. 2089–2093.

A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther, “Autoencoding beyond pixels using a learned similarity metric,” in Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, ser. JMLR Workshop and Conference Proceedings, M. Balcan and K. Q. Weinberger, Eds., vol. 48. JMLR.org, 2016, pp. 1558–1566.

Z. Zhang, Y. Song, and H. Qi, “Age progression/regression by conditional adversarial autoencoder,” CoRR, vol. abs/1702.08423, 2017.

J. Wu, C. Zhang, T. Xue, B. Freeman, and J. Tenenbaum, “Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling,” in Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, D. D. Lee, M. Sugiyama, U. von Luxburg, I. Guyon, and R. Garnett, Eds., 2016, pp. 82–90.

P. Henzler, N. J. Mitra, and T. Ritschel, “Escaping plato’s cave: 3d shape from adversarial rendering,” in 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019. IEEE, 2019, pp. 9983–9992.

T. Nguyen-Phuoc, C. Li, L. Theis, C. Richardt, and Y. Yang, “Hologan: Unsupervised learning of 3d representations from natural images,” in 2019 IEEE/CVF International Conference on Computer Vision Workshops, ICCV Workshops 2019, Seoul, Korea (South), October 27-28, 2019. IEEE, 2019, pp. 2037–2040.

B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng, “Nerf: representing scenes as neural radiance fields for view synthesis,” Commun. ACM, vol. 65, no. 1, pp. 99–106, 2022.

K. Schwarz, Y. Liao, M. Niemeyer, and A. Geiger, “GRAF: generative radiance fields for 3d-aware image synthesis,” in Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.

M. Niemeyer and A. Geiger, “GIRAFFE: representing scenes as compositional generative neural feature fields,” in IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021. Computer Vision Foundation / IEEE, 2021, pp. 11453–11464.

C. Vondrick, H. Pirsiavash, and A. Torralba, “Generating videos with scene dynamics,” in Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, D. D. Lee, M. Sugiyama, U. von Luxburg, I. Guyon, and R. Garnett, Eds., 2016, pp. 613–621.

Y. Zhou and T. L. Berg, “Learning temporal transformations from time-lapse videos,” CoRR, vol. abs/1608.07724, 2016.

W. Xiong, W. Luo, L. Ma, W. Liu, and J. Luo, “Learning to generate time-lapse videos using multi-stage dynamic generative adversarial networks,” in 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018. Computer Vision Foundation / IEEE Computer Society, 2018, pp. 2364–2373.

A. Clark, J. Donahue, and K. Simonyan, “Adversarial video generation on complex datasets,” 2019.

Y. Tian, J. Ren, M. Chai, K. Olszewski, X. Peng, D. N. Metaxas, and S. Tulyakov, “A good image generator is what you need for high-resolution video synthesis,” 2021.

I. Skorokhodov, S. Tulyakov, and M. Elhoseiny, “Stylegan-v: A continuous video generator with the price, image quality and perks of stylegan2,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022. IEEE, 2022, pp. 3616–3626.

Published

24-05-2023

Issue

Section

Articles

How to Cite

Long, X., & Zhang, M. (2023). An Overview of Generative Adversarial Networks. Journal of Computing and Electronic Information Management, 10(3), 31-36. https://doi.org/10.54097/jceim.v10i3.8677
