Effect Analysis Based on GAN and CGAN Comparison
DOI: https://doi.org/10.54097/hset.v39i.6750

Keywords: Image Generation; Deep Learning; GAN; VAE

Abstract
Image generation has long been a popular research direction in the computer vision community; it aims to learn the distribution of a given data set in order to generate realistic images that obey this distribution. Owing to the rapid development of convolutional neural networks, deep-learning-based image generation has made breakthroughs in both accuracy and speed. For different scenarios, however, the results of the same generation algorithm may differ dramatically. To probe the application limitations of various algorithms, this paper takes the MNIST data set as the research object and chooses two representative generation algorithms, GAN and CGAN, to construct models and analyze their generation effects. We first present the basic GAN and CGAN models, including their structures and differences. By comparing the two models, we find that the extra conditional controls greatly reduce the randomness of the generated images. Finally, the paper analyzes the effect of CGAN handwritten-digit generation and shows that, under the same number of training epochs, different digits are generated with different quality.
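The structural difference the abstract describes — a CGAN conditions the generator (and discriminator) on the class label, whereas a plain GAN sees only noise — can be sketched as follows. This is a minimal illustration of the conditional input construction only, with an assumed 100-dimensional noise vector and one-hot label encoding; it is not the paper's implementation.

```python
import numpy as np

LATENT_DIM = 100   # noise vector size (a typical choice, assumed here)
NUM_CLASSES = 10   # MNIST digits 0-9

def one_hot(label, num_classes=NUM_CLASSES):
    """Encode a digit label as a one-hot vector."""
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

def generator_input(z, label=None):
    """GAN: the generator receives only the noise vector z.
    CGAN: the label's one-hot code is concatenated to z, so the
    generator can be steered to produce a chosen digit instead of
    a random one."""
    if label is None:                           # plain GAN
        return z
    return np.concatenate([z, one_hot(label)])  # CGAN

rng = np.random.default_rng(0)
z = rng.standard_normal(LATENT_DIM)

gan_in = generator_input(z)             # shape (100,): unconditioned
cgan_in = generator_input(z, label=7)   # shape (110,): conditioned on digit 7
```

The extra label channel is what removes the randomness in which digit is produced: the same noise vector with label 7 always requests a "7", which matches the abstract's observation that the conditional controls greatly reduce the randomness of the generated images.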
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.