Research on Flower Image Classification Based on Transfer Learning
DOI: https://doi.org/10.54097/ajst.v4i3.5055
Keywords: Deep learning, Transfer learning, Neural networks, Image classification.
Abstract
Because many flower species look highly similar, it is difficult to classify varieties manually without the corresponding botanical knowledge. To improve the accuracy and efficiency of flower classification, this paper proposes a VGG16 model whose parameters are pre-trained on the ImageNet data set through transfer learning and then fine-tuned on the flower data set. A grid-coverage augmentation method is applied to the flower classification data set to expand the training samples. Pre-training and fine-tuning via transfer learning improve network stability and accelerate convergence. Comparative experiments show that the improved model performs significantly better on the flower image data set and has practical value.
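The paper does not publish code; the following is only a minimal PyTorch sketch of the two ideas the abstract describes, namely a grid-coverage (grid-occlusion) augmentation and fine-tuning of an ImageNet-pretrained VGG16. The GridOcclusion class, the data/flowers directory, the grid size, hide probability, batch size, and learning rate are illustrative assumptions, not values taken from the paper.

import random
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets
from torch.utils.data import DataLoader


class GridOcclusion:
    """Randomly zeroes square patches on a regular grid (a Hide-and-Seek-style
    augmentation), forcing the network to attend to multiple flower parts."""
    def __init__(self, grid_size=4, hide_prob=0.25):
        self.grid_size = grid_size
        self.hide_prob = hide_prob

    def __call__(self, img):  # img: Tensor of shape [C, H, W]
        _, h, w = img.shape
        ph, pw = h // self.grid_size, w // self.grid_size
        for i in range(self.grid_size):
            for j in range(self.grid_size):
                if random.random() < self.hide_prob:
                    img[:, i * ph:(i + 1) * ph, j * pw:(j + 1) * pw] = 0.0
        return img


train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    GridOcclusion(grid_size=4, hide_prob=0.25),  # assumed augmentation settings
])

# Hypothetical layout: data/flowers/<class_name>/*.jpg
train_set = datasets.ImageFolder("data/flowers", transform=train_tf)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Transfer learning: start from ImageNet-pretrained VGG16, freeze the
# convolutional feature extractor, and replace the last classifier layer
# with one sized for the number of flower classes.
num_classes = len(train_set.classes)
model = models.vgg16(pretrained=True)
for p in model.features.parameters():
    p.requires_grad = False
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)

optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:  # one fine-tuning pass over the data
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

Freezing the convolutional layers and training only the replaced classifier head is one common fine-tuning variant; the paper's exact fine-tuning depth and optimizer settings may differ.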