A Deep Learning-Based Sentiment Analysis System for Tourist Attraction Reviews
DOI: https://doi.org/10.54097/066t1j26

Keywords: Deep learning, Sentiment analysis, BERT, Bi-LSTM, Attention mechanism

Abstract
In recent years, with the rapid development of information technology, online travel reviews have become a key source of information for tourists learning about destinations. To cope with this massive volume of unstructured review data, this paper proposes a deep learning-based sentiment analysis system for tourist attraction reviews. The system builds on the pre-trained BERT model and combines it with a Bi-LSTM network and an attention mechanism to improve the accuracy of sentiment classification. To verify its effectiveness, experiments were conducted on review data for scenic spots in Hubei Province, evaluated on accuracy, precision, recall, and F1-score. The system achieves an overall accuracy of 93.5%, with a precision of 94.1% on positive reviews and 92.8% on negative reviews, a recall of 92.1%, and an F1-score of 92.8%. These results indicate that the system can reliably identify the sentiment of tourist attraction reviews, giving scenic-spot managers a sound basis for adjusting service strategies in a timely manner, helping tourists make better-informed decisions, and offering a technical reference for the digital transformation and intelligent management of the tourism industry.
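The pipeline described above (BERT token embeddings → Bi-LSTM → attention pooling → sentiment classifier) hinges on the attention step, which collapses the Bi-LSTM's per-token hidden states into a single review vector. The sketch below is illustrative only, not the authors' implementation: it assumes the common additive-attention formulation, and the shapes, weight names (`W`, `v`), and random inputs are stand-ins for the real BERT/Bi-LSTM outputs.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, W, v):
    """Additive attention over Bi-LSTM hidden states.

    H: (T, 2d) forward/backward states concatenated per token
    W: (2d, a) projection matrix; v: (a,) scoring vector
    Returns the (2d,) attention-weighted review vector and the weights.
    """
    scores = np.tanh(H @ W) @ v   # (T,) unnormalized relevance per token
    alpha = softmax(scores)       # attention weights, sum to 1
    return alpha @ H, alpha       # weighted sum of hidden states

# Toy example: 5 tokens, Bi-LSTM hidden size d=4 (so 2d=8), attention dim 4
rng = np.random.default_rng(0)
T, d2, a = 5, 8, 4
H = rng.standard_normal((T, d2))
c, alpha = attention_pool(H, rng.standard_normal((d2, a)),
                          rng.standard_normal(a))
assert np.isclose(alpha.sum(), 1.0) and c.shape == (d2,)
```

In the full system, `c` would feed a dense softmax layer producing the positive/negative label, and `W` and `v` would be learned jointly with the Bi-LSTM during fine-tuning.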
License: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.