Collaborative Integration of Large Language Models and Computer Algebra Systems for Simulation Verification and Code Generation
DOI: https://doi.org/10.54097/hw85q020

Keywords: LLMs, scientific computing, deep learning

Abstract
This paper explores the transformative role of Large Language Models (LLMs) in scientific computing and research. As scientific datasets increasingly contain unstructured or semi-structured text, preprocessing and structuring these data pose a crucial challenge. LLMs have shown great potential in automating data extraction, transforming raw text into structured formats suitable for computational models. By applying techniques such as Named Entity Recognition (NER), LLMs can extract critical information such as chemical names, experimental conditions, and research findings, significantly reducing manual effort. Beyond data preprocessing, LLMs facilitate literature reviews by rapidly scanning large bodies of research papers, summarizing key insights, and constructing knowledge graphs that visualize relationships across complex datasets. LLMs also contribute to problem-solving by generating theoretical insights and assisting with mathematical and computational tasks. Finally, LLMs can support scientific programming by automating code generation for data analysis and simulations. Together, these capabilities enhance efficiency, lower the barrier to managing complex scientific data, and ultimately accelerate research across multiple scientific domains.
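As an illustration of the LLM-CAS collaboration named in the title, the following is a minimal Python sketch of a verification loop: an LLM proposes a symbolic answer and SymPy, standing in for the computer algebra system, checks it by differentiation. The ask_llm() helper is hypothetical; its hard-coded reply imitates a real chat-completion call, which would replace it in practice.

```python
# Minimal sketch of an LLM-CAS verification loop, assuming SymPy as the CAS.
# ask_llm() is a hypothetical stand-in for any chat-completion client.
import sympy as sp

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real API client here."""
    return "(x - 1)*exp(x)"  # canned reply standing in for a model answer

x = sp.symbols("x")
target = x * sp.exp(x)  # we want an antiderivative of this expression

reply = ask_llm(f"Give an antiderivative of {target} in SymPy syntax.")
candidate = sp.sympify(reply)  # parse the model's text into a SymPy expression

# CAS-side check: differentiate the candidate and compare symbolically.
residual = sp.simplify(sp.diff(candidate, x) - target)
print("verified" if residual == 0 else f"rejected, residual: {residual}")
```

The same pattern extends to generated simulation code, whose numerical output can be compared against a closed-form CAS solution on sample inputs before the code is trusted.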