Ethical and Legal Considerations behind the Prevalence of ChatGPT: Risks and Regulations

Abstract: In response to the prevalence of ChatGPT, this paper summarizes its technical model, core capabilities, and application scenarios; analyzes ethical risks such as autonomy, moral anomie, human alienation, and value reconstruction, as well as legal risks concerning legal subject status, copyright ownership, data security, and the algorithmic black box; and then proposes regulatory principles of tool positioning, safety, legality, transparency, credibility, fairness, and shared responsibility, together with regulatory paths of encouraging the development of autonomous, safe, and controllable technologies, improving artificial intelligence legislation and adjudication, and establishing a diversified governance system.


Introduction
On November 30, 2022, OpenAI released ChatGPT-3.5, an artificial intelligence chatbot program that surged rapidly in popularity, accumulating over 100 million users within two months [1]. ChatGPT is a large natural language model that can understand natural language commands entered by humans and output responses that simulate human ones, exhibiting advanced, human-like cognitive abilities, for example literary creation, computational reasoning, copywriting and planning, and coding, which can greatly improve learning and work efficiency and even drive changes in modes of production. On March 14, 2023, ChatGPT-4 was introduced; compared with the previous version, it supports multimodal input of text and images and sharply improves the accuracy of its answers [2]. On March 24, ChatGPT removed networking restrictions and added plug-in support, enabling it to visit websites and applications to obtain real-time information [3], demonstrating the powerful evolutionary capability of artificial intelligence technology.
However, the rapid application of ChatGPT may also bring risks involving data security, intellectual property, and discrimination. To this end, many countries are exploring corresponding regulatory measures. For example, EU lawmakers plan to include binding clauses for advanced artificial intelligence tools such as ChatGPT in the Artificial Intelligence Act [4]; the Group of Seven (G7) has agreed to introduce a regulatory act for AI [5]; and the Cyberspace Administration of China issued the Administrative Measures for Generative Artificial Intelligence Services (Draft for Comments) on April 11 [6].
A change in modes of production inevitably leads to changes in social relations. As powerful artificial intelligence technologies such as ChatGPT develop rapidly and exert an increasingly significant impact on human production and life, the ethical and legal risks that may arise deserve calm reflection and active prevention. It is therefore necessary to explore the ethical and legal risks of ChatGPT and the corresponding regulatory paths, so that the technology can serve the development of human society.

Technology Model and Core Capabilities of ChatGPT

Technology Model of ChatGPT
ChatGPT is a chatbot program developed by OpenAI; its full name is "Chat Generative Pre-trained Transformer". It is essentially an AIGC (Artificial Intelligence Generated Content) technology that can generate text, images, video, code, and other forms of content from natural language commands. Compared with previous content generation technologies, the biggest feature and advantage of ChatGPT is that it can learn, understand, and analyse human natural language, and on this basis it exhibits a powerful anthropomorphic quality that can even make users feel that it has emotions.

Core Capabilities of ChatGPT
ChatGPT has powerful functions, can accomplish many types of complex tasks, and is likely to be widely used in scenarios such as business, education, law, government, medicine, media, art, and technology. These functions rest mainly on the following core capabilities:
1. Multimodal Capability. ChatGPT-3.5 supports only text input and recognition, while the rapidly iterated ChatGPT-4 achieves multimodal processing: it not only raises the limit on text input to 25,000 words but also supports image input. This gives ChatGPT more powerful information recognition and processing capability, allows it to answer questions in more forms, and will breed more application scenarios.
2. Information Comprehension Capability. Compared with other AI software, ChatGPT has a powerful capability to comprehend input information: it can recognize and analyse complex text and image information and extract the key elements, so as to precisely understand the user's intention and generate highly relevant, high-quality responses.
3. Logical Reasoning Capability. ChatGPT can clearly grasp the logical relationships between elements of the input text and images and reason according to the direction of the user's questions, showing an ability to think like a human; this lets ChatGPT complete more difficult tasks. ChatGPT-4 has achieved scores exceeding those of most humans on various standardized tests, for example ranking in the top 1% on the GRE verbal exam, the top 11% on the SAT math exam, and the top 10% on the bar exam [7].

4. Contextual Dialogue Capability
In continuous dialogue with users, ChatGPT can understand contextual information and infer the user's intention, giving natural, coherent, and targeted replies that may even carry a certain moral and emotional colour, so that users experience an interaction resembling a real conversation.
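The mechanism behind this contextual capability can be illustrated with a minimal sketch. This is an assumption about the typical client-side pattern for chat models, not OpenAI's actual implementation: the accumulated conversation history is resent with every turn, so the model can condition each reply on earlier exchanges. The `generate` function here is a hypothetical stand-in for the real language model.

```python
def generate(history):
    """Stand-in for the language model: in reality a neural network would
    condition on the full history; here we just report how many prior
    messages it can 'see'."""
    return f"(reply conditioned on {len(history)} prior messages)"

class Conversation:
    def __init__(self):
        self.history = []  # list of (role, text) pairs, resent every turn

    def ask(self, user_text):
        self.history.append(("user", user_text))
        reply = generate(self.history)  # the whole history is passed in
        self.history.append(("assistant", reply))
        return reply

chat = Conversation()
chat.ask("Who wrote Hamlet?")
# The second question only makes sense because the model also receives
# the first exchange as context.
print(chat.ask("When was he born?"))
```

Because the model itself is stateless between calls, the "memory" users perceive is produced entirely by replaying the transcript, which is also why very long conversations eventually exceed the model's input limit.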

5. Content Generation Capability
Based on learning from a massive amount of information in its database, ChatGPT has a rich store of knowledge and creative material. Through continuous reinforcement training of machine learning algorithms, ChatGPT can analyse, correlate, combine, and reconstruct the information in its database and generate creative content that meets the user's input instructions. It can create poetry, copywriting, plans, news, songs, code, and other topics and forms of content.

Ethical Risks of ChatGPT

1. The Dispute over Whether Powerful Artificial Intelligence Has a Sense of Autonomy
With its powerful natural language processing technology, ChatGPT can mimic human responses in the Turing test and converse naturally with the evaluator. So, does the powerful AI represented by ChatGPT have a sense of autonomy?
On this question, one school of thought holds that consciousness is a unique product of the human brain, generated by billions of neurons and their complex connections. Although current artificial intelligence attempts to endow machines with a sense of autonomy by simulating neural networks, the level of simulation achieved by computers is far below the complexity of human brain neural networks [8], so a machine sense of autonomy cannot be achieved technically. Although artificial intelligence can process text, numbers, and symbols, it cannot truly understand their meaning and only runs according to predetermined programs.
The other school takes the opposite view. Its proponents believe that research in brain neuroscience and artificial intelligence is advancing rapidly; each product iteration of artificial intelligence brings exponential improvement in parameters and performance; tasks once believed achievable only by human beings can now be completed more efficiently by artificial intelligence; and, with the development of natural language processing technology, machines can talk like people and even show a certain emotional colour. Therefore, the emergence of a sense of autonomy in artificial intelligence is not impossible [10]. Should artificial intelligence acquire a sense of autonomy, it would pose ethical challenges to human society, and it would be necessary to prevent it from engaging in behaviour that harms society.
2. Risk of Moral Anomie. ChatGPT does not at present have a sense of autonomy and therefore lacks human reflective ability and moral sense. Will ChatGPT nevertheless output unethical content? In its own words, "my response content is subject to strict rules and can only answer content that complies with law and morality; answering unethical content such as violence, pornography, discrimination, and bias is not allowed." When users enter sensitive words that violate law or morality, ChatGPT reminds and warns them.
At a technical level, ChatGPT uses reinforcement learning from human feedback: during model training, human annotators provide sample answers to certain questions, or score and rank the different answers output by the model, so that the model's answers better accord with human expectations. However, human annotators are influenced by their personal values and social culture; their moral values are not universal, and personal or social bias is difficult to avoid. Such bias is incorporated into ChatGPT's training, so that the output content may represent only the moral values of some societies or cultures and be biased against the morals of others.
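The way annotator rankings enter the model can be sketched with the pairwise ranking objective commonly used to train reward models from human preferences. This is an illustration of the general technique (a Bradley-Terry-style loss), not OpenAI's actual code: the answer the annotator ranked higher is pushed to receive a higher reward score, so any bias in the ranking becomes part of the training target itself.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ranking_loss(score_preferred, score_rejected):
    """Negative log-likelihood that the annotator-preferred answer outranks
    the rejected one. Minimizing this pushes the reward model to agree with
    the annotator's ranking -- including any bias the annotator holds."""
    return -math.log(sigmoid(score_preferred - score_rejected))

# When the reward model already agrees with the annotator, the loss is small;
# when it scores both answers equally, the loss is log(2), about 0.693.
print(ranking_loss(2.0, 0.5))  # model agrees with annotator: small loss
print(ranking_loss(1.0, 1.0))  # model is indifferent: loss = log(2)
```

Since the only supervision signal is the annotators' comparative judgments, the resulting reward model can encode no moral standard other than theirs, which is the technical root of the bias risk described above.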
3. Risk of Artificial Intelligence Dependence and Human Alienation. ChatGPT may lead humans into an "algorithmic comfort zone": when people use artificial intelligence, the algorithm learns their choice preferences and forms a comfort zone by outputting results that cater to those preferences, which shapes human psychology and behaviour [11]. Inside this comfort zone, individuals receive ever more information consistent with their existing cognitive system and values; their original cognitive structure is continuously solidified into an "information cocoon" that rejects new knowledge and plural values.
On the basis of powerful natural language processing technology, ChatGPT can talk to users in anthropomorphic language and even reply with emotional content; it is no longer a cold machine but seems to possess some kind of personality. This may lead users, over long-term conversations, to build emotional connection and dependency with ChatGPT and to develop a social relationship with it, raising ethical issues about man-machine relationships [12].
In the digital society, everything, including human cognition, behaviour, and even emotion, can be digitized, and algorithms can make decisions from these data much as humans do. This may push society from being human-centered to data-centered, with the alienation risk that humans become 'digital humans' [13]. It is also possible that, while we attempt to make machines think like humans, humans come to think like machines and gradually lose their morals, emotions, and values [14].

4. Risk of Reconstruction of Human Subject Value
Changes in production technology bring changes in productivity and production relations. In an artificial intelligence society, the mental labour of more and more industries and positions can be made greatly more efficient, or even completely replaced, with the help of artificial intelligence, and the value of the human being is challenged.
With the rapid development of artificial intelligence technology, new social stratification will emerge: the few who master artificial intelligence technology, build artificial intelligence machines, and command artificial intelligence capital will become a special class, using artificial intelligence to create enormous economic value. Under such a system, most people can hardly create enough economic value and may end up in a "useless class" [15].
Computing power is the core productive factor of artificial intelligence; its increasingly wide application requires ever more powerful computing power to sustain it, and more and more computing devices and resources must be invested. One concern is that, as this trend develops, artificial intelligence will no longer be a tool serving humans; on the contrary, humans may become tools providing computing resources for artificial intelligence, and human subjectivity will be hollowed out [16]. Human subject value therefore faces the risk of reconstruction. Artificial intelligence may provoke a crisis of human identity [17]; humans will have to rethink their natural and social values as well as the unique values that distinguish them from artificial intelligence, and this will certainly drive major changes in social relations and social structures.

Legal Risks of ChatGPT

1. Whether Powerful Artificial Intelligence Can Become a Legal Subject
The current legal systems of countries around the world have not established clear rules on the legal subject status of artificial intelligence, and legal scholarship on this issue remains inconclusive [18]. The main views are as follows. One holds that powerful artificial intelligence does not at present have a human-like sense of autonomy; it does not know the significance of its output content and behaviour, but merely operates on the basis of predetermined algorithms. Lacking rationality, empathy, and a sense of responsibility, it cannot assume legal responsibility and therefore does not have the status of a legal subject [19].
The opposite view is that, although powerful artificial intelligence temporarily lacks a sense of autonomy, it does have some capacity for autonomous judgment, decision making, and behaviour, and it should be responsible for the consequences of that behaviour; should it exhibit behaviour that jeopardizes society, it should also bear legal liability [20]. Moreover, given its rapid learning and evolutionary ability, powerful artificial intelligence could transcend its algorithms and develop a sense of autonomy. Therefore, in order to curb the social and legal risks it may bring, it should be given the status of a legal subject, so that it can be better regulated and serve human well-being [21].
2. The Issue of Copyright Ownership of Generated Works. As an AIGC type of artificial intelligence, ChatGPT raises two major debates over the copyright ownership of its generated content. The first is whether the generated content can be recognized as a work. Article 3 of China's Copyright Law defines a "work" as "intellectual achievements in the fields of literature, art and science that have originality and can be expressed in a certain form"; the core of recognition is thus the element of "originality". Some argue that ChatGPT's content is merely the duplication or reconfiguration of information in the training database, produced by specific algorithms [22]: a kind of procedural creation without originality. Other scholars counter that human creative inspiration does not appear without foundation either, being likewise based on the learning of extensive knowledge; since ChatGPT's algorithm simulates the neural network of the human brain, the content produced through deep learning has a certain originality [23] and can be regarded as a "work". The second is whether ChatGPT can be the copyright owner. Article 9 of the Copyright Law provides that copyright owners include authors and other natural persons, legal persons, or unincorporated organizations that enjoy copyright according to law. On this basis, the developer of the artificial intelligence, as a legal entity, may claim ownership of the generated content. In 2019, Tencent sued Yingxun Company for infringing the copyright of works generated by Dreamwriter, an artificial intelligence robot developed by Tencent, and the People's Court of Nanshan District, Shenzhen supported the claim [24]. However, ownership also depends on the agreement between the software developer and the user.
According to OpenAI's terms of use, OpenAI assigns to the user all rights to the content generated by ChatGPT, provided the user complies with those terms [25], and does not claim copyright in the generated content.
Given the controversy, the attribution of rights in generated content is not provided for in China's Administrative Measures for Generative Artificial Intelligence Services (Draft for Comments). The United States Copyright Office, in Part 202 of Title 37 of the Code of Federal Regulations, has clearly stated that works generated entirely by artificial intelligence platforms such as ChatGPT are not protected by copyright law [26]. This rule plays an exemplary role for relevant legislation worldwide.
3. Data Compliance and Security Risks. ChatGPT may entail compliance risks in its data sources. It is a large language model trained on a vast database, and although OpenAI states that the data come from public information on the web, it remains unclear what that public information comprises, whether the methods used to access it were lawful, whether the information was genuinely publicly available, and whether authorization was required. Should a developer deliberately or inadvertently incorporate unauthorized materials, questions about the compliance of data sources would surface; the generated content may draw on those unauthorized materials, giving rise to intellectual property infringement disputes.
ChatGPT also poses security risks for personal information protection. Its terms of use stipulate that OpenAI has the right to use the information entered by users. Users may enter personally identifiable or private information while using ChatGPT; such information could be retained by the software or employed for training, raising data security concerns and the risk of violating the EU's General Data Protection Regulation and China's Personal Information Protection Law.
There is also a risk of trade secret leakage. As ChatGPT is integrated into office software and used increasingly in work scenarios, people are bound to use it to handle large amounts of business information, which may include trade secrets, creating security risks. Some companies have therefore expressly prohibited employees from using ChatGPT to process confidential work-related information [27].
There is, finally, a risk of illegal use. ChatGPT can write code and may be used by lawbreakers to write malicious code and develop malware for illegal gain. ChatGPT can also find vulnerabilities in websites or software, which hackers can exploit to launch network attacks on targets, harming network security [28].
4. Risks of the Algorithmic Black Box and Algorithmic Hegemony
The algorithmic black box of artificial intelligence refers to the non-openness and opacity of the process between an algorithm's input information and its output results [29]. There are three main reasons for it. First, algorithms are highly technical and complex, and their development requires large investments of talent and resources; as intellectual property, they are something developers are reluctant to open up, for reasons of commercial interest. Second, even when algorithm code is revealed, the vast majority of ordinary users, and even technicians, cannot fully understand all of it. Third, parts of algorithms based on neural network technology are formed by deep automatic learning [30] rather than written by humans; developers cannot fully grasp their learning process or accurately predict their outputs, so these algorithms are inherently black boxes.
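The third, inherent source of opacity can be made concrete with a toy example. The weights below are chosen by hand purely for illustration; real models learn billions of such parameters. Even a two-neuron network that computes XOR exactly reveals nothing about that rule when one inspects its raw parameters: the logic is distributed across the numbers rather than written anywhere as a rule.

```python
import numpy as np

# Hand-fixed parameters (illustrative stand-ins for learned weights) that
# compute XOR exactly. Nothing in the raw values of W1, b1, W2 states the
# rule "output 1 iff the inputs differ" -- that is the essence of the
# black box, magnified enormously in large learned models.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])

def forward(x):
    hidden = np.maximum(0.0, x @ W1 + b1)  # ReLU hidden layer
    return float(hidden @ W2)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(f"{a} XOR {b} -> {forward(np.array([a, b], dtype=float))}")
# prints 0.0, 1.0, 1.0, 0.0
```

Here the behaviour can still be verified exhaustively because there are only four inputs; for a model with billions of parameters and an unbounded input space, neither the developer nor a regulator can read the rule off the weights, which is why the text below turns to external review and verifiability requirements.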
The algorithmic black box may lead to algorithmic hegemony. The complexity and opacity of artificial intelligence algorithms mean they can be mastered only by a very small number of technicians or companies; since algorithms are a powerful tool of productivity, those who master them wield algorithmic power, which, if not effectively regulated, may even lead to algorithmic hegemony.

Basic Principles of Risk Regulation of ChatGPT
In order to guarantee the healthy development of artificial intelligence and to develop and use it in accordance with legal and ethical norms, China has issued a series of guidance documents: the State Council's New Generation Artificial Intelligence Development Plan (2017), the Ministry of Science and Technology's Governance Principles for a New Generation of Artificial Intelligence: Developing Responsible Artificial Intelligence (2019), the Ministry of Science and Technology's Ethical Norms for a New Generation of Artificial Intelligence (2021), and the Cyberspace Administration of China's Administrative Measures for Generative Artificial Intelligence Services (Draft for Comments) (2023). Drawing on these documents, the regulation of the ethical and legal risks of ChatGPT should follow the basic principles below.
1. Principle of Tool Positioning. Artificial intelligence does not yet have a sense of autonomy, but in order to prevent the risk that it slips out of human control and threatens human civilization, its positioning should be regulated in advance: the instrumental positioning of artificial intelligence should be made clear and adhered to. Artificial intelligence should always remain under human control; man-machine collaboration should be harmonious; and humans should retain the right to choose whether to use artificial intelligence and the right to withdraw from and terminate its operation at any time. The development and application of ChatGPT technology should aim at promoting human welfare, accord with common human values and ethics, and remain people-oriented, so that technology is used for and serves people and continuously promotes the development of productivity and social progress.

2. Principle of Safety and Lawfulness
Artificial intelligence technology development and product services should be carried out in accordance with laws and regulations; artificial intelligence should not be abused to engage in activities that harm national security and social stability, infringe the legitimate rights and interests of others, or violate morals, ethics, laws, or regulations.
ChatGPT should ensure the compliance and legality of its training data sources and should not use non-public or unauthorized data. It should respect and protect users' personal privacy, fully guarantee users' rights to know and to consent at every stage of collecting, storing, and processing personal information, strengthen data security monitoring, and prevent the leakage of users' personal and commercial information. It should also guarantee users' "right to be forgotten" over their data and improve the mechanism for withdrawing authorization over personal data.
3. Principle of Transparency and Reliability. The algorithmic black box should be broken open: the development and application of ChatGPT's algorithms should improve transparency, interpretability, and understandability; achieve verifiability, oversight, traceability, and predictability within the "black box" through proactive disclosure or independent agency review; ensure the compliance, legality, fairness, and justice of the algorithm; and increase the trust of users and society in artificial intelligence algorithms.

4. Principle of Fair Sharing
The development of artificial intelligence should promote fairness and impartiality. ChatGPT should minimize data and algorithmic discrimination to the greatest extent possible in data training and algorithm development and ensure users' right of fair access to ChatGPT. Artificial intelligence should be popularized and shared throughout society in order to bridge the digital divide, narrow developmental disparities across regions, and share the gains of artificial intelligence development. To prevent technological hegemony arising from technological blockade and monopoly, it is necessary to strengthen open collaboration, joint research and development, and sharing among governments, enterprises, and research institutions, and to promote the development of artificial intelligence technology and the progress of human civilization.
5. Principle of Shared Responsibility. The developers, users, and managers of ChatGPT should strengthen their self-discipline and sense of responsibility and strictly abide by the laws, regulations, and ethical norms related to artificial intelligence. An accountability mechanism for artificial intelligence should be established that upholds humans as the subjects of responsibility, clarifies the responsibilities of developers, users, managers, and other relevant subjects so that each performs its own duties and bears its own accountability, vigilantly forestalls the ethical and legal risks artificial intelligence may produce, promotes shared and full-cycle governance, and always ensures that the development of artificial intelligence serves to enhance human well-being.

Regulatory Paths for the Risks of ChatGPT

1. Encouraging the Development of Autonomous, Safe, and Controllable Technologies
Artificial intelligence technology, represented by ChatGPT, may bring changes in modes of production and leaps in productivity; a country's advancement in artificial intelligence will increasingly shape its economic development and its position in international competition. On March 16, 2023, Baidu launched its large language model "ERNIE Bot", with five core functions: literary creation, commercial copywriting, mathematical and logical reasoning, Chinese comprehension, and multimodal generation. However, its actual performance falls far short of ChatGPT's [31]. This reflects a remaining gap between China's current technological level in artificial intelligence and that of the United States. In addition, owing to the technological blockade by the United States, China faces a "bottleneck" in artificial intelligence computing infrastructure, namely chips and their import. It is therefore imperative to actively develop autonomous, secure, and controllable artificial intelligence technologies.
The government should improve the industrial layout of artificial intelligence, provide policy guidance and support to the artificial intelligence industry, encourage qualified enterprises and research institutions to invest actively in the field, and increase both research and development of underlying technologies and the construction of infrastructure. It should support the development and application of AI technologies through dedicated funding, major scientific projects, tax incentives for businesses, loans, financing, and other means; continue to shore up weak links in core technology; independently research and manufacture chips; improve the basic computing power available for artificial intelligence; break through the technical blockade; and solve the "bottleneck" problem.
Under the market economy system, enterprises, especially the Internet technology giants, should become the main actors in artificial intelligence technology development and industrialization. They should follow policy guidance, take full advantage of government policy and financial support, step up investment in the R&D of fundamental hardware and core technologies, accelerate the deployment of relevant industries, promote the transformation of technological achievements into commercial applications, form a virtuous cycle between economic benefit and scientific research and development, continuously narrow the technological gap with the world's leading companies, and master independent, controllable, and advanced artificial intelligence technologies.
As individuals in the era of artificial intelligence, we should keep up with technological and social progress and fully recognize that AI has become a key productive tool deeply embedded in our work and lives. We should maintain an open and tolerant attitude towards artificial intelligence, avoid excessive worry about its transformative impact, actively learn its technical fundamentals, master the use of artificial intelligence products and man-machine collaboration, improve learning and working efficiency, and raise living standards and well-being.
2. Improving Artificial Intelligence Legislation and Adjudication. With the accelerated development and extensive adoption of AI technologies, awareness of their potential hazards has heightened, and various countries have enacted laws and regulations to govern AI development. In January 2021, the United States enacted the National Artificial Intelligence Initiative Act, which lays out a top-level design for the development of artificial intelligence, sets up a series of special agencies, establishes a systematic top-down management mechanism, and enhances the development and governance of artificial intelligence [32]. In March 2023, the EU published the Artificial Intelligence Act. Premised on European ethical and legal norms, it categorizes the risks of AI systems into four tiers: minimal risk, limited risk, high risk, and unacceptable risk; adopts corresponding regulatory measures centered on governance at the source of AI; and sets a strict pre-review process.
In China, in November 2022, the Cyberspace Administration of China, together with the Ministry of Industry and Information Technology and the Ministry of Public Security, released the Provisions on the Administration of Deep Synthesis of Internet Information Services, which proposed a systematic governance scheme to prevent the risks of AI-generated content [33]. In April 2023, the Cyberspace Administration of China formulated the Administrative Measures for Generative Artificial Intelligence Services (Draft for Comments) on the basis of the Cybersecurity Law, the Data Security Law, the Personal Information Protection Law, and other laws and administrative regulations; however, the relevant stipulations remain to be further refined.
At the judicial level, the rapid development of artificial intelligence technology and the lag of legal regulation are bound to raise new issues and new challenges for judicial practice. The judiciary should step up professional training in the law of artificial intelligence; promote cooperation and exchange, the study and trial of cases, and the summarization of case experience; keep pace with the times in applying AI technologies to judicial practice; continuously improve judicial capacity and efficiency; hear AI-related cases, such as intellectual property infringement, personal information leakage, and commercial disputes, in accordance with the law; improve judicial construction in the field of artificial intelligence; and promote the healthy development of the artificial intelligence industry.
3. Building a Diversified Governance System. Addressing the ethical and legal risks posed by AI technologies such as ChatGPT requires a multi-pronged governance framework in which the government, companies, research institutions, and users share responsibility.
As coordinator, the government should produce a top-down design for an administrative system for AI development and systematically govern AI development, deployment, and regulation at various levels, including macro policies, legal systems, and administrative regulations. From the perspective of dynamic governance, the government can exercise oversight over the entire process: before, during, and after the event. First, in ex ante regulation, the government should set technical standards and access thresholds for the artificial intelligence industry, strictly check the business qualifications, workforce composition, technological capability, capital status, hardware facilities, safety norms, records of violations, and other information of artificial intelligence technology companies, and admit those that meet the conditions while rejecting those that do not. Second, in supervision during operation, the government should set up its own or third-party independent regulators to monitor potential ethical and legal hazards across all aspects of AI, including algorithm development, data storage and processing, and business deployment, pierce the algorithmic black box, and promptly identify and regulate unlawful conduct by developers and users. Third, in ex post supervision, illegal and irregular behaviours in a company's development and application of AI, such as violations of ethics, data leakage, and intellectual property infringement, should be strictly handled according to law.
According to the concept of meta-regulation, enterprises and scientific institutions, as the primary actors driving AI technology development and application, should reinforce self-regulation [34]. First, they should perfect internal management systems, such as technical, ethical, and safety norms for AI development and application, in accordance with the law, and set up dedicated internal supervision departments to oversee the whole process of technology development and application. Second, during development they should use technical means and strengthen management to obtain data through lawful channels, ensure secure data storage, process data in compliance with laws and regulations, and avoid ethical risks such as algorithmic discrimination. Furthermore, they should actively fulfil their social responsibility through disclosure mechanisms: opening information on data sources, storage, and processing; actively disclosing possible negative impacts on society, users, and other stakeholders in the development and application of artificial intelligence technology; and accepting public supervision.
Users of ChatGPT and other artificial intelligence tools should weigh the risks of the technology reasonably, seek its benefits while avoiding its harms, strengthen their own moral and legal self-restraint, and use artificial intelligence products in accordance with the law; the abuse of artificial intelligence to engage in network attacks, data theft, intellectual property infringement, monopolistic discrimination, leakage of commercial secrets, illegal operations, and other acts that harm national security or social stability or infringe the legitimate rights and interests of others must be strictly prohibited.

Conclusion
ChatGPT currently represents the most advanced large-scale natural language processing model. It has five core capabilities: multimodal processing, information comprehension, logical reasoning, contextual dialogue, and content generation, and it has wide application prospects in scenarios such as business, education, law, government, medicine, media, art, and technology. However, the ethical and legal risks behind ChatGPT's surge deserve equal consideration: ethical risks such as the dispute over a sense of autonomy, moral anomie, human dependence and alienation, and the reconstruction of human subject value; and legal risks such as disputes over legal subjectivity, intellectual property rights in generated content, data compliance and security, the algorithmic black box, and algorithmic hegemony. To regulate these risks effectively, it is necessary to follow the principles of tool positioning, safety and lawfulness, transparency and reliability, fair sharing, and shared responsibility; encourage the development of autonomous, safe, and controllable technologies; improve artificial intelligence legislation and adjudication; and build a diversified governance system.