Generative AI restructures knowledge production, human thinking
Author: Liu Shuwen and Guo Liang | Source: Chinese Social Sciences Today | 2023-05-05
Generative AI is a class of technologies that creates new content from existing text, audio, or images. In November 2022, OpenAI launched ChatGPT, a generative AI that caused a worldwide sensation with its strikingly human-like performance. The recently launched GPT-4 is a multimodal AI model. Building on reinforcement learning from human feedback (RLHF), GPT-4 handles conversion and mutual “understanding” across text, images, audio, and video more capably, marking another breakthrough in the field of generative AI. People initially marveled at the capabilities of GPTs (generative pre-trained transformers) and similar models, but have now begun to contemplate the technology’s broad societal implications.
Generative AI’s various challenges
Generative AI is on the verge of entering the realms of “creation” and ethics. “Creation” has long been regarded as a capacity exclusive to humans; now, generative AI can participate in it to some extent. In one sense, “creation” can be understood as the generation of new content and products by learning key elements from data and information. Generative AIs built on a self-attention mechanism are capable not only of identifying relationships among texts, sentences, or code, but also of carrying out multiple rounds of human-computer dialogue that build on earlier exchanges, thereby enhancing their own learning ability and creativity. To mitigate bias, mainstream generative AIs usually follow pre-designed ethical guidelines and can politely decline “inappropriate requests,” so as to align with human cognitive needs and values as far as possible during knowledge production.
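For readers unfamiliar with the self-attention mechanism mentioned above, the following is a minimal illustrative sketch in Python using NumPy. It shows only the core idea of scaled dot-product attention, in which every token weighs its relevance to every other token; it is not the implementation of any particular model, and the matrix names and toy dimensions are assumptions chosen for clarity.

```python
# Minimal sketch of scaled dot-product self-attention, for illustration only.
# Variable names (Wq, Wk, Wv) and dimensions are illustrative assumptions.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # queries, keys, values
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise relevance of tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # each token mixes in context

# Toy usage: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
context = self_attention(X, Wq, Wk, Wv)              # shape (4, 8)
```

It is this relevance-weighting over an entire context that lets such models track relationships across a dialogue rather than treating each sentence in isolation.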
Generative AI raises legal risks and challenges in three key aspects. The first concerns the ownership of AI-generated intellectual property. The use of generative AI has further blurred the “boundary between programmed algorithms and independent thinking,” and questions about who owns AI-generated content, and how related conflicts should be resolved, have not been settled. This has sparked widespread concern and heated debate among legal scholars and practitioners. The second aspect is liability for misinformation and malicious content. While mainstream generative AIs are designed to avoid delivering discriminatory, prejudicial, insulting, or violent content, they can nonetheless generate malevolent, fake, or illegal information if users deliberately bypass their rules. The third aspect is the legality of data sources. Because the data used to train generative AIs lacks transparency, the legality of their operation will be questioned if specific information is used without permission.
The use of generative AI can impact human learning and critical thinking skills. ChatGPT, for example, relies on large language models (LLMs) and extensive data training; while its output can be checked, the accuracy of its knowledge production process is difficult to verify. Generative AI can provide convenient basic knowledge production services and help human users increase the efficiency of knowledge production. However, over-reliance on the technology may turn users into passive recipients of generative AI and its output, leading to “intellectual laziness.” To some extent, this can weaken or even strip away humans’ ability to learn, understand, and question. If “tittytainment” is a “low-end replacement” for meaningful entertainment content, generative AI might become a “middle-end” or even “high-end replacement” at the intellectual level.
Generative AI affects the way humans think. Unlike traditional search engines, which list a variety of possible answers from which users may choose, mainstream generative AIs present what they consider the “best answer.” Many users regard the answers proposed by generative AI as “reasonable,” even though the systems cannot cite definite and direct evidence. Shifting away from the old Internet model based on open or semi-open information toward a new model driven by data and algorithms, generative AI is essentially a “black box”: no longer open to the public and impossible for search engines to index. This represents a rather radical reconstruction that will profoundly affect how humans think.
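The contrast drawn above can be made concrete with a toy sketch: a search engine exposes a ranked list of candidates for the user to judge, while a generative model commits to the single continuation it scores highest (greedy decoding). The candidate titles and probabilities below are invented purely for illustration.

```python
# Toy contrast between the two answer styles described above.
# All data here is made up for illustration.

search_results = [                     # ranked list the user chooses from
    ("Encyclopedia entry", 0.91),
    ("Forum discussion", 0.74),
    ("News article", 0.65),
]

token_probabilities = {                # model's next-word distribution
    "reasonable": 0.46,
    "plausible": 0.31,
    "uncertain": 0.23,
}

# The search engine exposes all candidates and their ranking ...
for title, score in search_results:
    print(f"{score:.2f}  {title}")

# ... whereas greedy decoding surfaces only the top-probability choice.
best_answer = max(token_probabilities, key=token_probabilities.get)
print("Model's 'best answer':", best_answer)
```

The difference is that the ranked list keeps the act of selection, and therefore of judgment, with the user, while the single “best answer” hides the alternatives that were never shown.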
Generative AI changes the way human knowledge is produced. Currently, mainstream generative AIs are supposed to comply with preset ethical guidelines and emotional bottom lines in their knowledge output. However, they lack explicit “regulating algorithms” to meet social needs and suffer from many problems involving social prejudice, hallucination, and adversarial prompting. The probabilistic language produced by generative AI (the “most reasonable combination of words,” which often amounts to platitude) and its patchwork innovation seem to disrupt the systematicity and balance of traditional human knowledge production and knowledge authentication, and to a certain degree restrain the free development of human knowledge production. In brief, too much “unverified” knowledge might enter the human knowledge system and, with the aid of generative AI, present itself from a position of authority, thus restructuring how human knowledge is produced.
Ensuring guidance, control over AI systems
The challenges brought by generative AI are thus multi-dimensional and future-oriented. A viable basic strategy for coping with them is to keep the impact that opaquely operating generative AI has on humans within an acceptable scope, so that humans decide and influence AI, not the other way around. Ensuring proper guidance and targeted control over AI systems through legal frameworks, technical means, and ethical regulations is an important measure for responding to the disruptive reconstruction that generative AI brings.
Liu Shuwen is from the School of Marxism at Soochow University. Guo Liang is from the Research Center for Technology and Law at Zhejiang University.