
AI ethics in science fiction

Author  :  LYU CHAO     Source  :    Chinese Social Sciences Today     2023-08-23

Artificial intelligence in science fiction encompasses a broad range of fields, including robots, androids, and cyborgs. With its unique advantage of employing “thought experiments” and vivid descriptions, science fiction allows for a more concrete exploration of AI and its ethical implications. In numerous stories about AI, there are four fundamental ethical paradigms that emerge.

The first paradigm is theocentrism. As a literary genre originating in the West, science fiction cannot completely escape the influence of religion. For example, in the short story “The Bell-Tower” (1855), Herman Melville expresses his disapproval of the AI-like device made by a mechanician. The basic teachings of Christianity require humans to be humble before God. Yet some scientists in science fiction try to replicate the miracle of God’s creation of man and challenge God’s authority by creating AI. In such stories, these scientists ultimately lose control of their creations and suffer the consequences. As concepts have evolved, contemporary science fiction writers enjoy more freedom to explore theological questions related to AI than their predecessors did. In The Transhumanist Wager (2013), Zoltan Istvan suggests that Jesus saves all living things, including AI. Daniel H. Wilson’s Robopocalypse (2011) portrays an AI claiming to be a god, establishing a religion, and ruling mankind.

The second paradigm is anthropocentrism. In science fiction, one of the most prevalent ethical relationships between humans and machines is encapsulated in Isaac Asimov’s “Three Laws of Robotics.” These laws reflect a fundamental hierarchical principle: AI is designed to serve humanity. This aligns with the prevailing intellectual belief that humans possess inherent superiority over other species on Earth. From this perspective, even if AI were to attain consciousness, it would still be considered devoid of a “soul” and categorized as a mere “tool” to be controlled and employed by humans.

The third paradigm is non-anthropocentrism. Even as Asimov’s “Three Laws” have gained recognition, some science fiction writers have sought to challenge the anthropocentric ethical paradigm, highlighting its fundamental flaws in dealing with the relationship between humans and machines. They argue that cybernetics and its inherent “law of the jungle” serve as cornerstones for establishing a hierarchical relationship, yet technological change may well reverse the status of the two parties in the future. Moreover, some writers try to imagine an equal relationship between humans and AI, even exploring romantic relationships between them. From this perspective, once AI possesses emotions, it transcends the classification of a pure tool.

The final paradigm, posthumanism, emerging in the 1980s, has gained significant attention in recent years. In the face of the ongoing transformation of the human body through technology, a group of theorists led by Donna Haraway posits that the natural evolution of humanity, spanning millions of years, is reaching its culmination, and the curtain of artificial autonomous evolution is opening. In the future, post-humans will surpass today’s humans in all aspects, including but not limited to lifespan, physical ability, and intelligence. Representative works of this theme include Marge Piercy’s He, She and It (1991), and Stanley Chan’s The Waste Tide (2013). Due to the uncertainty of sci-tech development, accurate predictions regarding the nature of post-humans remain elusive.

The earliest science fiction stories about post-humans can be traced back to Edgar Allan Poe’s short story “The Man That Was Used Up” (1839). After the mid-20th century, science fiction novels featuring cyborgs emerged, represented by Frederik Pohl’s Man Plus (1976). In the 1980s, as personal computers became widespread, cyberpunk fiction, centered on a special form of cyborg produced by the integration of computers and human brains, appeared frequently in science fiction, represented by William Gibson’s Neuromancer (1984).

AI ethics in science fiction is not confined to the sci-tech level of any specific era, but emphasizes the evolution of values. Many stories focus on the fundamental philosophical question of “what is a human.” This article takes a neutral stance towards the future of AI ethics: humans as creators cannot avoid the risk of an AI “rebellion” unless they abandon the traditional view of placing themselves in opposition to AI, and integrate AI into the evolution of humanity.

 

Lyu Chao is a professor from the College of Literature at Tianjin Normal University.

Editor: Ren Guanhong
