A diachronic reflection on technology from a humanistic dimension

Technology and the humanities are two positive forces that sustain human development, and both are indispensable to the fabric of everyday human existence. Yet because technology has developed unevenly across historical stages—especially since the modern era, when its level has risen dramatically—human society has witnessed widespread misunderstanding, misuse, and even abuse of technology. A humanistic examination of technology from a diachronic perspective is therefore necessary to clarify the deeper logic linking technology and humanity and to bring that relationship back into view.
Ji Xianlin once said, “Everyone strives for a perfect life. However, from ancient times to the present, both at home and abroad, a life that is one hundred percent perfect does not exist. Therefore, I say that imperfection is life.” This observation contains two closely related dimensions: from the standpoint of fact, life is inevitably imperfect, while from the standpoint of value, it continually aspires to perfection. “Fact” thus refers to the reality of life as it is presently lived, whereas “value” gestures toward life’s future-oriented hopes. Precisely because human beings possess a humanistic stance and sensibility, they are unwilling to passively accept present imperfection as mere fact, and instead cultivate a value-oriented drive to strive for and hope toward a more perfect future. It is within this tension between fact and value that technology emerges as an effective and practical means through which human life seeks to transcend its present limitations and orient itself toward future fulfillment. Technology’s basic nature as a tool directed toward human purposes therefore constitutes both the entry point and the evaluative standard for a humanistic examination of technology.
Evolutionary relationship
At the dawn of human civilization, the instrumental character of technology was especially evident. Early technologies—stone-tool making, bow-and-arrow crafting, and farming and animal husbandry—were crude by later standards, yet they served as effective and practical means by which early humans could overcome the “imperfections” of their time and strive toward a “perfect” future. In that period, technology functioned purely as a tool—a simple instrument aligned with human purposes. Even when the tool character of technology was occasionally misused or abused, the consequences were unlikely to bring substantial damage or harm to humanity. There was therefore no need to subject technology to humanistic scrutiny. In other words, questions of whether technology “should or should not” be used did not yet arise, nor did technology provoke psychological complaint or emotional condemnation. The relationship between technology and human beings remained in a primitive, low-level state of mutual accommodation. Technology was singular: What it could do was what it should do, and what it should do was what it could do. “Could” and “should” were essentially unified.
In modern times, technology advanced dramatically, bringing human society into the era of large-scale machine production. The instrumental character of this machinery was most clearly expressed in its capacity to liberate people from arduous physical labor, generate unprecedented material wealth, and become a key force driving the development of the productive forces. When properly used, technology enabled human survival to emerge from precarity and opened up new possibilities for everyday life, making its positive significance unmistakable. Western modernization, in essence, relied on technological factors to “transform the mode of production itself to enhance labor productivity,” thereby achieving wealth creation and increases in total value.
Yet large-scale machine technology, oriented toward practical ends, did not merely serve as a simple tool aligned with human purposes. It also often became an accomplice in humanity’s effort to transform and control nature. Human beings originally sought, through technology’s assistance, to extend their own capacities outward; under conditions of misunderstanding, however, this aspiration was distorted into a posture of conquest. With technology at their backs, people began to look down upon nature from a position of superiority, even to despise it, and then to intervene in it as conquerors—gradually casting themselves as nature’s “king.” Nature’s original state was disrupted: It became, in theory, an object of knowledge, and in practice, a resource to be exploited. An antagonistic relationship between humanity and nature thus took shape, and technology came to be regarded as the most effective tool for opposing nature.
From that point on, technology no longer retained a single, unified character, and the relationship between technology and humans became tense in the natural domain. From the 17th century onward, mechanistic natural philosophers represented by Francis Bacon, René Descartes, and Isaac Newton were convinced that technology seemed to require an enemy in nature to justify its own glory; it seemed to have to sustain hostility toward nature to display its brilliance.
As technology began to shape and regulate human life with unprecedented breadth and depth, it also—often inadvertently—became an accomplice in governing human life, extending the tension between technology and humanity into the social domain. Within the framework of capitalism, large-scale machine technology reshaped temporal rhythms and social divisions of labor, simultaneously controlling production and dominating labor, and in this way becoming a form of domination that exceeded the power of any individual human actor. Large-scale machines generated immense wealth, yet they also transformed people into “workers”—appendages of machines, deprived of freedom and reduced, at most, to “conscious limbs.” The machine character of modern technology itself, together with the human misuse and abuse of that character, thus pushed technology beyond its earlier singularity, positioning it increasingly in opposition to humanity and endowing it with a dual character.
Duality of technology
To illuminate this duality, we can borrow the textual language of Karl Marx and recast it in the form of a series of questions. Question one: Why is it that “machinery, gifted with the wonderful power of shortening and fructifying human labour, we behold starving and overworking it”? Question two: Why is it that “the new-fangled sources of wealth, by some strange weird spell, are turned into sources of want”? Question three: Why is it that “the victories of art seem bought by the loss of character”? Question four: Why is it that “at the same pace that mankind masters nature, man seems to become enslaved to other men or to his own infamy”? Question five: Why is it that “even the pure light of science seems unable to shine but on the dark backdrop of ignorance”? Question six: Why is it that “all our invention and progress seem to result in endowing material forces with intellectual life, and in stultifying human life into a material force”? Question seven: Why is it that “this antagonism between modern industry and science on the one hand, modern misery and dissolution on the other hand; this antagonism between the productive powers and the social relations of our epoch is a fact, palpable, overwhelming, and not to be controverted”? These seven questions lay bare not only technology’s duality, but also the imbalance and disorientation that can arise in the course of human growth.
Humanity is now entering the era of artificial intelligence (AI). Compared with large-scale machine technology, AI represents a qualitative shift in the form and scope of technological agency and has already exerted—and will continue to exert—a sweeping impact on all aspects of human life. Yet unlike the duality displayed by large-scale machine technology, AI exhibits what might be called two-sidedness. If “duality” suggests a double-edged sword, with both positive and negative aspects, then “two-sidedness” means that technology’s negative aspect also reflects the authenticity of technology as technology. Together with the positive aspect, it forms two sides of the same coin. The two sides are equal: There is no hierarchy of primary and secondary, higher and lower, good and bad.
Fundamentally, two-sidedness points to an inherent and unavoidable self-reflexivity within technology. Such self-reflexivity already existed in the earliest technologies, but because those technologies were fragile, it often remained hidden and latent. Even in the era of large-scale machine production, self-reflexivity appeared only to a limited extent through the duality of technology. In the AI era, however, it is fully expressed: Whenever AI has a side that helps human beings escape imperfection, it simultaneously has a side that exposes new imperfections; whenever it has a side that seems to secure humanity’s future, it also has a side that risks causing humanity to lose its future. Although the AI era is brought into being by human hands, it also risks turning against humanity itself. The more AI appears all-knowing, all-powerful, and all-capable in its instrumental role, and the more efficiently it performs a wide range of human tasks, the more people in corresponding fields become redundant, dispensable, and unnecessary. As Yuval Noah Harari has argued, the “great development” of AI could result in 99% of the population being relegated to a so-called useless class, while the remaining 1%, who command AI technologies, effectively constitute a new species in the course of human evolution.
Humanistic perspectives
Two-sidedness thus entails a technological paradox, representing a profound and intrinsic self-negation that calls for a humanistic response. Viewed against the background of AI, the familiar claim that “technology is a double-edged sword” may already be outdated. That formulation treats technology’s flaws and vulnerabilities as its negativity, setting them in opposition to its positivity. Yet if there is no flawless or omnipotent technology in the world, then flaws and vulnerabilities cannot simply be classified as technology’s negative side. On the contrary, it is precisely these flaws and vulnerabilities that constitute the internal conditions of technological development, driving processes of self-adjustment and transformation rather than “self-destruction” or “self-resetting.”
The “two-sidedness” of the technologies brought about by AI indicates that human beings have created something miraculous, and that this miraculous creation exerts a reflexive control and counter-discipline over its creators. In other words, people have built an AI that can be used to control and discipline themselves. What is fortunate—and especially thought-provoking—is that this reflexive control and counter-discipline can manifest itself as creative forms of resistance and a consciously accepted discipline, much as human growth itself is often expressed and realized through self-negation.
Technology’s two-sidedness likewise shows that it attains development through self-negation. From the singularity of early technology, to the duality of modern machinery, and from duality to the two-sidedness of AI, different humanistic perspectives are required. With respect to AI’s two-sidedness, the key to a humanistic examination lies in revealing technology’s self-reflexivity and the life metaphors concealed within it—and, through those metaphors, recovering an authentic sense of the self and entering a new stage of self-negation. If human beings achieve self-creation through self-negation, then AI can be understood as the technological mirror image of human life.
Dai Maotang is a professor in the Faculty of Arts and Sciences at Beijing Normal University.
Editor: Yu Hui