History promotes ethical development and good governance of AI
History may facilitate the development of ethical AI. Photo: TUCHONG
Much has been written on how artificial intelligence (AI) may affect history, but far less on how history can affect AI. In fact, history not only can influence the trajectory of AI technologies, but also may contribute to improving AI governance. This is not a matter of theoretical conjecture but a reality already unfolding. In China, DeepSeek has long recruited “Data Know-It-Alls,” with “providing knowledge sources related to human history…” placed first in the job description. Abroad, Google DeepMind (hereinafter “Google”) explicitly lists history as a “relevant field” when hiring scientists for its advanced security and governance team.
How does history foster ethical development and good governance of AI, and what role does it play in advancing frontier AI technology? Three practical avenues stand out. First, AI practitioners can draw on the history of science and technology to achieve breakthroughs in ethical intelligence. Second, they can strengthen good governance by learning from the practices of institutional history. Third, they can promote the coordinated development of ethical AI and its good governance by absorbing and applying historical theories. History can not only assume a more proactive role in interdisciplinary research but also provide insights for rethinking the relationship between the digital and the humanistic.
From sci-tech history to ethical AI
AI is hardly a new concept, and its development has never been linear. Since the Dartmouth Conference introduced the idea in 1956, AI has gone through multiple bottlenecks, divergences, and advances and setbacks. Each time it confronts difficulties or disputes, history offers a valuable perspective.
To overcome technical bottlenecks and make the right choices at critical junctures, one must revisit the early insights of AI pioneers. As early as 1951, Claude E. Shannon proposed studying "text continuation," predicting the next character from the preceding text, as a way to probe the information content of a language. In the 70-plus years that followed, AI development experienced repeated disputes and downturns, with different approaches taking turns in the spotlight. Yet since OpenAI released ChatGPT at the end of 2022, "text continuation," along with continuation-based generation of images and videos, has emerged as the most promising path toward powerful artificial general intelligence (AGI). Continuous breakthroughs at the cutting edge often rest on the rediscovery, recombination, and application of pioneering ideas from the past. It is worth noting, in this respect, that one advanced model, Claude, was named in tribute to Shannon.
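Shannon's idea can be illustrated with a deliberately toy sketch: a character-level bigram model that counts which character tends to follow each character in a corpus, then "continues" a text by repeatedly picking the most frequent successor. This is only a minimal illustration of the continuation principle, not how modern large models work (they predict tokens with neural networks trained on vastly more data); the corpus and function names here are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each character, which characters follow it and how often."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(text, text[1:]):
        counts[cur][nxt] += 1
    return counts

def continue_text(counts, seed, length=10):
    """Greedily extend `seed`, one character at a time, by always
    choosing the most frequent successor of the last character."""
    out = seed
    for _ in range(length):
        successors = counts.get(out[-1])
        if not successors:  # last character never seen mid-corpus
            break
        out += successors.most_common(1)[0][0]
    return out

corpus = "the theory of the thing"
model = train_bigram(corpus)
print(continue_text(model, "th", length=4))  # prints "the th"
```

Even this crude counter recovers the statistical regularities of its tiny corpus; scaling the same continuation principle to neural networks and internet-scale data is, in essence, the path from Shannon's 1951 proposal to today's generative models.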
Following this continuation-based approach enables AI to generate texts, images, and videos of considerable quality. But ensuring the accuracy of outputs, enabling effective communication with humans, and completing tasks require aligning AI with human values. In 1960, Norbert Wiener advanced two related points. First, AI possesses powerful and highly efficient learning capacity. Once it crosses a certain threshold, it may evolve at a rate beyond human comprehension—an idea that later inspired debate about the “technological singularity.” Second, for this reason, it is essential that the goals humans set for AI correspond to human ends—that is, value alignment. Without it, powerful AI may not only fail to achieve intended aims but also cause harm.
These are only two representative cases. Even in a field of constant disruptive innovation, few changes are truly abrupt; continuities can usually be traced through history. Shannon formulated information theory, Wiener cybernetics. Their ideas, grounded in the scientific contexts of their time, retain vitality after multiple cycles of AI development. China's own modern history of sci-tech development has a strong tradition of information theory, cybernetics, and systems theory, which were closely tied to the early development of AI in China. Exploring the contemporary significance of localized adaptations and innovations of these "Three Theories" remains a promising endeavor.
From institutional history to AI governance
The rise of AI has created a pressing need to govern it. To achieve good governance, rules must be devised that orient AI toward the good. The frontier practices of leading institutions and enterprises such as Google, OpenAI, and Anthropic show that addressing governance challenges at the forefront depends even more heavily on drawing lessons from institutional history.
These organizations have, without exception, converged on a similar approach: "drawing on ancient systems to reform modern intelligence." To formulate governance rules that are content-appropriate, widely accepted, and easy to implement, it is necessary to hear and incorporate the demands of all stakeholders, including users. To this end, they look for inspiration in institutions of direct citizen participation, such as those of the Athenian city-state, while also leveraging generative AI's growing ability to facilitate communication, summarize inputs, and draft proposals. In their explorations, the process typically involves three steps: first, users convey their views through interaction with AI; second, AI organizes discussions among users to build consensus on governance issues; third, based on that consensus, rules are drafted, ratified, and implemented.
The choice and use of institutional history resources in these governance practices show both micro-level variation and macro-level convergence. At the micro level, the specific systems invoked differ, leading to variations in scope, method, and degree of participation. At the macro level, despite appearances of diversity across millennia, the institutional resources drawn upon remain confined to the traditions of a few countries. Critical reflection on these systems is often absent, and the institutional experiences of most developing countries are scarcely represented. As Google and other Western institutions expand globally and test their governance models worldwide, attention must be paid to the issue of unequal participation in the formulation of global rules for AI governance.
From historical theory to smart and good governance
History’s contribution to AI extends beyond informing technical choices or governance rules. In terms of both technical principles and application potential, AI may itself constitute a new genre of history. By pursuing good history, seeking ethical intelligence, and striving for good governance, history exerts a comprehensive influence on AI’s development and governance.
Technically, AI's remarkable power comes from learning and compressing massive amounts of training data, much of which consists of historical material that the model may absorb inaccurately or with bias. In terms of application potential, after ingesting so much historical data, AI naturally acquires the ability to answer historical questions and even to engage in shallow forms of historical inquiry. Research into using AI to assist historical study is only beginning. As a new generation of scholars increasingly learns and studies history through AI, historical genres in the public imagination may no longer be limited to biography, chronicle, or event-based accounts, but may also come to include a new genre of large models.
Since AI outputs are themselves a form of history, the exploration and pursuit of good history are inherently tied to the exploration and pursuit of ethical intelligence and good governance. High-quality historical research, while diverse, is generally characterized by abundant historical materials, arguments grounded in evidence, and precise judgments. High-quality AI is much the same: training data should be as rich as possible, outputs should remain accurate, and logical reasoning should be sound. AI still falls short of historians in its command of historical materials, its argumentation, and its depth of analysis, but it can nonetheless learn from them. Historical theory provides a profound understanding of how the selection, arrangement, and interpretation of sources, and the conclusions drawn from them, reflect particular positions, perspectives, and orientations. The process by which AI generates conclusions from training data is similarly prone to bias, and historical theory can help identify and correct such biases. If "Data Know-It-Alls" were replaced with historians, China's large models might advance another step.
Moreover, since AI constitutes a way of presenting history, developing AI is in some sense akin to compiling a history book. What is being written is no longer merely the textual and pictorial history of a region or era, but a history inscribed in numbers and codes, transcending boundaries and linking dynasties. From China’s grand history to large-scale AI models, worthy questions follow: How will Chinese models carry the nation’s history? How will foreign models engage in dialogue with China’s history? Improving the quality of such histories and correcting their orientations are tasks worth anticipating.
We have briefly outlined three practical ways in which history can promote AI, reflecting both current developments and future possibilities. AI now stands at a historical juncture, with disruptive innovations emerging one after another, and history's role in shaping AI, and in shaping itself, stands at a similar juncture. History must not be absent from the endeavor of writing the grand history of large models. As history takes on a more active role in interdisciplinary research, transforming the ancient into the intelligent, the relationship between the humanities and the digital will be reassessed and reconstructed. Put differently, we are transitioning from the digital humanities toward a humanistic intelligence.
Zhu Yue is an assistant professor from the Law School of Tongji University and Lin Zhan is a research fellow from the Research Center for Digital Humanities at Renmin University of China.
Editor: Yu Hui
Copyright © 2023 CSSN. All Rights Reserved.