Evolving toward an ‘ethics–technology co-construction’ governance paradigm for open-source AI

Ethics-technology co-construction facilitates human-machine coexistence. Photo: TUCHONG
Open-source artificial intelligence (AI) models such as DeepSeek and Llama are breaking technological monopolies and advancing more equitable access to intelligence, emerging as an irreversible global trend in the field of AI. Characterized by the public release of model parameters, community-based collaborative development, and rapid iterative innovation, the open-source ecosystem is fostering a form of “distributed co-creation” distinct from traditional centralized research and development, driving AI toward more open, shared, and collaborative technological forms.
Unlike closed-source AI, open-source AI—by virtue of its strong orientation toward the “democratization of technology”—has become a major force stimulating global innovation and the widespread diffusion of AI technologies. Its open technical architecture, community-based collaboration mechanisms, and exponential pace of dissemination not only constitute key pathways for breakthrough innovations and large-scale application, but also embody rich techno-humanistic ideals such as freedom, openness, and sharing.
At the same time, the innovative potential unleashed by open-source development has also generated unprecedented risks, including the creation of harmful models and the circumvention of safety mechanisms. These risks expose the structural inadequacy of existing governance paradigms in open-source contexts. Problems such as diffused responsibility, lack of consensus, value drift, and model misuse have become increasingly prominent, placing governance mechanisms under severe strain. In response, ethical governance must urgently move beyond approaches centered on “ethical discipline” or “ethical embedding,” and instead shift toward a paradigm of “co-construction” grounded in the reciprocal shaping of ethics and technology.
Within an “ethics–technology co-construction” framework, ethics is no longer treated as an external, prescriptive constraint imposed upon technology. Rather, it becomes an endogenous structure that is dynamically generated, embedded, and continuously adjusted through community collaboration, ethical consultation, and technological innovation. Technology, in turn, is no longer understood as a value-neutral instrument, but as a complex system that continually absorbs, mediates, and even reshapes ethical tensions in practice. What emerges is a new configuration in which ethics and technology are mutually embedded and mutually constitutive. The significance of “ethics–technology co-construction” lies not only in its capacity to respond to concrete governance challenges, but also in its reactivation of philosophical reflection on the fundamental relationship between ethics and technology, thereby providing theoretical and cognitive grounding for a paradigmatic transformation in AI governance.
Within the open-source AI ecosystem, ethics functions as a value system that is continuously generated and evolves through practical feedback, while technology operates as a dynamic system that reflects social expectations and embeds value preferences. The code layer and the community layer are deeply intertwined and mutually influential, pushing governance logic away from one-way intervention toward bidirectional co-construction, from singular control toward pluralistic collaboration, and from predefined rules toward emergent order. The evolution of governance paradigms for open-source AI thus exhibits a trajectory characterized by value-driven dynamics, pluralistic co-construction, ethical consultation, and technological symbiosis.
Value-driven dynamics: social motivations of ethical governance
As open-source AI increasingly penetrates social life, public concern for fundamental values such as fairness, dignity, and responsibility is gradually shifting from moral awareness to explicit governance demands, becoming a fundamental force driving the evolution of governance paradigms. In political, economic, and cultural domains, these demands are no longer limited to reflecting on the consequences of technology, but instead take the form of proactive inquiries into the legitimacy of technology itself. Open-source AI models possess capacities for self-optimization, contextual transfer, and content generation: Once widely deployed in value-sensitive fields such as education, the judiciary, healthcare, and journalism, they may disrupt moral cognition and social order.
From the perspective of the social construction of technology, public value demands are not merely reactions to technological outcomes, but normative forces that actively participate in defining the trajectories of technological development. The future of open-source AI is therefore unlikely to be a utopia of freedom and collaboration; it risks becoming a technological “black box” marked by blurred responsibility and value drift. In recent years, countries and regions at the forefront of open-source AI development—including China, Europe, and the United States—have begun to pay closer attention to risk management for open-source models. Governments, the public, and platforms alike are attempting to articulate value frameworks for open-source AI. In this sense, social value demands constitute the fundamental driving force behind the ethical governance of open-source AI.
Pluralistic co-construction: actor networks in ethical practice
As open-source AI is increasingly deployed across diverse sectors of society, ethical governance is undergoing a structural transformation from centralized control toward distributed collaboration. In this decentralized context, ethical governance must rely on the pluralistic actor networks embedded within the open-source ecosystem to reconstruct mechanisms of responsibility allocation and value coordination through practice. Actor-network theory offers an analytical lens for understanding this transformation. According to this perspective, society and technology are not independent entities, but networked systems formed through the dynamic coordination of heterogeneous elements—including human and non-human actors, institutions and technical artifacts, and cognition and tools.
In open-source AI governance, these networks are no longer organized around model-developing firms as central actors. Instead, they consist of multiple nodes, including model publishers, hosting platforms, code maintainers, community review mechanisms, and user feedback systems. More importantly, technical artifacts such as algorithms and interfaces function as “non-human actors” that are far from neutral. They serve as materialized carriers of ethical norms: Design elements such as model cards and interface restrictions directly encode the intentions and boundaries of ethical governance.
Within distributed responsibility regimes, responsibility is no longer prescribed and borne solely by centralized institutions, but is jointly articulated and assumed by multiple participants in the network, including platforms, developers, communities, and users. This arrangement enhances the adaptability and flexibility of community norms while strengthening the immediacy and democratic character of ethical responses. Ethical practice thus no longer depends on a single authority, but continuously evolves through the coordinated participation of diverse actors, gradually forming an ethical order grounded in co-construction.
Ethical consultation: consensus formation amid multi-actor negotiation
Within the open-source ecosystem, developers, platforms, users, and regulators hold divergent value interpretations and interest claims. Governance therefore becomes an ongoing process of value mediation unfolding within collaborative networks of heterogeneous actors. Under these conditions, the core task of governance is no longer the formulation of static rules, but the construction of sustainable mechanisms for ethical consensus formation.
In open-source ecosystems, such consensus is not a final or fixed outcome. Rather, it emerges through open consultation and continuous feedback, enabling the ongoing coordination of ethical norms amid competing interests and divergent understandings. In practice, ethics does not function as an a priori template, but as a sedimented consensus formed through the process of technological development itself. Within open-source ecosystems, technology not only enacts ethical norms, but also continuously absorbs, reshapes, and generates new ethical standards through use and iteration, gradually achieving consensus on boundaries, responsibilities, and appropriate use.
Ethical consultation thus constitutes a defining feature of the evolving governance paradigm for open-source AI. It signals a shift from a governance logic centered on enforcing static rules to one oriented toward the generation of dynamic consensus. Norms are no longer treated as a priori moral imperatives, but as ethical order jointly produced through technological evolution and the plural demands of multiple actors within the open-source ecosystem.
Technological symbiosis: technical responses that make ethics ‘present’
Technology responds to ethics through a process of “translation.” Mechanisms such as model cards, interface permissions, and terms of use function as concrete pathways through which ethical norms are translated into technical syntax and system logic. Through this translation, technical systems absorb ethical principles and encode them into executable algorithms, rendering ethics an endogenous component of system operation.
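To make this notion of “translation” concrete, the sketch below shows, in Python, how a usage restriction declared in a model card might be rendered as an executable check at the serving layer. It is a minimal illustration only: the field names, the restricted domains, and the check itself are assumptions made for this article, not any platform’s actual schema or API.

```python
# Illustrative sketch: a usage restriction stated in a model card is
# "translated" into an executable gate before a request reaches the model.
# Field names and restricted domains are hypothetical, not a real platform's API.

from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_id: str
    license_note: str
    restricted_domains: set[str] = field(default_factory=set)

def is_request_permitted(card: ModelCard, declared_use: str) -> bool:
    """Return False when the caller's declared use falls in a restricted domain."""
    return declared_use.lower() not in card.restricted_domains

card = ModelCard(
    model_id="example/open-model",
    license_note="responsible-use license (illustrative)",
    restricted_domains={"automated sentencing", "biometric surveillance"},
)

for use in ("classroom tutoring", "automated sentencing"):
    verdict = "allowed" if is_request_permitted(card, use) else "blocked"
    print(f"{use}: {verdict}")
```

In this reading, the model card ceases to be mere documentation: the ethical boundary it states becomes part of the system logic that decides which requests are served.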
At the same time, technology is not merely a passive executor of ethical norms. Through feedback mechanisms, it continually modulates and revises ethical standards, participating in their re-articulation and regeneration, and thereby reinforcing a mode of ethics–technology co-construction. In large language models, for example, reinforcement learning based on human feedback internalizes human preferences through training processes, guiding model outputs toward closer alignment with human value judgments.
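As a brief illustration of how such feedback is internalized, the following Python sketch shows the pairwise preference loss commonly used when training a reward model for reinforcement learning from human feedback: the model is nudged to score the human-preferred response above the rejected one, which is one concrete way repeated human judgments are folded into model behavior. The scores below are placeholder numbers, not outputs of any real system.

```python
# Minimal sketch of a pairwise (Bradley-Terry style) preference loss used in
# reward-model training for RLHF-style alignment. Scores are placeholders.

import math

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """-log(sigmoid(score_chosen - score_rejected)), i.e. log(1 + exp(-margin))."""
    margin = score_chosen - score_rejected
    return math.log(1.0 + math.exp(-margin))

# A larger margin in favor of the human-preferred answer yields a smaller loss,
# so repeated human judgments gradually shape what the model treats as "better".
print(preference_loss(2.0, -1.0))  # preferred answer scored higher -> low loss
print(preference_loss(-1.0, 2.0))  # preferred answer scored lower -> high loss
```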
Yet technological responses to ethics do not represent an endpoint. More fundamentally, ethical norms themselves are continuously deliberated and reconfigured through technological practice. Ethics is no longer a fixed external standard, but gradually evolves into adaptive norms through activities such as code updates, interface definition, and permission setting. This dynamic process demonstrates that technology not only carries ethics, but actively participates in its evolution. As open-source AI develops, ethics continues to shape technological boundaries, while technology in turn reshapes ethical practice, making the generative logic of ethics–technology co-construction increasingly visible. Governance no longer relies on the coercive enforcement of static rules, but instead depends on the development of technical systems to realize technical responses in which ethics is present.
Open-source AI stands at the forefront of the intelligent technology revolution. On the one hand, it inherits the humanistic spirit of freedom and sharing embedded in open-source traditions, displaying immense potential for technological democratization. On the other hand, it goes far beyond traditional open-source software by deeply engaging with the production of social meaning and value order. The governance challenges posed by open-source AI compel a reexamination of the fundamental relationship between ethics and technology. Ethics emerges as a value system that is continuously generated and evolves through practice, while technology becomes a dynamic system that reflects social expectations and embeds value preferences. Through sustained interaction, the two mutually shape, co-construct, and co-evolve.
From value-driven dynamics to pluralistic co-construction, from ethical consultation to technological symbiosis, the ethics–technology co-construction paradigm not only elucidates modes of human–technology coexistence, but also anticipates the logic through which value order is generated in the age of intelligence. In the face of continually evolving open-source AI systems, genuinely effective governance should not remain confined to risk prevention alone. Rather, it must focus on building an ecological structure of ethics–technology co-construction—one in which ethics is present within technology, and technology, in turn, becomes responsive to ethical reflection—opening a future of symbiosis between humanity and intelligent technologies shaped by human agency, guided by public values, and sustained by technological innovation.
Xu Jin is Party secretary of the School of Physics at Southeast University; Wang Jue is dean of the School of Humanities at Southeast University. This article has been edited and excerpted from Journal of East China Normal University (Philosophy and Social Sciences), Issue 4, 2025.
Editor: Yu Hui