Towards system trust in digital journalism: Machine autonomy perspective

Source: Chinese Social Sciences Today, 2025-12-23

Digital technologies are altering the logic of existing news trust. Photo: TUCHONG

Since the advent of modern society, trust has been a fundamental prerequisite for connecting news to its audiences, forming the bedrock of journalism’s democratic mission. Since 2020, however, advances in automation, algorithms, and artificial intelligence have unsettled the professional and authoritative structures that once underpinned news trust. Core journalistic practices—including sourcing information, enforcing gatekeeping standards, and verifying authenticity—are increasingly challenged by algorithmic recommendation systems, deepfake technologies, and automated content production. Taken together, these developments erode journalism’s longstanding public commitment to truthfulness.

These developments point to an increasingly evident reality: Technology is reshaping the very foundations on which news trust is built. Following insights from human–machine communication research, it has become necessary to reconceptualize “news trust” within an AI-mediated environment. Drawing on the theoretical lens of “machine autonomy,” this study positions “human–machine trust” as a generative precondition for news trust. By examining the fundamental differences between AI-era news trust and traditional models, it offers a renewed understanding of system trust in the digital news industry.

Material turn in news trust

The role and function of journalism in modern society have made trust an inseparable concern since the very inception of the news industry. Public trust constitutes the foundation of journalistic practice. Most theoretical discussions of news trust, however, predate the large-scale rise of social media and digital technologies. Even when extended to digital environments, many such discussions fail to fully capture the nature and implications of technological change. The central questions of this study therefore are how digital technologies alter the logic of existing news trust, and how journalism can incorporate technological trust into its theoretical framework. This article examines whether the material turn is adequate for this task.

The “material turn” in communication studies has also ushered in a “digital moment” in journalism research, rendering technological artifacts a salient presence within the field. Materiality treats “things” as co-producers and indeterminate factors, highlighting the agency of technology. From this perspective, traditional journalism theories—historically centered on “words”—are undergoing dynamic revision within the entanglement of “words and things.” In particular, science, technology and society (STS) studies have introduced a socio-technical perspective into journalism, prompting scholars to move beyond social determinism and acknowledge the agency and contested relations of technological artifacts.

Categorizing news solely by its material attributes is insufficient to grasp the full transformation of news trust; a macro-level orientation is needed, and this is precisely what distinguishes newer approaches from earlier ones. To supply it, the infrastructural tradition within STS has gained prominence, giving rise to two main analytical paths. One interprets news trust itself as an infrastructure, embedded in journalistic production structures and governing processes of production, dissemination, consumption, and participation, rather than treating trust simply as the outcome of good reporting. The other incorporates infrastructural elements—such as people, places, technological systems, and public symbols—into the study of news trust. Among younger audiences, for instance, trust is often closely tied to platforms and algorithmic mechanisms, with users relying on algorithms to filter and evaluate news through search interfaces and ranking systems.

From this vantage point, an infrastructural perspective enables macro-level inquiry into news trust at the level of platforms and technological systems, advancing a more structural understanding of journalism’s systemic form.

Overall, the material turn in journalism—represented by news artifacts, digital objects, and infrastructures—deconstructs the narrow assumption that news trust occurs solely between newsrooms and audiences. It reveals how material conditions, cultural factors, and resource configurations shape the emergence of trust, thereby “de-centering” news trust research and redirecting attention beyond journalists and news organizations themselves. A key limitation nevertheless remains. While previous studies acknowledge that technological artifacts can function as sites of trust formation, modes of ritualized interaction, or intermediaries in shifting power relations, they rarely foreground technology itself as an object of trust. In this respect, the material turn falls short of fully integrating “trust in technology” into the theoretical framework of news trust.

News trust through lens of machine autonomy

Early journalism scholars working from a social-constructivist orientation had already recognized the relational nature of trust, distinguishing their approach from the “media-credibility” paradigm. Scholars aligned with the material turn shifted the locus of this relational trust from the newsroom to broader platform infrastructures. Yet both approaches remain grounded in a theoretical logic centered on human social trust.

With the rapid rise of generative AI, alongside the continuous interplay between deep learning and reinforcement learning, the autonomy and complexity of large models and intelligent agents have increased exponentially. The role of technology across all domains of trust—news trust, digital trust, and automated trust—now exceeds the explanatory reach of the material turn. Technology is no longer merely a site for trust or a transparent intermediary; AI systems increasingly operate independently of direct human control or intervention, giving rise to a reality of human–machine co-trust. Under these conditions, introducing the theoretical perspective of “machine autonomy” becomes necessary for rethinking news trust in the age of artificial intelligence. Although machine autonomy functions to some extent as a metaphor for complex human–technology interactions, and lacks a precise definition, this study treats it as a dynamic theoretical lens shared across disciplines.

Building on this perspective, we propose conceptualizing human–machine trust as a foundational precondition for the emergence of news trust, thereby distinguishing news trust in the AI era from that of both the mass media era and the material-turn era. On one hand, human–machine trust constitutes a composite model encompassing digital trust, system norms, and interpersonal as well as emotional recognition. On the other hand, it challenges the primacy of human-centered trust. This theoretical repositioning is more than a semantic shift—it raises a series of questions: How professional journalists, generative AI systems, and audiences come to trust one another; how news trust is produced through interactions between humans and AI; and whether news trust involving generative AI can still be understood as “system trust” in Niklas Luhmann’s sense—trust between impersonal entities such as the government, media, currency, and social organizations. Focusing on news veracity as a prototypical site of trust formation, the study traces how news trust continues to evolve under the premise of human–machine trust.

Towards digital journalism grounded in system trust

The relational practices through which news authenticity is established have long been central to journalism’s ability to earn public trust. Earlier scholarship often treated news veracity as a self-evident precondition of news trust. Today, however, the linkage between truth and trustworthiness has become unstable, functioning more as a normative ideal than as an empirical description of how digital journalism relates to the public.

The widespread adoption of generative AI has intensified this tension. Large language models are prone to producing plausible but false information—a phenomenon commonly described as factual hallucination. While journalism’s mission is to provide fact-based reporting, AI cannot yet serve as a consistently reliable source. If hallucination problems remain unresolved, AI-generated news risks becoming a significant source of misinformation, posing a fundamental challenge to journalism’s truth-seeking and truth-bearing commitments.

Rather than conceptualizing human–machine trust as a psychological state, as in human–computer interaction research, or as a governance strategy, as in the Trustworthy AI agenda, this study—drawing on Lucy Suchman’s work on human–machine autonomy—understands human–machine trust as a new relational configuration emerging in AI-mediated contexts. This configuration encompasses all human and machine actors involved in news production. It illuminates the moral implications embedded in these relations and provides an analytical framework for examining the deepening entanglement between journalism and AI infrastructures. In doing so, it elevates technical trust to a status comparable to interpersonal and institutional trust and clarifies how journalistic trust is constructed in an environment where news knowledge and meaning are co-produced by humans and machines.

Human–machine trust also generates new questions for digital journalism, including how agency should be negotiated between journalists and generative AI, and how professionalism, moral responsibility, and journalistic authority ought to be reconceived within increasingly hybrid human–machine arrangements.

The distinctive dynamics of human–machine trust create potential for a theoretical shift, one that moves news trust towards a new form of system trust. Traditional theories of news trust, drawing on thinkers such as Luhmann and Stephen Coleman, already contain elements of system trust, but they typically conceptualize news trust as one subsystem within a functionalist model of social trust. Analogous to public trust in monetary or expert systems, news trust is seen as tied to journalism’s watchdog function—trust arising from system functionality.

The human–machine trust perspective reverses this logic. It incorporates all existing trust relations within digital journalism—interpersonal, institutional, technological, and digital trust—into a human–machine symbiotic ecology. Rather than deriving trust from system functions, it advances a viewpoint in which system effects emerge from trust relations themselves. This shift presents two directions for future research on news trust: First, at the epistemological level, the traditional dyadic framework of “journalism–audience” trust should be replaced by a triadic trust relationship among journalism, AI technologies, and audiences. Second, in practice, trust should move from an institutional form grounded in professionalism towards an organic form that requires ongoing technological mediation, ethical negotiation, and cognitive adaptation.

Today, human–machine trust brings journalism closer to a fuller realization of the “news ecosystem.” It offers not only a conceptual tool for understanding the evolving relationship between technology and news trust, but also a forward-looking framework for guiding digital journalism towards a future grounded in system trust.

Bai Hongyi (professor) and Wan Xuqi are from the School of Journalism at Fudan University. This article has been edited and excerpted from Journalism and Mass Communication Monthly, Issue 6, 2025.

Editor: Yu Hui

Copyright©2023 CSSN All Rights Reserved