
Public administration should leverage intelligent algorithms while mitigating risks

Source: Chinese Social Sciences Today, 2025-08-06

As emerging technologies exert a profound impact on public administration, algorithmic governance has come to be seen as a new form of modernization in 21st-century public administration. To fully harness the benefits of algorithmic governance while preventing and mitigating its negative effects, it is essential to clarify the forces behind its emergence, identify its application-related risks, and explore appropriate regulatory approaches—ultimately striving for “good governance” through algorithms.

Improving governance effectiveness and efficiency through algorithms

In the contemporary context, “algorithms” generally refer to intelligent algorithms powered by digital technologies, particularly artificial intelligence and big data. Algorithmic governance emerged in response to the increasingly multifaceted and complex demands of public administration practice.

Expanding the toolkit for social governance: Rapidly evolving intelligent algorithms are well-suited to managing highly datafied public affairs, functioning independently or as auxiliary tools in public administration. Algorithmic governance enables multi-level abstraction of public affairs data, reducing uncertainty caused by human cognitive limitations such as ambiguity and bounded rationality, thereby offering relatively reliable solutions.

Enhancing the efficacy of social risk governance: Intelligent algorithms can compensate for the limitations of traditional risk identification and early warning methods, which tend to be one-sided, fragmented, and subjective. They assist in accurately assessing risk probabilities and informing proactive risk mitigation measures, contributing to a shift from experience-based governance to scientific governance. Public administration authorities can also leverage algorithms to integrate idle, scattered, or fragmented resources, and reallocate them in real time based on changing risk levels—allowing for more flexible and efficient resource management.

Facilitating precision governance: As the level of social development continues to rise, the public’s aspirations for a better life are becoming increasingly personalized and diverse. Intelligent algorithms can assist governance entities in accurately identifying and classifying governance targets as well as responding to more layered and diverse social demands, thereby better meeting public expectations and improving public satisfaction.

Risks of algorithmic governance cannot be ignored

Despite the many benefits, algorithmic governance also introduces significant risks. These stem both from the inherent nature of the technology itself and from its potential misuse in governance contexts.

Covert overreach of public power: If public administration bypasses institutional constraints through intelligent algorithms and allows public power to unduly penetrate the private sphere, it may lead to forced relinquishment of individuals’ rights to privacy and to be forgotten.

Accountability dilemmas: Intelligent algorithms are often highly specialized, and, in some cases, inherently lack interpretability. Moreover, given that public administration involves national security and public interests, algorithms may be deliberately designed to be “opaque.” These factors can give rise to algorithmic “black boxes,” posing two key challenges to accountability in algorithmic governance: Who should be held accountable? How can they be held accountable?

Algorithmic discrimination: Algorithms reflect the value judgments and intentions of their developers. If implicit biases or vested interests are embedded during development, biased outcomes could be generated. Even in the absence of intentional bias, overemphasis on efficiency may lead to the neglect of fairness—which is difficult to quantify—and produce unintended discriminatory outcomes. Furthermore, the quantity and quality of training data directly affect the performance of algorithmic governance; digital divides may create data blind spots, which in turn give rise to algorithmic biases.

Democratic erosion and algorithmic capture: Overreliance on algorithms may not only erode public administrators’ capacity for independent thinking and autonomous action, but also drive algorithmic systems toward ever greater complexity, creating knowledge barriers that marginalize public political participation. This may elevate algorithms above both society and citizens, reducing individuals to mere appendages of algorithmic systems.

Strengthening regulations to support algorithmic governance

In light of the aforementioned risks, the field of public administration must establish and implement institutional frameworks to guide technology toward socially beneficial outcomes, enhance the safety and fairness of algorithmic governance, and strengthen public trust in it.

A comprehensive technical framework: First, it is necessary to develop mechanisms for value correction and filtering within algorithmic learning and decision-making, enabling these systems to identify discriminatory data. Second, meaningful disclosure of algorithms should be promoted to enhance their transparency and interpretability. Third, algorithmic risks should be classified and graded, with a corresponding negative list developed. Stricter regulatory measures should be applied to algorithms included on the list.

Legal and ethical normative systems: First, ethical guidelines for algorithms should be established, grounded in values such as fairness, safety, transparency, accountability, and inclusion. The rights and obligations of algorithmic entities should be clearly defined by law. Second, algorithmic accountability systems should be improved, with responsibility assigned based on the algorithmic process and the degree of harm caused.

Organizational safeguards: Governments should establish dedicated algorithm regulatory bodies and prioritize the cultivation of talent with algorithmic expertise. Considering the spatial fluidity and pervasiveness of algorithms, risk regulation should engage all stakeholders. While fulfilling its role as the primary regulatory authority, the government should also actively mobilize societal forces to build a collaborative regulatory framework.

Huang Xinhua (professor) and Wang Lichao are from the School of Public Affairs at Xiamen University.

Editor: Yu Hui

Copyright©2023 CSSN All Rights Reserved