Constructing intelligence justice through three dimensions: data, algorithm, application
China’s AI industry showcasing its innovative achievements in embodied intelligent humanoid robots at the World Artificial Intelligence Conference 2025, held last month in Shanghai. Photo: IC PHOTO
At present, digital technology is reshaping social structures at an exponential rate. The comprehensive digital and intelligent transformation of human life, together with persistent global disparities in digital development, underscores the urgency of intelligence justice as a fundamental value issue in the digital age.
Data justice as foundation of intelligence justice
The absence of data justice arises from structural imbalances in data production and distribution. Within the complex system of digital society, the volume of data largely determines the likelihood of “being seen.” Data from various sources is also heterogeneous: content creators’ subjective intentions, production processes, and socio-cultural contexts collectively shape both the form and value orientation of content. These multidimensional differences pose challenges for the effective construction of AI training datasets: How can we maintain data scale while effectively integrating raw materials that vary in form and value, thereby ensuring the representativeness and validity of training data?
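To make the question above concrete, the following minimal sketch (hypothetical; the function, field names, and the 20% cap are invented for illustration, not drawn from the article) shows one simple way to trade raw scale against representativeness: capping and down-sampling over-represented sources so that no single source dominates a training corpus.

```python
import random
from collections import defaultdict

def build_balanced_corpus(records, max_share=0.2, seed=0):
    """records: iterable of dicts like {"source": str, "text": str}.
    Caps each source at max_share of the original pool size."""
    random.seed(seed)
    by_source = defaultdict(list)
    for r in records:
        by_source[r["source"]].append(r)

    total = sum(len(v) for v in by_source.values())
    cap = int(total * max_share)

    corpus = []
    for source, items in by_source.items():
        random.shuffle(items)
        corpus.extend(items[:cap])  # down-sample over-represented sources
    random.shuffle(corpus)
    return corpus

# Example: three sources of very different sizes.
records = (
    [{"source": "platform_A", "text": f"a{i}"} for i in range(800)]
    + [{"source": "community_B", "text": f"b{i}"} for i in range(150)]
    + [{"source": "archive_C", "text": f"c{i}"} for i in range(50)]
)
balanced = build_balanced_corpus(records)
```

The design choice here is deliberately crude: a hard cap preserves smaller voices but discards data from larger sources, which is exactly the scale-versus-representativeness tension the paragraph describes.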
The logic and fine-tuning strategies used to build training datasets directly affect AI’s cognitive schemas. Entities that control data collection, filtering, and usage rights essentially acquire the power to define knowledge forms and cultural patterns. When these entities continually reinforce the authority of their outputs through algorithmic recommendation mechanisms and content distribution advantages, cognitive centralization and exclusivity may ensue.
Furthermore, the digital divide has left more than 3 billion people worldwide excluded from data production systems. This exclusion not only leads to disparities in development opportunities but may also give rise to “data colonialism,” in which technologically dominant groups encode their own cognitive paradigms as universal norms by controlling key stages such as data collection, labeling, and modeling. Achieving data justice therefore requires improvements at multiple stages, including data collection, cleansing, and representation, as well as the equitable distribution of data across levels and categories, in order to dismantle “data hegemony.”
Algorithmic justice as functional safeguard for intelligence justice
While discriminatory outputs generated by AI systems due to biased training data can be addressed through dataset optimization, the control exerted by algorithmic decision-making over social systems is more difficult to detect and prevent. This is particularly evident in AI systems’ precise interventions in user decision-making pathways through behavioral data modeling. Once digital platforms accumulate sufficient behavioral trace data, they can create user profiles and implement dynamic intervention strategies accordingly: selecting optimal delivery timing and contextual triggers, designing persuasive modes of information presentation, and employing reward mechanisms to steer users toward choices that align with platform interests.
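The schematic sketch below (class, field, and framing names are illustrative only, not an actual platform API) makes the feedback loop described above concrete: behavioral traces update a profile, the profile selects delivery timing and framing, and user responses feed back into the profile.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    click_rate_by_hour: dict = field(default_factory=dict)  # learned from traces
    reward_sensitivity: float = 0.5                          # learned from traces

def update_profile(profile, trace):
    """trace: {"hour": int, "clicked": bool} -- one observed interaction."""
    hour = trace["hour"]
    prev = profile.click_rate_by_hour.get(hour, 0.0)
    profile.click_rate_by_hour[hour] = 0.9 * prev + 0.1 * float(trace["clicked"])
    return profile

def choose_intervention(profile):
    """Pick the delivery hour and framing the profile predicts is most persuasive."""
    best_hour = max(profile.click_rate_by_hour,
                    key=profile.click_rate_by_hour.get, default=20)
    framing = "reward_prompt" if profile.reward_sensitivity > 0.4 else "plain_prompt"
    return {"hour": best_hour, "framing": framing}

# One turn of the loop: an observed click sharpens the profile,
# and the sharpened profile selects the next intervention.
profile = update_profile(UserProfile(), {"hour": 21, "clicked": True})
print(choose_intervention(profile))
```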
Such technological interventions have evolved beyond the traditional logic of information matching in recommendation systems and now function as disciplinary mechanisms with self-reinforcing properties. When digital capital transforms instrumental rationality into disciplinary power through algorithms, the presumption of technological neutrality is fundamentally challenged. Algorithms cease to function as mere value-neutral tools, becoming power apparatuses for predicting and controlling behavior.
Realizing algorithmic justice requires introducing interdisciplinary and cross-domain review mechanisms that involve ethicists, sociologists, and public representatives in the algorithm design phase. These mechanisms should critically examine the value assumptions embedded in algorithms and infuse them with public rationality. In doing so, they can help prevent the reduction of complex social relations to single metrics and safeguard the organic process through which social consensus is formed.
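As a minimal sketch of what such a review might examine (the helper and field names are hypothetical), the audit below reports several group-level measures side by side rather than one aggregate score, so that reviewers from different disciplines can weigh the trade-offs themselves instead of optimizing a single metric.

```python
def audit_decisions(decisions):
    """decisions: list of dicts like {"group": str, "approved": bool, "error": bool}."""
    report = {}
    groups = {d["group"] for d in decisions}
    for g in groups:
        rows = [d for d in decisions if d["group"] == g]
        n = len(rows)
        report[g] = {
            "n": n,
            "approval_rate": sum(d["approved"] for d in rows) / n,
            "error_rate": sum(d["error"] for d in rows) / n,
        }
    return report

sample = [
    {"group": "A", "approved": True,  "error": False},
    {"group": "A", "approved": False, "error": True},
    {"group": "B", "approved": True,  "error": False},
    {"group": "B", "approved": True,  "error": False},
]
print(audit_decisions(sample))
```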
Application justice as value orientation of intelligence justice
The complexity of digital society requires AI systems to dynamically align technological logic with social values across diverse application scenarios. However, standardized algorithmic architectures often struggle to accommodate varied value demands. Intelligence justice emphasizes embedding ethical principles—such as fairness, transparency, accountability, and public wellbeing—throughout the entire lifecycle of AI systems, from design and development to deployment. Yet, these abstract principles cannot automatically or directly translate into appropriate outcomes in every specific scenario. Application justice serves as the bridge for this translation. Through the mechanism design and practice of application justice, the abstract conception of intelligence justice can be formalized into executable technical rules and decision-making logic tailored to specific scenarios, thereby delivering positive societal outcomes.
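The sketch below illustrates one way this translation could look in practice (the scenario names, thresholds, and obligations are invented for illustration, not prescribed by the article): the same abstract principles of fairness, transparency, and accountability are expressed as different executable rules for higher-stakes and lower-stakes scenarios.

```python
SCENARIO_POLICIES = {
    # Stricter obligations where the stakes for individuals are higher.
    "credit_scoring":  {"max_group_gap": 0.05, "explanation_required": True,  "human_review": True},
    "content_ranking": {"max_group_gap": 0.10, "explanation_required": False, "human_review": False},
}

def check_deployment(scenario, measured_group_gap, has_explanations, has_review):
    """Return the concrete obligations a deployment fails under its scenario policy."""
    policy = SCENARIO_POLICIES[scenario]
    failures = []
    if measured_group_gap > policy["max_group_gap"]:
        failures.append("group outcome gap exceeds scenario threshold")
    if policy["explanation_required"] and not has_explanations:
        failures.append("user-facing explanations missing")
    if policy["human_review"] and not has_review:
        failures.append("human review channel missing")
    return failures

print(check_deployment("credit_scoring", 0.08, True, False))
```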
Given the interdependence among these three dimensions, achieving intelligence justice requires transcending piecemeal optimization and instead establishing a coordinated governance ecosystem.
Wu Jing is a professor in the Department of Philosophy at Nanjing Normal University.
Editor: Yu Hui