Demystification: new mission of social sciences in age of AI
The demystification of AI through social science research aims to illuminate the production logic and structural limitations of AI, thereby promoting more judicious use. Photo: TUCHONG
In human history, few technologies have permeated every corner of society as rapidly as artificial intelligence, sparking both extensive public imagination and widespread collective anxiety. In the face of this wave, the role of the social sciences requires clarification. At present, most social science research on AI centers on the “user end”—issues such as automation’s impact on employment, algorithmic bias, data privacy, and the ways AI assists social science research itself. While these issues are undoubtedly essential, they should not limit the scope of inquiry. The mission of the social sciences is not only to analyze the impact of technological advancement, but also to demystify technology itself.
Social science research must therefore broaden its focus from AI’s “user end” to its “production end,” delving into development stages such as algorithm design, data collection and processing, model training, and the underlying logic of capital and organizational culture. Only then can we recognize AI for what it is: a concrete, human-constructed technological practice with social attributes. This task is not simply an intellectual challenge but a crucial step toward recalibrating the human–technology relationship and safeguarding human agency.
Mystification of AI in public narratives
Public narratives about AI often display a tendency toward “mystification,” reinforced both by the futuristic spectacles of product launches by tech companies and by workers’ fears of replacement. This tendency manifests in three dimensions. The first is cognitive mystification: equating statistical predictive capabilities with human understanding, reasoning, and creativity, thereby blurring the line between the current capacity of AI models and the still-murky “essence of intelligence.” The second is deterministic mystification: portraying AI advances as inevitable and beyond human control, fueling a fatalism tied to the notion of technological singularity. The third is process mystification: reducing intricate development processes to the illusion of “machines learning by themselves,” thereby obscuring the enormous human labor, resource consumption, and sociohistorical choices involved.
Such mystification can provoke polarized public sentiments—either unreserved optimism and reverence for AI, or a sense of helplessness as machines seem to outstrip human skills. Both erode individuals’ ability to envision future possibilities and make appropriate choices. Demystification, therefore, seeks to break this false dichotomy, enabling people to understand AI technology along with the sociotechnical structures that sustain it.
The social sciences’ responsibility to study AI’s production end also stems from the limitations of technology-driven demystification efforts. Several AI R&D companies, such as Anthropic, are advancing research on interpretability, which is undoubtedly valuable. Yet such work largely offers technical explanations of how models operate, rather than addressing the deeper social question of why they function in specific ways. Why do particular groups, guided by particular values, with certain goals and datasets, construct intelligent models shaped by specific assumptions and features? This is precisely the space where the social sciences, especially Science and Technology Studies, have a role to play.
A social science agenda for AI development
Treating AI model development as an object of social science inquiry does not mean that social scientists must become algorithm engineers. Rather, it calls for using the theoretical tools and fieldwork methods of the social sciences to critically examine several key aspects.
Knowledge archaeology of model architectures: Rather than taking prevailing model architectures for granted, social science research should trace how they evolved from core concepts such as neural networks and attention mechanisms. It should analyze the cognitive schemas embedded in these models and explore why particular schemas prevail within specific technological, commercial, and cultural environments.
Political economy of data practices: Data serves as the foundation of AI models, yet it is neither genuinely “raw” nor purely objective. Social scientists must examine the entire process of data collection, cleaning, and annotation, including the establishment of annotation standards and the emergence of data annotators in the labor market.
Materiality of infrastructure: AI relies on large-scale infrastructure—from advanced chips to energy-intensive data centers and cloud service platforms. The social sciences should analyze this “computation–energy–geopolitics” assemblage and the underlying logic governing the production, distribution, and circulation of computational resources.
Sociology of benchmark construction: The performance of AI models is defined by a series of benchmarks. The social sciences should investigate how these benchmarks are established and maintained, as they influence the direction of technological development, the definition of “success,” and the allocation of academic and commercial capital.
Sociology of science in innovation fields: The social sciences should analyze how innovations in AI are shaped by organizational culture, talent mobility, prestige patterns, and team structures within AI laboratories, corporate AI divisions, and open-source communities such as Hugging Face.
The goal of demystifying AI—from the production end through social science inquiry—is not to disparage or obstruct technological development, but to illuminate the production logic and structural limitations of AI, so that it can be used more judiciously. Ultimately, this contributes to building a healthier, more sustainable, and human-centered society where humans and intelligent machines co-exist.
Li Linzhuo is a research fellow in the Department of Sociology at Zhejiang University.
Editor: Yu Hui
Copyright © 2023 CSSN. All Rights Reserved.