
AI reshapes knowledge production in social sciences

Source: Chinese Social Sciences Today, 2025-08-15

The 2025 World AI Conference and High-Level Meeting on Global AI Governance were held on July 26. Photo: IC PHOTO

The rise of artificial intelligence, particularly large language models (LLMs) and agent-based modeling (ABM), is profoundly reshaping research paradigms and methodologies in the social sciences. When given specific prompts, AI can generate testable hypotheses, synthesize relevant published literature, and propose potential experimental approaches. In this sense, AI offers researchers entirely new tools and methodological pathways, and is gradually integrating into the full cycle of scientific inquiry. Yet this development has also provoked vigorous academic debate, especially over the risks of technological misuse and the ethical challenges it presents.

Liberating scholars from mechanical labor

By integrating multimodal data such as text, images, and audio, AI can capture implicit information during the learning process, significantly broadening both the scope and methodological boundaries of research. Tian Qian, director of the Department of Anthropology and Ethnology at the School of History and Culture at Southwest University, noted that by comparing satellite remote sensing images with historical maps, AI can automatically identify patterns of ecological change in river basins. Combined with AI’s text-analysis capabilities, this allows scholars to uncover deep-seated dynamics of human–environment interaction in ancient documents. With AI, researchers can not only perform semantic mining of dialect recordings but also use sentiment analysis to trace the evolution of collective memory during migration. This has shifted river basin studies from purely geographical analysis to the dynamic reconstruction of cultural ecosystems. AI can also detect emotional fluctuations in migrants’ social media posts and videos, showing how symbolic narratives of “homeland” and “new home” are constructed. Moreover, it can analyze architectural styles and clothing patterns in migrant communities, revealing micro-level dynamics of cultural integration and conflict. By comparing image datasets from different periods, researchers can reconstruct the fluidity of migrant identity.
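
As a concrete illustration of the sentiment-analysis step mentioned above, the minimal sketch below scores migrants’ social media posts and aggregates them by period to track shifts in how “homeland” and “new home” are framed. It assumes the Hugging Face transformers library and its default English sentiment model; the posts and period labels are invented placeholders, not data from any study cited here.

```python
# A minimal sketch (not from the article) of sentiment scoring over migrants'
# social media posts, aggregated by period to track shifts in framing.
from collections import defaultdict

from transformers import pipeline  # pip install transformers torch

# Off-the-shelf classifier; the default model is English-only, so real
# material would need a model matched to its language and register.
classifier = pipeline("sentiment-analysis")

# Invented placeholder posts, each tagged with a rough period label.
posts = [
    ("2015", "Leaving the old river town was the hardest thing we ever did."),
    ("2018", "The new neighborhood finally feels a little like home."),
    ("2022", "Our children talk about this city as the only home they know."),
]

scores_by_period = defaultdict(list)
for period, text in posts:
    result = classifier(text)[0]  # e.g. {"label": "POSITIVE", "score": 0.99}
    signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
    scores_by_period[period].append(signed)

# Mean signed sentiment per period as a crude proxy for shifting framing.
for period in sorted(scores_by_period):
    values = scores_by_period[period]
    print(period, round(sum(values) / len(values), 3))
```

In practice, researchers would substitute a model suited to the language and register of the material and validate its labels against human coding before drawing any interpretive conclusions.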

In April 2023, a joint research team from Stanford University and Google created a virtual town for a study. By embedding ChatGPT into this simulated environment, the team turned its 25 virtual residents into generative agents capable of retaining memory, engaging in conversation, and interacting with one another. Jiao Jianli, a professor from the School of Information Technology in Education at South China Normal University, remarked that such work enables high-fidelity, dynamic simulations of social interaction, offering a controlled experimental setting for examining nonlinear phenomena such as cultural diffusion and group decision-making. Tian further noted that AI’s digital capabilities are transforming the preservation and transmission of intangible cultural heritage, such as traditional opera and handicrafts. For instance, AI speech recognition can decode the terminological systems of oral traditions, while computer vision can analyze craft techniques to produce interactive digital archives—allowing this living heritage to transcend physical and temporal constraints.
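
The generative-agent setup described above can be approximated, at a much smaller scale, by giving each simulated resident a running memory and routing its utterances through an LLM. The sketch below assumes the OpenAI Python SDK; the agent names, personas, prompting scheme, and model name are illustrative assumptions rather than the Stanford/Google implementation.

```python
# A toy approximation (not the Stanford/Google code) of generative agents:
# each agent keeps a memory of what it has heard and said, and replies
# through an LLM conditioned on that memory.
from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

client = OpenAI()


class Agent:
    def __init__(self, name: str, persona: str):
        self.name = name
        self.persona = persona
        self.memory: list[str] = []  # running record of the agent's experiences

    def reply(self, heard: str) -> str:
        self.memory.append(f"Heard: {heard}")
        messages = [
            {
                "role": "system",
                "content": f"You are {self.name}, {self.persona}. "
                           f"Recent memories: {self.memory[-5:]}",
            },
            {"role": "user", "content": heard},
        ]
        # Model name is an assumption; any chat-capable model would do here.
        response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        text = response.choices[0].message.content
        self.memory.append(f"Said: {text}")
        return text


alice = Agent("Alice", "a baker planning a town festival")
bob = Agent("Bob", "a shop owner who passes news along to customers")

utterance = "Have you heard about the festival next week?"
for _ in range(2):  # two rounds of conversation between the agents
    utterance = bob.reply(alice.reply(utterance))
    print(f"Bob: {utterance}\n")
```

The key design point is that each agent conditions its replies on its own accumulated memory, which is what lets behavior drift and differentiate over repeated interactions rather than resetting with every exchange.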

Generative AI is transforming the long-standing model of human-dominated knowledge production in which tools have played only a passive role. Wang Xiaoguang, dean of the School of Information Management at Wuhan University, explained that AI can now generate research hypotheses, construct problem spaces, and actively participate in key processes such as information organization, user modeling, and service design, vastly augmenting the cognitive capacities of researchers. Through ongoing human–AI interaction, research is evolving from a human-centered interpretive activity into a dynamic, reflective, and open-ended co-constructive process between humans and AI, injecting new structural momentum into the mechanisms of knowledge production.

Yang Feng, editor-in-chief of Contemporary Foreign Language Studies, observed that AI is liberating scholars from mechanical labor, allowing them to return to their essential roles as explorers, thinkers, and interpreters. A symbiotic relationship is emerging, characterized by “human intelligence leading, AI capabilities enhancing,” which is unlocking innovative potential through the restructuring of research workflows.

“AI is propelling educational research from ‘single-dimensional speculation’ toward ‘holographic dynamic analysis,’ offering revolutionary perspectives for understanding the nature of learning and optimizing educational practice,” Jiao continued. For example, video recognition can be used to analyze correlations between students’ gestures, postures, and digital note-taking, revealing patterns of nonverbal interaction in collaborative learning. By analyzing students’ problem-solving processes, verbal questions, and on-screen activities in real time, AI can generate precise, personalized educational interventions. In particular, through machine learning, natural language processing, and big data analytics, AI has broken through the interpretive limits of traditional qualitative research and the sample-size constraints of quantitative research, expanding the temporal and spatial dimensions of educational inquiry.
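
To make the kind of analysis Jiao describes more tangible, the sketch below computes a simple correlation between a nonverbal engagement signal (gesture counts assumed to come from upstream video recognition) and digital note-taking activity across class sessions. The figures are invented placeholders, not results from any study discussed here.

```python
# Invented placeholder numbers (not data from any cited study) illustrating a
# first-pass correlation between a video-derived engagement signal and
# note-taking activity across class sessions.
import numpy as np

# Per-session counts, assumed to come from upstream video recognition
# and learning-platform logs (both hypothetical here).
gestures_per_session = np.array([12, 30, 8, 25, 18, 40, 15])
notes_per_session = np.array([3, 9, 2, 7, 6, 11, 4])

# Pearson correlation as a rough check on whether the two signals co-vary.
r = np.corrcoef(gestures_per_session, notes_per_session)[0, 1]
print(f"Pearson r between gestures and note-taking: {r:.2f}")
```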

Safeguarding human subjectivity

Technology has always been a double-edged sword, and the application of AI in philosophy and the social sciences carries multiple potential risks and limitations.

“AI’s data analysis essentially involves extracting, copying, and pasting from existing data. In specific fields, the materials and data often exhibit homogeneity, and relying solely on AI can lead to a large number of repetitive results,” said Zhou Xinmin, vice chairman of the Hubei Provincial Writers’ Association and a professor from the School of Humanities at Huazhong University of Science and Technology. Taking literary criticism as an example, he explained that critics draw on subjective imprints—personal reading experience, life insight, aesthetic preference, and cultural context—to express genuine emotional and intellectual responses to a text. AI, by contrast, “replicates” patterns based on prior works of similar themes or earlier research on the same author.

Wang added that AI models generate results based on statistical probabilities, which may obscure the depth of cultural perception. While AI can assist in extracting and restoring data from historical portraits, it often struggles to grasp the symbolic meanings and cultural connotations behind them—sometimes reducing profound cultural heritage to mere visual or textual “simulacra.” This serves as a reminder that AI cannot yet replace interpretive tasks.

AI’s understanding of emotion and culture is, at its core, a matter of probabilistic computation rather than genuine experience. Jiao warned that entrusting one’s emotional expression to AI may unconsciously dull human empathetic capacity. When students outsource the process of deep thinking to AI, the result may be intellectual substitution, cognitive outsourcing, and the blunting of critical thinking. The widespread adoption of AI has sparked ethical debates, particularly concerning data privacy, academic integrity, and the erosion of human subjectivity. Jiao argued that even the most advanced techniques for ethical alignment cannot fully eliminate the implicit biases embedded in AI models. He stressed the need for heightened vigilance against the alienation of academic evaluation, ruptures in knowledge production, and ethical dilemmas such as data colonialism, methodological traps, and the erosion of human subjectivity. It is crucial to maintain methodological self-awareness and position AI as a critical dialogue partner rather than a truth-generating machine.

Although AI can assist and augment research, core intellectual work must remain in human hands. Zhou emphasized that academic inquiry is a vital expression of human creativity and subjectivity—it is inherently human. Especially in domains of values and meaning, research that is directional and exploratory should be undertaken by human scholars.

“AI has fundamental limitations in areas such as deep meaning construction, value judgment, ethical reasoning, and cultural interpretation,” Tian said. In his view, matters such as assessing fairness in public policy, determining priorities in cultural heritage preservation, and setting value-oriented goals in education must be weighed and deliberated by anthropologists with a strong sense of social responsibility. Research on the philosophical foundations of social justice and institutional arrangements, on democratic participation and public reason in the digital age, on ethical controversies in cultural heritage protection, and on the cultivation of character and values in education must firmly uphold the primacy of human subjectivity.

“The irreplaceability of human research lies in its ability to interrogate motives, values, and purposes. In the humanities and social sciences, retaining the authority to explore these issues is essentially safeguarding these disciplines’ humanistic essence,” Jiao said. AI lacks moral subjectivity and cannot comprehend complex socio-cultural contexts or assume ethical responsibility. In fields like pedagogy and the humanities—where ethical and value judgments, subjective experience and empathy research, and contextual interpretation of historical texts are central—studies must rely on researchers’ philosophical reflection and sensitivity to social power. AI can only present superficial correlations. Creative theoretical construction and decision-making in complex scenarios require synthesizing unstructured factors such as political and cultural considerations, tasks that must be undertaken by human researchers.

Regarding AI’s ability to generate a “qualified” paper every minute, Yang believes that true academic value lies in those qualities algorithms cannot replicate: reverence for historical context, insight into the complexity of human nature, and relentless pursuit of meaning. At the same time, academic research is moving toward a new paradigm of deep human-machine collaboration. In this context, ensuring that the brilliance of human wisdom is not overshadowed amid technological transformation requires safeguarding the innovative essence of humanistic inquiry while harnessing AI’s efficiency.

Regulating AI usage in journals

The misuse of AI risks profound crises such as intellectual homogenization and the degradation of academic capabilities. As gatekeepers of knowledge communication, academic journals bear a critical responsibility in guiding the appropriate use of AI. Currently, the academic publishing community has established a clear principle of “permitting auxiliary applications while prohibiting core substitution” in regulating AI usage.

In Yang’s view, the essence of these journal policies is to defend the human subjectivity within scholarly research through accountability anchoring, transparent disclosure, and scope limitations. He suggested that journals should collaboratively develop a “guidance-constraint-empowerment” mechanism to dynamically adjust the framework for AI governance in academia.

According to Wang, academia and publishers must act in concert. The academic community should strengthen awareness of norms by systematically incorporating AI ethics and methodological boundaries into research training, preventing “generated content” from replacing “academic thinking.”

Universities and research institutions should establish dedicated AI ethics committees to develop discipline-specific guidelines for AI usage, Tian concluded.

Editor: Yu Hui

Copyright©2023 CSSN All Rights Reserved
