
Three research paradigms of AI psychology

Source: Chinese Social Sciences Today 2026-01-26

Human-machine cooperation drives psychological studies. Photo: TUCHONG

The interdisciplinary integration of artificial intelligence (AI) and psychology has transcended the boundaries of single disciplines, giving rise to three complementary research paradigms: AI for Psychology, AI and Psychology, and Psychology for AI. Together, these paradigms define the emerging field of AI psychology as one that is not only usable and useful, but also easy and appealing to use. These qualities are reflected in a growing range of innovative applications, including psychological measurement and assessment, counseling and guidance, education and training, and human–AI interaction.

AI for Psychology: Advancing tool innovation

AI for Psychology refers to the application of AI technologies to psychological research, practice, and education. Through data-driven approaches, this paradigm enhances the objectivity of psychological assessment, the effectiveness of interventions, and the innovativeness of theory development. At its core, it seeks to address long-standing challenges in traditional psychology—including subjectivity in assessment, insufficient standardization of interventions, and low efficiency in theory verification—through technological empowerment, thereby strengthening the validity of psychological research.

This paradigm encompasses three levels of technological application. At the foundational level, AI algorithms are used to process multimodal psychological data. At the application level, intelligent assessment tools are developed. At the theoretical level, simulation modeling and big-data analytics are employed to test and refine psychological theories.

In the domain of intelligent assessment, multimodal affective computing replaces traditional self-report scales. Through data collection, preprocessing, and modeling, assessment tools become more objective and reliable. In scenarios such as large-scale mental health screening, automated interviews, and early auxiliary diagnosis of depression, these technologies have already demonstrated improved accuracy and robustness.
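To make the collect–preprocess–model pipeline concrete, the following is a minimal sketch of an early-fusion screening classifier, assuming pre-extracted facial, vocal, and text features; the feature dimensions, labels, and data are synthetic placeholders rather than any real assessment instrument described in this article.

```python
# Minimal sketch of a multimodal screening pipeline (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
face = rng.normal(size=(n, 8))       # e.g., facial action-unit intensities
voice = rng.normal(size=(n, 5))      # e.g., prosodic features such as pitch and energy
text = rng.normal(size=(n, 10))      # e.g., sentence-embedding dimensions
labels = rng.integers(0, 2, size=n)  # synthetic screening labels (placeholder)

# Early fusion: concatenate the three modalities into one feature vector
features = np.hstack([face, voice, text])

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, features, labels, cv=5)
print("cross-validated screening accuracy:", scores.mean().round(2))
```

In practice the synthetic arrays would be replaced by features extracted from recorded interviews or questionnaires, and the simple linear classifier by whatever model the assessment task requires.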

In the domain of precision intervention, personalized digital therapeutics are used to optimize intervention processes, dynamically adjusting strategies according to individual differences. For example, cognitive behavioral therapy (CBT) chatbots have been shown to alleviate symptoms of depression and anxiety in short-term treatments. In the domain of human–AI collaboration, human-centered and explainable AI (XAI) design is adopted to construct decision-making models characterized by “human leadership with AI assistance,” aiming to balance task efficiency with trust and ethical considerations.

Through technological empowerment, this paradigm has promoted both the accessibility and precision of psychological research. Nevertheless, further exploration is needed in areas such as algorithm interpretability, decision-making transparency, brain- and cognition-inspired AI, and cross-cultural and value alignment. Looking ahead, interdisciplinary governance frameworks will be essential for balancing technological innovation with humanistic concern, thereby advancing paradigm shifts in psychological science.

AI and Psychology: Bidirectional inter-construction

The bidirectional inter-construction of AI and Psychology refers to the mutual shaping of AI technologies and psychological theories at the levels of methodology, cognitive modeling, and ethical frameworks. Its essence lies in methodological symmetry, model inter-embedding, and ethical co-development. Psychological experimental paradigms, such as double-blind controlled designs, can be used to optimize AI training, while AI simulation technologies, including virtual reality (VR), provide new means for validating psychological theories.

Psychological concepts can be encoded as AI modules, and AI systems can, in turn, be used to simulate human cognitive mechanisms. At the same time, psychological insights into human values help constrain the moral design of AI, while ethical dilemmas arising in AI development feed back into psychological research on moral judgment.

This paradigm operates at two interconnected levels. At the tool level, AI models are trained using psychological experimental data. At the theoretical level, AI cognitive architectures—such as neural AI—offer new computational models that inspire psychological interpretations of mental mechanisms, for example by viewing Transformers as forms of distributed cognition.

Whereas traditional psychology relies primarily on experimental control and theoretical deduction, and AI depends largely on data-driven approaches and algorithmic optimization, this paradigm breaks the one-way relationship between the two fields. Through methodological cross-fertilization and theoretical symbiosis, it promotes paradigm innovation in both fields. Methodological cross-fertilization enhances research validity and innovation by combining strengths from each field.

VR exposure therapy for anxiety, for example, achieves high ecological validity through immersive technology while remaining grounded in psychological theories such as embodied cognition. Theoretical symbiosis, in turn, provides new frameworks for understanding human–AI interaction. Generative adversarial networks (GANs) can simulate aspects of human creativity, yet their training depends on psychological insights into artistic creation processes. Similarly, the “exploration–exploitation” trade-off in reinforcement learning aligns closely with psychological theories of decision-making, including prospect theory.
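As a small illustration of the “exploration–exploitation” trade-off noted above, the sketch below implements an epsilon-greedy multi-armed bandit; the reward probabilities and the value of epsilon are illustrative assumptions, not details drawn from the article.

```python
# Epsilon-greedy bandit: a minimal model of the exploration-exploitation trade-off.
import numpy as np

rng = np.random.default_rng(1)
true_p = np.array([0.2, 0.5, 0.8])    # unknown reward probability of each arm
counts = np.zeros(3)
values = np.zeros(3)                  # running estimate of each arm's value
epsilon = 0.1                         # probability of exploring instead of exploiting

for t in range(1000):
    if rng.random() < epsilon:
        arm = int(rng.integers(3))    # explore: try a random arm
    else:
        arm = int(np.argmax(values))  # exploit: pick the best-known arm
    reward = float(rng.random() < true_p[arm])
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update

print("estimated arm values:", values.round(2))
```

The agent's tension between sampling uncertain options and repeating rewarding ones is the computational analogue of the decision-making trade-offs psychology describes.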

By means of bidirectional inter-construction, this paradigm transcends traditional disciplinary boundaries—enabling both theoretical deepening and methodological innovation.

AI cognitive architectures offer psychology new computational models of cognition, while psychological theories provide design inspiration for the development of more general forms of AI. Future work on interdisciplinary theory integration, human–AI symbiotic models, embodied intelligence, and dynamic ethical frameworks must remain alert to the risk of technological reductionism—the over-simplification of psychological phenomena into computational variables—so as to avoid distorted or superficial interpretations of psychological theory.

Psychology for AI: Advancing smarter AI and ‘AI for good’

Psychology for AI grounds AI in psychological theory, using cognitive modeling, ethical design, and interaction optimization to guide AI’s transformation from a functional tool into a more human-like partner. Its core objective is to simulate human cognitive processes—such as perception, memory, and decision-making—so that AI systems can recognize and respond to users’ emotional needs, adhere to human values and moral norms, and acquire greater human-like cognitive abilities, emotional understanding, and social adaptability.

Unlike the data-driven paradigm of traditional AI, this approach emphasizes a theory-driven orientation. By drawing on psychological explanations of mental mechanisms, it seeks to address the black-box problem of AI and mitigate associated ethical risks, and is articulated across three technological levels. At the cognitive level, AI cognitive architectures are constructed on the basis of psychological theories such as working memory models and dual-process theory. At the affective level, affective computing techniques enable emotional interaction. At the ethical level, human moral principles—including fairness and responsibility—are encoded as constraints on AI decision-making.

In cognitive modeling, psychological theories of cognition are used to enhance AI’s information-processing capacity by simulating aspects of human mental mechanisms. Embedding limits such as the “7 ± 2” rule of working memory into attention modules, for instance, can improve information filtering efficiency, while incorporating human reinforcement learning mechanisms such as trial-and-error feedback can optimize strategy updating.
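As a hedged illustration of how such a capacity limit might be imposed, the sketch below keeps only the seven highest-scoring items before computing attention weights; the scoring, the cap of seven, and the data are illustrative assumptions rather than a description of any particular architecture.

```python
# Capacity-limited attention: attend only to the top-k items (illustrative sketch).
import numpy as np

def capacity_limited_attention(scores, values, k=7):
    """Weight only the k highest-scoring items, zeroing out the rest."""
    scores = np.asarray(scores, dtype=float)
    keep = np.argsort(scores)[-k:]             # indices of the top-k items
    masked = np.full(scores.shape, -np.inf)
    masked[keep] = scores[keep]
    weights = np.exp(masked - masked[keep].max())  # softmax over the kept items only
    weights /= weights.sum()
    return weights @ np.asarray(values, dtype=float)

scores = np.random.default_rng(2).normal(size=20)  # relevance scores for 20 candidates
values = np.arange(20, dtype=float)                # the candidates' content
print(capacity_limited_attention(scores, values))
```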

In human–AI interaction, social psychological theories enhance the naturalness and social adaptability of AI systems. For instance, Meta’s “virtual social companions,” designed using social penetration theory (namely, gradual self-disclosure), have significantly increased user engagement in virtual social environments.

In the ethical domain, psychological research on moral judgment informs the embedding of human value constraints in AI systems, helping to prevent value deviation. Kantian deontology (rule-based ethics) and utilitarianism (outcome-based ethics), for example, can be encoded as weighted principles in AI decision-making to produce moral reasoning models, while bias-correction algorithms draw on psychological experiments to identify and mitigate implicit biases through adversarial training.
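The following is a hypothetical sketch of how rule-based (deontological) and outcome-based (utilitarian) considerations might be combined as weighted principles when ranking candidate actions; the action names, scores, and weights are illustrative assumptions, not an actual moral reasoning system.

```python
# Hypothetical weighted combination of rule-based and outcome-based scores.
def moral_score(rule_ok: bool, expected_benefit: float,
                w_rule: float = 0.6, w_outcome: float = 0.4) -> float:
    """Combine a hard-rule check with an expected-outcome score into one value."""
    rule_score = 1.0 if rule_ok else 0.0
    return w_rule * rule_score + w_outcome * expected_benefit

# Illustrative candidate actions: (passes the rule?, expected benefit in [0, 1])
candidates = {
    "disclose_risk_to_user": (True, 0.7),
    "withhold_information": (False, 0.9),
    "delay_decision": (True, 0.4),
}
ranked = sorted(candidates, key=lambda a: moral_score(*candidates[a]), reverse=True)
print(ranked)  # actions ordered by the combined deontological/utilitarian score
```

How the two weights are set, and whether rule violations should be hard constraints rather than penalties, is exactly the kind of question psychological research on moral judgment is expected to inform.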

Through deep empowerment of psychological theory, this paradigm seeks to address the problem of AI systems that are highly rational yet insufficiently human. Its primary value is evident in three dimensions: cognitive upgrading, whereby simulating mental mechanisms enhances AI’s capacity for complex tasks; ethical controllability, which reduces technological risks through value constraints; and interaction naturalness, which strengthens emotional resonance between humans and AI and supports AI’s shift from a “tool” to a “partner.”

Yet vigilance is required, as AI systems may inadvertently inherit biases and discriminatory orientations inherent in human society. Accordingly, psychological research must offer more fine-grained bias-correction mechanisms to prevent “biased learning.” Continued research into human-like mental systems, cross-cultural psychological models, and adaptive value alignment will be essential to advancing the goal of “AI for good.”

Three paradigms jointly driving disciplinary progress

Viewed comparatively, the three research paradigms differ in goal orientation, paradigm innovation, and technological intervention. AI for Psychology focuses on practical problem-solving through data- and algorithm-driven tools, addressing challenges in psychological science through its instrumental role. The bidirectional inter-construction of AI and Psychology pursues paradigm innovations such as human–AI hybrid cognitive models, emphasizes mixed methods, and opens up new research directions through its knowledge-oriented attributes. Psychology for AI emphasizes theoretical translation, which must be realized through cognitive computational modeling, and advances AI toward greater coordination and alignment through its value-oriented attributes. From the perspective of psychology’s broader mission, all three paradigms contribute to human well-being, social flourishing, national prosperity, and the building of a community with a shared future for humanity. This “people-centered” direction of psychological development calls for more open and open-source thinking to promote the technological integration, conceptual synthesis, and capacity convergence of AI and psychology.

It is important to note that the three paradigms do not exist in isolation; rather, they operate synergistically to drive disciplinary advancement. AI for Psychology provides foundational data and technological accumulation for the bidirectional inter-construction of AI and Psychology, which in turn fosters the maturation of the cognitive models and ethical frameworks underpinning Psychology for AI. Psychology for AI then feeds back into the continuous upgrading of AI tools, forming a virtuous cycle.

From a broader perspective, the three paradigms correspond to three types of research in AI psychology: research based on AI, research integrated into AI, and research originating from AI. AI for Psychology represents “research based on AI,” in which AI functions primarily as a tool and methodological resource for studying human psychology and behavior. The bidirectional inter-construction of AI and Psychology constitutes “research integrated into AI,” focusing on the distinctive psychological and behavioral patterns of individuals and groups within AI technologies. Psychology for AI corresponds to “research originating from AI,” examining the psychological and behavioral impacts of AI use from the perspective of traditional psychology.

In summary, AI psychology research that is “visible, tangible, usable, and trustworthy” is poised to constitute the founding mission of this emerging interdisciplinary field. AI for Psychology addresses issues of efficiency and scale, the bidirectional inter-construction of AI and Psychology drives innovation in disciplinary paradigms, and Psychology for AI advances the development of humanized AI. Together, these three paradigms promote the systematic generation of new theoretical knowledge, new technological tools, and new applied products.

 

Ni Shiguang is a professor at the Shenzhen International Graduate School at Tsinghua University and director of the Center for Computational Research on People’s Well-Being at Shenzhen Key Research Base for Humanities and Social Sciences; Peng Kaiping is a professor in the Department of Psychological and Cognitive Sciences at Tsinghua University.

Editor: Yu Hui

Copyright©2023 CSSN All Rights Reserved