Reforming evaluation systems from the perspective of open science
On June 20, the inaugural Global Research Evaluation Symposium, hosted by the Chinese Academy of Social Sciences Evaluation Studies (CASSES), was held in Beijing. The event brought together dozens of experts and scholars from China and abroad to explore the theme “Innovation and Development: Research Evaluation in the Context of Open Science.”
Academic evaluation plays a significant role in advancing the humanities and social sciences and enhancing the quality of knowledge production. However, the global academic community continues to grapple with entrenched challenges, such as an overdependence on quantitative metrics, journal-centric and author-focused assessments, and persistent issues of academic misconduct—including fraud, plagiarism, and data falsification. Wang Weiguang, former president of the Chinese Academy of Social Sciences (CASS) and a professor at the University of Chinese Academy of Social Sciences, underscored the need to build an inclusive, globally oriented evaluation framework. He emphasized the importance of respecting culturally diverse standards of academic evaluation, dismantling geographic, disciplinary, and institutional barriers, and fostering intercultural dialogue and knowledge exchanges. Wang called for collaborative governance to jointly construct a global academic evaluation community.
Andrea Bonaccorsi, a professor at the University of Pisa in Italy, reviewed two decades of reform in Italy’s Research Quality Evaluation (VQR) system, spanning 2004 to 2024. He outlined significant shifts in practices such as bibliometrics and peer review. For instance, between 2004 and 2010, all evaluation expert panel members were nominated by ANVUR, the Italian National Agency for the Evaluation of Universities and Research Institutes. Since 2015, only 25% have been ANVUR-appointed, while the remaining 75% have been selected by lottery from a pool of self-nominated candidates. Bonaccorsi highlighted the transparency and broad applicability of the VQR experience, noting its positive influence on both research project development and university governance.
Clara Judith Naidorf, a professor and principal researcher at the Institute for Research in Educational Sciences of Argentina’s National Scientific and Technical Research Council, introduced the Latin American Forum on Research Assessment (FOLEC), a platform for critical debate on the meanings, policies, and practices of research evaluation in the region. She highlighted the profound linguistic and geopolitical disparities within the global academic system: while most Latin American scholars publish in Spanish or Portuguese, international scholarly communication is dominated by English. This imbalance fosters a form of “academic dependency,” compelling peripheral regions to conform to evaluation standards shaped by Western academic centers. In response, Naidorf advocated for reforms to reduce assessment inequality, embrace open science principles, and promote multilingualism, thereby laying the groundwork for a new research paradigm grounded in social responsibility, collaborative openness, and local languages.
Leon Heward-Mills, managing director academic and chief content officer at Taylor & Francis Group, shared his perspective on the transformative impact of artificial intelligence (AI) on academic evaluation and publishing, regarding it as a structural shift within the academic ecosystem. AI, he noted, can streamline writing processes, enhance research methodologies, detect errors, assess impact with greater speed, and generate multidimensional analyses. However, it also raises serious concerns, including hallucinated content, fabricated citations, bias in training data and algorithms, and the opacity of black-box models. To address these risks, Heward-Mills proposed establishing AI governance frameworks, developing comprehensive policy systems, promoting AI literacy among all stakeholders, and advancing cross-disciplinary global cooperation to ensure a human-centered AI future.
Ludo Waltman, a professor at the Center for Science and Technology Studies at Leiden University in the Netherlands, emphasized that evaluation practices not only influence the allocation of resources and the recognition of performance, but also shape research ecosystems, academic culture, and talent development. A leading advocate of the Leiden Manifesto, he reviewed its international impact over the past decade and reaffirmed the principle that evaluation should serve both research quality and societal value.
Denis Kosyakov, head of the Laboratory for Scientometrics and Scholarly Communications at the Russian Research Institute of Economics, Politics, and Law in Science and Technology, provided a comprehensive account of Russia’s evolving research evaluation system. It has developed from the centralized, state-controlled attestation system of the Soviet era, in which the main outputs were internal research and development reports with little emphasis on publications, to the adoption of competitive grant funding and international publication standards in the 1990s. Since the early 2000s, Russia has gradually adopted a modernized system, with publications and bibliometrics now central to academic advancement. The “Publication Performance Score” (KBPR) has become a key quantitative tool.
During the forum, the Beijing Initiative on Global Research Assessment (LEAP Initiative) was officially released. Jing Linbo, director of CASSES, summarized its core concepts using four key terms: localization, engagement, assessment, and platform. Under “localization,” the initiative calls for the development of academic evaluation systems that reflect national and regional characteristics, recognize cultural diversity, and acknowledge the value of various forms of research. On “engagement,” “assessment,” and “platform,” it advocates for the active involvement of governments and other stakeholders in setting standards, the creation of transparent and rational metrics, and the development of global exchange platforms to foster an international academic evaluation community.
Jin Minqing, editor-in-chief of the Social Sciences in China Press at CASS, noted that the forum hosted in-depth discussions on a range of cutting-edge topics, including the impact of AI on research and publishing, the cultivation of core values such as originality and academic subjectivity, and innovations in classification-based evaluation systems. He concluded that the forum’s theoretical insights and practical experiences offer significant methodological value for the ongoing modernization of evaluation systems in philosophy and the social sciences, providing both intellectual support and practical models for future research.
Editor: Yu Hui