Personal Reflective Exercise on Ranking and Rating in Science
When scientists publish research papers, those papers are judged on their quality and impact. The careers of scientists, and the fortunes of their employers, have come to depend heavily on the ranking of the journals in which they publish and on subsequent citation by others (Arms and Larsen, 2007; Harley et al., 2010). This ranking and rating system has been built up in recent years and increasingly relies on calculations of impact derived from publisher databases. While it introduces an ‘objective’ measure based on data, the system is also heavily criticized (Adler and Harzing, 2009) as distorting the entire scientific research process. Harzing and van der Wal (2008) propose a more ‘democratic’ approach to citation analysis using Google Scholar, suggesting that Google’s open approach is at least as good as the existing closed systems, and in many cases may be better. Can a search engine really be relied upon to generate metrics that can make or break a career?
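The metrics themselves are not spelled out above, so as a concrete reference point, the minimal Python sketch below (an illustration, not any publisher's actual implementation) shows two calculations commonly derived from such databases: the two-year journal impact factor, and the author-level h-index that Google Scholar-based tools such as Harzing's Publish or Perish report. All figures in the example are hypothetical.

    def impact_factor(citations_this_year, citable_items_prior_two_years):
        """Two-year journal impact factor for year Y: citations received in Y
        to items the journal published in Y-1 and Y-2, divided by the number
        of citable items it published in Y-1 and Y-2."""
        if citable_items_prior_two_years == 0:
            raise ValueError("no citable items in the two-year window")
        return citations_this_year / citable_items_prior_two_years

    def h_index(citation_counts):
        """Author-level h-index: the largest h such that the author has
        h papers with at least h citations each."""
        h = 0
        for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
            if cites >= rank:
                h = rank   # this paper still clears the threshold
            else:
                break      # counts are sorted, so no later paper can
        return h

    # Hypothetical figures for illustration only.
    print(impact_factor(240, 120))        # 2.0
    print(h_index([10, 8, 5, 4, 3, 1]))   # 4

The simplicity of these formulas is part of the point of the exercise: very small, contestable calculations end up carrying very large career consequences.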
Reflective Discussion Topic
What are the arguments for and against the current data-driven system of research ranking? Can we identify the primary users and producers of these rankings? Who benefits from them? Finally, reflect on where else we encounter this type of ranking.