The Hindu
National
Karishma Kaushik

Counting scientists’ productivity with numbers undermines science | Explained

Scientists at Stanford University recently ranked the ‘top’ 2% of scientists in a variety of fields. The ranking is an up-to-date list of the most highly cited scientists in these disciplines. That is, it consists of the top 100,000 scientists ranked by an aggregate of numerical indicators: in this case, those scientists whose papers’ citation counts place them in the top 2% of their field.

The presence of Indian scientists on this list has garnered substantial public attention, and their inclusion has been accompanied by institutional press releases, news features, and award citations.

Given this fanfare, it’s important that we understand what the top 2% ranking system actually measures and how well the measure correlates with real-world scientific achievement.

A combination of numbers

The 2% ranking system is based on standardised citation metrics across all scientific disciplines. In the scientific research setting, a citation is a reference to a piece of information, typically an already published entity like an article, book, or a paper in a journal. For a scientist, being cited means that some scientific publication that they have authored has served as a reference, basis or source for some parts of subsequent research in the field.

Using scientific publication data from the Scopus database of published papers, maintained by the publisher Elsevier, the 2% ranking system computes a composite citation index based on six citation indicators. These are: (i) the total number of citations for a scientist’s papers, (ii) the total number of citations for papers where the scientist is the sole author (i.e. no co-authors), (iii) the total number of citations for papers where the scientist is the sole or the first author, (iv) the total number of citations for papers where the scientist is the sole, first or last author, (v) the h-index, i.e. the largest number h such that the scientist has h papers cited at least h times each, and (vi) the number of citations per author, i.e. each paper’s citations divided among all of its authors.

By combining the values of these indicators, the 2% ranking system assesses scientists’ citation impact in a single calendar year as well as throughout their careers.
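To make concrete what these indicators count, here is a minimal Python sketch that derives them from a hypothetical publication list. The figures are invented for illustration, and the final step of combining the six values into a single composite score, which the actual top-2% ranking does in a more involved way, is not reproduced here.

```python
# Illustrative sketch only: computes the six kinds of citation indicators
# described above from a toy (hypothetical) publication list. The actual
# top-2% ranking combines them into a composite score in a more involved
# way than shown here.

papers = [
    # (citations, number of authors, this scientist's authorship position)
    (120, 1, "single"),
    (80, 4, "first"),
    (45, 6, "last"),
    (30, 3, "middle"),
    (10, 2, "first"),
]

total_citations = sum(c for c, _, _ in papers)                                              # (i)
single = sum(c for c, _, pos in papers if pos == "single")                                  # (ii)
single_first = sum(c for c, _, pos in papers if pos in ("single", "first"))                 # (iii)
single_first_last = sum(c for c, _, pos in papers if pos in ("single", "first", "last"))    # (iv)

# (v) h-index: the largest h such that h papers have at least h citations each
cites_sorted = sorted((c for c, _, _ in papers), reverse=True)
h_index = max((i + 1 for i, c in enumerate(cites_sorted) if c >= i + 1), default=0)

# (vi) citations per author: each paper's citations shared equally among its authors
per_author = sum(c / n for c, n, _ in papers)

print(total_citations, single, single_first, single_first_last, h_index, per_author)
```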

Change of character

In this vein, many scientists have attempted to numerically quantify their peers’ scientific achievement using different scientometric indicators. For example, the ‘AD Scientific Index’ measures a scientist’s productivity coefficient using the h-index, the i-10 index (the number of publications with at least 10 citations), and other numbers; the ‘h-frac index’ tracks the fractional allocation of citations among co-authors; and the ‘Author Contribution Score’ computes a continuous score that reflects a scientist’s contributions relative to those of other authors over time. There are many others.
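As a simplified illustration of two of these alternatives, the sketch below counts an i-10-style value and a fractional-citation variant of the h-index, assuming the simplest possible scheme in which each paper’s citations are divided equally among its co-authors. The published h-frac and Author Contribution Score definitions are more elaborate than this, and the publication list is again hypothetical.

```python
# Simplified sketch, not the published definitions: an i-10-style count and a
# fractional-citation variant of the h-index, assuming each paper's citations
# are split equally among its co-authors.

papers = [(120, 1), (80, 4), (45, 6), (30, 3), (10, 2), (8, 2)]  # (citations, authors)

# i-10 index: publications with at least 10 citations
i10 = sum(1 for c, _ in papers if c >= 10)

# Fractional citations: each author is credited with citations / number_of_authors,
# then an h-index-style threshold is applied to the fractional values
frac = sorted((c / n for c, n in papers), reverse=True)
h_frac_like = max((i + 1 for i, f in enumerate(frac) if f >= i + 1), default=0)

print(i10, h_frac_like)
```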

Numerical measures were originally developed to track research productivity. But they have since backfired and today increasingly influence decisions related to hiring, funding, promotions, awards, recognitions, and professional growth. This is alarming for two reasons: rankings and metrics are far from perfect measures of scientific productivity, and quantitative indicators shape academic practice at large in troubling ways.

Numbers aren’t everything

First, ranking systems and indices rely almost solely on quantitative data derived from citation profiles. They don’t – can’t – evaluate the quality or impact of a given piece of scientific work. For example, a scientist with 3,500 citations in biology could have accrued half of them from ‘review’ articles, which survey other published work instead of reporting original research. As a result, this person may rank among the top 2% of scientists by citations, while a scientist with 600 citations, all from original research, may fall outside it.

For another example, a biotechnologist with 700 citations from 28 papers published between 1971 and 2015 and a scientist with a similar number of citations from 33 papers published between 2004 and 2020 may both be in the top 2%. The latter has an advantage because the rapid growth of, and electronic access to, academic publishing have resulted in citation inflation over time.

Hidden incentives

Second, citation metrics don’t allow us to compare scientists across fields or account for specific aspects of research in sub-fields. For example, in microbiology, the organism one studies determines the timeline of a study. So a scientist working with, say, a bacterial species that’s difficult to grow (like the one that causes tuberculosis) would appear to be less productive than a scientist working with rapid-turnaround techniques like computational modelling.

Third, the overvaluation of the number of publications and citations, the position of authorship (single, first or last), etc. breeds unethical scientific practices. That is, such a system incentivises scientists to inflate their citation counts by citing themselves, paying others to cite their work, and competing for author positions. Correcting these indices to account for shared authorship also devalues some fundamental tenets of scientific work, such as collaboration, and undermines the idea that research may take time to have an impact.

‘Extracurricular’ pressures

Fourth, rankings and metrics restrict the definition of a ‘top’ scientist to someone who has simply been productive in research. They don’t account for scientists’ other responsibilities, such as teaching, mentoring, community service, administration, and outreach. In fact, quantifying scientific productivity based only on research citations could end up penalising those who engage in some or all of these other activities, which are equally important for science to benefit society.

Academic publishing houses and institutional ranking systems also take advantage of the influence that quantitative indicators have on scientists’ careers and recognition, using it to pressure scientists to ‘publish or perish’ and to promote ‘high impact-factor journals’ that supposedly receive more citations. Scientific institutes also resort to quantitative indicators when evaluating scientists for pay hikes and promotions.

But beyond the complicated, oft-flawed systems developed to assess research productivity, the best way to evaluate scientists’ work remains relatively simple: read the science.


Karishma Kaushik is the Executive Director of IndiaBioscience.
