Publish or Perish Has Morphed Into the H-Index

If you saw the movie Moneyball and the impact of mega-statistics on major league baseball, then you won’t be surprised that Big Data (and Little Data) has invaded academic medicine as well, not to mention science in general. No longer is that iffy word “judgment” used to assess competence as distinct from brilliance. Today, it’s done with numbers.

Once, a researcher could get away with racking up large numbers of publications, using all sorts of tricks. The goal was simple: get as many papers in print as possible. But someone caught on that quality ought to count, too.

Now, we have the h-index, another approach to converting performance into numbers that can be used for promotion, to award grants, or simply for bragging rights.

The h-index is an attempt to reward not only the number of published papers, but also how often those papers are cited by others. (Did your mother ever tell you not to base your self-worth on others? Well, if you are a scientist, your mother was wrong.)

In its attempt to include both quality and quantity, the h-index reflects both the number of publications and the number of citations per publication. For example, an h-index of 10 means that among all publications by one author, 10 of these publications have received at least 10 citations each. The h-index has little value when used across different fields. That is, it works only when comparing scientists in the same field. At least, that’s the way it was intended.
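The definition above is easy to state as an algorithm: sort a researcher's citation counts in descending order and find the largest h such that the h-th paper still has at least h citations. Here is a minimal sketch in Python (the function name and the citation counts are illustrative, not from any real bibliographic database):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:
            h = rank  # this paper still clears the bar
        else:
            break  # every later paper has fewer citations
    return h

# Hypothetical author: the 10 most-cited papers each have at
# least 10 citations, so the h-index is 10 (as in the example above).
print(h_index([25, 18, 15, 12, 11, 10, 10, 10, 10, 10, 3, 1]))  # → 10
```

Note that a flood of barely cited papers does not move the number: an author with fifty one-citation papers still has an h-index of 1, which is exactly the quantity-plus-quality trade-off the index was designed to capture.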

Of course, over the past 10 years since its introduction, it has been studied to the hilt, with statistics generated about the statistics, and at least some evidence that the h-index doesn’t help one whit, that is, that the old-fashioned method of counting papers worked just as well. But for some, it has become a goal unto itself. It certainly has become a science unto itself, with mathematical variations and versions out the wazoo.

The reason I bother to mention it on this website is that one way researchers can pump up their h-index is through self-citation. Every time you publish a paper, you quote yourself to no end (my favorite variation here is the innate ability of some to not only quote themselves, but also to do so incorrectly, gradually building a case out of thin air).

Alternatively, you can promote your work directly to the media, create a buzz, then sit back and wait for others to cite it, its importance now confirmed.

Another way to pump numbers is to become embroiled in a controversy. This may explain why some experts enjoy generating controversial theories, sometimes oddball theories, just to get a reaction. For our purposes here, nothing will get you cited more frequently than announcing, “Screening mammography does not save lives.”

Then, of course, there are the inevitable rankings. You knew it had to happen. Is my number bigger than yours? Nobel laureate chemist Harry Kroto ranks a lowly 264th on the h-index for chemists, so he’s not too hot on the old h-index idea. Is this the route we really want to go? Is this how promotions and grants should be decided?

In the end, I think our mothers were right. It’s best not to worry about what others think about you. But still, you have to wonder if physicist Jorge E. Hirsch’s mother tried to teach him that lesson. Dr. Hirsch is the physicist who, in 2005, introduced the h-index (h = Hirsch). Today, several software programs track your number, recalculating on a regular basis as citations appear in the scientific literature. Appropriately enough, one of the programs is called Publish or Perish.