h-index
The h-index is commonly used to characterize the scholarly productivity of university
faculty. It is defined as the largest number h such that h of a scholar's publications have each been cited at least h times. Sawilowsky (2012) noted that the h-index suffers from numerous limitations. Suppose a scholar has two publications. If both were cited only once, the h-index is 1. If both were cited twice, the h-index is 2. If one was cited twice and the other was cited 1,000 times, the h-index remains at 2; citations beyond the threshold do not raise it. The rapidity with which the h-index may be calculated via search engines (e.g., scholar.google.com) has contributed greatly to its use.
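The calculation described above can be sketched in a few lines; this is a minimal illustration of the standard definition, not code from Sawilowsky (2012):

```python
def h_index(citations):
    """Return the largest h such that h of the papers
    have each been cited at least h times."""
    cited = sorted(citations, reverse=True)
    h = 0
    # Walk down the sorted citation counts; position i qualifies
    # only while the i-th most cited paper has at least i citations.
    for i, c in enumerate(cited, start=1):
        if c >= i:
            h = i
        else:
            break
    return h
```

For example, `h_index([2, 2])` and `h_index([2, 1000])` both return 2, illustrating how citations far above the threshold leave the index unchanged.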
However, there are a plethora of problems with this statistic, as Sawilowsky (2012) noted:
(1) Sometimes work is highly cited because it is wrong. (2) The number of publishing
outlets is related to the number of scholars in the field, favoring certain disciplines. (3)
There is no differentiation between exploration and explication. The same issue of
Psychological Bulletin in which I published a new-knowledge article that has been cited
160 times also contains a statistics primer for dummies by Jacob Cohen (1923–1998,
h-index = 62) that has been cited 8,547 times. (4) Credit is given in the index for a
citation even if it supports a position contrary to the publication. (5) These indices can
change extremely quickly. My -h, defined as the number of additional citations of
specific publications that will change my h-index from 19 to 20, is only 3 additional
citations of the 20th most cited publication. (6) These
indices can change extremely slowly. Some editors prefer authors to cite recent,
secondary references to seminal work instead of the original, not only because it makes
the literature review look fresher, but also because, as time passes, it becomes difficult
to access the seminal work. (These are different reasons from those invoked by Wikipedia, which
relies on secondary sources to enable equal participation of editors who are completely
devoid of any substantive knowledge in the field.) Also, well-known methods are
rarely referenced, such as Karl Pearson's chi-squared test, Student's t-test, or
Wilcoxon's rank-sum test. (7) Disciplines where the scholarly outcomes are lengthy
treatises, qualitative, or juried exhibits or performances will never be equitably served
by formulae based on numbers. Scholarship in the form of plenary or keynote
addresses before scholarly societies and professional associations that are not
abstracted or subject to proceedings, scholarship serving as the basis for legislative
language, and expensive and extensive literature reviews found in technical reports
from federally funded peer reviewed grants (e.g., the United States Department of
Education, National Science Foundation, National Institutes of Health) will not be
captured by these indices. Although the software programs listed above permit
searching for patents and for non-peer-reviewed law review publications that are eventually
cited in judicial decisions, these forms of scholarship are generally not cited with the
same frequency as found in other disciplines. There are additional problems if the
index is based on a quick and cheap Google Scholar search. (1) Google Scholar doesn’t