When applying for a promotion — which generally means applying for Associate Professor status “with tenure,” or applying for Full Professor (the top of the heap) — an academic must use any tactics available to make a case for the value of his or her scholarly work.
In the good old days, at most institutions, this might not have taken much. In the humanities, a fairly common bar for getting tenure was having published a scholarly book; for full professor, a second one. But academic book publication is in transition and no longer as simple as it used to be. And peer-reviewed journal articles, still the standard in the “hard” sciences, are not going away; publish-or-perish remains the rule.
In the natural and social sciences, it has become increasingly standard to refer to the "impact factor" of the journals one publishes in and to provide figures indicating how widely cited one's work has been.
The arts and humanities have been slow to catch up to these expectations, for a variety of reasons.
One reason is that most analytical tools for doing this — particularly the two leaders, Thomson Reuters’ ISI Web of Knowledge and Elsevier’s SCIVerse Scopus — haven’t done a good job tracking these disciplines. Scopus is better; Web of Knowledge is noticeably worse. In Scholarship in the Digital Age, Christine Borgman writes:
The depth of coverage in the ISI Web of Knowledge, which is among the most comprehensive online bibliography databases, is deepest in the sciences, shallower in the social sciences, and most shallow in the humanities.
ISI indicators, she writes, are “least valid for the arts and humanities because they only include references made by journal articles,” omitting books, conference proceedings, and other sources (Borgman, Scholarship in the Digital Age: Information, Infrastructure, and the Internet, MIT Press, 2007, pp. 158-9).
Part of the problem is that the humanities are not as hellbent on newness as the sciences are. For one thing, much new research in the humanities still comes out in book form. Books take a long time to produce, and longer still to reverberate through the cognitive alleyways in which book-focused academic thinking takes place.
But journals play an essential role in most, if not all, disciplines. A drawback in the humanities is that there hasn’t been as much at stake. Billions of dollars are spent on scientific research every year; only a fraction of that on humanities-based scholarship. So the citation engines have been slow to catch on.
The ISI Web of Knowledge Journal Citation Reports, which appear to be the most commonly used citation analysis system, rank over 8,000 science journals in their annual "Sciences Edition" and over 2,650 social science journals in their annual "Social Sciences Edition." There is no comparable edition for the arts and humanities. Scopus, with its SCImago Journal Rank system, includes some arts and humanities journals, and its total of some 18,000 journals exceeds ISI's, though much of that additional coverage consists of international journals.
But with the spread of university-wide assessment databases like Digital Measures, quantitative measures of citation counts and journal "impact factors" will become increasingly institutionalized in all fields, despite their critics.
A limitation of both Web of Knowledge and Scopus is that they only count citations in articles found in journals that they index — not in books, conference proceedings, or journals not indexed by them. The omission of book citations is particularly significant in the humanities.
Another limitation is that they are primarily geared toward producing lists of the most high-impact journals. In an increasingly open-access system, where subscriptions are no longer to individual journals but to complete databases, the correlation between “impact factor” and high citation rates is no longer as significant as it used to be.
The best way, at present, to make up for these two limitations is to use Google Scholar. The remainder of this article outlines a quick method for doing so, one that can help assess "citationality" in interdisciplinary contexts.
(You might think: why bother? If you're in a School like mine, avowedly interdisciplinary but dominated by natural scientists, with some social scientists and almost no humanists, then you will have to figure out how to translate the "impact" of what you've done for very different audiences. I've found it useful for that. And if you're not, then with present trends, both toward increasing interdisciplinarity and toward increasingly having to justify one's academic work, you might be one day.)
Using Google Scholar for Comparative Citation Analysis
Using Google Scholar is easy. Do a simple search for your name (or the name of the author you're looking for) and see what comes up; then count the number of citations listed for each book or article. If the name is unusual, so much the better. If it's common (say, John Johnson), you'll have to do some digging and parsing: use initials, filter out the publications that are obviously by someone else (because they're not in the right field), and so on.
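If you'd rather script the tally than do it by hand, something along these lines can work. This is a minimal sketch, assuming the third-party Python package scholarly (an unofficial Google Scholar scraper, not a Google product) and a hypothetical author name; the package's interface has shifted across versions, so treat it as illustrative rather than definitive.

```python
# pip install scholarly  -- unofficial, third-party Google Scholar scraper
from scholarly import scholarly

# Hypothetical author name; substitute the one you are searching for.
search = scholarly.search_author("Jane Q. Humanist")
author = scholarly.fill(next(search), sections=["publications"])

# List each publication with its citation count, then total them.
for pub in author["publications"]:
    title = pub["bib"].get("title", "untitled")
    cites = pub.get("num_citations", 0)
    print(f"{cites:5d}  {title}")

total = sum(p.get("num_citations", 0) for p in author["publications"])
print(f"Total citations: {total}")
```

A common name will still need the same manual filtering described above; the script only saves you the counting.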
Like other citation indexes, Google Scholar does not index all scholarly publications; it includes only those available in some form through the internet, and not all of those. But Google Scholar is more inclusive than either Web of Knowledge or Scopus because it counts not only peer-reviewed research articles but also other forms of scholarly writing, such as edited books, book reviews, and theses and dissertations. In the humanities, where book citations are more the norm and more valued than they are in the hard sciences, this is a significant gain.
On the other hand, the lack of an easy method of distinguishing between peer-reviewed original scholarship and other forms of scholarly writing is a limitation. For this reason, Google Scholar should be considered a useful complement to other forms of scholarly impact assessment, more applicable in some fields than in others. It’s not a replacement.
The tricky part is that articles with a dozen co-authors count for just as much as sole-authored articles or books. So authors who tend to write alone — as most in the humanities do — will have far fewer citations to their names than authors who write in packs of ten or twelve or twenty, because researching and writing alone takes that much longer to do, while the peer-review process is no quicker for it.
The best way to deal with this imbalance is to divide the number of citations of any given publication by the number of authors: if there are 50 citations for a 5-author article, each author gets 10 “weighted-per-author” citations. Simple enough, and seemingly fair. (Determining who is a “lead” author isn’t always possible, so that should probably just be set aside.)
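In code, the weighting is trivial. Here is a minimal sketch in plain Python, using made-up publication data, that reproduces the arithmetic above: 50 citations split among 5 authors comes to 10 apiece.

```python
# Hypothetical publication data: citation counts and author counts
# copied by hand from a Google Scholar search.
publications = [
    {"title": "Five-author article", "citations": 50, "authors": 5},
    {"title": "Sole-authored book", "citations": 30, "authors": 1},
]

def weighted_per_author(citations, n_authors):
    """Split a publication's citations evenly among its authors."""
    return citations / n_authors

for pub in publications:
    share = weighted_per_author(pub["citations"], pub["authors"])
    print(f'{pub["title"]}: {share:.1f} weighted citations per author')

# One author's weighted total across their publications.
total = sum(weighted_per_author(p["citations"], p["authors"]) for p in publications)
print(f"Weighted total for this author: {total:.1f}")  # 10.0 + 30.0 = 40.0
```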
Authors who write in packs might quibble that this “weighting-by-author” works against them: that their work gets out a lot more, so even if they only contributed a minor part of an article, it’s the impact of that article that should count, not the impact of their specific contribution. They can’t help it if they keep good company.
The problem with that argument is not just one of fairness. Authors who write in packs also benefit more from what's been called the "self-citation effect." Authors tend to cite their own work more often than the work of others for the simple reason that they know their own work best. (We'll leave aside the other reason: that they want to promote their own work.)
Citing a Scientometrics study by Aksnes, Scientific American puts it more precisely:
The more authors an article has, the more self-citations it gets. Aksnes found, for example, that articles with one author receive 1.15 self-citations on average, but articles with 10 authors receive 6.7. Someone should really check the percentage of publication self-citations for high-energy physics, with their hundreds of authors per article.
Why it turns out to be 6.7 and not 10 or 11 self-citations is an interesting question; perhaps because authors who know they contributed very little will feel humble enough not to cite themselves, or might not even be aware that the article has come out. The more you publish, and the more authors you publish with, the less you’re aware of what you’ve published.
In any case, it's not clear how one would correct for this self-citation effect in calculating citation impact. There are ongoing debates over how citation practices vary across disciplines, and over the standards by which journal impact should be judged.
“Weighting-by-author” would seem to be the easiest and most reasonable approach for comparing Google Scholar citations across disciplines. And Google Scholar, to my mind, remains the best way to keep track of all scholarly publications for those fields that aren’t well covered by other citation measuring and journal ranking systems.
But since Google took Google Scholar off its main toolbar last year, making it invisible except to those who dig for it, one never knows what the behemoth of search engines has in store for it. I recommend using it while it’s there.
Someday, maybe, they will look at the usefulness of research to people outside the circle of academic peers…