Tyler Cowen points to a paper by Bruce G. Charlton which uses Nobel Prizes as an indicator of trends in research quality. Cowen picks out the fact that the Western U.S. has been gaining at the expense of Europe and Harvard. Charlton writes,
In contrast to the picture of long-term decline in Nobel-prize-winning revolutionary science, UK and European scientific production (as well as that of Chinese science) is probably catching up with the USA in terms of scientometric measures such as numbers of publications and citations [12,13]. This difference between national performance in normal and revolutionary science suggests that the research systems of revolutionary science and normal science are evolving towards separation [3]. Clearly, growth of the two types of science does not always go together.
This anomaly deserves further exploration. I have trouble figuring out how A’s scientists can be as heavily cited as B’s, but with fewer Nobel Prizes. My guess is that there is more to the story.
Charlton, the author, has been a valued commenter on this blog–see here and here, for example.
READER COMMENTS
Barkley Rosser
Dec 31 2006 at 1:44pm
I would guess that the resolution is the long time lag between when research is done and when Nobel Prizes are awarded. The Nobel Prizes reflect the situation of 10-50 years ago, while the trends noted are more recent, from the last few years. So that is a warning that the complacency Tyler Cowen exhibited about this (and the implicit chauvinistic chest thumping, u rah rah, USA Number One!) is misplaced.
Bruce G Charlton
Dec 31 2006 at 2:15pm
AK says: “I have trouble figuring out how A’s scientists can be as heavily cited as B’s, but with fewer Nobel Prizes.”
The answer is probably ‘Biomedical science’.
I’m the author of this paper, and the reason I decided to look at Nobels as an indicator of ‘revolutionary science’ is that our work on numbers of citations per university (hedweb.com/bgcharlton/oxmagarts) was revealing some anomalies among US universities when citation league tables are compared with the Shanghai Jiao Tong world university league tables (which include Nobels as part of the formula).
Our impression is that the most cited work in science is not usually the potential-Nobel ‘revolutionary science’ (which is, after all, rare), but the very best examples of ‘normal science’ especially in biomedicine – for example improved methodologies, large randomized trials, gene sequences etc. Many of the ISI Highly Cited scientists have produced work of this type (isihighlycited.com).
All the frequent-Nobel-prizewinning institutions are highly cited, but not necessarily the _most_ highly cited. Optimizing for citations would probably lead to avoidance of the kind of high-risk strategies necessary to do revolutionary science and win Nobels (and near-equivalent prizes such as the Fields Medal and the Lasker and Turing Awards).
blink
Dec 31 2006 at 6:34pm
Bruce’s comments make sense to me. There may also be a “reading bias”: members of A are more likely to read and cite other A’s and B’s are more likely to read and cite other B’s. (Language preferences, for example, may account for this, as well as opportunities to collaborate. Surely there is at least a small tendency to cite friends and colleagues whenever possible.) If this is true, then we have a situation something like college football in which mediocre teams compile impressive records by beating up on low quality opponents. Perhaps we need a measure of the quality of the papers containing the citations as well – something as uncontroversial as the BCS computer rankings!
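[The idea blink raises — weighting each citation by the quality of the paper that makes it — amounts to a recursive definition: a paper's score depends on the scores of the papers citing it. That recursion can be resolved by power iteration, PageRank-style. Below is a minimal sketch; the four-paper citation graph is entirely invented for illustration, and the damping and iteration parameters are conventional PageRank defaults, not anything from the comment.]

```python
# Toy sketch of a quality-weighted citation score: a citation from a
# highly ranked paper counts for more, computed by power iteration
# (PageRank-style). The graph below is invented for illustration;
# citations["A"] lists the papers that paper "A" cites.

citations = {
    "A": ["C"],
    "B": ["C", "D"],
    "C": ["D"],
    "D": [],  # D cites nothing
}

def quality_scores(graph, damping=0.85, iters=100):
    papers = list(graph)
    n = len(papers)
    score = {p: 1.0 / n for p in papers}  # start with uniform scores
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in papers}
        for citer, cited in graph.items():
            if cited:
                # the citing paper passes its own score to the papers it cites
                share = damping * score[citer] / len(cited)
                for target in cited:
                    new[target] += share
            else:
                # papers that cite nothing spread their weight evenly
                for p in papers:
                    new[p] += damping * score[citer] / n
        score = new
    return score

scores = quality_scores(citations)
# D is cited by B and by C (which is itself cited by A and B),
# so D ends up ranked above C, which ranks above the uncited A and B.
```

Under this kind of weighting, racking up citations from many low-ranked papers helps less than a few citations from highly ranked ones — exactly the adjustment blink's college-football analogy calls for.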
dearieme
Jan 1 2007 at 8:04am
Go on, he was just sucking up to you by calling Economics a “science”.
William Newman
Jan 1 2007 at 9:28am
Arnold Kling wrote “I have trouble figuring out how A’s scientists can be as heavily cited as B’s, but with fewer Nobel Prizes.”
Bruce Charlton offers a good reason. I’ll offer another two possible contributing factors.
First, some work can be so influential that it’s clearly visible even after people stop citing it directly. In physics, if you do a computer search for “quantum monte carlo” (like
http://arxiv.org/find/grp_physics/1/abs:+AND+carlo+AND+quantum+monte/0/1/0/all/0/1)
probably an overwhelming fraction of the papers descend from the “Metropolis Monte Carlo” idea of a 1953 paper by Metropolis and coauthors. That paper was enormously influential, but at some point people stopped being careful to cite it directly. The same effect can apply to Nobel-caliber work: e.g., I expect that by the time Feynman got the prize, people were sometimes using Feynman diagrams in physics papers without citing any paper written by Feynman.
Second, enormous tangles of citations are sometimes largely bypassed, becoming less interesting when someone finds a new approach to the problem. Very good work on vacuum tubes or sulfa drugs was still very good work and still had impressive citation counts, but looks less important after the development of integrated circuits or antibiotics.