Judging a paper’s quality may be hard for human referees, and people are looking for alternatives. For instance, this recent PhysicsWeb news item gives an overview of P. Chen et al.’s article Finding Scientific Gems with Google, where the authors take advantage of Google’s page rating algorithm to assess the relative importance of all publications in the Physical Review family of journals from 1893 to 2003. Since the rating algorithm weights pages by number of referrers, in principle there’s no value added over traditional citation indexes: both popularity measures are linearly correlated. The catch is that there are exceptions: papers that are not widely cited but that, judging by the number of pages linking to them, seem to be much more influential than one would think (the article mentions quite a few, Feynman and Gell-Mann’s paper on the Fermi interaction being an example). Amusing; although i must confess that this kind of democratic assessment of our scientific endeavours reminds me somewhat of a well-known Planck dixit:
A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.
Max Planck, 1858–1947
(i remember i jotted down this same quotation some twenty years ago, together with a note showing my skepticism… nowadays i think i’m much more of a planckian than i used to be.)
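By the way, for the curious: the gist of PageRank is that a link counts for more when it comes from a highly ranked page, which is exactly what lets it spot gems that raw citation counts miss. The idea fits in a few lines of Python (a toy power-iteration version with a made-up four-paper citation graph, of course, nothing like the real thing):

```python
# Toy PageRank via power iteration -- a sketch of the idea only, not
# Google's (or Chen et al.'s) actual implementation.
def pagerank(links, damping=0.85, iters=100):
    """links: dict mapping each node to the list of nodes it points to."""
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        # every node gets a small baseline, plus shares from its referrers
        new = {node: (1.0 - damping) / n for node in nodes}
        for node, targets in links.items():
            if targets:
                # a node splits its own rank evenly among the pages it links to
                share = damping * rank[node] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # dangling node (no outgoing links): spread its rank evenly
                for t in nodes:
                    new[t] += damping * rank[node] / n
        rank = new
    return rank

# Hypothetical citation graph: A and B both cite C; C cites only D.
citations = {"A": ["C"], "B": ["C"], "C": ["D"], "D": []}
ranks = pagerank(citations)
```

In this little graph D is cited just once (by C) while C is cited twice, and yet D ends up with the higher rank, because its single citation comes from an important source: precisely the kind of hidden gem the algorithm is after.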
Returning to our electronic referees, over at PhysOrg there’s a story about how computer science may help us detect bogus papers, where by bogus i mean automatically generated ones (it looks like our human referees do sometimes find their task really hard!). Probably the most popular case of such a prank was the article accepted at WMSCI 2005 whose author turned out to be SCIgen, an automatic paper generator created by the guy in the portrait on the right. And our field is not immune to similar problems, as exemplified by the amusing Bogdanoff Affair (besides, as you’ll see, most probably no computer program would be of much help in this case).
Of course, i’m oversimplifying: see here for the full story behind Google’s PageRank.
Update: Andrew Jaffe, in his excellent blog (recommended, but you probably already knew it), has some interesting thoughts on peer review, and on a recent initiative by Nature to open a debate on the issue and look for ways to improve it.