Archive for the ‘Relativity’ Category


November 30, 2009

Some months ago, FQXi ran an essay contest on the nature of time. Many interesting articles were submitted, and most if not all of the awarded ones are worth reading. Perhaps my favourite among them is Carlo Rovelli’s Forget time (PDF). You can read an abstract and some reader comments here.

Rovelli’s is a thought-provoking and quite fun to read article (i happen to like Rovelli’s writing quite a bit). The main idea is to get rid of a singled-out time variable in the Hamiltonian formulation of general relativistic mechanics and, by extension, quantum mechanics. It is argued that our usual time parameter, as it is used in Newtonian and quantum mechanics, as well as in special relativity, is not well-defined in a general relativistic context. Therefore, it must be replaced by a notion of coordinated events that form a configuration space. Physical systems follow special orbits in the configuration space, often parametrizable by a finite set of state variables (think for instance of the amplitude and phase of a pendulum), so that we can pair events and describe the evolution of one in terms of another. These special orbits are obtained from a variational principle, derived from a Hamiltonian function. When the latter has a separable time variable we’re in a classical, non-relativistic regime. But this is not usually the case. It is then shown how our everyday notion of time can be given a statistical interpretation, and derived in terms of the Gibbs theorem and the postulate of a Gibbs distribution for equilibrium states.
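As a toy illustration of this timeless, relational description (my own sketch, not taken from Rovelli’s paper), consider a unit-mass, unit-frequency harmonic oscillator: its orbit in phase space can be written entirely as a relation between position q and momentum p, with no time variable anywhere.

```python
import math

# Timeless description of a harmonic oscillator (m = omega = 1):
# the orbit is the energy surface p**2/2 + q**2/2 = E, so on the
# upper branch we can express p directly as a function of q.
def p_of_q(q, E):
    """Momentum along the orbit as a function of position."""
    return math.sqrt(max(2.0 * E - q * q, 0.0))

E = 0.5  # orbit of amplitude sqrt(2E) = 1
for i in range(-100, 101):
    q = i / 100.0
    p = p_of_q(q, E)
    # every (q, p) pair lies on the same energy surface, with no t in sight
    assert abs(0.5 * p * p + 0.5 * q * q - E) < 1e-12
```

Of course this only works because the orbit is so simple; Rovelli’s point is that such relational descriptions survive in contexts where a preferred t does not.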

While i don’t feel really qualified to properly criticise Rovelli’s approach, i must say that it sounds reasonable and quite beautiful. Julian Barbour’s The nature of time also seeks to get rid of time as a fundamental concept by defining it as a (quite different) derived quantity, although i don’t find his arguments as compelling; the same happened to me with his book The end of time. And of course there are other physicists with some serious arguments in the opposite camp: Sean Carroll’s essay What if time does really exist? in the same contest, and Lee Smolin’s survey article The present moment in quantum cosmology: Challenges to the arguments for the elimination of time are some of the readings that could help you make up your mind (or, if you’re like me, increase your incertitude!).

Or you can also watch all the talks in the seminar held at the Perimeter Institute last year, The Clock and the Quantum. Although i haven’t had time to do much more than skim two or three videos (for instance, Barbour’s and Roger Penrose’s), it looks like a pretty interesting set for those of you wondering what this queer thing we call time is.

Lisa Randall talks with Charlie Rose

December 29, 2006

The first half of this Charlie Rose Video (hat tip Edge) is devoted to an interesting, if introductory, interview with Lisa Randall where she talks about physics and higher dimensions. Entertaining.

(I guess Rose is famous in some part of the world, but i’d never heard of him before. He is a good interviewer, and i like the minimalist stage with its zen-like black background, and the use of full-screen slides every now and then.)

The fifth way

August 1, 2006

Five-dimensional physics, P. Wesson

I’ve been reading a bit about modern Kaluza-Klein-like theories of gravity, as advocated (among others) by the people of the ‘Space-Time-Matter consortium‘. Although i don’t know yet whether i buy all their arguments (and i actually have some reservations, more on them later), these ideas are so elegant and simple that i’d like to share a bit of what i’ve learnt so far. Thus, this post is not an endorsement of the physics behind five-dimensional theories, but rather an exposition (hopefully understandable to non-specialists) of what i like about them [1].

As you surely know, multi-dimensional space-times are nowadays routine to string theorists (who use models of up to 26 dimensions, ten and eleven being also popular choices), but i’ve found little, if any, motivation to take them seriously; surely because of lack of study on my part, but also because i’ve never read an argument making them feasible from a physical point of view. Extra dimensions are needed in string theory to have consistent supersymmetry, and to avoid divergences of the vacuum zero-point field or get a massless photon. Now, that may be a perfectly physical and intuitive motivation for some of you (and one can even argue that the dimensionality of the world is being derived from the theory), but definitely not for me: my rusty intuition says that obtaining up to 22 extra dimensions and having to hurriedly sweep them under the rug (er, i mean, compactify them) is a strong hint to go look for better ways [2]. More akin to my old-fashioned ways is the path followed by Paul Wesson: let’s just add an extra dimension, take it at face value and see if it makes any sense. Of course, the idea of a fifth dimension does not come out of the blue, so let me start with a bit of motivation.

Wood versus Marble

General Relativity singles out gravitational forces by converting them into pure geometry. Gravitation is no longer an interaction comparable to electromagnetic, strong or weak forces, but, rather, the manifestation of the structure of space-time [3]. As a consequence, the relativistic world is split in two quite dissimilar parts: geometric entities (represented by the space-time metric or, if you prefer, the connection vierbein) and energy/matter/fields (represented by the stress-energy tensor). These are by no means worlds apart; on the contrary, they’re deeply interconnected via Einstein’s equations:

G = k T …….. (1)

The stress-energy tensor

On the left hand side, one encounters magnitudes related exclusively to the geometry of space-time: Einstein’s tensor G is constructed from the metric g and its derivatives (up to second order), which describe the curvature of space-time (G is constructed from the Ricci tensor, which in turn derives from Riemann’s, or curvature, tensor R). On the right, we find the world of matter, energy and fields: the stress-energy tensor T describes the pressure, stresses, density of energy and its flux, i.e., how matter moves through space-time (if you’re new to metrics, tensors and GR, see this nice page for a quick, conceptual introduction, or this one, with beautiful ASCII art). Energy influences geometry, which influences energy, and so on and so forth, in a highly non-linear way. But we still have two differentiated kinds of stuff, which Einstein was fond of calling marble and wood. He spent many years trying to get rid of wood and to find a unified theory containing only geometric entities; five-dimensional theories try to push forward towards that ivory tower.
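Spelled out in index notation (writing the proportionality constant k as the usual κ), equation (1) reads:

```latex
G_{\mu\nu} \;=\; R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu}
\;=\; \kappa\, T_{\mu\nu},
\qquad \kappa = \frac{8\pi G}{c^{4}}
```

where R_{μν} is the Ricci tensor and R its trace, the curvature scalar.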

Before jumping to a fifth dimension we can get a glimpse of how geometry can play a role similar to that of matter fields by taking a look at the famous cosmological constant. The Einstein tensor above is carefully constructed to ensure that its covariant divergence vanishes; as a consequence, matter fields satisfy a generalised conservation equation, i.e., the relativistic version of classical energy, momentum or charge conservation. But one can add a term proportional to the metric to the left hand side of (1) while still preserving this desirable property: the constant of proportionality is the cosmological constant. Introducing it, and considering the case of empty space (vanishing T), we can rewrite Einstein’s equations as

G + L g = 0 …….. (2a)

Everything’s marble here, but we can rewrite the equation above by moving the cosmological term to the right hand side

G = -L g …….. (2b)

Compare with (1): Lg plays the role of a stress-energy tensor, made up of marble, which can be seen as representing the energy associated with vacuum. Admittedly, it’s a peculiar form of energy, because it represents a negative pressure, but it can be used to explain (more or less) dark energy and the acceleration of the universe’s expansion. But what about other fields, like the electromagnetic field? Can we recast them using geometric quantities? As long as we stick to four dimensions, we have no spare geometric terms at hand. Enter the fifth dimension.
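Comparing (2b) with (1) term by term (and writing L as Λ), the vacuum term behaves exactly like a perfect fluid with equation of state p = −ρ, in units where c = 1:

```latex
T^{\mathrm{vac}}_{\mu\nu} \;=\; -\frac{\Lambda}{\kappa}\, g_{\mu\nu}
\quad\Longrightarrow\quad
\rho_{\mathrm{vac}} = \frac{\Lambda}{\kappa},
\qquad p_{\mathrm{vac}} = -\rho_{\mathrm{vac}}
```

which is the peculiarity mentioned above: positive energy density, but tension instead of pressure.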

A world without matter

The metric tensor in N dimensions consists of N(N+1)/2 independent functions. Therefore, going to five dimensions buys us 5 more geometric entities to play with, which we can make the cause of, say, electromagnetism. Tensor equations such as those in the previous section do not depend on the number of dimensions: one just gets more components as we go from four to five of them. The trick is to group on the left the terms making up, say, the Einstein tensor in four dimensions, and to put on the right hand side the rest (containing terms arising from the fifth dimension), which can now be reinterpreted as stress-energy arising from geometry. So, if we start with a 5D universe without matter, its corresponding Einstein equations are

G[5] = 0 …….. (3)

(where we explicitly denote the dimensionality). A subset of the equations represented by (3) can be algebraically re-arranged as

G[4] = T …….. (4)

where the new tensor T is made up of all the terms appearing in (3) that include the new metric functions associated with the new dimension. Thus, if one happens to live in a 4-dimensional subspace of the 5-dimensional universe, 4-dimensional observations will be naturally interpreted using (4). That is, to these lower dimensional beings, higher-dimensional geometric effects look wooden! But there’s more: as i mentioned, (4) is just a subset of (3), and the rest of the equations (involving, again, terms relative to the extra dimension) can be reinterpreted as conservation/evolution equations for our geometrically induced matter fields.
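A quick sanity check of the component counting behind this construction (a trivial sketch, but it makes the bookkeeping explicit):

```python
# A symmetric N x N metric tensor has N(N+1)/2 independent functions,
# so stepping from four to five dimensions frees exactly 5 new
# geometric quantities to be reinterpreted as stress-energy.
def metric_components(n: int) -> int:
    return n * (n + 1) // 2

print(metric_components(4))  # 10 functions in the usual 4D metric
print(metric_components(5))  # 15 functions in five dimensions
print(metric_components(5) - metric_components(4))  # 5 spare ones
```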

Kaluza and Klein

Kaluza and Klein were the first, during the 1920s, to apply these ideas, incorporating the electromagnetic potential as 4 of the 5 extra available metric coefficients [4] (for a quick summary of the maths involved, see this nice page by Viktor Toth [5], or this excellent paper by Jeroen van Dongen on Einstein’s reaction to Kaluza and Klein’s ideas). By further stipulating that the old 4D metric functions are independent of the new coordinate, the right hand side of (4), which contains combinations of the 5D metric functions and their derivatives, takes the form of the EM stress-energy tensor. And the additional conservation equations coming from (3) are Maxwell’s equations. Beautiful as this result is, people felt a need to explain where the additional spatial dimension is hiding [6]. While Kaluza was at first happy with just making the 4D fields of the theory independent of the fifth dimension by prescription, Klein proposed what is still today the most popular way of hiding extra dimensions: compactification. That is the famous hose pipe model, where the fifth dimension is finite and curled into a circle, giving rise to a slim cylinder: the extra dimension is there, but we cannot see it, in the same manner as we would take the hose pipe for a one-dimensional line when seen from a distance. This model has further interesting properties, on account of the periodicity of the additional dimension; for instance, constraining electromagnetism to a circle immediately explains charge quantization, as waves directed along a finite axis can only have discrete frequencies. Unfortunately, the same calculations give a mass for the electron which is wrong by twenty-two orders of magnitude, so this line of inquiry was soon abandoned.
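Those twenty-two orders of magnitude are easy to reproduce on the back of an envelope: a mode wrapped n times around a circle of radius R carries momentum p_n = nħ/R and hence a mass m_n = nħ/(Rc). The sketch below (my own numbers, taking a Planck-length radius as the assumed compactification scale) compares the first such mass to the electron’s:

```python
# Kaluza-Klein mass scale for a Planck-sized compact fifth dimension.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
l_planck = 1.616e-35     # Planck length, m (assumed radius R)

m_1 = hbar / (l_planck * c)      # mass of the first KK mode, kg
m_electron = 9.1093837e-31       # electron mass, kg
ratio = m_1 / m_electron
print(f"first KK mode: {m_1:.2e} kg, {ratio:.1e} electron masses")
# the ratio comes out around 1e22: twenty-two orders of magnitude off
```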

Compactification lives on in today’s theories of unification in string theory, where, however, the pure marble world idea has been abandoned: we have stress-energy contributions not coming from geometry, but from matter, or string, fields (see for instance Michael Duff’s writings, including his fun Flatland, Modulo 8, or Kaluza-Klein theory in perspective). But there’s a second perspective, namely, to take the extra dimensions at face value, without necessarily compactifying them, and assume that nature is only approximately independent of them. At the same time, we stick to the idea of avoiding non-geometrical quantities, deriving the stress-energy contents of the four-dimensional subspace exclusively from 5D geometry. This is the approach championed by P. Wesson’s Space-Time-Matter (STM) theory, to which we turn our attention in the following sections.

It’s a small, embedded universe

Vanishing matter

It is not difficult to prove that any solution to Einstein’s equations (1) in N dimensions can be recast as an empty N+1 dimensional space satisfying the source-less equations (3). This purely mathematical result, known as Campbell’s theorem, allows us to reinterpret our 4D universe with its fields and matter contents as a subspace of a wider, five-dimensional, and empty one. Now, one of the key features of Einstein’s General Relativity is gauge invariance, that is, the fact that all relevant physical quantities and equations are independent of the coordinate system we choose to express them. This invariance is still valid in the whole 5D theory, but when we restrict ourselves to a 4D subspace we lose it: physics in four dimensions depends on how we choose to embed our perceived world in the five dimensional universe. Different embeddings yield different dependencies of the 4D fields on the fifth coordinate and, therefore, different physics. Thus, if we start with a flat 5D universe where all test particles move along null geodesics, what we actually perceive in our limited 4D view depends on how our 4D hypersurface is embedded in that flat space-time. While i find this situation a bit disturbing (i have yet to make up my mind as to how disturbing), it is arguably not new: in regular GR accounts of solar system observations or universe models we routinely use very specific coordinate systems to match our measurements with the results of the theory (for instance, we could choose a co-moving coordinate system in our cosmological models and expansion would suddenly disappear).

FLRW embedded in five dimensions

At any rate, there are quite a few interesting ways of embedding our 4D slice in five dimensions that provide new perspectives on our (so to speak) everyday physics. For instance, the existence of an extra dimension can make big-bang singularities vanish, due to the fact that the geometry-induced stress-energy tensor comes with a time-varying cosmological constant. In a nutshell, the 4D big-bang becomes a coordinate singularity (due to an ill-chosen 4D coordinate system) which is regular in five dimensions. The figure on the left shows such an embedding, discussed in full in this article by Seahra and Wesson (in other models, the big-bang can also be interpreted as a shock-wave in a curved, if empty, five-dimensional universe). We already see here one of the interesting traits of these theories: since the fifth dimension is not a priori hidden, observables such as the cosmological constant can depend on time.

The fifth force

In vanilla General Relativity, one is often interested in the motion of test particles, that is, particles whose associated stress-energy tensor is so tiny that it does not contribute as a source in the right hand side of (1). These free-falling particles, characterized by their rest mass, follow geodesics of the underlying metric (the curved-space generalisation of straight lines in Euclidean space) and serve as a probe of the geometry of space-time. As we have known since Galileo, the equation of motion of a free particle in flat space-time simply states that its velocity, v, is constant:

dv/dt = 0

The relativistic generalisation is the geodesic equation, which substitutes for the 3-velocity its four-dimensional version u, whose spatial components are still v and whose temporal component measures proper time s, i.e., time as measured from the point of view of the moving particle. In addition, one has to take into account the curvature of space-time, which plays funny games with parallel transport, and substitute covariant derivatives for regular ones, so as to capture the effects of geometry on the velocity vector when it gets moved in a curved background. Thus, one can write our free-falling equation of motion as

Du/ds = 0 …….. (5)

where the capital D is not a typo, but denotes covariant differentiation. We can let our test particle interact with other fields: for instance, if it’s charged and there’s an electromagnetic field in the region of space-time that it traverses, the particle will no longer follow a geodesic, but deviate according to an equation of the form

D(mu)/ds = f …….. (6)

where i have included the constant rest mass (m) and f denotes the force four-vector, which describes the interaction of the test particle with the field (this is just the natural generalisation of Newton’s second law). But let’s go back to our five-dimensional universe. Since there are no matter fields in there, geodesic motion is described by the source-free equation (5), where u is now the five-velocity. We can again play the trick we used with Einstein’s equations and split the geodesic equation to see how it looks in our 4D embedding. The result is that motion as viewed in the four dimensional slice is not geodesic, but satisfies an equation of the form (6) with a geometry-induced force. Not only that, the rest mass can also depend on the fifth dimension; actually, with an appropriate embedding the mass is the extra dimension. Another intriguing possibility is that null geodesics in five dimensions (representing causally connected paths) can appear as time-like or even space-like curves in four dimensions, so that apparently unconnected events in 4D are actually causally related in the wider five dimensional universe. In such a model, what we take for two separate particles could be reinterpreted as a single one following a 5D geodesic, making true the old speculation by Feynman and Wheeler according to which there are not many electrons, but a single one travelling in time. Bizarre as all this may sound at first, the math is relatively straightforward and, if nothing else, may serve to hone your differential geometry skills: see for instance this paper by Wesson and Seahra for details.
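For reference, expanding the covariant derivatives via the Christoffel symbols Γ gives the coordinate form of the free (5) and forced (6) equations of motion:

```latex
\frac{D u^{\mu}}{ds}
 \;=\; \frac{d u^{\mu}}{ds}
 \;+\; \Gamma^{\mu}_{\alpha\beta}\, u^{\alpha} u^{\beta} \;=\; 0,
\qquad
\frac{D (m\, u^{\mu})}{ds} \;=\; f^{\mu}
```

with the indices running over four or five values depending on the dimensionality we work in.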

We encounter a by now familiar theme: the actual form of the fifth force and mass function is gauge dependent, i.e., it varies as we change the coordinate system used to describe the four dimensional subspace. Moreover, the embeddings seem to be mutually incompatible: the one where you get rid of big-bang singularities is different from the one that tries to reveal the nature of inertia, or (as we mention below) the quantum mechanical aspects of the four-dimensional subspace. But, again, i must concede that in regular GR we express different experimental outcomes using different coordinate systems, so maybe the situation is not as ugly as it seems at first. At any rate, these 5D gauge-dependent models have the redeeming quality of making concrete predictions which are, in principle, measurable, so that we should be able to let experiment decide for us, just as in the good old days.

Further reading

Besides the book linked above, there’s a quite long list of articles on STM and 5D gravity by Wesson and friends. A very good discussion of all the above topics, and then more, is the review paper Kaluza-Klein Gravity, which also discusses alternative multi-dimensional theories like strings or Randall and Sundrum’s braneworlds. Interestingly, it has been shown that the latter are algebraically equivalent to 5D STM, although their physical motivation is quite different (in Randall’s branes, stress-energy is not geometrical, but caused by regular fields constrained to the 4D subspace). Also worth reading are Wesson’s musings on the connection between 5D physics and 4D indeterminism (see, for instance, Space-Time Uncertainty from Higher-Dimensional Determinism), which i haven’t commented on further because i don’t really understand them and, probably as a result, find too speculative (as far as i can see, quantization rules are introduced by hand; more on this when i find time to study it properly!).

All in all, higher-dimensional physics is intriguing and, i’d say, worth investigating further. I don’t have a problem with unseen dimensions, as long as they introduce new physics or provide explanations for ill-understood known phenomena (with due respect to Occam). After all, we do not see time, or atoms for that matter, but use them every day because they explain in simple terms what we actually see. The theory’s ability to make testable predictions (variable rest-mass, variable cosmological constant, etc.) is undoubtedly a strong incentive to further study, as is its simplicity and (to a geometer’s eyes) elegance, albeit the latter is somewhat tainted by the dependency of physics on the coordinate system used to describe the 4D embedding. In a sense, the 5D universe is playing the part of an absolute space-time (is that Newton laughing?), not a very fashionable thing these days. But, as Feynman said [1], the question is not whether we like Nature…


[1] During his Auckland lectures on QED, Richard Feynman was asked about his opinion on quantum indeterminism. His answer was that the point was not whether he liked or not how nature was, but, rather, how nature actually behaves. Caveat emptor.

[2] Feynman again, in a quotation that can be taken either to back or to debunk my critique, according to taste:

Let me say something that people who worry about mathematical proofs and inconsistencies seem not to know. There is no way of showing mathematically that a physical conclusion is wrong or inconsistent. All that can be shown is that the mathematical assumptions are wrong. If we find that certain mathematical assumptions lead to a logically inconsistent description of Nature, we change the assumptions, not Nature (Feynman Lectures on Gravitation).

[3] This is, by the way, one of my difficulties with unification attempts along the lines of QFT, for gravitation is not, in an essential way, a force in the sense of QFT. Therefore, trying to second-quantize it in the same way as electroweak or strong interactions has always struck me as misguided in a fundamental way, as long as one takes the lessons of GR at face value. My first gut reaction was to consider QFT on curved space-time much more appropriate, although (for various reasons that i leave for a future post) it can be just a first approximation. If one is to quantize anything related to gravity, it feels more natural to quantize space-time, along the lines of, for instance, Loop Quantum Gravity. That’s also the reason for my being skeptical when people claim that string theory contains GR, meaning that a weak field perturbation of the flat background has to have spin 2.

[4] Actually, the first attempt at explaining electromagnetism by means of an extra dimension predates General Relativity. In 1914, Gunnar Nordström made such a proposal, in the context of his own theory of gravitation (see also this Wikipedia page for a quite detailed and pedagogical account).

[5] If you’re interested in CAS systems able to tackle tensor calculus, you may be also interested on Viktor’s work with Maxima, a quite nice free CAS.

[6] Arguments in favour of three spatial dimensions date back, at least, to Pythagoras:

For, as the Pythagoreans say, the world and all that is in it is determined by the number three, since beginning and middle and end give the number of an ‘all’, and the number they give is the triad. And so, having taken these three from nature as (so to speak) laws of it, we make further use of the number three in the worship of the Gods. Further, we use the terms in practice in this way. Of two things, or men, we say ‘both’, but not ‘all’: three is the first number to which the term ‘all’ has been appropriated. And in this, as we have said, we do but follow the lead which nature gives. (Aristotle, De Caelo.)

Some centuries later, Kepler’s theory explaining the solar system in terms of platonic solids (which unsettlingly reminds me of modern attempts at explaining everything in particle physics in terms of symmetry groups and their representations) depended on a three dimensional world, which Kepler linked to the Holy Trinity (for an entertaining discussion of Kepler’s physics in the context of extra dimensions, see this article in The Physics Teacher).

The dimensionality of the world

July 17, 2006

Although Bee has recently written an amazing and thorough article over at Backreaction with virtually everything one needs to know about extra dimensions in physics, let me add a sort of footnote in the form of some naive musings, a couple of links and a Hertzian digression in this somewhat iffy post.

Multiple dimensions and the problem of time
As a student, i was in love with Kaluza-Klein theory and its extremely elegant explanation of electromagnetism as the purely geometrical effect of a fourth spatial dimension. The really magic thing is that the electromagnetic energy-momentum tensor (in four dimensions) arises as a consequence of an empty five-dimensional space where particles follow geodesics; in other words, photons are purely geometry, just as gravitational forces. The problem, of course, was to explain why we don’t measure that fifth dimension. Kaluza just prescribed that no physical quantity depended on it, while Klein tried a somewhat more satisfactory solution by compactifying it to an unobservable size, and making it periodic, just as the second dimension of a long hose, which becomes one-dimensional when seen from a distance. Unfortunately, this beautiful picture seemed to lead to insurmountable difficulties with chirality or the mass of the electron, unless one goes the string way and adds more compact dimensions to our universe. I thought Kaluza-Klein theories were all but abandoned in their original 5-dimensional form these days, but following some links in the recent review article by Orfeu Bertolami, The Adventures of Spacetime, proved me utterly wrong. There’s been quite a lot of activity in the area during the last decade, leading even to a Space-Time-Matter consortium, a sort of physicists’ club promoting 5-dimensional gravity theories without compactification. The consortium is coordinated by P.S. Wesson, and has quite a few members and interesting publications: see for instance this comprehensive review of KK theories of gravity for an introduction to Wesson and friends’ ideas. What i find compelling about their approach (and what, at the same time, of course reveals my prejudices) is that they tackle multidimensional physics from the point of view of general relativity, rather than particle physics.
However, i guess that a word of caution is in order: i’ve read very little about these (to me) novel approaches to KK theories, and i’m not yet ready to endorse them; if they were right (and i definitely wish they were), they’d be quite revolutionary: for instance, they explain quantum indeterminacy as a result of particles travelling in higher dimensions… that’d be extremely cool (and actually make real one of my silly ideas of old), but perhaps too cool to be true? Well, i’ll leave it for you to decide (as for me, i think i’m going to read Wesson’s book, Five Dimensional Physics, in case student dreams really can come true!).

Returning to Bertolami’s paper, let me mention that it is part of a forthcoming book entitled Relativity and the Dimensionality of the World, the good news being that the above link points to freely available versions of many of its chapters, written by various authors, including Wesson and G.F.R. Ellis. The latter writes about his rather original ideas on time in General Relativity, and the Block Universe idea, familiar to all relativists, of a world represented as a frozen 4-dimensional whole. Ellis observes that such a representation clearly suggests that time is an illusion: the entire universe just is. The problem is that such a view seems incompatible with irreversible, macroscopic phenomena, as well as with the fundamental indeterminism inherent to quantum mechanics. To take into account these facts of life, Ellis proposes an Evolving Block Universe: time passes; the past is fixed and immutable, and hence has a completely different status than the future, which is still undetermined and open to influence; the kinds of `existence’ they represent are quite different: the future only exists as a potentiality rather than an actuality. The point being that our regular, predictable universe models are based on too simplistic assumptions and oversimplified systems, and that taking into account realistic, emergent ones renders the future under-determined. Although very interesting from a philosophical point of view, Ellis’s ideas need much fleshing out before becoming a solid theory of anything. But still, he makes many a fine point, and asks quite a lot of good questions worth thinking about.

A digression: Hertz’s mechanics
Finally, Bertolami’s paper drew my attention to a facet of Heinrich Hertz’s work i was totally unaware of, namely, his contributions to the interpretation of classical mechanics. After gaining a place in the history of physics with his experimental confirmation of the existence of electromagnetic waves, and before his tragic death when he was only 37, Hertz wrote a book, The Principles of Mechanics Presented in a New Form, where he proposed a formulation of Newtonian physics freed of forces, using instead a variational principle. According to Hertz’s principle, particles move along paths of least curvature, where the (three dimensional) metric is defined by constraints instead of forces. Similar principles were proposed by Gauss and d’Alembert before Hertz, but the latter was notable (if only ephemerally) for pushing to the forefront a view of space-time defined by matter in a purely relational, Leibnizian fashion: Hertz tries to derive his system of the world from material particles alone. Unfortunately, i’ve found little information on-line on Hertz’s ideas, which seem to be better known to philosophers due to their influence on Wittgenstein (who directly mentions Hertz in his Tractatus). For those of you with a philosophical soft spot, this paper presents a re-interpretation of some of Wittgenstein’s ideas under a Hertzian perspective. As a physicist, i find Hertz’s ideas interesting almost only as a historical curiosity, and don’t know how relevant they really are to modern epistemology: comments welcome! ;)

Getting Schwarzschild right

July 9, 2006

A recent post over at sci.physics.research mentioned some papers (available at the arxiv) claiming that we’ve been misinterpreting Schwarzschild’s solution all these years; more concretely, that the event horizon (and, therefore, static black holes, among other things) is just a mirage produced by ill-chosen coordinates. As it happens, it’s these articles’ authors who are misinterpreting coordinate changes, as shown in the answers to the original post. In particular, i strongly recommend T. Essel’s post, Flogging the Xprint, to all students of General Relativity, as a beautiful tutorial on the meaning of coordinates in GR, a really tricky issue (as shown by the elementary errors disclosed even in published papers). The thread also includes a bit about the interplay between coordinates and topology and, come to think of it, is a good read also for those of you familiar with the field, if only for the fun of it.

Essel finishes his long post with a reflection worth mulling over:

The fact that such obviously wrong papers continue to be produced, published, and cited is dismaying, because one major goal of providing electronic archives is to make it easier to find/obtain/study relevant previous work, yet this kind of rampant repetition of old errors suggests that some “researchers” have forgotten that -reading- is the most important part of library research!

This raises a disturbing question: by drastically lowering the threshold of pain involved in simply -finding- and -obtaining- relevant prior work, while leaving unaltered the threshold of pain involved in -reading- what one has obtained, has the advent of the arXiv had the unexpected and paradoxical effect of -decreasing- knowledge of the research literature
among researchers? If we make “the easy part” of library research -too easy-, will the next generation fail to take the trouble to read the contents of our libraries (that’s “the hard part” of library research), on the grounds that actually -reading- the literature would constitute an unacceptable burden on the time and energy of busy scholars?

I have no easy answers to these questions, other than my feeling that keeping up with current research in anything but an extremely narrow and specialised area is an overwhelming (if not plain impossible) task. As a side effect, i’ve noticed that i’ve lowered my crackpot-detection threshold by a considerable degree, which bothers me a bit because i think that really new ideas will probably look, on a superficial reading, to be on the verge of crackpotism. Sifting wheat from chaff has always been a problem, but who ordered so much chaff?

Update: Malcolm MacCallum has published yet another refutation of Antoci et al.’s claims about event horizons. Very instructive.

GP-B video lecture

June 10, 2006

As you surely know, the Gravity Probe B experiment measures, very precisely, tiny changes in the direction of spin of four gyroscopes contained in an Earth satellite orbiting at 400-mile altitude directly over the poles, comparing their values to those predicted by General Relativity. During the 50-week science phase and the 7-week instrument calibration phase of the GP-B mission, which lasted from August 2004 to September 2005, it collected over a terabyte of experimental data. The data analysis phase currently underway will culminate, by next year, in an amazing effort spanning more than four decades. The GP-B Stanford site is just impressive, containing everything you’ll ever want to know about the experiment, from an introductory General Relativity Q&A or a series of beautiful lithographs explaining the experiment to relevant scientific papers and directions to build your own GP-B spacecraft.
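For the curious, the geodetic (de Sitter) part of the precession GP-B measures can be estimated on the back of an envelope. The script below is my own sketch, not anything from the GP-B site: it uses the standard textbook formula for a circular orbit and ignores the much smaller frame-dragging term.

```python
# Rough estimate of the geodetic precession rate for a gyroscope in a
# circular polar orbit at GP-B's altitude. Constants are textbook values;
# treat the result as an order-of-magnitude check, not a mission figure.
import math

GM = 3.986004e14      # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8      # speed of light, m/s
r = 6.371e6 + 642e3   # orbit radius: Earth radius + ~642 km (400 mi) altitude, m

omega_orbit = math.sqrt(GM / r**3)                  # orbital angular velocity, rad/s
omega_geo = 1.5 * (GM / (c**2 * r)) * omega_orbit   # geodetic precession, rad/s

# convert to milliarcseconds per year
mas_per_year = omega_geo * 3.156e7 * (180 / math.pi) * 3600e3
print(f"{mas_per_year:.0f} mas/yr")  # lands in the ballpark of GR's ~6600 mas/yr prediction
```

The frame-dragging (Lense–Thirring) signal GP-B also hunts for is roughly two orders of magnitude smaller, which is what makes the gyroscopes’ precision so remarkable.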

Thus, there’s really no point in duplicating that well-organized and excellently presented information here; my point is rather to recommend the whole site to the few of you who didn’t know it, and to draw everyone’s attention to a recent addition: an entertaining public lecture by the mission’s principal investigator and instigator, Francis Everitt. Targeting a non-specialist audience, Everitt covers the following ground:

  • Testing Einstein
  • The invention of many new technologies
  • Collaboration between university departments
  • Highly successful student involvement in a long-running space program
  • A remarkable range of spin-offs, some of which made possible other NASA missions, including IRAS, COBE, WMAP, and the Spitzer telescope
  • Collaboration between NASA, academia, and industry
  • The challenge of managing a flight program with a very highly integrated payload and spacecraft

Or, if you don’t feel like going to the movies, read Everitt himself explaining this amazing journey in a recent interview.


June 3, 2006

Reading the recent article Electrons Act Like Waves (from the Physical Review Focus series, a highly recommended, lay[wo]man-friendly feed for your newsreader), i've discovered one of those peculiar stories that make the history of physics even more enjoyable than its purely scientific side alone would.

The article tells the story of Davisson and Germer's discovery of the so-called wave-like nature of electrons. As explained in every textbook, they set up an experiment consisting of scattering electrons off a (nickel) crystal, and observed the familiar fringes that one obtains when a wave crosses a grating of alternating slits in a wall.

It was 1927, and i always pictured Davisson and Germer as intrepid experimenters boldly trying to confirm de Broglie's 1924 ideas about the wave nature of matter [1]. (Justly enough, an idealisation of this experiment has become the de facto standard presentation of the quantum mechanical world!) The funny thing is that this romantic picture has nearly nothing to do with what really happened. As it turns out, D&G were looking for evidence of the atomic structure of metals and knew nothing about de Broglie. After the experiment had been going on for a somewhat sterile period, one of their widgets broke and overheated the nickel plates, which crystallised and made (when used again for scattering electrons) the interference patterns apparent. The experimenters were utterly bewildered, and only after Davisson discussed his results with other colleagues during a holiday in England did they realize the importance of their discovery.
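The textbook numbers behind the experiment are easy to check. Here's a sketch of mine (not from the article) using the standard de Broglie relation and the nickel plane spacing known from X-ray work, which recovers the famous diffraction peak D&G saw near 50 degrees with 54 eV electrons:

```python
# de Broglie wavelength of ~54 eV electrons, checked against first-order
# Bragg-style diffraction off nickel's atomic planes (d ~ 2.15 angstrom).
# All values are the standard textbook ones; this is an illustration only.
import math

h = 6.62607e-34    # Planck constant, J*s
m_e = 9.10938e-31  # electron mass, kg
eV = 1.60218e-19   # joules per electron-volt

E = 54 * eV                       # electron kinetic energy used by D&G
lam = h / math.sqrt(2 * m_e * E)  # de Broglie wavelength, m
d = 2.15e-10                      # nickel atomic-plane spacing, m

theta = math.degrees(math.asin(lam / d))  # first-order maximum: d*sin(theta) = lambda
print(f"lambda = {lam*1e10:.2f} angstrom, peak near {theta:.0f} degrees")
```

That the accidental crystallisation produced planes of just the right spacing for electron wavelengths at laboratory energies is part of what made the serendipity possible.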

That's serendipity at its best. And, of course, it was neither the first nor the last time that serendipity gave physicists a helping hand. The Oxford Dictionary gives a precise definition of this beautiful word:

serendipity noun the occurrence and development of events by chance in a happy or beneficial way.

or, even better, this one from Julius H. Comroe (as quoted by Simon Singh):

Serendipity is looking for a needle in a haystack and finding the Farmer's Daughter.

and its all too apt etymology:

ORIGIN 1754: coined by Horace Walpole, suggested by The Three Princes of Serendip, the title of a fairy tale in which the heroes “were always making discoveries, by accidents and sagacity, of things they were not in quest of.”

which in my opinion captures extremely well the kind of discoveries we're discussing. They came about by chance, that's true, but not just by chance: one needs to be in quest of something, to begin with.

Another famous (and probably better known) example of serendipity at work is Penzias and Wilson's discovery of the cosmic microwave background. As explained by Ivan Kaminow,

He [Ivan] joked that Penzias was an unusually lucky guy. "Arno Penzias and Bob Wilson were trying to find the source of excess noise in their antenna, where pigeons were roosting," he said. "They spent hours searching for and removing the pigeon dung. Still the noise remained, and was later identified with the Big Bang." He laughed, "Thus, they looked for dung but found gold, which is just opposite of the experience of most of us."

The experiment was being conducted at Bell Labs and its aim was to tune an ultra-sensitive microwave receiving system to study radio emissions from the Milky Way. It was only after Penzias talked with Robert H. Dicke (see also this nice memorial (PDF) for more on Dicke) that the mysterious radiation was recognized as the relic of the Big Bang hypothesized by George Gamow some time before. I read the whole story for the first time in Weinberg's marvelous book (required reading), and I've always found it a bit unfair that the Nobel prize went only to Penzias and Wilson.

My third serendipitous example comes also from the skies. In the summer of 1974, Russell Hulse was a 23-year-old graduate student compiling data from the Arecibo Observatory radio telescope in Puerto Rico. The job was a little bit tedious: he was trying to detect periodic radio sources that could be interpreted as pulsars [2]. One of a pulsar's hallmarks is its extraordinary regularity (a few nanoseconds' deviation per year for a period of about a second). Around 100 pulsars were known back then, all with a stable period showing an extremely slow tendency to increase. At the end of the day, the data obtained by the telescope was processed by a computer program written by Russell, which selected candidate signals based on the stability of their period. Those were correlated with later or earlier observations of the same sky zone, to rule out Earth-based, spurious sources. One night, a bored Russell noticed a very weak candidate, so weak that, had it been a mere 4 percent fainter, it would have passed unnoticed. On top of that, its period was too short (about 0.06 seconds) and, even worse, it was variable. Russell was on the verge of discarding it more than once during the following weeks, but eventually he persevered and, helped by his supervisor, Joe Taylor (a.k.a. K1JT), correctly interpreted the observation as a binary pulsar. The rest is history, and a Nobel prize [3]. Russell tells the amazing story in his delicious Nobel lecture (PDF), which starts with these telling words:

I would like to take you along on a scientific adventure, a story of intense preparation, long hours, serendipity, and a certain level of compulsive behavior that tries to make sense out of everything that one observes.

I especially like this instance of serendipity, for it shows that, many a time, lucky strikes befall those who work hard enough to get hit.

Update: I've just found an excellent article by Alan Lightman, Wheels of Fortune, which gives some very nice examples of serendipitous discoveries, as well as a nice discussion. After reading Michael's post on serendipity in HEP, i was wondering about non-experimental lucky strikes, and Lightman gives an excellent example: Steve Weinberg's electroweak theory:

Serendipitous discovery strikes not only in the photographic plates, test tubes, and petri dishes of the laboratory. It also can strike in the pencil-and-paper world of theoretical scientists. In the fall of 1967, theoretical physicist Steven Weinberg was working out a new theory of the so-called “weak force,” one of the four fundamental forces of nature, when he discovered, to his surprise, that his new theory was actually two theories in one. Weinberg was approaching the weak force with the seminal idea that pairs of particles it acted upon, electrons and neutrinos for example, might be identical as far as the force is concerned, just as yellow and white tennis balls are identical as far as the game of tennis goes. When he cast this idea into the mathematical language of quantum physics, Weinberg found that his theory necessarily included the electromagnetic force as well as the weak force. The mathematics required the union of the two forces. As he later remarked, “I found in doing this, although it had not been my idea at all to start with, that it turned out to be a theory not only of the weak forces, based on an analogy with electromagnetism; it turned out to be a unified theory of the weak and electromagnetic forces.” 

[1] As an aside, i find the constant chatter about matter being some sort of schizophrenic mix between particles and waves misleading, if not outright wrong. As stressed (to no avail, it seems) by Feynman (see and hear him on this and much more in his Vega Lectures, for instance), electrons (and photons, for that matter) are particles. You never detect half an electron, or a pi-fold photon. There are always ticks in a detector (a photo-multiplier, a photographic plate, or trails in a Wilson chamber, for instance). The wave function is not real (neither in the physical nor in the mathematical sense of real), and it 'oscillates' in an imaginary space which is not even 3-dimensional when more than one particle is described. The interference patterns observed (which arise from the addition of complex amplitudes that are squared afterwards) are not associated with single electrons; the only thing wavelike (with a twist) about them is the statistics of their hits on the wall. Even if you believe in Bohm's pilot waves, the particles are still particles! Of course, there's ample room for analogy, but i still find the typical discussions misleading.
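The footnote's point about amplitudes can be made concrete with a toy calculation (my own sketch, not Feynman's): the fringes come from summing complex amplitudes and only then squaring, while each detection remains a single, whole particle.

```python
# Toy two-slit model: interference arises because complex amplitudes are
# added *before* squaring. The unit amplitudes and the single phase offset
# are illustrative assumptions, not data from any real experiment.
import cmath

def two_slit_intensity(phase):
    """Relative detection probability for two unit amplitudes with a phase offset."""
    a1 = 1.0 + 0j               # amplitude via slit 1
    a2 = cmath.exp(1j * phase)  # amplitude via slit 2, shifted by the path difference
    return abs(a1 + a2) ** 2    # add amplitudes first, square afterwards

print(two_slit_intensity(0.0))                  # constructive fringe: 4.0, not 1 + 1 = 2
print(round(two_slit_intensity(cmath.pi), 12))  # destructive fringe: 0.0, no hits at all
```

Had we squared each amplitude separately and added the results, every point would read 2.0 and the fringes would vanish, which is exactly the classical-particle statistics the experiments rule out.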

[2] The discovery of pulsars had also its share of serendipity. They were found, also unexpectedly, by Jocelyn Bell and Anthony Hewish while they were studying scintillating radio signals from compact sources. Jocelyn has written a lively report of their discovery, including the funny story of how they were on the verge of attributing the signals to extraterrestrials, jokingly using monikers starting with the prefix LGM (for little green men) to name the mysterious radio sources. There's also a good review of the tale over at the funny Hitchhiker's Guide to the Galaxy website.

[3] The pulsar discovery also won a Nobel in 1974. But, curiously enough, the graduate-student hero of the story (Jocelyn Bell) was not awarded this time. One wonders.


Leibniz space-times

May 27, 2006

More often than not, Lee Smolin’s essays are engaging and thought-provoking. I especially appreciate his willingness to tackle conceptual issues, often dismissed as philosophical or uninteresting by a great deal of the physics community (which, in my opinion, should know better). Also of note are his efforts to convey to non-specialists the key ideas and problems faced by modern physics, without undue over-simplification or dishonest hype.

A case in point is his recent essay The Case for Background Independence, where the meaning, virtues and drawbacks of relationist theories of quantum gravity are explored in detail. More concretely, Smolin describes the close relationship between three key issues in fundamental physics, to wit:

  • Must a quantum theory of gravity be background independent, or can there be a sensible and successful background dependent approach?
  • How are the parameters of the standard models of physics and cosmology to be determined?
  • Can a cosmological theory be formulated in the same language we use for descriptions of subsystems of the universe, or does the extension of physics from local to cosmological require new principles or a new formulation of quantum theory?

The article begins with a brief historical review of relationism, as understood by Leibniz and summarized in his principles of sufficient reason (there’s always a rational cause for Nature’s choices) and the identity of indiscernibles (entities with exactly the same properties are to be considered the same) [1]. These principles rule out absolute space-times (like Newton’s) or a fixed Minkowskian background (like perturbative string theory), since they single out a preferred structure ‘without reason’, as do theories positing any number of free parameters (think of the much debated landscape) [2]. As is well known, Newton won the day back in the seventeenth century, until Mach’s sharp criticism marked the resurgence of relationist ideas. Mach rejected Newtonian absolute space-time, favouring a purely relational definition of inertia [3], which would ultimately inspire Einstein in his quest for the general theory of relativity [4].

Smolin’s article continues with a careful definition, in modern terms, of relational space and time, and follows with a discussion of some current theories featuring background independence: general relativity, causal sets, loop quantum gravity, causal dynamical triangulation models and background independent approaches (by Smolin himself) to M-theory. In a nutshell, it is argued that any self-respecting relational theory should comply to three principles:

  • There is no background.
  • The fundamental properties of the elementary entities consist entirely in relationships between those elementary entities.
  • The relationships are not fixed, but evolve according to law. Time is nothing but changes in the relationships, and consists of nothing but their ordering.

None of the theories above passes this litmus test of pure relationism without problems. Take for instance general relativity. To begin with, the dimension, topology and differential structure of space-time are givens, and thus play the role of a background. And, on the other hand, only when we apply GR to a compact universe without boundary can we aspire to a relational view, since otherwise we would have arbitrary boundary conditions (partially) determining the structure of space-time. Once you abide by these preconditions, a proper interpretation of general covariance (in which you identify space-times related by arbitrary coordinate transformations) provides a relational description of space-time (for an in-depth discussion of the subtle interplay between gauge invariance and relationism, see also this excellent article by Lusanna and Pauri, and references therein). As a second example, loop quantum gravity is also background dependent: in this case, the background is the topological space containing the spin-networks of the theory. Other than that, loops are an almost paradigmatic case of a relational description in terms of graphs, with nodes being the entities and edges representing their relationships.

After his review of quantum gravity theories, Smolin takes issue with string theory. His subsequent train of thought relies heavily on the fact that relationism or, more concretely, Leibniz’s principle of the indiscernible, rules out space-times with global symmetries. For if we cannot distinguish this universe from one moved 10 feet to the left, we must identify the two situations, i.e., deny any meaning or reality to the underlying, symmetric structure. But, as it happens, the M-theory programme consists, broadly speaking, in maximizing the symmetry groups of the theories embodied in the desired unified description. More concretely, in background-dependent theories, the properties of elemental entities are described in terms of representations of symmetries of the background’s vacuum state. Each of the five string theories embodied by M-theory (should it exist!) has its own vacuum, related to each other via duality transformations (basically, by compactifying spatial dimensions one way or another one is able to jump from one string theory to the next). Thus, M-theory should be background independent (i.e., encompass different backgrounds), but, on the other hand, one expects that the unique unified theory will have the largest possible symmetry group consistent with the basic principles of physics, such as quantum theory and relativity. Smolin discusses some possible solutions to this contradiction (on which i lack, er, the background to comment intelligently), including some sort of (as yet unknown) dynamical mechanism for spontaneous symmetry breaking (which would result in a Leibniz-compliant explanation for the actual properties, such as masses and coupling constants, that we find in our universe).

After all the fuss, there is disappointingly little to be said about relationist unified theories [5]. Invoking again the principle of the indiscernible, Smolin rules out symmetries that would make (unified) entities indistinguishable (if two entities have the same relationships with the rest, they are the same entity). By the same token, a universe in thermal equilibrium is out of the question. Reassuringly, our universe is not, and the negative specific heat of gravitationally bound systems precludes its evolution to such a state. The case is then made (after casting evolutionary theory as a relationist one, which is OK by me) for Smolin’s peculiar idea of cosmological natural selection. In my view, it is an overly speculative idea, if only for the fact that it depends on black holes giving rise to new universes when they collapse [6]. If that were the case, and provided that each new universe is created with random values for the free parameters of our theories, one would expect a process similar to natural selection to lead to universes with their parameters tuned to favour a higher and higher number of black holes (which seems to be the case in our universe). Nice as the idea is, i think we’re a little far from real physics here.

The article closes with a short section on the cosmological constant problem (with the interesting observation that only causal set theory has so far predicted a realistic value) and relational approaches to (cosmological) quantum theory. Again, the author adheres to non-orthodox ideas, this time to recent proposals (see here and here) of hidden-variable theories, although these stand on far better ground than the reproducing-universes idea. The possibility of a relational hidden-variable theory is argued for with a simple and somewhat compelling line of thought. In classical physics, the phase space of a system of N particles is described by 6N variables, while a quantum mechanical state vector depends on 3N variables. On the other hand, in a purely relational theory one would need on the order of N^2 variables, since that is the number of possible relations between N particles. These would be the hidden variables completely (and non-locally) describing our particles, which would require statistical laws when described with just 3N parameters.
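Smolin's counting argument is simple enough to spell out literally. The snippet below is just my rendering of the arithmetic (not anything from the paper), with one variable assigned per ordered pair of particles for the relational count:

```python
# Variable counts for an N-particle system under three descriptions:
# classical phase space (6N), quantum configuration space (3N), and a
# purely relational description with one variable per ordered pair (~N^2).
# The pair-counting convention is my illustrative assumption.

def variable_counts(n_particles):
    classical = 6 * n_particles       # 3 position + 3 momentum components per particle
    quantum_config = 3 * n_particles  # arguments of the N-particle wave function
    relational = n_particles ** 2     # one variable per ordered pair of particles
    return classical, quantum_config, relational

for n in (2, 10, 1000):
    print(n, variable_counts(n))
```

For large N the relational count dwarfs the other two, which is what leaves room for the 3N-variable quantum description to emerge as a statistical reduction of the underlying N^2 relational variables.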

An amazing journey, by all accounts.

[1] See here for excellent (and free) editions of all relevant Leibniz works, including his Monadology, and here for commented excerpts of the Leibniz-Clarke correspondence.

[2] See also here for an interesting take on Leibniz’s principle in the light of Gödel’s incompleteness and Turing’s undecidability results, as further developed by Gregory Chaitin.

[3] Julian Barbour’s “The Discovery of Dynamics: A Study from a Machian Point of View of the Discovery and the Structure of Dynamical Theories” is the definitive reference to know more about the history of the absolute/relative divide. (Another amazing book by Barbour on these issues is “The End of Time : The Next Revolution in Physics”, thoroughly reviewed by Soshichi Uchii here. Smolin himself has many an interesting thing to say about Barbour’s timeless Platonia.)

[4] Barbour argues in his book that Einstein seems to have misunderstood Mach’s discussions on the concept of inertia, taking it for the dynamical quantity entering Newton’s second law instead of the inertial motion caused by space-time according to Newton’s first law.

[5] I’m also a bit surprised by Smolin’s uncritical acceptance of reductionism, which he simply considers, “to a certain degree”, as common-sense.

[6] Tellingly, the only reference where this theory is developed is Smolin’s popular science book “The Life of the Cosmos”.



May 15, 2006

This morning i’ve been a bit surprised by the number of new articles in the feed claiming what would be very significant advancements in our understanding of quantum gravity and related issues. Not that i’ve had the time to read them, or that i’ve got the expertise to quickly sift wheat from chaff: i’m just listing them here for those of you with better criteria (in the hope of reading some insightful comments):

Too good to be true, right?

Arxiv Structure

May 7, 2006

I bet this is old hat for many of you, but just in case: i’ve discovered a new, nifty way of searching and browsing papers: Xstructure. The exciting part is browsing: articles are classified by theme and can be viewed in a variety of trees and listings. For instance, here’s the entry page for the recently added gr-qc archive: there you’ll find submission statistics and some useful links, including Review Articles and Authority Articles, which lists the most cited ones. Interestingly, comparing the authority articles in gr-qc with those of hep-th clearly shows what we could call a quantum gravity divide: the former consist almost exclusively of papers on Loop Quantum Gravity (Smolin, Rovelli, Ashtekar and friends), while the latter are monopolized by the String and M-Theory guys (Witten, Polchinski, Randall…). Hardly surprising, i know, but still…