We are now in an era called the Anthropocene, an era in which humans are running way too much of the atmosphere and everything else badly. We’re in this situation where we don’t have a choice of stopping terraforming. We only have a choice of terraforming well. That’s the green project for this century.

I agree with his defence of nuclear:

If all of your electricity came from nuclear, it would be about one Coke can’s worth of waste. But, one gigawatt a day from a coal-fired plant is turning 8,000 tons of fuel into 19,000 tons of carbon dioxide, plus all the slurry and mercury and all the rest of it.

Environmentalists are always worried about nuclear waste. How are we going to bury it? (…) We also have been putting it, for 10 years now, into a thing called the WIPP, the Waste Isolation Pilot Plant in New Mexico, down half a mile in a salt formation. And that salt formation is 250 miles in circumference. It has been there for 250 million years. It’s not going anywhere. Water doesn’t get in or out of it. It doesn’t matter what you put the waste in. The salt heels in around it and encases it. So, the nuclear waste problem that I thought was insoluble has actually been solved long ago. It’s not a major issue.

But I strongly disagree with this part:

Bill Gates makes a nice distinction between energy farms, using very dilute sources of energy, like wind, biofuels and solar, and energy factories that are very concentrated, like coal, gas and nuclear. To a certain extent, those concentrated factories are greener than the things that are big and wide and using up too much space.

That may be right for nuclear, but coal and gas aren’t really concentrated at all! It may look like that when you look at the coal power plant, but then you’re ignoring the “19,000 tons of carbon dioxide, plus all the slurry and mercury and all the rest of it” that’s emitted for every gigawatt of coal-based energy. Bulldozing the Mojave desert may be a real shame, but as Brand says himself, “we don’t have a choice of stopping terraforming. We only have a choice of terraforming well.” The terraforming required for solar seems to be minimal compared to the terraforming that comes as a consequence of burning coal.

Video and transcript of the talk here:

http://www.edge.org/documents/archive/edge338.html#sbb


Wrong. Well, in fact we are impacting the global environment at critical scales in many ways, but as far as our energy use is concerned there’s one problem that wouldn’t be resolved, or worse, would be *created*, if we had an unlimited energy source: the heat from energy use itself.

When you use your computer it heats up. When you use your car it heats up. Whenever energy is used in our daily activities, that energy is eventually transformed into heat, which is just a word for energy that is spread out randomly over uncontrollable degrees of freedom. That heat goes from your car or computer into the surrounding environment, and eventually spreads around, heating the planet by a tiny bit. Add your energy use to my energy use and everyone else’s, and those tiny bits add up to…? Good question: how much does global human energy use increase the temperature of the Earth, and how much more energy can we use before we raise Earth’s temperature appreciably?

First we have to figure out how the Earth gets its temperature. The temperature of the Earth is approximately determined by how much energy it gets from the Sun, which depends on how hot the Sun is and how far the Earth is from it, much like the temperature of your marshmallow depends on how hot the fire is and how far the marshmallow is from it.

There is of course also a comparatively small amount of heat coming from inside the planet, but it’s only a fraction of the solar energy flow. Current human energy use is around *16 TW* from all sources combined (see graph and references at the end of this post), and approximately *91%* of it comes from endogenous sources. This is of the same order of magnitude as the energy flow from the Earth’s hot core, which is approximately *44 TW*. If for some reason the energy from the Earth’s centre were released much faster, or if human energy consumption rose much higher, however, the temperature of the Earth would of course increase.

Human consumption of solar energy won’t cause the planet to heat up because it is already part of the energy budget of the Earth. The sunlight would be absorbed by the soil and re-emitted as heat anyway (unless the solar panels absorb much more light than the natural surface they are placed on, but that effect should be small, since the average absorption at the Earth’s surface is already around *70%*). The same goes for wind and biofuels, which are ultimately powered by the Sun. It is only “endogenous” energy use that will cause warming of this kind: fossil fuels (oil, coal, gas), nuclear (fission or fusion), and even geothermal (to the extent that it releases the heat trapped inside the Earth’s crust into the atmosphere faster than natural rates).

Alright, so how much more power can we use before heating up the Earth appreciably? I used an admittedly simple model (see below for details), but one which should give us a good estimate of the order of magnitude we are talking about. And I was surprised to find that it is much less than I first imagined. In fact, *we cannot consume 100 times as much energy from endogenous sources as we do today without affecting the temperature of the Earth, regardless of the greenhouse effect.* A 100-fold increase would raise global temperatures by approximately 1 °C. Increasing our energy use by a factor of 1000 would lead to a catastrophic rise of 7 to 10 °C.

What does that mean? First of all, clearly no-one should lose sleep worrying about this effect before we sort out our greenhouse gas emissions. At the present rate it would take hundreds or even thousands of years to increase our power use that much, and with present sources the greenhouse gases would melt our planet long before that. But it is interesting to realise that there is a hard cap on how much endogenous energy we can use, regardless of the greenhouse effect. So we go back to the science-fiction story at the start of the post: what if we *did* find a source of virtually unlimited energy that emits no greenhouse gases? Should we celebrate? I’m not sure. If there is one thing that seems certain, it is that if energy were 100 times cheaper, we would quickly find ways to use 100 times more energy than we do today.

The production plateau of several of the most important non-renewable resources seems to lie within decades, or has already passed. Some of those resources are simply being burnt and irretrievably lost, but some materials, like metals, could in principle be recycled almost indefinitely given an unlimited energy source. If we have a hard limit on global energy use, however, that also puts a limit on how much recycling we can do even in principle.

This would also seem to put a cap on some science-fictional scenarios about the future development of human civilisation on Earth. For example, the fact that the cap is just two orders of magnitude away could mean that the so-called “technological singularity” can’t be much more than a steep growth followed by a plateau, all within hundreds of years from now (in the extremely optimistic science-fictional circumstance that we find an unlimited energy source with no greenhouse emissions, that is; in fact, the estimated resource peaks place the overall plateau of global economic and technological development well within this century). Energy efficiency can sustain development for longer, but that would plausibly buy at most another order of magnitude in effective work per energy unit. We could veer into geoengineering, which is still a huge question mark as far as its side effects are concerned; and if we have learned anything about human beings so far, it is that we are not very good at predicting or mitigating the externalities associated with our activities.

Let’s consider a very simplified model of the Earth, just to get an idea of the order of magnitude we’re talking about. When talking about thermal radiation, physicists often refer to an ideal model called a *black body*. A black body isn’t necessarily black, mind you; it only looks black when its temperature is very low. It’s called a black body because it absorbs all the electromagnetic energy (sunlight, for example) that falls on it, but it eventually emits it back in the form of thermal radiation.

Now it turns out that by looking at how much energy (or more precisely, how much *power*—energy per unit time) a black body emits, we can figure out its temperature. According to the Stefan–Boltzmann law, the total power *P* of thermal radiation emitted per area *A* of a black body is proportional to the fourth power of its temperature *T*:

P/A = σT^{4}.    (1)

Here *σ = 5.67×10^{-8} J/(s m^{2} K^{4})* is the Stefan–Boltzmann constant. Solving for the temperature gives

T = (P/σA)^{1/4}.    (2)

Now of course this is only for the power emitted *as thermal radiation*. If there is a high-powered laser beaming energy out of the body, for example, but little thermal radiation, we should not conclude that the temperature of the surface is given by substituting the laser-beam power into the Stefan–Boltzmann law. But if we ignore any coherent beams of energy and add up the total thermal radiation leaving the body, the Stefan–Boltzmann law will give a pretty good measure of the temperature at the surface.

So let’s see what that tells us. The Sun is a good example of something close to a black body. It also emits radiation isotropically, that is, the same in all directions, or at least very approximately so. The Sun is approximately a sphere, and the total surface area of a sphere is 4πR^{2}, where *R* is its radius, so the total power it emits is

P = (σT^{4})(4πR^{2}).    (3)

With a surface temperature of *T = 5,778 K* and a radius *R = 7.0×10^{8} m*, this gives a total thermal radiation power for the Sun of approximately *3.8×10^{26} W*.
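Equation (3) is easy to check numerically. A minimal sketch in Python (the function name is mine):

```python
import math

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, J/(s m^2 K^4)

def blackbody_power(T, R):
    """Total thermal power (W) radiated by a spherical black body
    of surface temperature T (K) and radius R (m), per eq. (3)."""
    return SIGMA * T**4 * 4 * math.pi * R**2

# The Sun: T = 5,778 K, R = 7.0e8 m
p_sun = blackbody_power(5778, 7.0e8)
print(f"{p_sun:.1e} W")  # roughly 3.9e26 W
```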

The Earth isn’t a black body. First of all, it doesn’t absorb all the radiation that falls on it: some 30% is reflected on average (the measure of reflectivity is called the *albedo*). It also has an atmosphere, which creates the greenhouse effect. But suppose that instead of the Earth we had a black body of the same size and shape in its place, which absorbed the same amount of radiation from the Sun as the Earth does, and emitted it in the form of thermal radiation.

The Earth is at an average distance of *D = 1.5×10^{11} m* from the Sun. The area of the Earth that is in the path of the sunlight is the area of its circular cross-section, πr^{2}, where *r = 6.4×10^{6} m* is the Earth’s radius. Taking the 70% of that intercepted power which is not reflected, spreading it over the Earth’s full surface area 4πr^{2}, and inverting equation (2), we get an effective temperature of about 255 K.
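The whole toy model runs end to end in a few lines: dilute the Sun’s output over a sphere of radius D, keep the fraction that is not reflected, and invert the Stefan–Boltzmann law over the Earth’s full surface. A sketch, using round-number constants:

```python
import math

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, J/(s m^2 K^4)
P_SUN = 3.8e26    # total solar output, W
D = 1.5e11        # Earth-Sun distance, m
R_EARTH = 6.4e6   # Earth's radius, m
ALBEDO = 0.3      # fraction of sunlight reflected

# Power intercepted by the Earth's circular cross-section, minus reflection:
absorbed = (1 - ALBEDO) * P_SUN / (4 * math.pi * D**2) * math.pi * R_EARTH**2

# Invert eq. (2) over the Earth's full surface area, 4*pi*r^2:
T_eff = (absorbed / (SIGMA * 4 * math.pi * R_EARTH**2)) ** 0.25
print(f"{T_eff:.0f} K")  # about 254-255 K
```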

We’re almost there. Now suppose that we add more heat to the Earth, except that now, instead of coming from the Sun, it comes from inside the Earth itself, from human energy use. The logic is still the same though: more power, higher temperature, just as above. From equation (3) we can see that the extra power *ΔP* that would raise the Earth’s temperature (if it were a black body) from *T_{0}* to *T_{0} + ΔT* is

ΔP = σ[(T_{0} + ΔT)^{4} − T_{0}^{4}](4πr^{2}).    (4)

Ok, so let’s plot that for *ΔT = 1 K* (note that a variation of 1 K is equal to a variation of 1 °C).

So for the temperature that we calculated above for the Earth (255 K), an extra *2×10^{15} W* would be sufficient to raise the thermometers by 1 K. For a black body at the average temperature of the Earth’s actual surface (288 K), the number would be closer to *3×10^{15} W*.
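These numbers come straight from equation (4). A quick sketch (function name mine):

```python
import math

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, J/(s m^2 K^4)
R_EARTH = 6.4e6   # Earth's radius, m

def extra_power(T0, dT):
    """Extra power (W) needed to raise a black body the size of the
    Earth from T0 to T0 + dT, per eq. (4)."""
    return SIGMA * ((T0 + dT)**4 - T0**4) * 4 * math.pi * R_EARTH**2

dp_255 = extra_power(255, 1)
dp_288 = extra_power(288, 1)
print(f"{dp_255:.1e} W")  # ~2e15 W at the effective temperature
print(f"{dp_288:.1e} W")  # ~3e15 W at the surface temperature
```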

The next question you may be interested in is: for a given initial temperature, if we keep increasing the power, how much will the temperature rise? We can also use Equation (4) to answer that question, and I’ll plot the result in another graph:

So if we added an extra power of *2×10^{16} W*, the temperature of a black body starting at 255 K would rise by roughly 10 K.

Now to the important question: how much power do humans use on Earth, and how fast is it growing? The first question has a more or less easy answer. We currently consume somewhere around *1.6×10^{13} W* from all sources combined (see below), and approximately 91% of that comes from endogenous sources. So the first conclusion we reach is that we cannot increase our endogenous energy consumption a hundred-fold, to around the *2×10^{15} W* mark, without raising the Earth’s temperature by about 1 °C, entirely apart from the greenhouse effect.

How long would it take us to get there? This website quotes the International Energy Agency as claiming that the world’s energy consumption will continue to increase at 2% per year. This would mean that energy use doubles every 35 years, and would thus be 100 times higher in about 230 years. According to the US Department of Energy, world energy use has actually grown like this between 1980 and 2007:

It’s not clear to me whether this is exponential growth. If it is instead growing linearly over long timescales, it would take thousands of years to reach the dreadful *2×10^{15} W* mark.
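Both scenarios are a two-line calculation. A sketch, assuming today’s ~16 TW with 91% endogenous, 2% annual growth for the exponential case, and a constant absolute increment (2% of today’s use per year) for the linear case:

```python
import math

current = 0.91 * 1.6e13   # endogenous energy use today, W
target = 2e15             # the ~1 degree C mark, W

# Exponential growth at 2% per year:
years_exp = math.log(target / current) / math.log(1.02)

# Linear growth, adding the same absolute amount (2% of today) every year:
years_lin = (target - current) / (0.02 * current)

print(round(years_exp), round(years_lin))  # roughly 250 vs roughly 6800 years
```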


I am very skeptical about some of his forecasts. It sounds ludicrous to have computerized cereal boxes, even if the technology becomes sufficiently cheap, if only because it would (hopefully) attract some form of carbon tax making the end product economically unfeasible. At this and other points, Schell’s talk strikes me as naive Back-To-The-Future futurism.

But there is something that has intrigued me for a while and that his talk, especially his final remarks, brings to mind: we are all increasingly uploading information about all aspects of our lives. Our great-grandchildren’s great-grandchildren will be able (perhaps) to look into our Facebook histories and know details of what their ancestors were doing with their daily lives, their thoughts and values, where they have been, who their friends were, the relationships they had; they will look on last.fm and know what music we’ve listened to and how our musical tastes developed over time; they will see our blogs, our buying history, and a host of other information that we are just starting to put out there on the web.

If you think that we won’t be able to keep all of those records, remember Moore’s law: it’s cheap to keep old data because storage capacity doubles every few years, so the further back you go, the cheaper it is to keep. And if you think no one will care to keep it, have a look at the Wayback Machine.

And a remarkable aspect of all this is that once enough people have uploaded enough data, algorithms are able to predict with remarkable accuracy what a given individual would do, or what choices they would make, in given circumstances. Think of the Netflix prize or last.fm‘s surprisingly accurate recommendation system. Then think of the endless questionnaires people love to fill out about all aspects of their personalities, and wonder how much it actually takes to have a reasonably accurate digital representation of your self, or at least of your values and preferences. Maybe something future people will even be able to interact with, to some extent?

Then think of your great-great-great-grandchildren looking back at the digital archives. They will see their parents and grandparents, perhaps in a form of representation we can’t even imagine yet. They will scroll back through their family trees, and where does the trail stop? That’s right: with us. We are the first digital people.

[Hello my descendants! Forgive this simple-minded ancestor of yours for this simplorious-looking blog. Look up a decade or two ahead and I'm sure I'll have something better for you. If we haven't fucked up the world before then, that is.]


I’ve been asked a few times to write a blog entry about EPR to the layman reader, and so here it is. The details may take some attention to follow, but it is a nice story, which culminates with what has been called “the most profound discovery of Science” [H. Stapp]. Of Science, not just Physics. And he didn’t specify just the 20th century. So bear with me.

Before the EPR paper, Bohr, Heisenberg and others had already set up the standard interpretation of the then-young quantum theory; the development of the basic postulates was completed around 1927, after a couple of decades of work by those physicists and others such as Schrödinger, Planck and Einstein himself. This interpretation, commonly referred to as the “Copenhagen interpretation”—in reference to the city where Bohr’s institute was located—painted a very unsettling picture of reality for the physical intuition of Einstein and his co-authors.

In the picture of Bohr and Heisenberg, quantum systems cannot be said to have physical properties independently of a process of measurement that can determine them empirically. Since there are multiple ways in which one can observe a quantum system, and since some of those are *incompatible* observations (for example, one can measure the position or the momentum of a particle, but not both at the same time, with arbitrary accuracy), it follows that not all physical properties can be said to have simultaneous existence, in the Copenhagen view.

[The term "physical properties" has in fact fallen into disuse when talking about quantum mechanics, being replaced by the less metaphysically committed *observables*; similarly, the term *object* has been replaced by *system*.]

According to the Copenhagen interpretation, if one decides to measure the position of a system, then “position” will have an empirical meaning, and it will be meaningless to ask what the “momentum” of the particle would have been if it had been measured instead. This was reflected mathematically by Heisenberg’s Uncertainty Principle, which states that one cannot reproducibly measure incompatible observables of a physical system with arbitrary precision.

Not being able to determine something with arbitrary precision does not necessarily imply that it doesn’t exist, of course. Einstein, Podolsky and Rosen noticed that the correlations that quantum mechanics predicted between what was later termed (by Schrödinger) *entangled* systems could be used to challenge the Copenhagen interpretation. With an entangled pair of particles, sent to two distant observers—usually called Alice and Bob these days—it would be possible for one observer (Alice, say) to determine with arbitrary precision *either *of two incompatible properties such as the position *or* the momentum of Bob’s system, at a distance, without touching Bob’s system or allowing time for any communication to occur (with a signal not exceeding the speed of light) between the two sub-systems.

It would be absurd, in the light of Einstein’s theory of relativity, which postulated a limit to the velocities of physical systems—the speed of light—to imagine that any kind of instantaneous “action at a distance” could connect the two subsystems, and EPR concluded that because Alice could determine either of them at a distance at her will, the properties associated with* both *position and momentum must have existed before they were measured.

Although it challenged the orthodoxy, this was nothing more than a careful defence of the common sense idea behind most of modern science, that correlations can always be explained by a sequence of local events. If there is an outburst of a disease at around the same time in different locations, say, one would look for a common cause for that coincidence—presumably some previous interaction between the patients allowed the transmission of a pathogen; they must have either come in close contact or some microorganism must have been transmitted through the air or other medium between the two patients. All that EPR were defending was that some analogue of the “microorganisms” (later termed in that context “hidden variables”) should be responsible for generating the correlations.

For some strange reason, perhaps because of the eloquence of Bohr’s metaphysical positions, or perhaps because it was more practically useful to think of the theory in Bohr’s way, EPR’s argument was largely thought to be wrong for at least three decades. This was aided by some mistaken “no-go” theorems, such as that due to Von Neumann, which (erroneously, it was much later realised) purported to show that no theory of hidden variables could reproduce the quantum mechanical predictions.

The first to show (indirectly) the error in Von Neumann’s “theorem” was Bohm, who in 1952 developed precisely what Von Neumann claimed to have demonstrated to be impossible: a hidden-variable theory of quantum phenomena which agreed with all empirical predictions of the “bare theory” formulated in the Copenhagen fashion. It was, however, an explicitly *nonlocal* theory. Particles had definite positions at all times, but they could affect each other instantaneously, at a distance, in precisely the way that EPR rejected out of hand as absurd. Later, in 1964, John S. Bell showed, after pointing out the error in Von Neumann’s alleged proof, that nonlocality was indeed not an accident of Bohm’s formulation, but a necessary feature of any theory which attempts to explain quantum phenomena (and ultimately, the world) in terms of underlying physical properties existing independently of the processes that reveal them.

Bell showed—through what is now called *Bell’s theorem*—that the correlations between entangled quantum systems were much stronger than EPR realised. The correlations considered by EPR were, as humourously illustrated by Bell himself, no more mysterious than the correlations between the socks of his physicist friend Bertlmann:

…The philosopher in the street, who has not suffered a course in quantum mechanics, is quite unimpressed by Einstein–Podolsky–Rosen correlations. He can point to many examples of similar correlations in everyday life. The case of Bertlmann’s socks is often cited. Dr. Bertlmann likes to wear two socks of different colours. Which colour he will have on a given foot on a given day is quite unpredictable. But when you see that the first sock is pink you can be already sure that the second sock will not be pink. Observation of the first, and experience of Bertlmann, gives immediate information about the second. There is no accounting for tastes, but apart from that there is no mystery here. And is not the EPR business just the same?…

[J. S. Bell, “Bertlmann’s socks and the nature of reality”, in *Speakable and Unspeakable in Quantum Mechanics*, Cambridge University Press]

The EPR business, Bell showed, was *not* the same. Bell was able to show that the correlations between the *multiple* incompatible observations which are available to be performed by Alice and Bob cannot be given *any local explanation* whatsoever, even if you allow the most general imaginable model (called a Local Hidden Variable (LHV) model) by which the outcomes of those observations could be correlated. With the type of state EPR were considering, the correlations between *the same* measurements (position at Alice/position at Bob, momentum at Alice/momentum at Bob) were amenable to a LHV explanation, as pointed out by EPR; but when some *other* equally possible observations are considered, a LHV model is no longer possible. This would be demonstrated by the violation, by a carefully set-up experiment, of what are now called *Bell inequalities*—mathematical inequalities that follow logically from the assumption of a LHV model.
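To put numbers on this, consider the CHSH form of a Bell inequality, for which any LHV model obeys |S| ≤ 2. A minimal numerical sketch (my own, not from Bell’s paper) of the quantum prediction for a singlet state with the standard choice of measurement angles:

```python
import numpy as np

def spin(theta):
    """Spin measurement along direction theta in the x-z plane."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

# Singlet state of two qubits: (|01> - |10>)/sqrt(2)
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def E(a, b):
    """Correlation <A(a) x B(b)> in the singlet state."""
    return singlet @ np.kron(spin(a), spin(b)) @ singlet

# Standard CHSH angles: a=0, a'=pi/2, b=pi/4, b'=3*pi/4
a, a2, b, b2 = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2), about 2.83 > 2: beyond any LHV model
```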

Up to some open technical problems (due to logical loopholes exploiting experimental imperfections), those correlations have been observed, and Bell inequalities violated, in multiple labs around the world, since the work of Alain Aspect and others in the 1980s. There really are correlations in the world that cannot be given any possible local explanation. In this (negative) sense, the world really is nonlocal, independently of whether or not hidden variables exist underlying quantum phenomena.

[unless you are willing to allow backwards-in-time causality, or something that Bell called *superdeterminism*, a class of conspiratorial theories whereby the choice of which of the possible incompatible measurements will be performed in the future of a quantum system is already determined by the same hidden variables that determine the present properties of the system itself. But I won't go into that here.]

[People often confuse the commonly used term "local realism" to be a conjunction of two independent terms --- "locality" and "realism" --- and choose to maintain "locality", thus claiming that Bell's theorem suggests the failure of "realism". It is not always clear what is meant by "realism" in these contexts, and it is my considered opinion that locality (or more precisely, what Bell termed "local causality") is proven to be false by Bell's theorem (though see note above). Which is a distinct assertion from saying that something like the "active" nonlocality of Bohmian mechanics is true. The failure of local causality is merely the assertion that a local, separable, description of the phenomena is impossible. Even if there are no hidden variables, at the very least one must treat the entangled system as one indivisible system, not composed of separable parts amenable to independent (but correlated) descriptions.]

Which takes us back to the original question: “Can quantum mechanical description of physical reality be considered complete?” The answer is still debatable, if what the question is asking is whether or not hidden variables underlying quantum phenomena really exist. But in search of an answer, EPR, Bohm and Bell have unearthed the astounding fact that our classically intuitive description of a reality in which things exist independently of each other, interacting only locally to create the multiplicity of phenomena we experience, is demonstrably untenable. Anybody with a scientifically or philosophically inclined mind who is not bothered by this discovery—as pointed out by an anonymous Princeton physicist to the physicist David Mermin—“must have rocks in [their] head”.


Google Wave is a new technology being developed by the internet giant to be “what e-mail would look like if it were invented today”. Its online collaboration technologies promise to provide a new framework that could potentially help improve, well, almost everything we do online.

And that includes, of course, Science. Somewhat ironically, scientists have been slow to realise anything close to the full potential of the internet for improving the way they collaborate and publish the results of their work. Journals have gone online, sure — no scientist gets their journal articles from the walk-in library these days. New scientific publications appear online first, and although I don’t have the data, it is a safe bet that the overwhelming majority of readers access their content through digital means. But that is not much more than a more efficient distribution channel for essentially the same content.

Although a large part of that, I believe, derives from the inertia of the current system for evaluating scientific output, the lack of tools useful enough to be adopted by a critical mass of users may be partly to blame. Perhaps some system based on Google Wave will be able to overcome that inertia. In his Nature article, Neylon points to a couple of expected benefits of this possible new tool: automated “robot” agents that could perform tasks like updating your wave content with new laboratory data in real time, and a version control system that would make it easier for, say, peers to check through analyses of scientific data, and potentially spot frauds or honest mistakes more easily.

What I found a little disappointing was that Neylon seemed to focus too much on the “error-correcting” capabilities of the new technology. The internet can do so much more than police scientists’ data and analyses. It can connect individual human beings in a way that makes the collective effort qualitatively different from anything that the individuals could achieve by themselves, or with just “nearest-neighbour” interactions. This is the goal we should aim at. All the knowledge and all the diverse ranges of intellectual abilities found in individual scientists (and ordinary people, why not?), directly connected with each other in a global brain with many more interconnections and forms of interaction than we currently dream of. I have no idea what will result from that, but I am extremely excited to find out.


“Yes,” he said at last in rather a strained drawl. “I did have a question. Or rather, what I actually have is an Answer. I wanted to know what the Question was.”

Prak nodded sympathetically, and Arthur relaxed a little.

“It’s… well, it’s a long story,” he said, “but the Question I would like to know is the Ultimate Question of Life, the Universe and Everything. All we know is that the Answer is Forty-Two, which is a little aggravating.”

Prak nodded again.

“Forty-Two,” he said. “Yes, that’s right.”

He paused. Shadows of thought and memory crossed his face like the shadows of clouds crossing the land.

“I’m afraid,” he said at last, “that the Question and the Answer are mutually exclusive. Knowledge of one logically precludes knowledge of the other. It is impossible that both can ever be known about the same universe.”

From Douglas Adams’ *Life, The Universe and Everything*

Few numbers have as devoted a geek cult following as the number 42, thanks to Douglas Adams’ science-fiction series *The Hitchhiker’s Guide to the Galaxy*. I guess one of the reasons is that behind the sophisticated humour of Douglas Adams lies an interesting philosophical question. *Is there* an “ultimate” scientific or philosophical question? If so, what is it? (Perhaps this one?)

Of course, the humour of Douglas Adams sarcastically dismisses the idea of an ultimate question as silly. And in a way it is. But last month, in a meeting at the Perimeter Institute for Theoretical Physics called “Reconstructing Quantum Theory”, Bill Wootters presented the closest thing I’ve ever seen to a candidate. Bill found a formalism in which quantum mechanics can be represented in a real vector space, as opposed to the usual formulation in terms of complex vector spaces.

The upshot is that he needs a *universal rebit* to be able to reconstruct quantum mechanics within that formalism. A rebit is just like a qubit, but instead of a superposition of states with complex coefficients, you have one with real coefficients.

As an aside for those who don’t know what a qubit is: a *quantum bit* is the quantum extension of the concept of a *bit*—the unit of information, the amount of information one obtains when finding the answer to a yes/no question. To represent a bit, all you need is one thing in one of two possible states, which are usually denoted ‘1’ and ‘0’. A coin, for example: heads indicates ‘1’, say, and tails indicates ‘0’. Given a previously agreed code, you can transmit information with a sequence of coins.

In quantum mechanics, for every two possible states of a system (say, ‘1’ and ‘0’), there is an infinity of possible states related to these two by what are called complex superpositions. Those are mathematical structures that can be written in the form *c1 ‘1’ + c2 ‘0’*. Except that in quantum mechanics a state like ‘1’ is represented by the symbol *|1>*, a notation introduced by Dirac. So for example we could define states like *|+> = |1> + |0>*, *|−> = |1> − |0>* or *|R> = |1> + i |0>* (ignoring normalisation), etc. However, in any complete measurement of a qubit, only two outcomes are possible; a qubit can give you one bit of information, but it is one bit about any of an infinitude of mutually exclusive (or what Bohr called *complementary*) questions.

A rebit, or real bit, is an intermediate case where the coefficients *c1* and *c2* are allowed to be only real numbers.
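A tiny numerical illustration of the difference (the states and names here are my own, not from Wootters’s talk):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# A rebit state: real coefficients c1, c2 with c1^2 + c2^2 = 1
theta = 0.3
rebit = np.cos(theta) * ket1 + np.sin(theta) * ket0

# A qubit allows complex coefficients, e.g. |R> = (|1> + i|0>)/sqrt(2)
qubit = (ket1 + 1j * ket0) / np.sqrt(2)

# Either way, a complete measurement has only two outcomes -- one bit:
p_rebit = np.abs(rebit) ** 2
p_qubit = np.abs(qubit) ** 2
print(p_rebit.sum(), p_qubit.sum())  # both sets of probabilities sum to 1
```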

This rebit is called universal because—and this is where things get interesting—it is in some sense *shared* by all other systems in the universe, according to Wootters’s model. It interacts nonlocally with all other systems in the theory. In the model it is also necessary (so as to reconstruct quantum theory) that we are unable to determine the state of this rebit, even though it interacts with everything in the universe. This raises some interesting questions. Is the state of the rebit unknowable *in practice* or *in principle*? Wootters showed some interesting models for mechanisms responsible for this epistemic censorship. In one of these, the rebit is randomised by a very rapid rotation of its direction, much faster than we can experimentally detect.

But thinking about it, the rebit itself represents the answer to one question. One binary question, the answer of which is relevant to all systems in the universe. An ultimate universal question.

After a moment of suspense, Bill advanced his hunch: the ultimate question associated with the rebit relates to the direction of time. (Chris Fuchs, who was also attending the meeting, exclaimed disappointedly: “Direction of time? I was thinking of something more like the triumph of good over evil!”) However, this interpretation would seem to pose a problem for his model of a rotating rebit. In which “time” would the rebit be rotating, if it itself “encodes the direction of time”? Furthermore, by the principle of superposition there wouldn’t be just one question, but a continuum of complementary binary questions, whose answers would be all the orthogonal pairs of real superpositions of |1> and |0>. Would those correspond to other possible pairs of directions for time?

I am very curious to read Bill’s paper about this work to understand his model in more detail. But whatever the question associated to the universal bit is, the answer can’t be 42 after all. It can only be yes or no. Or any real superposition thereof.


That’s gonna be a huge hand!


I’m particularly excited about finally getting this paper off, as it was holding back a few other works, and because I think it’s a nice paper too! Click here for the arXiv version.

Here’s the abstract:

We formally link the concept of steering (a concept created by Schrödinger but only recently formalised by Wiseman, Jones and Doherty [Phys. Rev. Lett. 98, 140402 (2007)]) and the criteria for demonstrations of the Einstein-Podolsky-Rosen (EPR) paradox introduced by Reid [Phys. Rev. A 40, 913 (1989)]. We develop a general theory of experimental EPR-steering criteria, derive a number of criteria applicable to discrete as well as continuous-variable observables, and study their efficacy in detecting that form of nonlocality in some classes of quantum states. We show that previous versions of EPR-type criteria can be rederived within this formalism, thus unifying these efforts from a modern quantum-information perspective and clarifying their conceptual and formal origin. The theory follows in close analogy with criteria for other forms of quantum nonlocality (Bell-nonlocality, entanglement), and because it is a hybrid of those two, it may lead to insights into the relationship between the different forms of nonlocality and the criteria that are able to detect them.


It was organised by the Institute of Quantum Optics and Quantum Information (IQOQI) at the Austrian Academy of Sciences and by the Faculty of Physics at the University of Vienna. Daniel Greenberger and Helmut Rauch were honoured guests, on the occasion of their 75th and 70th birthdays, respectively.

The list of speakers was impressive, and the talks delivered were on par. Some of the highlights, in my opinion:

Bill Wootters opened the conference with an interesting model of quantum mechanics on real vector spaces. His motivation was to give an answer to the commonly held misconception that complex numbers are essential for quantum theory. It is to be expected that any theory that reproduces quantum mechanics has got to be weird, and Wootters’ model isn’t an exception. He models a qubit as being composed of two parts, both represented in real vector spaces. It has to have two components if he is to have sufficient parameters to compensate for the lack of complex numbers. Besides the actual qubit there is a “ubit” — a “universal bit”. There’s a rule that one cannot learn the state of the ubit (so that you have something with the taste of an uncertainty principle), and furthermore, all particles in the universe share the same ubit (and here enters quantum nonlocality). I asked (yay! the first question of the workshop!) if he thought there was an analogy between his postulate that the ubit is unknowable and the epistemic restriction in Robert Spekkens’ toy model (hey, just found out there is a Wikipedia entry for it!). He said it was a very interesting question (thank you!) but he hadn’t thought about it. In hindsight, there is an analogy but also an important difference. In Spekkens’ model the epistemic restriction is about the ontic state of an individual system, whereas in Wootters’ model it is about an entity shared between all systems. This is why Wootters’ model can actually reproduce quantum mechanics, whereas Spekkens’ toy model can’t reproduce some features like violations of Bell inequalities.

Simon Kochen (from Kochen-Specker fame) followed with a talk titled “A Reconstruction of Quantum Mechanics”. He started by essentially assuming a Hilbert space structure for the experimental outcomes (although phrased in quantum-logic parlance: lattices and so on) and derived the rest of the formalism from it (with some extra assumptions he tried to justify with an appeal to experimental facts). However, it is well known that if you assume the Hilbert space structure you can derive the quantum probabilities via Gleason’s theorem. When I asked him how he justified that assumption (yay! the first question on the second talk of the conference!), he said something like “I am not trying to reconstruct quantum mechanics”.

After the coffee break, it was Stig Stenholm’s talk, “Quantum Theory and Reality”. He tried to find a way to ground the information encoded by a quantum state in an objective, observer-independent manner. Anton Zeilinger added a comment near the end of the question time: “The concept of a reality beyond empirical evidence is not part of Science”. Stenholm’s answer was brilliant: “What’s the empirical evidence for that statement?”

Basil Hiley’s talk on how to derive the basic equations of Bohmian mechanics for the Dirac equation using Clifford algebras was very entertaining, although he seemed to want to distance himself from being called a “Bohmian”. “All I’m saying is that it’s all there in the mathematics; you can interpret it as you want,” he replied to a skeptical Zeilinger.

On the second day, Markus Arndt detailed the state of the art in some very interesting experiments on matter-wave interferometry. They can interfere objects of up to 2934 atomic mass units and 5600 vibrational modes! One of these is called the “Vienna quantum man”, as the molecule has a shape that looks a bit like a stick man.

Raymond Chiao wants to test Heisenberg’s uncertainty principle against Einstein’s equivalence principle. He claimed that the clash between the two could be tested with accelerated superconductors. His analysis seemed highly controversial, but hey, there was a clear experimental test proposed, so we can just leave it to experiment to decide.

Nicolas Gisin gave an interesting talk detailing some very nice Bell-nonlocality experiments, including a test of a concept called “bilocality”, where the correlations between three particles are modelled by independent local correlations between a central particle and each of the other two. Since bilocality adds the assumption that the two sources are independent, it is easier to violate than plain Bell-locality, so the experiments are easier to make, and can be more rigorous as far as loopholes are concerned. Needless to say, quantum mechanics was still upheld.
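To make the bilocality idea concrete, here is a toy classical bilocal model of my own (not from Gisin’s talk): because the two sources are independent, the outer parties show no correlation on their own, while genuine three-party correlations can still be perfect.

```python
import random

# Toy bilocal model: lam1 is shared by Alice and the central party Bob,
# lam2 by Bob and Charlie, and the two sources are independent. Alice's
# outcome depends only on lam1 and Charlie's only on lam2, so the outer
# parties are uncorrelated, yet three-party correlations survive.

random.seed(1)

def trial():
    lam1 = random.choice([-1, +1])   # source between Alice and Bob
    lam2 = random.choice([-1, +1])   # source between Bob and Charlie
    a = lam1
    b = lam1 * lam2                  # Bob sees both sources
    c = lam2
    return a, b, c

n = 100_000
results = [trial() for _ in range(n)]
E_ac  = sum(a * c for a, _, c in results) / n
E_abc = sum(a * b * c for a, b, c in results) / n

print(round(E_ac, 2))    # ~0.0: Alice and Charlie alone are uncorrelated
print(round(E_abc, 2))   # 1.0: the three-party correlation is perfect
```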

Abner Shimony’s (from CHSH fame) talk “Quantum Mechanics and Mind” — where he proposed an experimental test for the hypothesis that a definite reduction of a superposition only occurs after a conscious observer learns the outcome — caused clear discomfort in the audience. He was followed by his colleague Michael Horne (the second H in CHSH), who talked about a neat result on the shifting of fringes in an interferometer due to an applied external force.

It was inspiring to see Daniel Greenberger (from GHZ fame) detail his “Tic-tac-toe theory of gravity”. He starts from a simple assumption that there are three types of mass, which can repel or attract each other in different ways, and derives from very simple non-mathematical arguments things that look like dark matter and the accelerated expansion of the universe. Interestingly, he derived those results before these phenomena were known (!), but the manuscript was rejected by an angry referee who replied that “we don’t need any new ideas in this field” (!!!). That has got to be one of the best referee quotes ever! Surely, there are many problems that would need to be worked out in his theory, such as why we don’t observe clusters of the other types of matter, to which he gives only tentative partial answers, but he was the first to say he doesn’t take this theory seriously. His main aim was to present it as a motivational talk for the students and postdocs in the audience, so that we don’t believe that our elders know everything. Thank you, Danny!

I can’t comment on Nobel-prize winner Gerard ’t Hooft’s talk. All I can say in my defense is that they shouldn’t have put him first in the morning after all the wine they served at the conference dinner! (ahem!) Well, ok, it was about trying to reproduce quantum mechanics using cellular automata, and the little I saw of it sounded very, very interesting! Damn it!

Next it was the turn of Reinhard Werner (from Werner state fame) to remind us that quantum states cannot be thought of as being attributed to each individual system, as this amounts to a local hidden variable theory (alluded to in his title “The most popular hidden variable theory ever”). Actually, that is a fact “well-known by those who know things well”, as a colleague likes to say: it amounts to a quantum separable model, the violation of which is a demonstration of entanglement. But some things need to be repeated until they sink in, I guess.
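A small sketch of why this is so (my own illustration, not from Werner’s talk): a model in which each individual system carries its own quantum state amounts to a separable joint state, and separable states cannot violate the CHSH inequality, while entangled states can.

```python
import numpy as np

# Sketch: if every individual system carries its own quantum state, the
# joint state is separable (a mixture of product states), and separable
# states obey the CHSH inequality |S| <= 2, while entangled states can
# reach |S| = 2*sqrt(2).

def spin(theta):
    """+-1-valued spin observable along angle theta in the x-z plane."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def chsh(rho, a0, a1, b0, b1):
    E = lambda A, B: np.trace(rho @ np.kron(A, B)).real
    return E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)

a0, a1 = spin(0.0), spin(np.pi / 2)          # Alice's settings
b0, b1 = spin(np.pi / 4), spin(-np.pi / 4)   # Bob's settings

proj = lambda v: np.outer(v, np.conj(v))

# Entangled: the singlet state.
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho_ent = proj(singlet)

# "Each system has its own state": an equal mixture of product states.
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
rho_sep = 0.5 * (np.kron(proj(up), proj(down)) + np.kron(proj(down), proj(up)))

print(round(abs(chsh(rho_ent, a0, a1, b0, b1)), 3))   # 2.828 > 2
print(round(abs(chsh(rho_sep, a0, a1, b0, b1)), 3))   # 1.414 <= 2
```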

Bill Unruh (from Unruh-effect fame) cautioned us that looking at the reduced density matrix for signs of decoherence can lead to wrong conclusions: sometimes it is possible to have large entropy in the reduced density matrices but almost perfect coherence in interference experiments. Thanks for the heads-up, Bill!

There were many other very interesting speakers and posters, but I wouldn’t have the space to comment on them all, and if you have read this far you will agree with me. As usual, Zeilinger gathered a very nice group of physicists! Looking forward to the Vienna Symposium of 2011!


http://www.guba.com/watch/2000950423

(I removed the link that opened in this window because it was annoyingly starting even if you didn’t press play.)
