Newcomb’s Paradox at the Gates of Heaven and Hell

In Limbo, outside the Gates of Heaven and Hell, St Peter stands with a scroll. He opens it up and reads it: “God has predicted everything you have ever done, and ever will do. He is not a magical being, however, but an extremely large mind capable of processing the consequences of all the actions that have ensued since He set the world running. For most people, whether they go to Heaven or Hell is based on their deeds in life, but for those who wasted too much time with philosophy and learned about Decision Theory, He created a game to simplify processing. Here are two boxes. In one of them — it is open — there is a guarantee that you won’t be tortured eternally if you go to Hell. It will still be painful and unpleasant, though. There are also VIP tickets to concerts in Heaven. You can watch the Rolling Stones playing Sympathy for the Devil in the 1968 Rock and Roll Circus. Live. God digs that gig. In the other — it is closed — there is either an entry ticket to Heaven or nothing. You can pick both boxes, or just the closed one. If God predicted you are going to pick just the closed box, He has already put the ticket to Heaven inside. If He predicted you are going to pick both boxes, He has left it empty.”

Bayesian Decision Theorists reason that whatever they do now, God will have predicted it. So they pick the closed box and go to Heaven.

Causal Decision Theorists reason that whatever they do now won’t cause the contents of the box to change. That regardless of whether or not there’s a ticket to Heaven in the closed box, they are better off also picking the open one. They pick both boxes, find nothing in the closed one, and go to Hell. But at least they are not tortured. And they had a smoke in life.
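The two lines of reasoning can be made concrete with a toy expected-utility calculation. The numeric utilities and the predictor's accuracy below are illustrative assumptions, not part of the story:

```python
# Toy expected-utility comparison for the two decision theories.
# All numbers are hypothetical assumptions for illustration.
HEAVEN = 1000.0   # utility of the ticket to Heaven in the closed box
OPEN = 10.0       # utility of the open box (no torture, plus the concert)
P = 0.99          # assumed accuracy of God's prediction

# Evidential (Bayesian) decision theory: your choice is evidence for
# what was predicted, so you condition on it.
edt_one_box = P * HEAVEN + (1 - P) * 0.0
edt_two_box = P * OPEN + (1 - P) * (HEAVEN + OPEN)
# edt_one_box (990.0) beats edt_two_box (20.0): one-box.

# Causal decision theory: the closed box's contents are already fixed.
# Let q be your credence that the ticket is inside; two-boxing is
# better by exactly OPEN for every value of q (dominance).
def cdt(q):
    one_box = q * HEAVEN
    two_box = q * (HEAVEN + OPEN) + (1 - q) * OPEN
    return one_box, two_box

for q in (0.0, 0.5, 1.0):
    one, two = cdt(q)
    assert abs(two - (one + OPEN)) < 1e-9  # dominance: +OPEN regardless of q
```

The sketch shows why the two camps split: EDT compares outcomes conditional on the choice, while CDT holds the box contents fixed and finds two-boxing dominant.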

Some people keep their minds trapped in a looping mental calculation and stay in Limbo forever. St Peter tries to remind them every million years that they are welcome to pick the closed box and come to Heaven. Most remain unconvinced.


9 thoughts on “Newcomb’s Paradox at the Gates of Heaven and Hell”

  1. Hi Eric,

    Nice post. Is there any place that I can learn in a simple way (if it’s possible) about those two theories?


  2. Hello!

    Have a look at the Stanford Encyclopedia of Philosophy entry on Causal Decision Theory. It explains Newcomb’s paradox and why Causal Decision Theory was devised as an attempt to solve it.

    There are also some objections, but I think the best one (ahem!) will be in my upcoming paper, “Causation, Decision Theory and Bell’s Theorem”. 🙂

  3. Given recent advances in neuroscience, which enable a machine to detect your decision to (say) push a button with your left hand (as opposed to your right hand) a fraction of a second before you are aware of having made the decision, the intriguing possibility arises of actually carrying out Newcomb’s dilemma in the lab.

    That is, we have a clock that counts down to zero. At the moment the clock shows zero, you have t seconds to decide whether to push Button 1 or Button 2, corresponding to the two choices of Newcomb’s problem. At time zero, the machine puts money (or not) into the opaque box according to its prediction of your decision. (If you press both buttons, or neither button, by the time t seconds elapse, then you get nothing.) The value of t is chosen to be small enough so that the machine can reliably predict your choice, but large enough so that you have the subjective impression that you are making your decision after the clock reaches zero.

    This experiment ought to be feasible with current technology, but I haven’t heard of anyone actually performing it.

  4. Hi Tim,

    You must be talking about the Libet experiments, or some related ones. Yes, I have wondered about a similar experiment. One issue is that there would be no time for conscious deliberation between the time of the prediction and the time that you are aware of having made the decision. The interval between prediction and consciousness of the decision is always very small: a fraction of a second. So one could argue that the agent would never really be in a situation analogous to Newcomb’s problem; the prediction would always be made while the agent is already deliberating.

    I think this is a fascinating debate, though, and in any case, it would certainly be interesting to see that experiment being done. It seems clear in this case that Evidential or Bayesian Decision Theory gives the right answer. You should pick just the closed box.

    There is another experiment that is feasible with current technology and can discriminate between causal and evidential decision theory. One can make an analogy between the concepts that go into the derivation of a Bell inequality and the concepts of causal decision theory, and show that a causalist should bet against the predictions of quantum mechanics under certain circumstances. My paper about this will appear in the British Journal for the Philosophy of Science soon. I’ll post it to my blog once it’s out.

  5. Yes, I was referring to experiments such as the ones Libet did.

    The advantage of doing an actual experiment is that it sidesteps objections along the lines of “You haven’t specified the problem properly.” This helps crystallize the discussion. For example, you say that “there would be no time for conscious deliberation.” Is that true? My impression is that the subjects at least believe that they are making conscious decisions. And if they believe that they are making conscious decisions, what grounds do we have for overriding their own reports of their consciousness?

    Furthermore, in the original Newcomb paradox, the setup has to be explained to the subject before the subject can participate, so the subject is also “already deliberating” by the time the predictor makes a decision.

    Finally, if the only issue is the quantity of time allowed for deliberation, then who’s to say that improved machines of the future won’t be able to extend the prediction time to several seconds or even minutes? If there is an absolute barrier (of 0.5 seconds?) that cannot be crossed, then this would be a very interesting experimental discovery.

    I would like to suggest this experiment to a neuroscientist who has the equipment to actually carry it out, but I don’t know any such people. If you do, I would urge you to suggest it to them.

  6. I completely agree about being able to actually do the experiment. Thinking more about the Libet case, I think there’s something there.

    In the original Newcomb problem, the prediction was actually made before the participant learns the rules of the game, but I think that is irrelevant. The important thing is that the prediction is made at a time that the agent can confidently take to be before their conscious decision, in such a way that the state of affairs in the world that will depend on the prediction (i.e., the contents of the box) can be taken to be fixed and causally disconnected from the events of the agent’s choice. If this was not the case, then the agent should not give any weight to the causal decision theorist’s argument, i.e. that they should pick both boxes on the basis that whatever they do, they can’t cause the contents of the closed box to change.

    And yes, I don’t see why the time can’t be extended. In fact, apparently it can be up to 10 seconds in some cases, which sounds quite extraordinary, but I haven’t read the details.

    But I’m not sure if we can actually set up an experiment here. In the case of the Libet-type experiment, what happens is that they can see a “readiness potential” in a brain scan some time before the agent’s reported consciousness of the decision to move some part of their body. So we explain the rules of the game to the agent (whatever they are), and plug them in. The prediction time will be the sharp rise of the readiness potential. At this time, we initiate a signal that can, say, indicate the prize of a million dollars if… what? If the agent raises their hand? What is the analogue of picking one or both boxes? The Libet effect gives you a means to tell some specified time in advance when a certain previously specified action will be taken, but I can’t see how to use this resource to make a Newcomb problem.

    The closest I can think of is something like this: “There’s a million dollars in this closed box. There’s a thousand in this open one. In the next minute you can pick both boxes and go home if you wish. If we predict that you will take both boxes—and that prediction will be provably made before you yourself report being conscious about it—we will take away your million dollars. Or you can just sit still for a minute and take home just the closed box. The clock is ticking.” But I think that a causalist could argue that they do cause the contents of the box to change by their decision in this case. Or couldn’t they?

    Does it change if we invert the problem? “There’s nothing in this closed box. There’s a thousand in this open one. In the next minute you can pick just the closed box and go home if you wish. If we predict that you will take just the closed box before you choose to do so, we will put a million dollars in it. If you just sit still for a minute you take home both boxes.” No, I don’t think it changes much.

    Either way, I think you have a point. Strictly speaking, the prediction should only be required to be made before the agent is conscious of having made the decision in order to create a problem for causal decision theory. Though the fact that the agent can’t find themselves thinking “ok, the box either contains a million dollars or it doesn’t, and nothing I do from now on will change that, so I might as well take both boxes” seems to indicate that this particular experiment won’t necessarily convince many people to change their minds about Newcomb’s problem.

  7. I addressed the mechanics in my original comment.

    There is a countdown on a visible clock. At t=0, the machine inserts either an empty envelope into the opaque box or an envelope containing money into the opaque box. This is public knowledge and there is no funny business with the machine; the experiment is conducted honestly. The subject then has a certain (short) amount of time to press either a button with the left hand, or a button with the right hand. Left hand signals “I choose both boxes” (say), right hand signals “I choose only the opaque box.” If both buttons are pushed or neither one is pushed before time is up, then the subject forfeits both boxes. The machine, of course, decides which envelope to insert based on its prediction of whether the subject will press the left button or the right button.
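    The protocol above can be sketched as a small simulation. The prize amounts and the machine's accuracy are assumptions chosen for illustration; the point is only that, against a reliably accurate predictor, habitual one-boxers walk away with more on average:

```python
import random

# Hypothetical simulation of the lab protocol: at t=0 the machine seals
# an envelope in the opaque box based on its prediction, then the subject
# presses a button for "both boxes" or "opaque box only".
SMALL, BIG = 1_000, 1_000_000   # assumed prizes (not from the comment)
ACCURACY = 0.95                 # assumed predictor accuracy

def trial(choice, rng):
    """choice: 'both' or 'opaque'. Returns the subject's payoff."""
    # The machine predicts correctly with probability ACCURACY.
    predicted = choice if rng.random() < ACCURACY else (
        'opaque' if choice == 'both' else 'both')
    opaque_contents = BIG if predicted == 'opaque' else 0
    return opaque_contents + (SMALL if choice == 'both' else 0)

rng = random.Random(0)
n = 10_000
avg = {c: sum(trial(c, rng) for _ in range(n)) / n
       for c in ('both', 'opaque')}
print(avg)  # one-boxers average far more against an accurate predictor
```

    Of course, the simulation hard-codes the correlation between choice and prediction, which is exactly the premise the causalist disputes; it illustrates the payoff structure, not a resolution of the paradox.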

    The experiment is unlikely to change many people’s minds about Newcomb’s problem simply because people rarely change their minds about things like this on the basis of factual information or logical reasoning. It is nevertheless a very interesting experiment in my opinion.

  8. What I was trying to say is that the Libet experiments give no evidence, as far as I can see, that we have the technology to predict which of two movements you will do, after being prompted to choose. All that it seems they can do is to predict the onset of the choice to perform one specified movement at a particular time.

    So if that’s the case, then in the scenario you suggest, we have no way of knowing whether the first readiness potential corresponds to a left-hand or a right-hand movement. Therefore we cannot really predict *what* you will choose, we can only predict *when* you will choose.

    So we need to turn the problem into one of predicting the time of your choice, not of predicting which of two choices you will make. Those seem to be fundamentally different resources with which to construct decision scenarios. The ones in my previous comment were the two cases I could think of: You can

    (i) start with the high payoff and give you a choice to move within a certain window of opportunity in order to get a small extra, but have your initial high payoff withdrawn if you are predicted to move before you choose to do so; or

    (ii) start with the small extra payoff and have a choice to move, when you will forfeit the small extra and have your payoff increased to the high payoff if you are predicted to move.

    Can you think of a different one that looks more like Newcomb’s problem?

    Actually, this just reminds me of Kavka’s toxin puzzle. The original paper gives this description of it:
    “An eccentric billionaire places before you a vial of toxin that, if you drink it, will make you painfully ill for a day, but will not threaten your life or have any lasting effects. The billionaire will pay you one million dollars tomorrow morning if, at midnight tonight, you intend to drink the toxin tomorrow afternoon. He emphasizes that you need not drink the toxin to receive the money; in fact, the money will already be in your bank account hours before the time for drinking it arrives, if you succeed. All you have to do is… intend at midnight tonight to drink the stuff tomorrow afternoon. You are perfectly free to change your mind after receiving the money and not drink the toxin.”

    So it seems that the Libet experiment may be used to make a real-life toxin puzzle, even if not a Newcomb problem.

    Actually, this also reminds me of another feature of the Libet experiment. It seems that it is possible that you can exert a conscious “veto power” to cancel the movement after the rise of the readiness potential. So maybe not only can you make a toxin puzzle, there may be a mechanism to cheat on it.

  9. Ah, I see what your concern is now. Technology has advanced since Libet’s day, and from talking to a neuroscientist friend of mine (not one who does these kinds of experiments, unfortunately), I’m pretty sure that distinguishing between left-hand and right-hand pushes is feasible now. I tried emailing Patrick Haggard, who unlike Libet is still alive and well, but he did not respond.

    The toxin puzzle is an interesting suggestion. It’s a little less appealing to me than the Newcomb paradox, because suppose someone insists that they formed an intention, but the machine disagrees. Does that mean the person is lying, or does it mean the machine is flawed? We can’t be sure. The advantage of the Newcomb setup is that all the important points are externally verifiable, with no need for a questionable external judgment on someone’s subjective conscious state.

    Note that if the “veto power” you suggest were real, then a person could perhaps defeat the Newcomb paradox, by fooling the machine into thinking that the left hand is about to move while in fact moving the right hand. Again, this is something that could be investigated empirically.
