Talk:Monty Hall problem/Arguments: Difference between revisions

From Wikipedia, the free encyclopedia

Glkanter (talk | contribs)


:You didn't answer my last question. Thinking only about the 2/3 times there's a goat behind door 2 (before the player picks, before the host opens a door), what is the chance the car is behind door 1 vs door 3? -- [[user:Rick Block|Rick Block]] <small>([[user talk:Rick Block|talk]])</small> 18:11, 22 February 2009 (UTC)

::It's my thread. You employ a very effective technique for forestalling improvements to the article. Rather than engage in discussion, which I'm trying to do in a civil manner, you go off on long-winded tangents. You're like Jello. It's impossible to go point by point on any issues, large or small.

::Here's my original question: From nothing more than a logical standpoint, does it seem reasonable that, since Monty sometimes gives us additional information about the whereabouts of the goat, the contestant can improve his overall probability of winning the car from the previously unconditionally proven 2/3? [[User:Glkanter|Glkanter]] ([[User talk:Glkanter|talk]])


== Decision Tree ==

Revision as of 18:24, 22 February 2009

Strategy or reality? (Random host variant)

Behaviour creates reality. If reality is bound by presumptions (no car revealed), a strategy that goes beyond such reality (random door opening) should not be counted in any valid probability analysis. Actually, the point I'm making below is that there is a fundamental difference between an experiment in which doors are opened randomly, which does not reduce the sample space of the experiment, and a single situation in which the sample space has definitely been reduced. If reality is only that single situation (B), without being part of a greater reality A (the total sample space), it makes no sense to define P(B) against A. Heptalogos (talk) 12:37, 13 February 2009 (UTC)[reply]

I will try to prove that the version in which the host opens doors randomly, but never picks a door with a car behind it, does not decrease the player’s chance to win the car when switching doors, compared to the original situation.

The original situation, in which a player picks door 1, has six theoretical possibilities:

player  host    host    player  host    total
door1   door2   door3   chance  chance  chance
a       g       -       1/3     1/2     1/6
a       -       g       1/3     1/2     1/6
g       a       -       2/3     0       0
g       g       -       2/3     1/2     1/3
g       -       g       2/3     1/2     1/3
g       -       a       2/3     0       0

a = auto (car); g = goat; "-" marks a door the host leaves closed

Explanation: if door1 hides a goat (the chance of which is 2/3), the host has two options: reveal a goat behind door2 or reveal a goat behind door3. Since the goats are placed randomly, these options are equally likely.
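The totals in the table above (switching wins in the two 1/3 rows, i.e. 2/3 of the time) can be checked with a short simulation. This is a sketch added for illustration, not part of the discussion; it uses door indices 0-2 instead of 1-3:

```python
import random

def play_standard(n_trials=100_000, seed=42):
    """Standard game: host knows where the car is and always opens an
    unpicked door hiding a goat; returns the observed switch-win rate."""
    rng = random.Random(seed)
    switch_wins = 0
    for _ in range(n_trials):
        car = rng.randrange(3)            # car placed randomly
        pick = rng.randrange(3)           # player's initial pick
        # host opens a door that is neither the pick nor the car
        host = rng.choice([d for d in range(3) if d not in (pick, car)])
        # switching means taking the one remaining closed door
        switched = next(d for d in range(3) if d not in (pick, host))
        switch_wins += (switched == car)
    return switch_wins / n_trials

print(play_standard())  # close to 2/3, matching the table's 1/3 + 1/3 total
```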

The version in which a host randomly opens door 2 or 3 has the same possibilities with different chances:

player  host    host    player  host    total
door1   door2   door3   chance  chance  chance
a       g       -       1/3     1/2     1/6  *
a       -       g       1/3     1/2     1/6  *
g       a       -       2/3     1/4     1/6
g       g       -       2/3     1/4     1/6  *
g       -       g       2/3     1/4     1/6  *
g       -       a       2/3     1/4     1/6

* In 4/6 = 2/3 of the possibilities the host does not reveal the car. If we were to test this statistically, we could play the game e.g. 999 times and create about 666 situations in which no car was revealed. Then we look only at those 666 cases, and pick one of them. Now there are four options that could have created this situation:

player  host    host    player  host    total
door1   door2   door3   chance  chance  chance
a       g       -       1/3     1/2     1/6
a       -       g       1/3     1/2     1/6
g       g       -       2/3     1/4     1/6
g       -       g       2/3     1/4     1/6

Indeed, they add up to a total chance of 4/6 = 2/3, which is exactly the part we are in now. But these chances are the old chances that created this situation. What is the chance that any of these four possibilities is now true? Of course it is 1. The sum of chances changed from 2/3 to 1, because we stepped out of the 999 options into the 666, carefully selected by one variable. In fact, the 'knowledge' of the host in the original problem has been replaced by the 'knowledge' of the creator of this new situation. By removing all 'car-revealed-by-host' possibilities afterwards, we create the same situation as when no cars were revealed in the first place.

If the sum of the four possibilities is 1, what are the individual chances? The player now finds himself in a new situation in which one of those four possibilities is true. This is obvious because a goat, not a car, has been revealed. It doesn't matter to him why the host opened a door, randomly or with knowledge; the host's behaviour is the same in both cases, leading to the same new knowledge for the player. Could the host have changed the chances of the two possibilities in which the car is behind door1? No; he could only prefer one goat over another, and they share no relevant difference. So he must have changed the chances of the other possibilities. Is that possible? Of course: revealing a goat behind doorX eliminates the other option, in which the car is behind doorX, so those chances become one in number. These are the new chances:

player  host    host    player  host    total
door1   door2   door3   chance  chance  chance
a       g       -       1/3     1/2     1/6
a       -       g       1/3     1/2     1/6
g       g       -       2/3     1/2     1/3
g       -       g       2/3     1/2     1/3

Which are the same as the chances in the original problem. Could I have explained this more simply? Probably. Test it yourself by playing with three cards, of which two are identical and one is unique. Let a player pick one and turn over another one yourself. Every time you turn over the unique card, stop the game and start again. It is no different from knowing the cards and turning over a twin card in the first place. Heptalogos (talk) 23:23, 31 January 2009 (UTC)[reply]

"The conclusions of this article have been confirmed by experiment" Does anyone have information about experiments on the 'random host variant'? Heptalogos (talk) 16:48, 1 February 2009 (UTC)[reply]

The primary purpose of the bold disclaimer at the top of this page is to encourage folks who are unconvinced switching wins 2/3 of the time in the regular version to think about it before posting comments here doubting the solution. Many experiments and simulations have shown the 2/3 result. I am not aware of similar experiments regarding the random door variant.
However, I think you may be slightly misunderstanding this variant. If the host reveals the car accidentally we don't keep the same initial player pick and have the host pick another door (until he manages to not reveal the car), we ignore this case completely. In your playing card version if you reveal the unique card, stop the game and start again and don't count this time, i.e. re-randomize the cards and have the player pick one. I think you understand that if you do this 999 times, roughly 333 times you'll reveal the unique card and roughly 333 of the other 666 times the player will have picked the unique card (right?). In this variant we're only counting those 666 times (not the entire 999), so the chance of winning by switching is 1/2. Your tables above are confusing because of the "player chance" and "host chance" columns. If you remove these columns and leave the equally possible 6 cases (your second table), the total chance of each is 1/6 as you indicate. We can count the cases where the player picks the car and see the chance of this is 2/6. Similarly, counting the cases where the player picks the goat indicate a chance of this of 4/6 (2/6 plus 4/6 better be 1, since the player either picks the car or a goat!). What happens if we're given that the host didn't reveal the car is that there aren't 6 cases, but only 4. In 2 of these the player picked the car. The probability of the player having picked the car changes from 1/3 to 1/2 because we've eliminated 2 of the possible 6 cases (and in both of them the player initially picked a goat). -- Rick Block (talk) 18:54, 1 February 2009 (UTC)[reply]
I understand. The 333 cases which we threw away are of course all car picks, which now indirectly influence the outcome of the remaining group as a whole. Now I understand why I cannot accept the issue: it seems from the description of the situation that Monty is playing his usual game, remembering the door he should open, but then in one case forgets. In this situation there is no way to define the chances, just because it's a single event. Actually, I would say that in case he never in his career accidentally picks a car, nothing changes the odds, because there is no change in behaviour. In other words: the chances only change to 50/50 if the game changes statistically, which would mean that Monty always picks randomly, revealing cars 1/3 of the time. Only then would it be true.
Now in the original problem description it says somewhere that "This means if a large number of players randomly choose whether to stay or switch, then approximately 1/3 of those choosing to stay with the initial selection and 2/3 of those choosing to switch would win the car." Only this is true, and it is actually not true, as stated above, that the player should switch (in any individual case). So it can be good advice to all players of the game generally, to be followed consistently, in a consistently played game. The statement about a clueless host is correct if this is either a consistent, repeating situation, or if it's defined as generally the best choice, but in the latter case a car should be revealed a significant number of times. I suggest changing the idea of Monty having a memory leak on one particular day into a consistent situation from which chances can be correctly analyzed. Heptalogos (talk) 20:13, 1 February 2009 (UTC)[reply]
I still stand by my attempted proof of "the version in which the host opens doors randomly, but never picks a door with a car behind it", because this is something else than my 'easier explanation' below with the card game. The card game is statistically realistic, revealing the 'wrong' card and stopping the game repeatedly. But playing randomly and yet never opening the 'wrong' door is a small chance on its own! (Which becomes smaller as more repetitions are done.) Hence this is a different world, starting with the assumption that the right door always opens. The 333 expected options do not exist, are not excluded, and do not change the chances of the remaining part. Heptalogos (talk) 20:34, 1 February 2009 (UTC)[reply]
Even if the situation is a single game where Monty forgets we can talk about the chances, although in this case in a strict sense it would be a Bayesian probability rather than a frequency probability. And, although there is a small chance playing randomly that a wrong door is not opened, this doesn't mean much about the frequency probability which by definition is the limit over a large number of trials. If the host hasn't revealed the car roughly 1/3 of the time you simply haven't run enough trials. -- Rick Block (talk) 00:07, 2 February 2009 (UTC)[reply]
Actually we can never define chances in any single specific case. We can only say that in identical situations chances are..., which means that the outcome is... x% of the time. In case Monty only sometimes forgets, I see two possibilities:
1. Monty forgets once every x shows. (We even need to know x.)
2. In all series of identical shows, the host will forget once. (number of shows in a series is irrelevant)
We need this population of possibilities to define chances within, one of which could be a random single event's chance, but not a specific single event's chance. How many times does Monty forget? What is the chance that he forgets? We don't need to know this only when he always forgets, or in other words, when the host consistently acts randomly.
I think we are reversing the essentials here. To define chances, we basically need a significant group of events from which we know the overall outcomes. Then we can group similar outcomes and calculate the chance of one of those. Furthermore we may try to understand what causes such groups. A probable cause can be the behaviour of a host randomly opening one of two doors. Now if we presume this behaviour, we can define the overall chances only because we predict the outcome of this behaviour, which is the given group. We assume that the behaviour will create a significant group of outcomes, but it should really reach this number of events before we can be sure that the overall reality looks like it. If this behaviour only creates one single event we cannot say anything about it!
Not only do we miss a big enough control group to define solid chances; even if we define such chances, it makes no sense to assume that they will predict a single specific event. The fun part is that one cannot reject this by experiment, because in those cases a control group will already be created. We can only prove that behaviour is random when we create a big enough reality with it. But it is more than that: without this reality there actually is no random behaviour. I apologize for the generality of this discussion, but I only started this because in this case even a description of such a reality is missing! Random behaviour is assumed as a single exception! This is totally unreal and cannot even be closely proved experimentally. As soon as you do, you create reality. Most realities which are big enough to be near-significant will confirm the chances, just because of the 'size' of the reality. How many small realities deviate? We never know, but if we can group them, we can create the bigger reality that we need. That's why we need a bigger control group like one of the two possibilities presented above.
This is not just a formal philosophical argument. There is no such thing as random behaviour in single specific events. Every single event has its own unique variables, which may or may not be known by anybody. That is why we can never predict, by experiment, a single specific outcome of random behaviour. It is only in a significant group of events that all of the uncontrolled variables wipe out each other's relevant influence on the given situation. Heptalogos (talk) 22:17, 2 February 2009 (UTC)[reply]
If I put 17 marbles in an opaque urn, 16 of which are white and one black but otherwise identical, you put your hand in and grab one, I think it's common to say you have a 1 in 17 chance of picking the black one (even if you only do this once). We don't have to do this a large number of times to analyze what the chances are using probability theory. Similarly, we can say one individual player in the Monty Hall problem has a 1/3 chance of initially picking the door hiding the car (which we assert is placed "randomly" behind one of the 3 doors). If the host forgets where the car is and opens a door "randomly" (and if the car was actually placed randomly but the host forgets where it is, opening either door is equivalent to opening a door randomly) and fortuitously doesn't reveal the car, we can (using probability theory) say the player has a 1/2 chance of winning by switching because there are 4 equally likely scenarios with 2 where switching wins. Could we design a test to show this experimentally? Sure. Do we have to in order to believe it's true? Well, I don't. I trust probability theory and the law of large numbers. -- Rick Block (talk) 03:40, 3 February 2009 (UTC)[reply]
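Rick's "4 equally likely scenarios" count can be done exactly, without running trials. A sketch using exact fractions (the tuple encoding of scenarios is mine): the player has picked door 1, the forgetful host opens door 2 or 3 at random, and we condition on a goat being revealed.

```python
from fractions import Fraction

third, half = Fraction(1, 3), Fraction(1, 2)
# Enumerate (car position, door opened) with exact probabilities:
# car is behind 1, 2 or 3 (each 1/3); host opens door 2 or 3 (each 1/2).
cases = {(car, host): third * half for car in (1, 2, 3) for host in (2, 3)}
# Condition on the observed event: the opened door happened to hide a goat.
goat_shown = {k: p for k, p in cases.items() if k[0] != k[1]}
p_goat_shown = sum(goat_shown.values())            # 4 cases of 1/6 each
# Switching wins when the car is behind neither door 1 nor the opened door.
p_switch_wins = sum(p for (car, host), p in goat_shown.items()
                    if car not in (1, host)) / p_goat_shown
print(p_goat_shown, p_switch_wins)  # 2/3 1/2
```

Of the six equally likely (car, opened door) pairs, four survive the conditioning, and switching wins in exactly two of them.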
It's common to say that you have a 1/17 chance to pick a black marble if you try once, but it's not true. Actually there is no chance directly related to a single specific event. The event is related to (as part of) a group of similar events to which chances are related. These chances are generic and cannot be linked to a unique element of the group. That's why we can strictly only advise a group of players, or a player who will repeat a process. The first, spatial similarity focuses on similar players in similar shows, while the temporal similarity addresses a certain repeating frequency of the same spatial situation.
What happens if we're in a situation in which we (once) seem to have a chance to pick one out of many, is that we make this event part of a group of (seemingly) similar events, and divide the group in two subgroups: the ones and the manies. Now we want to predict in which subgroup we arrive by event. We have no idea (random) what causes access to one of these subgroups, but we only know that one subgroup is bigger. Is it more likely to arrive in a bigger subgroup, just because it’s bigger? If we try, by process, we will see that most events end up in the bigger group. But it only becomes (generally) true because it becomes real, in a big enough group. Fortunately we don't have to start the whole process ourselves; we can make our event part of a group of events which happened in the past. Or we can make it part of similar single events in similar situations at the same time, assuming that they're all random.
Could we design a test to show this experimentally? Do we have to in order to believe it's true? My answer is: we don't have to do the experiment, but we should indeed do the design! We should describe a theoretical reality to make it credible. One cannot act randomly 'once'; we really need the bigger picture. Please design your test, as you say you can, and you will see that no forgetting Monty can exist in such an experiment. You will likely replace him with 'another' random event, which will even be repeated, but that's really something else. The situation is presented as random, but it's not. Apart from that, there is no way to define chances in a single specific event. Heptalogos (talk) 21:18, 4 February 2009 (UTC)[reply]

Logic and math as different disciplines

We should consider distinguishing logic from maths. We might even agree then. Logic tells us that if we have x doors, from which a player randomly picks one (door 1) after which a host picks another (door 2), the initial chance of the player choosing the one and only winning door is 1/x. If we assume that the host picks the winning door if possible, and otherwise chooses randomly, the chance that he chooses the winning door is 1-(1/x). In this scenario there are two possible situations:

1. Doors 1 and 2 are numbered after being chosen.
2. Doors 1 and 2 are numbered before any choice is made.

In the second situation we will afterwards restrict the experiment to the outcomes mentioned. Logic tells us that such a restriction does not change the second chance. This chance is 1-(1/x), for all x, which is logically valid and can be proved by experiment.

Mathematically, this may be wrong. I am not even interested in whether this is the fundamental discussion which it seems to be (about strict policies). But we have to respect the fact that mathematics is a product of logic, a tool to make logic more efficient and effective. It cannot outclass logic itself. If we distinguish logic from maths, we can provide the simple, logical solution, and afterwards present the mathematical solution. The latter solution is supervised by experts and needs academic sources, but what about the logical one? Does it belong to philosophy? Heptalogos (talk) 08:36, 11 February 2009 (UTC)[reply]

Btw, if doors 1 and 2 are 'picked', the host might open all the other doors (3 through x), or not. The player might switch to the host's choice. Actually, the knowledge of the player doesn't matter, because we decide the chances. That's why it doesn't matter whether doors are opened or not, or whether players are blindfolded, etc. Heptalogos (talk) 09:13, 11 February 2009 (UTC)[reply]

Maybe I still need to explain why "Logic tells us that such a restriction does not change the second chance". Suppose we know that in every country of the world men and women exist in equal amounts. Now we have to pick a country after which we randomly have to pick a person in that country. What is the chance this person is a woman? In this scenario there are two possible situations:

1. We have all countries to choose from.
2. We can only choose African countries.

Logically in both situations chances are the same, because the restriction does not affect the probability. Heptalogos (talk) 09:44, 11 February 2009 (UTC)[reply]

As you may have noticed, I am an expert in logic. Heptalogos (talk) 09:45, 11 February 2009 (UTC)[reply]

I think I understand why maths has to follow a straight line; it cannot make logical jumps. The same occurs when writing computer code. The binary code is one-dimensional, while the human brain has more dimensions. Humans can use several time-space coordinate systems. On one hand we may all agree on the validity of certain logical jumps, but on the other hand it may be hard or even impossible to prove them logically. I agree that the mathematical approach, which has solid rules for such proof, is a better proof, or even the only exact proof. On the other hand, maths is a product of the same logic and we need to be pragmatic in some way. It's an interesting idea that we actually only understand things in our consciousness in a logical way, but we can only make it an objective truth by externalizing and even materializing it (writing it down in strict 'code'). To make this less easy: scientists are experts in understanding their own systems, checking whether any logic complies with them, without necessarily understanding any of the logic itself.

I call these 'operational' scientists. But think of all the great scientists who had their moments of 'eureka', logically, after which they had to work out new code. They even had to search intensively for the 'right' code to make it fit to their logic! What is good logic and what is bad logic? Besides 'code', the best proof of good logic is 'reality'. I would like to turn that around and state that logic is a recognition of a specific expression of the laws of nature. One recognizes the practice of natural law and tries to make it commonly accessible. The keyword is 'patterns', to be recognized, to be registered, to be tested and to be executed by ourselves.

REALITY -> LOGIC -> SCIENCE -> REALITY

It's a circle of construction. In the famous Monty Hall paradox, we make a logical jump, which really seems realistic, but some scientists make objections. The logical jumps do not fit into their systems. Is it bad logic, does science need an extra dimension, or are these exceptions rather useful to understand the limits of our disciplines? Heptalogos (talk) 11:33, 11 February 2009 (UTC)[reply]

Probabilistic analysis

Notation:

A=number of the chosen door
C=number of door with car
D=number of the opened door

Given:

P(A=a)=1/3 (a=1,2,3)
P(C=c)=1/3 (c=1,2,3)
A and C are independent
P(D=2|A=1 and C=1) = 1/2
P(D=3|A=1 and C=1) = 1/2
P(D=3|A=1 and C=2) = 1
P(D=3|A=1 and C=3) = 0

Similar for other combinations. For simplicity we write A1 instead of A=1, etc.

Given {A=1 and D=3} what is the conditional probability of {C=2}?

Nijdam (talk) 22:30, 13 February 2009 (UTC)[reply]
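Nijdam's question can be answered mechanically from the stated givens by Bayes' rule; a sketch added for illustration, using exact fractions:

```python
from fractions import Fraction

# Nijdam's givens, with A=1 fixed: P(C=c) = 1/3, and the host's behaviour.
p_C = {c: Fraction(1, 3) for c in (1, 2, 3)}
p_D3_given_C = {1: Fraction(1, 2),   # car behind the chosen door: host opens 2 or 3 at random
                2: Fraction(1),      # car behind door 2: host must open door 3
                3: Fraction(0)}      # host never opens the door hiding the car
# Bayes: P(C=2 | A=1, D=3)
#   = P(D=3 | A=1, C=2) P(C=2) / sum_c P(D=3 | A=1, C=c) P(C=c)
posterior = (p_D3_given_C[2] * p_C[2]) / sum(p_D3_given_C[c] * p_C[c]
                                             for c in (1, 2, 3))
print(posterior)  # 2/3
```

So given {A=1 and D=3}, the conditional probability of {C=2} is 2/3: switching wins.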

A simple experiment

Prizes were placed randomly. It is unknown whether the player chooses randomly. He might always choose the same door, but we don't even know if he can (distinguish doors). Even if he would always pick the same door (i.e. door No. 1 is always the same physical door), the results are still random because the prizes are placed randomly. Simulation of this event will relate the car to door No. 1 in 1/3 of the times, in random order. Instead of relating the car to door No. 1, it may relate the car to 'C', which means 'player's Choice'.

Computer simulation looks like this:

Ca
Cg
Cg
Cg
Ca
Cg
Ca
Cg
Cg

P(Ca)=1/3, P(Cg)=2/3

Now two prizes are left, from which a goat (No. 3) has been excluded from the experiment. This is equivalent to offering switch-door No. 2 ('S') with two possible outcomes:

Sa
Sg

We know that (C,S) is a pair of (a,g) in some order, with these possible combined outcomes (only a car and one goat are left in the game):

Ca Sg
Cg Sa

We can simply add it to the simulation above:

Ca Sg
Cg Sa
Cg Sa
Cg Sa
Ca Sg
Cg Sa
Ca Sg
Cg Sa
Cg Sa

Switching leads to the car (Sa) 2/3 of the time. P(Sa)=2/3, P(Ca)=1/3. Heptalogos (talk) 13:25, 15 February 2009 (UTC)[reply]
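The paired listing above can be generated by simulation. This sketch (added for illustration) encodes exactly Heptalogos's pairing: whenever the player's choice C hides a goat, the switch door S holds the car, and vice versa:

```python
import random

rng = random.Random(7)
rows = []
for _ in range(9999):
    # prizes placed randomly; 'C' is the player's choice, 'S' the switch door
    c_has_car = (rng.randrange(3) == 0)   # P(Ca) = 1/3
    rows.append("Ca Sg" if c_has_car else "Cg Sa")
p_Sa = rows.count("Cg Sa") / len(rows)
print(round(p_Sa, 2))  # near 2/3: switching reaches the car in about 2/3 of the rows
```

Note this sketch builds in the assumption that C and S always split one car and one goat, which is exactly the step of the argument under dispute in the sections above.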

Why conditional?

Can someone please explain why the problem should be treated as a conditional problem, as P(A|B)? I would like a general answer, even a maths law if possible. I think there are mainly two reasons to see it as conditional:

1. Event B is affecting the sample space.
2. Event B is affecting the probability of event A.

I think the first is no argument for making a conditional treatment obligatory, but it's still used by many people in the Monty Hall problem, without explicitly saying so, or even being aware of it. The second reason would be an argument, but it's not the case in the MHP.

One other question, because it's related: isn't it true that the only definite way to identify unique doors is by one of these:

a. It has been picked;
b. It has been opened;
c. It is related to a car;
d. It is related to a goat (only after another door has been opened or another door related to a goat has been picked);
e. It is the only one not identified yet, and therefore identified?

Event a. may be called door No.1, event b. may be called door No.3, but that is at least a label on the event and not necessarily a physical identification of a door beforehand. Heptalogos (talk) 21:23, 15 February 2009 (UTC)[reply]

To put the second question differently: I see no information in the number, at least not for reason 2. There might be for reason 1, as a reduction of the sample space, but what information does the number give? It is a precondition that the sample space will be reduced by one door (with a goat), so this is no new information. What new information is in a number? Why this numbering? It seems that the only reason to number the doors beforehand (like Krauss and Wang (2003:10) seem to do, but even that is not sure) is to be able to 'prove' afterwards that a conditional approach is needed. Heptalogos (talk) 22:47, 15 February 2009 (UTC)[reply]

The way the problem is phrased ("Suppose you're on a game show") gives it a context in which I think it's clear we're not talking about abstract conceptual "doors" that may or may not have any physical reality but doors labeled 1, 2, and 3 on a physical stage. There's a photo of the actual "Let's Make a Deal" studio at http://www.letsmakeadeal.com/. The problem is perhaps not about this specific game show, but we're told it's about a game show which we should assume is at least similar. The decision point (switch or not) is after an initial door has been picked following which the host has opened a door to reveal a goat. Here's a direct quote from the Gillman paper referenced in the article:
Game 1 [the Parade version of the problem] is more complicated: What is the probability P that you win if you switch, given that the host has opened door #3? This is a conditional probability, which takes account of this extra condition. [italics in the original]
Regardless of what you call the door the player initially picked (and we can renumber the doors so that we always call this door 1) and what you call the door the host has opened (which we can similarly always call door 3) the question remains should you switch from one door (which we'll call door 1) to another (which we'll call door 2) given the host has opened a door revealing a goat (which we'll call door 3). Given a choice between representing this question as
P(win by switching)
or
P(win by switching | player picked door 1 and host opened door 3)
isn't the conditional question obviously a more appropriate mathematical model? It may turn out that the answer to these two questions are the same, but we can't know this until we answer the second question. -- Rick Block (talk) 23:34, 15 February 2009 (UTC)[reply]


Rick, thank you for your ever continuing effort to try to make other people understand, which I honestly appreciate. Yet, it's not that easy. Your explanation is very clear from 'Regardless' onwards. (Let's forget about abstract doors, because of course I agree that they're physical.) You try to make things obvious, but the explicit explanation is still missing. I can do the same:

P(win by switching)
or
P(win by switching | player picked door 1 and host opened door 3)

Isn't the conditional question obviously a more appropriate mathematical model?

Since we already agree/assume that door 1 is always picked, door 3 is always opened, and a goat is always revealed, these are preconditions rather than new information. Do you get my point? Heptalogos (talk) 11:36, 16 February 2009 (UTC)[reply]

As far as I know, only one person tried to explain explicitly, which is Nijdam, stating that when event B reduces the possible outcomes, P(A|B) should be the calculation method, which is the conditional one. This is at least a mathematical rule being applied here. What we need is rules! How can we just 'show' something and assume that it's obvious? Where is the proof? Isn't this all about statistical dependency? Heptalogos (talk) 11:47, 16 February 2009 (UTC)[reply]

Perhaps you're questioning what probabilistic universe we're starting from, which I think means deciding which parts of the problem description should be treated as "rules" of this universe rather than conditions on a problem posed within this universe. What we're basically talking about here is what should be considered the Bayesian prior. If we're fully rigorous, we never write
P(A)
but rather
P(A|I)
where I is the background information that is known. The Bayesian analysis section of the article follows this approach and makes it very clear what is background. In a formal Bayesian sense, all problems are conditional problems. What we're calling the "unconditional" probability here (well, at least what I'm calling "unconditional") is
P(win by switching | the game rules)
where "the game rules" is everything before the "Imagine ..." sentence in the K&R description (car/goats randomly placed, player initial pick, host must open a door revealing a goat, host opens random door if player initially picks the car). Does this help? -- Rick Block (talk) 14:53, 16 February 2009 (UTC)[reply]

Well it surely helps if it enables you to answer the following question, in general: "which conditions need Bayesian method?" I ask "in general", because you are still not answering my question about the common rules. It may be the same question as "which conditions need conditional probability method?"

What about this: "If events A and B are statistically independent, then P(B|A) = P(B)" Heptalogos (talk) 16:39, 16 February 2009 (UTC)[reply]
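The quoted rule can be illustrated on the symmetric problem itself. In this sketch (added for illustration; door labels and the tuple encoding are mine) the player picks door 1 and the standard host splits ties at random; conditioning on "door 3 was opened" then leaves the switch-win probability unchanged, which is exactly the independence statement P(B|A) = P(B):

```python
from fractions import Fraction

third, half = Fraction(1, 3), Fraction(1, 2)
# Joint probabilities over (car position, door opened), player picks door 1.
joint = {(1, 2): third * half, (1, 3): third * half,  # tie: host opens 2 or 3
         (2, 3): third,                               # car behind 2: must open 3
         (3, 2): third}                               # car behind 3: must open 2
p_win = sum(p for (car, d), p in joint.items() if car != 1)   # unconditional
p_D3 = sum(p for (car, d), p in joint.items() if d == 3)
p_win_given_D3 = sum(p for (car, d), p in joint.items()
                     if d == 3 and car != 1) / p_D3
print(p_win, p_win_given_D3)  # 2/3 2/3: here P(B|A) = P(B)
```

With a biased tie-breaking host the two numbers would differ, which is the usual argument for why the independence has to be shown rather than assumed.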

Heptalogos suggested that we continue a conversation here. It was about what makes a particular event a condition in a probability problem. Let us consider four events that happen (or might happen) between the player's original choice of door and his decision to swap or not. Which of these must be taken as a condition in the solution of the MH problem, and why?

  1. Monty opens a door to reveal a goat
  2. Monty uses the word 'pick'
  3. A member of the audience coughs
  4. It starts to rain outside

—Preceding unsigned comment added by Martin Hogbin (talkcontribs)

I'm not trying to be difficult, but what do both of you mean by "need"? It's certainly true that if A and B are independent, then P(B|A) = P(B) - but you've just moved the question to how you know whether A and B are independent (in the case of the MH problem, this is fundamentally the argument that several folks have mentioned is missing from the "unconditional" solution). The point of Bayesian priors is that they eliminate the need to ask the question you're asking. It is a more precise formalism that (among other advantages) neatly avoids the exact issue we're talking about.
(Maybe I should learn about Bayesian statistics.) Martin Hogbin (talk) 20:54, 16 February 2009 (UTC)[reply]
The answer to Martin's question is the first one, and only the first one, because it is exactly what the problem asks. If the problem statement were "what is the chance of winning by switching if a member of the audience coughs" we'd have to include this as a condition. Or am I completely missing what you're asking? -- Rick Block (talk) 19:46, 16 February 2009 (UTC)[reply]
What about 2? Monty does use the word 'pick' when he asks if the player wants to swap. Martin Hogbin (talk) 19:51, 16 February 2009 (UTC)[reply]
Martin - I honestly don't have a clue what point you're trying to make. Care to explain? -- Rick Block (talk) 02:00, 17 February 2009 (UTC)[reply]
It is about what criteria we need to apply to determine whether an event that occurs between the initial choice and the decision to swap must be treated as a condition. You have given one good criterion, that it must be mentioned in the question. But in the Parade statement Monty says, 'Do you want to pick...?'. Why is it not necessary to formulate the problem with this statement as a condition? Why do we not need to say, after the initial choice, that we need to determine the probabilities given that Monty uses the word 'pick'? —Preceding unsigned comment added by Martin Hogbin (talkcontribs)
Is this merely a pedantic point, or are you truly confused and can't see the difference between using the word "pick" and opening a door? Like any word problem, there's a mapping that has to be done from the natural language version to a more precise mathematical version. I'm saying the natural language version of the problem, as commonly understood, maps to a conditional probability problem, and is specifically asking about the conditional probability P(Ws | D3) of winning by switching given that the host has opened door 3, where the "unconditional" probability P(Ws)
refers to the probability of winning by switching as seen by all players.
What are you saying? -- Rick Block (talk) 19:45, 17 February 2009 (UTC)[reply]


When you do the mapping from natural language to a mathematical description, why do you not do it like this: P(the player wins by switching | the player has picked Door 1 and the host has opened Door 3 to reveal a goat)?
I think it much the same question that Heptalogos is asking. Martin Hogbin (talk) 22:26, 18 February 2009 (UTC)[reply]

To answer you question above Rick, I cannot see any difference in principle regarding saying the word 'pick' and opening a door in relation to the MH problem, if I am missing the obvious please point it out to me. Martin Hogbin (talk) 22:30, 18 February 2009 (UTC)[reply]

The difference is that it's obvious that the user's initial pick is the car with probability 1/3. The entire question is what impact, if any, the host opening a door has on this probability. To rephrase slightly, the fundamental question here is the difference between P(C1),
i.e. the probability of initially picking the car (regardless of what the host does), and the conditional probability P(C1 | D3) given the door the host has opened.
If we assume there's no impact on the player's initial probability the problem is trivial - in effect, we've assumed the solution. As it turns out, there is no impact, but the only actual way we can figure this out is to figure out the conditional probability. -- Rick Block (talk) 23:02, 18 February 2009 (UTC)
You seem to have missed my point which is, why do we not include the fact that the host uses the word 'pick' as a condition. Is it because we assume that using this word cannot change the probability that the player has chosen a car? Martin Hogbin (talk) 00:01, 19 February 2009 (UTC)[reply]
I don't know if there are hard and fast rules for how to map a natural language problem to a math problem, which is what it sounds like you're really looking for here (are you trying to force me to admit this?). I've tried to get C S to respond to this (and have recently asked Glopk as well). They might have a more formal answer, but I think this basically boils down to common sense. If something is obviously irrelevant with regard to the mathematical model of the problem, we shouldn't include it. We can choose to include it, and conclude it does not change the probability that the player has chosen a car, but if it's something that's obviously extraneous why should we bother?
This might come across as condescending (not the intent - I just want to use an example that is obviously clear), but a word problem like "If Johnny has 2 apples and 1 orange, and Sally has 3 apples and 4 jelly beans, then how many apples do they have altogether?" is obviously (to any adult) the math problem "what is 2 + 3". Similarly, the MH problem is obviously (to a mathematician) a conditional probability question involving the door the host opens (and not whether he uses the word "pick"). -- Rick Block (talk) 02:48, 19 February 2009 (UTC)
Martin, as the main editor of the current "Bayesian Analysis" section, I'd like to reply to your question as stated at the beginning of this thread:
When you do the mapping from natural language to a mathematical description, why do you not do it like this?
What I do not like about this formulation is that it is uneconomical: it is very verbose, but identical in meaning to the one currently in the article. After careful reading of the "Bayesian Analysis" section of the article, you ought to agree that (writing Ws for "the player wins by switching", Ck for "the car is behind Door k", and D3 for "the host opens Door 3"):
P(Ws | D3) = P(C2 | D3), since the player wins by switching if and only if the car is behind Door 2.
The player's pick of Door 1 is part of the background information, by the definition of that symbol.
The host's opening of Door 3 reveals a goat, as is specified/implied in the rules of the game.
Therefore your probability expression above is the same as P(C2 | D3), which is exactly the one whose value is computed in the Bayesian proof, and found to be 2/3 in the standard MH formulation.
So, to repeat Rick Block's question: what are you saying? glopk (talk) 04:56, 19 February 2009 (UTC)[reply]

Although I'm still involved in a private discussion with Martin, I might as well try to explain what I think is the essence of Martin's question, which I rephrase as: what events lead to conditioning? I said this before, but there is no harm in repeating: all events that put limiting constraints on the sample space. So the cough of a member of the audience might be a conditioning factor if "coughing of the audience" were part of the sample space in the first place. But in general one doesn't take this into account. And even if one did, a cough would not limit the possible outcomes. But distributing a car and opening a door play an essential role in the sample space. I have given several representations of it (without taking coughing and the use of the word "pick" into account!). And I showed, and it is not difficult to understand, that opening a specific door limits the outcomes. That's why. Nijdam (talk) 13:04, 19 February 2009 (UTC)

If your rendition of Martin's question is correct, the question is ill-posed: there is no such thing as "events" that lead to conditioning, particularly not in the standard formulation of the MHP, where the background information is stationary (the rules do not change while the game is in progress). Rather, the conditioning among the terms simply reflects the logical structure of the problem. However, I believe that Martin is asking a different question, namely what makes the opening of a particular door following the player's selection a relevant conditioning term, whereas a sneeze in the crowd is not. Or, writing Ck for "the car is behind Door k" and D3 for "the host opens Door 3", why it is that, as a function of k, P(Ck | D3) ≠ P(Ck)? The answer, which Rick Block has already given above, and I restate here formally, is that the conditioning of Ck on D3 is due precisely to the rules of the game. This is because (in the standard formulation) Monty follows a well-defined algorithm to decide which door to open, and the outcome of the algorithm depends on where the car is. Hence D3 is not independent of Ck, i.e. P(D3 | Ck) ≠ P(D3), and therefore (by Bayes's theorem) P(Ck | D3) ≠ P(Ck). On the other hand, the statement of the problem does not mention anything about sneezes among the public, phases of the moon, or opening prices of sweet crude at the NYMEX, therefore (by Ockham's razor) we must assume the independence of Ck from any of them. glopk (talk) 17:18, 19 February 2009 (UTC)
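As an aside, this Bayes computation can be checked by direct enumeration. The sketch below is my own illustration (names are made up): it encodes the standard rules, with the player holding door 1 and the host flipping a fair coin between two goat doors, and computes the posterior probability of each car location given that door 3 was opened.

```python
from fractions import Fraction

# Standard rules: player has picked door 1; the car is behind door k
# with prior 1/3 each; the host opens a goat door, flipping a fair
# coin when both unpicked doors hide goats.
prior = {k: Fraction(1, 3) for k in (1, 2, 3)}

def p_host_opens_3(car):
    # Probability the host opens door 3 given the car's location.
    if car == 1:            # goats behind doors 2 and 3: fair coin
        return Fraction(1, 2)
    if car == 2:            # door 3 is the only goat door available
        return Fraction(1)
    return Fraction(0)      # car behind door 3: host never reveals it

# Bayes' theorem: P(Ck | D3) = P(D3 | Ck) P(Ck) / P(D3)
p_d3 = sum(p_host_opens_3(k) * prior[k] for k in prior)
posterior = {k: p_host_opens_3(k) * prior[k] / p_d3 for k in prior}
print(posterior)
```

The posterior for door 1 stays at 1/3 while door 2 rises to 2/3, which is exactly the dependence of the car's location on the opened door being discussed here.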
I hope Martin will speak for himself, but in the meantime I may comment on your analysis. As the rules do not change during the play (why call this stationary?), I skip mentioning them all the time. In my opinion the main discussion is only about the difference between the unconditional P(C1) and the conditional P(C1 | D3), and the necessity of the use of the latter. Nijdam (talk) 17:33, 19 February 2009 (UTC)
Nijdam, at this point I really don't understand what you are talking about - I suspect you are attaching two different meanings to each of the symbols above. Please spell out the proposition whose probability you wrote above as P(C1 | D3). Just the proposition, with no "conditional" or "unconditional" qualifiers. Then we can be sure we're talking about the same thing. Thanks. glopk (talk) 07:19, 20 February 2009 (UTC)
I have accepted Rick's criterion that only things mentioned in the question should be taken into account, but Monty does say the word 'pick' in the Parade statement. Why is this not taken into account? Can we ignore anything that we deem independent of the location of the car? Martin Hogbin (talk) 19:45, 19 February 2009 (UTC)

For the less mathematically skilled, I would say that a lot of these problems have the following form. A situation is given and the probability of some event is calculated. Then something happens and one is asked about "the probability" of the mentioned event. I put quotation marks around "the probability" because it is a different probability than the initial one. Why else should we bother? What is the difference? Well, because something has happened, a new situation has arisen. And it is in this new situation that the new(!) probability has to be calculated. It may or may not differ from the old one, and it is common to phrase this as: the probability has or has not changed. These problems are of course only of interest if the thing that happens essentially changes the situation. But in any case a new probability has to be calculated in the new situation. Probabilists call this new situation a condition on the original one, and the new probability the conditional probability given the new situation. Nijdam (talk) 13:32, 19 February 2009 (UTC)

Firstly, as you all have no doubt worked out, I am not an expert on probability, but your statement above seems to me to be pretty much what I thought. This brings me back to my original question: how do we decide which events, of those mentioned in the question, we take into account in calculating the probability? Martin Hogbin (talk) 19:45, 19 February 2009 (UTC)
For example, we might suppose that the host always says something like 'Do you want to pick another door?' when he knows that the player has chosen a car, but 'Do you want to choose another door?' when the player has chosen a goat, as a way of giving a hidden clue to the contestants. Martin Hogbin (talk) 23:40, 19 February 2009 (UTC)

Ok. If some event occurs, it usually limits the possible outcomes of the experiment. E.g. in throwing a die, someone may tell you that the result is an even number. This implies you only have to consider the possibilities 2, 4 and 6. Beforehand the probability of getting 6 was 1/6; now it is 1/3. "Now" means: under the condition that the outcome is even. The limiting event in the MHP is the choice of the door by the player and the opening of a door by the host. This event really limits the possible outcomes. And hence the conditional probability is a different function than the unconditional one, even for the car behind the picked door, although the numerical value is the same for both functions. Nijdam (talk) 20:28, 19 February 2009 (UTC)
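The die example can be made concrete in a few lines (an illustrative sketch only): conditioning on "even" shrinks the sample space from six outcomes to three, and the probability of a six is recomputed on the restricted space.

```python
from fractions import Fraction

outcomes = [1, 2, 3, 4, 5, 6]                 # fair die: full sample space
p_six = Fraction(outcomes.count(6), len(outcomes))

even = [x for x in outcomes if x % 2 == 0]    # conditioning event: {2, 4, 6}
p_six_given_even = Fraction(even.count(6), len(even))

print(p_six, p_six_given_even)                # 1/6 versus 1/3
```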

And to be complete: anything that doesn't limit the possible outcomes may be discarded. Normally this will be events that are not an essential part of the problem formulation. Nijdam (talk) 20:31, 19 February 2009 (UTC)[reply]

But what is the way to decide what is an essential part of the problem formulation? Martin Hogbin (talk) 23:40, 19 February 2009 (UTC)[reply]
Martin, have you read my reply above? What do you find unsatisfactory about it? It is a standard application of Bayes' theorem. In plain language, it says that the probability that the car is behind any given door depends on what the host does because, due to the standard game rules, Monty must select which door to open taking into account where the car is. On the other hand, the game rules do not mention anything about the word "pick" (or the name "Monty") as having any bearing on which door is selected for opening, and therefore the car's location cannot depend on them. I really don't know how to make it easier than this. glopk (talk) 07:18, 20 February 2009 (UTC)
The Parade statement does not make clear what the game rules are, but what it does say is, '...the host, ... opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" '
There is nothing in the statement of the question itself which says that opening a door to reveal a goat must be taken as a condition but saying the words "Do you want to pick door No. 2?" need not be considered a condition. My question is, what logical process do you use to determine what is a condition and what is not?
Martin, I'll try one last time. In the "standard interpretation" of the Parade statement, the opening of a door to reveal a goat (proposition D3 in the article) must be taken as a conditioning term for the location of the car (Ck), because the selection of which door to open is directly affected by the very location of the car: Monty must reveal a goat and not the car, and he must flip a fair coin to decide which door to open when two are available. Therefore we have no freedom left: the rules say that D3 is not independent of Ck as a function of the index k, and from this very fact, thanks to Bayes' theorem, it necessarily follows that Ck is not independent of D3. Any formulation of the "standard interpretation" that asserted the independence of Ck from D3 would be either self-contradictory or a violation of Bayes' theorem, and hence useless for reasoning toward a solution.
On the other hand, within the "standard interpretation", we are free to ignore the words "Do you want to pick door No. 2?", because this sentence is not affected by where the car is, save for the particular door number - which has already been covered above. The offering of a choice to switch only affects the player's gain/loss function, which is the mathematical formulation of what it means to "win" the game, not the player's judgment of where the car may be. Therefore we are free to ignore it while remaining consistent with the interpretation.
Now, you are free to allege that the "standard interpretation" is unsatisfactory, and may come up with any number of increasingly baroque variants (subject only to those minimal requirements of consistency that are needed to make the problem well-posed). You may also add as many irrelevant conditioning terms as you wish. By doing so you'll only run the risk of making your version more and more uninteresting, to the point that a certain gray-bearded monk wielding a razor may start taking an interest in you :-) glopk (talk) 05:23, 21 February 2009 (UTC)

Change of perspective by Morgan

Morgan et al say that there are two distinct probabilities that must be summed to get the unconditional probability, they say: P(Ws) = P(Ws | D3) P(D3) + P(Ws | D2) P(D2).

My assertion is that even when the host does not choose a goat door randomly, P(Ws | D3) and P(Ws | D2) must be equal. My reasoning is based on this statement: 'Probability, for a Bayesian, is a way to represent an individual's degree of belief in a statement, given the evidence'. This agrees with my intuitive understanding of the subject. The important question to ask ourselves is which individual we should consider. The Parade statement clearly refers to the viewpoint of a player, not that of the host or a third party observer. Now, even if the host always opens the leftmost door whenever possible, the player is extremely unlikely to know this; so even though the host's action is not random, the player does not know that. To the player the action still is random.

What Morgan et al do is formulate the problem from the point of view of a mysterious third party observer, who somehow knows of the host's tactics. Their answer may be correct from this point of view, but it does not answer the question actually asked. Martin Hogbin (talk) 23:25, 19 February 2009 (UTC)

To give an example, suppose somebody tells you that their friend has tossed a fair coin and asks you what the chances are that it is heads. You could give the fatuous answer that the probability is either 1 or 0 depending on what their friend actually got. This is indeed the correct answer from some perspectives, but not from the point of view of the questioner, for whom the answer is 0.5. Martin Hogbin (talk) 23:48, 19 February 2009 (UTC)

The probabilistic analysis needs to take into consideration everything from the problem statement (well, everything that's relevant to the probability). The analysis can then be used to answer the question. In the case of the Parade version, the question is "is it to your advantage to switch". The answer is yes (assuming we clarify that the host always offers the chance to switch and always reveals a goat, but say nothing about how the host picks between two goats). If the question were "what should the player think her chances are of winning by switching" we'd then have to consider whether information presented in the problem is known to the player (which simply adds another conditional problem to address). The answer to this question, for the Parade version (clarified for the player in the same way as above), is "my chances are something in the range of 0.5 to 1 depending on how the host picks between two goats". If the problem statement says the host has a preference but says the player doesn't know this preference, then we can ask two different questions: what is the actual chance, and what is the apparent chance.
I know you want the solution to be simple. The problem is, it's not. An unconditional version could be stated as an urn problem: three identical balls in an urn, one with a marking that you can trade for $1000; you withdraw one without looking at it, the host withdraws an unmarked one and shows it to you. What is your chance of having the marked one, or do you want to trade for the remaining one in the urn? I think this version has the simple solution you seek (because the balls are identical, neither you nor the host can tell the two unmarked ones apart). If you could find a reliable reference that discusses a variant like this, we could include a discussion of the simpler problem perhaps even before discussing the solution to the MH problem. But lacking a source, it's simply WP:OR to make up variants (note that the variants currently mentioned in the article are all referenced). My guess is you won't find such a reference because the math for the actual conditional MH problem is not that difficult. -- Rick Block (talk) 01:31, 20 February 2009 (UTC)
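For what it's worth, the urn restatement is easy to simulate (a hypothetical sketch of my own; the function name is made up). Because the balls are identical, the host's removal carries no door-identity information and the single number it produces is the whole answer.

```python
import random

def urn_trial(trade):
    # One marked ball and two plain balls in the urn.
    balls = ["marked", "plain", "plain"]
    random.shuffle(balls)
    mine = balls.pop()            # my blind draw
    balls.remove("plain")         # host removes an unmarked ball
    # Either keep my ball or trade for the one left in the urn.
    return (balls[0] if trade else mine) == "marked"

n = 100_000
wins = sum(urn_trial(trade=True) for _ in range(n))
print(wins / n)   # close to 2/3: trading wins whenever the blind draw was plain
```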
So what exactly do you mean by the probability that the player will win if she switches? I do not mean how you calculate it, but what does the word 'probability' mean? I am aware of two definitions:
'Probability, for a Bayesian, is a way to represent an individual's degree of belief in a statement, given the evidence', which is pretty close to your statement "what should the player think her chances are of winning by switching" .
The other definition that I am aware of is based on frequency: 'The probability of a random event denotes the relative frequency of occurrence of an experiment's outcome, when repeating the experiment'. This is the unconditional probability, which we agree, for the problem as stated above, is 2/3.
Both the above definitions, by the way, come from the WP article on the subject.
I do not understand what definition of probability you (or Morgan) are using. Martin Hogbin (talk) 09:31, 20 February 2009 (UTC)[reply]
I think we're all talking about the "Modern definition" at Probability theory#Discrete probability distributions. -- Rick Block (talk) 19:34, 20 February 2009 (UTC)[reply]
From what I can understand of the article, it would seem that probability is given no intuitive meaning; it is something that can be calculated according to a certain set of rules. That is how mathematicians like to work and that is fine, but the problem for us comes when you try to answer the question posed with such a probability. For example, using the frequency definition, you could say, 'On average you will win 2 times out of every 3 if you take the swap'. Using the modern definition you could say, 'Your probability of winning, if you take the swap, is 2/3', but what do you reply when asked, 'So does that mean that it is to my advantage to do so, and if so why?' To give any kind of meaningful reply, you need to suddenly revert to one of the 'classical' definitions. Martin Hogbin (talk) 22:04, 20 February 2009 (UTC)
I think that I am beginning to understand the "Modern definition" of probability as described at Probability theory#Discrete probability distributions and how it allows Morgan to come to the conclusion that they do. I would like to ask some questions in a new section below. 86.133.179.19 (talk) 12:21, 21 February 2009 (UTC)[reply]

Simple play

Maybe a simple example will help. I suggest the following play. I have written down the name of a randomly chosen British person. You may guess whether it is a man or a woman.

Step 1
What is your guess?

After you have answered, I say: the person comes from Yorkshire.

Step 2
You may change your choice. Will you change?

After you have answered, I say: the person is a member of the nursing staff in a hospital.

Step 3
You may change your choice. Will you change?

After you have answered, I say: the person's christian name is Billy.

Step 4
You may change your choice. Will you change?

See how with every step the possibilities are limited. Notice also that in every step a different probability notion is used. Yet the probabilities in step 1 and step 2 are both 1/2. That's why people say the probability has not changed. Numerically it has not changed, but by definition it is a different probability (notion). Nijdam (talk) 11:36, 20 February 2009 (UTC)

Are you replying to me Nijdam? Martin Hogbin (talk) 18:17, 20 February 2009 (UTC)[reply]

@Martin Hogbin: Yes, this example is especially meant for you, but others may profit from it as well, of course. Nijdam (talk) 18:42, 20 February 2009 (UTC)

Yes, I fully understand what conditional means. My question is: how does one decide, when formulating a question, which events are conditions? Intuitively we generally have some idea. For example, if in your example above I added that between steps 3 and 4 it started to rain, we would not naturally add that into our formulation as a condition. However, for almost any event, it is probably possible to contrive a circumstance in which a seemingly unimportant event is, in fact, an important condition of the problem. My example of the condition that Monty used the word 'pick' was exactly that: the exact wording Monty uses when he asks the player whether to change doors is seemingly unimportant, unless he were to say 'pick another door' when the player had originally chosen a car and 'choose another door' when they had chosen a goat. In that case it becomes a vitally important condition. Martin Hogbin (talk) 19:11, 20 February 2009 (UTC)
My understanding is that anything that gives new information that could be used to revise the probability of a particular outcome should be treated as a condition. How we know that is another matter. Martin Hogbin (talk) 19:19, 20 February 2009 (UTC)[reply]
One of my complaints about Morgan is not that they sum the two conditional probabilities of the host opening either unopened door, or even that they suggest that this is a better way to do things, but that they state that this is the only way to do things, even for the random-goat-door version where it is known not to be necessary. If you add every condition that could conceivably be necessary, you get a very complicated problem. Martin Hogbin (talk) 19:39, 20 February 2009 (UTC)

If we don't get too philosophical, any event that limits the outcomes forms a condition. That's what the simple play tries to show. I didn't introduce events which are not conditions, but it's very easy to think of some. As you said, suppose after you have been informed about Yorkshire and made your choice it starts to rain, and the host tells you so, and asks whether you want to reconsider your choice. Does it matter? No, because the people involved, i.e. the Yorkshire people, are still the ones to be considered. No limitations. If the host informed you that a terrible catastrophe had struck Yorkshire and a lot of people had died, this would change the population, and so form a condition. Is this in any way helpful? Nijdam (talk) 19:55, 20 February 2009 (UTC)

I am not looking for help; I am hoping to convince you that by insisting we must formulate the problem in a certain way, the Morgan paper only serves to obfuscate the central and notable issue, which is that, even for the vos Savant formulation treated unconditionally, most people get it wrong most of the time. This is the only reason there is a WP article on the Monty Hall problem; we do not have articles on the odds of winning other game shows. Martin Hogbin (talk) 20:08, 20 February 2009 (UTC)

More Information => Better Odds

You know how the 'left-most door' variant is used as one method to disqualify any unconditional solution as valid for the so-called conditional problem? I think the reasoning goes that since it has an overall probability of 2/3, but individual plays of the game have varying probabilities, this conditional problem cannot properly be solved using the unconditional proof. Rick Block, it would be much appreciated if you could leave a brief acknowledgment of whether this is a correct interpretation or not.

Sort of yes. The variant is used to show the conditional and unconditional problems are different. In any variant, including variants where it happens to produce the correct numerical result, an unconditional solution is addressing the unconditional problem, not the conditional problem. -- Rick Block (talk) 15:44, 20 February 2009 (UTC)[reply]

The following assumes that the contestant knows the method Monty uses. Rick Block, it would again be appreciated if you could leave a brief acknowledgment that this is either a correct interpretation or not.

It depends on how the problem is phrased, but in this case I'd say no. The problem statement doesn't say anything that would lead us to believe there's any difference between what the contestant knows and what we know. -- Rick Block (talk) 15:44, 20 February 2009 (UTC)[reply]

By my reading, with the 'left-most door' constraint added, there are times when the contestant knows exactly where the car is. Rick Block, one more time, please.

If the contestant knows what we know, yes. -- Rick Block (talk) 15:44, 20 February 2009 (UTC)[reply]

Here's my question. From nothing more than a logical standpoint, does it seem reasonable that, since Monty sometimes gives us additional information about the whereabouts of the goat, the contestant can improve his overall probability of winning the car from the previously unconditionally proven 2/3? Glkanter (talk) 12:23, 20 February 2009 (UTC)

Depending on the door the host opens the odds may go up, but they may go down as well. The conditional answer is 1 / (1 + p) where p is the host's preference (between 0 and 1) for the door he's opened. If p is 0 (the host really hates the door he's opened and opens it only when forced because the player picked a goat and the car is behind the other door, this is the "leftmost" variant) the player's chance of winning is 1 / (1 + 0) meaning if the player switches she wins. But if p is 1 (the host opens this door whenever given the chance, i.e. both if the player picks the car and if the player picks a goat and the car is behind the other door), the player's chance of winning is 1 / (1 + 1) which is only 1/2. If the unconditional odds of winning are 2/3 and there are cases where the conditional odds improve, there must be cases where the conditional odds go down as well.
Imagine the stage is set up so the player always stands on the left (on the Door 1 side) and Monty has to stand next to her. If she picks door 1 Monty has to walk across the stage to open one of the doors. Sometimes he's feeling lazy and opens door 2 (because he gets there first) unless the car is behind it. This makes his preference for door 3 (p) equal to 0 and his preference for door 2 equal to 1. If he opens door 2 the player has a 50% chance of winning by switching. If he opens door 3 the player has a 100% chance of winning. Other times, he wants to walk as much as possible so he always skips right by door 2 (unless the car is behind door 3 and he has to open door 2). It doesn't matter whether the contestant knows this or not - in a logical sense the probability of winning by switching is always 1 / (1 + p).
The K&R version takes this out of Monty's hands, and specifies his preference is 1/2. This means on the show Monty would have to flip a coin (or something) to decide which door to open if the player initially picks the car (and, to avoid this being a dead giveaway that the player has picked the car he'd have to do this in secret).
I assume you haven't done the card experiment yet. Please try it (always keep the ace, but discard the two of hearts if you can - keep track of wins/losses by discard). It shouldn't take more than 10 or 15 minutes to get enough samples for a meaningful result. Across all samples (unconditionally) switching will win about 2/3 of the time. But conditionally, switching will win about 1/2 (if you've discarded the two of hearts, which will happen about 2/3 of the time) or 100% (if you've discarded the two of diamonds, which will happen about 1/3 of the time).
In a probabilistic sense, more information simply makes things different. Might be better, might be worse, might be the same. -- Rick Block (talk) 15:44, 20 February 2009 (UTC)[reply]
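The card experiment can also be run in software. This sketch is my own (the names are made up): it plays the p = 1 case, where the host discards the two of hearts whenever he can, and tallies wins-by-switching according to which card was discarded.

```python
import random

# Cards stand for doors: the ace is the car, the twos are goats.
# The host keeps the ace and discards the two of hearts whenever
# possible (preference p = 1 for that card).
def trial():
    cards = ["ace", "2H", "2D"]
    random.shuffle(cards)
    player = cards.pop()                        # player's blind pick
    discard = "2H" if "2H" in cards else "2D"   # host's discard rule
    cards.remove(discard)
    return discard, cards[0] == "ace"           # does switching win?

tally = {"2H": [0, 0], "2D": [0, 0]}            # discard -> [wins, trials]
random.seed(1)
for _ in range(100_000):
    d, win = trial()
    tally[d][0] += win
    tally[d][1] += 1

for d, (w, n) in tally.items():
    print(d, w / n)   # 2H: about 1/2; 2D: exactly 1 (the host was forced)
```

The two conditional results, about 1/2 with weight 2/3 and exactly 1 with weight 1/3, combine to the unconditional 2/3.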
Thanks, Rick, I appreciate your input.
I'm unclear on the 2nd response, and therefore the 3rd response. Can you help me?
I'd still like your opinion, and anybody else's as well. Given that, on occasion, Monty tells us (and presumably the contestant) where the car is, does it seem reasonable that the likelihood of picking the car would increase? I'd like to defer the actual proofs for just a little bit, please. Thanks. Glkanter (talk) 18:53, 20 February 2009 (UTC)
Regarding the 2nd response, does this relate to the same point Martin is raising above at #Change of perspective by Morgan? What I'm saying is the analysis uses what is given in the problem statement. We're not limited to what individuals described in the problem statement may or may not know. The problem may ask "from the perspective of the host ..." - but it doesn't. Is this what you're asking about?
I presume that the MHP is solved from the contestant's point of view. So, what he is told becomes constraints/premises. So, does he know what Monty's up to? Glkanter (talk) 21:01, 20 February 2009 (UTC)[reply]
We know the unconditional probability is 2/3. If Monty tells us in some case where the car is, the likelihood of picking the car goes up in this case - but because the unconditional probability is 2/3 there must be a matching case where the likelihood of picking the car goes down. This is like knowing we have some numbers whose average is 2/3. If one of those numbers is higher, another one better be lower. -- Rick Block (talk) 20:10, 20 February 2009 (UTC)[reply]
If I read your preceding paragraph correctly, then you've just said that 'overall' the unconditional solution indeed gives you the conditional solution. But I know you don't mean that.
I don't think your 'equilibrium' idea is right. I think with new, useful information, like 'the car is behind that door', the overall probability goes up. Glkanter (talk) 21:01, 20 February 2009 (UTC)[reply]
Surely we agree any problem is solved from the point of view of someone reading the problem. I think we're probably saying the same thing, which is that what is important is what is given in the problem statement.
What I said, and what I've been saying all along, is that the unconditional approach gives you the unconditional answer. The "overall conditional solution" is a very weird phrase, but looking past the specific words probably means exactly the same thing as the unconditional solution. Just like a set of 5 different numbers has 5 things each with their own individual value and an overall average, a set of conditions each have their own conditional probability and there's an "overall" unconditional probability. In the MHP there are two conditions (like two numbers contributing to an average), a "host opened door 2" condition and a "host opened door 3" condition. The unconditional probability combines these (sort of like an average). The probabilities are 1 / (1 + p) and 1 / (1 + (1-p)). To combine them you don't just add and divide by 2 (like you would to get the average of two numbers), but you can combine them to get the unconditional probability. If you do combine them, you get 2/3.
I think we're at the point where we're not talking about MHP, but the basics of probability theory. -- Rick Block (talk) 00:48, 21 February 2009 (UTC)
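A numeric check of the "combining conditions" point above (my own sketch, not part of the original thread; it assumes the standard setup with the player picking door 1, and takes p to be the host's chance of opening door 3 when the car is behind door 1 and he has a free choice):

```python
# Combining the two conditional probabilities always recovers the
# unconditional 2/3, for ANY host preference p.
def check(p):
    # Weight of each condition (how often the host opens that door)
    w_open3 = (1/3) * p + (1/3)        # car@1 and host picks 3, or car@2 (forced)
    w_open2 = (1/3) * (1 - p) + (1/3)  # car@1 and host picks 2, or car@3 (forced)
    # Conditional chance that switching wins, given which door was opened
    cond3 = 1 / (1 + p)        # = P(car@2) / w_open3
    cond2 = 1 / (1 + (1 - p))  # = P(car@3) / w_open2
    # Combine: a weighted "average" of the two conditions
    return w_open3 * cond3 + w_open2 * cond2

for p in (0.0, 0.25, 0.5, 1.0):
    print(p, check(p))  # the combined value is 2/3 in every case
```

The individual conditions move with p, but their weighted combination does not; this is exactly the "set of numbers with a fixed average" analogy.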


I'll try harder to stay on the MHP. You agree that when Monty shows us where the car is NOT, that the chance of winning increases from 1/2 to 2/3, so why doesn't it follow that when Monty additionally shows us where the car IS ('left-most door' constraint) 1/3 of the time, the chance of winning will increase from 2/3 to roughly 78% [(1/3*1)+(2/3*2/3)]? Since it's conditional, there's got to be a plus sign in the equation, right? Glkanter (talk) 11:28, 21 February 2009 (UTC)

Let's back all the way up. It sounds like you are very confused about conditional probability. Is this a term you're familiar with and do you think you know what it means? If so, please explain it briefly here. -- Rick Block (talk) 16:01, 21 February 2009 (UTC)
Take out the first half of my last sentence, then. I'm just trying to have you explain to me, in English, why the overall odds of selecting a car do not go up when the contestant is given additional useful information. Glkanter (talk) 16:50, 21 February 2009 (UTC)
I assume by "overall odds" you mean what we'd expect to be the proportion of players who win across many instances of the game (right?). So, if we do this 6000 times and all players switch, by saying the overall odds of winning by switching are 2/3 we're saying we'd expect (about) 4000 to win. [agree or disagree?]
I think I may have said this before, but the leftmost variant DOES NOT CHANGE these overall odds. If we use the rules in this variant, and do the game 6000 times and all players switch, we'd still expect (about) 4000 to win. This should sound familiar, but the host always shows a goat, players still have a 2/3 chance of initially picking a goat, and if you switch you get the other thing, so 2/3 of the players who switch will win.
What this variant does (actually, any variant does this) is split the 6000 players into two groups depending on what door the host opens. In this variant, the door 3 group is given the additional information that a goat is behind door 3 and that a goat is NOT behind door 2 (because the host always opens door 2 if there's a goat there). But this information is specific to this group (we'll talk about the door 2 group in a moment). This group, the door 3 group, gets additional useful information - i.e. that the car must be behind door 2 so switching wins (100%, not 78%). Per the paragraph above the overall odds haven't changed, just the odds for this group. Does this make sense?
Before talking about the door 2 group I have a couple of questions for you. The chance there's a goat behind door 2 is 2/3. [agree or disagree]
Thinking only about the cases where there's a goat behind door 2, what are the chances that the car is behind door 1? Is this 50/50? [agree or disagree]
We need to agree about the above before we can talk about the door 2 group. -- Rick Block (talk) 18:10, 21 February 2009 (UTC)


Yes, the door 3 group is 100%. And the door 2 group is 2/3. This is based on the unconditional MHP. Monty's actions can't REDUCE the contestant's 2/3 probability of winning by switching. You already know my feelings about time travel.

And they occur 1/3 vs 2/3 of the time. So, 1*1/3 + 2/3*2/3 = 78%. No, I do not expect 4,000 to win. I 'expect' 4,667. (Is 4,000 from Morgan, or OR?). That's my point. More info = greater chances. When Monty opens a random door with a goat, the odds go from 1/2 to 2/3. The overall wins didn't stay at 3,000. This is the same thing. More info = more wins.

Here's my original question: From nothing more than a logical standpoint, does it seem reasonable that since Monty sometimes gives us additional information about the whereabouts of the goat, that the contestant can improve his overall probability of winning the car from the previously unconditionally proven 2/3? Glkanter (talk) 12:08, 22 February 2009 (UTC)

From a logical standpoint, let's talk for a moment about the overall odds of getting bitten by a shark in a year. This has some number, say .001% (I'm making this number up). If we divide people into those who NEVER go into the ocean and those who do, what happens? The ones who go into the ocean have a slightly higher chance and the ones who don't have a 0% chance (right?). This is the essence of conditional probability. We have two groups making one larger group. The odds for the larger group (the unconditional probability) don't have to be the same as the odds for each of the smaller groups (which each have their own conditional probability). If the odds for one of the smaller groups is higher, the odds for one of the other smaller groups must be lower.
In the "leftmost variant" I think you agree the chance a goat is behind door 2 is 2/3, and hence the chance of a player being in the door 2 group is 2/3, and the door 3 group 1/3.
We agree the players in the door 3 group have a 100% chance of winning. This is the conditional probability of winning for players who see the host open door 3 - i.e. the probability of winning considering only players in this group. It is not the same as 2/3.
Similarly the door 2 group has a conditional chance of winning. You agree if we only think about the players in the door 3 group they have a conditional chance of winning that is not 2/3. Why are you so reluctant to believe the door 2 group might have a different conditional chance of winning as well? We're not going back in time and changing the probability for this group, we're just splitting the whole group (by the one act of opening a door) into two subgroups in a way where the chances for one group are higher - but this means the chances for the other group have to be lower.
You didn't answer my last question. Thinking only about the 2/3 times there's a goat behind door 2 (before the player picks, before the host opens a door), what is the chance the car is behind door 1 vs door 3? -- Rick Block (talk) 18:11, 22 February 2009 (UTC)
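The split into door-2 and door-3 groups described above can be checked by simulation. A sketch (mine, not from the thread), under the stated rules of the leftmost variant: the player picks door 1, and the host opens door 2 whenever it hides a goat, otherwise door 3:

```python
# Simulation of the "leftmost variant": player picks door 1, host opens
# door 2 whenever there's a goat behind it, else door 3.
import random

def trial(rng):
    car = rng.randint(1, 3)        # car placed uniformly; player picks door 1
    opened = 2 if car != 2 else 3  # host opens the leftmost goat door
    switch_wins = (car != 1)       # switching wins iff the first pick was a goat
    return opened, switch_wins

rng = random.Random(42)
results = {2: [0, 0], 3: [0, 0]}   # opened door -> [group size, switch wins]
for _ in range(60000):
    opened, win = trial(rng)
    results[opened][0] += 1
    results[opened][1] += win

for door in (2, 3):
    n, w = results[door]
    print(f"host opened {door}: {n} players, switching won {w / n:.2f} of the time")
```

The door-3 group (about 1/3 of players) wins by switching 100% of the time, the door-2 group (about 2/3 of players) wins about 50% of the time, and the overall switching win rate stays at about 2/3, not 78%.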
It's my thread. You employ a very effective technique for forestalling improvements to the article. Rather than engage in discussion, which I'm trying to do in a civil manner, you go off on long-winded tangents. You're like Jello. It's impossible to go point by point on any issues, large or small.
Here's my original question: From nothing more than a logical standpoint, does it seem reasonable that since Monty sometimes gives us additional information about the whereabouts of the goat, that the contestant can improve his overall probability of winning the car from the previously unconditionally proven 2/3? Glkanter (talk)

== Decision Tree ==

If you draw a decision tree for the problem, you will see that the final probability is actually 0.5. This is caused by the fact that the rules state that there MUST BE a car behind one of the final two doors.

We can assume that there are two main branches to the tree, one in which the correct door is initially chosen, and one in which an incorrect door is chosen (it doesn't matter which, just that it's not correct).

1. In the case that the player has chosen the winning door, only one (1) choice allows the selection of this door. Then the host can choose either of the two remaining doors, so he has 2 choices (or N-1, for N initial doors). Finally the player again has two (2) choices, i.e. to switch or stay. This gives a final outcome count of 1X2X2 = 4. Of these four final outcomes, two are switches and two are stays. Two are winners and two are losers. In this case, because the initial choice was correct, staying wins. So, assuming he picked a winner right from the start, he has a 50/50 chance.

2. In the second scenario, where the player has picked a loser, there are N-1 possible choices he could have made. For the current example that means he had a choice of two (2) doors. Here is the clincher: Since the player picked a loser, the Host MUST pick the winning door (according to the rules). This means that for each of the player's two possible choices above, the host can only make one (1) choice. This cuts down this side of the decision tree to the same size as the side where the initial choice was a winner. Finally the player once again has two (2) choices to make: switch or stay. So here we get 2X1X2=4 once again. Also, once again, there are two out of four results where the player switched and two where the player stayed. Since the initial decision here was wrong, it means that switching wins. So on this side of the tree, the player also has a 50/50 chance.

So on either side of the tree, there are four results of which two won. On the left staying won and on the right switching won.

This gives a final result of staying winning 2/4 times and switching winning 2/4 times. Therefore, regardless of the player's choices, there is a 50/50 chance of winning.

Draw out the tree for yourself and see... —Preceding unsigned comment added by 66.8.57.2 (talk) 10:19, 20 February 2009 (UTC)

Two problems with this: Firstly, you seem to have assumed that whenever there are two branches of the tree, they must have equal probability, which is incorrect in the case of the player's initial door selection - that has a 2/3 or (N-1)/N chance of being wrong. Secondly, you haven't mentioned a reliable source that publishes this line of reasoning, which means that we can't really include it in the article, because of our verifiability and no original research policies. SHEFFIELDSTEELTALK 14:01, 20 February 2009 (UTC)
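To make the weighting point concrete, here is a sketch (my own, not from the discussion) that enumerates the same tree but carries each branch's probability along instead of counting leaves as equally likely:

```python
# Enumerate the decision tree with branch probabilities attached,
# rather than treating every leaf as equally likely.
from fractions import Fraction

def win_prob(strategy):
    """P(win) for 'stay' or 'switch'; the player always picks door 1."""
    total = Fraction(0)
    for car in (1, 2, 3):                        # each placement has weight 1/3
        p_car = Fraction(1, 3)
        goats = [d for d in (2, 3) if d != car]  # doors the host may open
        for opened in goats:
            p_open = p_car / len(goats)          # the host's choice splits the branch
            stay_wins = (car == 1)
            wins = stay_wins if strategy == "stay" else not stay_wins
            total += p_open * wins
    return total

print(win_prob("stay"))    # 1/3
print(win_prob("switch"))  # 2/3
```

The leaf count on each side of the tree is indeed equal, but the branches do not carry equal probability, which is why the leaf-counting argument above arrives at 50/50.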

I apologise, I left out one very important fact: The choice that the host is making is not which door to open, it's which door to leave closed. This is made clear in the 1000000 door version in the main article. This is why he only has one (1) choice when the player has picked a losing door (regardless of the initial number of doors). The rules state that one of the final two closed doors MUST have a car behind it. —Preceding unsigned comment added by 66.8.57.2 (talk) 10:31, 20 February 2009 (UTC)

Please look at the decision tree (referenced to a reliable source) in the Solution section of the article. -- Rick Block (talk) 14:24, 20 February 2009 (UTC)

== Modern definition of probability ==

I understand the Bayesian and frequency definitions of probability, which I claim would not result in the same conclusions for the MHP as the modern definition, see above. I am new to the modern definition so perhaps someone who knows could answer my questions below:

In the modern definition of probability we refer to some, possibly notional, point in time. Everything that happens before that time has a probability of 1 (or 0 if it has not happened); these are the givens. Rather than use the MHP to help me understand how things work, perhaps I could ask some much simpler questions.

I take a fair coin from my pocket and place it face up on the table in front of me, keeping it covered with my hand. What is the probability that it is a head? Are these answers correct?

Bayesian, from my perspective - 1/2
Frequency definition - 1/2
Modern definition - either 0 or 1, we cannot say which.

Martin Hogbin (talk) 12:33, 21 February 2009 (UTC)

As exact wording is, no doubt, important, what about these questions:

I take a fair coin from my pocket and place it face up on the table in front of me, keeping it covered with my hand. What is the probability that it will prove to be a head?

I take a fair coin from my pocket and place it face up on the table in front of me, keeping it covered with my hand. What is the probability that I will see a head when I uncover it? Martin Hogbin (talk) 12:38, 21 February 2009 (UTC)

Did you have a fair coin in your pocket, and in what way do you place it face up on the table?Nijdam (talk) 13:35, 21 February 2009 (UTC)[reply]

Yes, I did have a fair coin, and I simply took it out of my pocket without looking at it, in the normal way that one does. You may take the method as random. Martin Hogbin (talk) 14:05, 21 February 2009 (UTC)[reply]

If you didn't look at it, how do you know it is a fair coin? Nijdam (talk) 15:19, 21 February 2009 (UTC)[reply]
It doesn't have to be fair: it has to have two visually identifiable sides that are tactilely indistinguishable. The indistinguishability could be established in separate tests. Brews ohare (talk) 16:10, 21 February 2009 (UTC)
Maybe it doesn't even have to be a coin, as long as it has two tactilely indistinguishable sides. And why should they be indistinguishable? Nijdam (talk) 20:04, 21 February 2009 (UTC)

Martin - you're seriously barking up the wrong tree here (you may even be in the wrong forest). The technical differences between the definitions of probability have no impact on this problem. We're all talking about probability in the sense of a number between 0 and 1 reflecting the chance of some expected outcome which should adhere to the law of large numbers. Asking "what do you mean by probability" here is sort of like asking "what do you mean by number" in the context of a counting problem (Johnny has 3 apples, ...). There is absolutely no need for any confusion here. -- Rick Block (talk) 18:42, 21 February 2009 (UTC)[reply]

@Rick: don't bother anymore. Nijdam (talk) 20:07, 21 February 2009 (UTC)
I'm very, very, very reluctant to suspend WP:AGF. -- Rick Block (talk) 20:30, 21 February 2009 (UTC)[reply]
Thank you Rick. I can assure you that the questions are asked in good faith. Could you possibly give me the answers, please? I am not sure why some people are getting so hot under the collar here; these are very important points. There is no point in saying the probability is 2/3 if we cannot ascribe any meaning to that statement. According to the frequency and Bayesian notions of probability we can give an understandable meaning to that statement but not, it would seem, for the modern definition. If you do not want to answer, for whatever reason, perhaps you could tell me where I can get answers to my questions. Martin Hogbin (talk) 22:32, 21 February 2009 (UTC)
Martin - I think the point Nijdam is making (with his arguably flippant responses) is that the line of questioning you're pursuing is tiresomely irrelevant with respect to the MHP. Yes, the nature of probability can be debated (see Probability interpretations), however Bayesian, modern, and frequentist interpretations are all essentially specializations of classical probability theory. All would say the probability of winning by switching in the MHP is 1 / (1 + p) where p is the host's preference for the door he's opened (and in the K&R version, this is 1/2, so in this version the probability is 2/3). We don't need to ask which technical meaning of probability we're talking about here, because for this problem, the distinctions between these formalisms don't matter. -- Rick Block (talk) 23:43, 21 February 2009 (UTC)
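The 1 / (1 + p) figure can also be spot-checked by simulation rather than debated definitionally. A Monte Carlo sketch (mine, under the standard assumptions: the player picks door 1, the host always reveals a goat, and p is his chance of opening door 3 when the car is behind door 1 and he has a free choice):

```python
# Estimate P(switching wins | host opened door 3) and compare with 1/(1+p).
import random

def conditional_win_rate(p, trials=200000, seed=1):
    rng = random.Random(seed)
    shown3 = won = 0
    for _ in range(trials):
        car = rng.randint(1, 3)
        if car == 1:
            opened = 3 if rng.random() < p else 2  # free choice, biased by p
        else:
            opened = 2 if car == 3 else 3          # forced: open the goat door
        if opened == 3:                            # condition on "host opened 3"
            shown3 += 1
            won += (car == 2)                      # switching to door 2 wins
    return won / shown3

for p in (0.0, 0.5, 1.0):
    print(p, conditional_win_rate(p), 1 / (1 + p))  # estimate vs formula
```

Whichever interpretation of probability one prefers, the long-run frequencies agree with the formula, which is the sense in which the distinctions don't matter here.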
Rick, thanks for your patience. I also do appreciate your position; I am a physicist and have spent much time on the physics newsgroups and on WP arguing with people who do not appreciate the many years of thinking by the world's best physicists on subjects like relativity, and who believe that they can prove it wrong with a simple argument that no one else has ever thought of before.
So, I would like to continue the discussion, which might get a bit philosophical and where I may try and box you into a corner to try and clarify a particular point. As I say to Nijdam below, I get the feeling, which I am groping to put into words properly, that something is wrong, or at least arbitrary, in the Morgan analysis of the problem. I would not mind this if they were not so adamant that theirs is the only way to tackle the problem. Martin Hogbin (talk) 10:41, 22 February 2009 (UTC)

@Martin: I think Rick is hitting the nail wherever it should be hit, and no, I'm mainly getting cold, not only under my collar, and I would be willing to talk about notions of probability, BUT not mixed with this discussion. Nijdam (talk) 09:42, 22 February 2009 (UTC)

@Nijdam, I am not demanding that any particular person should reply. Rick has been kind enough to reply to my genuine, but possibly misguided, questions. You are welcome to reply if you wish, or not if you do not. It would seem that both you and Rick are more knowledgeable about the theory of probability than I am, but I still think that there are problems with Morgan's approach to the MHP; however, I can be persuaded by logical argument. You are welcome to contribute if you wish. I think that this page is an appropriate place for this discussion; it is certainly better than using the article itself as a mouthpiece for opinion. Martin Hogbin (talk) 10:41, 22 February 2009 (UTC)

@Martin, sorry, I didn't mean to offend you. But I think the problem originally was not meant to take into account all kinds of more or less philosophical aspects. And it isn't much help for the planned revision. Yet it is of course of interest what kinds of alternatives one can think of. I've seen a paper in which at least some of your questions are treated; if I find it again I'll let you know. Nijdam (talk) 11:41, 22 February 2009 (UTC)

The paper is: Marc C. Steinbach, Autos, Ziegen und Streithähne; I found it on the internet. Nijdam (talk) 15:52, 22 February 2009 (UTC)