# statistics of a card game

#### drjohsmith

Joined Dec 13, 2021
812
Been watching the kids playing a game of cards, and got to thinking about the statistics of it.

Easy game: standard card deck, 52 cards, ace low.
5 of them playing, each takes one card,
lowest card wins.
So if one has, say, a 4, what's the odds / probability of that being the lowest?

I got to thinking: if I had a king, then it's 100 percent that it's not the lowest; if I had an ace, it's 100 percent I have the lowest (even if it's tied with others).

So there must be a table / equation, given the number of players and what card I have, giving the resultant probability that I have the lowest.

I was thinking there are 4 others. If I have the 4, there is a 3*4 / 51 chance of each of them having a lower card,

but how to "combine" that probability as there are 4 others

Then got thinking: there are only 4 cards of each number in a pack, so if say 6 are playing, it's also guaranteed that the others do not all have the same card.

Anyone out there want to try to explain to my simple head how to approach this, please?
Sorry, I never got on with statistics at school all those decades ago,
but still interested..

#### Ian0

Joined Aug 7, 2020
8,939
It's the probability that all the others are higher.
So the probability that the 2nd player is higher: there are four aces, four twos, four threes and three fours* that count as lower. That's 15 out of 51, so a probability of 36/51 that the 2nd player is higher.
35/50 that the 3rd player is higher
34/49 that the 4th player is higher
33/48 that the 5th player is higher

Combine all those:
p=(33*34*35*36)/(48*49*50*51) = 0.2357

*What do you want to do about "equally low"?
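
That chained product is easy to sanity-check numerically. A minimal Python sketch (assuming the same convention as the footnote: tied fours count against you, so 36 of the other 51 cards are strictly higher):

```python
from fractions import Fraction

# Holding a 4 among 5 players: 36 of the other 51 cards are strictly higher.
# Each successive player must draw higher, with the pool shrinking each time.
p = Fraction(1)
higher, remaining = 36, 51
for _ in range(4):                      # the four other players
    p *= Fraction(higher, remaining)
    higher -= 1
    remaining -= 1

print(p, float(p))                      # 33/140 ≈ 0.2357
```

Using exact fractions rather than floats keeps the intermediate products free of rounding error.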

#### Jerry-Hat-Trick

Joined Aug 31, 2022
445
It's a probability problem, not statistics. I could probably work it out given time; the key check is that the sum of all probabilities is one. These days, I'd have a go at writing a macro in Excel VBA to play a large number of times and populate a table with the results.
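
For anyone without Excel handy, the same simulation idea can be sketched in Python instead of VBA. A rough Monte Carlo estimate, assuming 5 players, holding the 4, and ties not counting as a win:

```python
import random

# Monte Carlo sketch: deal the other players' cards many times and count
# how often every other card is strictly higher than mine (ties count
# against me, matching the footnote convention in the thread).
def estimate_sole_lowest(my_card=4, players=5, trials=200_000, seed=1):
    rng = random.Random(seed)
    # Ranks 1 (ace) to 13 (king), four of each; my own card leaves the pack.
    deck = [rank for rank in range(1, 14) for _ in range(4)]
    deck.remove(my_card)
    wins = 0
    for _ in range(trials):
        others = rng.sample(deck, players - 1)
        if all(card > my_card for card in others):
            wins += 1
    return wins / trials

print(estimate_sole_lowest())   # lands close to the exact 0.2357
```

With 200,000 trials the estimate should sit within a few thousandths of the exact answer worked out earlier in the thread.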

#### drjohsmith

Joined Dec 13, 2021
812
It's the probability that all the others are higher.
So the probability that the 2nd player is higher: there are four aces, four twos, four threes and three fours* that count as lower. That's 15 out of 51, so a probability of 36/51 that the 2nd player is higher.
35/50 that the 3rd player is higher
34/49 that the 4th player is higher
33/48 that the 5th player is higher

Combine all those:
p=(33*34*35*36)/(48*49*50*51) = 0.2357

*What do you want to do about "equally low"?
Thank you.
Equal low, the kids had not covered.
But they are kids...

#### BobTPH

Joined Jun 5, 2013
8,076

#### Ian0

Joined Aug 7, 2020
8,939
I was thinking more of how to calculate the probability of several things happening, by calculating the probability of none of them happening.

#### WBahn

Joined Mar 31, 2012
29,489
I was thinking more of how to calculate the probability of several things happening, by calculating the probability of none of them happening.
That's a pretty standard approach, but it has nothing to do with the Birthday Paradox, it just happens to be an approach that lends itself to that problem.

There are two basic approaches -- one is to sum the probabilities of things happening several different ways and the other is to take the product of things that have to all happen together. What usually drives the choice of one over the other is whether there exist multiple ways for some things to happen, which makes summing them up more difficult and finding the probability of none of them happening more attractive.

#### Ian0

Joined Aug 7, 2020
8,939
That's a pretty standard approach, but it has nothing to do with the Birthday Paradox, it just happens to be an approach that lends itself to that problem.
I disagree - the birthday problem is about calculating the probability that at least two people share a birthday, and is worked out by calculating the probability that everyone has different birthdays.

#### WBahn

Joined Mar 31, 2012
29,489
I disagree - the birthday problem is about calculating the probability that at least two people share a birthday, and is worked out by calculating the probability that everyone has different birthdays.
That's one way to work it out. It can be worked out directly, too, but (like many problems), calculating the probability for the opposite result is much easier.

#### Ian0

Joined Aug 7, 2020
8,939
thank you
We are on topic. It demonstrates a similar principle.

#### drjohsmith

Joined Dec 13, 2021
812
We are on topic. It demonstrates a similar principle.
My apologies, to me that link seemed very different, but as I said at the start, my probability theory is terrible.
Could you expand on how this can be used for the problem I asked, then maybe I'll understand more.
Sorry, your "this is similar to" type quote and a link did not explain.
Thank you

#### drjohsmith

Joined Dec 13, 2021
812
It's the probability that all the others are higher.
So the probability that the 2nd player is higher: there are four aces, four twos, four threes and three fours* that count as lower. That's 15 out of 51, so a probability of 36/51 that the 2nd player is higher.
35/50 that the 3rd player is higher
34/49 that the 4th player is higher
33/48 that the 5th player is higher

Combine all those:
p=(33*34*35*36)/(48*49*50*51) = 0.2357

*What do you want to do about "equally low"?
Thank you
What happens if there were say 9 players, and I had a queen? Then it's 100 percent that there is at least one card lower.
The "at least one card lower" means that the "equally low" concern you mention does not exist, as any lower card is all that matters.

I should be able to make a table, methinks, of number of players and card I have.

#### Ian0

Joined Aug 7, 2020
8,939
My apologies, to me that link seemed very different, but as I said at the start, my probability theory is terrible.
Could you expand on how this can be used for the problem I asked, then maybe I'll understand more.
Sorry, your "this is similar to" type quote and a link did not explain.
Thank you
The principle being that to work out the possibility of various things happening (i.e. two people in a crowd sharing the same birthday or one or more people having a card lower than 4), it is often easier to work out the possibility that none of them happens (i.e. that everyone in the crowd has a different birthday, or everyone has a card higher than 4) because that is in many cases just the probability of x not happening raised to the power of the number of people. It gets a little more complicated with the cards as cards are removed from the pack whenever someone is dealt a card.
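
That complement trick is exactly how the birthday problem is usually computed. A small sketch (assuming 365 equally likely birthdays, no leap years):

```python
# Complement principle on the birthday problem: instead of computing
# "at least two people share a birthday" directly, compute "all n birthdays
# are different" and subtract from 1.
def p_shared_birthday(n):
    p_all_different = 1.0
    for i in range(n):
        # The (i+1)-th person must avoid the i birthdays already taken.
        p_all_different *= (365 - i) / 365
    return 1 - p_all_different

print(round(p_shared_birthday(23), 4))  # 0.5073: just over 50% with 23 people
```

Note the factors shrink (365, 364, 363, ...) just as the card pool shrinks when cards are dealt, which is the "little more complicated" part mentioned above.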

#### Ian0

Joined Aug 7, 2020
8,939
Thank you
What happens if there were say 9 players, and I had a queen? Then it's 100 percent that there is at least one card lower.
The "at least one card lower" means that the "equally low" concern you mention does not exist, as any lower card is all that matters.

I should be able to make a table, methinks, of number of players and card I have.
For 5 players, it works out at (36!/32!)/(51!/47!) = (36!47!)/(51!32!)
Algebraically:
If the first card has a value of c, and the pack size is d, and the number of players is n
p=((d-4c)!/(d-4c-(n-1))!) / ((d-1)!/(d-1-(n-1))!)
p=((d-4c)!(d-n)!)/((d-1)!(d-4c-n+1)!)

watch out for numeric overflow, because 51!32! is quite a big number!
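
Python sidesteps the overflow worry because its integers are arbitrary precision, and `math.perm(n, k)` computes n!/(n-k)! directly. A sketch of the formula above (assuming ace = 1 through king = 13, d = 52, and the tie convention from the earlier footnote):

```python
from fractions import Fraction
from math import perm  # perm(n, k) = n! / (n - k)!, Python 3.8+

# Closed form: holding card value c (ace=1) in a d-card pack with n players,
# the chance that every other player draws strictly higher.
# Python ints are arbitrary precision, so 51! and friends cannot overflow.
def p_sole_lowest(c, d=52, n=5):
    return Fraction(perm(d - 4 * c, n - 1), perm(d - 1, n - 1))

# The table the OP wanted: card held vs. probability of being sole lowest.
for c in range(1, 14):
    print(f"{c:2d}  {float(p_sole_lowest(c)):.4f}")
```

For c = 4 this reproduces the 0.2357 above, and a king (c = 13) gives 0, since `math.perm` returns 0 when k > n.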

#### WBahn

Joined Mar 31, 2012
29,489
My apologies , to me that link seemed very different, but as I said at the start, my probability theory is terrible
Could you expand on how this cN be used fir the problem I asked, then maybe I'll understand more .
Sorry your, this is similar to type quote and a link did not exp,ain,
Thank you
There are two basic kinds of probability calculations -- the probability that two things both happen, and the probability that one or the other of them happens. There are many variations on these two themes, and certain assumptions that often come into play, often unstated. One of these common assumptions is that events are independent, meaning that the result of one event has no impact on the probability of another event happening. That would be typified by flipping a coin or rolling a die -- the results of prior trials have no impact on the likelihood of future trials.

In the case of drawing cards, this is not the case. If the first person draws a king, then the likelihood of the next person drawing a king goes down. It doesn't matter whether anyone knows whether a king was drawn or not. The only thing that having knowledge of the outcome (e.g., being able to look at the card that you drew) gives you is more information upon which to base your estimate of the probabilities of various outcomes for the other cards.

In your game (and I'm going to be a bit more explicit than your kids were on the rules -- I want to know the probability that my card is strictly lower than any other card that was drawn), let's say that I see that I drew a five. I want to know the probability that no other hand has a card that is five or lower. A situation in which this rule might make sense is if we are drawing to decide who gets eliminated from the game. We don't want ties, so if two or more people end up tied for the lowest card, we simply draw again and keep doing that until the lowest card is held by exactly one person.

Let's first approach this using the complementary outcome -- i.e., finding out the probability that NO other hand has a card that is five or less. In doing so, I have split the problem space into two parts and every possible outcome fits exactly into one of those two parts -- either a round in which no one else has a card equal to or less than a five or a round in which at least one other person has a card that is equal to or less than a five. Since all outcomes are in exactly one of these camps, if I can figure out the probability for one of them, the probability for the other is simply 100% minus the probability of the first.

The next observation is that the probability that multiple things all happen (i.e., Event 1 AND Event 2 AND Event 3 AND... ) is simply the product of the probabilities of each of them happening in turn.

In this example, if I have a five, the next person will get 1 of 51 cards in which 19 of them are less than or equal to five and 32 of them are greater than five. The odds that the next person will get a card that is greater than five, GIVEN that I got a five, is therefore 32/51. If the person gets a five or less, we are done (as far as I am concerned, because I now can't be eliminated on this round). But if they hit that 32 in 51 chance and get a card that is greater than five, then the third person also has to get a card greater than five for me to still face elimination. The odds that the third person gets a card that is greater than five GIVEN that one player got a five and another player got a card greater than five is 31/50. For the fourth player it is 30/49 and for the final player it is 29/48.

Multiplying these all together, we have that the probability that I am in sole possession of the lowest-valued card is:

p(5 is sole-lowest) = (32/51)·(31/50)·(30/49)·(29/48) = 14.39%

This is the probability that I get eliminated on this round. The probability that I do NOT get eliminated on this round is therefore 85.61%.

In fact, even if I draw a two, the odds that I am going to be eliminated are just over 50/50 (54.3%).
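
The chained product above generalizes to any card value. A quick sketch (a hypothetical helper, assuming 5 players, ace = 1, and ties counting against you as above):

```python
from fractions import Fraction

# For a held card of value v (ace=1), 52 - 4v of the other 51 cards are
# strictly higher: 4(v-1) lower ranks plus the 3 tying cards count against us.
# Every other player must draw strictly higher for us to be sole lowest.
def p_eliminated(v, players=5):
    higher, remaining = 52 - 4 * v, 51
    p = Fraction(1)
    for _ in range(players - 1):
        p *= Fraction(higher, remaining)
        higher -= 1
        remaining -= 1
    return p

print(f"{float(p_eliminated(2)):.4f}")  # 0.5432, the "just over 50/50" figure
```

A king (v = 13) gives `higher = 0` and therefore probability zero, matching the intuition at the start of the thread.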

The other way of framing the problem is directly -- the probability that I am going to survive this round is equal to the probability that at least one other player gets a card that is five or less.

It's tempting to think that the probability that at least one of multiple things happens (i.e., Event 1 OR Event 2 OR Event 3 OR...) is just the sum of the probabilities of each happening, but this is usually wrong. The problem is that we end up double-counting things.

Let's use coins as an example. If I throw five coins, what is the probability that I get at least one head? The probability of a given coin ending up heads is 0.50, but if we just add them up, we end up with the probability of getting at least one head being 250%, which makes no sense. The problem is that we are not taking into account that more than one coin can end up heads, and that means we double-count those outcomes and end up with an erroneously high result. So we need to subtract the overlap.

For two events, one with probability A and the second with probability B, the probability that at least one of A or B happens is

p(A OR B) = p(A) + p(B) - p(A AND B)

The bookkeeping using this approach gets pretty messy pretty quick, which is why we like to cast problems as being probabilities that This AND That have to both happen, because then we don't have to worry about the case of one or the other but not both, because that fails to meet the AND criterion right off the bat.
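
Both framings from the coin example take only a few lines (a sketch with five fair coins):

```python
# Five fair coins: P(at least one head), done via the complement
# (the probability that no coin is heads), then the two-event OR rule.
p_at_least_one_head = 1 - 0.5 ** 5
print(p_at_least_one_head)      # 0.96875, nothing like the naive 5 * 0.5

# Inclusion-exclusion for two independent events A and B:
# P(A or B) = P(A) + P(B) - P(A and B)
p_A = p_B = 0.5
p_A_or_B = p_A + p_B - p_A * p_B
print(p_A_or_B)                 # 0.75 for two fair coins
```

The complement line is one subtraction; the direct OR route already needs a correction term with just two events, which is the bookkeeping mess described above.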

#### drjohsmith

Joined Dec 13, 2021
812
The principle being that to work out the possibility of various things happening (i.e. two people in a crowd sharing the same birthday or one or more people having a card lower than 4), it is often easier to work out the possibility that none of them happens (i.e. that everyone in the crowd has a different birthday, or everyone has a card higher than 4) because that is in many cases just the probability of x not happening raised to the power of the number of people. It gets a little more complicated with the cards as cards are removed from the pack whenever someone is dealt a card.
Thank you.

#### drjohsmith

Joined Dec 13, 2021
812
For 5 players, it works out at (36!/32!)/(51!/47!) = (36!47!)/(51!32!)
Algebraically:
If the first card has a value of c, and the pack size is d, and the number of players is n
p=((d-4c)!/(d-4c-(n-1))!) / ((d-1)!/(d-1-(n-1))!)
p=((d-4c)!(d-n)!)/((d-1)!(d-4c-n+1)!)

watch out for numeric overflow, because 51!32! is quite a big number!
Thank you
Remind my old brain what the ! is, please...
Sorry, to me, doing logic, ! is invert or NOT...

#### drjohsmith

Joined Dec 13, 2021
812
There are two basic kinds of probability calculations -- the probability that two things both happen, and the probability that one or the other of them happens. There are many variations on these two themes, and certain assumptions that often come into play, often unstated. One of these common assumptions is that events are independent, meaning that the result of one event has no impact on the probability of another event happening. That would be typified by flipping a coin or rolling a die -- the results of prior trials have no impact on the likelihood of future trials.

In the case of drawing cards, this is not the case. If the first person draws a king, then the likelihood of the next person drawing a king goes down. It doesn't matter whether anyone knows whether a king was drawn or not. The only thing that having knowledge of the outcome (e.g., being able to look at the card that you drew) gives you is more information upon which to base your estimate of the probabilities of various outcomes for the other cards.

In your game (and I'm going to be a bit more explicit than your kids were on the rules -- I want to know the probability that my card is strictly lower than any other card that was drawn), let's say that I see that I drew a five. I want to know the probability that no other hand has a card that is five or lower. A situation in which this rule might make sense is if we are drawing to decide who gets eliminated from the game. We don't want ties, so if two or more people end up tied for the lowest card, we simply draw again and keep doing that until the lowest card is held by exactly one person.

Let's first approach this using the complementary outcome -- i.e., finding out the probability that NO other hand has a card that is five or less. In doing so, I have split the problem space into two parts and every possible outcome fits exactly into one of those two parts -- either a round in which no one else has a card equal to or less than a five or a round in which at least one other person has a card that is equal to or less than a five. Since all outcomes are in exactly one of these camps, if I can figure out the probability for one of them, the probability for the other is simply 100% minus the probability of the first.

The next observation is that the probability that multiple things all happen (i.e., Event 1 AND Event 2 AND Event 3 AND... ) is simply the product of the probabilities of each of them happening in turn.

In this example, if I have a five, the next person will get 1 of 51 cards in which 19 of them are less than or equal to five and 32 of them are greater than five. The odds that the next person will get a card that is greater than five, GIVEN that I got a five, is therefore 32/51. If the person gets a five or less, we are done (as far as I am concerned, because I now can't be eliminated on this round). But if they hit that 32 in 51 chance and get a card that is greater than five, then the third person also has to get a card greater than five for me to still face elimination. The odds that the third person gets a card that is greater than five GIVEN that one player got a five and another player got a card greater than five is 31/50. For the fourth player it is 30/49 and for the final player it is 29/48.

Multiplying these all together, we have that the probability that I am in sole possession of the lowest-valued card is:

p(5 is sole-lowest) = (32/51)·(31/50)·(30/49)·(29/48) = 14.39%

This is the probability that I get eliminated on this round. The probability that I do NOT get eliminated on this round is therefore 85.61%.

In fact, even if I draw a two, the odds that I am going to be eliminated are just over 50/50 (54.3%).

The other way of framing the problem is directly -- the probability that I am going to survive this round is equal to the probability that at least one other player gets a card that is five or less.

It's tempting to think that the probability that at least one of multiple things happens (i.e., Event 1 OR Event 2 OR Event 3 OR...) is just the sum of the probabilities of each happening, but this is usually wrong. The problem is that we end up double-counting things.

Let's use coins as an example. If I throw five coins, what is the probability that I get at least one head? The probability of a given coin ending up heads is 0.50, but if we just add them up, we end up with the probability of getting at least one head being 250%, which makes no sense. The problem is that we are not taking into account that more than one coin can end up heads, and that means we double-count those outcomes and end up with an erroneously high result. So we need to subtract the overlap.

For two events, one with probability A and the second with probability B, the probability that at least one of A or B happens is

p(A OR B) = p(A) + p(B) - p(A AND B)

The bookkeeping using this approach gets pretty messy pretty quick, which is why we like to cast problems as being probabilities that This AND That have to both happen, because then we don't have to worry about the case of one or the other but not both, because that fails to meet the AND criterion right off the bat.
Thank you, I'm going to have to re-read that a few times.