Poker Quiz Answers

I know the tension has been building, so without further ado, I present the answers to our poker quiz! And you should listen to what I say, as I am a recognized expert in the field.

Remember the set-up: you’re playing Texas Hold’Em, so you have two cards to yourself plus (eventually) five cards face-up in the middle, and your hand consists of the best five cards you can choose from your two hole cards and the five community cards. Which of the following has the best chance of winning against somebody else’s (unknown, obviously) cards at a showdown?

  • Jack-10 suited
  • Ace-7 unsuited
  • Pair of 6s

Note that this is not really a poker-strategy question, it’s just a math question. There is a separate issue, which is “which is the best starting hand”, or for that matter “how should you play each hand?” — we’ll get to that later. But this is just a math problem — which is most likely to win if you choose to stay in the pot all the way to the showdown?

The answer, to nobody’s surprise, is: it depends! It does not depend on your position, or whether the betting is limit or no-limit — those might affect your strategy along the way, but at the end of the hand it’s just a matter of who has the best cards. What it does depend on is how many people you are playing against. The absolute probability that you will win obviously goes down if you are playing against more opponents with randomly-chosen cards, just because there are more ways they could beat you. But, much more interestingly, the ordering of which hand is best also changes.

Here are the answers, presented in convenient tabular form. We’re showing the percentage chance that your hand will win outright, both against one other random hand and against four other random hands. The percentages come from running 500,000 simulated hands each, using the Poker Academy software. (It’s a very nice program, incorporating artificial-intelligence routines developed by the University of Alberta Poker Research Group. [Yes, there is such a thing.]) “Jd” stands for jack of diamonds, “Td” for ten of diamonds, etc. For later convenience we’ve chosen the ace to be the same suit as the JT, with all other cards being different suits (it doesn’t matter for this table, but does for the next one).

              Jd Td   Ad 7c   6d 6h
1 opponent     56.2    57.3    62.8
4 opponents    27.3    20.7    17.9

So the miracle is that the relative strength of the three hands reverses when we go from one opponent to four. Against one other player, the sixes stand the best chance, followed by the A7, followed by the JTs (where “s” stands for “suited”). But against four, JTs is the most likely of the three to win, while the sixes are the least.
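
The percentages above came from the Poker Academy simulator, but they are easy to reproduce to within Monte Carlo error. Here is a minimal Python sketch (my own illustrative code, not the software used for the table; the names hand_rank, best_of_seven, and win_probability are just labels I chose): it scores five-card hands, picks the best five out of seven, deals random boards and random opposing hole cards, and counts outright wins. With a few tens of thousands of trials per estimate, the results should land within roughly a percent of the numbers above.

```python
import random
from collections import Counter
from itertools import combinations

RANKS = '23456789TJQKA'
SUITS = 'cdhs'

def hand_rank(hand):
    """Score a 5-card hand (cards as strings like 'Jd'); bigger tuples win."""
    ranks = sorted((RANKS.index(c[0]) for c in hand), reverse=True)
    groups = sorted(Counter(ranks).items(), key=lambda rc: (rc[1], rc[0]), reverse=True)
    ordered = tuple(r for r, _ in groups)      # ranks, most-repeated first
    shape = tuple(n for _, n in groups)        # e.g. (3, 2) for a full house
    flush = len({c[1] for c in hand}) == 1
    distinct = sorted(set(ranks))
    straight, high = False, 0
    if len(distinct) == 5 and distinct[4] - distinct[0] == 4:
        straight, high = True, distinct[4]
    elif distinct == [0, 1, 2, 3, 12]:         # the A-2-3-4-5 "wheel"
        straight, high = True, 3
    if straight and flush:    return (8, (high,))
    if shape == (4, 1):       return (7, ordered)
    if shape == (3, 2):       return (6, ordered)
    if flush:                 return (5, tuple(ranks))
    if straight:              return (4, (high,))
    if shape == (3, 1, 1):    return (3, ordered)
    if shape == (2, 2, 1):    return (2, ordered)
    if shape == (2, 1, 1, 1): return (1, ordered)
    return (0, tuple(ranks))

def best_of_seven(cards):
    """Best five-card hand choosable from seven cards."""
    return max(hand_rank(list(combo)) for combo in combinations(cards, 5))

def win_probability(hole, n_opponents, trials=20000):
    """Monte Carlo estimate of the chance `hole` wins outright at a showdown."""
    deck = [r + s for r in RANKS for s in SUITS]
    wins = 0
    for _ in range(trials):
        rest = [c for c in deck if c not in hole]
        random.shuffle(rest)
        board = rest[:5]
        opponents = [rest[5 + 2 * i: 7 + 2 * i] for i in range(n_opponents)]
        mine = best_of_seven(hole + board)
        if all(mine > best_of_seven(opp + board) for opp in opponents):
            wins += 1
    return wins / trials

if __name__ == '__main__':
    for hole in (['Jd', 'Td'], ['Ad', '7c'], ['6d', '6h']):
        print(hole, '1 opponent:', round(100 * win_probability(hole, 1), 1),
              ' 4 opponents:', round(100 * win_probability(hole, 4), 1))
```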

It’s not hard to figure out what’s going on. But before we do, let’s take a peek at something even more surprising. What happens if, instead of putting one of these three hands against some other random cards, we put them up against each other, two at a time? What is the relative ranking? Here is what happens:

         Jd Td   Ad 7c   6d 6h
Jd Td      -     51.5    47.7
Ad 7c    48.3      -     56.7
6d 6h    51.6    43.0      -
The table shows the chance that the hand listed on top will beat the hand listed on the left side at a heads-up showdown (no other players). The entries don’t add up to 100% because there can be ties. So, another miracle: it’s not transitive! Sixes are likely to beat A7, and A7 is likely to beat JTs, but JTs is likely to beat a pair of sixes. It’s a kind of combinatorial rock-paper-scissors situation.
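
The heads-up numbers can be reproduced with the same approach: fix both sets of hole cards, deal only the board at random, and count how often one hand finishes ahead. A short sketch, reusing RANKS, SUITS, random, and best_of_seven from the code above (heads_up is again just an illustrative name):

```python
def heads_up(hole_a, hole_b, trials=20000):
    """Chance hole_a beats hole_b outright at a heads-up showdown (ties excluded)."""
    # Reuses RANKS, SUITS, random, and best_of_seven from the earlier sketch.
    deck = [r + s for r in RANKS for s in SUITS]
    live = [c for c in deck if c not in hole_a and c not in hole_b]
    wins = 0
    for _ in range(trials):
        board = random.sample(live, 5)
        if best_of_seven(hole_a + board) > best_of_seven(hole_b + board):
            wins += 1
    return wins / trials

hands = (['Jd', 'Td'], ['Ad', '7c'], ['6d', '6h'])
for a in hands:
    for b in hands:
        if a is not b:
            print(a, 'beats', b, 'about', round(100 * heads_up(a, b), 1), '% of the time')
```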

So what is going on? Note that if we consider just the two hole cards, without taking advantage of the community cards, the sixes are the best hand, followed by the A7, with JTs bringing up the rear. For one of the latter two to win, the community cards have to help it improve (by pairing one of the hole cards, or making a flush, or whatever). So the question becomes, how many ways are there to improve? The only likely way for the A7 to improve is for either an ace or a seven (or both, or several) to land on the board, although it’s also possible to find four board cards that help make a straight or flush. Adding up the probabilities, it’s almost a fifty percent chance, but not quite. Against the sixes, there are more ways for the JTs to improve: because the cards are “connectors,” allowing for cards that would give low straights (7-8-9) and high straights (Q-K-A) or various intermediate possibilities, and because the cards are suited, making it much easier to make a diamond flush. So JTs will usually beat a pair of sixes. But it won’t usually beat A7 if the ace is of the same suit. That’s because some of the ways that JTs will improve will also improve the A7 — in particular, if four diamonds come up, the JT will have a flush but the A7 will have a better one.
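
That “almost a fifty percent chance, but not quite” can be checked directly for the dominant piece, pairing an ace or a seven: from the A7 player’s point of view there are 50 unseen cards, six of which are the remaining aces and sevens, and the chance that at least one of them lands among the five board cards comes out just under one half (the rare straight and flush possibilities add only a little on top of this). A quick check:

```python
from math import comb

# 50 unseen cards, 6 of them aces or sevens; probability that the
# 5-card board contains at least one of those 6 cards:
p_pair_up = 1 - comb(44, 5) / comb(50, 5)
print(round(p_pair_up, 3))   # 0.487, just under fifty percent
```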

The same reasoning explains the first table. Against only one randomly-chosen pair of hole cards, there is a substantial chance that the sixes won’t need to improve, so they do the best; likewise the ace can often come out on top just by itself, so it’s second-best. But against four opponents, chances are excellent that someone will improve, and JTs has the best chance.

Which leads us to the other question: which of these three is the best starting Hold’Em hand? It should be clear that there is no universally correct answer, and it will depend on game conditions — although, in ordinary circumstances, JTs is clearly the best, for a couple of reasons. One is that the thought experiment of playing your cards against another pair of randomly-chosen hole cards isn’t what really happens; in a real game you have a bunch of opponents, and the ones with weak hands simply fold, leaving only the stronger hands. So it’s almost as if you are playing against a larger number of opponents, even if a small number stay in for the showdown. The other reason (much more important) is that the criterion for success is not how many hands you win or lose, but how much money you win or lose. The A7 is not going to make you much money. If no ace comes up on the board, you’re likely beaten. If an ace does come up, either someone else has an ace with a better kicker (in which case you will lose a lot), or nobody has an ace and they will just fold (in which case you will win a little). Likewise for the sixes — if nobody can beat a pair of sixes, they’re not going to be putting much money into the pot. The only way to win big is if another six comes up, which is possible but unlikely, and you’d still have to worry that someone else made a straight or flush. This is why beginning players often over-value low pairs and aces with low kickers.

The moral of the story is that you don’t win in Hold’Em by knowing the percentage chance that your pocket cards can beat some other random two cards — you need to know what kind of hand your opponents are likely to have. Part of that is just probabilities, but much of it is gleaning clues from the way they have played the hand up to that point (did they raise, or call? how many bets? from what position?). In other words, you need a model of your opponents. Poker players have invented a two-dimensional parameter space of playing styles that serves as a simple model. One axis ranges from loose to tight — how often someone plays vs. folding. The other goes from passive to aggressive — how often someone simply checks or calls vs. raising. At the crudest level of analysis, you can locate an entire table of players at some point of the tight/loose and passive/aggressive plane; with a bit more data, you can describe individual players this way, and at a very sophisticated level you can get as specific as you like in an extremely high-dimensional parameter space (“they like to raise 80% of the time with pocket nines or better in fifth position with one bet and one caller before them when their stack is less than half of its starting value,” stuff like that).
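
To make the crudest level of that analysis concrete, here is a toy sketch of the two-axis model (the OpponentModel class and its fields are my own illustrative choices, not a standard poker-tracking schema): it just accumulates how often a player voluntarily enters a pot, and how often their actions are raises rather than checks or calls.

```python
from dataclasses import dataclass

@dataclass
class OpponentModel:
    """Crude two-axis opponent model: loose/tight and passive/aggressive."""
    hands_dealt: int = 0
    hands_played: int = 0   # hands where the player voluntarily put money in
    calls: int = 0          # checks and calls observed
    raises: int = 0         # bets and raises observed

    def observe(self, played: bool, n_calls: int, n_raises: int) -> None:
        self.hands_dealt += 1
        self.hands_played += played
        self.calls += n_calls
        self.raises += n_raises

    @property
    def looseness(self) -> float:   # 0 = very tight, 1 = very loose
        return self.hands_played / max(self.hands_dealt, 1)

    @property
    def aggression(self) -> float:  # raises as a fraction of all actions
        return self.raises / max(self.calls + self.raises, 1)

# Example: after a few observed hands, place the player on the plane.
villain = OpponentModel()
villain.observe(played=True, n_calls=2, n_raises=1)
villain.observe(played=False, n_calls=0, n_raises=0)
villain.observe(played=True, n_calls=0, n_raises=3)
print(f"loose={villain.looseness:.2f}  aggressive={villain.aggression:.2f}")
```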

That’s why it’s much harder to program a computer to be a championship-level Hold’Em player than a championship-level chess player. There is no perfect strategy in Hold’Em — no decision tree you could unambiguously follow to guarantee the best possible outcome. (Indeed, if you had an opponent that used such a decision tree, you could in principle always beat them.) Unlike in chess, the computer can’t win by brute force; it needs to be clever enough to learn from the previous moves of its opponents to figure out how they are playing. Teaching computers to play poker is an active area of research in artificial intelligence. And teaching humans is an active area of research in Vegas (although the “tuition” can get a little steep).


20 thoughts on “Poker Quiz Answers”

  1. I vote we play Scrabble next time. I don’t know anything about probability and connectors, but I do know an assload of “z” words.

  2. Very interesting, Sean. Recently, I have been reading “The Theory of Gambling and Statistical Logic” by Epstein, some technical journal papers on card games from the late 50s and the 60s, and various stuff on “stochastic games”. However, my card games of choice are Blackjack, Blackjack Switch Poker and Baccarat, playing these online. Statisticians started to analyse Blackjack once the first powerful computers became available in the 50s and 60s. In Blackjack Switch Poker it is actually possible to gain an enormous mathematical advantage and virtually remove the house edge (or at least get it down to under 0.1%). Online casinos offer it but they do so assuming that 99% of people don’t actually know how to play it properly (which is the case). Do you have any familiarity with these card games? I mean, can you card count? :) A mathematician from MIT called Ed Thorp is part of blackjack folklore, a very interesting character, although he went to Wall Street eventually I believe.

    As far as online poker goes I have been curious as to whether the software might have “tells”: that is, it might pause a little longer at certain times when it has to go into different or longer subroutines, or compute certain things depending on what hand it holds or what other players do and so on. Also, I have been curious as to how random the shuffling and dealing actually is since computers generate pseudo-random numbers from an iteration algorithm rather than pure random numbers. Any thoughts on any of this?

  3. JustAnotherInfidel

    Dr. Carroll–

    What if the other two hands weren’t taking away some of the outs from the diamond flush (best case scenario)?

  4. You need to lookup Al Hibbs of JPL fame (lives in Pasadena, Feynman’s PhD student, Univ of Chicago math alumni) when you come to Caltech. He was the guy who went to Las Vegas, figured out a system to beat Roulette (there was a flaw), made millions (got banned), & used the proceeds to buy a yacht w/friends..they sailed the Mediterranean. I called him up a few yrs ago, but he was all grouchy & blew me off. (he was on that Community College astrophysics multi-episode distance-learning program, along with other Caltech profs: K. Thorne, K. Libbrecht, H. Zirin)

    There was a show on the History Channel, which detailed the MIT effort (led by a mathematician) to play Blackjack. Some new probability algorithm. One young guy was sitting at the pool, with hundreds-of-thousands of $$ in a duffel bag, contemplating his windfall. I think it broke up, after there was internal dissension..greed.

    If you get rich, let me know & I’ll help you spend it. There’s a GNP stereo/computer store on Colorado Bl a couple blocks from Caltech (started by a Caltech grad, who sold stuff to Caltech profs). I hear he got really wealthy by getting into the Stock Market. You need to hook up with that guy, & dabble in some mathematically-based Risk Management (aka “gambling”).

  5. There is no perfect strategy in Hold’Em… Unlike in chess, the computer can’t win by brute force; it needs to be clever enough to learn from the previous moves of its opponents to figure out how they are playing.

    This is not true. There is an optimal strategy for Hold’Em: the difference from chess is that the optimal strategy is not deterministic. Obviously, it’s a hard computational problem to find this strategy (just as it is for chess), which is why current poker AI emphasizes opponent modeling.

  6. Ben, what makes you think that? As far as I know there is no theorem, but my impression is that there was always a strategy that could defeat any known strategy, even if it were not deterministic, if the known strategy didn’t adjust for the behavior of the opponents.

    JustAnotherInfidel, I’m not sure I get the question. If it were, say, Ah7c against JdTd, I don’t actually think the numbers would change much, to be honest — there would now be the possibility of a heart flush, for example. Of course, the hand would play out very differently in a real game, since two people drawing to the same flush leads to very different betting.

  7. Just a tangential thought: perhaps God does not play dice; however, he might engage in card games…

    Oddly enough, card players appear to confront less uncertainty when playing with “quantum cards” than with “classical cards.” More specifically, the predictability sieve for a deck of playing cards is more refined in a “quantum-game-room” than in a “classical-game-room.” Nevertheless, gravity still remains “the wild card” for both groups of card games…

    In the meantime, put gravity aside, shuffle the cards and enjoy the game! 🙂

  8. Ben, what makes you think that? As far as I know there is no theorem

    I believe Ben is referring to the classic theorem of von Neumann, that any two player zero-sum game, such as heads up poker, has a dominant strategy equilibrium. I’m not sure if the theorem has a generalization to an N player symmetric game. Further, I’m not sure if the game-theoretic idea of an equilibrium is what you want here. Don’t know enough about these things. So you should probably ask Ben.

    my impression is that there was always a strategy that could defeat any known strategy

    But there are strategies that minimize the maximum harm done to you, and if we consider non-deterministic strategies there exists a stable equilibrium of them. Non-transitivity doesn’t change this. Consider rock-paper-scissors. If I’m allowed to only play one, I can always be defeated. However, the strategy “pick a random one with equal probability 1/3” cannot be “defeated”. I can imagine that in a world tournament of super-expert RPS players, they would all just be playing randomly, since any deviation from pure randomness would be punished. This means that rock-paper-scissors gets more boring as skill increases even faster than soccer, which, judging by the progression of the World Cup, is an impressive feat. I think you can see how this works in a poker situation, and it already encodes the opponents’ behavior in some sense without having to dynamically evaluate it.
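
    (A quick numerical illustration of the rock-paper-scissors point, assuming the usual +1/0/-1 payoffs: the uniform mixed strategy has expected value zero against every pure strategy, and therefore against any mixture of them.)

```python
# Payoff to the row player in rock-paper-scissors: 1 win, 0 tie, -1 loss.
PAYOFF = {('R', 'R'): 0, ('R', 'P'): -1, ('R', 'S'): 1,
          ('P', 'R'): 1, ('P', 'P'): 0, ('P', 'S'): -1,
          ('S', 'R'): -1, ('S', 'P'): 1, ('S', 'S'): 0}

uniform = {'R': 1 / 3, 'P': 1 / 3, 'S': 1 / 3}

# Expected payoff of the uniform strategy against each pure response:
for opp in 'RPS':
    ev = sum(p * PAYOFF[(move, opp)] for move, p in uniform.items())
    print(opp, round(ev, 10))   # 0.0 every time, so it cannot be exploited
```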

  9. Chimpanzee,
    Hibbs wrote that book on path integrals with Feynman I believe (?), but you would be hard pushed to find a mechanically flawed wheel these days though. There was a software flaw in the dozen column bets or thirds in some online roulette games but I think it has been fixed. Basically any one of the thirds was hard-wired to always come in within a fixed number of spins in order to create what’s called “realistic waiver”, since the software uses pseudo-random numbers.

    A major software flaw in online Texas Hold Em was detailed here:

    http://www.cigital.com/news/index.php?pg=art&artid=20

    The MIT blackjack project was featured on this bbc Horizon program:

    http://www.bbc.co.uk/sn/tvradio/programmes/horizon/million_prog_summary.shtml

    When they tried it in Europe they got death threats.

    A famous case involving online blackjack–a version called Caribbean 21–was when a player calling himself “Pirateofc21”, with an initial bankroll of $1000, blew it up into $1.3 million. Of course, they refused to pay and accused him of “cheating” but couldn’t prove it or find any flaws in their own software. I think it went to court and they did pay. (It is worse for a casino’s long-term business to get a reputation for not paying up.) As far as I am concerned, though, exploiting a software flaw is fair game, since that is the software they offer and it is their responsibility for how it functions. After all, they have absolutely no problems about bleeding money off you.

    Interesting to learn that there is a poker research group at the University of Alberta. There is a chapter called “Poker and Bluffing” in the famous book “Theory of Games and Economic Behavior” by von Neumann and Morgenstern, but it’s a pretty hard book :)

  10. JustAnotherInfidel

    Dr. Carroll—

    Let me rephrase—in all of the examples you worked above, there were hands with competing cards…that is, the diamond flush, that the JdTd would be drawing to, is less likely to come if you know your opponent(s) has (have) a diamond. Just wondering—I’ve never been much for calculating numbers anyway, I generally get a ballpark estimate and go from there. (Perhaps it’s a second order effect.) But I will stick with my original estimates–put the three hands heads up, such that the A7 and 66 don’t take away any diamonds from the deck, and the JTs is 40% (ish), and maybe even a bit better, to win. Then the sixes, then the A7. And not to be picky, but you did say that “a standard hold em table has ten hands” in your original post…

    Thank you for the academic discussion on poker—I was wondering when it would come up here!

  11. Just a tangential thought: perhaps God does not play dice; however, he might engage in card games…

    From Pratchett and Gaiman’s Good Omens:

    “God moves in extremely mysterious, not to say, circuitous ways. God does not play dice with the universe; He plays an ineffable game of His own devising, which might be compared, from the perspective of any of the other players, [ie., everybody.] to being involved in an obscure and complex version of poker in a pitch-dark room, with blank cards, for infinite stakes, with a Dealer who won’t tell you the rules, and who smiles all the time.”

  12. Chad Orzel,

    You paint a much too grim picture of the cosmos…Bear in mind, we are making headway towards comprehending the “quantum card game.” My inspiration is derived from Wojciech Zurek’s work in quantum information theory. Unfortunately, Zurek seems to fall short of “factoring gravity into the quantum equation.”

  13. …no decision tree you could unambiguously follow to guarantee the best possible outcome. (Indeed, if you had an opponent that used such a decision tree, you could in principle always beat them.) Unlike in chess, the computer can’t win by brute force; it needs to be clever enough to learn from the previous moves of its opponents to figure out how they are playing.

    In such cases there can still exist optimal mixed strategies; this is what Ben means. In practice one considers strategies that give probabilities for moves based on the information gained from the moves so far. However, this does not yield the most general mixed strategies.

    Of course, in practice it’s far better not to use the brute force approach. It isn’t practical, and you should exploit the fact that your opponents are using strategies that are far from optimal.

  14. Ben, what makes you think that? As far as I know there is no theorem

    Hi Sean,

    The theorem is due to John Nash. An easy-to-find reference is

    J. Nash, Non-cooperative games, Annals of Mathematics 54(2), 286 (1951).

    It’s on JSTOR but this commenting system seems to choke if I give a direct link. Amusingly, he solves a simplified 3 person poker game in the paper.

    A few caveats: if there’s a rake, then the theorem does not apply. If your opponents are irrational, or rational but with limited computational power, then as you say, opponent modeling is desirable.

    For an overview of the state-of-the-art in finding optimal solutions to Hold’em, have a look at the introduction to this paper by the Alberta group.

    ben

  15. Forget poker! Let’s talk about Italia and how they are the best soccer team in the world!

    Forza Italia!

  16. Ben and Yonah, thanks. And sorry for not replying, it’s been busy. Is it straightforward to state precisely what the conditions are for the Nash theorem to hold? I think I was implicitly discounting the possibility of a non-deterministic strategy, imagining a player that would always do a certain thing in a certain situation.

  17. My personal view is that it may be possible to “prove” that no algorithmic optimal Hold’Em strategy can be devised. I think this would involve demonstrating that any successful algorithm would inherently require enough non-optimal play to create sufficient uncertainty in the opponent as to style, and then demonstrating that the non-optimal play could be exploited by another algorithm. It may be that there is a general theory that any game which involves “bluffing” is not subject to algorithmic optimization.

