### risk matching in gambling

An argument for playing a game such as poker with “real” money is that it forces people to play with true risk-reward calculations. While this is certainly better than playing without risk, there remains the question of how to match risk profiles among players. With enough players (a large, liquid market), they can self-sort by stake size, and this seems fair. With only a few people, though, the situation reverses: a stake size has to be agreed upon at some clearing level (one that entices enough people to play) rather than chosen individually, and that same amount of money may represent very different values to different people. A pauper and a millionaire do not see $100 as the same value; they will adjust their utilities accordingly, and this will materially affect wagering.

Since risk is measured in utility units, it is desirable to match utilities rather than dollar amounts. But there isn’t an agreed-upon utility currency. Or is there? There is: the chips are perfect representations. So utility matching could be a fruitful path. Start with this: suppose that prior to playing, each player declares an “exchange rate” for chips, say 1 chip-unit = $5, and pays whatever it takes to buy the chips needed for his stake in the game. Without loss of generality, all games can be played with, say, 1000 chip-units. So this player would pay $5000 for his chips. Another player who declares 1 chip-unit = $0.05 would pay $50 for hers, and so on. At the end of the game, the winning player takes all of the chips at the table and exchanges them back for money. For example, if four people played, there are 4000 chips, so the person who declared 1 chip-unit = $5 would get $20,000, and the person who declared 1 chip-unit = $0.05 would get $200. If we call the usual way of stake-setting with a small group Scheme A, then call the one described here Scheme B.
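The mechanics of Scheme B can be sketched in a few lines of Python. The player names and the $1.00/$0.10 rates are hypothetical; only the $5 and $0.05 rates and the 1000-chip stake come from the example above.

```python
# Scheme B: each player buys the same number of chips at a self-declared rate,
# and the winner redeems every chip on the table at their own rate.
CHIPS_PER_PLAYER = 1000  # every game uses 1000 chip-units per player

# Declared exchange rates in dollars per chip-unit (illustrative values).
rates = {"alice": 5.00, "bob": 0.05, "carol": 1.00, "dave": 0.10}

# Each player pays in at their own rate to receive their chips.
buy_ins = {name: CHIPS_PER_PLAYER * r for name, r in rates.items()}

total_chips = CHIPS_PER_PLAYER * len(rates)  # 4000 chips on the table

def payout(winner: str) -> float:
    """Dollars the winner receives for all the chips, at their own rate."""
    return total_chips * rates[winner]

print(buy_ins["alice"])  # → 5000.0
print(payout("alice"))   # → 20000.0
print(payout("bob"))     # → 200.0
```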

Two problems arise with Scheme B. First, a winning player whose dollar-per-chip exchange rate is higher than average cannot be paid in full from that game’s buy-ins alone, nor ever if they keep winning. Second, players may not declare their exchange rates truthfully. Is there a clean way out of this?

The second problem is complicated, so set it aside for now. The first problem occurs because the group of players does not, in aggregate, supply the payoff demanded by the players with above-average dollar-per-chip exchange rates, if those players also win more than average. In this light, Scheme A (a group playing with agreed-upon equal-dollar stakes) gets around the problem by essentially setting a common exchange rate for everybody at the minimum of the group (the rate that enticed the last person to join). This works, but can we combine Scheme A’s payoff fairness with Scheme B’s utility-matching fairness?
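The aggregate-funding failure can be seen directly by comparing the cash actually paid in with what each possible winner would be owed. A sketch with the same hypothetical rates as before:

```python
# Scheme B's funding shortfall: the pool of buy-ins may not cover the
# payout owed to a winner with an above-average exchange rate.
CHIPS_PER_PLAYER = 1000
rates = {"alice": 5.00, "bob": 0.05, "carol": 1.00, "dave": 0.10}

cash_pool = sum(CHIPS_PER_PLAYER * r for r in rates.values())  # dollars paid in
total_chips = CHIPS_PER_PLAYER * len(rates)

for winner, r in rates.items():
    owed = total_chips * r                  # what this winner would redeem
    shortfall = max(0.0, owed - cash_pool)  # unpayable portion
    print(f"{winner}: owed ${owed:,.0f}, pool ${cash_pool:,.0f}, "
          f"shortfall ${shortfall:,.0f}")
```

Only alice (the above-average rate) produces a shortfall: she is owed $20,000 from a pool holding $6,150. Everyone else’s payout is covered.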

To some extent, yes. A marginal improvement can be made by noting that, as long as no player wins 100% of the games, some exchange-rate flexibility can be allowed. This is an interesting fundamental trade-off: the less spread out the players’ winning percentages, the more spread out their exchange rates can be, and vice versa. At one extreme, the players set identical exchange rates (Scheme A) and any game outcome can be supported. At the other extreme, if the players are equally likely to win, then on average no chips change hands, so it wouldn’t matter what exchange rates were set (Scheme B). I should probably quantify what happens in the middle but can’t be bothered to do it today.
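One way to see the trade-off is to compute the expected net cash flow into the pool per game for a given profile of win probabilities (a sketch with the same hypothetical rates; a positive value means the pool grows, a negative one means it is being drained):

```python
# Expected net dollars flowing into the pool per game, as a function of the
# players' win probabilities. With equal win probabilities the expectation is
# zero regardless of the rate spread; if the high-rate player wins more often
# than average, the pool is drained in expectation.
CHIPS_PER_PLAYER = 1000
rates = [5.00, 0.05, 1.00, 0.10]  # dollars per chip-unit, illustrative

def expected_net_per_game(win_probs):
    buy_ins = sum(CHIPS_PER_PLAYER * r for r in rates)
    total_chips = CHIPS_PER_PLAYER * len(rates)
    exp_payout = sum(p * total_chips * r for p, r in zip(win_probs, rates))
    return buy_ins - exp_payout

print(expected_net_per_game([0.25, 0.25, 0.25, 0.25]))  # → 0.0
print(expected_net_per_game([0.70, 0.10, 0.10, 0.10]))  # negative: pool drained
```

With equal win probabilities the buy-ins exactly fund the expected payout, however spread out the rates are; skew the win probability toward the $5-per-chip player and the pool loses money every game in expectation.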