Let’s suppose you play the following game against an opponent: if you win a round you gain 1 dollar, and if you lose a round you lose 1 dollar. The probability of winning a round is *p* (and conversely the probability of losing is *q = 1 – p*). You start with *i* dollars and your opponent with *N – i* dollars (*N* is the total sum in the game). What is the probability that you win the whole sum of money before you go broke? In scientific terms, we are looking for *P(N|i)*, the probability of winning all *N* dollars conditioned on starting with *i* dollars. After a not-so-complicated proof (given in the annex below) the result is:

*P(N|i) = (1 – ρ^i) / (1 – ρ^N)*, where *ρ = q/p* (and *p ≠ q*).
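As a quick sanity check, the closed-form result is easy to evaluate in code. A small Python sketch (the function name `win_probability` is my own; the fair-game case *p = q* is handled separately, since the formula degenerates there):

```python
def win_probability(p: float, i: int, N: int) -> float:
    """Probability of reaching N dollars before going broke,
    starting with i dollars, winning each round with probability p."""
    q = 1.0 - p
    if p == q:                 # fair game: the formula reduces to i/N
        return i / N
    rho = q / p
    return (1 - rho**i) / (1 - rho**N)

# Fair game: only the starting fortunes matter.
print(win_probability(0.50, 20, 50))   # 0.4
# Slightly unfair game, same starting fortunes: about 0.074.
print(win_probability(0.48, 20, 50))
```

Plotting this function over *p* or *i* reproduces the figures discussed below.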

This problem is known as *gambler’s ruin* and was first proposed by Christiaan Huygens. Let’s plot the above result and see if we can draw some conclusions from this problem.

First, in the case of a perfectly fair game (*p = q = 0.5*, meaning neither player has an advantage) the probability of winning the total amount is *P(N|i) = i/N*: it depends only on how much money you start with. In other words, the player with the most money has the bigger probability of winning. Or, if you prefer the corollary: if you want all the money in a game, try to start the game with most of it in your pocket.

If the game is not perfectly fair, it gets even worse. If *N = 50* dollars and you start with 20 dollars, the probability of winning everything drops below 0.1 when the probability of winning a single round is just slightly against you (*p = 0.48*). The probability of winning everything decreases more slowly after that (as *p* goes below 0.48), but that just shows that a small difference is all it takes; you don’t need to cheat too badly to get an effect.
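To see how quickly the advantage evaporates, one can sweep *p* for the *N = 50*, *i = 20* example above (a self-contained Python sketch; the chosen values of *p* are illustrative):

```python
N, i = 50, 20

for p in [0.50, 0.49, 0.48, 0.46, 0.44]:
    q = 1.0 - p
    if p == q:
        prob = i / N                        # fair-game limit
    else:
        rho = q / p
        prob = (1 - rho**i) / (1 - rho**N)  # closed-form result
    print(f"p = {p:.2f} -> P(win everything) = {prob:.4f}")
```

The printed probabilities fall off steeply: from 0.4 at *p = 0.50* to under 0.01 once *p* drops to 0.46.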

So the lesson is simple. If you want to maximize the probability of winning all the money in a series of games, try to start with (much) more money than your opponent(s) and try to have a bigger chance of winning each round than your opponent(s). This is the basic reason professional gambling establishments cannot lose: they make sure that both of the above conditions are met in their favor.

### Annex

To find *P(N|i)* let’s draw a Markov chain. This is basically a graphical representation of the gaming process as a series of states, each characterized by the amount of money *i* you currently hold. The future state is determined only by the present state and the probabilities *p* and *q*. The states 0 and *N* are special end states: if you reach them the game is over. For them we have the boundary conditions *P(N|0) = 0* (you can’t win the game if you have 0 dollars) and *P(N|N) = 1* (you have won the game if you have *N* dollars).
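The chain itself is easy to simulate, which gives an independent check on the closed-form result before we derive it (a Monte Carlo sketch; the seed and trial count are arbitrary choices):

```python
import random

def play_until_ruin(p: float, i: int, N: int) -> bool:
    """Simulate one full game; True if we reach N before going broke."""
    money = i
    while 0 < money < N:
        money += 1 if random.random() < p else -1
    return money == N

random.seed(1)
p, i, N, trials = 0.48, 20, 50, 10_000
wins = sum(play_until_ruin(p, i, N) for _ in range(trials))
print(wins / trials)   # close to the exact value of about 0.074
```

The empirical frequency agrees with the formula to within Monte Carlo noise.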

For any state *i* except 0 and *N* we have probability *p* of moving to state *i + 1* (green transition) and probability *q* of moving to state *i – 1* (red transition). So we can write the following:

*P(N|i) = p·P(N|i+1) + q·P(N|i–1)*   (1)

Since *p + q = 1*, equation (1) can be rearranged as *P(N|i+1) – P(N|i) = ρ·(P(N|i) – P(N|i–1))*, with *ρ = q/p*. We know that *P(N|0) = 0*, so writing this for each transition:

*P(N|2) – P(N|1) = ρ·P(N|1)*
*P(N|3) – P(N|2) = ρ^2·P(N|1)*
*...*
*P(N|i) – P(N|i–1) = ρ^(i–1)·P(N|1)*

Adding all the above (the left-hand side telescopes):

*P(N|i) = P(N|1)·(1 + ρ + ρ^2 + ... + ρ^(i–1)) = P(N|1)·(1 – ρ^i) / (1 – ρ)*   (2)

Writing the other boundary condition *P(N|N) = 1* into equation (2) we get:

*1 = P(N|1)·(1 – ρ^N) / (1 – ρ)*, i.e. *P(N|1) = (1 – ρ) / (1 – ρ^N)*   (3)

Replacing (3) into (2) we get the relation we were looking for:

*P(N|i) = (1 – ρ^i) / (1 – ρ^N)*
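One way to double-check the derivation is to solve equation (1) with its boundary conditions numerically and compare against the closed form. A sketch using simple relaxation (repeatedly applying equation (1) until the values settle; the sweep count is a generous over-estimate):

```python
p, N = 0.48, 50
q = 1.0 - p

# Solve P[i] = p*P[i+1] + q*P[i-1] with P[0] = 0, P[N] = 1 by relaxation.
P = [0.0] * (N + 1)
P[N] = 1.0
for _ in range(50_000):
    for i in range(1, N):
        P[i] = p * P[i + 1] + q * P[i - 1]

rho = q / p
exact = [(1 - rho**i) / (1 - rho**N) for i in range(N + 1)]
print(max(abs(a - b) for a, b in zip(P, exact)))  # a tiny number: the two agree
```

The maximum discrepancy is at the level of floating-point noise, confirming the closed-form solution of the recurrence.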

Of course this result is not valid for *ρ = 1*, but that case is easily handled by setting *ρ = 1* in equation (2), where the sum simply has *i* equal terms:

*P(N|i) = i·P(N|1)*

In this case the boundary condition *P(N|N) = 1* becomes:

*1 = N·P(N|1)*, i.e. *P(N|1) = 1/N*

And the result for *ρ = 1* (*p = q = 0.5*) is:

*P(N|i) = i/N*