## Simple Algorithm to Play a Simple Game

Anyone who has worked with machine learning has used the Logistic Regression algorithm, or at least heard of it. Here is a simple application that observes you play the game of Tic-Tac-Toe and learns how to play it.

### How does it work?

As you play, the algorithm observes each game. It isolates the moves that led you to win; drawn and lost games are discarded. It then uses Logistic Regression to train the underlying intelligence, which we represent by “$latex \theta$”. As it gains experience, it makes more accurate predictions about the game and thus learns to play it.
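The step of turning a finished game into training data can be sketched as below. This is an illustration only, not the original Silverlight implementation: the function name, the move-list format, and the board encoding (0 = empty, 1 = X, -1 = O) are all assumptions.

```python
def extract_training_samples(moves, winner):
    """Replay a finished game and keep only the winner's moves.

    `moves` is a list of (player, space_index) pairs in play order;
    `winner` is the player who won ("X" or "O"). Each training sample
    pairs the board state X just before one of the winner's moves with
    a one-hot target y marking the space that player chose.
    """
    board = [0.0] * 9  # assumed encoding: 0 = empty, 1 = X, -1 = O
    samples = []
    for player, k in moves:
        if player == winner:
            X = board[:]                                  # state before the move
            y = [1.0 if j == k else 0.0 for j in range(9)]
            samples.append((X, y))
        board[k] = 1.0 if player == "X" else -1.0         # apply the move
    return samples
```

For example, a game in which X wins along the top row yields three samples, one per winning move; drawn and lost games simply contribute nothing.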

### The math behind it

It uses Gradient Descent together with the logistic sigmoid function to train and predict. Prediction in this application is done by nine individual Logistic Regression units, each of whose outputs corresponds to one of the nine spaces on the Tic-Tac-Toe board.

*Figure 1: Layout of inputs and layout of outputs.*

Let $latex k = 0, 1, \ldots, 8$ be the indices of the nine spaces in the game, and let $latex y_0, y_1, \ldots, y_8$ represent the space you chose (to mark either X or O). Let $latex X = \lbrace x_0, x_1, \ldots, x_8 \rbrace$ represent the input to the algorithm (see Figure 1), and let the intelligence that maps the input to a predicted output be represented by $latex \theta = \lbrace \theta_0, \theta_1, \ldots, \theta_8 \rbrace$. The prediction $latex h_{k}(X)$ for the $latex k^{th}$ space is made according to the equations below:

[latex]\nu_{k}(X) = b_{k} + \sum_{i=0}^{8}\theta_{ki}\,x_i[/latex]

[latex]h_{k}(X) = \frac{1}{1 + e^{-\nu_k(X)}}[/latex]

*Note: $latex b_{k}$ is the bias parameter. It shifts $latex \nu_{k}(X)$, which has the same effect as applying an affine transform to the linear combination.*
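The two equations above can be written out directly. This is a minimal sketch, assuming `theta` is a 9×9 list of weights ($latex \theta_{ki}$), `b` a list of nine biases, and `X` the nine-element board vector; the function name is hypothetical.

```python
import math

def predict(theta, b, X):
    """Nine independent logistic units:
    h_k(X) = sigmoid(b_k + sum_i theta_ki * x_i).
    Returns the nine probabilities h_0 .. h_8."""
    h = []
    for k in range(9):
        nu = b[k] + sum(theta[k][i] * X[i] for i in range(9))  # nu_k(X)
        h.append(1.0 / (1.0 + math.exp(-nu)))                  # logistic sigmoid
    return h
```

With all weights and biases at zero, every unit outputs 0.5, which is why the untrained predictor has no preference among the nine spaces.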

Here, $latex h_{k}(X)$ represents the predicted output and $latex y_{k}$ the move you actually made. The purpose of the training process is therefore to adjust the values of $latex \theta$ until $latex h_{k} \approx y_{k}$. This is done using the Gradient Descent algorithm:

[latex]\theta_{kj} := \theta_{kj} - \alpha\sum_{i=1}^{m}\left( h_k(X^{(i)}) - y_k^{(i)}\right) x_j^{(i)}[/latex]

| Symbol | Meaning |
| --- | --- |
| $latex \alpha$ | Learning rate: a real-valued number that controls how fast the training algorithm converges. |
| $latex m$ | Number of training samples. |
| $latex \theta_{kj}$ | Intelligence parameter indicating how the $latex j^{th}$ input affects the $latex k^{th}$ output. |
| $latex y_k^{(i)}$ | Actual output for the $latex k^{th}$ space given the $latex i^{th}$ sample as input. |
| $latex h_k(X^{(i)})$ | Predicted output for the $latex k^{th}$ space given the $latex i^{th}$ sample as input. |
| $latex X^{(i)}$ | The $latex i^{th}$ input sample, $latex X^{(i)} = \lbrace x_0^{(i)}, x_1^{(i)}, \ldots, x_8^{(i)} \rbrace$. |
| $latex x_j^{(i)}$ | Value of the $latex j^{th}$ space in the $latex i^{th}$ input sample. |
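The update rule above can be sketched as a single batch gradient-descent step. This is an illustrative Python sketch, not the original implementation: `samples` is assumed to be a list of `(X, y)` pairs with nine-element vectors, and the bias $latex b_k$ is updated alongside the weights using the same error term.

```python
import math

def gradient_descent_step(theta, b, samples, alpha):
    """One batch update: theta_kj := theta_kj - alpha * sum_i (h_k(X_i) - y_k_i) * x_j_i.

    `theta` is a 9x9 weight matrix, `b` a list of nine biases,
    `samples` a list of (X, y) pairs, `alpha` the learning rate.
    Updates theta and b in place and returns them."""
    for k in range(9):
        grad_w = [0.0] * 9
        grad_b = 0.0
        for X, y in samples:
            nu = b[k] + sum(theta[k][j] * X[j] for j in range(9))
            err = 1.0 / (1.0 + math.exp(-nu)) - y[k]   # h_k(X) - y_k
            for j in range(9):
                grad_w[j] += err * X[j]
            grad_b += err                               # bias sees input 1
        for j in range(9):
            theta[k][j] -= alpha * grad_w[j]
        b[k] -= alpha * grad_b
    return theta, b
```

Repeating this step over the accumulated winning-game samples drives each $latex h_k$ toward the corresponding $latex y_k$, which is all the training loop needs to do.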

### See it in action

Below is a Silverlight version of the application; try it out. After you play the first game you should see a prediction: the darkest circle marks where the algorithm thinks you will make your next move. Note that you have to move for both players; the application only predicts the moves. The predictor is not trained until you finish your first game with a winning outcome. Here is the link to the source code: TicTacToe.zip.