Internship report: Can an AI play Rummikub?

To answer this question we need a few things. First, we need computer vision to predict the game state from images. Then we need an algorithm that can predict optimal moves from a game state. Finally, these two solutions need to work together to solve Rummikub.

Getting images

I used the official Rummikub app to play the game, which made data collection slightly easier. To capture images from the app I used the Python library “Pillow”, whose “ImageGrab” module can grab a section of the screen.
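
Below is a minimal sketch of that capture step with Pillow; the bounding box coordinates are placeholders for wherever the app window happens to sit on screen.

    from PIL import ImageGrab

    # (left, top, right, bottom) in screen pixels -- adjust to the app window.
    bbox = (0, 0, 1280, 720)

    screenshot = ImageGrab.grab(bbox=bbox)
    screenshot.save("board.png")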

Processing images

With OpenCV I apply filters that isolate the relevant parts of the image and find contours. I use these contours to extract both the sections of blocks on the board and the individual blocks themselves.
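
A rough sketch of that extraction step is shown below; it assumes the capture is saved as "board.png" and that a simple threshold separates the blocks from the background, which is a simplification of the actual filters.

    import cv2

    image = cv2.imread("board.png")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Binarise so the blocks become white regions on a dark background.
    _, mask = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)

    # Find the outer contours of those regions and crop each one.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    blocks = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h > 500:  # skip tiny specks of noise
            blocks.append(image[y:y + h, x:x + w])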

Once the blocks have been extracted I can start preparing them for prediction.

Predicting

When predicting I look at one block at a time. I use a region of interest to keep only the digit rather than the whole block. I then make two separate predictions: one for the digit and one for the color.
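
As an illustration, the crop could look something like the snippet below; the fractions are placeholder guesses for where the digit sits on the tile, not the values used in the project.

    import cv2

    block = cv2.imread("block.png")  # one extracted block image
    h, w = block.shape[:2]

    # Keep only the area where the digit is printed.
    roi = block[int(0.05 * h):int(0.60 * h), int(0.15 * w):int(0.85 * w)]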

To predict the digit I convert the region of interest to a binary image. I then flatten this binary image into an array and feed it to a K-Nearest Neighbors algorithm, which predicts a label from 0 to 13, where 0 stands for the joker.
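
The sketch below shows how such a classifier could be wired up with scikit-learn's KNeighborsClassifier; train_rois and train_labels stand in for a previously labelled set of digit crops and are assumptions, as are the image size and k value.

    import cv2
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def digit_features(roi):
        """Binarise a digit crop and flatten it into a fixed-length vector."""
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
        return cv2.resize(binary, (20, 20)).flatten()

    # Labels run from 0 (joker) up to 13.
    knn = KNeighborsClassifier(n_neighbors=3)
    knn.fit(np.array([digit_features(r) for r in train_rois]), np.array(train_labels))

    predicted_digit = knn.predict([digit_features(roi)])[0]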

To predict the color I convert the region of interest into an array of pixels and give this array to a K-Means algorithm. The algorithm returns two RGB values: one is the background of the block and the other is the actual color of the digit. I set up thresholds that define which RGB values belong to which labels, so when I receive an RGB value from the algorithm I can determine what color it represents.
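
A rough sketch of that step with scikit-learn's KMeans is shown below; the threshold ranges are illustrative placeholders rather than the project's actual values, and roi is the digit crop from the previous step.

    import numpy as np
    from sklearn.cluster import KMeans

    def dominant_colors(roi, k=2):
        """Cluster the pixels of the crop into k color centers."""
        # Assumes the crop is in RGB order; convert from BGR first if it came from OpenCV.
        pixels = roi.reshape(-1, 3).astype(float)
        return KMeans(n_clusters=k, n_init=10).fit(pixels).cluster_centers_

    def classify_color(rgb):
        """Map a color value to a tile color using simple threshold rules."""
        r, g, b = rgb
        if r > 150 and g < 100 and b < 100:
            return "red"
        if b > 150 and r < 100 and g < 100:
            return "blue"
        if r > 200 and g > 120 and b < 100:
            return "orange"
        return "black"

    centers = dominant_colors(roi)
    # Assume the darker of the two centers is the digit, the lighter the tile background.
    digit_center = min(centers, key=lambda c: c.sum())
    tile_color = classify_color(digit_center)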

Once all the blocks on the board have been predicted I can start reconstructing the board. The sections retrieved earlier are now used to group the predictions together. With all of this information gathered, the board can be passed to a game algorithm.
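
For illustration, the reconstructed state could be represented with something as simple as the structure below; this exact data structure is an assumption, not necessarily the project's representation.

    from typing import List, Tuple

    Block = Tuple[int, str]    # e.g. (7, "red"); digit 0 means joker
    Section = List[Block]      # one run or group on the board
    Board = List[Section]

    board: Board = [
        [(7, "red"), (8, "red"), (9, "red")],     # a run
        [(5, "red"), (5, "blue"), (5, "black")],  # a group
    ]
    hand: List[Block] = [(10, "red"), (0, "black")]  # 0 = joker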

Algorithm to predict the next move

I use Monte Carlo Tree Search to predict the next move. Monte Carlo Tree Search is an algorithm that simulates many possible continuations of the game, gives each simulation a score and picks the best one.

The algorithm looks at which blocks are on the board and which blocks are in the player’s hand. It then tries to find all possible moves. Once these are found, the algorithm plays one of them and simulates again with the resulting board. When no more moves can be made, or a winner is found, a value function judges whether this set of actions was a good one. The value function first checks whether there is a winner: if the player wins it returns 100, if the other player wins it returns 0, and if there is no winner it returns the number of moves made plus 0.5. In the end the algorithm returns the most profitable sequence of moves to the player.
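
The sketch below captures the rollout and scoring described above, closer to a flat Monte Carlo search than a full tree search. The helpers find_possible_moves, apply_move and winner are assumptions standing in for the project's game logic; only the reward values (100, moves + 0.5, and 0) come from the description.

    import random

    def score(moves_made, result):
        """Reward for one finished simulation."""
        if result == "player":
            return 100
        if result == "opponent":
            return 0
        return moves_made + 0.5

    def simulate(board, hand):
        """Play random moves until none are left, then score the rollout."""
        moves_made = 0
        first_move = None
        while winner(board, hand) is None:
            moves = find_possible_moves(board, hand)
            if not moves:
                break
            move = random.choice(moves)
            if first_move is None:
                first_move = move
            board, hand = apply_move(board, hand, move)
            moves_made += 1
        return first_move, score(moves_made, winner(board, hand))

    def best_move(board, hand, simulations=1000):
        """Run many simulations and return the first move of the best-scoring one."""
        best_value, best = float("-inf"), None
        for _ in range(simulations):
            move, value = simulate(board, hand)
            if move is not None and value > best_value:
                best_value, best = value, move
        return best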

Watch the program in action

Sander Backx, Thomas More Geel
https://sanderbackx.me/#/internship

Blog

Stay tuned.