Artificial Intelligence: Learning to Learn
- Which algorithm wins the most games?
- Which algorithm plays the fastest?
- Which algorithm wins in fewer moves?
- To compare the algorithms against each other, three tests must be performed (each test involving 3,000 games): alpha-beta vs. hybrid, alpha-beta vs. learning, and learning vs. hybrid.
- In order to run the tests in a reasonable amount of time, multiple tests were run on several identical computers simultaneously. In all, 9,000 games were played.
- The results of each game were stored in an enormous HTML table which could then be imported into Microsoft Excel for evaluation and analysis.
- Over 147 hours were spent programming the game of checkers and the three algorithms (alpha-beta, a learning algorithm, and a hybrid of the two) that contained the artificial intelligence. After the programming was complete, each algorithm played 3,000 games of checkers against each of the others, for a grand total of 9,000 trials. The tests took place on several identical computers running multiple separate tests simultaneously. The number of moves until a win, the average move time, and the winner of each game were recorded. After the tests concluded, the results were averaged and totaled.
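The alpha-beta algorithm named above searches the game tree while pruning branches that cannot affect the final choice. Below is a minimal, generic sketch of the technique, not the project's actual checkers code; `get_moves`, `apply_move`, and `evaluate` are hypothetical game-specific callbacks.

```python
def alphabeta(state, depth, alpha, beta, maximizing,
              get_moves, apply_move, evaluate):
    """Generic alpha-beta search over a game tree.

    get_moves(state)        -> list of legal moves (empty at terminal states)
    apply_move(state, move) -> successor state
    evaluate(state)         -> heuristic score from the maximizer's view
    """
    moves = get_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for move in moves:
            best = max(best, alphabeta(apply_move(state, move), depth - 1,
                                       alpha, beta, False,
                                       get_moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:  # prune: the minimizer will never allow this line
                break
        return best
    else:
        best = float("inf")
        for move in moves:
            best = min(best, alphabeta(apply_move(state, move), depth - 1,
                                       alpha, beta, True,
                                       get_moves, apply_move, evaluate))
            beta = min(beta, best)
            if alpha >= beta:  # prune symmetrically for the minimizer
                break
        return best
```

The pruning is what makes alpha-beta faster than plain minimax: whole subtrees are skipped once one side is proven to have a better option elsewhere.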
The experiment clearly demonstrated that the alpha-beta algorithm won more games, took less time to generate a move, and needed fewer moves to win. It was superior to both the hybrid and learning algorithms.
This chart shows the percentage of the 9,000 games of checkers each algorithm won. Alpha-beta scored the highest percentage of wins, the hybrid came in second, and the learning algorithm scored the lowest.
This chart displays the average time it took each algorithm to generate a move. In this case the lowest-scoring algorithm performed the best.
This chart represents the average number of moves it took each algorithm to win a game. As with the previous chart, the lowest scoring algorithm performed the best.
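The averaging and totaling behind the charts above can be sketched as follows. The field names (`winner`, `moves`, `avg_move_time`) are hypothetical stand-ins for the per-game records the experiment stored, not the project's actual format.

```python
def summarize(games):
    """Aggregate per-game records into per-algorithm totals and averages.

    games: list of dicts with hypothetical keys
        'winner'        -> name of the winning algorithm
        'moves'         -> number of moves until the win
        'avg_move_time' -> average seconds per move in that game
    """
    per_algo = {}
    for g in games:
        s = per_algo.setdefault(g["winner"], {"wins": 0, "moves": [], "times": []})
        s["wins"] += 1
        s["moves"].append(g["moves"])
        s["times"].append(g["avg_move_time"])
    total = len(games)
    return {
        algo: {
            "win_pct": 100.0 * s["wins"] / total,
            "avg_moves_to_win": sum(s["moves"]) / len(s["moves"]),
            "avg_move_time": sum(s["times"]) / len(s["times"]),
        }
        for algo, s in per_algo.items()
    }
```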
Evidence gathered from the experiments showed that the alpha-beta algorithm was far superior to both the hybrid and learning algorithms. This conclusion rests on three distinct factors: the percentage of wins, the average time taken to make a move, and the average number of moves needed to win a game. The alpha-beta algorithm performed the best in every category. The hybrid performed better than the learning algorithm but worse than alpha-beta, and the learning algorithm performed the worst.
This experiment included 9,000 trials; therefore, the experimental error was minimal. The only measured value that needed to be considered for errors was the average amount of time each algorithm used to generate a move. The computer can record the precise time, but the time was rounded so the time-keeping process would not affect the outcome of an experiment. However, the difference between the averages was not significant, and even if the computer had recorded the results with absolute precision the conclusion would remain unchanged. Another aspect to consider was the possibility of a repetition loop, in which the algorithm returns to the same board position over and over. Although the algorithm eventually breaks out of such a loop, doing so raises the average time spent per move considerably for that game. The last error that needed to be considered was inefficiency in an algorithm's programming: if an algorithm was implemented in an inefficient way, that would obviously hurt its overall performance.
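One common way to handle the repeating loops mentioned above is to count how often each board position recurs and stop the game once some position has been seen too many times. This is a hypothetical sketch of that guard, not the project's actual safeguard; `choose_move` and `apply_move` are assumed game-specific callbacks and states must be hashable.

```python
from collections import Counter

def play_with_loop_guard(initial_state, choose_move, apply_move, max_repeats=3):
    """Play moves until the game ends, breaking out of repetition loops.

    choose_move(state)      -> a move, or None if the side to move has none
    apply_move(state, move) -> successor state
    Returns (outcome, history) where outcome is 'draw' on repetition
    or 'loss' for the side with no legal moves.
    """
    seen = Counter()
    state = initial_state
    history = [state]
    while True:
        seen[state] += 1
        if seen[state] >= max_repeats:  # same position reached repeatedly
            return "draw", history
        move = choose_move(state)
        if move is None:                # no legal moves for the side to move
            return "loss", history
        state = apply_move(state, move)
        history.append(state)
```

With a guard like this, a game stuck in a cycle ends in a bounded number of moves instead of inflating the average move time indefinitely.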
Questions for Further Research
- How could the learning algorithm be improved?
- Could the algorithms be made faster to allow for more tests?
For a demo of the program email connerruhl at me.com