Learning in Computer Games
This project aims at using videogames both as a testbed for machine learning techniques and as a field of application for them.
In recent years, videogames have become one of the main elements of the entertainment market and, at the same time, they have attracted the attention of the artificial intelligence community. In fact, it is often difficult to program the behavior of artificial opponents (bots) by hand, and the difficulty of the game is not dynamically adapted to the players' abilities, which improve over time. Reinforcement Learning is well-suited to developing bots that adapt their performance by learning the best strategy according to the behavior of the player. Furthermore, multiagent learning algorithms could enable bots to coordinate and cooperate to achieve their goals in the game.
This research project studies how Reinforcement Learning (RL) algorithms can be applied to strategy videogames. The first goal of this project is to use a very complex environment to test RL algorithms: strategy games can serve as simulations of real-world applications, while the complexity of the problem can easily be tuned to the class of algorithms to be tested (e.g., fully vs. partially observable environments, model-based vs. model-free approaches, and so forth). The second goal is to provide algorithms and techniques that videogames can benefit from, improving the performance of the bots in the game according to the behavior of the player.
We are currently investigating a number of possible applications of Machine Learning techniques to different aspects of videogames. In particular, we focus on the following games:
TORCS (The Open Racing Car Simulator)
TORCS is an open-source car racing simulator. We are applying Reinforcement Learning techniques to learn basic driving skills (e.g., the gear-changing policy and trajectory planning). Another interesting direction is the application of Supervised Learning algorithms to user modeling for the generation of effective bots.
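As a minimal sketch of how a basic skill such as gear changing could be learned, the following tabular Q-learning example discretizes a hypothetical (RPM band, gear) state space with shift-down/stay/shift-up actions. The state encoding, constants, and reward are illustrative assumptions, not the actual TORCS interface.

```python
import random

# Hypothetical discretization (assumption, not the TORCS API):
# states are (rpm band, current gear), actions shift down / stay / shift up.
RPM_BANDS = 10
GEARS = 6
ACTIONS = (-1, 0, +1)

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # learning rate, discount, exploration

# Q-table over all (rpm_band, gear) states.
Q = {(b, g): {a: 0.0 for a in ACTIONS}
     for b in range(RPM_BANDS) for g in range(1, GEARS + 1)}

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    q = Q[state]
    return max(q, key=q.get)

def update(state, action, reward, next_state):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state].values())
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
```

In a real experiment, `update` would be called after each simulation step, with a reward derived, for instance, from the car's speed.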
NAAC
NAAC is an open-source shoot'em up game. We are investigating the application of Multiagent Learning techniques to define cooperative strategies among the opponents, depending on their roles and characteristics.
Wargus
Wargus is an open-source clone of Warcraft. At the moment, we are studying the interface to Wargus provided by TIELT for the application of Multiagent Learning algorithms to the coordination of troops, thus defining automatic behaviors that give the player a higher-level form of control.
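One of the simplest multiagent learning baselines for this kind of coordination is independent Q-learners: each bot keeps its own Q-table and learns from a shared team reward, treating the other agents as part of the environment. The sketch below illustrates this idea; the action names and state labels are hypothetical placeholders, not Wargus or TIELT identifiers.

```python
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = ("attack", "defend", "flank")  # hypothetical troop commands

class IndependentLearner:
    """An independent Q-learner: learns its own Q-table from the
    shared team reward, ignoring the other agents' actions."""

    def __init__(self):
        self.Q = {}

    def _q(self, state):
        # Lazily initialize Q-values for unseen states.
        return self.Q.setdefault(state, {a: 0.0 for a in ACTIONS})

    def act(self, state):
        """Epsilon-greedy action selection."""
        q = self._q(state)
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(q, key=q.get)

    def update(self, state, action, reward, next_state):
        """Standard Q-learning update driven by the team reward."""
        q = self._q(state)
        best_next = max(self._q(next_state).values())
        q[action] += ALPHA * (reward + GAMMA * best_next - q[action])

# One learner per controlled unit; all receive the same team reward.
team = [IndependentLearner() for _ in range(3)]
```

Independent learners ignore the non-stationarity introduced by the other agents, which is precisely the limitation that more sophisticated multiagent learning algorithms address.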
People
- Alessandro Lazaric, PhD Student
- Daniele Loiacono, PhD Student
- Marcello Restelli, PhD
- Marco Canala and Paolo Tajè, Master Students (NAAC)
- Alessandro Prete, Master Student (TORCS)
- Giorgio Luciani, Master Student (TORCS)