Procedural content generation (PCG) has been used in digital games since the early 1980s. Here we focus on the new problem of generating personalized combat encounters in role-playing video games (RPGs).
A game should present the player with combat encounters whose difficulty matches the player's current performance, so that the challenge remains adequate. In this paper, we describe a reinforcement learning algorithm that estimates the difficulty of combat encounters at game runtime, storing the estimates in a matrix; these estimates are then used to find the next combat encounter of the desired difficulty in a stochastic hill-climbing manner.
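The paper does not spell out the structure of the matrix or the selection procedure here, so the following is only a minimal sketch of how such a stochastic hill-climb could look, assuming a 2-D matrix indexed by two encounter parameters (e.g., enemy count and enemy level). The function name `next_encounter`, the 4-neighbourhood move set, and the step budget are illustrative assumptions, not the paper's actual implementation.

```python
import random

import numpy as np

def next_encounter(difficulty, current, target, steps=20):
    """Stochastic hill-climb over a 2-D matrix of difficulty estimates.

    difficulty[i, j] holds the estimated difficulty of the encounter whose
    parameters (e.g. enemy count and enemy level) are indexed by (i, j).
    Starting from the current encounter's cell, repeatedly try a random
    neighbouring cell and move there if its estimate is closer to the target.
    """
    rows, cols = difficulty.shape
    best = current
    for _ in range(steps):
        i, j = best
        # Candidate moves: the 4-neighbourhood, clipped to the matrix bounds.
        candidates = [(i + di, j + dj)
                      for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= i + di < rows and 0 <= j + dj < cols]
        cand = random.choice(candidates)
        # Accept the random move only if it improves on the current cell.
        if abs(difficulty[cand] - target) < abs(difficulty[best] - target):
            best = cand
    return best
```

Accepting only improving random moves keeps the search cheap (no full scan of the matrix) while still drifting toward a cell whose estimated difficulty is close to the target.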
After the player finishes an encounter, its result is propagated through the matrix to update the difficulty estimates not only of the presented encounter, but also of similar ones.
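The exact update rule is not given in this section; a plausible reading is a similarity-weighted update, sketched below under the assumption that a Gaussian kernel over the distance between matrix cells stands in for encounter similarity. The function name `update_estimates`, the learning rate `lr`, and the kernel width `sigma` are assumptions for illustration only.

```python
import numpy as np

def update_estimates(difficulty, played, outcome, lr=0.5, sigma=1.5):
    """Propagate one encounter's result through the matrix of estimates.

    outcome is the observed difficulty signal for the played cell (e.g. a
    value in [0, 1] derived from how much trouble the player had). Every
    cell is pulled toward the outcome with a weight that decays with its
    distance from the played encounter, so similar encounters (nearby
    cells) are updated as well.
    """
    rows, cols = difficulty.shape
    pi, pj = played
    for i in range(rows):
        for j in range(cols):
            dist2 = (i - pi) ** 2 + (j - pj) ** 2
            weight = np.exp(-dist2 / (2 * sigma ** 2))  # Gaussian similarity
            difficulty[i, j] += lr * weight * (outcome - difficulty[i, j])
```

Under this kind of rule, a single observed outcome nudges a whole neighbourhood of estimates, which is what would let the matrix adapt from only a handful of encounters.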
To test our solution, we conducted a preliminary study with human players using a simplified RPG we developed. The collected data suggests that our algorithm can quickly adapt the matrix to the player's performance from small amounts of data, albeit not with high precision.