Notes on papers about Collectible Card Game AI
Game AI competitions were held at IEEE COG 2019. They included Angry Birds level generation, fighting game AI (in the style of Street Fighter 2), and DeepMind's Hanabi.
A Collectible Card Game AI competition was also held. One game is Legends of Code and Magic, a simple card game designed for research. The other is Hearthstone, a real digital card game. The Hearthstone page lists past papers, and I read some of them out of curiosity.
Legends of Code and Magic and Hearthstone both involve two tasks:
a. Create a deck from the available cards
b. Play well with the created deck
Each task needs its own optimization technique.
There's variation in task a. In Legends of Code and Magic, cards are selected from a limited random set before play (equivalent to Shadowverse's 2Pick). The Hearthstone AI competition has two tracks: one is a play AI using a prebuilt deck, the other is full deck customization plus play. The latter allows enough time to think about deck construction, and the papers below assume the same setting.
A paper about deck creation. On top of the Metastone AI engine, the evolutionary algorithm library μGP is used. To improve confidence, each deck is evaluated over 15 games. The authors compared the results with decks from Tempo Storm, an American e-sports team.
Two heroes are in scope. The evolved decks won against the decks created by professional players. However, the Metastone AI's simulation is not perfect:
the computer doesn't play well with decks that human players handle well. Also, although no such constraint was imposed, the generated decks include multiple copies of the same card.
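As a rough sketch of this approach (the card pool, the toy fitness function, and all names here are hypothetical, not the paper's actual μGP setup), an evolutionary deck search that evaluates each deck over several games could look like:

```python
import random

CARD_POOL = list(range(100))   # hypothetical card ids
DECK_SIZE = 30
GAMES_PER_DECK = 15            # the paper plays 15 games per deck to reduce noise

def simulate_game(deck):
    """Stand-in for a Metastone simulation: returns 1 for a win, 0 for a loss."""
    # Hypothetical signal: higher-id cards are "stronger", plus randomness.
    strength = sum(deck) / (DECK_SIZE * max(CARD_POOL))
    return 1 if random.random() < strength else 0

def fitness(deck):
    """Win rate over several games, since a single match is too noisy."""
    return sum(simulate_game(deck) for _ in range(GAMES_PER_DECK)) / GAMES_PER_DECK

def mutate(deck):
    """Replace one random slot with a random card from the pool."""
    child = deck[:]
    child[random.randrange(DECK_SIZE)] = random.choice(CARD_POOL)
    return child

def evolve(generations=30, population=20):
    pop = [[random.choice(CARD_POOL) for _ in range(DECK_SIZE)]
           for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # rank decks by win rate
        survivors = pop[: population // 2]            # keep the better half
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(population - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
```

The expensive part is `fitness`: every evaluation costs 15 full game simulations, which is why the computational budget dominates the design.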
Automated Playtesting in Collectible Card Games using Evolutionary Algorithms: a Case Study in HearthStone, 2018
A paper from the same authors. They extend the study to all heroes, and replace a card with one of similar mana cost (+1/-1) to reduce the amount of computation. They call this Smart Mutation.
In all cases, the evolved decks outperform the human decks. However, Smart Mutation doesn't help for some heroes; for most heroes it works well, and the evolutionary algorithm is stable overall.
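A minimal sketch of such a cost-constrained mutation, assuming a hypothetical `CARD_COST` table (not the paper's actual card database): instead of drawing a replacement from the whole pool, only cards within ±1 mana of the replaced card are candidates.

```python
import random

# Hypothetical card database: card id -> mana cost.
CARD_COST = {i: i % 10 for i in range(100)}

def smart_mutate(deck):
    """Swap one card for another whose mana cost differs by at most 1,
    shrinking the mutation search space (the 'Smart Mutation' idea)."""
    child = deck[:]
    idx = random.randrange(len(child))
    cost = CARD_COST[child[idx]]
    candidates = [c for c in CARD_COST if abs(CARD_COST[c] - cost) <= 1]
    child[idx] = random.choice(candidates)
    return child

deck = random.sample(list(CARD_COST), 30)
child = smart_mutate(deck)
```

The intuition is that a deck's mana curve is usually worth preserving, so restricting mutations to similar costs discards few promising decks while cutting the branching factor.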
The winning solution of the AAIA’17 Data Mining Challenge. The task is to predict the win probability at a given game state with a neural network. In this setting, the player doesn't know the cards already played or the cards remaining in the decks.
The network used is a CNN, with inputs normalized to the 0-1 range. Compared with logistic regression, its performance is better thanks to ensembling and tuning, although the gain in AUC is only around 0.01.
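To make the evaluation concrete, here is a small sketch of 0-1 feature normalization and the AUC metric used to compare such models (the feature ranges and toy predictions below are made up for illustration, not from the challenge data):

```python
def normalize(features, lo, hi):
    """Scale raw state features (health, mana, board stats, ...) into [0, 1]."""
    return [(x - l) / (h - l) for x, l, h in zip(features, lo, hi)]

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) formula:
    the probability that a random win is scored above a random loss."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical toy example: hero health, mana crystals, board size.
state = normalize([24, 7, 5], lo=[0, 0, 0], hi=[30, 10, 7])
labels = [1, 1, 0, 0]
scores = [0.9, 0.6, 0.4, 0.2]
print(auc(labels, scores))  # 1.0 for perfectly ranked predictions
```

Seen this way, a 0.01 AUC gap means the CNN orders win/loss pairs correctly only slightly more often than the logistic baseline.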
The setup of a deck-selection AI sometimes includes a play AI for evaluation, so the two tasks are related.
In both cases, computational cost is the key constraint. Play logs are the key resource in the n-gram case, and Card2Vec may become important.
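One simple way to sketch the Card2Vec idea (the deck lists below are hypothetical, and this uses plain co-occurrence counts with cosine similarity rather than a trained embedding): cards that appear in the same decks or logs end up with similar vectors.

```python
from collections import defaultdict
from math import sqrt

# Hypothetical deck lists; play logs would serve the same role.
decks = [
    ["fireball", "frostbolt", "pyroblast"],
    ["fireball", "frostbolt", "counterspell"],
    ["swipe", "claw", "bite"],
]

# Count how often each pair of cards shares a deck.
cooc = defaultdict(lambda: defaultdict(int))
for deck in decks:
    for a in deck:
        for b in deck:
            if a != b:
                cooc[a][b] += 1

def cosine(a, b):
    """Cosine similarity between two cards' co-occurrence vectors."""
    keys = set(cooc[a]) | set(cooc[b])
    dot = sum(cooc[a][k] * cooc[b][k] for k in keys)
    na = sqrt(sum(v * v for v in cooc[a].values()))
    nb = sqrt(sum(v * v for v in cooc[b].values()))
    return dot / (na * nb) if na and nb else 0.0

print(cosine("fireball", "frostbolt") > cosine("fireball", "swipe"))  # True
```

A real Card2Vec would train dense embeddings (word2vec-style) over many deck lists, but the payoff is the same: similarity between cards becomes a cheap lookup for both deck building and play evaluation.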