Modern Algorithmic Game Theory

Class at Faculty of Mathematics and Physics | NOPT021

Syllabus

Week 1 - Matrix Games

Course

- Matrix games formalism

- Best response

- Minimax policy, Nash equilibrium

- Zero sum games

Practice

- OpenSpiel

- Exploitability computation (i.e. policy evaluation); see the sketch below
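
A minimal NumPy sketch of the exploitability computation for a zero-sum matrix game; the rock-paper-scissors payoff matrix and the function name are illustrative, not the assignment's actual API:

```python
import numpy as np

# Illustrative zero-sum matrix game: rock-paper-scissors payoffs for the row player.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]], dtype=float)

def exploitability(A, x, y):
    """NashConv-style sum of both players' best-response gains against (x, y).

    x: row player's mixed policy, y: column player's mixed policy.
    In a zero-sum game this is zero exactly at a Nash equilibrium.
    """
    value = x @ A @ y            # expected row-player payoff under the profile
    br_row = np.max(A @ y)       # row player's best pure response against y
    br_col = np.min(x @ A)       # column player (minimizer) best responds against x
    return (br_row - value) + (value - br_col)

uniform = np.ones(3) / 3
biased = np.array([0.5, 0.3, 0.2])
print(exploitability(A, uniform, uniform))  # 0.0 -- uniform is the RPS equilibrium
print(exploitability(A, biased, uniform))   # > 0 -- the biased row policy is exploitable
```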

Week 2 - Minimax Theorem

Course

- Minimax theorem

- Minimax policy = Nash equilibrium

- Linear/convex optimization problem

Practice

- Linear programming for solving matrix games
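A hedged sketch of the linear program from the practice session, solved with SciPy's linprog; the payoff matrix and helper name are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative zero-sum matrix game (row player's payoffs): rock-paper-scissors.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]], dtype=float)

def solve_matrix_game(A):
    """Minimax policy of the row player via the standard LP:
       max_{x, v} v   s.t.   A^T x >= v * 1,   sum(x) = 1,   x >= 0.
    Returns (x, v), where v is the game value for the row player."""
    n_rows, n_cols = A.shape
    # Decision variables z = [x_1..x_n, v]; linprog minimizes, so the objective is -v.
    c = np.zeros(n_rows + 1)
    c[-1] = -1.0
    # Inequalities: v - (A^T x)_j <= 0 for every column j.
    A_ub = np.hstack([-A.T, np.ones((n_cols, 1))])
    b_ub = np.zeros(n_cols)
    # Equality: the probabilities in x sum to one.
    A_eq = np.hstack([np.ones((1, n_rows)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n_rows + [(None, None)]  # x >= 0, v unbounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:-1], res.x[-1]

x, v = solve_matrix_game(A)
print(x, v)  # roughly [1/3, 1/3, 1/3] and value 0
```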

Week 3 - Regret

Course

- Regret

- Folk theorem connecting regret to exploitability

Practice

- Regret minimization in matrix games

- Average/current policy convergence
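
A minimal regret-matching self-play sketch for a matrix game, illustrating why the average policy is the one that converges while the current policy keeps oscillating; the payoff matrix and iteration count are illustrative:

```python
import numpy as np

# Zero-sum matrix game (rock-paper-scissors) played by two regret-matching learners.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]], dtype=float)

def regret_matching(cum_regret):
    """Play proportionally to positive cumulative regret (uniform if none)."""
    positive = np.maximum(cum_regret, 0.0)
    total = positive.sum()
    return positive / total if total > 0 else np.ones_like(cum_regret) / len(cum_regret)

n = A.shape[0]
regret_x, regret_y = np.zeros(n), np.zeros(n)
sum_x, sum_y = np.zeros(n), np.zeros(n)

for t in range(10000):
    x = regret_matching(regret_x)   # current row policy
    y = regret_matching(regret_y)   # current column policy
    sum_x += x
    sum_y += y
    # Instantaneous regrets: payoff of each pure action minus the expected payoff.
    u_x = A @ y        # row player's payoff for each pure row action
    u_y = -(x @ A)     # column player's payoff for each pure column action (zero-sum)
    regret_x += u_x - x @ u_x
    regret_y += u_y - y @ u_y

avg_x, avg_y = sum_x / sum_x.sum(), sum_y / sum_y.sum()
print(avg_x, avg_y)  # both close to uniform, the equilibrium of RPS
```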

Week 4 - Sequential Decision Making

Course

- Extensive form games

- Factored observation games

Practice

- Small poker game

- Exploitability computation in extensive form games
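
A short sketch of the exploitability computation in an extensive-form game, assuming the OpenSpiel Python API used in the practice sessions (pyspiel.load_game, policy.TabularPolicy, exploitability.exploitability); Kuhn poker stands in for the small poker game:

```python
import pyspiel
from open_spiel.python import policy
from open_spiel.python.algorithms import exploitability

game = pyspiel.load_game("kuhn_poker")   # a tiny poker game; Leduc is "leduc_poker"
uniform = policy.TabularPolicy(game)     # tabular policy, initialized to uniform random
print(exploitability.exploitability(game, uniform))  # how much a best responder gains
```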

Week 5 - Sequence Form

Course

- Converting an extensive-form game to a matrix game = exponential explosion

- Sequence form

- Sequence LP

Practice

- Sequence LP
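
For reference, the sequence-form LP in von Stengel's formulation, which the practice session implements: x is the maximizing player's realization plan with constraints Ex = e, Fy = f are the opponent's realization-plan constraints, A is the sequence-form payoff matrix, and q are dual variables over the opponent's information sets. Its size is linear in the game tree, avoiding the exponential blow-up of the normal-form LP.

```latex
\max_{x,\,q} \; f^{\top} q
\quad \text{s.t.} \quad
E x = e, \qquad
x \ge 0, \qquad
F^{\top} q \le A^{\top} x
```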

Week 6 - Counterfactual Regret Minimization

Course

- Counterfactual regret minimization

- CFR-BR

Practice

- CFR
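
A minimal sketch of running tabular CFR on a small game, assuming OpenSpiel's cfr.CFRSolver interface; the game, iteration count, and logging interval are illustrative:

```python
import pyspiel
from open_spiel.python.algorithms import cfr, exploitability

game = pyspiel.load_game("kuhn_poker")
solver = cfr.CFRSolver(game)

for i in range(200):
    solver.evaluate_and_update_policy()        # one full CFR iteration
    if (i + 1) % 50 == 0:
        avg_policy = solver.average_policy()   # the policy that converges to equilibrium
        print(i + 1, exploitability.exploitability(game, avg_policy))
```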

Week 7 - Monte Carlo Methods

Course

- MCCFR

- Control variates

Practice

- MCCFR

- VR-MCCFR
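
A hedged sketch of outcome-sampling MCCFR, assuming OpenSpiel provides an OutcomeSamplingSolver in open_spiel.python.algorithms.outcome_sampling_mccfr (game and iteration budget are illustrative). Each iteration samples a single trajectory, so far more iterations are needed than with full CFR traversals; VR-MCCFR adds control-variate baselines to reduce that sampling variance.

```python
import pyspiel
from open_spiel.python.algorithms import exploitability, outcome_sampling_mccfr

game = pyspiel.load_game("kuhn_poker")
solver = outcome_sampling_mccfr.OutcomeSamplingSolver(game)

for i in range(10000):
    solver.iteration()                          # sample one trajectory, update regrets
    if (i + 1) % 2000 == 0:
        avg_policy = solver.average_policy()
        print(i + 1, exploitability.exploitability(game, avg_policy))
```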

Annotation

In this course, students will learn the core concepts, models, theory, and algorithms of modern, practical algorithmic game theory. In the practical sessions, they will gradually build the tools and codebase needed to implement these algorithms in real games.

At the end of the course, the students will have a working agent that converges to an optimal policy in Leduc poker and many other small games.