Game-Theoretic Learning: Regret Minimization vs. Utility Maximization
Amy Greenwald

(Brown University)
October 26th
Lubrano Conference Room


No-regret learning algorithms have attracted a great deal of attention in the game-theoretic and machine learning communities. Whereas rational agents act so as to maximize their expected utilities, no-regret learners are boundedly rational agents that act so as to minimize their "regret". In this talk, we discuss the behavior of no-regret learning algorithms in repeated games.

Specifically, we introduce a general class of algorithms called no-Φ-regret learning, which includes common variants of no-regret learning such as no-external-regret and no-internal-regret learning. Analogously, we introduce a class of game-theoretic equilibria called Φ-equilibria. We show that no-Φ-regret learning algorithms converge to Φ-equilibria. In particular, no-external-regret learning converges to minimax equilibrium in zero-sum games, and no-internal-regret learning converges to correlated equilibrium in general-sum games. Although our class of no-regret algorithms is quite extensive, no algorithm in this class learns Nash equilibrium.
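As a concrete illustration of the zero-sum result, the sketch below runs regret matching (a standard no-external-regret algorithm due to Hart and Mas-Colell; the particular game, parameters, and function names here are illustrative choices, not taken from the talk) in self-play on matching pennies. The players' time-averaged strategies drift toward the game's unique minimax equilibrium, (1/2, 1/2) for each player:

```python
import random

def regret_matching_strategy(regrets):
    """Mix over actions in proportion to positive cumulative regret;
    fall back to uniform when no regret is positive."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    if total <= 0.0:
        return [1.0 / len(regrets)] * len(regrets)
    return [p / total for p in pos]

def sample(probs, rng):
    """Draw an action index from a probability vector."""
    x, cum = rng.random(), 0.0
    for i, p in enumerate(probs):
        cum += p
        if x < cum:
            return i
    return len(probs) - 1

def u1(a1, a2):
    # Matching pennies: player 1 wins on a match, player 2 on a mismatch.
    return 1.0 if a1 == a2 else -1.0

def self_play(T=50000, seed=0):
    rng = random.Random(seed)
    r1, r2 = [0.0, 0.0], [0.0, 0.0]          # cumulative external regrets
    s1_sum, s2_sum = [0.0, 0.0], [0.0, 0.0]  # running strategy totals
    for _ in range(T):
        s1 = regret_matching_strategy(r1)
        s2 = regret_matching_strategy(r2)
        a1, a2 = sample(s1, rng), sample(s2, rng)
        for a in range(2):
            s1_sum[a] += s1[a]
            s2_sum[a] += s2[a]
            # External regret: payoff of always playing a, minus realized payoff.
            r1[a] += u1(a, a2) - u1(a1, a2)
            r2[a] += -u1(a1, a) - (-u1(a1, a2))
    return [s / T for s in s1_sum], [s / T for s in s2_sum]
```

Note that it is the *averaged* strategies that approach the minimax equilibrium; the per-round strategies themselves need not converge, which is one reason results of this kind do not extend to learning Nash equilibrium in general-sum games.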