Summary

Presenters: Elias Fernández Domingos and Paolo Turrini

Classical equilibrium analysis makes overly simplistic assumptions about players' cognitive capacity, such as common knowledge of the game structure and common knowledge of rationality. Assuming that individuals are rational is often unjustified in social and biological systems, even for simple pairwise interactions. Moreover, whenever the problem requires a proper understanding of conflicts in large populations, it becomes necessary to characterise the choices and strategies of many individuals over time, not only at equilibrium. In many real-world multi-agent systems, the goal therefore shifts towards understanding the complex ecologies of behaviours that emerge from a given dilemma (or "game"). This is where evolutionary game theory (EGT) shines as a theoretical and computational framework. From the computational perspective, multi-agent reinforcement learning (MARL) models how self-interested agents learn and improve their policies by accumulating rewards from past experience. Just as strategies in evolutionary game theory adapt to one another, agents' actions evolve based on their empirical returns. The similarity is no coincidence. In this tutorial we show how these two frameworks, although applied in different contexts, are two sides of the same coin, presenting fundamental mathematical results that demonstrate how the equilibria of population dynamics can be encoded by the policies of simple RL agents, and vice versa.
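To make the population-dynamics side of this picture concrete, here is a minimal sketch of the replicator dynamics for a two-strategy game, using a Hawk-Dove game as the running example. The payoff values (resource V = 2, fight cost C = 3) and the simple Euler integration are illustrative choices, not material from the tutorial itself:

```python
def replicator_step(x, payoffs, dt=0.01):
    """One Euler step of the replicator dynamics for a 2-strategy game.

    x: fraction of the population playing strategy 0 (here: Hawk).
    payoffs[i][j]: payoff to strategy i when matched against strategy j.
    """
    f0 = x * payoffs[0][0] + (1 - x) * payoffs[0][1]  # fitness of strategy 0
    f1 = x * payoffs[1][0] + (1 - x) * payoffs[1][1]  # fitness of strategy 1
    f_bar = x * f0 + (1 - x) * f1                     # population mean fitness
    # Strategies grow in proportion to how much they beat the average.
    return x + dt * x * (f0 - f_bar)

# Hawk-Dove game with resource V = 2 and fight cost C = 3.
V, C = 2.0, 3.0
payoffs = [[(V - C) / 2, V],   # Hawk vs (Hawk, Dove)
           [0.0, V / 2]]      # Dove vs (Hawk, Dove)

x = 0.1  # start with 10% Hawks
for _ in range(10_000):
    x = replicator_step(x, payoffs)

print(round(x, 3))  # → 0.667, the mixed equilibrium x* = V/C
```

The fixed point the dynamics converges to is exactly the game's mixed Nash equilibrium, which is the kind of correspondence between population dynamics and equilibrium analysis the tutorial builds on.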

We will provide use cases in which each modelling framework is useful. This tutorial will help social science practitioners acquire new tools from AI and complex systems, and computer science practitioners understand their research in terms of economic models. Participants will be able to follow the tutorial interactively through a series of Jupyter notebooks that we will provide. Our objective is to offer hands-on experience in using EGT and MARL to model social dynamics.

Speakers

Elias Fernández Domingos

Postdoctoral Researcher at the AI Lab, Vrije Universiteit Brussel, Belgium

Elias is currently a postdoctoral researcher (FWO fellow) at the Artificial Intelligence Lab of the Vrije Universiteit Brussel, and is also affiliated with the Machine Learning Group at the Université Libre de Bruxelles. He is interested in the origins of cooperation in social interactions and in how we can maintain it in an increasingly complex and hybrid human-AI world. In his research, he applies concepts and methods from (evolutionary) game theory, behavioural economics, and machine learning to model collective (strategic) behaviour and validate it through behavioural economics experiments. He is the creator of EGTtools, a Python/C++ toolbox for evolutionary game theory.


Paolo Turrini

Associate Professor. Department of Computer Science, University of Warwick, UK

Paolo is interested in AI for social good; he uses game theory and RL to design agents that achieve socially desirable objectives, and publishes consistently in top AI venues. He obtained his PhD from Utrecht University and has held a COFUND Marie Curie Fellowship at the University of Luxembourg, an Intra-European Marie Curie Fellowship at Imperial College London, and an Imperial College Research Fellowship. He is now a member of the Board of Directors of IFAAMAS, the International Foundation for Autonomous Agents and Multiagent Systems.


Format and Outline

We divide the tutorial into two 85-minute sessions with a 10-minute break in between (3 hours in total):

  1. Part 1: Introduction to Evolutionary Game Theory
    1. Motivation and examples
    2. Infinite and Finite populations
    3. Social Dynamics and Mechanisms for the Evolution of Cooperation
    4. Games on networks
  2. Part 2: Introduction to Multi-Agent Reinforcement Learning
    1. Many learning agents
    2. Key algorithms (Cross-Learning, Q-learning)
    3. Connection with EGT
    4. Emergence of Norms among learning agents
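As a preview of the "Key algorithms" and "Connection with EGT" items above, here is a minimal sketch of Cross-learning, whose expected policy update is known (a classic result of Börgers and Sarin) to recover the replicator dynamics. The normalised Prisoner's Dilemma payoffs, learning rate, and number of rounds are illustrative choices for this sketch, not material from the tutorial:

```python
import random

def cross_learning_step(policy, action, reward, alpha=0.01):
    """Cross-learning update: reinforce the played action in proportion
    to its reward (rewards must lie in [0, 1])."""
    return [
        p + alpha * reward * (1 - p) if a == action else p - alpha * reward * p
        for a, p in enumerate(policy)
    ]

# Prisoner's Dilemma with payoffs normalised to [0, 1]
# (actions: 0 = cooperate, 1 = defect).
payoff = [[2 / 3, 0.0],   # cooperate vs (C, D)
          [1.0, 1 / 3]]   # defect vs (C, D)

random.seed(1)
p1, p2 = [0.5, 0.5], [0.5, 0.5]  # both agents start undecided
for _ in range(20_000):
    a1 = 0 if random.random() < p1[0] else 1  # sample actions from policies
    a2 = 0 if random.random() < p2[0] else 1
    p1 = cross_learning_step(p1, a1, payoff[a1][a2])
    p2 = cross_learning_step(p2, a2, payoff[a2][a1])

print(round(p1[0], 2), round(p2[0], 2))  # both policies drift towards defection
```

Just as the replicator dynamics drives a population of the same game towards all-out defection, the two learners' cooperation probabilities decay towards zero, which is the EGT-MARL correspondence the second part of the tutorial makes precise.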

Each part of the tutorial will be accompanied by a Jupyter notebook containing the examples shown in the presentation. In this way, participants will be able to follow the presentation interactively and test for themselves how each framework behaves in different scenarios.

Pre-requisites

Even though this tutorial targets a general AI audience, we recommend basic knowledge of the following areas:

  • Dynamical systems
  • Reinforcement learning
  • Markov chains

Material

Will be added soon.

Important References

  1. Nash, J. F. (1950). Equilibrium points in n-person games. Proceedings of the National Academy of Sciences, 36(1), 48-49.
  2. Nowak, M. A. (2006). Evolutionary Dynamics: Exploring the Equations of Life. Harvard University Press.
  3. Sigmund, K. (2010). The Calculus of Selfishness. Princeton University Press.
  4. Boccaletti, S., Latora, V., Moreno, Y., Chavez, M., & Hwang, D.-U. (2006). Complex networks: Structure and dynamics. Physics Reports, 424(4-6), 175-308.
  5. Fernández Domingos, E., Santos, F. C., & Lenaerts, T. (2023). EGTtools: Evolutionary game dynamics in Python. iScience, 26(4).