Data Science Seminars - Large Language Models' behaviors and Game Theory

Wednesday, June 4, 2025 | 5:00 p.m.
Data Science and Bioinformatics Lab (Building 21)
Dipartimento di Elettronica, Informazione e Bioingegneria - Politecnico di Milano
Speaker: Nicolò Fontana (Politecnico di Milano)
Contacts: Silvia Cascianelli | silvia.cascianelli@polimi.it
Abstract
On Wednesday, June 4, 2025, at 5:00 p.m., Nicolò Fontana (Politecnico di Milano) will give a seminar titled "Large Language Models' behaviors and Game Theory" in the Data Science and Bioinformatics Lab (Building 21). The event is part of the Data Science Seminars series organized by the Data Science Lab at Politecnico di Milano.
Large Language Models (LLMs) are increasingly used in decision-making scenarios that involve interaction, cooperation, and strategic reasoning. This presentation explores the intersection of Game Theory, LLMs, and behavioral experimentation to assess the degree to which LLMs emulate human-like strategic behavior. We review studies of how LLMs perform in classic game-theoretic settings, both in simulated interactions and through comparisons with empirical human data. We find that while LLMs can exhibit cooperative and context-sensitive strategies, their behavior often reflects the structure and biases of their training data rather than grounded rationality or theory-of-mind reasoning. The presentation summarizes key findings from the literature and aims to show the relevance of these topics for LLM auditing and alignment.
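As a rough illustration of the kind of simulated interaction the abstract refers to, the sketch below sets up an iterated Prisoner's Dilemma in Python, with a placeholder function standing in for a call to a language model. The function name `query_llm_for_move`, the tit-for-tat baseline, and the noise level are illustrative assumptions and not material from the talk; the payoff matrix is the standard Prisoner's Dilemma one.

```python
# Minimal sketch of a simulated game-theoretic interaction.
# `query_llm_for_move` is a hypothetical stand-in for a real LLM call.

import random

PAYOFFS = {  # (my_move, opponent_move) -> (my_payoff, opponent_payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def query_llm_for_move(history):
    """Hypothetical placeholder for an LLM-driven agent.

    `history` is a list of (my_move, opponent_move) tuples from past rounds.
    Here we fake a context-sensitive policy: defect if the opponent defected
    last round, otherwise cooperate with a small amount of noise.
    """
    if history and history[-1][1] == "D":
        return "D"
    return "C" if random.random() > 0.05 else "D"

def tit_for_tat(history):
    """Baseline opponent: copy the other player's previous move."""
    return history[-1][1] if history else "C"

def play_iterated_pd(rounds=10):
    """Play `rounds` of the iterated Prisoner's Dilemma; return total payoffs."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = query_llm_for_move(hist_a)
        move_b = tit_for_tat(hist_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append((move_a, move_b))
        hist_b.append((move_b, move_a))
    return score_a, score_b

if __name__ == "__main__":
    print(play_iterated_pd(rounds=20))
```

In the studies discussed in the talk, the placeholder policy would be replaced by prompts to an actual model, and the resulting move sequences compared against human experimental data.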