Data Science Seminars - Concept Activation Vectors: Probing Human-understandable Concepts in Image Classification Networks

Wednesday, June 11, 2025 | 5:00 p.m.
Data Science and Bioinformatics Lab (Building 21)
Dipartimento di Elettronica, Informazione e Bioingegneria - Politecnico di Milano
Speaker: Matteo Bianchi (Politecnico di Milano)
Contacts: Silvia Cascianelli | silvia.cascianelli@polimi.it
Abstract
On Wednesday, June 11, 2025, at 5:00 p.m., Matteo Bianchi (Politecnico di Milano) will give a seminar titled "Concept Activation Vectors: Probing Human-understandable Concepts in Image Classification Networks" in the Data Science and Bioinformatics Lab (Building 21).
The event is part of the Data Science Seminars organized by the Data Science Lab at Politecnico di Milano.
The steady growth in the complexity and computational power of Deep Neural Networks has led to increasingly opaque decision-making processes, jeopardizing user trust.
Because of this, the field of eXplainable Artificial Intelligence (XAI) has attracted considerable attention over the last decade, striving to make AI models more transparent. Among the most notable contributions for image classification, and the first in the field of Concept-based XAI, are the explanations through Concept Activation Vectors (CAVs) proposed by Kim et al. in 2018. With such representations, it became possible to define human-understandable concepts in the latent space of Neural Networks and to compute their contribution to the model's output. In this seminar, we will explore the idea behind CAVs and their use for supervised concept probing, as well as recent work on unsupervised concept extraction and concept generation.
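To make the idea concrete, below is a minimal, illustrative Python sketch (not material from the seminar) of how a CAV and the corresponding TCAV score can be computed in the spirit of Kim et al. (2018): a linear classifier is fit to separate a concept's activations from random activations at a chosen layer, the CAV is taken as the vector normal to its decision boundary, and the TCAV score is the fraction of inputs whose class logit increases along that direction. Array names and shapes here are assumptions for the example.

```python
# Minimal CAV/TCAV sketch, assuming NumPy and scikit-learn are available.
# `concept_acts` and `random_acts` are assumed to be layer activations
# (n_examples x n_features) already collected from the network.
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts: np.ndarray, random_acts: np.ndarray) -> np.ndarray:
    """Fit a linear classifier separating concept from random activations;
    the CAV is the unit-normalized normal of its decision boundary."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_.ravel()
    return cav / np.linalg.norm(cav)

def tcav_score(gradients: np.ndarray, cav: np.ndarray) -> float:
    """TCAV score for one class and layer: the fraction of inputs whose
    directional derivative along the CAV is positive, i.e. whose class
    logit increases with the concept. `gradients` holds
    d(logit)/d(activations) per input (n_inputs x n_features)."""
    return float(np.mean(gradients @ cav > 0))
```

In practice, Kim et al. also train CAVs against multiple random counterexample sets and apply a statistical test to discard concepts whose scores are indistinguishable from chance; that step is omitted here for brevity.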