Prof. Nikos Deligiannis (ETRO, Vrije Universiteit Brussel) to speak at ECE-NTUA on February 6, 2025 (12:00)
Venue: Conference room, ground floor, new ECE building
Lecture Title: Interpretable Deep Learning and Explainable AI: Toward Trustworthy and Efficient Models
Abstract: As AI systems become increasingly complex and pervasive, ensuring their interpretability, efficiency, and trustworthiness is paramount. In this talk, I will present two complementary research directions: interpretable deep learning models and advances in explainable AI (XAI). These approaches aim to bridge the gap between model performance and the need for human understanding.
First, I will explore interpretable deep models, designed as trainable algorithms that integrate domain expertise directly into their architectures. By unrolling optimization algorithms into deep networks, these models achieve state-of-the-art performance with significantly fewer parameters and enhanced efficiency. I will discuss two recent contributions: Deep Unfolding Transformers (DUST) for sparse recovery in video, and ROMAN, a family of interpretable neural networks for video separation tasks such as background subtraction and foreground detection. These works demonstrate the potential of lightweight, domain-informed architectures to redefine efficiency in deep learning.
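[For context, the sketch below illustrates the general algorithm-unrolling idea mentioned above: each layer of the network mirrors one iteration of ISTA for sparse coding, in the spirit of the classic LISTA construction, so the only learned weights are two small linear maps and per-layer thresholds. This is a minimal illustration under those assumptions, not the DUST or ROMAN architecture; all names and dimensions are made up for the example.]

import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    """LISTA-style sketch: K layers, each emulating one ISTA iteration."""
    def __init__(self, meas_dim, code_dim, num_layers=5):
        super().__init__()
        self.num_layers = num_layers
        # Learnable analogues of the ISTA matrices A^T and (I - A^T A)
        self.W1 = nn.Linear(meas_dim, code_dim, bias=False)
        self.W2 = nn.Linear(code_dim, code_dim, bias=False)
        # One learnable soft-threshold per layer
        self.theta = nn.Parameter(torch.full((num_layers,), 0.1))

    @staticmethod
    def soft_threshold(z, theta):
        # Proximal operator of the L1 norm: shrink entries toward zero
        return torch.sign(z) * torch.relu(torch.abs(z) - theta)

    def forward(self, y):
        # y: batch of measurements; x: estimate of the sparse code
        x = self.soft_threshold(self.W1(y), self.theta[0])
        for k in range(1, self.num_layers):
            x = self.soft_threshold(self.W1(y) + self.W2(x), self.theta[k])
        return x

model = UnrolledISTA(meas_dim=64, code_dim=256)
x_hat = model(torch.randn(8, 64))  # 8 measurement vectors -> sparse codes

[Because the layers reuse W1, W2, and a handful of thresholds, the parameter count stays far below that of a generic deep network of the same depth, which is the efficiency argument the talk develops.]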
Next, I will delve into recent advances in explainable AI, showcasing techniques that make complex models more transparent and accessible. I will discuss visual explanation methods tailored to contrastive learning, revealing how models trained on pairs of images process similarity tasks. I will introduce InteractionLIME, a model-agnostic technique that uncovers the feature interactions influencing a model's predictions, and NLX-GPT, a compact model that generates natural-language explanations alongside its predictions for vision and multimodal tasks. Finally, I will present an approach for interpreting CLIP models that uses textual concept-based explanations to shed light on zero-shot image classification decisions.
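[For context, the sketch below shows the perturbation-based, model-agnostic local-surrogate idea that LIME-style explainers build on, and which InteractionLIME extends to feature interactions. It is a minimal illustration, not InteractionLIME itself; black_box, the Gaussian perturbation scheme, and all parameters are illustrative assumptions.]

import numpy as np
from sklearn.linear_model import Ridge

def explain_instance(black_box, x, num_samples=500, sigma=0.25, seed=0):
    """Fit a weighted linear surrogate around x and return its coefficients."""
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise to probe local model behaviour
    X = x + sigma * rng.standard_normal((num_samples, x.shape[0]))
    y = black_box(X)  # probability of the class being explained
    # Weight perturbed samples by proximity to x (RBF kernel)
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * sigma**2))
    # The surrogate's coefficients serve as per-feature attribution scores
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(X, y, sample_weight=w)
    return surrogate.coef_

# Hypothetical usage with any model exposing class probabilities:
# scores = explain_instance(lambda X: model.predict_proba(X)[:, 1], x0)

[A linear surrogate like this assigns one score per feature; capturing how pairs of features jointly influence a prediction is the gap that interaction-aware extensions such as InteractionLIME address.]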
By combining interpretable architectures with advanced explainability techniques, this talk underscores the importance of designing AI systems that are not only high-performing but also comprehensible and trustworthy.
Short Bio: Nikos Deligiannis is an Associate Professor with the Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), the holder of the 2024-2025 Francqui Research Professorship on Trustworthy AI at VUB, and a Principal Investigator at IMEC, Belgium. He is also the Programme Director of the Master's in Applied Computer Science at VUB. He recently received an ERC Consolidator Grant to conduct research at the intersection of interpretable and explainable AI and multiterminal compression for intelligent machines. His current research focuses on interpretable and explainable AI, multidimensional signal processing, computer vision, and distributed/federated AI.
He received the Diploma degree in electrical and computer engineering from the University of Patras, Patras, Greece, in 2006, and the Ph.D. degree in engineering sciences from Vrije Universiteit Brussel (VUB), Brussels, Belgium, in 2012. From 2013 to 2015, he was a Senior Researcher with the Department of Electronic and Electrical Engineering, University College London, London, U.K.
Dr. Deligiannis is a member of the IEEE and EURASIP and served as the Chair of the EURASIP Technical Area Committee on Signal and Data Analytics for Machine Learning from 2021 to 2023. He serves as an Associate Editor for the IEEE Transactions on Image Processing and has been a Guest Editor for special issues of the EURASIP Journal on Advances in Signal Processing and the Signal Processing journal.