PhD thesis defense to be held on July 10, 2024, at 11:00 (Old ECE Building)
Thesis title: Out-of-distribution robustness in mission-critical computer vision applications
Abstract: Artificial intelligence (AI) has progressed explosively in recent years. Driven by the advent of deep learning, AI is now applied across multiple scientific fields, in industry, and in medicine.
Out-of-distribution (OOD) robustness is crucial in mission-critical computer vision applications because these applications often encounter unforeseen or novel situations that differ significantly from the training data. In mission-critical contexts, such as autonomous vehicles, medical diagnosis, or security systems, models need to make reliable and safe decisions. If a model encounters inputs that fall outside the distribution it was trained on, it may produce inaccurate or unreliable predictions, with potentially dangerous consequences. Ensuring OOD robustness is therefore essential for improving the generalization capabilities of computer vision models, enabling them to handle diverse and unexpected scenarios in real-world applications. It helps prevent the system from making critical errors when faced with novel inputs, thereby improving safety, reliability, and performance in mission-critical tasks.
Research on OOD robustness, also referred to as domain generalization, has become crucial for achieving reliable performance in medical imaging and autonomous driving. In medical imaging, OOD robustness is vital because datasets can vary significantly due to differences in patient demographics, imaging equipment, and acquisition conditions. Researchers and practitioners recognize the need for models that generalize well to diverse and previously unseen medical scenarios to ensure accurate diagnoses and treatment plans.
Similarly, in autonomous driving, OOD robustness is essential as driving conditions can be highly dynamic and unpredictable. Ensuring that self-driving vehicles can handle unforeseen scenarios, such as adverse weather conditions, unusual environment configurations, or unexpected obstacles, is critical for their safe deployment in the wild. OOD robustness research in both medical imaging and autonomous driving aims to enhance the generalization capabilities of machine learning models, enabling them to perform reliably in real-world scenarios beyond the training distribution. This research contributes to the development of more trustworthy and resilient systems in these mission-critical domains.
This thesis proposes methodologies and advancements aimed at enhancing OOD robustness in mission-critical applications. From transfer learning techniques tailored for medical imaging, to novel sensor configurations for UAV perception systems, to state-of-the-art deep learning architectures for image recognition, significant progress has been made in addressing the challenges posed by OOD data. In the domain of medical imaging, we explored methodologies for enhancing the generalization capabilities of diagnostic models, considering factors such as data heterogeneity, limited sample sizes, and domain shifts across different healthcare facilities. For UAV sense and avoid systems, we investigated techniques for perceptual robustness to ensure safe operation in dynamic environments. In image recognition, we examined approaches for mitigating the impact of OOD data, such as adversarial training, domain generalization, and uncertainty estimation, to enhance model reliability across diverse datasets and environmental conditions.
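To make one of the approaches mentioned above concrete, the sketch below shows a generic adversarial training step in PyTorch. It is a minimal, hypothetical illustration (FGSM-style perturbations, an assumed classifier, optimizer, and perturbation budget epsilon, and inputs assumed to lie in [0, 1]), not the specific method developed in the thesis.

    import torch
    import torch.nn.functional as F

    def adversarial_training_step(model, x, y, optimizer, epsilon=0.03):
        # Illustrative FGSM-style adversarial training step (assumed setup, not the thesis method).
        model.train()

        # Craft a perturbation from the gradient of the loss w.r.t. the input.
        x_adv = x.clone().detach().requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = (x_adv + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

        # Optimize on a mix of clean and perturbed inputs to encourage robustness.
        optimizer.zero_grad()
        loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()

In practice, the clean/adversarial weighting, the perturbation budget, and the attack itself (e.g. multi-step PGD instead of single-step FGSM) are design choices tuned per application.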
In summary, this PhD thesis highlights the critical importance of OOD robustness in mission-critical applications and underscores the need for continued research and innovation in this area. By synthesizing insights from diverse studies and identifying key challenges and advancements, it aims to contribute to the ongoing discourse on enhancing the reliability and safety of AI-driven systems in real-world scenarios. Through interdisciplinary collaboration and rigorous experimentation, we strive to develop effective solutions that ensure the resilience and efficacy of AI technologies across the medical imaging, UAV sense and avoid, and image recognition domains.
Supervisor: Professor Emeritus Stefanos Kollias
PhD Student: Anastasios Arsenos