Best Paper Award @Microlab-NTUA


We are pleased to announce that the paper "IRIS: Interference and Resource Aware Predictive Orchestration for ML Inference Serving" received the Best Paper Award at the 2023 IEEE International Conference on Cloud Computing (IEEE CLOUD), held July 2-8, 2023.

The award-winning paper was written by Aggelos Ferikoglou (NTUA), Panos Chrysomeris (NTUA), Achilleas Tzenetopoulos (NTUA), Manolis Katsaragakis (NTUA), Dimosthenis Masouros (NTUA), and Prof. Dimitrios Soudris (NTUA). This work was partially supported by the EU project NEPHELE (Grant Agreement 101070487).

Short Abstract: In recent years, the ever-growing number of Machine Learning (ML) and Artificial Intelligence (AI) applications deployed in the Cloud has placed high demands on the computing resources required for efficient processing. Multiple users deploy their applications on the same server nodes, each aiming to maximize Quality of Service (QoS); however, this co-location leads to increased interference. At the same time, Cloud providers aim to minimize their operating costs by utilizing the available resources efficiently. These conflicting optimization goals form a complex setting in which efficient scheduling is required.

In this work, we present IRIS, an interference- and resource-aware predictive scheduling framework for ML inference serving in the Cloud. We target the multi-objective problem of maximizing QoS while using CPU resources effectively, driven by Queries-per-Second (QPS) predictions, by proposing a model-less, ML-based solution and integrating it into the Kubernetes platform. Our approach is evaluated on real hardware infrastructure and a set of ML applications. Our experimental analysis shows that, under various QoS constraints, the model-specific interference-aware scheduler achieves, on average, 1.8x fewer QoS violations than over-provisioning and 3.1x fewer than under-provisioning, through efficient exploitation of the available CPU resources. The model-less variant further achieves, on average, 1.5x fewer violations than the model-specific scheduler, while reducing average CPU utilization by roughly 30%.
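For readers curious what interference- and resource-aware placement means in practice, the sketch below is a minimal, hypothetical illustration (not the authors' implementation): each candidate node is scored by combining its post-placement CPU utilization with a simple interference proxy derived from the predicted QPS of already co-located inference pods, and the pod lands on the lowest-scoring feasible node. All names, weights, and the QPS normalization constant are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_capacity: float   # total CPU cores on the node
    cpu_used: float       # cores already allocated
    colocated_qps: float  # aggregate predicted QPS of pods on the node

@dataclass
class InferencePod:
    model: str
    predicted_qps: float  # QPS forecast for this inference service
    cpu_request: float    # cores requested

def score(node: Node, pod: InferencePod,
          w_util: float = 0.5, w_intf: float = 0.5) -> float:
    """Lower is better: blend post-placement CPU utilization with a
    crude interference proxy (predicted load competing on the node).
    Weights and the 1000-QPS normalizer are illustrative choices."""
    util = (node.cpu_used + pod.cpu_request) / node.cpu_capacity
    if util > 1.0:
        return float("inf")  # node cannot host the pod at all
    interference = (node.colocated_qps + pod.predicted_qps) / 1000.0
    return w_util * util + w_intf * interference

def place(pod: InferencePod, nodes: list[Node]) -> Node | None:
    """Pick the feasible node with the lowest combined score."""
    best = min(nodes, key=lambda n: score(n, pod), default=None)
    if best is None or score(best, pod) == float("inf"):
        return None  # no node can accept the pod
    return best

if __name__ == "__main__":
    nodes = [
        Node("node-a", cpu_capacity=16, cpu_used=10, colocated_qps=800),
        Node("node-b", cpu_capacity=16, cpu_used=4, colocated_qps=200),
    ]
    pod = InferencePod(model="resnet50", predicted_qps=300, cpu_request=4)
    chosen = place(pod, nodes)
    print(f"Placing {pod.model} on {chosen.name if chosen else 'no feasible node'}")
```

In a real deployment, a scoring function of this kind would sit behind a Kubernetes scheduler extender or a custom scheduler rather than run as a standalone script, and the QPS forecasts would come from the framework's predictive models rather than fixed numbers.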