Eirini Ntoutsi

Full professor at the Bundeswehr University Munich

Eirini Ntoutsi has been a full professor for Open Source Intelligence at the Bundeswehr University Munich (UniBw-M) and the Research Institute for Cyber-Defence and Smart Data (CODE) since August 2022, where she leads the AIML lab (aiml-research.github.io). Prior to that, she was a full professor for Artificial Intelligence at the Free University Berlin (FUB), and before that an associate professor of Intelligent Systems at the Leibniz University of Hanover (LUH); she remains a member of the L3S Research Center. Prior to joining LUH, she was a post-doctoral researcher at the Ludwig-Maximilians-University (LMU) in Munich, Germany, in the group of Prof. H.-P. Kriegel, which she joined as a post-doctoral fellow of the Alexander von Humboldt Foundation. She holds a PhD in Data Mining/Machine Learning from the University of Piraeus, Greece, and a master's degree and a diploma in Computer Engineering and Informatics from the University of Patras, Greece. Her research interests lie in the areas of Artificial Intelligence (AI) and Machine Learning (ML), where she develops intelligent algorithms that learn from data continuously, following the cumulative nature of human learning, while ensuring that what is learned helps drive positive societal impact.

Abstract: How to make AI more fair and unbiased

AI-driven decision-making has penetrated almost all spheres of human life, from content recommendation and healthcare to predictive policing and autonomous driving, deeply affecting everyone, anywhere, anytime. The discriminatory impact of AI-driven decision-making on certain population groups has already been observed in a variety of cases, leading to ever-increasing public concern about the impact of AI on our lives. The domain of fairness-aware machine learning focuses on methods and algorithms for understanding, mitigating, and accounting for bias in AI/ML models. Most of the work in this field addresses limited learning settings, typically binary classification with a single binary protected attribute. Reality, however, is more complex: for example, discrimination can occur based on more than one protected attribute, the class distribution might be imbalanced, the population characteristics might change over time, and more than one learning task might need to be solved at the same time.
In this talk, I will discuss fairness in supervised learning, covering the basic binary-class, mono-discrimination setting as well as work towards more realistic challenges such as discrimination with respect to multiple protected attributes, discrimination under class imbalance, and multi-task discrimination.
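To make the basic setting concrete, here is a minimal, illustrative sketch (not taken from the talk) of one widely used group-fairness metric for binary classification with a single binary protected attribute: the statistical parity difference, i.e. the gap in positive-prediction rates between the two groups. The function name and the toy data are my own choices for illustration.

```python
# Statistical parity difference for binary classification with one
# binary protected attribute (the "mono-discrimination" setting).
# Illustrative sketch only; names and data are hypothetical.

def statistical_parity_difference(y_pred, protected):
    """P(y_hat = 1 | protected = 0) - P(y_hat = 1 | protected = 1).

    A value of 0 means both groups receive positive predictions at
    the same rate; larger magnitudes indicate greater disparity.
    """
    # Positive-prediction rate in the unprotected group (protected == 0)
    unprot = [y for y, p in zip(y_pred, protected) if p == 0]
    # Positive-prediction rate in the protected group (protected == 1)
    prot = [y for y, p in zip(y_pred, protected) if p == 1]
    return sum(unprot) / len(unprot) - sum(prot) / len(prot)

# Toy example: 4 unprotected individuals (3 predicted positive)
# versus 4 protected individuals (1 predicted positive).
y_pred    = [1, 1, 1, 0, 1, 0, 0, 0]
protected = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(y_pred, protected))  # 0.75 - 0.25 = 0.5
```

Fairness-aware methods then constrain or penalize such a disparity during training; the more realistic settings mentioned above (multiple protected attributes, class imbalance, multiple tasks) complicate both the metric and the mitigation.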