
Jahna Otterbacher
Ph.D., University of Michigan, Ann Arbor, USA
Jahna Otterbacher (Ph.D., University of Michigan, Ann Arbor, USA) is Associate Professor and Dean of the Faculty of Pure and Applied Sciences at the Open University of Cyprus (OUC). Jahna directs the Cyprus Center for Trustworthy AI (CyCAT), which conducts interdisciplinary research focused on promoting both technical and educational solutions for trustworthy and responsible AI-enabled systems. She holds a concurrent appointment at the CYENS Center of Excellence, a new center for research and innovation in Nicosia established in collaboration with two international Advanced Partners, UCL and MPI. The author of over 70 publications in the area of human-centered data science, she is included in Elsevier’s list of top-cited scientists, based on standardized citation indicators, and is one of only a few women in Cyprus to have achieved this distinction. In December 2024, Jahna was appointed Expert Advisor to the Minister of Education, Sport and Youth of the Republic of Cyprus on the use of AI in the public education system.
Title: Trust in and Trustworthiness of AI: Building Socio-Technical Infrastructure for Assessing AI Alignment on the Large Scale
Generative AI and its foundation models dominate conversations about the potential of AI to transform nearly every aspect of human digital activity. Current models are powerful and multimodal, generating “human-like” responses to prompts across vast subject domains and exhibiting an increasing grasp of linguistic and visual expression. Foundation models, which are easily adapted for use in downstream applications, are transforming AI into a general-purpose technology; it is impossible to predict the range of innovative applications that will result. System-focused evaluations (i.e., capability testing) are key to understanding a model’s range of behaviors, but they tell us little about how users experience an application built upon it, or about the extent to which ethical principles (e.g., privacy, fairness, accountability) are perceived as being respected. There is thus a crucial need to involve the public in evaluation, so as to assess the alignment between (system) trustworthiness and (user) trust. In this talk, I will share insights from our efforts to build socio-technical infrastructure for performing such evaluations at scale. Specifically, we developed a methodology for user-sourcing evaluations of deployed AI applications in the wild. Tasks based on this methodology have been integrated into our open and distance-learning course “AI in Everyday Life,” offered to members of the public. I will discuss the key challenges, as well as our plan for scaling up the approach.