CDAC Distinguished Speaker Series: Olga Russakovsky, Princeton, Fairness in Visual Recognition

3:00–4:00 pm Online

Eventbrite Registration

Join us for an exciting lecture from Olga Russakovsky, Assistant Professor at Princeton University and Co-Founder of AI4ALL.

About this Event

Monday, January 25, 2021, 3:00pm CT (public livestream and Zoom for registered attendees)

Fairness in Visual Recognition

Computer vision models trained on unparalleled amounts of data hold promise for making impartial, well-informed decisions in a variety of applications. However, more and more historical societal biases are making their way into these seemingly innocuous systems. We focus our attention on bias in the form of inappropriate correlations between visual protected attributes (age, gender expression, skin color, …) and the predictions of visual recognition models, as well as any unintended discrepancy in error rates of vision systems across different social, demographic or cultural groups. In this talk, we'll dive deeper into both the technical reasons for and the potential solutions to bias in computer vision. I'll highlight our recent work addressing bias in visual datasets (FAT* 2020, ECCV 2020), in visual models (CVPR 2020, under review), as well as in the makeup of AI leadership.

Bio: Dr. Olga Russakovsky is an Assistant Professor in the Computer Science Department at Princeton University. Her research is in computer vision, closely integrated with the fields of machine learning, human-computer interaction and fairness, accountability and transparency. She has been awarded the AnitaB.org's Emerging Leader Abie Award in honor of Denice Denton in 2020, the CRA-WP Anita Borg Early Career Award in 2020, the MIT Technology Review's 35-under-35 Innovator award in 2017, the PAMI Everingham Prize in 2016 and Foreign Policy Magazine's 100 Leading Global Thinkers award in 2015. In addition to her research, she co-founded and continues to serve on the Board of Directors of the AI4ALL foundation dedicated to increasing diversity and inclusion in Artificial Intelligence (AI). She completed her PhD at Stanford University in 2015 and her postdoctoral fellowship at Carnegie Mellon University in 2017.

Part of the CDAC Winter 2021 Distinguished Speaker Series:

Bias Correction: Solutions for Socially Responsible Data Science

Security, privacy and bias in the context of machine learning are often treated as binary issues, where an algorithm is either biased or fair, ethical or unjust. In reality, there is a tradeoff between using technology and opening up new privacy and security risks. Researchers are developing innovative tools that navigate these tradeoffs by applying advances in machine learning to societal issues without exacerbating bias or endangering privacy and security. The CDAC Winter 2021 Distinguished Speaker Series will host interdisciplinary researchers and thinkers exploring methods and applications that protect user privacy, prevent malicious use, and avoid deepening societal inequities — while diving into the human values and decisions that underpin these approaches.

More information at cdac.uchicago.edu

Event Type: Broad Audience
