Algorithmic Injustice

Algorithms play an increasingly important role in our daily lives but come with serious societal risks. In recent years we have seen many cases of algorithms showing unfairly biased behavior towards particular groups or individuals, for instance in the Dutch Toeslagenaffaire (the childcare benefits scandal). This has led to growing concerns about harmful discrimination and the reproduction of structural inequalities once these technologies become institutionalized in society. During this evening on algorithmic injustice we explore and discuss the philosophical and technical aspects of the problem as well as the lived experiences of people who have suffered from unfair algorithms.

Despite growing concerns about algorithmic injustice, remedies against algorithmic discrimination are often framed narrowly in AI research and policy as design challenges rather than as complex, structural, socio-political problems. But is the solution always technological? Can we address the harmful consequences of algorithms simply by fixing the data? And should engineers be the ones to determine what is fair?

At this event, we bring together researchers from various disciplines. Su Lin Blodgett works on AI and fairness, Erin Beeghly on the wrong of stereotyping, and Naomi Appelman on the unfairness of online proctoring, while documentary maker Nirit Peled has documented firsthand stories of people who suffered the consequences of unfair police algorithms. Together, they will explore pressing matters around algorithmic injustice.

The event is organised by Dr. Marjolein Lanzing and Dr. Katrin Schulz as part of their project ‘The politics of bias in AI: challenging the technocentric approach’. The project is funded by the RPA Human(e) AI of the University of Amsterdam.

About the speakers

Su Lin Blodgett is a senior researcher in the Fairness, Accountability, Transparency, and Ethics in AI (FATE) group at Microsoft Research Montréal. Blodgett is interested in examining the social and ethical implications of natural language processing technologies; she develops approaches for anticipating, measuring, and mitigating harms arising from language technologies, focusing on the complexities of language and language technologies in their social contexts, and on supporting NLP practitioners in their ethical work. She has also worked on using NLP approaches to examine language variation and change.

Erin Beeghly is Associate Professor of Philosophy at the University of Utah. Her research interests lie at the intersection of ethics, social epistemology, feminist philosophy, and moral psychology. Her current book project, What’s Wrong With Stereotyping? (under contract with OUP), examines the conditions under which judging people by group membership is wrong. She and Alex Madva are co-editors of the first philosophical introduction to implicit bias: An Introduction to Implicit Bias: Knowledge, Justice, and the Social Mind (Routledge 2020). Beeghly also writes and teaches about topics within legal theory, including discrimination law.

Nirit Peled is an independent filmmaker and writer based in the Netherlands. Drawing on techniques from journalism and documentary, she investigates the social impact of new technologies, structures of legality, systemic abuses of power, and the nature of violence. Her latest documentary, MOTHERS, tells the story of four women whose lives were forever changed when their adolescent sons entered a youth crime prevention program. TV archive footage and government documents reveal how their lives were impacted by an algorithmic reality that aims to assess the risk of their sons turning to crime. But can anyone’s life really be captured by data? Can they challenge the statistics that mark them as dangerous?

Naomi Appelman is a PhD candidate in law and philosophy at the Institute for Information Law (IViR), interested in the role of law in online exclusion, speech governance, and platform power. Her research asks how European law should facilitate contestation of the content moderation systems governing online speech. The aim of facilitating this contestation is to minimise undue exclusion, often of already marginalised groups, from online spaces and to democratise the power over how online speech is governed. Appelman is one of the founders of the Racism and Technology Center and, together with bioinformatician Robin Pocornie, filed a complaint with the Dutch Human Rights Institute about the use of online proctoring software that discriminates against people of color.

Related programmes
Objective Data? The Ways That Gender Bias Can Enter the Data Pipeline

Data is not objective. In this event, we discuss how gender bias infiltrates the data pipeline, shaping the algorithmic outputs encountered in everyday experiences.

Date
Wednesday 12 June 2024, 17:00
Location
SPUI25
De toekomst van werk

In the recently published De toekomst van werk, philosopher Lisa Herzog argues that our work is far too important to let big tech companies dictate it. This evening she talks with Agnes Akkerman, among others, about how we can and should shape the work of the future together.

Date
Monday 17 June 2024, 20:00
Location
SPUI25
Between Hope and Hype: What’s the (Real) Business Case for AI?


Artificial intelligence has been hailed as a productivity booster that can turbocharge firms’ and countries’ growth. More recently, the mood has sobered. Even as companies pledged to go ‘all in’ on AI, many have struggled to make money with it. So have AI hopes been exposed as all hype? And what is the (real) business case for AI? 

Date
Thursday 7 November 2024, 17:00
Location
SPUI25