Media Appearances

(IN REVERSE CHRONOLOGICAL ORDER)


TALK AT NWU BUSINESS SCHOOL, 2024

“AI and Fair Distribution in High-Stakes Decisions”

Important decisions are increasingly being made with the aid of AI systems. These decisions can determine the allocation of benefits, like jobs, loans, and educational opportunities. They can also determine the allocation of harms, as in vehicle navigation, risk assessment for policing and parole, and medical triage. Companies designing these systems therefore have an ethical obligation to ensure that their recommendations and decisions are fair. But what does it mean for automated decisions to be fair? This talk discusses some of the difficult choices that must be made when designing fair AI systems, including: Which features should the system use in its decisions? What efforts should be made to correct the effects of historical injustice in the data? How should we balance the trade-offs involved in designing fair AI systems?

 

TALK AT CMU PHILOSOPHY, PITTSBURGH FORMAL EPISTEMOLOGY WORKSHOP, 2023

“Cooperation, Maximin, and the Foundations of Ethics”

The Social Contract view of meta-ethics proposes that normative principles can be causally and functionally explained as solutions to cooperation problems, and that they can therefore be evaluated by how effectively they solve those problems. However, advocates of the Social Contract view have often not specified what counts as a cooperation problem, or what a solution to one would look like. I propose that we define cooperation problems as interactions in which there exists at least one strong Pareto improvement on every pure Nash equilibrium (willfully ignoring mixed strategies). We will explore a range of solutions to this problem, and how these solutions correspond to various normative principles. In the past, I have advocated the Maximin principle as an optimal solution to cooperation problems, but this turns out to be incomplete at best, and mistaken at worst.
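
To make the proposed definition concrete, here is a worked example of my own (not drawn from the talk itself): a standard Prisoner’s Dilemma with the payoff matrix

\[
\begin{array}{c|cc}
 & \text{Cooperate} & \text{Defect} \\ \hline
\text{Cooperate} & (3,3) & (0,5) \\
\text{Defect} & (5,0) & (1,1)
\end{array}
\]

The only pure Nash equilibrium is (Defect, Defect), with payoffs (1,1); the outcome (Cooperate, Cooperate), with payoffs (3,3), makes both players strictly better off and is therefore a strong Pareto improvement on every pure Nash equilibrium. Under the proposed definition, this interaction counts as a cooperation problem.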

 
 

MONTREAL SPEAKER SERIES ON AI ETHICS, 2020

“FAIRNESS IN MACHINE LEARNING”

There are many incompatible ways to measure fair outcomes for machine learning algorithms. In this talk, I characterize rates of success and error across protected groups (race, gender, sexual orientation) as a distribution problem, and describe possible solutions to this problem according to different normative principles from moral and political philosophy. These normative principles are based on various competing attributes within a distribution problem: intentions, compensation, desert, consent, and consequences. Each principle is then applied to a sample risk-assessment classifier to demonstrate the philosophical arguments underlying different sets of fairness metrics.
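
As a rough illustration of how such metrics can conflict, here is a hypothetical sketch in Python (a toy example of my own, not the risk-assessment classifier from the talk) in which a classifier satisfies demographic parity across two groups while violating equal opportunity:

def rates(y_true, y_pred):
    # Selection rate (fraction predicted positive) and true positive rate for one group.
    pos_rate = sum(y_pred) / len(y_pred)
    preds_on_positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    tpr = sum(preds_on_positives) / len(preds_on_positives)
    return pos_rate, tpr

# Toy (true labels, predictions) for two protected groups; illustrative numbers only.
group_a = ([1, 1, 0, 0, 1, 0], [1, 1, 0, 0, 1, 0])
group_b = ([1, 0, 0, 0, 1, 1], [1, 0, 1, 1, 0, 0])

pr_a, tpr_a = rates(*group_a)
pr_b, tpr_b = rates(*group_b)

print(f"demographic parity gap: {abs(pr_a - pr_b):.2f}")  # 0.00 (selection rates match)
print(f"equal opportunity gap:  {abs(tpr_a - tpr_b):.2f}")  # 0.67 (true positive rates diverge)

Here the two groups are selected at the same rate, yet qualified members of group B are far less likely to be correctly identified; which of these gaps matters morally depends on the normative principle one adopts.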

 

Sean Carroll’s “Mindscape,” 2019

Episode 30: Derek Leben on Ethics for Robots and Artificial Intelligences

Episode Description: “It’s hardly news that computers are exerting ever more influence over our lives. And we’re beginning to see the first glimmers of some kind of artificial intelligence: computer programs have become much better than humans at well-defined jobs like playing chess and Go, and are increasingly called upon for messier tasks, like driving cars. Once we leave the highly constrained sphere of artificial games and enter the real world of human actions, our artificial intelligences are going to have to make choices about the best course of action in unclear circumstances: they will have to learn to be ethical. I talk to Derek Leben about what this might mean and what kind of ethics our computers should be taught. It’s a wide-ranging discussion involving computer science, philosophy, economics, and game theory.”

Podcast Main Page

Episode Webpage

BBC Talk, 2019

“Discrimination in Targeted Marketing”

Using information about group membership to make judgments about individuals is discrimination, and discrimination is wrong. So is the use of demographic information in advertising and recommendation algorithms an instance of morally objectionable discrimination? This talk for the BBC examines the differences between algorithms that use group membership as data in the criminal justice system and those that use it in targeted marketing.

 

CENTER FOR ETHICS AND THE RULE OF LAW (CERL), UNIVERSITY OF PENNSYLVANIA, 2019

PANEL DISCUSSION ON “CYBERSECURITY AND ARTIFICIAL INTELLIGENCE”

Moderator: Professor Claire Finkelstein, CERL Founder & Faculty Director; Algernon Biddle Professor of Law and Professor of Philosophy, University of Pennsylvania

Panelists: 

Professor Gary Brown, Professor of Cyber Law at the College of Information and Cyberspace, National Defense University

The Honorable Thomas Ayres, General Counsel, United States Air Force

Professor Derek Leben, Department Chair and Associate Professor of Philosophy at the University of Pittsburgh at Johnstown

LTC Christopher Korpela, Associate Professor and Director of the Robotics Research Center at the United States Military Academy at West Point

 

Machine Ethics podcast, 2018

Episode 23: Derek Leben

Episode Description: “This month I'm talking with Derek Leben about his new book Ethics for Robots: How to Design a Moral Algorithm. We also dive into a general framework for machine ethics, contractarianism, Rawls’ original position thought experiment (which is one of my favourite ethical thought experiments), maximin function approach to machine ethics, and whether robots should respect the consent of a person in life threatening circumstances...”

Podcast Main Page

Episode Webpage