Representative Journal Articles (since 2017)

 

Ethical Issues for AI in Medicine

(2024)

In: Digital Health: Telemedicine and Beyond (ed. Dipu Patel), Elsevier

This chapter surveys the ethical challenges involved in deploying AI for medical purposes, divided into the categories of “transparency,” “fairness,” and “safety and liability.” Issues involving data privacy and security will not be discussed, as these fall more under the umbrella of “data ethics.” Most of the case studies in this chapter involve predictive AI systems rather than generative AI systems, although the same ethical issues apply to both. These issues are an amplification of the same ethical challenges that physicians face on a regular basis; extending the medical decision-making process to a relatively autonomous computer program requires even more precision in answering questions such as: “What does a sufficiently informative medical explanation look like?” “How should physicians respond to broader social inequalities in healthcare?” and “Who is responsible for medical errors?” Although specific policy solutions will not be discussed in this chapter, all parties involved in the design and deployment of medical AI systems have a professional obligation to develop detailed policies to address each of these ethical challenges.

 

Assessing Group Fairness with Social Welfare Optimization [Best Paper Award, CPAIOR 24]

(2024)

With Violet Chen and John Hooker

In: Proceedings of the 21st International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research        

Statistical parity metrics have been widely studied and endorsed in the AI community as a means of achieving fairness, but they suffer from at least two weaknesses. They disregard the actual welfare consequences of decisions and may therefore fail to achieve the kind of fairness that is desired for disadvantaged groups. In addition, they are often incompatible with each other, and there is no convincing justification for selecting one rather than another. This paper explores whether a broader conception of social justice, based on optimizing a social welfare function (SWF), can be useful for assessing various definitions of parity. We focus on the well-known alpha fairness SWF, which has been defended by axiomatic and bargaining arguments over a period of 70 years. We analyze the optimal solution and show that it can justify demographic parity or equalized odds under certain conditions, but frequently requires a departure from these types of parity. In addition, we find that predictive rate parity is of limited usefulness. These results suggest that optimization theory can shed light on the intensely discussed question of how to achieve group fairness in AI.
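
For reference, the alpha fairness SWF discussed above is standardly defined over utilities u_1, …, u_n as follows (the textbook form; the paper's exact normalization may differ):

    W_\alpha(u) = \frac{1}{1-\alpha} \sum_{i=1}^{n} u_i^{1-\alpha}, \qquad \alpha \ge 0,\ \alpha \neq 1,
    \qquad\text{with}\qquad
    W_1(u) = \sum_{i=1}^{n} \log u_i .

The limiting cases recover familiar criteria: alpha = 0 is the utilitarian (total-utility) objective, alpha = 1 is proportional fairness (the Nash bargaining solution), and alpha approaching infinity yields the maximin (Rawlsian) criterion.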

 

Local Justice and Machine Learning:

Modeling and inferring dynamic ethical preferences toward allocations

(2023)

With Violet Chen, Joshua Williams, and Hoda Heidari

In: AAAI-23: Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence

We consider a setting in which a social planner has to make a sequence of decisions to allocate scarce resources in a high-stakes domain. Our goal is to understand stakeholders' dynamic moral preferences toward such allocational policies. In particular, we evaluate the sensitivity of moral preferences to the history of allocations and their perceived future impact on various socially salient groups. We propose a mathematical model to capture and infer such dynamic moral preferences. We illustrate our model through small-scale human-subject experiments focused on the allocation of scarce medical resources during a hypothetical viral epidemic. We observe that participants' preferences are indeed history- and impact-dependent. Additionally, our preliminary experimental results reveal intriguing patterns specific to medical resources, a topic that is particularly salient against the backdrop of the global COVID-19 pandemic.

 

Explainable AI as Evidence of Fair Decisions

(2023)

In: Frontiers in Psychology; Special Issue on AI in Business

This paper will propose that explanations are valuable to those impacted by a model’s decisions (model patients) to the extent that they provide evidence that a past adverse decision was unfair. Under this proposal, we should favor models and explainability methods which generate counterfactuals of two types. The first type of counterfactual is positive evidence of fairness: a set of states under the control of the patient which (if changed) would have led to a beneficial decision. The second type of counterfactual is negative evidence of fairness: a set of irrelevant group or behavioral attributes which (if changed) would not have led to a beneficial decision. Each of these counterfactual statements is related to fairness, under the Liberal Egalitarian idea that treating one person differently than another is justified only on the basis of features which were plausibly under each person’s control. Other aspects of an explanation, such as feature importance and actionable recourse, are not essential under this view, and need not be a goal of explainable AI.
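
As a rough illustration of the two kinds of counterfactual evidence (a sketch using an assumed feature layout and a toy classifier, not the paper's own implementation), the following shows how one might search for positive evidence over controllable features and check negative evidence over an irrelevant group attribute:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical feature layout: columns 0-1 are under the applicant's control
    # (e.g., savings, hours worked); column 2 is an irrelevant group attribute.
    CONTROLLABLE, GROUP = [0, 1], [2]

    def positive_evidence(model, x, deltas=(1.0, 2.0, 5.0)):
        """Look for a change to a controllable feature that flips the decision to 'accept' (1)."""
        for i in CONTROLLABLE:
            for d in deltas:
                x_cf = x.copy()
                x_cf[i] += d
                if model.predict(x_cf.reshape(1, -1))[0] == 1:
                    return (i, d)        # a change under the person's control would have helped
        return None

    def negative_evidence(model, x, alternative_group_values=(0.0, 1.0)):
        """Check that changing the group attribute alone does NOT flip the decision."""
        for v in alternative_group_values:
            x_cf = x.copy()
            x_cf[GROUP[0]] = v
            if model.predict(x_cf.reshape(1, -1))[0] == 1:
                return False             # the decision tracks group membership: evidence of unfairness
        return True

    # Toy usage: train a classifier on synthetic data, then examine one rejected applicant.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # outcome depends only on controllable features
    model = LogisticRegression().fit(X, y)
    x = np.array([-2.0, -1.0, 1.0])           # a rejected applicant
    print(positive_evidence(model, x), negative_evidence(model, x))

A rejected instance that yields both a positive counterfactual (some controllable change would have flipped the decision) and negative evidence (changing group membership alone would not) provides exactly the kind of fairness evidence described above.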

 

Discrimination in Algorithmic Trolley Problems

(2022)

In: Autonomous Vehicle Ethics (eds. Ryan Jenkins, David Černý, and Tomáš Hříbek). Oxford University Press.

Both AV path-planning algorithms and criminal justice algorithms have the structure of a trolley problem, and the focus of this paper is on which features are morally relevant to use in making choices about trade-offs in inevitable deprivation. Which features are relevant turns out to depend on the nature of the task. With AVs, the trade-offs involve mere harm, and so the acceptable features are those which have some impact on the likelihood of harm. For criminal justice algorithms, on the other hand, the trade-offs involve deprivations based on desert, and thus the relevant features must be those that stem from an agent’s particular behavioral history, beliefs, and desires.

 

Normative Principles for Evaluating Fairness in Machine Learning

(2020)

In: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES)

There are many incompatible ways to measure fair outcomes for machine learning algorithms. The goal of this paper is to characterize rates of success and error across protected groups (race, gender, sexual orientation) as a distribution problem, and describe the possible solutions to this problem according to different normative principles from moral and political philosophy. These normative principles are based on various competing attributes within a distribution problem: intentions, compensation, desert, consent, and consequences. Each principle is applied to a sample risk assessment classifier to demonstrate the philosophical arguments underlying different sets of fairness metrics.
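
To illustrate the kind of group-wise success and error rates at issue (a minimal sketch with hypothetical data, not the paper's own code), one can tabulate the selection rate, true positive rate, and false positive rate per protected group; demographic parity compares the first across groups, while equalized odds compares the latter two:

    import numpy as np

    def group_rates(y_true, y_pred, group):
        """Per-group selection rate, true positive rate, and false positive rate."""
        rates = {}
        for g in np.unique(group):
            m = group == g
            yt, yp = y_true[m], y_pred[m]
            rates[g] = {
                "selection_rate": yp.mean(),                               # demographic parity compares these
                "tpr": yp[yt == 1].mean() if (yt == 1).any() else np.nan,  # equalized odds compares TPR...
                "fpr": yp[yt == 0].mean() if (yt == 0).any() else np.nan,  # ...and FPR across groups
            }
        return rates

    # Hypothetical example: binary risk predictions for two groups
    y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
    y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 1])
    group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
    print(group_rates(y_true, y_pred, group))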

 

A Rawlsian Algorithm for Autonomous Vehicles

(2018)

In: Ethics and Information Technology

Autonomous vehicles must be programmed with procedures for dealing with trolley-style dilemmas where actions result in harm to either pedestrians or passengers. This paper outlines a Rawlsian algorithm as an alternative to the Utilitarian solution. The algorithm will gather the vehicle’s estimation of probability of survival for each person in each action, then calculate which action a self-interested person would agree to if he or she were in an original bargaining position of fairness. I will employ Rawls’ assumption that the Maximin procedure is what self-interested agents would use from an original position, and then show how the Maximin procedure can be operationalized to produce unique outputs over probabilities of survival.
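
A minimal sketch of how the Maximin selection over survival probabilities might be operationalized (an illustration under assumed inputs, not the paper's exact procedure; the leximin tie-breaking used here to guarantee a unique output is an added assumption):

    import numpy as np

    def maximin_action(survival_probs):
        """
        survival_probs: 2-D array of shape (n_actions, n_persons), where entry [a, p] is the
        vehicle's estimated probability that person p survives if action a is taken.
        Returns the index of the action whose worst-off person fares best (Maximin), breaking
        ties by comparing the next-worst outcomes (leximin) so the output is unique up to
        genuinely identical survival profiles.
        """
        # Sort each action's survival probabilities from worst-off to best-off person,
        # then pick the action that is lexicographically largest in that order.
        sorted_profiles = np.sort(survival_probs, axis=1)
        return max(range(len(sorted_profiles)), key=lambda a: tuple(sorted_profiles[a]))

    # Hypothetical example: 3 candidate trajectories affecting 2 persons (passenger, pedestrian)
    probs = np.array([
        [0.90, 0.20],   # action 0
        [0.60, 0.55],   # action 1: the worst-off person does best here
        [0.40, 0.70],   # action 2
    ])
    print(maximin_action(probs))   # -> 1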