Journal Articles


 
 

Ethical Issues for AI in Medicine

(forthcoming in 2024)

In: Digital Health: Telemedicine and Beyond (ed. Dipu Patel), Elsevier

This chapter surveys the ethical challenges involved in deploying AI for medical purposes, divided into the categories of “transparency,” “fairness,” and “safety and liability.” Issues involving data privacy and security will not be discussed, as these fall more under the umbrella of “data ethics.” Most of the case studies in this chapter involve predictive AI systems rather than generative AI systems, although the same ethical issues apply to both. These issues are an amplification of the same ethical challenges that physicians face on a regular basis; extending the medical decision-making process to a relatively autonomous computer program requires even more precision in answering questions such as: “What does a sufficiently informative medical explanation look like?” “How should physicians respond to broader social inequalities in healthcare?” and “Who is responsible for medical errors?” Although specific policy solutions will not be discussed in this chapter, all parties involved in the design and deployment of medical AI systems have a professional obligation to develop detailed policies to address each of these ethical challenges.

 

Assessing Group Fairness with Social Welfare Optimization

(forthcoming in 2024)

With Violet Chen and John Hooker

In: Proceedings of the 21st International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research        

Statistical parity metrics have been widely studied and endorsed in the AI community as a means of achieving fairness, but they suffer from at least two weaknesses. They disregard the actual welfare consequences of decisions and may therefore fail to achieve the kind of fairness that is desired for disadvantaged groups. In addition, they are often incompatible with each other, and there is no convincing justification for selecting one rather than another. This paper explores whether a broader conception of social justice, based on optimizing a social welfare function (SWF), can be useful for assessing various definitions of parity. We focus on the well-known alpha fairness SWF, which has been defended by axiomatic and bargaining arguments over a period of 70 years. We analyze the optimal solution and show that it can justify demographic parity or equalized odds under certain conditions, but frequently requires a departure from these types of parity. In addition, we find that predictive rate parity is of limited usefulness. These results suggest that optimization theory can shed light on the intensely discussed question of how to achieve group fairness in AI.
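
The alpha fairness criterion referenced above has a standard closed form: welfare is the sum of u_i^(1-alpha)/(1-alpha) over individual utilities u_i (with the sum of log u_i at alpha = 1), so alpha = 0 recovers the utilitarian objective and large alpha approaches maximin. The following Python sketch, using a hypothetical two-group allocation problem with made-up numbers rather than anything from the paper, shows how the welfare-optimal split moves away from the utilitarian solution as alpha grows.

import numpy as np

def alpha_fairness(utilities, alpha):
    # Alpha-fairness social welfare of a vector of positive utilities:
    # alpha = 0 is utilitarian, alpha = 1 is proportional fairness,
    # and large alpha approaches maximin.
    u = np.asarray(utilities, dtype=float)
    if np.isclose(alpha, 1.0):
        return np.log(u).sum()
    return (u ** (1.0 - alpha) / (1.0 - alpha)).sum()

# Hypothetical example: split one unit of a resource between two groups
# that convert resources to utility at different rates (2.0 vs. 1.0).
best_share = {}
for alpha in (0.0, 1.0, 2.0, 10.0):
    shares = np.linspace(0.01, 0.99, 99)   # share given to the more efficient group
    welfare = [alpha_fairness([2.0 * s, 1.0 * (1.0 - s)], alpha) for s in shares]
    best_share[alpha] = float(shares[int(np.argmax(welfare))])

print(best_share)   # shifts from ~0.99 (utilitarian) toward ~0.33 (maximin) as alpha grows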

 

Teaching AI Ethics as a Renaissance in Business Ethics

(forthcoming in 2024)

In: Teaching Ethics, Special Issue on “Teaching AI Ethics”

 

AI for Analytical Reasoning in Negotiation and Business Ethics

(forthcoming in 2024)

With Lily Morse

In: Journal of Management Inquiry, Special Issue on “AI and the Student-Centered Business School”

 

Explainable AI as Evidence of Fair Decisions

(2023)

In: Frontiers in Psychology, Special Issue on AI in Business

This paper will propose that explanations are valuable to those impacted by a model’s decisions (model patients) to the extent that they provide evidence that a past adverse decision was unfair. Under this proposal, we should favor models and explainability methods which generate counterfactuals of two types. The first type of counterfactual is positive evidence of fairness: a set of states under the control of the patient which (if changed) would have led to a beneficial decision. The second type of counterfactual is negative evidence of fairness: a set of irrelevant group or behavioral attributes which (if changed) would not have led to a beneficial decision. Each of these counterfactual statements is related to fairness, under the Liberal Egalitarian idea that treating one person differently than another is justified only on the basis of features which were plausibly under each person’s control. Other aspects of an explanation, such as feature importance and actionable recourse, are not essential under this view, and need not be a goal of explainable AI.
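
To make the two kinds of counterfactual evidence concrete, here is a minimal sketch with a hand-written linear scorer; the feature names, weights, and threshold are hypothetical and are not drawn from the paper.

def approve(applicant):
    # Toy decision rule: the controllable features carry weight,
    # the group attribute does not.
    score = (0.5 * applicant["income"]
             + 0.3 * applicant["on_time_payments"]
             + 0.0 * applicant["group"])
    return score >= 4.0

applicant = {"income": 5.0, "on_time_payments": 2.0, "group": 1}
assert not approve(applicant)   # the past adverse decision

# Positive evidence of fairness: a change to a feature under the
# patient's control that would have led to a beneficial decision.
for extra in (1, 2, 3):
    counterfactual = dict(applicant, on_time_payments=applicant["on_time_payments"] + extra)
    if approve(counterfactual):
        print("approved if on_time_payments were higher by", extra)
        break

# Negative evidence of fairness: flipping the group attribute alone
# would not have changed the outcome.
flipped = dict(applicant, group=0)
print("group attribute irrelevant:", approve(flipped) == approve(applicant))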

 

Discrimination in Algorithmic Trolley Problems

(2022)

In: Autonomous Vehicle Ethics (Ed. Ryan Jenkins, David Černý, and Tomáš Hříbek). Oxford University Press.

Both AV path-planning algorithms and criminal justice algorithms have the structure of a trolley problem, and the focus of this paper has been on which features are morally relevant to use in making choices about trade-offs in inevitable deprivation. It turns out that which features are relevant depends on the nature of the task. With AVs, the trade-offs involve mere harm alone, and so the acceptable features are those which will have some impact on likelihood of harm. For criminal justice algorithms, on the other hand, the trade-offs involve deprivations based on desert, and thus the relevant features must be those that stem from an agent’s particular behavioral history, beliefs, and desires.

 

Normative Principles for Evaluating Fairness in Machine Learning

(2020)

In: Proceedings of AI, Ethics, and Society

There are many incompatible ways to measure fair outcomes for machine learning algorithms. The goal of this paper is to characterize rates of success and error across protected groups (race, gender, sexual orientation) as a distribution problem, and describe the possible solutions to this problem according to different normative principles from moral and political philosophy. These normative principles are based on various competing attributes within a distribution problem: intentions, compensation, desert, consent, and consequences. Each principle will be applied to a sample risk assessment classifier to demonstrate the philosophical arguments underlying different sets of fairness metrics.
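
For readers unfamiliar with how such rates are computed, the sketch below tabulates per-group success and error rates (the quantities behind demographic parity, equalized odds, and predictive parity) on made-up predictions; the numbers are illustrative and are not the sample classifier from the paper.

import numpy as np

# Hypothetical risk-classifier output: y_true is the actual outcome,
# y_pred is the high-risk flag, and group marks a protected attribute.
y_true = np.array([1, 0, 1, 0, 1, 0, 0, 1, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 1, 1, 0])
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

for g in ("a", "b"):
    m = group == g
    selection_rate = y_pred[m].mean()             # compared for demographic parity
    tpr = y_pred[m][y_true[m] == 1].mean()        # compared for equalized odds
    fpr = y_pred[m][y_true[m] == 0].mean()        # compared for equalized odds
    ppv = y_true[m][y_pred[m] == 1].mean()        # compared for predictive parity
    print(g, selection_rate, tpr, fpr, ppv)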

 

A Rawlsian Algorithm for Autonomous Vehicles

(2018)

In: Ethics and Information Technology

Autonomous vehicles must be programmed with procedures for dealing with trolley-style dilemmas where actions result in harm to either pedestrians or passengers. This paper outlines a Rawlsian algorithm as an alternative to the Utilitarian solution. The algorithm will gather the vehicle’s estimation of probability of survival for each person in each action, then calculate which action a self-interested person would agree to if he or she were in an original bargaining position of fairness. I will employ Rawls’ assumption that the Maximin procedure is what self-interested agents would use from an original position, and then show how the Maximin procedure can be operationalized to produce unique outputs over probabilities of survival.
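
As a rough illustration of the Maximin step (not the paper's own implementation), the sketch below picks the action whose worst-off person has the highest estimated survival probability, breaking ties lexicographically so that the output is unique; the actions and probabilities are hypothetical.

# Estimated survival probabilities per person under each candidate action
# (hypothetical numbers).
actions = {
    "brake":    {"pedestrian": 0.6, "passenger": 0.90},
    "swerve":   {"pedestrian": 0.9, "passenger": 0.50},
    "continue": {"pedestrian": 0.2, "passenger": 0.99},
}

def maximin_choice(actions):
    # Choose the action maximizing the lowest survival probability;
    # comparing the sorted probability vectors breaks ties by the
    # next-worst person, yielding a unique (leximin) output.
    return max(actions, key=lambda a: sorted(actions[a].values()))

print(maximin_choice(actions))   # "brake": its minimum (0.6) beats 0.5 and 0.2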

 

In Defense of Best Explanation Debunking Arguments in Moral Philosophy

(2018)

with Jonathon Hricko

In: Review of Philosophy and Psychology

We aim to develop a form of debunking argument according to which an agent’s belief is undermined if the reasons she gives in support of her belief are best explained as rationalizations. This approach is a more sophisticated form of what Shaun Nichols has called best-explanation debunking, which he contrasts with process debunking, i.e., debunking by means of showing that a belief has been generated by an epistemically defective process. In order to develop our approach, we identify an example of such a best-explanation debunking argument in Joshua Greene’s attack on deontology. After showing that this argument is not an instance of process debunking, we offer our best-explanation approach as a generalization of Greene’s argument. Finally, we defend our approach by showing that it is not susceptible to some criticisms that Nichols has leveled against a less sophisticated form of best-explanation debunking.

 

In Defense of ‘Ought Implies Can’

(2018)

In: Oxford Studies in Experimental Philosophy, vol 3

Two recent papers have presented experimental evidence against the hypothesis that there is a semantic connection between OUGHT and CAN, rather than a pragmatic and defeasible one. However, there are two flaws with their designs. One is temporal ambiguity: just asking whether “x ought to A” is underspecified as to when the obligation exists. Another problem is failing to distinguish between prior obligations and all-things-considered obligations. To test these potential confounds, I ran two experiments. The first experiment paired some of the original stories with a visual timeline specifying the time of the obligation. The second experiment flipped the wording of the original “obligated but can’t” question (which suggests prior obligations) into the reversed: “can’t, but still obligated” (which suggests all-things-considered obligations). In both experiments, there were large and significant differences between the original conditions and the modified conditions. These results undermine the conclusions of the previous experiments and remain consistent with the semantic hypothesis.

 

Pushing the Intuitions Behind Moral Internalism

(2014)

with Kristine Wilckens

In: Philosophical Psychology

Moral Internalism proposes a necessary link between judging that an action is right/wrong and being motivated to perform/avoid that action. Internalism is central to many arguments within ethics, including the claim that moral judgments are not beliefs, and the claim that certain types of moral skepticism are incoherent. However, most of the basis for accepting Internalism rests on intuitions that have recently been called into question by empirical work. This paper further investigates the intuitions behind Internalism. Three experiments show not only that these intuitions are not widespread, but that they are significantly influenced by normative evaluations of the situation in question. These results are taken to undermine Internalist intuitions, and contribute to the growing body of evidence showing that normative evaluations influence supposedly non-normative judgments.

 

Neoclassical Concepts

(2015)

In: Mind and Language

Linguistic theories of lexical semantics support a Neoclassical Theory of concepts, where entities like CAUSE, STATE, and MANNER serve as necessary conditions for the possession of individual event concepts. Not all concepts have a neoclassical structure, and whether or not words participate in regular linguistic patterns such as verbal alternations will be proposed as a probe for identifying whether their corresponding concepts do indeed have such structure. I show how the Neoclassical Theory supplements existing theories of concepts and supports a version of analyticity and conceptual analysis.

 

When Psychology Undermines Beliefs

(2012)

In: Philosophical Psychology

This paper attempts to specify the conditions under which a psychological explanation can undermine or debunk a set of beliefs. The focus will be on moral and religious beliefs, where a growing debate has emerged about the epistemic implications of cognitive science. Recent proposals by Joshua Greene and Paul Bloom will be taken as paradigmatic attempts to undermine beliefs with psychology. I will argue that a belief p may be undermined whenever: (i) p is evidentially based on an intuition which (ii) can be explained by a psychological mechanism that is (iii) unreliable for the task of believing p; and (iv) any other evidence for belief p is based on rationalization. I will also consider and defend two equally valid arguments for establishing unreliability: the redundancy argument and the argument from irrelevant factors. With this more specific understanding of debunking arguments, it is possible to develop new replies to some objections to psychological debunking arguments from both ethics and philosophy of religion.

 

Cognitive Neuroscience and Moral Decision-Making

(2011)

In: Neuroethics

It is by now a well-supported hypothesis in cognitive neuroscience that there exists a functional network for the moral appraisal of situations. However, there is a surprising disagreement amongst researchers about the significance of this network for moral actions, decisions, and behavior. Some researchers suggest that we should “uncover those ethics [that are “built into our brains”], identify them, and live more fully by them,” while others claim that we should often do the opposite, viewing the cognitive neuroscience of morality more like a science of pathology. To analyze and evaluate the disagreement, this paper will investigate some of its possible sources. These may include theoretical confusions about levels of explanation in cognitive science, or different senses of ‘morality’ that researchers are looking to explain. Other causes of the debate may come from empirical assumptions about how possible or preferable it is to separate intuitive moral appraisal from moral decisions. Although we will tentatively favor the ‘Set Aside’ approach, the questions outlined here are open areas of ongoing research, and this paper will be confined to outlining the position space of the debate rather than definitively resolving it.