Ethics & AI

This course will explore the ethical challenges that businesses face when using AI, map out policies that have been proposed as solutions to these challenges, and analyze the normative arguments behind those policies. The goal is for students to acquire knowledge of the ethical challenges that emerge from AI and the skills to develop responsible corporate practices around it. The course is organized around six core principles for the responsible use of AI (with applications to illustrate each principle): (1) autonomy, (2) explainability, (3) bias, (4) fairness, (5) safety, and (6) responsibility.

This course will NOT address peripheral concerns such as privacy, cybersecurity, and sustainability. We will not consider regulation and public policy on AI, since the focus is on corporate policy and decision-making. We will not address economic issues such as unemployment and the macroeconomic impacts of AI, nor will we take up philosophical questions about machine personhood, rights, and consciousness, or the possibility of a technological singularity. Finally, we will not consider the long-term risks of AI, such as existential risk.

Topics:

  • Training Data and IP

  • Model Explainability and Interpretability

  • Metrics and Mitigations for Bias and Fairness

  • Benchmarks and Guardrails for Safety

  • Liability for Damages

Applications:

  • AI in Media and Marketing

  • AI in Finance and Lending

  • AI in Hiring

  • AI in Education and Criminal Justice

  • AI in Healthcare