COL 8381/864: Special Topics in AI

Responsible AI

Instructor: Vijay Keswani
Classes: Tues & Fri 2.00–3.30 pm at LH 605
Office Hours: Wed 2.00–4.00 pm at Bharti 418

Sign up at this link for office hours, or email me to set up another time if the above slots don’t work


Outline

The availability of large datasets and massive computing power has led to a surge in the use of AI- and ML-based tools to make decisions about humans. Applications of these tools span a variety of domains, such as healthcare for diagnosis and patient management, judicial and police settings to predict recidivism and crime patterns, banking to determine creditworthiness, and even recruitment to screen resumes. Yet, despite the prevalence of these use cases, AI and ML tools are associated with, and are sometimes the source of, a myriad of social issues that impact their real-world usability, such as performance disparities across demographic groups, misrepresentation of minorities in the outputs of generative models, uninterpretable decision processes, and misaligned objectives.

The field of Responsible AI and ML studies the conundrum of how we can harness the supposed benefits of these technologies while avoiding the harms that have already been documented and those likely to arise in the future. Research in this field proposes methods to investigate and, wherever possible, mitigate the issues outlined above. Issues like AI-based discrimination and misalignment arise during interactions of AI and ML tools with individuals (e.g., in online spaces) and with societal institutions (e.g., in courts and hospitals). Discovering and formalizing these issues inherently requires understanding both the internal workings of AI/ML tools and the social dynamics of the communities and institutions they impact, which makes the field deeply interdisciplinary.

Course Objectives

The purpose of this course is twofold: (a) to introduce students to well-documented AI/ML harms arising in real-world local and global applications, and (b) to prepare them to apply general techniques to evaluate and mitigate future AI/ML-based harms in common societal domains.


Prerequisites

Students are expected to have a general understanding of AI/ML basics, e.g., sufficient knowledge of topics in probability, statistics, and optimization as they relate to the development of AI/ML systems.


Topics and Schedule

The course will introduce popular topics in the field of responsible AI from the perspective of common real-world ethical and practical issues that have been extensively studied in this literature. The topics in the course will be divided into two sections: (i) harms that arise from a flawed AI/ML development pipeline, which require us to pursue in-depth investigations into this pipeline (to be covered in Weeks 1–8); and (ii) issues that are encountered when AI is employed in institutions to assist or replace humans in existing decision-making setups, which require us to understand the role of AI in broader institutional settings (to be covered in Weeks 9–14). The topics to be covered are listed below.


Evaluation

Final grades will be based on:


Readings

Week 1: Introduction (Jan 2)

Optional Readings

  1. Socially Responsible AI Algorithms: Issues, Purposes, and Challenges. Lu Cheng, Kush R. Varshney, Huan Liu (pp. 1–7)
  2. Managing Extreme AI Risks Amid Rapid Progress. Yoshua Bengio et al.

Week 2: Data & Models (Jan 6 & 9)

Required Readings

  1. A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle. Harini Suresh and John Guttag
  2. Datasheets for Datasets. Timnit Gebru et al. (pp. 1–8)
  3. A Primer on Mitigating Gender Biases in LLMs: Insights from the Indian Context. Digital Futures Lab (pp. 5–18)

Optional Readings

  1. Anatomy of an AI System. Kate Crawford and Vladan Joler

Week 3: Algorithmic Fairness (Jan 13 & 16)

Required Readings

  1. Fairness and Machine Learning (Chapters 3 & 4). Solon Barocas, Moritz Hardt, Arvind Narayanan
  2. Fairness Constraints: Mechanisms for Fair Classification. Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi

Optional Readings

  1. The Long History of Algorithmic Fairness. Rodrigo Ochigame
  2. Why Don’t Generative AI Models Understand Caste? Medianama
  3. Fairness in Machine Learning: Lessons from Political Philosophy. Reuben Binns

Week 4: Algorithmic Fairness – Limitations (Jan 20 & 23)

Required Readings

  1. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Alexandra Chouldechova
  2. Re-imagining Algorithmic Fairness in India and Beyond. Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi, Vinodkumar Prabhakaran

Optional Readings

  1. Addressing Strategic Manipulation Disparities in Fair Classification. Vijay Keswani, L. Elisa Celis
  2. What’s Sex Got To Do With Fair Machine Learning? Lily Hu, Issa Kohler-Hausmann
  3. Delayed Impact of Fair Machine Learning. Lydia T. Liu, Sarah Dean, Esther Rolf, Max Simchowitz, Moritz Hardt

Week 5: Causation and Actionability (Jan 27 & 30)

Required Readings

  1. The Book of Why, Chapter 1 “The Ladder of Causation”. Judea Pearl, Dana Mackenzie
  2. Actionable Recourse in Linear Classification. Berk Ustun, Alexander Spangher, Yang Liu

Optional Readings

  1. Algorithmic Recourse: from Counterfactual Explanations to Interventions. Amir-Hossein Karimi, Bernhard Schölkopf, Isabel Valera
  2. Strategic Classification is Causal Modeling in Disguise. John Miller, Smitha Milli, Moritz Hardt

Week 6: Actionability and Social Choice (Feb 3 & 6)

Required Readings

  1. The philosophical basis of algorithmic recourse. Suresh Venkatasubramanian, Mark Alfano
  2. Computational Social Choice: The First Four Centuries. Ariel D. Procaccia

Optional Readings

  1. Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy. Angelina Wang, Sayash Kapoor, Solon Barocas, Arvind Narayanan
  2. The computational difficulty of manipulating an election. J. J. Bartholdi III, C. A. Tovey, M. A. Trick