Instructor: Vijay Keswani
Classes: Tues-Fri 2.00-3.30 pm at LH 605
Office Hours: Wed 2.00-4.00 pm at Bharti 418
Sign up at this link for office hours, or email to set up another time if the above doesn't work.
The availability of large datasets and massive computing power has led to a surge in the use of AI and ML-based tools to make decisions about humans. Applications of these tools span a variety of domains: healthcare, for diagnosis and patient management; judicial and police settings, to predict recidivism and crime patterns; banking, to determine creditworthiness; and even recruitment, to screen resumes. Yet, despite the prevalence of possible use cases, AI and ML tools are also associated with, and sometimes the source of, a range of social issues that limit their real-world usability, such as performance disparities across demographic groups, misrepresentation of minorities in data from generative models, uninterpretable decision processes, and misaligned objectives.
The field of Responsible AI and ML studies how we can harness the benefits of these technologies while avoiding the harms that have already been documented and those likely to arise in the future. Research in this field proposes methods to investigate and, wherever possible, mitigate the issues associated with the use of these technologies. Issues like AI-based discrimination and misalignment arise during interactions of AI and ML tools with individuals (e.g., in online spaces) and societal institutions (e.g., in courts and hospitals). Discovering and formalizing them requires understanding both the internal workings of AI/ML tools and the social dynamics of the communities and institutions they impact, making this field deeply interdisciplinary.
The purpose of this course is twofold: (a) to introduce students to well-documented AI/ML harms arising in real-world local and global applications, and (b) to prepare them to apply general techniques to evaluate and mitigate future AI/ML-based harms in common societal domains.
Students are expected to have a general understanding of AI/ML basics, e.g., working knowledge of probability, statistics, and optimization as they relate to the development of AI/ML systems.
The course will introduce popular topics in the field of responsible AI from the perspective of the common real-world ethical and practical issues that have been extensively studied in this literature. The topics will be divided into two sections: (i) harms that arise from a flawed AI/ML development pipeline, which require in-depth investigation of this pipeline (to be covered in Weeks 1–8); and (ii) issues encountered when AI is employed in institutions to assist or replace humans in existing decision-making setups, which require understanding the role of AI in broader institutional settings (to be covered in Weeks 9–14). The topics to be covered are listed below.
Final grades will be based on: