Scientific Background:
Machine learning and data analysis have enjoyed tremendous success in a broad range of domains. These advances hold the promise of great benefits to individuals, organizations and society. Undeniably, algorithms are informing decisions that reach ever more deeply into our lives, from news article recommendations to criminal sentencing decisions to healthcare diagnostics. This progress, however, raises (and is impeded by) a host of concerns regarding the societal impact of computation. A prominent concern is that these algorithms should be fair. Unfortunately, the hope that automated decision-making might be free of social biases is dashed on the data on which the algorithms are trained and the choices in their construction: left to their own devices, algorithms will propagate – even amplify – existing biases of the data, the programmers, and the decisions made in the choice of features to incorporate and measurements of “fitness” to be applied. Addressing wrongful discrimination by algorithms is not only mandated by law and by ethics but is essential to maintaining the public trust in the current computation-driven revolution.
The study of fairness is ancient and multi-disciplinary: philosophers, legal experts, economists, statisticians, social scientists and others have been concerned with fairness for as long as these fields have existed. Nevertheless, the scale of decision making in the age of big-data, the computational complexities of algorithmic decision making, and simple professional responsibility mandate that computer scientists contribute to this research endeavor.
Goals:
This course aims to cover the foundations of algorithmic fairness while also reaching advanced topics. This is a booming area, and many researchers, let alone practitioners, have a vastly outdated and partial understanding of it. Entering such a rapidly moving area is challenging, and the course is planned to ease students in.
Intended Audience:
We will aim at a wide range of students: advanced undergraduates, MS students, and PhD students, from those wishing to gain some understanding of an important area to those who wish to start research. Theory students, ML students, and others are all welcome.
Prerequisites: CS 161 and 221. I will be assuming some mathematical maturity, and CS 154 is an advantage.
Current Offering: Fall 2021 – Under Construction
Instructor: Omer Reingold, Gates 196, reingold (at stanford dot edu)
CAs:
Jabari Hastings, jabarih (at stanford dot edu)
Charlotte Peale, cpeale (at stanford dot edu)
Kevin Su, ksu20841 (at stanford dot edu)
Location and Times:
Tuesday and Thursday, 10:30 AM – 11:50 AM, at 200-203
Lectures (future schedule is tentative, topics or dates may change):
Tuesday 1/10: Introduction to algorithmic fairness and course information
Thursday 1/12: The perspective of political philosophy
Additional Reading:
Fairness in political philosophy lexicon.
Tuesday 1/17: The perspective of law (guest lecture: Daniel E. Ho)
Additional Reading:
Fairness and Machine Learning: Limitations and Opportunities (Relevant chapter: Understanding United States anti-discrimination law)
Discrimination in the age of Algorithms
Thursday 1/19: Earlier notions of algorithmic fairness in CS (distributed computing, stable matching, resource allocation, envy freeness, minimax, voting, …).
Additional Reading:
Fairness and Liveness
Fairness and stability of matchings
Cake Cutting Algorithms
Fairness in Facility Location
Leximax Approximations and Cohort Selection
Tuesday 1/24: Earlier notions continued.
Thursday 1/26: ML basics: tasks and goals (in supervised and unsupervised learning) and running examples (such as college admission, job recruitment, loans, ads, bails).
Additional Reading:
Stanford CS 221
Fairness and Machine Learning: Limitations and Opportunities (Relevant chapters: introduction and classification)
On Individual Risk
Tuesday 1/31: Group notions of fairness (definitions, algorithms, ways to abuse, impossibilities, potential harm)
Additional Reading:
Fairness Through Awareness
Delayed Impact of Fair Machine Learning
Equality of Opportunity in Supervised Learning
Inherent Trade-Offs in the Fair Determination of Risk Scores
Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Fairness and Machine Learning: Limitations and Opportunities (Relevant chapters: Introduction, Classification, Relative Notions of Fairness)
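As a concrete illustration of two group-fairness criteria covered in this lecture (demographic parity and equalized odds), here is a minimal, self-contained sketch; the data, function names, and group labels are all hypothetical and are not taken from the course materials.

```python
# Hypothetical sketch: computing group fairness gaps on toy binary data.

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    def rate(g):
        members = [p for p, gr in zip(preds, groups) if gr == g]
        return sum(members) / len(members)
    return abs(rate(0) - rate(1))

def equalized_odds_gaps(preds, labels, groups):
    """Per-outcome (y=0 and y=1) gaps in positive-prediction rates between groups."""
    def rate(g, y):
        members = [p for p, l, gr in zip(preds, labels, groups) if gr == g and l == y]
        return sum(members) / len(members)
    return {y: abs(rate(0, y) - rate(1, y)) for y in (0, 1)}

# Toy data: 8 individuals, binary predictions and true outcomes, two groups.
preds  = [1, 1, 0, 0, 1, 0, 0, 0]
labels = [1, 0, 0, 0, 1, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_gap(preds, groups))          # 0.25
print(equalized_odds_gaps(preds, labels, groups))
```

A predictor can satisfy one criterion while badly violating another, which is one way into the impossibility results discussed in the readings.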
Thursday 2/2: TBD (no in-class meeting)
Tuesday 2/7: Group notions continued.
Thursday 2/9: Individual fairness.
Additional Reading:
Fairness Through Awareness
Probably Approximately Metric Fair Learning
Metric Learning for Individual Fairness
Preference-informed fairness
Fairness Through Computationally-Bounded Awareness
Metric-Free Individual Fairness in Online Learning
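The central definition in these lectures, from "Fairness Through Awareness," asks that similar individuals be treated similarly: a score function f is fair with respect to a task-specific metric d if |f(x) - f(y)| ≤ d(x, y) for every pair. A minimal auditing sketch (the scores and metric values below are hypothetical):

```python
# Hypothetical sketch: auditing the Lipschitz condition of individual fairness.
from itertools import combinations

def lipschitz_violations(scores, metric):
    """Return index pairs (i, j) whose score gap exceeds the similarity metric."""
    return [(i, j) for i, j in combinations(range(len(scores)), 2)
            if abs(scores[i] - scores[j]) > metric(i, j)]

# Toy example: scores for 3 individuals and a made-up task-specific metric d.
scores = [0.9, 0.2, 0.85]
d = {(0, 1): 0.3, (0, 2): 0.1, (1, 2): 0.3}
metric = lambda i, j: d[(min(i, j), max(i, j))]

print(lipschitz_violations(scores, metric))   # pairs (0, 1) and (1, 2) violate
```

In practice the hard part is obtaining the metric d itself, which is exactly what the metric-learning readings above address.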
Tuesday 2/14: Individual fairness continued.
Thursday 2/16: Individual fairness continued.
Tuesday 2/21: Multi-group fairness
Additional Reading:
Calibration for the (Computationally-Identifiable) Masses
Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness
Multiaccuracy: Black-Box Post-Processing for Fairness in Classification
Outcome Indistinguishability
Fairness Through Computationally-Bounded Awareness
Addressing bias in prediction models by improving subpopulation calibration
Developing a COVID-19 mortality risk prediction model when individual-level data are not available
Omnipredictors
Universal adaptability: target-independent inference that competes with propensity scoring
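One way to make the multi-group idea concrete is a multiaccuracy-style audit: a predictor should be accurate in expectation not just overall, but on every subpopulation in some rich collection C. A hedged sketch, with hypothetical data and subgroup names:

```python
# Hypothetical sketch: auditing a predictor for multiaccuracy over a
# collection of (possibly overlapping) subpopulations.

def multiaccuracy_violations(preds, outcomes, subgroups, alpha):
    """Return names of subgroups where |mean(pred) - mean(outcome)| > alpha."""
    violators = []
    for name, members in subgroups.items():
        bias = abs(sum(preds[i] - outcomes[i] for i in members) / len(members))
        if bias > alpha:
            violators.append(name)
    return violators

preds    = [0.8, 0.8, 0.3, 0.3, 0.6, 0.6]
outcomes = [1,   1,   0,   0,   0,   0]
subgroups = {
    "everyone":   [0, 1, 2, 3, 4, 5],
    "subgroup_A": [0, 1, 2, 3],
    "subgroup_B": [4, 5],   # the predictor is biased upward on this subgroup
}

print(multiaccuracy_violations(preds, outcomes, subgroups, alpha=0.25))
```

The example shows a predictor that looks fine in aggregate yet is systematically biased on one identifiable subgroup, which is the failure mode the multicalibration and multiaccuracy papers above are designed to prevent (and to correct by post-processing).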
Thursday 2/23: Multi-group fairness continued
Tuesday 2/28: Multi-group fairness + Faults in the data: fixing or giving up?
Thursday 3/2: Fairness and causality. Guest speaker: Christopher Jung
Additional Reading:
Fairness and Machine Learning: Limitations and Opportunities
Introduction to Causal Inference – Brady Neal
Tuesday 3/7: Fairness and causality, continued. Guest speaker: Christopher Jung
Thursday 3/9: Explainability and fairness. Guest speaker: Judy Shen
Tuesday 3/14: Advanced topics and course summary.
Thursday 3/16: Ask Me Anything