Human decision making is increasingly being displaced by predictive algorithms. Judges sentence defendants based on statistical risk scores; regulators take enforcement actions based on predicted violations; advertisers target materials based on demographic attributes; and employers evaluate applicants and employees based on machine-learned models. One concern with the rise of such algorithmic decision making is that it may replicate or exacerbate human bias. This course surveys the legal and ethical principles for assessing the equity of algorithms, describes statistical techniques for designing fairer systems, and considers how anti-discrimination law and the design of algorithms may need to evolve to account for machine bias. Concepts will be developed in part through guided in-class coding exercises. Admission is by consent of the instructor and limited to 20 students. To enroll, complete the course application by March 15 at https://5harad.com/mse330/. Grading is based on response papers, class participation, and a final project.