CS186

How to Make a Moral Agent

Computer Science ENGR - School of Engineering

Course Description

Is it bad if you lie to ChatGPT? Who is to blame if ChatGPT lies? Should we let superhuman AI make life-and-death decisions? These questions ask whether advanced AI systems (today, often large language models, or LLMs) can be moral agents: whether they are the kind of thing that can make (ethically) correct decisions and be held responsible for the right or wrong they do. Asking these questions leads us to questions about ourselves: What about us makes us moral agents? Is it our reason? Or is it essential that we emotionally feel each other's pain? Is selfishness irrational, or just unpleasant? Understanding ourselves can help us think about what kinds of artificial minds we would like to make and, if we can, how. In this class, we provide the philosophical rigor and technical background necessary to robustly interrogate these and related questions. Readings will be drawn from philosophy, deep learning, and the cognitive sciences. The major assessment in this class will be a term project. There will be reading assignments for every class, and a mix of lectures, discussions, and participatory in-class activities. Recommended prerequisites: PHIL 80 or multiple philosophy courses; CS 183.

Grading Basis

ROP - Letter or Credit/No Credit

Units

Min: 3
Max: 3

Course Repeatable for Degree Credit?

No

Course Component

Seminar

Enrollment Optional?

No
