AI Courses at Stanford

HAI believes in the integration of AI across human-centered systems and applications in a setting that can only be offered by Stanford University. Stanford’s seven leading schools on the same campus enable HAI to offer a multidisciplinary approach to education.

Learn more about AI courses at Stanford below.

Courses and Programs

    • Stanford Students
    How do we design artificial systems that learn as we do early in life -- as "scientists in the crib" who explore and experiment with our surroundings? How do we make AI "curious" so that it explores without explicit external feedback? Topics draw from cognitive science (intuitive physics and psychology, developmental differences), computational theory (active learning, optimal experiment design), and AI practice (self-supervised learning, deep reinforcement learning). Students present readings and complete both an introductory computational project (e.g. train a neural network on a self-supervised task) and a deeper-dive project in either cognitive science (e.g. design a novel human subject experiment) or AI (e.g. implement and test a curiosity variant in an RL environment). Prerequisites: Python familiarity and practical data science (e.g. sklearn or R). A minimal illustrative sketch of a curiosity-style reward appears after this listing.

    Instructors

    • Nick Haber

    When

    • Spring

    Subject

    • PSYCH

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A
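
    A minimal sketch of the curiosity idea described above, assuming a hypothetical linear forward model and NumPy only: the intrinsic reward is the model's prediction error, which shrinks as transitions become familiar. The names, dimensions, and the model itself are illustrative assumptions, not the course's project code.

        import numpy as np

        # Hypothetical curiosity bonus: prediction error of a learned forward
        # model f(state, action) -> next_state. Everything here is a toy
        # assumption for illustration, not course material.
        rng = np.random.default_rng(0)
        state_dim, action_dim = 4, 2
        W = rng.normal(scale=0.1, size=(state_dim, state_dim + action_dim))

        def intrinsic_reward(state, action, next_state):
            """Curiosity bonus = squared prediction error of the forward model."""
            x = np.concatenate([state, action])
            return float(np.sum((W @ x - next_state) ** 2))

        def update_forward_model(state, action, next_state, lr=1e-2):
            """One SGD step; familiar transitions earn smaller bonuses over time."""
            global W
            x = np.concatenate([state, action])
            error = W @ x - next_state
            W -= lr * np.outer(error, x)

        s = rng.normal(size=state_dim)
        a = rng.normal(size=action_dim)
        s_next = rng.normal(size=state_dim)
        print(intrinsic_reward(s, a, s_next))   # surprise before learning
        update_forward_model(s, a, s_next)
        print(intrinsic_reward(s, a, s_next))   # typically smaller afterwards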

    • Stanford Students
    How do we design artificial systems that learn as we do early in life -- as "scientists in the crib" who explore and experiment with our surroundings? How do we make AI "curious" so that it explores without explicit external feedback? Topics draw from cognitive science (intuitive physics and psychology, developmental differences), computational theory (active learning, optimal experiment design), and AI practice (self-supervised learning, deep reinforcement learning). Students present readings and complete both an introductory computational project (e.g. train a neural network on a self-supervised task) and a deeper-dive project in either cognitive science (e.g. design a novel human subject experiment) or AI (e.g. implement and test a curiosity variant in an RL environment). Prerequisites: Python familiarity and practical data science (e.g. sklearn or R).

    Instructors

    • Nick Haber

    When

    • Spring

    Subject

    • EDUC

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A

    • Stanford Students
    Deep Learning is one of the most highly sought-after skills in AI. We will help you become good at Deep Learning. In this course, you will learn the foundations of Deep Learning, understand how to build neural networks, and learn how to lead successful machine learning projects. You will learn about convolutional networks, RNNs, LSTM, Adam, Dropout, BatchNorm, Xavier/He initialization, and more. You will work on case studies from healthcare, autonomous driving, sign language reading, music generation, and natural language processing. You will master not only the theory, but also see how it is applied in industry. You will practice all these ideas in Python and in TensorFlow, which we will teach. AI is transforming multiple industries. After this course, you will likely find creative ways to apply it to your work. This class is taught in the flipped-classroom format. You will watch videos and complete in-depth programming assignments and online quizzes at home, then come in to class for advanced discussions and work on projects. This class will culminate in an open-ended final project, which the teaching team will help you complete. Prerequisites: Familiarity with programming in Python and linear algebra (matrix/vector multiplications). CS 229 may be taken concurrently. A minimal illustrative Keras sketch of several of these components appears after this listing.

    Instructors

    • Andrew Ng
    • Kian Katanforoosh

    When

    • Spring
    • Winter
    • Autumn

    Subject

    • CS

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A
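
    A minimal Keras sketch in Python/TensorFlow of a few components named above (He initialization, BatchNorm, Dropout, Adam), assuming an arbitrary 784-feature input and 10 output classes; it is an illustrative sketch, not course assignment code.

        import tensorflow as tf

        # Illustrative network only: input shape and layer sizes are assumptions.
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(784,)),
            tf.keras.layers.Dense(128, kernel_initializer="he_normal"),  # He initialization
            tf.keras.layers.BatchNormalization(),                        # BatchNorm
            tf.keras.layers.ReLU(),
            tf.keras.layers.Dropout(0.5),                                # Dropout
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # Adam
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.summary()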

    • Stanford Students
    While deep learning has achieved remarkable success in supervised and reinforcement learning problems, such as image classification, speech recognition, and game playing, these models are, to a large degree, specialized for the single task they are trained for. This course will cover the setting where there are multiple tasks to be solved, and study how the structure arising from multiple tasks can be leveraged to learn more efficiently or effectively. This includes: goal-conditioned reinforcement learning techniques that leverage the structure of the provided goal space to learn many tasks significantly faster; meta-learning methods that aim to learn efficient learning algorithms that can learn new tasks quickly; curriculum and lifelong learning, where the problem requires learning a sequence of tasks, leveraging their shared structure to enable knowledge transfer. This is a graduate-level course. By the end of the course, students should be able to understand and implement state-of-the-art multi-task learning algorithms and be ready to conduct research on these topics. Prerequisites: CS 229 or equivalent. Familiarity with deep learning, reinforcement learning, and machine learning is assumed. A toy meta-learning sketch appears after this listing.

    Instructors

    • Chelsea Finn

    When

    • Autumn

    Subject

    • CS

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A
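
    A toy sketch of the meta-learning idea mentioned above, using a Reptile-style first-order update on one-parameter linear-regression tasks. Reptile stands in for the broader family of methods the course covers; the task distribution, step sizes, and names are assumptions made for illustration.

        import numpy as np

        # Reptile-style meta-learning on toy tasks y = slope * x.
        rng = np.random.default_rng(0)
        theta = 0.0                              # meta-learned initialization
        inner_lr, meta_lr, inner_steps = 0.02, 0.1, 5

        def task_grad(w, slope, x):
            """Gradient of mean squared error for predictions w * x vs. targets slope * x."""
            return np.mean(2 * x * (w * x - slope * x))

        for _ in range(2000):
            slope = rng.uniform(0.5, 1.5)        # sample a task
            x = rng.normal(size=20)
            w = theta
            for _ in range(inner_steps):         # adapt to the sampled task
                w -= inner_lr * task_grad(w, slope, x)
            theta += meta_lr * (w - theta)       # move initialization toward adapted weights

        print(theta)  # drifts toward the center of the task distribution (about 1.0)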

    • Stanford Students
    Worst- and average-case analysis. Recurrences and asymptotics. Efficient algorithms for sorting, searching, and selection. Data structures: binary search trees, heaps, hash tables. Algorithm design techniques: divide-and-conquer, dynamic programming, greedy algorithms, randomization. Algorithms for fundamental graph problems: minimum-cost spanning tree, connected components, topological sort, and shortest paths. Possible additional topics: network flow, string searching, amortized analysis, stable matchings, and approximation algorithms. Prerequisites: CS 103 or 103B; CS 109 or STATS 116. An illustrative shortest-paths sketch appears after this listing.

    Instructors

    • Mary Wootters
    • Moses Charikar
    • Nima Anari
    • Ian Tullis
    • Aviad Rubinstein

    When

    • Winter
    • Summer
    • Autumn

    Subject

    • CS

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A
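
    A short Python sketch of two topics listed above, heaps and shortest paths, via Dijkstra's algorithm with a binary heap; the graph encoding and example data are assumptions for illustration, not course material.

        import heapq

        def dijkstra(graph, source):
            """Single-source shortest paths for non-negative edge weights.
            graph maps each node to a list of (neighbor, weight) pairs."""
            dist = {source: 0}
            heap = [(0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue                     # stale heap entry
                for v, w in graph.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(heap, (nd, v))
            return dist

        g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
        print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}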

    • Stanford Students
    A project-based course that builds on the introduction to design in CS147 by focusing on advanced methods and tools for research, prototyping, and user interface design. Studio based format with intensive coaching and iteration to prepare students for tackling real world design problems. This course takes place entirely in studios; you must plan on attending every studio to take this class. The focus of CS247A is design for human-centered artificial intelligence experiences. What does it mean to design for AI? What is HAI? How do you create responsible, ethical, human centered experiences? Let us explore what AI actually is and the constraints, opportunities and specialized processes necessary to create AI systems that work effectively for the humans involved. Prerequisites: CS147 or equivalent background in design thinking.

    Instructors

    • Julie Stanford
    • Emily Yang

    When

    • Spring

    Subject

    • SYMSYS

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A

    • Stanford Students
    A project-based course that builds on the introduction to design in CS147 by focusing on advanced methods and tools for research, prototyping, and user interface design. Studio based format with intensive coaching and iteration to prepare students for tackling real world design problems. This course takes place entirely in studios; you must plan on attending every studio to take this class. The focus of CS247A is design for human-centered artificial intelligence experiences. What does it mean to design for AI? What is HAI? How do you create responsible, ethical, human centered experiences? Let us explore what AI actually is and the constraints, opportunities and specialized processes necessary to create AI systems that work effectively for the humans involved. Prerequisites: CS147 or equivalent background in design thinking.

    Instructors

    • Julie Stanford
    • Emily Yang

    When

    • Spring

    Subject

    • CS

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A

    • Stanford Students
    Artificial Intelligence (AI) has the potential to drive us towards a better future for all of humanity, but it also comes with significant risks and challenges. At its best, AI can help humans mitigate climate change, diagnose and treat diseases more effectively, enhance learning, and improve access to capital throughout the world. But it also has the potential to exacerbate human biases, destroy trust in information flow, displace entire industries, and amplify inequality throughout the world. We have arrived at a pivotal moment in the development of the technology in which we must establish a foundation for how we will design AI to capture the positive potential and mitigate the negative risks. To do this, building AI must be an inclusive, interactive, and introspective process guided by an affirmative vision of a beneficial AI future. The goal of this interdisciplinary class is to bridge the gap between technological and societal objectives: How do we design AI to promote human well-being? The ultimate aim is to provide tools and frameworks to build a more harmonious human society based on cooperation toward a shared vision. Thus, students are trained in basic science to understand what brings about the conditions for human flourishing and will create meaningful AI technologies that align with the PACE framework: 1) has a clear and meaningful purpose, 2) augments human dignity and autonomy, 3) creates a feeling of inclusivity and collaboration, 4) creates shared prosperity and a sense of forward movement (excellence). Toward this end, students work in interdisciplinary teams on a final project and propose a solution that tackles a significant societal challenge by leveraging technology and frameworks on human thriving.

    When

    N/A

    Subject

    • CS

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A

    • Stanford Students
    Artificial Intelligence (AI) has the potential to drive us towards a better future for all of humanity, but it also comes with significant risks and challenges. At its best, AI can help humans mitigate climate change, diagnose and treat diseases more effectively, enhance learning, and improve access to capital throughout the world. But it also has the potential to exacerbate human biases, destroy trust in information flow, displace entire industries, and amplify inequality throughout the world. We have arrived at a pivotal moment in the development of the technology in which we must establish a foundation for how we will design AI to capture the positive potential and mitigate the negative risks. To do this, we must be intentional about human-centered design because, "Only once we have thought hard about what sort of future we want will we be able to begin steering a course toward a desirable future. If we don't know what we want, we're unlikely to get it." Thus, building AI must be an inclusive, interactive, and introspective process guided by an affirmative vision of a beneficial AI future. The goal of this interdisciplinary class is to bridge the gap between technological and societal objectives: How do we design AI to promote human well-being? The ultimate aim is to provide tools and frameworks to build a more harmonious human society based on cooperation toward a shared vision. Thus, students are trained in basic science to understand what brings about the conditions for human flourishing and will create meaningful AI technologies that align with the PACE framework: 1) has a clear and meaningful purpose, 2) augments human dignity and autonomy, 3) creates a feeling of inclusivity and collaboration, 4) creates shared prosperity and a sense of forward movement (excellence). Toward this end, students work in interdisciplinary teams on a final project and propose a solution that tackles a significant societal challenge by leveraging technology and frameworks on human thriving.

    When

    N/A

    Subject

    • GSBGEN

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A

    • Stanford Students
    People often tend to think of technology as value neutral, as essentially objective tools that can be used for good or evil, particularly when questions of race and racial justice are involved. But the technologies we develop and deploy are frequently shaped by historical prejudices, biases, and inequalities and thus may be no less biased and racist than the underlying society in which they exist. In this discussion group, we will consider whether and how racial and other biases are present in a wide range of technologies, such as "risk assessment" algorithms for bail, predictive policing, and other decisions in the criminal justice system; facial recognition systems; surveillance tools; algorithms for medical diagnosis and treatment decisions; online housing ads that result in "digital redlining;" programs that determine entitlement to credit or public benefits and/or purport to detect fraud by recipients; algorithms used in recruiting and hiring; digital divide access gaps; and more. Building on these various case studies, we will seek to articulate a framework for recognizing both explicit and subtle anti-black and other biases in tech and understanding them in the broader context of racism and inequality in our society. Finally, we will discuss how these problems might be addressed, including by regulators, legislators, and courts as well as by significant changes in mindset and practical engagement by technology developers and educators. Elements used in grading: Full attendance, reading of assigned materials, and active participation. Class meets 4:30 PM-6:00 PM on Sept. 29, Oct. 13, Oct. 27, Nov. 10.

    Instructors

    • Phillip Malone

    When

    • Autumn

    Subject

    • LAW

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A

    • Stanford Students
    We will consider the developing legal and ethical problems of robots and artificial intelligence (AI), particularly self-directed and learning AIs. How do self-driving cars (or autonomous weapons systems) value human lives? How do we trade off accuracy against other values in predictive algorithms? At what point should we consider AIs autonomous entities with their own rights and responsibilities? And how can courts and legislatures set legal rules robots can understand and obey? This discussion seminar will meet four times during the Fall quarter. Meeting dates and times to be arranged by instructor. Elements used in grading: Attendance and class participation.

    When

    N/A

    Subject

    • LAW

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A

    • Stanford Students
    Machine learning has become an indispensable tool for creating intelligent applications, accelerating scientific discoveries, and making better data-driven decisions. Yet, the automation and scaling of such tasks can have troubling negative societal impacts. Through practical case studies, you will identify issues of fairness, justice and truth in AI applications. You will then apply recent techniques to detect and mitigate such algorithmic biases, along with methods to provide more transparency and explainability to state-of-the-art ML models. Finally, you will derive fundamental formal results on the limits of such techniques, along with tradeoffs that must be made for their practical application. Prerequisites: CS229 or equivalent classes or experience. An illustrative fairness-metric sketch appears after this listing.

    Instructors

    • Carlos Guestrin

    When

    • Spring

    Subject

    • CS

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A
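
    A minimal sketch of one common bias measurement in the spirit of the description above: the demographic-parity gap, i.e. the difference in positive-prediction rates across two groups. This is one standard metric rather than the course's specific techniques, and the data and names are assumptions for illustration.

        import numpy as np

        def demographic_parity_gap(predictions, groups):
            """Absolute difference in positive-prediction rates between two groups.
            predictions: 0/1 model outputs; groups: 0/1 group labels."""
            predictions = np.asarray(predictions)
            groups = np.asarray(groups)
            rate_0 = predictions[groups == 0].mean()
            rate_1 = predictions[groups == 1].mean()
            return abs(rate_0 - rate_1)

        # Illustrative data only: the model flags group 1 far more often than group 0.
        preds = np.array([1, 0, 0, 0, 1, 1, 1, 0])
        groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
        print(demographic_parity_gap(preds, groups))  # 0.5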

    • Stanford Students
    Examination of recent developments in computing technology and platforms through the lenses of philosophy, public policy, social science, and engineering. Course is organized around four main units: algorithmic decision-making and bias; data privacy and civil liberties; artificial intelligence and autonomous systems; and the power of private computing platforms. Each unit considers the promise, perils, rights, and responsibilities at play in technological developments. Prerequisite: CS106A. Elements used in grading: Attendance, class participation, written assignments, coding assignments, and final exam. Cross-listed with Communication (COMM 180), Computer Science (CS 182), Ethics in Society (ETHICSOC 182), Philosophy (PHIL 82), Political Science (POLISCI 182), Public Policy (PUBLPOL 182).

    When

    N/A

    Subject

    • LAW

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A

    • Stanford Students
    Examination of recent developments in computing technology and platforms through the lenses of philosophy, public policy, social science, and engineering.  Course is organized around five main units: algorithmic decision-making and bias; data privacy and civil liberties; artificial intelligence and autonomous systems; the power of private computing platforms; and issues of diversity, equity, and inclusion in the technology sector.  Each unit considers the promise, perils, rights, and responsibilities at play in technological developments. Prerequisite: CS106A.

    Instructors

    • Mehran Sahami
    • Rob Reich
    • Keertan Kini
    • Adrian Liu
    • Cathy Yang
    • Jeffrey Propp
    • Crystal Liu
    • Chloe Stowell
    • Asa Kohrman
    • Shreya Venkat
    • Daniel Guillen
    • Elena Berman
    • Amber Yang
    • Ece Korkmaz
    • Yilin Wu
    • Kathryn Larkin
    • Shanduojiao Jiang
    • Jeremy Weinstein

    When

    • Winter

    Subject

    • PUBLPOL

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A

    • Stanford Students
    Examination of recent developments in computing technology and platforms through the lenses of philosophy, public policy, social science, and engineering.  Course is organized around five main units: algorithmic decision-making and bias; data privacy and civil liberties; artificial intelligence and autonomous systems; the power of private computing platforms; and issues of diversity, equity, and inclusion in the technology sector.  Each unit considers the promise, perils, rights, and responsibilities at play in technological developments. Prerequisite: CS106A.

    Instructors

    • Mehran Sahami
    • Rob Reich
    • Keertan Kini
    • Adrian Liu
    • Cathy Yang
    • Jeffrey Propp
    • Crystal Liu
    • Chloe Stowell
    • Asa Kohrman
    • Shreya Venkat
    • Daniel Guillen
    • Elena Berman
    • Amber Yang
    • Ece Korkmaz
    • Yilin Wu
    • Kathryn Larkin
    • Shanduojiao Jiang
    • Jeremy Weinstein

    When

    • Winter

    Subject

    • PHIL

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A

    • Stanford Students
    (Cross-listed with LAW 4052.) Course surveys current and emerging legal and governance problems related to humanity's relationship to artificially constructed intelligence. To deepen students' understanding of legal and governance problems in this area, course explores definitions and foundational concepts associated with AI, likely pathways of AI's evolution, different types of law and policy concerns raised by existing and future versions of AI, and the distinctive domestic and international political economies of AI governance. Course also covers topics associated with the design and development of AI as they relate to law and governance, such as measuring algorithmic bias and explainability of AI models. Cross-cutting themes include: how law and policy affect the way important societal decisions are justified; the balance of power and responsibility between humans and machines in different settings; the incorporation of multiple values into AI decision-making frameworks; the interplay of norms and formal law; technical complexities that may arise as society scales deployment of AI systems; AI's implications for transnational law and governance and geopolitics; and similarities and differences to other domains of human activity raising regulatory trade-offs and affected by technological change. Note: Course is designed both for students who want a survey of the field and lack any technical knowledge, as well as students who want to gain tools and ideas to deepen their existing interest or technical background in the topic. Taught by a sitting judge, a former EU Parliament member, and a law professor, and conceived to serve students with interest in law, business, public policy, design, and ethics. Course includes lectures, practical exercises, and student-led discussion and presentations. CONSENT APPLICATION: To accommodate as many students as possible, please fill out the following application by March 12, 2021 in order to facilitate planning and confirm your level of interest: https://docs.google.com/forms/d/e/1FAIpQLSfwRxaM1omTsJmK9k0gksdS5jBPRz-YCuYhRUpDlVXXglDHjg/viewform. Applications received after deadline will be considered on a rolling basis pending space. Application also available on SLS website (Click Courses at the bottom of homepage and then click Consent of Instructor Forms).

    When

    N/A

    Subject

    • INTLPOL

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A

    • Stanford Students
    Even just a generation ago, interest in "artificial intelligence" (AI) was largely confined to academic computer science, philosophy, engineering, and science fiction. Today the term is understood to encompass not only long-term efforts to simulate the general intelligence associated with humans, but also fast-evolving technologies (such as elaborate neural networks leveraging vast amounts of data) with the potential to reshape finance, transportation, health care, national security, advertising and social media, and other fields. Taught by a sitting judge, a former EU Parliament member, and a law professor, and conceived to serve students with interest in law, business, public policy, design, and ethics, this interactive course surveys current and emerging legal and governance problems related to humanity's relationship to artificially-constructed intelligence. To deepen students' understanding of legal and governance problems in this area, the course explores definitions and foundational concepts associated with AI, likely pathways of AI's evolution, different types of law and policy concerns raised by existing and future versions of AI, and the distinctive domestic and international political economies of AI governance. We will consider discrete settings where regulation of AI is emerging as a challenge or topic of interest, among them: autonomous vehicles, autonomous weapons, labor market decisions, AI in social media/communications platforms, judicial and governmental decision-making, and systemic AI safety problems; the growing body of legal doctrines and policies relevant to the development and control of AI such as the European Union's General Data Protection Regulation and the California Consumer Privacy Act; the connection between governance of manufactured intelligence and related bodies of law, such as administrative law, torts, constitutional principles, civil rights, criminal justice, and international law; and new legal and governance arrangements that could affect the development and use of AI. We will also cover topics associated with the design and development of AI as they relate to law and governance, such as measuring algorithmic bias and explainability of AI models. Cross-cutting themes will include: how law and policy affect the way important societal decisions are justified; the balance of power and responsibility between humans and machines in different settings; the incorporation of multiple values into AI decision-making frameworks; the interplay of norms and formal law; technical complexities that may arise as society scales deployment of AI systems; AI's implications for transnational law and governance and geopolitics; and similarities and differences to other domains of human activity raising regulatory trade-offs and affected by technological change. Note: The course is designed both for students who want a survey of the field and lack any technical knowledge, as well as students who want to gain tools and ideas to deepen their existing interest or technical background in the topic. Students with longer-term interest in or experience with the subject are welcome to do a more technically-oriented paper or project in connection with this class. But technical knowledge or familiarity with AI is not a prerequisite, as various optional class sessions and readings as well as certain in-class material will help provide necessary background. Requirements: The course involves a mix of lectures, practical exercises, and student-led discussion and presentations. 
Elements used in grading: Requirements include attendance, participation in a student-led group presentation and a group-based practical exercise, two short 3-5 pp. response papers, and either an exam or research paper. After the term begins, students accepted into the course can transfer, with consent of the instructor, from section (01) into section (02), which meets the R requirement. CONSENT APPLICATION: We will try to accommodate as many people as possible with interest in the course. But to facilitate planning and confirm your level of interest, please fill out an application available at https://docs.google.com/forms/d/e/1FAIpQLSfwRxaM1omTsJmK9k0gksdS5jBPRz-YCuYhRUpDlVXXglDHjg/viewform by March 12, 2021. Applications received after the deadline will be considered on a rolling basis if space is available. The application is also available on the SLS website (Click Courses at the bottom of the homepage and then click Consent of Instructor Forms). Cross-listed with International Policy (INTLPOL 364).

    When

    N/A

    Subject

    • LAW

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A

    • Stanford Students
    How will high performance computing and artificial intelligence change the way you live, work and learn? What skill sets will you need in the future? The HPC-AI Summer Seminar Series, presented by the Stanford High Performance Computing Center and the HPC-AI Advisory Council, combines thought leadership and practical insights with topics of great societal importance and responsibility, ranging from applications, tools, and techniques to emerging trends and technologies. These experts and influencers who are shaping our HPC and AI future will share their vision and will address audience questions. The overarching theme this year is the potential influence and impact of HPC and AI in the battle against COVID-19. Students of all academic backgrounds and interests are encouraged to register for this 1-unit course. No prerequisites required. Register early.

    Instructors

    • Steve Jones

    When

    • Summer

    Subject

    • ME

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A

    • Stanford Students
    This course will explore the promise and limits of artificial intelligence (AI) through the lens of human cognition. Amid whispers of robots one day taking over the world, it is tempting to imagine that AI is (or soon will be) all-powerful. But few of us understand how AI works, which may lead us to overestimate its current (and even its future) capabilities. As it turns out, intelligence is complicated to build, and while computers outperform humans in many ways, they also fail to replicate key features of human intelligence, at least for now. We will take a conceptual, non-technical approach (think: reading essays, not writing code). Drawing upon readings from philosophy of science, computer science, and cognitive psychology, we will examine the organizing principles of AI versus human intelligence, and the capabilities and limitations that follow. Computers vastly outperform humans in tasks that require large amounts of computational power (for example, solving complex mathematical equations). However, you may be surprised to learn the ways in which humans outperform computers. What is it about the human brain that allows us to understand and appreciate humor, sarcasm, and art? How do we manage to drive a car without hitting pedestrians? Is it only a matter of time before computers catch up to these abilities, or are there differences of kind (rather than degree) that distinguish human intelligence from AI? Will robots always be constrained to the tasks that humans program them to do, or could they, one day, take over the world? By the end of this course, you will be able to discuss the current capabilities, future potential, and fundamental limitations of AI. You may also arrive at a newfound appreciation for human intelligence, and for the power of your own brain.

    Instructors

    • Christina Chick

    When

    • Spring

    Subject

    • PSYCH

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A

    • Stanford Students
    Understanding the human side of AI/ML-based systems requires understanding both how the system-side AI works and how people think about, understand, and use AI tools and systems. This course will cover what AI components and systems currently exist, along with how mental models and user models are formed. These models shape user expectations of AI systems and ultimately lead to design guidelines for avoiding unintelligible AI tools that disappoint end-users with a cryptic depiction of how things work. We'll also cover the ethics of AI data collection and model building, as well as how to build fair systems.

    When

    N/A

    Subject

    • CS

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A

    • Stanford Students
    Applied linear algebra and linear dynamical systems with applications to circuits, signal processing, communications, and control systems. Topics: least-squares approximations of overdetermined equations and least-norm solutions of underdetermined equations. Symmetric matrices, matrix norm, and singular-value decomposition. Eigenvalues, left and right eigenvectors, with dynamical interpretation. Matrix exponential, stability, and asymptotic behavior. Multi-input/multi-output systems, impulse and step matrices; convolution and transfer-matrix descriptions. Control, reachability, and state transfer; observability and least-squares state estimation. Prerequisites: Linear algebra and matrices as in ENGR 108 or MATH 104; ordinary differential equations and Laplace transforms as in EE 102B or CME 102. A short numerical sketch of the least-squares and least-norm solutions appears after this listing.

    Instructors

    • Nick Landolfi
    • Sanjay Lall

    When

    • Summer
    • Autumn

    Subject

    • EE

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A
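
    A short NumPy sketch of the first topics listed above: a least-squares solution of an overdetermined system and a minimum-norm solution of an underdetermined one. Matrix sizes and random data are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        # Overdetermined A x = b (more equations than unknowns):
        # the least-squares solution minimizes ||A x - b||.
        A = rng.normal(size=(10, 3))
        b = rng.normal(size=10)
        x_ls, residuals, rank, svals = np.linalg.lstsq(A, b, rcond=None)
        print(x_ls, np.linalg.norm(A @ x_ls - b))        # fit and residual norm

        # Underdetermined C x = d (fewer equations than unknowns):
        # the pseudoinverse gives the minimum-norm solution.
        C = rng.normal(size=(3, 10))
        d = rng.normal(size=3)
        x_ln = np.linalg.pinv(C) @ d
        print(np.linalg.norm(x_ln), np.linalg.norm(C @ x_ln - d))  # norm and ~0 residual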

    • Stanford Students
    The field of machine programming (MP) is concerned with the automation of software development. Given recent advances in algorithms, hardware efficiency and capacity, and an ever-increasing availability of code data, it is now possible to train machines to help develop software. In this course, we teach students how to build real-world MP systems. We begin by explaining the foundations of MP. Next, we analyze the current state-of-the-art MP systems (e.g., DeepMind's AlphaCode, GitHub's Copilot, Merly's MP-CodeCheck). We close with a discussion of current limitations and future utility in MP. This course also includes a six-week hands-on project, where students (as individuals or in a small group) will create their own MP system and demonstrate it to the class. A toy program-synthesis sketch appears after this listing.

    Instructors

    • Justin Gottschlich

    When

    • Autumn

    Subject

    • CS

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A
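
    A toy stand-in for the machine programming theme above: brute-force enumeration of a tiny arithmetic grammar until an expression matches all input-output examples. This is a miniature instance of program synthesis, not the systems named in the description, and the grammar, depth limit, and names are assumptions for illustration.

        import itertools

        def synthesize(examples, max_depth=2):
            """Enumerate small arithmetic expressions f(x) until one matches every
            (input, output) example; a toy instance of enumerative synthesis."""
            ops = {"+": lambda a, b: a + b,
                   "-": lambda a, b: a - b,
                   "*": lambda a, b: a * b}
            # Terminals: the input variable x and a few small integer constants.
            terms = [("x", lambda x: x)] + [(str(c), lambda x, c=c: c) for c in range(4)]
            level = list(terms)
            for _ in range(max_depth):
                grown = []
                for (sa, fa), (sb, fb), (name, op) in itertools.product(level, terms, ops.items()):
                    grown.append((f"({sa} {name} {sb})",
                                  lambda x, fa=fa, fb=fb, op=op: op(fa(x), fb(x))))
                level = level + grown
                for expr, f in level:
                    if all(f(x) == y for x, y in examples):
                        return expr
            return None

        # Find an expression mapping 1 -> 3, 2 -> 5, 3 -> 7 (equivalent to 2x + 1).
        print(synthesize([(1, 3), (2, 5), (3, 7)]))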

    • Stanford Students
    This course provides a mathematical introduction to the following questions: What is computation? Given a computational model, what problems can we hope to solve in principle with this model? Besides those solvable in principle, what problems can we hope to efficiently solve? In many cases we can give completely rigorous answers; in other cases, these questions have become major open problems in computer science and mathematics. By the end of this course, students will be able to classify computational problems in terms of their computational complexity (Is the problem regular? Not regular? Decidable? Recognizable? Neither? Solvable in P? NP-complete? PSPACE-complete? etc.). Students will gain a deeper appreciation for some of the fundamental issues in computing that are independent of trends of technology, such as the Church-Turing Thesis and the P versus NP problem. Prerequisites: CS 103 or 103B. A short finite-automaton sketch appears after this listing.

    Instructors

    • Omer Reingold

    When

    • Autumn

    Subject

    • CS

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A
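
    A short sketch of the most basic machine model in the description above: simulating a deterministic finite automaton that recognizes a regular language (binary strings with an even number of 1s). The encoding and example are assumptions for illustration, not course material.

        def run_dfa(s, start, accepting, delta):
            """Simulate a deterministic finite automaton on input string s.
            delta maps (state, symbol) pairs to the next state."""
            state = start
            for symbol in s:
                state = delta[(state, symbol)]
            return state in accepting

        # DFA over {0, 1} accepting strings with an even number of 1s.
        delta = {("even", "0"): "even", ("even", "1"): "odd",
                 ("odd", "0"): "odd", ("odd", "1"): "even"}
        print(run_dfa("1001", "even", {"even"}, delta))   # True: two 1s
        print(run_dfa("10110", "even", {"even"}, delta))  # False: three 1s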

    • Stanford Students
    Since your first week of law school, you have been reading legal opinions written by judges. Who were those judges and did their identities affect their views? From a judge's perspective, what makes a case hard or easy? Did the process by which the judge was selected--or could be removed from office--influence her or his decision? How do judges make choices about the larger legal ecosystem in which you will practice law? After all, judges determine many aspects of the legal environment in which lawyers operate, from whether you can livestream a court hearing from your phone to whether you will take the bar exam in person or online. Taught by a Justice on a California Court of Appeal, this seminar explores judicial decision making about cases and the court system from a variety of perspectives. It draws from accounts by social scientists, lawyers, and judges themselves, analyzing what judges do and critiquing how they do it. The seminar examines systems of judicial selection, evaluation, and removal in both the federal and state court systems and their potential effects on judicial decision making. We will take up questions such as whether the identity of judges matters to their decisions, how heuristics or implicit biases might influence outcomes, how communities try to choose "good" judges and what they do when those choices go wrong, evaluate efforts to diversify the bench, and consider what lessons might be learned from the experiences of various states in evaluating and electing judges. One theme of the seminar involves the interaction of judges with litigants, the public, and other government actors--on twenty-first-century terms. We will ask how courts should manage questions related to transparency, privacy, access to justice, and technology. We will think about how judges might choose or be compelled to rely on emerging automation technologies, whether simple algorithms or advanced machine learning. We also will consider the extent to which judges do and should take into account the views of executive officials, legislators, nongovernmental organizations, and members of the general public when deciding cases and structuring the legal system. In addition, we will look at ethics rules governing what judges can learn and what they can say. For example, can or should a judge run an experiment that tests a litigant's factual assertion, or, in her free time, write an online product review, lead a religious group, or participate in a commission to improve state government? The seminar will pursue these questions from both theoretical and practical perspectives. Sitting judges from a variety of courts will share their insights with seminar participants. Students will write a research paper on a relevant topic of their choice, and will be encouraged to think critically about how judges make decisions and how courts can be improved in realistic ways. We will think together about how judges and courts can best deliver justice in a changing, contested, unequal, and increasingly complex world. Elements used in grading: Attendance, Class Participation, Written Assignments, Research Paper.

    Instructors

    • Allison Danner

    When

    • Autumn

    Subject

    • LAW

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A

    • Stanford Students
    In recent years, artificial intelligence (AI) has made the jump from science fiction to technical viability to product reality. Industries as far flung as finance, transportation, defense, and healthcare invest billions in the field. Patent filings for robotics and machine learning applications have surged. And policymakers are beginning to grapple with technologies once confined to the realm of computer science, such as predictive analytics and neural networks. AI's rise to prominence came thanks to a confluence of factors. Increased computing power, large-scale data collection, and advancements in machine learning---all accompanied by dramatic decreases in costs---have resulted in machines that now have the ability to exhibit complex "intelligent" behaviors. They can navigate in real-world environments, process natural language, diagnose illnesses, predict future events, and even conquer strategy games. These abilities, in turn, have allowed companies and governments to entrust machines with responsibilities once exclusively reserved for humans---including influencing hiring decisions, bail release conditions, loan considerations, medical treatment and police deployment. But with these great new powers, of course, come great new responsibilities. The first public deployments of AI have seen ample evidence of the technology's disruptive---and destructive---capabilities. AI-powered systems have killed and maimed, filled social networks with hate, and been accused of shaping the course of elections. And as the technology proliferates, its governance will increasingly fall upon lawyers involved in the design and development of new products, oversight bodies and government agencies. AI is the biggest addition to technology law and policy since the rise of the internet, and its influence spreads far beyond the tech sector. As such, those entering practice in a wide variety of fields need to understand AI from the ground up in order to competently assess and influence its policy, legal and product implications as deployments scale across industries in the coming years. This course is designed to teach precisely that. It seeks to equip students with an understanding of the basics of AI and machine learning systems by studying the implications of the technology along the design/deployment continuum, moving from (1) system inputs (data collection) to (2) system design (engineering) and finally to (3) system outputs (product features). This input/design/output framework will be used throughout the course to survey substantive engineering, policy and legal issues arising at each of those key stages. In doing so, the course will span topics including privacy, bias, discrimination, intellectual property, torts, transparency and accountability. The course will also feature leading experts from a variety of AI disciplines and professional backgrounds. An important aspect of the course is gaining an understanding of the technical underpinnings of AI, which will be packaged in an easy-to-understand, introductory manner with no prior technical background required. The writing assignments will center on reflection papers on legal, regulatory and policy analysis of current issues involving AI. The course will be offered for two units of credit (H/P/R/F). Grading will be determined by attendance, class participation and written assignments. Given the course's multi-disciplinary focus, students outside of the law school, particularly those studying computer science, engineering or business, are welcome. 
CONSENT APPLICATION: To apply for this course, students must complete and submit a Consent Application Form available on the SLS website (Click Courses at the bottom of the homepage and then click Consent of Instructor Forms). See Consent Application Form for instructions and submission deadline.

    When

    N/A

    Subject

    • LAW

    Delivery Method

    N/A

    Time Commitment

    Academic Quarter

    Earned Outcome

    N/A

    Cost

    N/A