Prospective Students
- To sign up for the course, please fill in this form.
- For the Fall 2024 iteration of the course, click here.
Course Staff
| Instructor | (Guest) Co-instructor | (Guest) Co-instructor |
| --- | --- | --- |
| Dawn Song | Xinyun Chen | Kaiyu Yang |
| Professor, UC Berkeley | Research Scientist, Google DeepMind | Research Scientist, Meta FAIR |
Course Description
Large language model (LLM) agents have become an important frontier in AI. However, they still fall short in critical skills, such as complex reasoning and planning, that are needed to solve hard problems and enable end-to-end applications in real-world scenarios. Building on our previous course, this course dives deeper into advanced topics in LLM agents, focusing on reasoning, AI for mathematics, code generation, and program verification. We begin by introducing advanced inference and post-training techniques for building LLM agents that can search and plan. Then, we focus on two application domains: mathematics and programming. We study how LLMs can be used to prove mathematical theorems, as well as to generate and reason about computer programs. Specifically, we will cover the following topics:
- Inference-time techniques for reasoning
- Post-training methods for reasoning
- Search and planning
- Agentic workflows, tool use, and function calling
- LLMs for code generation and verification
- LLMs for mathematics: data curation, continual pretraining, and finetuning
- LLM agents for theorem proving and autoformalization
Syllabus
To Be Announced
Completion Certificate
Coming Soon!