CS 598: Systems for Generative AI (F'24)

Logistics

Lectures: 0216 Siebel Center for Computer Science, MW: 9:30 AM – 10:45 AM

  • Fan Lai (fanlai), Instructor. Office hours: 3128 Siebel Center, M 11:00 AM – 12:00 PM
  • Chengsong Zhang (cz81), TA. Office hours: Zoom, W 11:00 AM – 12:00 PM

Piazza: ALL communication regarding this course must be via Piazza. This includes questions, discussions, and announcements, as well as private messages.

Presentation slides and paper summaries should be emailed to cs598-aisys-staff@lists.cs.illinois.edu.

Course Description

Learning Objectives: This course will introduce the key concepts and the state of the art in practical, scalable, and fault-tolerant software systems for emerging Generative AI (GenAI). At the end of the course, you will be able to:

  • Critique and evaluate the design details of state-of-the-art GenAI systems
  • Develop and utilize tools to profile and understand the performance of GenAI systems
  • Propose new research ideas on topics related to supporting practical GenAI

Structure: The course will be a mix of lectures, student presentations, seminar-style discussions, and a semester-long project on GenAI topics. We will cover GenAI topics from top conferences that take a systems view of the relevant challenges, including:

  • Basics of GenAI models from a systems perspective;
  • Systems for GenAI lifecycle (pre-training, training, fine-tuning/alignment, inference serving, and grounding);
  • GenAI for systems, and more.

Note that this course is NOT focused on AI methods. Instead, we will focus on how one can build software systems so that existing AI methods can be used in practice and new AI methods can emerge.

Prerequisites: Students are expected to have good programming skills and must have taken at least one undergraduate-level systems-related course (from operating systems, databases, distributed systems, or networking). Having an undergraduate ML/AI course is helpful but not required.

Tentative Schedule and Reading List

This is an evolving list and subject to change due to the breakneck pace of GenAI innovation.

Each entry below lists the date, the readings, and the assigned Presenter, Companion, and Reviewer groups.

Aug 26: Introduction
  • How to Read a Paper (Required)
  • How to Give a Bad Talk (Required)
  • Writing Reviews for Systems Conferences
  • The Shift from Models to Compound AI Systems
Presenter: Fan

Aug 28
  • The Illustrated Transformer (Required)
  • FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness (Required)
  • Attention Is All You Need
  • The Transformer Family Version 2.0
Presenter: Banruo, Yifan

Sept 4
  • The Illustrated Stable Diffusion (Required)
  • VideoPoet: A Large Language Model for Zero-Shot Video Generation (Required)
  • Scalable Diffusion Models with Transformers
  • Hierarchical Text-Conditional Image Generation with CLIP Latents
Presenter: Chengsong

Sept 9: No Lecture / Work on Project Proposal
  • Worse is Better (Required)
  • Hints and Principles for Computer System Design

Sept 11
  • Multimodality and Large Multimodal Models (LMMs) (Required)
  • Visual Instruction Tuning
  • DeepSpeed-VisualChat: Multi-Round Multi-Image Interleave Chat via Multi-Modal Causal Attention
  • Flamingo: a Visual Language Model for Few-Shot Learning
Presenter: Ren, Zihan, Shuangliang | Companion: Gabriella, Han-Ting, Kai-Siang

Sept 16
  • Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer (Required)
  • DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale (Required)
  • Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity
  • Scaling Vision-Language Models with Sparse Mixture of Experts
Presenter: Akul, Aditi, Shraddhaa | Companion: Ayush, Ritul, Shashwat

Pre-Training
Sept 18
  • The Llama 3 Herd of Models (Sec 1-4, Required)
  • Gemini: A Family of Highly Capable Multimodal Models
Presenter: Enguang, Steven | Companion: Haocheng, Jackson, Ziqi | Reviewer: Saad, Andrew, Yuhang

Sept 23
  • Alpa: Automating Inter- and Intra-Operator Parallelism for Distributed Deep Learning (Required)
  • Perseus: Removing Energy Bloat from Large Model Training (Required)
  • LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers
Presenter: Jiahao, Xinyu, Zhiyu | Companion: Michael, Yikang, Nikhil | Reviewer: Viswajith, Arijus & Nicholas, Anusha

Sept 25
  • MegaScale: Scaling Large Language Model Training to More Than 10,000 GPUs (Required)
  • RDMA over Ethernet for Distributed AI Training at Meta Scale (Required)
  • GEMINI: Fast Failure Recovery in Distributed Training with In-Memory Checkpoints
Presenter: Jimmy, Yiming, Kartik | Companion: Saad, Andrew, Yuhang | Reviewer: Yueshen, Henry, Jianyuan

Sept 30
  • SimAI: Unifying Architecture Design and Performance Tuning for Large-Scale Large Language Model Training with Scalability and Precision (Required)
  • Vidur: A Large-Scale Simulation Framework For LLM Inference (Required)
  • LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale
Presenter: Kaizhuo, Haoyang | Companion: Akul, Aditi, Shraddhaa | Reviewer: Michael, Yikang, Nikhil

Alignment & Post-Training Optimization
Oct 2
  • LoRA: Low-Rank Adaptation of Large Language Models (Required)
  • S-LoRA: Serving Thousands of Concurrent LoRA Adapters (Required)
  • Stylus: Automatic Adapter Selection for Diffusion Models
Presenter: Nicholas, Anusha | Companion: Banruo, Yifan | Reviewer: Muyan, Xin, Jiachen

Oct 7
  • LIMA: Less Is More for Alignment (Required)
  • Finetuned Language Models Are Zero-Shot Learners (Required)
  • Training Language Models to Follow Instructions with Human Feedback
Presenter: Yueshen, Henry, Jianyuan | Companion: Kaizhuo, Haoyang | Reviewer: Jinwei, Yuxuan, Tengjun

Oct 9
  • LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale (Required)
  • AWQ: Activation-aware Weight Quantization for On-Device LLM Compression and Acceleration (Required)
  • GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers
Presenter: Viswajith, Arijus | Companion: Ren, Zihan, Shuangliang | Reviewer: Jimmy, Yiming, Kartik

Grounding & Inference
Oct 14
  • REALM: Retrieval-Augmented Language Model Pre-Training (Required)
  • CacheBlend: Fast Large Language Model Serving for RAG with Cached Knowledge Fusion (Required)
  • Improving Language Models by Retrieving from Trillions of Tokens
Presenter: Saad, Andrew, Yuhang | Companion: Muyan, Xin, Jiachen | Reviewer: Jiahao, Xinyu, Zhiyu

Oct 16: No Lecture / Prep for the Midterm

Oct 21: Mid-Semester Presentations
Oct 23: Mid-Semester Presentations

Oct 28
  • SpecInfer: Accelerating Large Language Model Serving with Tree-based Speculative Inference and Verification (Required)
  • PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU (Required)
  • MemGPT: Towards LLMs as Operating Systems
Presenter: Michael, Yikang, Nikhil | Companion: Jimmy, Yiming, Kartik | Reviewer: Haocheng, Jackson, Ziqi

Oct 30
  • Efficient Memory Management for Large Language Model Serving with PagedAttention (Required)
  • Taming Throughput-Latency Tradeoff in LLM Inference with Sarathi-Serve (Required)
  • SGLang: Efficient Execution of Structured Language Model Programs
Presenter: Jinwei, Yuxuan, Tengjun | Companion: Enguang, Steven & Nicholas, Anusha | Reviewer: Ren, Zihan, Shuangliang & Kaizhuo, Haoyang

Nov 4
  • OMS-DPM: Optimizing the Model Schedule for Diffusion Probabilistic Models (Required)
  • Are More LM Calls All You Need? Towards the Scaling Properties of Compound AI Systems (Required)
  • DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models
Presenter: Muyan, Xin, Jiachen | Companion: Jiahao, Xinyu, Zhiyu | Reviewer: Gabriella, Han-Ting, Kai-Siang

ML for Systems
Nov 6
  • NetLLM: Adapting Large Language Models for Networking (Required)
  • Talk like a Graph: Encoding Graphs for Large Language Models (Required)
  • LLM-ABR: Designing Adaptive Bitrate Algorithms via Large Language Models
Presenter: Gabriella, Han-Ting, Kai-Siang | Companion: Ayush, Ritul, Shashwat | Reviewer: Enguang, Steven & Banruo, Yifan

Nov 11
  • Parrot: Efficient Serving of LLM-based Applications with Semantic Variable (Required)
  • FLASH: Fast Model Adaptation in ML-Centric Cloud Platforms (Required)
  • Adapting Foundation Models for Operator Data Analytics
Presenter: Haocheng, Jackson, Ziqi | Companion: Viswajith, Arijus | Reviewer: Yueshen, Henry, Jianyuan

Nov 13: No Lecture / Recalibrate Projects

Nov 18
  • Extracting Training Data from Large Language Models (Required)
  • Extracting Training Data from Diffusion Models (Required)
  • Identifying and Mitigating the Security Risks of Generative AI
Presenter: Ayush, Ritul, Shashwat | Companion: Jinwei, Yuxuan, Tengjun | Reviewer: Akul, Aditi, Shraddhaa

Nov 20: No Class / Prep for Poster Presentation
  • How to Write a Great Research Paper (Required)

Dec 2: Final Presentations
Dec 4: Final Presentations
Dec 7: No Class / Prep for Project Report

Tentative Grading

Groups: All activities of this course, except your own participation :), will be performed in groups of 3 students. Form a group of 3 members and declare your group's membership and paper preferences by Sept 5. After this date, we will form groups from the remaining students.

  • Participation: 10%
  • Paper Presentation & Discussion: 15%
  • Paper Summary: 15%
  • Project Report: 40%
  • Project Presentations: 20%

Academic integrity: The University's Honor Code applies to all activities related to this course. All material you submit in this course (reading responses, project reports, and presentation materials) must be your own. If you use someone else’s material, you must cite them properly.

AI Tool Policy: AI tools may be used for grammar checking and refining initial brainstorms, but the final reviews and code must be authored by the student. Students are responsible for the entire content and must adhere to the Academic Integrity Policy.

Policies

Participation

Before Each Lecture: Each lecture will include one or two required readings that everyone must read. There will also be related reading(s) that the presenter(s) should be familiar with; they are optional for the rest of the class. You are required to submit one insightful question about each presented paper before each lecture.

During Lectures: Active participation is crucial both for your own understanding and for improving the overall quality of the course. You are expected to attend all lectures (up to 2 absences are allowed for legitimate reasons) and, more importantly, to participate in class discussions. Not everyone must add something every day, but everyone is expected to have something to share over the semester.

After Lectures: Participation also involves contributing to discussions on Piazza. The group responsible for the summary should initiate the (remaining) discussion, and the rest of the class is encouraged to participate.

Student Lectures

The course will be conducted as a seminar. Only one group will present in each class. Each group will be assigned at least one lecture over the course of the semester. Presentations should last at most 40 minutes without interruption. However, presenters should expect questions and interruptions throughout.

In the presentation, you should:

  • Provide a brief background to motivate the problem (you may simplify this by referencing previous talks)
  • Present the high level idea, approach, and/or insight (using examples, whenever appropriate) in the required reading.
  • Discuss technical details so that the audience can understand the key points without having read the paper carefully (quickly skim the evaluations).
  • Explain the differences between the required reading and related works, including the additional readings.
  • Identify strengths and weaknesses of the required reading and propose directions of future research.

The slides for a presentation must be emailed to the instructor team (in *.pptx format) at least 24 hours prior to the corresponding class.

Post-Presentation Panel Discussion

To foster a deeper understanding of the papers and encourage critical thinking, each lecture will be followed by a panel discussion. This discussion will involve three distinct roles played by different student groups, simulating an interactive and dynamic scholarly exchange.

Roles and Responsibilities

  1. The Authors
  • Group Assignment: The 'Companion' group will write the summary and play the role of the paper's authors.
  • Responsibility: As authors, you are expected to defend your paper against critiques, answer questions, and discuss how you might improve or extend your research in the future, akin to writing a rebuttal during the peer-review process.
  2. The Reviewers
  • Group Assignment: The 'Reviewer' group will write the summary and will be assigned to one slot to play the role of reviewers.
  • Responsibility: Reviewers critically assess the paper, posing challenging questions and highlighting potential weaknesses or areas for further investigation. Your goal is to engage in a constructive critique of the paper, simulating a peer review scenario.
  3. Rest of the Class (including the presenters)
  • Responsibility: During the panel discussions, feel free to actively ask questions and engage in the dialogue.

Lecture Summaries

Each group will also be assigned to write summaries for roughly two lectures: one in the 'Companion' role and the other in the 'Reviewer' role. A group will not be assigned to summarize the reading it presented.

A paper summary must address the following questions in sufficient detail (2-3 pages):

  • What is the problem addressed in the lecture, and why is this problem important?
  • What is the state of related works in this topic?
  • What is the proposed solution, and what key insight guides their solution?
  • What is one (or more) drawback or limitation of the proposal?
  • What are potential directions for future research?

The summary of a paper must be emailed to the instructor team within 24 hours after its presentation; late summaries will not be counted.

You should use this format for writing your summary. Use a Google Doc to enable in-line comments and suggestions.

Allocate enough time for your reading, discuss as a group, write the summary carefully, and finally, include key observations from the class discussion.

Project

You will have to complete substantive work on an instructor-approved problem and make an original contribution. Surveys are not permitted as projects; instead, each project must contain a survey of background and related work.

You must meet the following milestones (unless otherwise specified in future announcements) to ensure a high-quality project at the end of the semester:

  • Turn in a 2-page draft proposal, plus as many pages as needed for references, by September 26. Remember to include the names and UIUC email addresses of the group members.
  • Each group must present mid-semester progress during class hours on October 21 and October 23.
  • Each group must turn in an 8-page final report and its code via email on or before 6:00 PM CST on December 19. The report must be submitted as a PDF file, with formatting similar to that of the papers you've read in the class. The code must be self-contained (i.e., include ALL dependencies) and submitted as a zip file that includes a README with a step-by-step guide on how to compile and run it.
  • You can find how to access GPU resources here.

Acknowledgements

This course is heavily inspired by other excellent system seminar courses, particularly UMich CSE 585. Acknowledgments to SymbioticLab.
