What You'll Learn
- Learn the fundamentals of finetuning a large language model (LLM).
- Understand how finetuning differs from prompt engineering, and when to use each.
- Gain practical experience with real datasets and techniques for finetuning in your own projects.
About This Course
In this course, taught by Sharon Zhou, Co-Founder of Lamini, you will learn how to finetune LLMs on custom data. You’ll explore when
finetuning is applicable, how it compares to prompt engineering, and its role in achieving specialized, high-quality results by updating model
weights.
- Understand the conditions for applying finetuning on large language models.
- Learn data preparation techniques tailored to finetuning.
- Gain hands-on experience training and evaluating a model with your own data.
- Discover how finetuning allows models to learn specific styles and formats and to incorporate new information.
By completing this course, you’ll have a foundational workflow to finetune LLMs, enabling customization for various applications.
Course Outline
- Introduction: Overview of finetuning and its applications for LLMs.
- Why Finetune: The benefits of finetuning and when it should be used, with practical code examples.
- Where Finetuning Fits In: Comparison between finetuning, prompt engineering, and Retrieval Augmented Generation.
- Instruction Finetuning: Techniques for instruction finetuning, i.e., training models to follow specific guidelines.
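Instruction finetuning starts by rendering each instruction/response pair into a single training string. A minimal sketch of that formatting step is below; the template and function names are illustrative assumptions, not the course's exact code (real datasets use their own variants, such as Alpaca-style templates).

```python
# Hypothetical instruction-style prompt template (illustrative only).
PROMPT_TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n{response}"
)

def format_example(instruction: str, response: str) -> str:
    """Render one instruction/response pair into a single training string."""
    return PROMPT_TEMPLATE.format(instruction=instruction, response=response)

example = format_example(
    "Summarize the following sentence in three words.",
    "Finetuning customizes models.",
)
print(example)
```

During training, the model sees many such strings and learns to produce text in the `### Response:` section that follows the given instruction.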
- Data Preparation: Preparing and organizing datasets for efficient and accurate finetuning.
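Typical preparation steps include deduplicating examples, filtering out ones that are too long, and holding out an evaluation split. The sketch below is a hypothetical mini-pipeline under those assumptions; the `prepare` function and field names are made up for illustration.

```python
import random

def prepare(examples, max_chars=512, eval_frac=0.1, seed=0):
    """Deduplicate, drop overly long examples, and split into train/eval sets."""
    seen, cleaned = set(), []
    for ex in examples:
        key = (ex["instruction"], ex["response"])
        too_long = len(ex["instruction"]) + len(ex["response"]) > max_chars
        if key in seen or too_long:
            continue  # skip duplicates and oversized examples
        seen.add(key)
        cleaned.append(ex)
    rng = random.Random(seed)  # fixed seed for a reproducible split
    rng.shuffle(cleaned)
    n_eval = max(1, int(len(cleaned) * eval_frac))
    return cleaned[n_eval:], cleaned[:n_eval]

data = [{"instruction": f"q{i}", "response": f"a{i}"} for i in range(20)]
data.append({"instruction": "q0", "response": "a0"})  # duplicate, gets removed
train, eval_set = prepare(data)
print(len(train), len(eval_set))  # → 18 2
```

Keeping the evaluation split fixed (via the seed) lets you compare finetuning runs fairly.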
- Training Process: Step-by-step guide to training an LLM with hands-on code examples.
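At its core, training repeatedly computes a loss, differentiates it, and nudges the weights. The toy below illustrates exactly that loop on a single weight with a squared-error loss; it is not an LLM, just the same gradient-descent update that finetuning applies at scale across billions of parameters (learning rate and epoch count are arbitrary choices for this example).

```python
def train(pairs, lr=0.1, epochs=50):
    """Fit y = w * x by plain gradient descent on squared error."""
    w = 0.0  # start from an "initialized" weight
    for _ in range(epochs):
        for x, y in pairs:
            pred = w * x
            grad = 2 * (pred - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad             # the weight-update step
    return w

# Learn y = 3x from a few examples.
w = train([(1.0, 3.0), (2.0, 6.0), (-1.0, -3.0)])
print(round(w, 3))  # → 3.0
```

In real finetuning, frameworks like PyTorch compute `grad` automatically via backpropagation, but the update rule is conceptually the same.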
- Evaluation and Iteration: Methods for evaluating model performance and refining results through iteration.
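One simple evaluation method is exact-match accuracy of model outputs against reference answers. The sketch below assumes lightly normalized string comparison; production evaluation often adds richer normalization or judge-model scoring, and the data here is made up for illustration.

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions matching their reference after
    whitespace and case normalization."""
    matches = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return matches / len(references)

preds = ["Paris", "berlin ", "Rome"]
refs = ["Paris", "Berlin", "Madrid"]
print(exact_match_accuracy(preds, refs))  # 2 of 3 match
```

Tracking a metric like this across runs is what makes iteration systematic: change the data or hyperparameters, retrain, and compare scores on the same held-out set.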
- Considerations on Getting Started Now: Practical tips for beginning the finetuning process and evaluating its feasibility.
- Conclusion: Recap of key concepts and next steps for applying finetuning in various projects.
Who Should Join?
This course is suitable for learners who are familiar with Python and deep learning frameworks such as PyTorch and who want to learn advanced LLM finetuning techniques.