What You'll Learn
- Develop steps to improve model reliability and accuracy through evaluation, prompting, self-reflection, and fine-tuning.
- Use memory tuning to embed facts in the model, enhancing performance and reducing hallucinations.
- Build an LLM application using the Llama 3-8b model to convert text into SQL queries with a custom schema.
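As a taste of the text-to-SQL task above, here is a minimal sketch of building a Llama 3 instruction prompt around a custom schema. The `players` table and the `make_prompt` helper are illustrative, not the course's actual schema or code; the special tokens follow Llama 3's chat template.

```python
# Illustrative table schema; the course uses its own custom schema.
SCHEMA = """CREATE TABLE players (
    name TEXT,
    team TEXT,
    points INTEGER
);"""

def make_prompt(question: str, schema: str = SCHEMA) -> str:
    """Wrap the schema and user question in Llama 3's chat template."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        "You are a SQL assistant. Given this schema:\n"
        f"{schema}\n"
        "Reply with a single SQL query and nothing else.<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{question}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = make_prompt("Which player scored the most points?")
print(prompt)
```

The resulting string would be sent to a Llama 3-8b endpoint; constraining the system message to "a single SQL query and nothing else" makes the output easier to execute and evaluate.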
About This Course
In this course, led by Sharon Zhou (Lamini) and Amit Sangani (Meta), you'll learn structured techniques to increase the accuracy of LLM
applications. The course covers prompt engineering, evaluation metrics, and fine-tuning approaches to reduce hallucinations and improve
factuality. By the end, you’ll have built and fine-tuned an SQL agent using Llama’s models to meet specific accuracy criteria.
- Text-to-SQL Agent: Build an agent that generates SQL queries, incorporating error simulation for evaluation.
- Performance Evaluation Framework: Create an evaluation system with metrics to measure performance effectively.
- Instruction and Memory Fine-Tuning: Use techniques like LoRA and memory fine-tuning to embed factual accuracy in the model and improve its responsiveness.
- Advanced PEFT Techniques: Explore methods like LoRA and MoME to optimize training time and accuracy.
- Iterative Data Generation and Fine-Tuning: Gain practical skills in generating training data, creating data variations, and filtering to increase model accuracy iteratively.
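The evaluation framework in the list above boils down to scoring generated SQL against a reference. A minimal sketch, using Python's built-in `sqlite3` and a made-up `score_sql` helper: a query scores 1.0 only if it returns exactly the reference rows, and invalid SQL scores 0.0, mirroring the error simulation idea.

```python
import sqlite3

def score_sql(generated: str, reference: str, setup: str) -> float:
    """Score 1.0 if the generated query returns the same rows as the
    reference query, 0.0 otherwise; invalid SQL also scores 0.0."""
    conn = sqlite3.connect(":memory:")
    conn.executescript(setup)
    try:
        got = conn.execute(generated).fetchall()
    except sqlite3.Error:  # simulated error: malformed or invalid SQL
        return 0.0
    want = conn.execute(reference).fetchall()
    return 1.0 if got == want else 0.0

# Tiny illustrative database and reference query.
setup = (
    "CREATE TABLE players(name TEXT, points INTEGER);"
    "INSERT INTO players VALUES ('Curry', 30), ('James', 25);"
)
reference = "SELECT name FROM players ORDER BY points DESC LIMIT 1"
print(score_sql(reference, reference, setup))                   # identical -> 1.0
print(score_sql("SELECT nme FROM players", reference, setup))   # typo -> 0.0
```

Averaging this score over a set of held-out questions gives a single accuracy number to track across fine-tuning iterations.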
Course Outline
- Introduction: Enhancing the accuracy and reliability of LLM applications.
- Overview: Methods and techniques for consistent, accurate model performance.
- Create an SQL Agent: Build a text-to-SQL agent and evaluate performance through error simulations.
- Create an Evaluation: Develop an evaluation framework with criteria and scoring for model accuracy.
- Fine-tuning, PEFT, & Memory Tuning: Introduction to LoRA, MoME, and memory tuning to embed facts in model weights.
- Generate Data & Finetune: Steps to generate data, create variations, and fine-tune the model iteratively.
- Conclusion: Summary of key concepts and best practices for accurate LLM applications.
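The "Generate Data & Finetune" step in the outline can be sketched as template-based variation with duplicate filtering. The templates and slot values below are hypothetical, standing in for the schema-specific questions the course generates.

```python
import random

# Hypothetical question templates and slot values for data generation.
TEMPLATES = [
    "Which player on the {team} has the most {stat}?",
    "Who leads the {team} in {stat}?",
]
TEAMS = ["Lakers", "Celtics"]
STATS = ["points", "rebounds"]

def generate_variations(n: int, seed: int = 0) -> list[str]:
    """Sample n template fillings, then filter duplicates so every
    fine-tuning example is unique."""
    rng = random.Random(seed)
    drafts = [
        rng.choice(TEMPLATES).format(team=rng.choice(TEAMS),
                                     stat=rng.choice(STATS))
        for _ in range(n)
    ]
    return sorted(set(drafts))

questions = generate_variations(20)
print(len(questions), "unique questions")
```

Each unique question would then be paired with a reference SQL query, scored with the evaluation framework, and the surviving pairs fed back into the next fine-tuning round.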
Who Should Join?
This course is ideal for individuals with intermediate Python knowledge and familiarity with large language models who want to improve the
factual accuracy of LLM applications.