What You'll Learn
- Evaluate programs that use LLMs and generative image models with platform-independent tools.
- Instrument a training notebook, adding tracking, versioning, and logging capabilities.
- Monitor and trace LLM behavior over time across complex interactions.
About This Course
This course covers essential Machine Learning Operations (MLOps) tools, using the Weights & Biases platform to track experiments, version
data, and collaborate. You’ll learn to manage data sources, model parameters, and evaluations effectively to streamline complex AI projects.
- Instrument a Jupyter notebook for tracking experiments.
- Manage hyperparameter configurations.
- Log run metrics and track results.
- Collect artifacts for dataset and model versioning.
- Trace and log interactions with LLMs over time for thorough analysis.
Complete this course to implement a systematic MLOps workflow that accelerates your productivity and improves project outcomes.
Course Outline
- Introduction: Overview of MLOps in generative AI and the Weights & Biases platform.
- Instrument W&B: Setting up and instrumenting Weights & Biases in a Jupyter notebook, with code examples.
- Training a Diffusion Model with W&B: Tracking model training, versioning, and logging data while training a diffusion model.
- Evaluating Diffusion Models: Methods for evaluating diffusion models, logging metrics, and analyzing results.
- LLM Evaluation and Tracing with W&B: Techniques for tracing prompts and responses in large language models, and evaluating the results.
- Finetuning a Language Model: Finetuning an LLM using W&B to track hyperparameters, logs, and finetuning results.
- Conclusion: Summary of the course, key takeaways, and next steps in MLOps for generative AI.
Who Should Join?
This course is designed for anyone familiar with Python and PyTorch (or similar frameworks) who is interested in managing, versioning, and
debugging machine learning workflows.