What You’ll Learn
- Understand the impact of tokenization on search quality in LLMs and embedding models.
- Explore tokenization techniques such as Byte-Pair Encoding, WordPiece, and Unigram.
- Learn how to measure and optimize search relevance with HNSW parameters and vector quantization.
About This Course
This course, taught by Kacper Łukawski, dives into tokenization and vector search optimization for large-scale RAG applications. You’ll learn
how tokenization methods work, address challenges like terminology mismatches, and explore HNSW parameter tuning and vector quantization
to improve the relevance and efficiency of vector search.
What You’ll Do
- Understand how embedding models work and how text is transformed into vectors.
- Train and use tokenizers such as Byte-Pair Encoding, WordPiece, Unigram, and SentencePiece (see the code sketches after this list).
- Address tokenizer challenges like unknown tokens and domain-specific identifiers.
- Measure search quality using standard relevance metrics.
- Optimize HNSW parameters to balance speed and relevance in vector search.
- Experiment with product, scalar, and binary quantization techniques for memory and search optimization.
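To make the first step concrete, here is a minimal sketch of turning text into vectors. It assumes the sentence-transformers package and the all-MiniLM-L6-v2 model; both are illustrative choices, not necessarily what the course uses.

```python
# A minimal sketch, assuming the sentence-transformers package and the
# all-MiniLM-L6-v2 model; both are illustrative, not the course's choice.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
sentences = [
    "Vector search retrieves documents by meaning.",
    "Tokenization splits text into subword units.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, 384); this model produces 384-dim vectors
```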
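For tokenizer training, the sketch below uses the Hugging Face `tokenizers` library to train a small Byte-Pair Encoding tokenizer; the tiny corpus and vocabulary size are invented for illustration. The last line hints at the unknown-token problem: identifiers unseen during training fragment into many subwords or fall back to the unknown token.

```python
# A minimal sketch, assuming the Hugging Face `tokenizers` library; the
# corpus and vocabulary size are invented for illustration.
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

corpus = [
    "searching documents with dense vectors",
    "tokenizers split text into subword units",
] * 50
trainer = BpeTrainer(vocab_size=500, special_tokens=["[UNK]"])
tokenizer.train_from_iterator(corpus, trainer)

# A domain-specific identifier unseen during training tends to fragment
# into many subwords, or to map to [UNK] for unseen characters.
print(tokenizer.encode("SKU-4711-XL").tokens)
```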
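Measuring search quality can be as simple as the two common relevance metrics sketched here, precision@k and reciprocal rank; the document ids and judgments are made up, and the course may cover additional metrics.

```python
# A minimal sketch of two common relevance metrics, precision@k and
# reciprocal rank; the ids and relevance judgments below are made up.
def precision_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the top-k results that are relevant."""
    return len(set(retrieved[:k]) & relevant) / k

def reciprocal_rank(retrieved: list[str], relevant: set[str]) -> float:
    """1 / rank of the first relevant result, or 0 if none is found."""
    for rank, doc_id in enumerate(retrieved, start=1):
        if doc_id in relevant:
            return 1.0 / rank
    return 0.0

retrieved = ["d3", "d1", "d7", "d2"]  # ranked results for one query
relevant = {"d1", "d2"}               # ground-truth judgments
print(precision_at_k(retrieved, relevant, k=3))  # 0.333...
print(reciprocal_rank(retrieved, relevant))      # 0.5
```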
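For HNSW tuning, the sketch below uses the hnswlib library as a stand-in for whatever vector engine the course uses; the data is random, so recall numbers would be meaningless here, but M, ef_construction, and ef are the real HNSW knobs that trade index size and latency against recall.

```python
# A minimal sketch, using hnswlib as a stand-in for the course's engine.
import hnswlib
import numpy as np

dim, n = 128, 10_000
data = np.random.rand(n, dim).astype(np.float32)

index = hnswlib.Index(space="cosine", dim=dim)
# M: graph connectivity; ef_construction: build-time search width.
# Larger values cost build time and memory but improve recall.
index.init_index(max_elements=n, M=16, ef_construction=200)
index.add_items(data, np.arange(n))

# ef: query-time search width; raising it trades latency for recall.
index.set_ef(64)
labels, distances = index.knn_query(data[:5], k=10)
print(labels.shape)  # (5, 10)
```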
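Finally, a sketch of scalar (int8) and binary quantization in plain NumPy, just to show the memory trade-off; production vector databases implement these (and product quantization) internally with calibrated ranges.

```python
# A minimal sketch of scalar and binary quantization in plain NumPy;
# real engines calibrate ranges per dimension rather than globally.
import numpy as np

vectors = np.random.randn(1_000, 128).astype(np.float32)

# Scalar quantization: map each float32 to int8 (4x less memory).
lo, hi = vectors.min(), vectors.max()
int8_vecs = np.round((vectors - lo) / (hi - lo) * 255 - 128).astype(np.int8)

# Binary quantization: keep only the sign of each dimension
# (32x less memory), comparable via Hamming distance.
binary_vecs = np.packbits(vectors > 0, axis=1)

print(vectors.nbytes, int8_vecs.nbytes, binary_vecs.nbytes)
# 512000 128000 16000
```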
Course Outline
- Introduction: Overview of course goals and retrieval optimization.
- Embedding Models: Learn about embedding models and vector transformations.
- Role of the Tokenizers: Deep dive into various tokenizers and their applications.
- Practical Implications of Tokenization: Addressing challenges and implications of tokenization.
- Measuring Search Relevance: Explore metrics and methods for evaluating search quality.
- Optimizing HNSW Search: Adjusting HNSW parameters for optimal relevance and speed.
- Vector Quantization: Applying quantization techniques for efficient search and memory usage.
- Conclusion: Summary and next steps in retrieval optimization.
- Appendix – Tips and Help: Additional code examples and resources.
Who Should Join?
This course is suitable for anyone with basic Python knowledge looking to build effective customer-facing RAG applications.