Hugging Face is running a free and open course on fine-tuning local LLMs, specifically "smol" (small) models. You will learn how to fine-tune, align, and use LLMs locally for your own use case.
This is a hands-on course designed to help you align language models for your unique needs. It’s beginner-friendly, with minimal requirements:
- Runs on most local machines
- Minimal GPU requirements
- No paid services needed
*What You’ll Learn*
- Instruction Tuning: Fine-tune models to follow instructions and templates.
- Preference Alignment: Align models with human feedback using methods like DPO and ORPO.
- Parameter-efficient Fine-tuning: Explore techniques such as LoRA and prompt tuning (see the sketch after this list for what this looks like in code).
- Evaluation: Build benchmarks and test models for domain-specific tasks.
- Synthetic Datasets: Create and validate synthetic datasets for training.
- Deployment: Learn to scale and serve models efficiently.
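To give a taste of the techniques involved, here is a minimal sketch of parameter-efficient instruction tuning with LoRA using the trl and peft libraries. It is not taken from the course materials themselves; the model name, dataset name, and hyperparameters are illustrative placeholders, assuming recent versions of trl, peft, and datasets are installed.

```python
# Minimal sketch: instruction tuning a small model with LoRA via trl + peft.
# Not from the course itself; model/dataset names and hyperparameters are
# illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Any instruction/chat-style dataset in a conversational format works here.
dataset = load_dataset("HuggingFaceTB/smoltalk", "everyday-conversations", split="train")

# LoRA keeps the base weights frozen and trains small low-rank adapters,
# which is what makes fine-tuning feasible on modest local hardware.
peft_config = LoraConfig(
    r=8,                          # rank of the low-rank adapter matrices
    lora_alpha=16,                # scaling applied to the adapter update
    lora_dropout=0.05,
    target_modules="all-linear",  # attach adapters to every linear layer
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M",  # a "smol" base model
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="smollm2-sft-lora", max_steps=100),
)
trainer.train()
```

The same trainer pattern carries over to the preference-alignment material: trl also ships DPO and ORPO trainers, so swapping the trainer class and the dataset format is, roughly, what that part of the course covers.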
If you’ve ever wanted to fine-tune your own LLM or just avoid the hefty compute bills from major APIs, the smol course might be the perfect way to kickstart your journey.
To get involved, follow the instructions in the course repo: https://github.com/huggingface/smol-course