Hugging Face is doing a free and open course on fine tuning local LLMs (github.com/huggingface)
9 points by benburtenshaw 8 months ago | 4 comments


Hugging Face is running a free and open course on fine-tuning local LLMs, specifically smol models. You will learn how to fine-tune, align, and use LLMs locally for your own use case.

This is a hands-on course designed to help you align language models for your unique needs. It’s beginner-friendly, with minimal requirements:

- Runs on most local machines

- Minimal GPU requirements

- No paid services needed

*What You’ll Learn*

- Instruction Tuning: Fine-tune models to follow instructions and templates (see the LoRA/SFT sketch after this list).

- Preference Alignment: Align models with human feedback using methods like DPO and ORPO (see the DPO sketch below).

- Parameter-efficient Fine-tuning: Explore techniques such as LoRA and prompt tuning.

- Evaluation: Build benchmarks and test models for domain-specific tasks.

- Synthetic Datasets: Create and validate synthetic datasets for training.

- Deployment: Learn to scale and serve models efficiently.

If you’ve ever wanted to fine-tune your own LLM or just avoid the hefty compute bills from major APIs, the smol course might be the perfect way to kickstart your journey.

To get involved, follow the instructions on this repo: https://github.com/huggingface/smol-course


Hmm, too bad. I was hoping it really was beginner-friendly, but it assumes a bunch of base knowledge: PyTorch, datasets, notebooks.


This is the same Hugging Face that took a dataset from Bluesky and made themselves instant enemies?


love it!



