Hacker News

> You already need very high end hardware to run useful local LLMs

A basic MacBook can run gpt-oss-20b, and it's quite useful for many tasks. And fast. Of course, Macs have a huge advantage for local LLM inference due to their unified memory architecture.



