Ask HN: Does anyone have experience running LLMs on a Mac Mini M2?
6 points by etewiah on Jan 14, 2024 | hide | past | favorite | 6 comments
I would like to run an LLM locally so I can be absolutely sure that the data I send to it is private. Does anyone have experience doing this with the latest Mac Mini? Any insights will be very much appreciated. Thanks.


I ran the 13GB https://simonwillison.net/2023/Nov/29/llamafile/ on an M3 without issues. 40 tokens per second output.


Thanks. Can you please elaborate on the specs of your computer?

I am planning to buy a Mac Mini (January 2023) M2, 3.49 GHz, 256 GB SSD, 8 GB RAM.


https://browser.geekbench.com/macs/macbook-pro-14-inch-nov-2... with 18GB RAM. Simon (the blog author) seems to use 64GB M2 MacBook Pro https://til.simonwillison.net/llms/llama-7b-m2


I bought a Mac Mini M2 last year to start playing around with some personal projects. I did some tests using LM Studio running Mixtral models with pretty good throughput, and I also tested OpenAI's Whisper models for transcription; those ran fine as well.

I do, however, recommend that you upgrade the RAM; 8GB is barely enough as is, so getting at least 16GB would be better. (I don't recommend upgrading the SSD though: thanks to Thunderbolt 4 you can get a fast external SSD for half the price Apple charges for storage.)
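A rough way to sanity-check the RAM advice: a back-of-the-envelope estimate for how much memory a quantized model needs. The formula and the ~20% overhead factor below are my own rule of thumb, not from this thread.

```python
# Rule-of-thumb RAM estimate for a quantized LLM.
# Assumption (mine): memory ≈ parameters × bits-per-weight / 8 bytes,
# plus ~20% overhead for the KV cache and runtime buffers.

def model_memory_gb(params_billion: float, bits_per_weight: float = 4.0,
                    overhead: float = 1.2) -> float:
    """Estimate RAM needed in GB for a quantized model."""
    bytes_needed = params_billion * 1e9 * bits_per_weight / 8
    return bytes_needed * overhead / 1e9

# A 7B model at 4-bit quantization lands around 4.2 GB -- tight on an
# 8GB machine once macOS itself is resident, comfortable with 16GB.
print(f"7B @ 4-bit:            {model_memory_gb(7):.1f} GB")
# Mixtral 8x7B has roughly 47B total parameters, so even quantized it
# wants far more than 16GB.
print(f"Mixtral 8x7B @ 4-bit:  {model_memory_gb(47):.1f} GB")
```

By this estimate, the 8GB base configuration leaves very little headroom for anything beyond small quantized models.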


Thanks for the tip, that's useful to know.


Download LM Studio and you're done. Depending on the amount of RAM you have, you can run different models. Check out the Mixtral 8x7B ones for generally good results. https://lmstudio.ai/
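Beyond the chat UI, LM Studio can also serve a loaded model over an OpenAI-compatible HTTP API, which is handy for scripting. The sketch below assumes you've started its local server in the app on the default port 1234 (check the app's server tab for the actual address); the prompt and parameters are just examples.

```python
import json
import urllib.request

# Sketch: query LM Studio's OpenAI-compatible local server.
# Assumes the server was started in the app at http://localhost:1234.

def build_request(prompt: str,
                  url: str = "http://localhost:1234/v1/chat/completions"):
    """Build the HTTP request for a chat completion (nothing is sent yet)."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = build_request("Summarize the plot of Hamlet in one sentence.")
    # This call only works while the LM Studio server is running locally.
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())
        print(reply["choices"][0]["message"]["content"])
```

Since the request never leaves localhost, this keeps the privacy property the original question asked about: no data is sent to a third party.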



