
Try llamafile (https://github.com/Mozilla-Ocho/llamafile). I have Mistral 7B running this way on a 10-year-old laptop, and it only seems to use a few GB thanks to its memory-mapping approach.


It doesn’t count most of the model as used memory, since it’s memory-mapped; it only shows up as memory used by the disk cache.

Though if your machine can’t keep it all in memory, then speed will still fall off a cliff.
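For anyone wondering what the memory-mapping approach looks like concretely, here's a minimal C sketch of the idea (not llamafile's real code; the model file name is made up). Mapping the file reserves address space but allocates no RAM up front; the kernel pulls pages into its disk cache only as the inference code touches them:

    /* Minimal sketch of the mmap approach (illustrative, not llamafile's code). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        const char *path = "mistral-7b.gguf";   /* hypothetical model file */
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Map the whole file read-only. No RAM is allocated up front;
         * pages are read from disk into the kernel's page cache only
         * when the inference code actually touches them. */
        void *weights = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (weights == MAP_FAILED) { perror("mmap"); return 1; }

        printf("mapped %lld bytes at %p\n", (long long)st.st_size, weights);

        munmap(weights, st.st_size);
        close(fd);
        return 0;
    }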


If it's lazy-loading just what it needs, that seems like an efficient use of memory. In any case, this 4GB model will easily fit into the commenter's 16GB machine.


If you're running on GPU then it would need to be wired, and wired file-backed pages do count as process memory and have to physically fit in DRAM.
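For reference, here's a rough sketch of the wiring mechanism using mlock(2) (illustrative only; the file name is hypothetical, and llama.cpp's --mlock option does something along these lines, if I recall correctly). Once locked, the file-backed pages are forced resident, count toward the process's RSS, and must fit in physical RAM:

    /* Sketch: pinning a mapped region with mlock(2). Illustrative only. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("mistral-7b.gguf", O_RDONLY);   /* hypothetical file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        void *weights = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (weights == MAP_FAILED) { perror("mmap"); return 1; }

        /* mlock faults in every page and pins it: the mapping is no longer
         * discardable disk cache, it's resident memory the process holds,
         * so the whole thing has to fit in physical RAM. */
        if (mlock(weights, st.st_size) != 0) {
            perror("mlock (you may need to raise RLIMIT_MEMLOCK)");
            return 1;
        }

        printf("locked %lld bytes in RAM\n", (long long)st.st_size);

        munlock(weights, st.st_size);
        munmap(weights, st.st_size);
        close(fd);
        return 0;
    }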


Wow, that's incredible. And legit too. I was reading through issues on llama.cpp about implementing memory swapping, so I didn't think it had been done.

Thanks!


It’s really just a difference in accounting. Memory used for memory-mapped files isn’t shown under the “used” header, but under the disk cache one. And it doesn’t need to be swapped out to be discarded, so if you lack the memory, everything just slows down without an obvious cause.
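If you want to see that accounting for yourself, mincore(2) reports how many pages of a mapping are resident at a given moment. A quick sketch (hypothetical file name again); since those resident pages are just clean page cache, the kernel can drop them under memory pressure without ever touching swap:

    /* Sketch: checking how much of a mapped model is resident (Linux). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("mistral-7b.gguf", O_RDONLY);   /* hypothetical file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); return 1; }

        long page = sysconf(_SC_PAGESIZE);
        size_t npages = (st.st_size + page - 1) / page;
        unsigned char *vec = malloc(npages);

        /* Each byte of vec gets its low bit set if that page is in core. */
        if (mincore(map, st.st_size, vec) != 0) { perror("mincore"); return 1; }

        size_t resident = 0;
        for (size_t i = 0; i < npages; i++)
            resident += vec[i] & 1;

        printf("%zu of %zu pages resident (%.1f%%)\n",
               resident, npages, 100.0 * resident / npages);

        free(vec);
        munmap(map, st.st_size);
        close(fd);
        return 0;
    }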



