Hacker News

Local AI is definitely a good thing, and I can see why llamafiles can be useful. It sounds great for the use case of a trusted organization distributing models for easy end-user deployment. But if I'm going to be downloading a bunch of different LLMs to try out from various unknown sources, executables are a bit scarier than plain data files.
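The data-vs-executable distinction the comment draws can be checked mechanically: GGUF weight files begin with the 4-byte ASCII magic `GGUF`, whereas native executables begin with an executable header (e.g. `\x7fELF` on Linux, or `MZ` for the portable llamafile binaries). A minimal sketch, where the file is a stand-in written locally; a real check would run against whatever you actually downloaded:

```shell
# Write a stand-in file whose first bytes mimic real GGUF weights.
# (A real GGUF file starts with the ASCII magic "GGUF".)
printf 'GGUF....' > /tmp/fake-weights.gguf

# Inspect only the first 4 bytes before doing anything else with the file.
magic=$(head -c 4 /tmp/fake-weights.gguf)
if [ "$magic" = "GGUF" ]; then
  echo "plain GGUF weights"
else
  # ELF executables start with \x7fELF; llamafile/APE binaries start with MZ.
  echo "not a GGUF file; treat with suspicion"
fi
```

This only confirms the file claims to be plain data, not that its contents are benign, but it is a cheap first gate before feeding an unknown download to anything.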



You can download the llamafile executables from Mozilla's release page here: https://github.com/Mozilla-Ocho/llamafile/releases and then use the `-m` flag, which lets you load any GGUF weights you want from Hugging Face. A lot of people I know will also just rent a VM with an H100 for a few hours from a company like vast.ai, SSH into it, not care about its security, and want to wget the fewest files possible. Everyone's threat model is different. That's why llamafile provides multiple choices, so you can make the right decision for yourself. It's also why I like to focus on simply making things easy, because that's one place where we can have an impact and build positive change; the bigger questions, e.g. security, are ultimately in the hands of each individual.
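A sketch of that workflow, written as a dry run so the commands are only printed rather than executed; `VERSION` and the weights filename are placeholders, not real release assets, so substitute current values from the releases page:

```shell
# Dry-run helper: print each command instead of running it.
run() { echo "+ $*"; }

# 1. Fetch a llamafile release binary from Mozilla's GitHub releases page.
run wget https://github.com/Mozilla-Ocho/llamafile/releases/download/VERSION/llamafile-VERSION
run chmod +x llamafile-VERSION

# 2. Point it at any GGUF weights (e.g. fetched separately from Hugging Face)
#    via the -m flag, instead of using a llamafile with embedded weights.
run ./llamafile-VERSION -m your-weights.Q4_K_M.gguf
```

The point of this split is exactly the one made above: the executable comes from one trusted source (Mozilla's releases), and the weights stay plain data files you can swap freely.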


Not running eval on third-party model weights when encouraging consumers to download them seems like the low bar that comes after having any non-executable policy at all, especially for something Mozilla supports.

Edit: I mean as the default, one that requires users to pass a big scary --disable-security flag (or click an equally scary red button) to turn off. That's what browsers do.



