
Agree, the model assumes a multitasking setup where you need some leftover RAM for other tasks. You can squeeze in much larger models when the machine is dedicated to inference.



It would be a lot nicer if it didn't just give a binary "can/can't run" flag but told you what to expect.

Ideal scenario (YMMV): add more hardware parameters (like chipset, CPU, actual RAM type/timings, with presets for the most common setups) and extra model settings (quantization and context size come to mind), then answer like this: "you have sufficient RAM to load the model, and you should expect performance around 10 tok/sec with 3 s to the first token". Or maybe list all the models you know about and give the expected performance for each. Inverse search ("what rig do I need to run this model at at least this performance?") would also be very cool. It might also be nice to parse the output of common system information tools (like Windows wmic/Get-ComputerInfo, macOS system_profiler or GNU/Linux dmidecode; not sure all the needed info is there, but as a rough idea: give the user some commands to run, then parse their output for the relevant specs).
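For the "what to expect" part, a crude first-order estimate doesn't even need anecdotal data: local decoding is usually memory-bandwidth-bound, so tok/sec is roughly bandwidth divided by the size of the quantized weights. A minimal sketch of that idea (Python; all the numbers and names here are my own assumptions for illustration, not anything the tool actually does):

    # Rough, bandwidth-bound upper estimate of decode speed for a dense model.
    # Assumes each generated token streams the full quantized weights from
    # memory once; ignores compute limits, KV cache, MoE sparsity, etc.
    def estimate_tok_per_sec(params_billion: float,
                             bits_per_weight: float,
                             mem_bandwidth_gb_s: float) -> float:
        model_size_gb = params_billion * bits_per_weight / 8  # weights in GB
        return mem_bandwidth_gb_s / model_size_gb

    # Example: 13B model at ~4.5 bits/weight (Q4-ish) on dual-channel
    # DDR5-5600 (~80 GB/s usable) -> roughly 10 tok/sec as a ceiling.
    print(round(estimate_tok_per_sec(13, 4.5, 80), 1))

Real numbers will land below that ceiling, but it would already be far more useful than a yes/no answer.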

Of course, this would be very non-trivial to implement, and you'd probably have to dig a lot for anecdotal data on how various hardware actually performs (hmm... maybe a good task for an agentic LLM?), but it would turn this from a toy into a serious tool that people can use and link to.



