
How did you run it? Are there model files in Ollama format? Are you running on NVIDIA or Apple Silicon?

EDIT: just saw this: “Megatron (1, 2, and 3) is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA.”



My recommendation is:

- ExUI with exl2 files on NVIDIA GPUs with enough VRAM to hold the whole model.

- Koboldcpp with gguf files for small GPUs and Apple Silicon (see the sketch below).

There are many reasons, but in a nutshell they are the fastest and most VRAM-efficient options for their respective hardware.
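If you want a feel for what loading a gguf model looks like, here is a minimal sketch using llama-cpp-python, which wraps the same llama.cpp engine that Koboldcpp builds on. The model path, layer count, and context size are placeholder assumptions, not values from this thread:

    # Minimal sketch: loading a gguf quantization with llama-cpp-python.
    # Koboldcpp exposes the same knobs (GPU layer offload, context size)
    # through its launcher instead of Python code.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/model-name.Q4_K_M.gguf",  # placeholder path
        n_gpu_layers=-1,  # offload all layers to the GPU; lower on small GPUs
        n_ctx=8192,       # context window; raise it if VRAM allows
    )

    out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
    print(out["choices"][0]["text"])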

I can fit a 34B with about 75K context on a single 24GB 3090 before the quality drop from quantization really starts to get dramatic.
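For scale, here is rough back-of-envelope math for why that fits. The model geometry (60 layers, 8 KV heads via GQA, head dim 128, roughly Yi-34B-shaped) and the 4-bit weights plus 4-bit quantized KV cache are my assumptions, not measurements from the comment above:

    # Rough VRAM estimate: ~4-bit weights plus a 4-bit quantized KV cache.
    params = 34e9
    weight_gb = params * 0.5 / 1e9               # ~4 bits/weight -> ~17 GB

    layers, kv_heads, head_dim, ctx = 60, 8, 128, 75_000
    kv_bytes_per_token = 2 * layers * kv_heads * head_dim * 0.5  # K and V, 4-bit
    cache_gb = kv_bytes_per_token * ctx / 1e9    # ~4.6 GB

    print(weight_gb + cache_gb)                  # ~21.6 GB, inside a 24 GB 3090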


Thanks! I will check out Koboldcpp.


In text-generation-webui, on an NVIDIA GPU.


Your edit is entirely unrelated to this topic.



