
how does the 7B match up to Mixtral 8x7B?

coming from ChatGPT-4, it was a huge breath of fresh air not to have to deal with the judeo-christian biased censorship.

i think this is the ideal LocalLLaMA setup: an uncensored, unbiased, unlimited (only by hardware) LLM+RAG stack.
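for the curious, a minimal sketch of what that LLM+RAG loop can look like against a local Ollama server. The model tags (mistral, nomic-embed-text), the toy corpus, and the question are all placeholders, not recommendations:

    import requests

    OLLAMA = "http://localhost:11434"   # default Ollama address
    EMBED_MODEL = "nomic-embed-text"    # assumption: any embedding model you've pulled
    GEN_MODEL = "mistral"               # assumption: e.g. `ollama pull mistral`

    docs = [
        "Mixtral 8x7B is a sparse mixture-of-experts model from Mistral AI.",
        "K-quant GGUF files trade a little quality for a much smaller footprint.",
    ]

    def embed(text):
        r = requests.post(f"{OLLAMA}/api/embeddings",
                          json={"model": EMBED_MODEL, "prompt": text})
        return r.json()["embedding"]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb)

    question = "What kind of model is Mixtral?"
    q = embed(question)
    context = max(docs, key=lambda d: cosine(q, embed(d)))  # naive top-1 retrieval

    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    r = requests.post(f"{OLLAMA}/api/generate",
                      json={"model": GEN_MODEL, "prompt": prompt, "stream": False})
    print(r.json()["response"])

a real setup would chunk documents and keep embeddings in a vector store instead of recomputing them per query, but the whole loop really is about this small.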




I haven’t seen how it fares on uncensored use cases, but from what I’ve seen, the Q5_K variants of Mistral 7B are not far off Mixtral 8x7B (the latter requires 64GB of RAM, which I don’t have).
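the RAM figure depends on the quant; here's a rough back-of-envelope, assuming llama.cpp's ~5.67 bits/weight average for Q5_K_M and the published parameter counts (both numbers approximate):

    # Rough memory estimate for a quantized model: params * bits/weight / 8,
    # with ~10% padded on for KV cache and runtime buffers (a guess, not a spec).
    def approx_gb(params_billions, bits_per_weight, overhead=1.1):
        return params_billions * bits_per_weight / 8 * overhead

    print(f"Mistral 7B   Q5_K_M: ~{approx_gb(7.2, 5.67):.1f} GB")
    print(f"Mixtral 8x7B Q5_K_M: ~{approx_gb(46.7, 5.67):.1f} GB")

which puts Mixtral Q5_K_M around 36GB, so 64GB of system RAM is comfortable but a 32GB machine is genuinely borderline.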

Tried open-webui yesterday with Ollama for spinning up some of these. It’s pretty good.
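if you want to skip the UI, the Ollama backend that open-webui talks to is plain HTTP; a hedged example (the quant tag is an assumption, check the Ollama library for current ones):

    import requests

    # Assumes `ollama serve` is running and a model has been pulled,
    # e.g. `ollama pull mistral:7b-instruct-q5_K_M` (tag may differ).
    r = requests.post("http://localhost:11434/api/chat", json={
        "model": "mistral:7b-instruct-q5_K_M",
        "messages": [{"role": "user", "content": "Why is the sky blue?"}],
        "stream": False,
    })
    print(r.json()["message"]["content"])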



