
Wouldn't that simply be prohibitively expensive?



One example that says "no" to your question: https://ollama.ai/. There are surely more. It can be combined with something like "LangChain" or "LlamaIndex" to give the locally hosted LLM access to local data, with a bit of Python "glue code" to tie it all together.
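For the curious, here's a minimal sketch of what that glue code might look like, assuming a running ollama server with llama2 pulled, plus the langchain and chromadb Python packages; the file path and question are placeholders:

    # Minimal local RAG sketch: index a local text file and query it
    # through a locally hosted Llama 2 model served by ollama.
    # Assumptions: `ollama serve` is running and `ollama pull llama2`
    # was done; notes.txt is a placeholder for your own data.
    from langchain.llms import Ollama
    from langchain.embeddings import OllamaEmbeddings
    from langchain.document_loaders import TextLoader
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain.vectorstores import Chroma
    from langchain.chains import RetrievalQA

    docs = TextLoader("notes.txt").load()  # the local data
    chunks = RecursiveCharacterTextSplitter(chunk_size=500).split_documents(docs)

    # Embed and index the chunks locally, then wire retrieval to the LLM.
    store = Chroma.from_documents(chunks, OllamaEmbeddings(model="llama2"))
    qa = RetrievalQA.from_chain_type(
        llm=Ollama(model="llama2"),
        retriever=store.as_retriever(),
    )
    print(qa.run("What do my notes say about project deadlines?"))

Everything stays on the local machine: the embeddings, the vector store, and the model itself.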


That's why it was a question. All I hear about is server farms and massive datacenters, and you can't run those at home or in a small business.


For GPT-4? Sure.

For small LLMs like Llama2 7B/13B and its derivatives? They can be run quite gracefully on Apple Silicon Macs & similarly capable PC hardware.
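A rough back-of-envelope calculation shows why: with 4-bit quantization (as used by llama.cpp and ollama), the weights shrink to a fraction of the full fp16 size. The overhead factor below is an assumption, not a measured number:

    # Rough RAM estimate for a quantized model.
    # Assumption: 4-bit weights plus ~20% overhead for KV cache/activations.
    def approx_ram_gb(params_billions, bits_per_weight=4):
        weights_gb = params_billions * 1e9 * bits_per_weight / 8 / 1e9
        return weights_gb * 1.2  # crude overhead factor (assumption)

    for n in (7, 13):
        print(f"Llama 2 {n}B @ 4-bit: ~{approx_ram_gb(n):.1f} GB")
    # 7B: ~4.2 GB, 13B: ~7.8 GB; both fit in a 16 GB Mac's unified memory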


Even the larger Llama models, never mind the smaller ones, are vastly inferior to the GPT models.


That's not a big problem, given the training/fine-tuning you would do when creating specialized 'local' LLM agents; a sketch of what that looks like is below.
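A parameter-efficient fine-tune (LoRA) is the usual route for specializing a small model on local data. This is a hedged sketch using the Hugging Face transformers, peft, and datasets packages; the model name and dataset file are placeholders:

    # LoRA fine-tuning sketch for specializing a small Llama model.
    # Assumptions: transformers, peft, and datasets are installed; the base
    # model is accessible (it's gated); train.jsonl is a placeholder file
    # with a "text" field per record.
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)
    from peft import LoraConfig, get_peft_model
    from datasets import load_dataset

    base = "meta-llama/Llama-2-7b-hf"  # placeholder base model
    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.eos_token      # Llama has no pad token by default
    model = AutoModelForCausalLM.from_pretrained(base)

    # Train only small low-rank adapters on the attention projections;
    # the base weights stay frozen, which keeps memory needs modest.
    model = get_peft_model(model, LoraConfig(
        r=8, lora_alpha=16, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

    data = load_dataset("json", data_files="train.jsonl")["train"]
    data = data.map(lambda b: tok(b["text"], truncation=True, max_length=512),
                    batched=True, remove_columns=data.column_names)

    Trainer(model=model,
            args=TrainingArguments(output_dir="local-agent-lora",
                                   per_device_train_batch_size=1,
                                   num_train_epochs=1),
            data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
            train_dataset=data).train()

For a narrow domain task, this kind of adapter training can close much of the quality gap that matters in practice, without needing datacenter-scale hardware.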



