PydanticAI using Ollama (llama3.2) running locally (github.com/pydantic)
2 points by scolvin 42 days ago | 3 comments



So cool! I wonder what the weakest model is that can still call functions and such?

I don't have anything more powerful than an i5 other than my phone, and a lot of interesting applications like home automation really need to be local-first for reliability.

0.5b to 1b models seem to have issues with even pretty basic reasoning and question answering, but maybe I'm just Doing It Wrong.


See https://github.com/pydantic/pydantic-ai/issues/112, where people have tried quite a few models.

Llama 3.2 worked well and used less than 2 GB of RAM.


Edit: Gemma 2 2B is very slow, but it can handle some basic tasks.
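
For anyone wanting to try this, here is a minimal sketch along the lines of the linked example, assuming a local Ollama server with llama3.2 already pulled (ollama pull llama3.2) and one of the early PydanticAI releases; the 'ollama:llama3.2' model string, result_type argument, and result.data attribute are how those versions spelled it, and newer releases may have renamed them. Both the structured result and the current_year tool are delivered through function calls, so it is a quick way to see whether a small model copes.

    from pydantic import BaseModel
    from pydantic_ai import Agent


    class OlympicsAnswer(BaseModel):
        # structured result the agent must return; with these models it
        # comes back as a function call under the hood
        city: str
        years_ago: int


    # 'ollama:llama3.2' points the agent at a local Ollama server
    agent = Agent('ollama:llama3.2', result_type=OlympicsAnswer)


    @agent.tool_plain
    def current_year() -> int:
        # trivial tool the model can call to work out 'years ago'
        return 2024


    result = agent.run_sync(
        'Where were the 2012 Summer Olympics held, and how many years ago was that?'
    )
    print(result.data)
    # expected something like: city='London' years_ago=12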



