Hacker News
Ask HN: How does structured output from LLMs work under the hood?
2 points by dcreater 4 days ago
The OpenAI, Ollama, and LiteLLM Python packages allow you to set the response format to a Pydantic model. As I understand it, this just serializes the model into a JSON schema. Is that schema then simply passed as context to the LLM, with the LLM asked to adhere to it? Or is something more technical/deterministic happening that constrains the output to the provided schema?
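For the "deterministic" possibility the question raises: one common technique (used by constrained-decoding libraries such as Outlines, and by some provider-side structured-output implementations) is to mask the model's logits at every decoding step so that only tokens keeping the output a valid prefix of the target grammar can be sampled. The sketch below is a deliberately tiny, self-contained illustration with an invented vocabulary, fake logits, and a hard-coded grammar for `{"age": <digits>}`; real systems compile the mask from a full JSON Schema and apply it to the model's actual token distribution.

```python
import re

# Toy vocabulary with made-up "logits"; a real model would produce these.
LOGITS = {"Sure": 5.0, "{": 1.0, '"age"': 1.2, ": ": 1.1,
          "4": 2.0, "2": 1.5, "}": 3.0}

SKELETON = '{"age": '                       # fixed structural prefix
COMPLETE = re.compile(r'\{"age": \d+\}')    # the full target grammar

def is_valid_prefix(s: str) -> bool:
    """True if s can still be extended into a string matching COMPLETE."""
    if len(s) <= len(SKELETON):
        return SKELETON.startswith(s)
    if not s.startswith(SKELETON):
        return False
    body = s[len(SKELETON):]
    # body is either digits (possibly still growing) or digits + closing brace
    return re.fullmatch(r"\d+\}|\d*", body) is not None

def constrained_greedy_decode() -> str:
    out = ""
    while not COMPLETE.fullmatch(out):
        # The mask: keep only tokens that leave the output a valid prefix.
        allowed = {t: l for t, l in LOGITS.items() if is_valid_prefix(out + t)}
        # Greedy pick among allowed tokens; "Sure" is always masked out
        # even though it has the highest raw logit.
        out += max(allowed, key=allowed.get)
    return out

print(constrained_greedy_decode())  # → {"age": 4}
```

The key point the toy makes: the unconstrained greedy choice would be "Sure" (logit 5.0), but the mask removes it, so the output is guaranteed to match the grammar regardless of what the model "wants" to say.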