
Not exactly. If you refer to the following line:

> Full OpenAI Compatibility

> WebLLM is designed to be fully compatible with OpenAI API.

It means that WebLLM exposes an API that is identical in behaviour to OpenAI's, so any tool built against that API can be pointed at WebLLM instead and will still work.
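
For example (a minimal sketch of what that compatibility looks like; the call names follow the web-llm README, but treat the exact identifiers as assumptions):

    import { CreateMLCEngine } from "@mlc-ai/web-llm";

    // Creating the engine downloads and caches the model weights in the browser.
    // Model id is illustrative; any model from the MLC catalogue should work.
    const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC");

    // Same request/response shape as openai.chat.completions.create:
    const reply = await engine.chat.completions.create({
      messages: [{ role: "user", content: "Hello!" }],
    });
    console.log(reply.choices[0].message.content);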

By the looks of it, WebLLM runs inference entirely in the browser, so none of your data leaves your machine.

WebLLM does need to fetch a model from somewhere; the demo linked here downloads the Llama 3.1 8B Instruct model [1]. An example of hooking into that download is sketched below.

1: https://huggingface.co/mlc-ai/Llama-3.1-8B-Instruct-q4f32_1-...
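
That download is several gigabytes, so you'd typically surface progress to the user. A sketch of that, assuming the initProgressCallback option from the web-llm README and a model id inferred from the footnote above (both assumptions):

    import { CreateMLCEngine } from "@mlc-ai/web-llm";

    const engine = await CreateMLCEngine(
      "Llama-3.1-8B-Instruct-q4f32_1-MLC", // assumed id, see [1]; fetched once, then served from browser cache
      { initProgressCallback: (report) => console.log(report.text) },
    );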


