abi's comments | Hacker News

Exactly.


I’ve experienced that issue as well. Clearing the cache and redownloading seemed to fix it for me. It’s an issue with the upstream library tvmjs that I need to dig deeper into. You should be totally fine on a 32 GB system.
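If clearing through the browser settings doesn’t do it, something roughly like this in the dev tools console should wipe everything the site has cached locally and force a fresh download. This is just a sketch: no specific database or cache names are assumed, it simply clears everything stored for the site.

    // Rough sketch (run in the browser console on the site):
    // drop all IndexedDB databases and Cache Storage entries so the
    // model artifacts are re-downloaded from scratch on next load.
    // indexedDB.databases() is available in Chromium-based browsers.
    const dbs = await indexedDB.databases();
    for (const db of dbs) {
      if (db.name) indexedDB.deleteDatabase(db.name); // removes partially downloaded artifacts
    }
    const cacheKeys = await caches.keys();
    await Promise.all(cacheKeys.map((key) => caches.delete(key)));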


FWIW mine fails on the same file. I've tried a few times, including in different Incognito sessions, but I repeatedly get the same error with the Llama 3 model:

"Could not load the model because Error: ArtifactIndexedDBCache failed to fetch: https://huggingface.co/mlc-ai/Llama-3-8B-Instruct-q4f16_1-ML..."


Use Secret Llama in an incognito window. Turn off the internet and close the window when done.


Thanks for the bug report. Yeah, it’s a bug where the state isn’t reset properly when New Chat is clicked. Will fix tomorrow.

Chat history shouldn’t be hard to add with localStorage and IndexedDB.
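Something along these lines would cover the basic persistence part with localStorage (a rough sketch, not what’s in the app; the message shape here is made up):

    // Rough sketch: persist chat history in localStorage keyed by conversation id.
    // The ChatMessage shape is hypothetical, not the app's actual types.
    interface ChatMessage {
      role: "user" | "assistant";
      content: string;
    }

    function saveHistory(conversationId: string, messages: ChatMessage[]): void {
      localStorage.setItem(`chat:${conversationId}`, JSON.stringify(messages));
    }

    function loadHistory(conversationId: string): ChatMessage[] {
      const raw = localStorage.getItem(`chat:${conversationId}`);
      return raw ? (JSON.parse(raw) as ChatMessage[]) : [];
    }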


Yes. Web-llm is a wrapper around tvmjs: https://github.com/apache/tvm

Just wrappers all the way down


And one day you realize wrappers are what it was all about

You betta lose yaself in the music tha moment you own it you betta neva let it go (go (go (go))) you only get 1 shot do NAHT miss ya chance to blow cuz oppatunity comes once inna lifetime (you betta) /gunshot noise


Appreciate the kind words :)


Window AI (https://windowai.io/) is an attempt to do something like this with a browser extension.


Try switching models to something other than TinyLlama (it’s the default only because it’s the fastest to load). Mistral and Llama 3 are great.


This is amazing. Thanks both for sharing your stories. Made my day.


Well, it should be possible to just drag and drop a file/folder.

