This tool fills a very narrow use case for demos of AI chat.

If you know Python and can run inference models in a Colab notebook, you can quickly whip up a demo UI with built-in components like chat with text and images. Arguably everyone is testing these kinds of apps right now, so making them easy to build is great for demos and prototyping.
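The submission's own API isn't shown in this thread, so as a purely illustrative stand-in, here is roughly what that genre of demo looks like with Gradio's ChatInterface (a different library, used here only to show how little code this takes):

    # Stand-in sketch only: Gradio, not the submitted tool.
    import gradio as gr

    def respond(message, history):
        # Placeholder; a real demo would call an inference model here.
        return f"echo: {message}"

    gr.ChatInterface(fn=respond).launch()

A few lines like that, plus a model call, is the whole demo, which is why this niche keeps attracting new frameworks.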

For anything more than a demo, though, you wouldn't use this.




What other use cases is this tool insufficient for?

BentoML already wins at model hosting in Python IMHO.
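For comparison, a minimal sketch of a BentoML-style service, using the 1.x io-descriptor API (the model call is a placeholder):

    import bentoml
    from bentoml.io import Text

    svc = bentoml.Service("chat_demo")

    @svc.api(input=Text(), output=Text())
    def generate(prompt: str) -> str:
        # Placeholder; a real service would invoke a model runner here.
        return prompt.upper()

Served with something like "bentoml serve service:svc", which gets you an HTTP endpoint rather than a demo page.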

What limits this to demos?

IIRC there are a few ways to do ~ipywidgets with React patterns in notebooks. But then you have to host the notebooks for users somehow: with no online kernel at all, with one shared kernel for all users (not safe), with a container/VM per user (Voila, JupyterHub, BinderHub, jupyter-repo2docker), or by building a WASM app and hosting it statically (repo2jupyterlite) so that users run their own code in their own browser.
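A minimal ipywidgets sketch of the notebook-UI pattern being described (widget names are illustrative):

    import ipywidgets as widgets
    from IPython.display import display

    prompt = widgets.Text(description="Prompt:")
    send = widgets.Button(description="Send")
    log = widgets.Output()

    def on_click(_):
        with log:
            # Placeholder; a real app would run inference here.
            print("user:", prompt.value)

    send.on_click(on_click)
    display(prompt, send, log)

Everything above runs in whatever kernel hosts the notebook, which is exactly why the hosting question (shared kernel vs. per-user container vs. WASM in the browser) matters.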


Xoogler/ML researcher here. Everyone at Google used Colab because the Bazel build process takes 1-2 minutes, and even just firing up a fully Bazel-built script usually takes several seconds. Frontend work is also near impossible to get started with if you aren't on a large, established team. Colab is the best solution to a self-inflicted problem, and Colab widgets are super popular internally for building hacky apps. This library makes perfect sense in that context...
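For reference, one flavor of the Colab widgetry being alluded to is the forms syntax, where #@param comments render as UI controls (values here are made up):

    # In Colab, the #@param annotations render as form widgets;
    # anywhere else these are ordinary Python lines.
    model_name = "my-model"  #@param {type:"string"}
    temperature = 0.7  #@param {type:"slider", min:0, max:1, step:0.1}

That lets a plain script double as a hacky internal app without touching frontend code.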


How could their monorepo build system, with its distributed build caching, be improved? What is faster at that scale?

FWIU Blaze was rewritten as Bazel without the Omega scheduler integration?

gn generates Ninja build files for Chromium and Fuchsia: https://gn.googlesource.com/gn



