[flagged] Nvidia Launches a 100kb text-to-image model called Perfusion (nvidia.com)
85 points by enamya on Aug 6, 2023 | 14 comments



* a several-GB model to which a small amount of subject-specific training can be applied per archetype, augmenting that large model

There doesn't seem to be any code or runtime examples.


@dang: very misleading, editorialized title.

Of course there is no 100KB text-to-image model.


From the white paper on Arxiv:

“This allows runtime-efficient balancing of visual-fidelity and textual-alignment with a single 100KB trained model, which is five orders of magnitude smaller than the current state of the art.”

https://arxiv.org/abs/2305.01644

It's one of those sentences that, if you know what it means, you know what it means. That said, the title needs the word "personalization" inserted before the word "model", e.g.:

Nvidia intros 100kb text-to-image personalization model called Perfusion
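
For scale, a back-of-the-envelope check on "five orders of magnitude" (my arithmetic, not a figure from the paper):

    100 KB * 10^5 = 10^7 KB = 10 GB

which lines up with the several-GB base models this plugs into.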


It's not a 100KB model. It's 100KB of extra trained weights for a several-GB model: a small layer to stick on top of the real model for fine-tuning.


This looks like something between fine-tuning a top layer and a zero-shot approach.

This is probably what future voice models will look like as they begin to capture prosody and other fine characteristics in a few hundred KB.


Yes, although it is decently interesting that a model can be fine-tuned by tweaking just a small number of weights and training for only a few minutes.
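
A minimal PyTorch sketch of that general idea (toy stand-in model, not Perfusion's actual code):

    import torch.nn as nn

    # Toy stand-in for a large pretrained backbone.
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))

    # Freeze everything...
    for p in model.parameters():
        p.requires_grad = False

    # ...then unfreeze only the small subset you want to personalize.
    for p in model[-1].parameters():
        p.requires_grad = True

    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"training {trainable:,} of {total:,} parameters")

Only the unfrozen weights need to be saved afterwards, which is why the resulting artifact stays tiny.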


There is some meat to the story, I agree, but it's not surprising. The fine-tuning model will of course be small in file size and quick to train, because by definition it applies changes to a small subset of the main model and is trained on only a small amount of input data.

You can't use the small tuning model for "Teddies" with a query that has nothing to do with Teddies. You could see these small tuning models as diff files for the main model: depending on the user query, one can choose an appropriate diff to apply to improve the result for that specific query.

When you fine-tune a model on new inputs, you can save the weights that changed to a separate file instead of overwriting the main file.

In other words, one can see the small tuning models as selectively applied updates/patches, as in the sketch below.
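
A minimal PyTorch sketch of that diff idea (save_patch/apply_patch are hypothetical helpers, not Perfusion's actual format):

    import torch

    def save_patch(before, after, path):
        # Keep only the tensors that actually changed during fine-tuning.
        patch = {k: after[k] - before[k]
                 for k in after
                 if not torch.equal(before[k], after[k])}
        torch.save(patch, path)

    def apply_patch(state, path):
        # Add the saved deltas back onto the base model's weights.
        for k, delta in torch.load(path).items():
            state[k] = state[k] + delta
        return state

Depending on the query, you'd pick the appropriate patch ("teddies.pt", "cats.pt", ...) and apply it to a copy of the base state dict before running inference.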


Isn't this just another LoRA-style method, like what we've already seen in Stable Diffusion?
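
For reference, by LoRA I mean the usual frozen-base-plus-trainable-low-rank-delta recipe (toy sketch; the paper's key-locked gating is the part this doesn't capture):

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        # Frozen base layer plus a trainable low-rank update: W x + alpha * B (A x).
        def __init__(self, base: nn.Linear, r: int = 1, alpha: float = 1.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))
            self.alpha = alpha

        def forward(self, x):
            return self.base(x) + self.alpha * (x @ self.A.t()) @ self.B.t()

With r=1 the learned delta B @ A is a rank-one matrix, which is the regime the paper's "rank one editing" title points at.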


Yes, you need the pretrained model. BUT: for embedded applications, you could bake that into silicon and keep the 100KB in flash, which could open up some possibilities.


Maybe the title should be "Key-Locked Rank One Editing for Text-to-Image Personalization", per HN guidelines.


Very misleading title


[flagged]


They can't, and they aren't. In any case, as you've just read in the article, the model is very large; why even say this given what you just read?


You got to 18k karma with such clueless, inflammatory comments?



