“This allows runtime-efficient balancing of visual-fidelity and textual-alignment with a single 100KB trained model, which is five orders of magnitude smaller than the current state of the art.”
It's one of those sentences that, if you know what it means, you know what it means. That said, the title needs the word "personalization" inserted before the word "model", e.g.:
Nvidia intros 100kb text-to-image personalization model called Perfusion
There is some meat to the story, I agree, but it's not surprising. The fine-tuning model will of course be small in file size and quick to train, because by definition it applies changes to only a small subset of the main model and is trained on only a small amount of input data. You can't use the small tuning model for "teddies" with a query that has nothing to do with teddies. You can think of these small tuning models as diff files for the main model: depending on the user query, one can choose an appropriate diff to apply to improve the result for that specific query.
When you fine-tune a model on new inputs, you can save the weights that changed to a separate file instead of back into the main file.
In other words, the small tuning models can be seen as selectively applied updates/patches.
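To make the diff-file analogy concrete, here is a minimal PyTorch sketch. This is not Perfusion's actual mechanism, just the save-a-patch/apply-a-patch idea; the function names and the file name in the usage comments are made up.

```python
import torch

def extract_patch(base_sd, tuned_sd):
    """Keep only the tensors that changed during fine-tuning."""
    return {
        name: tensor
        for name, tensor in tuned_sd.items()
        if name not in base_sd or not torch.equal(base_sd[name], tensor)
    }

def apply_patch(base_sd, patch):
    """Overlay a patch onto a copy of the base weights."""
    merged = dict(base_sd)
    merged.update(patch)
    return merged

# Usage (illustrative): save just the changed subset, which can be far
# smaller than the full checkpoint, then overlay it again at load time.
# torch.save(extract_patch(base_sd, tuned_sd), "teddy_patch.pt")
# model.load_state_dict(apply_patch(base_sd, torch.load("teddy_patch.pt")))
```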
Yes, you need the pretrained model. BUT: for embedded applications, you could put the base model into metal (ROM) and keep the 100 KB patch in flash, which could open up some possibilities.
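As a toy sketch of that deployment, assuming an immutable base state dict and hypothetical per-concept patch files, a dispatcher could pick the right ~100 KB overlay per query (the paths and the naive keyword match are invented for illustration):

```python
import torch

# Hypothetical layout: base weights are immutable (think ROM), small
# per-concept patches sit in flash. All names/paths here are made up.
PATCHES = {
    "teddy": "patches/teddy.pt",
    "cat": "patches/cat.pt",
}

def weights_for(query, base_sd):
    """Naive keyword dispatch: overlay the matching concept patch, if any."""
    for concept, path in PATCHES.items():
        if concept in query.lower():
            patched = dict(base_sd)            # the base copy stays untouched
            patched.update(torch.load(path))   # ~100 KB overlay from flash
            return patched
    return base_sd  # no matching concept: run the plain base model
```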
Doesn't seem to be any code or runtime examples.