Hacker News | popalchemist's comments

You might want to consider a new name; inviting a lawsuit from Adobe could end your project. They have the cash to waste your resources infinitely.

According to the acquisition document: https://www.adobe.com/content/dam/cc/uk/aboutadobe/newsroom/..., it looks like the trademark might be "frame.io" for Adobe's product. But it's a good point.

If we ever get to the point that we'd be getting sued by Adobe, I'd honestly be just as happy as I'd be sad haha


Source separation is a general term; stem separation is a specific instance of source separation.

This comment is now the first result on Google when you search for "Neon Genesis title drop".

Nice work. What's the underlying stack?

It's really just JavaScript and the Web Audio API. For the share links, I'm using Postgres on the backend to store the JSON blobs. Pretty simple!

How many bits of information are there? You might be able to put it straight in the URL with base122.
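A minimal sketch of that idea, assuming the shared state is a small JSON object. It uses base64url instead of base122 since base64url is built into Node and browsers; the function names, the state shape, and the example.com URL are all hypothetical:

```javascript
// Hypothetical sketch: pack app state into a URL fragment, no backend needed.
function stateToUrlFragment(state) {
  const json = JSON.stringify(state);
  // base64url: URL-safe alphabet (A-Z, a-z, 0-9, -, _), no padding
  return Buffer.from(json, "utf8").toString("base64url");
}

function urlFragmentToState(fragment) {
  const json = Buffer.from(fragment, "base64url").toString("utf8");
  return JSON.parse(json);
}

// Usage: the fragment round-trips the state and is safe to paste into a URL.
const state = { bpm: 120, steps: [1, 0, 0, 1] };
const frag = stateToUrlFragment(state);
const shareLink = `https://example.com/#${frag}`;
```

base64url costs ~33% overhead versus base122's ~14%, but avoids any extra dependency; for a few hundred bits of state either fits comfortably in a URL.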

Beautiful.

Wake me when the code is out. Too many people announce an OSS project to gather followers before any code drops, only to "pivot" to something else later.


How do you plan to continue to offer open weights? What's your business model that allows that?


The trend of increasing compute is on the whole correct, but it is by no means a universal rule. More optimized training procedures and model architectures are coming out all the time. In just the last week, we got F5-TTS, which is trained on twice as much data as the previous leader in realistic TTS (Tortoise TTS) and is dramatically faster - taking only about three weeks on H100s. We also got Meissonic, a text-to-image model that is far cheaper to train than any existing model; i.e., you can train a Stable Diffusion-class model from scratch on consumer hardware, or in the cloud for about $500.

https://huggingface.co/MeissonFlow/Meissonic

https://github.com/SWivid/F5-TTS

The reason compute costs are doubling is that this is an arms race: everyone in the corporate space is prioritizing bigger models over better architectures in pursuit of a breakaway. It is not indicative of a law à la Moore's Law.


Yes, to me, the report is interesting not because of the insights it offers, but because it raises more questions:

1. What about new AI chip vendors?

2. How will the price of compute change?

3. How will the demand for compute change?

4. How will the overall supply of chips change?


The "making the image models much faster" part is model optimizations that are also explained in the post.


Where? I don't see any explanation of model optimizations in the linked post.


Beautifully executed. Would you be willing to port this to a Vue/React component?


The deadline for YC's W25 batch is 8pm PT tonight. Go for it!
