
>"More recently, there is a shift towards using a Transformer architecture, and right now I’m experimenting with that as well."

I'm really curious: any early results to share on that? Attention really does make a big difference on a lot of tasks (including work I've done, so I know firsthand). In theory at least, it should improve the coherence of the entire piece, right?




Check out Music Transformer, which was recently published: https://arxiv.org/abs/1809.04281

Some generated samples: https://storage.googleapis.com/music-transformer/index.html
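
The key trick in that paper is relative attention: the attention logits get a distance-dependent term, so the model can pick up on repetition and periodic structure in the music. A rough sketch of the idea follows (not the paper's memory-efficient version, and I'm simplifying their learned relative-position embeddings down to one scalar bias per distance; function and variable names are mine):

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def relative_attention(q, k, v, rel_bias):
        # q, k, v: (n, d) queries/keys/values for one head.
        # rel_bias: (2n-1,) learned scalars, one per distance j - i
        # (a simplification of the paper's relative embeddings).
        n, d = q.shape
        logits = (q @ k.T) / np.sqrt(d)                  # content-based scores
        dist = np.arange(n)[None, :] - np.arange(n)[:, None]
        logits = logits + rel_bias[dist + n - 1]         # distance-based bias
        return softmax(logits) @ v                       # weighted sum of values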


The accompaniment examples are cool. That would be a very nice tool to have: just play a melody and it auto-generates an accompaniment.


The Transformer is working really well; I'm very excited. I'll probably be sharing results soon. Yes, attention makes a huge difference, and the pieces are both more creative and more coherent.


Have you considered using 'learning from human preferences' as an additional training signal on top of the Transformer? That was another OpenAI project, and it seems tailor-made for music generation: what is more 'I know it when I hear it' than music quality?
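
To be precise, in that paper the preferences don't replace the loss directly: they train a reward model from pairwise human comparisons, and that learned reward then drives RL fine-tuning of the generator. The comparison loss is Bradley-Terry style; roughly something like this (names mine):

    import numpy as np

    def preference_loss(r_a, r_b, human_prefers_a):
        # r_a, r_b: reward-model scores for two generated clips.
        # P(a preferred) = exp(r_a) / (exp(r_a) + exp(r_b));
        # minimize the negative log-likelihood of the human's choice.
        p_a = 1.0 / (1.0 + np.exp(r_b - r_a))
        return -np.log(p_a if human_prefers_a else 1.0 - p_a)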


That is exciting! I'll be watching on Twitter :)


textgenrnn (https://github.com/minimaxir/textgenrnn) uses a simple Attention Weighted Average at the end of the model for text generation, which in my testing allows the model to learn much better.
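
For reference, the layer is basically: learn a context vector, score each timestep's hidden state against it, softmax over time, and take the weighted sum, so the output layer sees a pooled view of the whole sequence rather than just the last hidden state. A minimal sketch of the idea (not textgenrnn's exact Keras layer; names mine):

    import numpy as np

    def attention_weighted_average(h, w):
        # h: (timesteps, hidden) RNN outputs; w: (hidden,) learned context vector.
        scores = h @ w                        # score each timestep against w
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()              # softmax over time
        return weights @ h                    # pooled (hidden,) representation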



