Hacker News

People want and need other people around, doing human things. Even if the musical output was impressive, I can't think of people developing a deep relationship with it outside of novelty. Algorithms are cold, and music is usually very much the opposite - I can see an "uncanny valley" effect cropping up with the imitative forms: people hearing something that is supposed to come across as laden with emotion may instead feel revulsion. What do we get out of computers composing songs that are supposed to relate to the fragility of the human condition?

Sure, the algorithms, once sufficiently advanced, could probably trick us into thinking that certain examples of generative music were made by a person, with the algorithmic origin revealed later to prove that "the humans are stupid" and "the google algorithms are clever" - but what are we actually proving here?

Can a computer devise new artistic forms that have some genuine impact on people - can a computer come up with Bacon's Triptych of George Dyer outside of regurgitating fragments of what it already has seen? What do we get out of a computer aping the alcohol-fuelled sweaty anarchic performances of The Black Lips?

The interesting thing will be seeing whether this goes to places music has not yet gone - some new composition method, manipulation of frequency in ways that humans have not yet devised.




I see this sort of application having a lot of use in the kinds of derivative pop music developed by ensembles of songwriters and manufactured purely to generate radio hits. Bacon's Triptych of George Dyer is genius. The average person listening to Taylor Swift just does not care about Bacon's Triptych of George Dyer.

In a way, Magenta's job is not besting Bach. By the definition of Bach (a human being who changes the way we view and enjoy music), a non-human being cannot best Bach. Magenta's job is filling a much simpler, if equally challenging, role - that of Max Martin, or the writers of "Let It Go".

As it turns out, this kind of music is already pretty formulaic. Much has been written on repetitive chord progressions being spammed across hundreds of famous singles. In a way, artists shouldn't fear the potential of these technologies besting them - they should thank them.

Artists are now freed from loading their albums with eye-rollingly generic lead singles that they immediately get sick of ("Stairway to Heaven", "Creep", "Smells Like Teen Spirit") because record labels know that's what will get the most radio play. You can just let the machine do those. Now an artist's reputation is determined purely by their relative mettle against other human artists.


The average person listening to Taylor Swift is thinking about Taylor Swift, and not what they're listening to.

Pop is maybe 75% performance, sex, status, and charisma. The music isn't irrelevant, but it only really needs to be a committee-produced mashup of contemporary cliches to do its job.

The rest is posing and attitude.

>As it turns out, this kind of music is already pretty formulaic.

But it's less formulaic than it sounds. Discovering that it uses Standard Chord Sequence Number 7 (from the small standard pop set) won't get you close to an interesting song.

A lot of creative detail goes into the production, arrangement, and the vocal performance. Not the MIDI file.

Basically there are huge gaps from a MIDI cliche machine - buildable now, and not particularly difficult - to a full virtual artist who produces even moderately successful tracks without human help, to a musical AI genius who produces completely new musical styles that capture the human imagination for centuries.
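As a toy illustration of how buildable that first tier really is, here is a minimal sketch of a "MIDI cliche machine": a first-order Markov walk over a handful of stock pop chords. The chord set and transitions are invented for illustration, not learned from any real corpus.

```python
import random

# Stock pop chords (Roman numerals) with hand-picked transitions.
# These are illustrative assumptions, not data-derived probabilities.
TRANSITIONS = {
    "I":  ["V", "vi", "IV"],
    "V":  ["vi", "I", "IV"],
    "vi": ["IV", "I", "V"],
    "IV": ["I", "V", "vi"],
}

def cliche_progression(length=8, start="I", seed=None):
    """Generate a chord progression by walking the transition table."""
    rng = random.Random(seed)
    chords = [start]
    for _ in range(length - 1):
        chords.append(rng.choice(TRANSITIONS[chords[-1]]))
    return chords

print(cliche_progression(8, seed=42))
```

Which is exactly the point of the comment above: a generator like this is trivial, and the distance from here to an interesting record is where all the real work lives.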

You need a model of mind to do that last one, and we're at least 50 to 100 years away from that.


> The average person listening to Taylor Swift is thinking about Taylor Swift, and not what they're listening to.

I think this is a grand oversimplification. Personality certainly _contributes_ to pop stardom, but the music is still #1. Before anyone knew who Taylor Swift was, they connected with her through one or more songs.

> A lot of creative detail goes into the production, arrangement, and the vocal performance. Not the MIDI file.

Of course, but even having an autonomous "songwriter" that could write _a_ hit would be a game-changer for music (though obviously most immediately applicable to top 40 / pop).

> You need a model of mind to do that last one

I disagree. Machines already produce what would otherwise be considered "experimental" music, you just need some deep reinforcement learning to know what has mass appeal.


> Before anyone knew who Taylor Swift was, they connected with her through one or more songs.

Only if by 'connected with her' you mean heard her debut hit over and over and over again on the radio until it became an earworm.


I disagree about "Creep" and "Smells Like Teen Spirit" being generic. These were exceptionally crafted pop songs that expressed heart-wrenching emotion. Nothing like the typical pop song at all.


> Even if the musical output was impressive, I can't think of people developing a deep relationship with it outside of novelty.

There are plenty of times and places where people want high quality "music" but don't want to actually engage with it on any level - the music that tells you you're still connected when you're waiting for a conference call, the low volume background music in some retail environments, the music in a lift. If "pleasant musical noise" could be generated automatically and to a sufficient quality I think there'd be a pretty decent market for it.


That's true.

Eno worked on a number of projects to generate music years ago, with Bloom and similar: http://www.generativemusic.com/. A quick fiddle with that can definitely generate some banal hold/elevator music.
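A sketch of how simple that kind of Bloom-style generative texture can be: pick notes from a pentatonic scale (so any combination sounds roughly consonant) at irregular intervals. The scale choice and timing ranges here are my own assumptions, not Eno's actual rules.

```python
import random

# C major pentatonic as MIDI note numbers - chosen so that any
# overlapping notes stay consonant (an assumption of this sketch).
C_MAJOR_PENTATONIC = [60, 62, 64, 67, 69, 72]

def ambient_events(n_events=16, seed=None):
    """Return (onset_seconds, midi_note) pairs for a generative texture."""
    rng = random.Random(seed)
    t, events = 0.0, []
    for _ in range(n_events):
        t += rng.uniform(0.5, 3.0)  # irregular spacing between notes
        events.append((round(t, 2), rng.choice(C_MAJOR_PENTATONIC)))
    return events
```

Feed the output to any MIDI player and you get exactly the sort of pleasant, undemanding hold-music texture being discussed.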


Or in a video game, where you'd want the music to react to events.


Imagine the performance side of things though.

You could add to the song with commands like 'add a psytrance bass line' - even within predefined parameters - dynamically generating an entirely new bass line from other songs in the genre.

Maybe you could instantly add an improvised violin melody by telling it a style given that the chords/key from the human band are consistent.

Sentiment analysers could tweak the music based on crowd reaction towards musician-defined goals and learn those presets over time.

If music is a synchronisation layer between humans, maybe machine learning could help us to synchronize even more closely.



