How I Taught My Computer to Write Its Own Music (nautil.us)
95 points by dnetesn on Feb 13, 2015 | 40 comments



Makes for an impressive read... perhaps too impressive. Reeks of curve-fitting to me. "For instance, if I heard something—a melody, a chord progression—that had an emotional attraction for me, I would draw attention to it in the mix, repeating it and developing it further if necessary." Sounds like a dressed-up version of n monkeys at n typewriters. Let them loose, and when you finally see something you recognize as a word, say "dog", tell them to type it more often. So every 20th word is "dog", with pseudo-randomness continuing underneath. We would never enjoy reading a short story built this way, but because repetition is one of the most important foundations of music listening (http://aeon.co/magazine/culture/why-we-love-repetition-in-mu...), this works.

The project seems like it was interesting to work on, and most people would call the music beautiful, but to me it doesn't really amount to what it's purported to represent.

Another (to me) important point: a lot of the compositions, with some notable exceptions, don't stray far from the pentatonic scale – or the individual elements are pentatonic in relation to themselves. The pentatonic scale is five notes, each of which, to western audiences, sounds relatively pleasing in relation to the other four. You could play something completely random in a pentatonic scale (perhaps with certain broad rhythmic restrictions), and western audiences would enjoy it. If anything, I think this is just evidence that the process was actually carefully shaped at every step of the way by human tastes and intuition. The computer was truly more of a servant than a collaborator.


It is not just western audiences. Pentatonic is cross-cultural.

https://www.youtube.com/watch?v=jpvfSOP2slk

A good way to find new ideas for music that people will actually want to listen to is to go back to the pentatonic system and build new variations from there.


A particularly beautiful example is the demonstration Mr. McFerrin does with various audiences.

https://www.youtube.com/watch?v=_Irii5pt2qE


I got sort of the same impression. He writes a very nice narrative; unfortunately, I doubt it really fits reality.


Breaking news! Humanities professor writes attractive narrative, cleverly papers over content's shortcomings. Unprecedented! Read all about it! :p


It still seems inherently sequential or loop-based and generative/reactive to me. My own favorite parts of time-based works of art are when a transformation happens that is surprising in the moment but feels inevitable in hindsight, or a development that simultaneously communicates planning and restraint. Composers have been wringing creativity and artistry out of an extremely standard set of symphonic instruments for centuries - when I think of the playfulness of the Turkish March in Beethoven's 9th, or the drawn-out, crashing inevitability of the big moment in the 1st movement of Sibelius' 5th symphony, or the freakishly satisfying conclusion of Respighi's Pines of Rome, that is what music actually means to me, and it seems we are a very long way from algorithmic composition reaching that kind of artistry.


I agree. A good heuristic for interesting music: build an expectation model, then subvert it with an inventive twist that's consistent with the surrounding grammar but adds unexpected entropy.
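Even a toy version gets the flavor across. Something like this (Python, invented numbers, a pentatonic scale standing in for the "grammar") learns which note tends to follow which, mostly obeys that model, and occasionally breaks it on purpose while staying in scale:

  import random
  from collections import defaultdict

  def build_model(melody):
      # count which note follows which (first-order expectations)
      counts = defaultdict(lambda: defaultdict(int))
      for a, b in zip(melody, melody[1:]):
          counts[a][b] += 1
      return counts

  def next_note(model, current, scale, surprise=0.15):
      expected = model.get(current)
      if expected and random.random() > surprise:
          notes, weights = zip(*expected.items())
          return random.choices(notes, weights=weights)[0]
      # the "twist": ditch the learned expectation, but stay inside the grammar
      return random.choice(scale)

  scale = [60, 62, 64, 67, 69]             # C major pentatonic, as MIDI notes
  seed = [60, 62, 64, 62, 60, 67, 69, 67]  # toy training melody
  model = build_model(seed)

  melody = [seed[0]]
  for _ in range(16):
      melody.append(next_note(model, melody[-1], scale))
  print(melody)

Obviously random-but-in-scale is a crude stand-in for a genuinely inventive twist - and that twist is the hard part.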

Building representational models of that process is hard. So most people don't bother - they mush stuff up at random and keep the bits they like.

Like this.

Unfortunately this music prof is ignoring 60-odd years of computer music history and reinventing the glitch-with-Max thing that was big 10-15 years ago.

Autechre, Amon Tobin, and many many others have been making music like this for a while now. The sound/sample world is different, but the techniques are very similar.


That's how I feel - the tone of this writing irks me a bit. I don't want to detract at all from the work these authors did; I like it a lot. But the text here is fluffy and frankly somewhat narcissistic.

Lejaren Hiller showed in 1957, with the Illiac Suite, that computers can compose competent music. Since then there have been countless others using computers to do things like the project in this article (Autechre is a PERFECT example).

I love that people are doing work like this and sharing it (full disclosure, this is a hobby of mine), but I wish it were presented in a less pretentious way (although admittedly this comment could probably also be a lot less pretentious).


10-15 years ago? Absolute poppycock. Autechre, Amon Tobin... you'll be saying Aphex Twin next.

Why no, American-born electronic artist Brian Transeau "developed" the technique as recently as 2011.

https://en.wikipedia.org/wiki/Stutter_edit


He developed nothing of the kind. The "stutter edit" effect was used in electronic music even before Autechre and the rest, back in the electronic bands of the eighties.

If this guy created anything, it's the plugin of the same name that automates the process (and even that is debatable for 2011 -- similar plugin tools existed way before).

In essence: don't trust anything you read on the internet. This Wikipedia page is an example of the worst BS on Wikipedia; it should be taken down.


your sarcasm detector is way off :)


Really? Faith in humanity restored.

Now I wonder how that Wikipedia article survives...


It's possibly the greatest outrage of our times.


I've created some similar projects of my own, studied with the leading professor in this field (see my other comment), written an ebook about this topic, and kept an eye on this field for years.

tldr: artificial intelligence is much more artificial than intelligent.

nearly every project of this nature illustrates how much easier it is to write narrow AI than general AI, even though any AI which produced complete musical works would be a very narrow example of general AI.

it is much, much easier to write very specifically-targeted stuff that can perform a few useful tricks than it is to write an actual "composer."

this particular example features a Max/MSP patch designed to turn a very specific database of samples into a very specific style of ambient music.

every time I see anything about this topic on HN, I kind of end up being a bit of a killjoy about it, because the reality is that it is very, very common for people to overhype their results in this context. I've probably been guilty of this myself, in the past.

it's definitely possible to write very satisfying and effective music-generating code, but there is almost never an incredibly deep lesson to learn about the nature of consciousness there. there ARE very, very often incredibly deep and specific lessons to learn about the inner workings of particular musical forms.

I wrote a drum-and-bass drum pattern generator which made me much, much better at writing drum patterns by hand afterwards. one of the professors I studied with (briefly) wrote an amazing fuzzy logic counterpoint generator which allowed him to, effectively, improvise entire concertos, and I'm reasonably confident he learned an enormous amount about classical counterpoint in the process.

(note that "improvise entire concertos" means "play a MIDI wind controller, improvise a tune, and have his software generate appropriate harmonic accompaniment." the overhyping tendency bit me even as I was warning you about it.)
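(also, to be concrete about what "generate appropriate harmonic accompaniment" even means at the crudest level: pick a consonant interval below each incoming melody note and avoid repeating yourself. toy Python, nothing to do with his actual fuzzy logic code:

  import random

  # consonant intervals below the melody note, in semitones
  CONSONANT = [3, 4, 7, 8, 9]  # minor/major third, perfect fifth, minor/major sixth

  def harmonize(melody):
      accompaniment = []
      last_interval = None
      for note in melody:
          # avoid repeating the previous interval so the harmony line moves
          options = [i for i in CONSONANT if i != last_interval]
          interval = random.choice(options)
          accompaniment.append(note - interval)
          last_interval = interval
      return accompaniment

  tune = [60, 62, 64, 67, 64, 62, 60]  # incoming melody, as MIDI notes
  print(list(zip(tune, harmonize(tune))))

his system weighed far more context than that, which is exactly where the deep lessons about counterpoint come from.)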

likewise, I think the author of this nautil.us post learned an enormous amount about what makes ambient music work well. and probably got much better at working in Max/MSP.

but this is not Skynet shit. this is not proof of the rise of machine consciousness. this is intricately technical work which develops your skill in programming and musical composition.

that's really all there is to it.


Do you have a link to your ebook?

>there ARE very, very often incredibly deep and specific lessons to learn about the inner workings of particular musical forms.

That's always the problem. You can mechanize a style if you work hard at it, are skeptical about what you're doing so you're never satisfied with nearly-almost, and don't mind being ignored because no one ever listens to this stuff anyway. ;)

But it's impossible to create an original computer-generated musical language with non-trivial appeal without having a good model of human musical perception and emotional response.

Most music theory gives you a musical alphabet, and once you have one you can work out how other alphabets work.

But that's a long way from working out how to mechanize the invention of an original but expressive musical language.

I think it's a fascinating problem.

My guess is it's going to stay a fascinating problem for a long time.


  Do you have a link to your ebook?
http://gilesbowkett.blogspot.com/2013/11/new-ebook-hacking-m...

and to a lesser extent http://singrobots.com/

the book's actually on sale right now, $23 vs $17. caveat: I've really got to redesign it, and I may just write another one because there's a lot more that could go in there.


Ever heard of Emily Howell? She's a bot.

She can write an infinite amount of new music all day for free. People can't tell the difference between her and human composers when put to a blind test.

Emily Howell fugue: https://www.youtube.com/watch?v=jLR-_c_uCwI

David Cope Emmy Vivaldi (composed by Emily): https://www.youtube.com/watch?v=2kuY3BrmTfQ


sorry, but: caveat city.

I studied with Dr. Cope here:

http://arts.ucsc.edu/programs/WACM

Emmy is not the same as Emily Howell; the Emmy Vivaldi was composed by a simpler program called EMI.

In either case, iirc, the music's composed by probabilistically combining key-signature-normalized snippets of existing compositions. EMI mostly just took the works of one composer and created a new work in that composer's style by Frankenstein-remixing snippets of the composer's actual works. Emily Howell, iirc, does the same, but uses multiple composers and/or original snippets by Dr. Cope.
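the gist, in heavily simplified toy Python (emphatically NOT Cope's code, just the shape of the idea): transpose every snippet into a common key, then splice snippets together wherever their boundary notes line up:

  import random

  def normalize(snippet):
      # crude stand-in for key-signature normalization:
      # transpose so the snippet starts on C
      offset = -(snippet[0] % 12)
      return [p + offset for p in snippet]

  def recombine(snippets, sections=4):
      pool = [normalize(s) for s in snippets]
      piece = list(random.choice(pool))
      for _ in range(sections - 1):
          # prefer snippets that begin on the pitch class the piece ends on
          matches = [s for s in pool if s[0] % 12 == piece[-1] % 12]
          nxt = random.choice(matches or pool)
          piece += nxt[1:]  # splice at the shared boundary note
      return piece

  # hypothetical snippets (MIDI notes), standing in for real excerpts
  snippets = [[60, 64, 67, 72, 67, 64, 60],
              [60, 62, 64, 65, 67],
              [67, 65, 64, 62, 60]]
  print(recombine(snippets))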

btw: feed EMI Beethoven, and "she" produces Mozart. i.e., when probabilistically combining several key-signature-normalized Beethoven snippets, some of the results were identical to larger snippets of Mozart (who was, as you may have guessed, a big Beethoven fan).

also btw: Beethoven wrote algorithmic compositions for people to perform as a parlor game, with dice.

also also btw: my own drum-and-bass Ruby project from years ago will generate an infinite amount of new jungle riddims all day for free:

https://github.com/gilesbowkett/archaeopteryx


> btw: feed EMI Beethoven, and "she" produces Mozart. i.e., when probabilistically combining several key-signature-normalized Beethoven snippets, some of the results were identical to larger snippets of Mozart (who was, as you may have guessed, a big Beethoven fan).

I think you may mean the other way around; Mozart was 15 years older than Beethoven, and died before Beethoven's career took off.


ugh, how embarrassing. you're right about the ages. I'd have to check my notes to be sure whether I got the whole thing mixed up, or just who was a fan of whom, but you're probably right about that part, too.


Feeding Bach into a Markov generator makes for pleasant tunes

Here's a blast from the pre Go-lang world

http://ipn.caerwyn.com/2007/04/lab-77-unexpected-markov.html
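The idea fits in a few lines of Python, too. (The training data below is a made-up stand-in; in practice you'd parse the note sequence out of a Bach MIDI file.)

  import random
  from collections import defaultdict

  def train(notes, order=2):
      # map each run of `order` notes to the notes observed right after it
      table = defaultdict(list)
      for i in range(len(notes) - order):
          table[tuple(notes[i:i + order])].append(notes[i + order])
      return table

  def generate(table, seed, length=32):
      out = list(seed)
      while len(out) < length:
          followers = table.get(tuple(out[-len(seed):]))
          if not followers:
              break
          out.append(random.choice(followers))
      return out

  # stand-in for notes extracted from a Bach score (MIDI numbers)
  notes = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72, 71, 72, 67, 64, 60]
  print(generate(train(notes), seed=notes[:2]))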


Damn, that is a little fancier than my DubStep.rb program: https://github.com/cortesoft/DubStep.rb


You'd be able to tell the difference between her and a human composer in a blind test when the "fugue" turns out to be a single melody wandering with no direction. I wonder who conducted these blind tests.


Presumably this music doesn't have a copyright as it's algorithmically generated and so has no human author? Similar to the case of a photograph taken by an animal (orangutan?) that made headlines a few months back.


That's like saying I can't copyright an image I made entirely within GIMP because it has no human artist. Cope's program may be more generative than a general-purpose graphics program, but the fact remains that Cope is still the creator.


>That's like saying I can't copyright an image I made entirely within GIMP because it has no human artist. //

Not if you created an algorithm that uses GIMP and runs independently of you. Copyright protects artistic works by natural persons.

Cope is the creator of the algorithm and has copyright protection of any artistic elements of that algorithm. Copyright does not protect technical effort no matter how skilled.

In the same way, my camera's firmware writer, though skilled, has technical input into all images created with that camera. But as they don't have artistic input into any specific image, they don't share the copyright - they may have made the image vastly superior with their technical ability (white balance, focus, filters), but that was technical input and not "artistic".

Edit: a reference for the general principle under the USC, http://en.wikisource.org/wiki/Page:Compendium_of_US_Copyrigh....


If a monkey steals your camera and takes a photo, who owns the copyright to that photo?


It would be more like if the orangutan took a photo and then a human edited it heavily to create the final work.


Computer generated music often sounds fine, but this comment under http://youtu.be/o0tH_mHXR9c sums it up:

the problem with computer generated music as in this example is that it is formless. The phrases are too expository and the example lacks any identifiable relationship to anything previously stated. If compared to written language, this example writes beautiful sentences but the paragraph makes no sense.

This limitation can surely be overcome - with AI developments and deep learning, algorithms will become capable of learning from human compositions, ultimately producing works indistinguishable from human ones (thereby passing a Turing test of sorts), and even better ones (just as chess engines now routinely beat even genius human players).

This will lead to a world where all casual music can be improvised by computers to the user's liking (even adjusting parameters to the social context at hand, such as feeding off your Facebook status or how things are at work, and mastering that skill by learning from your reactions, which are easily detectable - skipping to the next track, replaying bits you liked), and then every played piece really can be one of a kind; like a kaleidoscope that produces unique, pleasing images on demand.

The next step might be to generate compelling movie scripts (I have a feeling that soap operas and genre movies would be the first to be automated like that...), and ultimately even the movies themselves.

An interesting TED Talk (from December last year, so quite new) that I've watched recently: http://www.ted.com/talks/jeremy_howard_the_wonderful_and_ter...

The important shift would be to stop generating music, or other artistic content, out of a fixed set of preprogrammed principles and allow neural networks to derive the rules by themselves, even coming up with new genres. Instead of teaching the computer to write music, you just let it teach itself.


Ah. It's a Max patch.


looked like puredata to me. I love how philosophical people get when samples get triggered by a few if conditions.


top of the menu bar says "Max"


original post says Max also. :-)


stop the presses!


I immediately thought of Ray Kurzweil playing music written by a computer he programmed in 1965, which he demonstrated on the game show I've Got a Secret. Judging by all of these responses, I'd say we've progressed quite a bit.

https://www.youtube.com/watch?v=X4Neivqp2K4


https://seaman-supko.bandcamp.com/releases

Here's the Bandcamp page for the music the computer wrote. I wonder whether the computer gets the 7 dollars, and the copyright, and so on.


See also Emily Howell, which is something else entirely.


Whenever I see something like this[0], there are two thoughts that immediately manifest in my psyche:

1) I probably wouldn't have bought a subscription, but now that the begging for me to buy a subscription is blocking the article itself, me not buying a subscription is guaranteed.

2) I guess I really don't feel like reading that article anyway. << closes tab >>

Absolutely unacceptable, and I'm getting sick and goddamn tired of it on every other website, especially here on Hacker News, where the general population - of which "people who post things" is presumably a subset - really ought to be well-educated enough regarding proper user experience to know better.

[0]: http://i.imgur.com/Lwz72w0.png


Just in case you missed it, that ad can be closed by clicking the X in the upper-right corner. If you were aware of that, I don't see what's unacceptable about being shown an ad before reading a free (and interesting) article.


> Just in case you missed it, that ad can be closed by clicking the X in the upper-right corner.

I was indeed aware of that.

> If you were aware of that, I don't see what's unacceptable about being shown an ad before reading a free (and interesting) article.

It's annoying and distracting. It's simply not good user experience, and the fact that such an ad manages to break through AdBlock Plus (probably because it's not coming from a third-party service, so it's harder to detect as an ad) is frustrating.

And to make this clear, I have no issue whatsoever with asking for a subscription; in fact, I might have been swayed positively (albeit admittedly slightly) instead of all the way to the negative had they implemented a "Subscribe to us if you want to read more articles like this!" somewhere at the top, or maybe right after the article itself, or somewhere off to the side.

It's akin to smartphone apps (and there are a lot of them that do this, especially ad-supported games) that will step in between you and whatever you were hoping to do, seemingly at random, and display some fullscreen ad. I understand why they do it - they have to make money somehow, after all - but it's the kind of thing that makes any somewhat-sane person not want to follow that ad or keep using the application (as opposed to the likely-intended purpose of driving users to pay for an ad-free version).



