Intriguing to imagine the first time the designers heard the sound their creation could make (presumably there was a TTL or bit-slice prototype before the VLSI implementation). They're sitting around the lab, someone plinking away on the keyboard. One of them says "you know, you could use that sound on a pop record and I bet it would be popular for at least a decade".
The story behind FM synthesis is pretty interesting. A Stanford music professor, John Chowning, came up with the idea in the 1960s and patented it. Stanford didn't think this was what a music professor should be doing and fired him. Meanwhile, Yamaha licensed the patent from Stanford, paying millions of dollars and making it Stanford's most lucrative patent at the time. Stanford changed their mind about Chowning and hired him back, making him a full professor and then department chair.
You may think it sounds crude, but pieces like these took hours of expensive mainframe (PDP-10) time. There wasn't a lot of opportunity for careful sculpting of fine details.
You could easily synthesize something like this in real time now. But not many people do, which I think is a shame.
I've been following along very loosely with this DX7 series, doing some (not DX7) FM and phase modulation in my own code. If you're running Linux, you can try something like this in the WIP graph-synth branch[0] of my mb-sound Ruby app/gem (feedback welcome on the DSL syntax):
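For anyone who just wants the core idea without installing anything, here's a minimal two-operator phase-modulation sketch in plain Ruby (hypothetical names and parameters of my own, not mb-sound's DSL): the modulator's output is added directly to the carrier's phase, which is how the DX7 family actually works under the hood.

```ruby
SAMPLE_RATE = 48_000

# carrier_hz: audible pitch; ratio: modulator frequency as a multiple of
# the carrier; index: modulation depth in radians.
def pm_samples(carrier_hz, ratio:, index:, seconds: 1.0)
  mod_hz = carrier_hz * ratio
  (0...(SAMPLE_RATE * seconds).to_i).map do |n|
    t = n.fdiv(SAMPLE_RATE)
    modulator = Math.sin(2 * Math::PI * mod_hz * t)
    # Phase modulation: the modulator perturbs the carrier's phase.
    Math.sin(2 * Math::PI * carrier_hz * t + index * modulator)
  end
end

samples = pm_samples(110.0, ratio: 2.0, index: 3.0, seconds: 0.1)
```

Sweep `index` over time with an envelope and you get the classic evolving FM timbres; integer ratios give harmonic spectra, non-integer ratios give bells and clangs.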
those early Yamaha FM synths have a sound all their own
For sure, though the parent comment's video link was presumably rendered by software on a PDP-10. I've spent the last week of my spare time trying to write code to make a bass sound that reasonably approximates Lately Bass, and I'm close, but since I'm not even trying to emulate any of the quirks of the Yamaha FM synths it doesn't ever sound quite right.
To be clear - I didn't mean it was a shame that people weren't specifically emulating FM, but that people weren't experimenting with weird synthesis techniques to make weird non-pop music with sounds no one has heard before.
It's the last part that's still really difficult, possibly even harder than it used to be. And that's even with the insane processor power we all have now.
I guess movie soundtracks are the closest to that in widely distributed music, or artists like Aphex Twin.
There's an infinitely vast space of potential sounds, and an infinitely long fractal knife edge of interesting sounds, and it kind of makes sense that finding unexplored areas of that fractal edge between boring and incomprehensible gets harder over time.
c15 on the TX81Z, you'd have to model the 12-bit DACs and the steppiness of the wave shapes. Also, the CPUs in the TX81Z were slow and some calculations would lag a bit, which contributed to its sound... try sending MIDI data to the TX81Z while sequencing it; your pattern will start swinging!
> Stanford changed their mind about Chowning and hired him back
Somehow I feel that if this happened today, no company or university would have the spine to do this, and it would instead turn into an expensive legal battle if he insisted on getting his share of the patent pie.