"Digital audio workstations have become increasingly sophisticated, able to emulate with "plug-ins" the capabilities of sought-after analog studio gear of the past. It has taken a bit longer for virtual instruments to meet this same standard, but they may be nearly there."
Hardware is much more hands on, with real dials, switches, and other means of physical interaction which screens and mice don't come close to matching. Hardware does exist through which musicians can interact with computers, but it's usually MIDI, which has its own limitations compared to fully analog gear.
With hardware you generally don't have to worry about software or OS updates, and the hardware generally continues to "just work" for a long time.
A lot of uncommon analog hardware is unlikely to ever be modeled in software, so the way it reacts when manipulated may never actually be matched digitally, even though a high-quality software model is possible in principle if anyone ever bothered to build one. This extends to how it sounds as well, for the same reason.
That said, modular synth software like this is still great, and may be the next best thing to using the real thing. It's also a great way to learn and get a taste for what's possible before committing money to buying any hardware.
Yes. The words "menu" and "software update" are what kill tactile electronic instruments of all kinds.
It's the same for electronic test equipment. So many features are crammed into new devices that even if you manage to get enough controls for the basic cases onto the front panel, a lot is still hidden in nested menus galore, and those aren't necessarily sensible, discoverable, or even working properly. The result is semi-religious fighting over which update is the least shit, and people refusing updates because they fear them.
The outcome, at least with electronics, is that you'll find a lot of 30-40 year old equipment nestling next to the state of the art, with a sticker on it reading "for indication only". That old equipment does a lot of the real work, while the new stuff is used only for validation and traceable measurements. Due to rapid progress in technology, people cling to the last thing they felt any physical connection with.
This will pass in time as the status quo is accepted by younger generations. Fly by wire it is.
(I am incidentally a rather large fan of analogue and modular synthesizers and half-built one in the late 1990s with 1970s electronic equipment, some of which I still own!)
The UX of digital equipment _really_ sucks. At first I thought it was because the technology was still in its infancy, but things haven't improved since, and we are still emulating analog controls. VST interfaces often have a static background image (a skin) and 'knobs' you control with your mouse. Actual hardware usually features a rotary encoder or buttons to navigate a menu system several layers deep. This is tedious and hampers creativity. One positive example is the Teenage Engineering OP-1; they seem to know what they are doing.
I'm currently building digital instruments for Android phones, and I refuse to add any UI elements until I figure out something that makes sense. Until then, the interpreted Lua code is the user interface. I would be very interested in new UI patterns for touchscreens that take good advantage of the interaction between user actions and visual feedback.
It is completely baffling why software synths, routing software, etc. feel the need to make themselves look like a patch board with rotary knobs and the like. Can't we come up with a set of conventions for software-native synth UIs? Sure, the physicality of real knobs and sliders is nice on real hardware, but on a screen - especially non-touch systems - there's no way it's the best option.
> but on a screen - especially non-touch systems - there's no way it's the best option
There is a good option - but the current dumb UX fad rejects it wholeheartedly. That option is the keyboard. Give everything a keyboard shortcut and let the user learn them (if only by providing a visual overlay listing the shortcuts straight on the interface).
If something is a tool - not a shiny cloud SaaS toy intent on growing an audience to drive the hockey-stick graph straight into an acquihire - users will learn keyboard shortcuts.
I always let my users click on a knob to directly enter a value with the keyboard. But keyboard shortcuts to increase / decrease a value -- that's just a pain. Mouse drag / mouse wheel is far faster for switching between coarse and fine adjustment.
There are software synths that don't follow the "virtual patch panel" concept: AudioMulch and SuperCollider come to mind.
AudioMulch has a graphical interface. Signals are processed by constructing a graph in which each line represents an audio channel that can be routed between blocks performing specific functions.
SuperCollider is more like a programming language; you can think of it as a DSL for audio generation.
Both of them can interface with hardware, of course.
I think there will always be a strong demand for tactile hardware interfaces, regardless of what people come up with on a laptop application. The kinaesthetics of making sound by manipulating something physical is so satisfying and intimate.
Knobs are still the best option I've come up with: they act like sliders with the mouse, yet they are smaller, so you can have more density. You can also draw a longer track for a given control size (2πr > 2r). They often get assigned to a knob on a MIDI controller anyway, so it makes sense.
My knobs look like software, I refuse to do screw heads, metal finishes, etc.
I haven't done much touch screen software, so no comments there.
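In case it helps illustrate the drag behaviour described above, here's a minimal sketch of a knob that acts like a slider: a vertical mouse drag maps to a normalized value, with a modifier key for fine adjustment. The class and method names are hypothetical, not from any particular toolkit.

```python
# Sketch: a "knob that behaves like a slider". Vertical drag maps to a
# normalized 0..1 value; a 'fine' flag slows the mapping for precise tweaks.

class Knob:
    def __init__(self, value=0.5, coarse_px=200.0, fine_px=2000.0):
        self.value = value          # normalized 0..1
        self.coarse_px = coarse_px  # pixels of drag for the full range (coarse)
        self.fine_px = fine_px      # pixels of drag for the full range (fine)
        self._drag_start_y = None
        self._drag_start_value = None

    def mouse_down(self, y):
        self._drag_start_y = y
        self._drag_start_value = self.value

    def mouse_drag(self, y, fine=False):
        if self._drag_start_y is None:
            return
        pixels = self.fine_px if fine else self.coarse_px
        # Dragging up increases the value, like most software knobs.
        delta = (self._drag_start_y - y) / pixels
        self.value = min(1.0, max(0.0, self._drag_start_value + delta))

    def mouse_wheel(self, steps, fine=False):
        step = 0.002 if fine else 0.02
        self.value = min(1.0, max(0.0, self.value + steps * step))


k = Knob()
k.mouse_down(y=300)
k.mouse_drag(y=250)              # coarse: 50 px up from the start -> 0.75
k.mouse_drag(y=250, fine=True)   # fine: same drag from the same start -> 0.525
print(round(k.value, 3))
```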
Ableton Live in conjunction with the Push is a fantastic, very tactile, very musical solution to this. The Push is a very well-designed hardware instrument that, because of its tight integration with Ableton, has much of the flexibility of software. It's a really interesting approach and very productive from a music and creativity standpoint. This YouTube video gives a good sense of it:
I believe quite strongly that for most people, utilizing hardware of some kind (pad controllers, keyboards, etc.) is essential for making music. Music is generally made by playing instruments. It is the small errors, the groove, the emphasis, the spirit/heart/soul of playing an instrument, that makes music feel alive and interesting and vital. If you're just clicking around with a mouse, you're programming it, and programmed music sounds programmed. Overly precise, inhuman, etc.
In Glenn Gould's debut on US television, conductor Leonard Bernstein talks about - and then demonstrates - the difference between rote, mechanical playing of a score (i.e. "programmed music") vs. a performer's interpretation of it (i.e. the heart & soul that I'm talking about). The discussion starts around here:
You probably mean software equipment, because there is lots of digital music equipment with good UX. See for example Mutable Instruments' Tides or Rings modules, 4ms Spectral Multiband Resonator etc. - no menus, no displays, one knob one function (well there are sometimes 'special modes' and easter eggs of course).
For those boxes, 'digital' is just an implementation detail. The user interface, and the interface towards other equipment, is still completely analog. I agree that they look very clean and usable.
I don't think there's anything wrong with displays. They offer immediate and quick insight into internal state and operation. The issues appear when you introduce modal workflow, with some features only available in some modes.
Worst offenders in category of hardware equipment are probably guitar multi-effects and synthesizer workstations. They cram so much functionality with so little thought given to flow and usability.
I would absolutely love to one day own an OP-1. Teenage Engineering does amazing work. All of their stuff has a certain tactility to it. They definitely put a lot of thought and love into their work.
For those that can't justify spending about $1k for one, their Pocket Operator series is pretty fantastic.
I also recently bought myself a Novation Circuit, which honestly has the feeling of Teenage Engineering's interface design (minus a lot of the whimsy). For about $300, it's a fantastic little groovebox that I'll be getting a LOT of mileage out of.
I'm sitting in a room full of synths from 1971 through 2017 and I still think my OP1 is one of the most delightful, amazing things in here. You can find them used for around $600.
Yeah, I'm a fan of scrounging around for cheap little used modules like the Mopho, MS2000, MS-20 mini, Mother-32, mini/monologue, the Roland and Yamaha recreations like the JP-08, the Bass Station 2 and various Novas from Novation, and the E-mu Proteus 2k. Roland has actually released dozens of rackmounts, some quite decent. Also kits: Preen, Audiothingies, Mutable Instruments etc. It's astounding what you can buy for under $300 in many cases.
I do wish for a cheap knobby controller like the M-Audio Axiom or Novation SL MkII that has reliable faders, pots, and encoders. Those are the one instrument product I would say you should buy new, with the extended warranty from Sam Ash or Guitar Center. When the pots start shooting out lots of spurious MIDI CCs, DeoxIT isn't going to save you. DeoxIT is great for making door hinges stop squeaking, though.
And importantly, it's a way to get into synths without breaking the bank on "fixed layout" hardware, even if it's supposed to be awesome hardware. If you're on a $10/week allowance, projects like these are fantastic.
I concur! As someone who has spent, probably, close to a quarter of a million bucks on synth/audio hardware over the decades, I'm extremely keen on VCV Rack because it lets me design my "perfect" Eurorack setup without any investment, other than the time to get it set up - and more to the point, it lets me experiment with my setup, refining it, without buying a single cable. So once I get my stack configured the way I like it, I'm quite likely to go ahead and switch over to reality and buy all the modules I've used, so I've got it in the real world too.
A truly amazing bit of software, and it has also been a very rewarding experience to be an early adopter of this suite, because the progress made so far by the developers is nothing short of astonishing - not to mention the uptake and adoption by a lot of the more boutique/unique modular designers out there... many of whom seem to have taken VCV Rack as their new standard test bench for working out ideas before hardware prototyping - meaning we have a plethora of modules to play with!
Anyone interested in synthesisers NEEDS to put a weekend or two worth of effort into learning VCV Rack and creating their own stacks. It's really a very, very valuable bit of software - and it's free! And it's open source! So you interested-in-programming-synthesizer-algorithm folks can get started with very, very little fuss already.
I wish I understood even a fraction of the stuff that's going on with software synths.
Every couple of years I get this fancy idea of "I'd really like to create some sounds/music", which usually ends up with me aimlessly manipulating digital knobs and switches on some massive UI, creating weird sounds with no real structure and changing all the default settings. The other option being something like FruityLoops, but doing anything in there kinda feels like making "Lego music" due to just slapping together samples.
The closest I ever got to creating something like "music" was actually through a game, FRACT OSC, which I think is still really cool. But I guess for most professionals the toolset in there is just very basic and it also suffers from the same "Lego music" feel of just slapping together samples.
Then there's stuff like SoundStage VR, which looks quite amazing to tinker around in, but due to a lack of VR setup, it's not something I had a chance to play around with yet. But the basic idea of making music in VR really appeals to me, it's just so cyberpunk and feels like it could be way more approachable, at least compared to something like software synths with their intimidating UI.
A lot of synths are painfully unintuitive because they try (and fail) to replicate hardware synth interfaces. Try Helm (free and open source) or Serum (not free) and you might have more luck.
Here's a rough crash course from someone who also struggled to get started:
1: You start out with a wave (sine, saw, pulse, etc). No synth starts out with a pure form of these waves. They all add a little color. Most synths have 2-5 oscillators (they produce the waves), often with each having different controls.
2: The filter constrains the sound. So, for example, if you wanted to make a bassy instrument, you'd filter out most of the high frequencies. The Attack, Decay, Sustain, and Release (ADSR) envelope settings shape the length and volume of the sound over time. This is what you use to turn waves into plucks, kicks, claps, strings, and anything else you can think of.
3: Most synths have at least 2 LFOs, or low frequency oscillators. You can connect these to different controls across the synth and change the timbre in weird, sometimes unpredictable and amazing ways. With Helm in particular, you click the little helmet by the LFO and drag the little bars on the different controls to change how much the LFO affects it. You'll see a little green bar pop out and move with the LFO. Serum has a separate tab with LFOs and routing to synth controls.
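To make that chain concrete, here's a rough sketch of steps 1-3 in code: an oscillator into a low-pass filter into an ADSR volume envelope, with an LFO sweeping the filter cutoff. It assumes numpy, uses a naive sawtooth and a simple one-pole filter, and is only meant to illustrate the signal flow, not to sound like any particular synth.

```python
# Subtractive signal chain: oscillator -> low-pass filter -> ADSR envelope,
# with a 2 Hz LFO modulating the filter cutoff. numpy only; values arbitrary.
import numpy as np

SR = 44100  # sample rate

def saw(freq, dur):
    t = np.arange(int(SR * dur)) / SR
    return 2.0 * (t * freq % 1.0) - 1.0          # naive sawtooth, -1..1

def adsr(n, a=0.01, d=0.1, s=0.6, r=0.2):
    """Attack/Decay/Sustain/Release envelope; segment lengths in seconds."""
    a_n, d_n, r_n = int(a * SR), int(d * SR), int(r * SR)
    s_n = max(n - a_n - d_n - r_n, 0)
    return np.concatenate([
        np.linspace(0, 1, a_n),        # attack: ramp up
        np.linspace(1, s, d_n),        # decay: fall to sustain level
        np.full(s_n, s),               # sustain: hold
        np.linspace(s, 0, r_n),        # release: fade out
    ])[:n]

def one_pole_lowpass(x, cutoff):
    """One-pole low-pass; cutoff is an array (Hz) so an LFO can sweep it."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / SR)
    y, z = np.zeros_like(x), 0.0
    for i in range(len(x)):
        z += alpha[i] * (x[i] - z)
        y[i] = z
    return y

dur = 1.0
n = int(SR * dur)
osc = saw(110.0, dur)                                           # step 1: oscillator
lfo = 0.5 + 0.5 * np.sin(2 * np.pi * 2.0 * np.arange(n) / SR)   # step 3: 2 Hz LFO
cutoff = 300.0 + 1500.0 * lfo                                   # LFO sweeps the cutoff
voice = one_pole_lowpass(osc, cutoff)                           # step 2: filter
voice *= adsr(n)                                                # step 2: ADSR envelope
print(voice.shape, float(abs(voice).max()))
```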
For the benefit of others, I wanted to note that what you're describing is subtractive synthesis, where you shape sounds by subtracting (filtering) components from the initial waveform. Additive synthesis is also a technique, where you add many sine waves together (this is difficult/expensive to do in hardware and so doesn't seem to be widely used). There is also Frequency Modulation (FM) synthesis, where the frequency of one waveform (the carrier) is modulated by another waveform (the modulator) at audio rates. FM synthesizers were popular in the 80s, such as the Yamaha DX7.
In my opinion, though, subtractive synthesis is the easiest to "get".
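For contrast with the subtractive chain sketched above, a minimal two-operator FM example looks something like this. (Strictly speaking this is phase modulation, which is what the DX series actually implements; it assumes numpy, and the ratio/index values are arbitrary.)

```python
# Two-operator FM sketch: a carrier sine whose phase is modulated by a
# modulator at a related frequency. Higher index -> brighter, more harmonics.
import numpy as np

SR = 44100
t = np.arange(int(SR * 1.0)) / SR

carrier_hz = 220.0
ratio = 2.0     # modulator frequency as a ratio of the carrier
index = 3.0     # modulation index

modulator = np.sin(2 * np.pi * carrier_hz * ratio * t)
fm_voice = np.sin(2 * np.pi * carrier_hz * t + index * modulator)
print(fm_voice.shape)
```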
Starting with a relatively simple subtractive software synth that uses primitive "analog" waveforms -- like, say, Tal-Uno-62 (https://tal-software.com/products/tal-u-no-62) -- is probably the best way to play around with learning synthesis. It's a lot harder to create a "bad" sound in a simple subtractive synth, and in something like Tal-Uno-62, there aren't a whole lot of possibly confusing menu options and buttons.
Samples and wavetable synthesis also make it relatively easy to get a "good" sound quickly. Architecture-wise, most are structured like a subtractive synthesizer; you just replace the primitive waveform with either a sample or a bank of digital waveforms.
FM and additive (and spectral and granular) are neat techniques, but IMHO way more finicky to program. (They are also techniques where the software interface is quite welcome! Programming a DX sound in the (free and open source) Dexed VST is quite a bit easier to do than it is on a Yamaha DX7's two-line LCD screen and buttons.)
For the parent poster: I understand that quite a few professionals, particularly in loop / beat oriented music genres (such as hip hop / trap, EDM, and reggaeton) use FL Studio quite a bit! The nice thing about FL Studio is that it has a nice intuitive layout for beat-making. It's what I'd personally recommend for someone just wanting to mess with synthesizer beats. (My personal favorite is Reaper but I don't tend to write beat oriented stuff. :) )
> In my opinion, though, subtractive synthesis is the easiest to "get".
Interesting, for me additive is the most intuitive. I find it easier to visualize sums of sine waves than subtracting harmonics from a more complex waveform (such as a sawtooth), particularly when considering the use of notch or bandpass filters. But that may have a lot to do with my education, which featured a heavy dose of Fourier Transforms and associated techniques.
I think the biggest "issue" with additive is that it can be complex to program at its most low-level implementation. I'm thinking of stuff like the K5000 where you could not only set the initial volume of a partial, but every partial had an envelope to control volume. Extremely powerful, but on more complex patches that's a lot of envelopes to think about.
On the other hand, "simple" additive with relatively few partials is actually pretty easy too. (The Hammond B3 and other tonewheel organs are essentially additive synthesizers.) A lot of software (your Alchemys and Razors) is built on a simplified version of additive too, or uses additive techniques to perform "resynthesis", or has other tools that "simplify" some of the additive editing.
Intuitive way to understand the concepts and terms behind (subtractive) synthesis: Your mouth and vocal tract!
Open your mouth in a 'neutral' position and make a sound. That's an oscillator. Tensing your throat will change the waveform (on a synth that would be square, pulse, sawtooth).
Purse and widen your lips, make a 'wow' sound: that's a low-pass filter (which cuts off higher sound frequencies).
Make a 'shhh' sound: white noise. Purse your lips, make it sound like wind blowing: band-pass filter (which only lets certain frequencies through).
Turn that wind noise into a whistle: now you're adding filter resonance. As that noise becomes a pure tone, the filter becomes self-oscillating.
Add some vibrato to your sound (like an opera singer), or change the pitch (like a siren), and you're adding modulation.
Doing this in a room, or large hall: now you're changing the reverb (reverberation). In a canyon with echoes: delay.
Often, this is also a useful method to work out how to program a subtle or complicated patch (a sound preset on a synth, so named because early modular synths used telephone patch cables to make the necessary connections).
Aside: I've often wondered what a human choir, making 'synthy' sounds, would be able to do with something like Jarre's classic Oxygène IV...
But how to end up with actual music/a melody doing this? Do you manipulate the sound live? Do you record different samples, made that way, and stick them together with a sequencer? Or do you switch between different, predefined, patches on the fly to kinda "play" it like an instrument?
Sorry if these questions are basic, but this has always fascinated me with synth music: Creating beautiful melodies out of something that starts out pretty much just as "noise", it's kinda like magic.
There are tons of different workflows to create electronic music. You can record sounds (and different instruments) linearly in layers. You can play a single keyboard set up in some clever way that allows you to trigger and control multiple virtual instruments at once (often with the help of arpeggiators or something like KARMA). You can use a looper. You can set up a step sequencer that plays notes in a loop while you manipulate both sounds and melody in some way in real-time. You can set up some gear to procedurally trigger sounds, which is what modular racks are often used for (look up "krell patch"). Or you can do several of these things at once, possibly in a group.
IMO, the "best" way to learn this stuff is to try and pick some specific synth, usually a subtractive synth with normal tools/controls and then try and create a bunch of "real"-ish sounds on it.
For instance, the Roland Gaia is a good example... almost all its controls are knobs or sliders and there isn't a whole lot hidden in it. But there are similar software versions. And a synth like that is just a bunch of modules more or less hard-wired together.
If you reset it and turn as much off as you can, it is much easier to get some insight into how these things work.
When I was learning how to program a synth, I did a little research on how to create a snare drum sound with a subtractive synth, and when I had that, I worked on creating a tom-tom, a bass drum, etc. Then I tried the same with a "horn" sound or a "string" sound.
That won't get you very far, but it's a lot more solid of an approach than just noodling. I like to mess around with stuff too, but it's easier to learn when I'm trying to do something intentional.
"I wish I would understand even a fraction of the stuff that's going on with software synths." -- May I suggest trying syntorial[https://www.syntorial.com/]
I just started, and Syntorial is amazing! The teacher is very talented, and the app you use to learn (which he built with funding from a Kickstarter) is very good. The demo comes with 22 lessons, and I'm already a lot more confident creating and re-creating sounds by ear in my DAW.
I plan on purchasing it but I'm waiting for the "hey you haven't signed up yet so here's a discount" email to come because their marketing strategy is very familiar.
Also, go ahead and try Propellerhead Reason (free to play with; you can save, but you can't open saved files). It's the best one on the UX side. However, I have to admit I've been using it for 10+ years now.
This frustration is why Korg sold so many Minilogues last year.
Check out this Minilogue review playlist by Marc Doty of Automatic Gainsay. You can see what's happening on the oscilloscope while he's explaining things.
Now I'm looking at Ableton Live wondering: where's an instrument/VST as simple as that thing? There are endless "instruments" with weird knobs, and then there are effects and whatever. Is Live the wrong tool if I just want to create a nice sound, record something, and then put some tracks together?
I've been producing digital music since 2001. It took me three years of part-time tinkering to get something produced that remotely sounded like a song. It takes time to learn.
FruityLoops is a great start to learn the basics of audio sequencing such as: tracks, beats, bars, measures, tempo, basic mixing and effects.
Once you've mastered FruityLoops, I'd recommend switching to Cubase or Ableton Live. There are a lot of VST (virtual instrument) synths available for free which can offer a lot of unique tones (not pre-sampled audio).
There are a lot of YouTube tutorials available now for all these tools, which can really help. Reddit also has a few good sized subreddits for digital music production and is a good place to get help or advice.
I've been going through Syntorial[1] which is an interactive synth tutorial. I'm about half-way through and I've found it very useful for understanding not just what each part of a synthesizer does to the sound, but how it changes the sound. Later on, as you learn multiple functions of the synth, you practice combining them to recreate sounds.
The software itself is a bit pricey and only runs on Mac or Windows but I've found it very helpful for getting an intuitive sense of what the different synth options actually change.
I get the same fancy idea every now and then. If anyone has been in the same place and gone on to really explore making synth music, I would love to hear it!
I think part of it is that you're conflating writing music with sound design. They often go hand-in-hand but are two different skills. Kind of like programming vs (video) game design. For example, you don't have to design a flute in order to play one. Unlike synthesizers, most instruments can't change (much of) their sound. Organs are probably the most versatile in that regard, but they achieve it basically through brute force. (Traditional instruments do still have quite a bit of flexibility, but it's nothing when compared to a synthesizer; e.g. a violin is very versatile, but you can't suddenly make it sound like a flute.)
For learning sound design, first pick a type of synthesis (as others have talked about). Most people start with subtractive synthesis and it is by far the most popular. So much so that bits of it overlap into other tools that focus on different forms of synthesis (e.g. lowpass filters as part of a wavetable synth). As for plugins, you can't go wrong with synth1. It's free, does more than what you'll need to start (you can just ignore knobs until you understand them), and has clear labels. Start with an initial patch (should be a basic sine wave IIRC) and build up from there.
Learn what the different components actually do to the waveform. You can't think of it like "this makes the sound more like this" since how it changes the sound depends on the rest of the parameters. This is probably why you end up "aimlessly manipulating ... knobs and switches".
For actually writing music (not just tweaking your bloop sounds), Ableton has a fantastic interactive guide [0] that takes you through everything you really need to make music. Everything else builds on top of that and is arguably unnecessary. It also includes great examples. There is also practically infinite information out there around writing music. Go get some.
While focusing on writing actual music: don't be afraid of presets. There's nothing wrong with using them, especially if you are just learning. Nothing in music is 100% original. At the end of the day, all that matters is that it sounds good (and you're not blatantly copying others' work). You can start to use your own sounds once you learn more about actual synthesis. As you start to learn more synthesis, try finding a preset that is close to the sound you want and then tweak it to your liking.
As far as DAWs, don't assume there is "one perfect DAW" for you. FL Studio might just not suit you. While all major DAWs should be able to achieve the same things, they often go about it differently. Depending heavily on how you want to work, some will be harder for you. You'll likely end up learning more than one eventually, so try out a few. Once you find one that starts to click, STOP SWITCHING. Stick to that one as long as you possibly can.
REAPER [2] is a great option for beginners since it has very generous pricing and trial. However, it's not just a "beginner DAW" -- it is fully capable professional software.
Addendum: I mentioned learning what knobs do to the waveform, but I never explained how. There are plenty of options, but the most basic way that you can set that up right now for free is to grab REAPER, synth1, and some free oscilloscope plugin. This very basic setup will take you extremely far in learning synthesis.
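If you'd rather see the effect offline before wiring up an oscilloscope plugin, a quick plot works too. Here's a sketch that renders a raw sawtooth against low-pass-filtered copies at a few cutoffs; it assumes numpy and matplotlib, and the one-pole filter is only a stand-in for a real synth's filter section.

```python
# "See what the knob does": plot a raw sawtooth vs. low-pass-filtered copies.
import numpy as np
import matplotlib.pyplot as plt

SR = 44100
t = np.arange(int(SR * 0.02)) / SR          # 20 ms, a few cycles at 220 Hz
saw = 2.0 * (t * 220.0 % 1.0) - 1.0

def one_pole_lowpass(x, cutoff_hz):
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / SR)
    y, z = np.zeros_like(x), 0.0
    for i, s in enumerate(x):
        z += alpha * (s - z)
        y[i] = z
    return y

plt.plot(t, saw, label="raw saw")
for cutoff in (500, 2000, 8000):
    plt.plot(t, one_pole_lowpass(saw, cutoff), label=f"low-pass {cutoff} Hz")
plt.xlabel("time (s)")
plt.legend()
plt.show()
```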
> I think part of it is that you're conflating writing music with sound design.
That's a distinction I'd totally missed until now. As you pointed out, I've mostly thought about synths as if they were traditional musical instruments, which isn't really the case, and that added a lot to my confusion about "How do I make something other than noise with this?".
Thank you for, yet another, very helpful comment!
I'm literally blown away by the sheer number, and quality, of useful resources and information being shared here. Kudos to the HN community!
That seems an odd complaint? Any music made this year is 2018 music, but what people might not know is that you can faithfully reproduce the sounds of analog synths that largely defined vast swathes of the music landscape of the 70s and 80s.
(If you've been active in digital music for a while it's easy to forget how little the general public knows about stuff like this, and a title that goes "did you know you can do X with Y?" is better than a dry factual title)
i'm with timc3 on this one. at this point every dude with a beard and a modular suitcase has a standing gig in the park making gurgly noises for people, and every gamer who watched deadmau5's twitch has a bunch of mutable instruments modules. it's folk music for the 2010s. and the music of the 70s wasn't, for the most part (yes we can point to the exceptions but that misses the point), bleeps and bloops, it was keith wakeman in leather pants. by the 80s these were collecting dust (so that we could buy them for cheap in the 90s!)
So, the comparison on YouTube is between VCV Rack and the Mutable Instruments Elements, Rings, and Clouds. Mutable's offerings are presented in the Eurorack format, so they have much more 'immediacy' (physical knobs, etc.). But - they're all 'virtual' instruments! We're essentially just reading a comparison between two developers' DSP code.
There were no digital modulars in the 70s and 80s, and certainly nothing like the granular synthesis or physical modelling available today. A more meaningful comparison in that context would've been against something like a Roland System 100M or Moog Modular: instruments with real analog electronics.
Not just the same code, but in many cases, the same developers! So there is a vested interest in making things match as closely as possible - it sells hardware, after all!
>Once you’ve created an account and installed it, you can start adding dozens of plug-ins, including various synthesizers, gates, reverbs, compressors, sequencers, keyboards, etc.
Yeah, you can easily avoid having to make an account, if you just download and compile the plugins yourself. The account just makes it easier for those of us who don't want to fuss with all the compiling and copying-into-the-right-place of the plugins.
I do both - I have all the sources for all the public modules I can find (and there are usually about two more added, on average, every day by the community... WOW!) and I build VCV Rack from sources every week or so, just to see the development pace (and what a pace it is!), but I also just go through the web page, use an account, install the free plugins through the Plugin Manager feature, and so on.
It helps to compare the two while developing my own plugins.
And I have to say that this app is amazing! I've rekindled my love for modular synthesis (I've been into synths since the '70s) because of this app, and it's all I'm taking to my band's jam sessions these days - replacing a suite of digital/DSP synths that I was previously relying on.
Bingo! VCV Rack is built for musicians, not software developers, many of whom do not know how to unzip a downloaded plugin. This is perfectly fine, because musicians can do many things that I can't. So to make it usable for the masses, an account system is best, where it takes two buttons to add and install a plugin. The minority who don't like giving their email address in exchange for this generously-made software can still look at the manifest JSON files to get the binary download link, or compile it themselves.
Well, software has been able to recreate the analog sound since 1997, when I started working seriously with computer music. The problem back then was that a faithful emulation needed quite a lot of processing power - back then I had a 200 MHz Pentium; nowadays I have a 3.4 GHz quad core. Sound-quality wise, modern soft synths can dance around analog synths in depth, quality, and versatility. Not to mention that software updates can practically fundamentally change the synthesizer or offer amazing new features. The edge for vintage analog synths was that they offer more hands-on control, but with the wealth of MIDI controllers that come with a wide array of presets supporting many software synths, that's still not much of a reason to pick a hardware synth. Hence why software has practically monopolised audio synthesis lately. Thor in Reason can easily compete with my Alesis Andromeda A6, the most powerful non-modular analog synth to ever exist. When it comes to modular synths, software reigns supreme, while hardware users have to battle the terror of the spaghetti monster. Nowadays, going down the hardware synth lane is purely a matter of personal taste.
> The edge for vintage analog synths was that they offer more hands-on control, but with the wealth of MIDI controllers that come with a wide array of presets supporting many software synths, that's still not much of a reason to pick a hardware synth.
There are at least two fundamentally flawed assumptions behind your post.
1. Electronic music isn't just about endless options and capabilities. Endless options give you endless search space to navigate.
2. The idea that generic computers are always better than dedicated hardware is rather archaic, because with modern technology you can integrate powerful computers into dedicated hardware very cheaply.
There is a huge difference between using a fully integrated, thought-out hardware synthesizer versus using a MIDI controller mapped to some plugin in a DAW. There is a reason thousands of musicians still buy analog gear, VA synths, and fully digital FX pedals.
Anyone got a good "full stack" setup for something like this (from midi controller to synth/sequencer/daw software)? I'm a bit overwhelmed by all the modularity and combinations of virtual synthesizers. I'm okay with a "toy setup" as long as it's possible to build upon (i.e. preferrably not something entirely different from what you'd use after you pass the acolyte threshold).
A couple in there might qualify for your "full stack" requirement - or are at least pretty close.
I'd be happy to hear if you found something - for my needs I've got a 'full stack' consisting of a 4-voice poly VCO/VCA/VCF standard, a little sequencer, and a drum synth - but it's not really in a condition where I could share it, since I built some of the modules with special configuration from sources... However, I've wanted to sit down and create a stack from the standard modules for a while, so... let's see who gets there first! :)
Pure Data is a visual programming language; you could essentially build native / VST / mobile apps with it and go more low level eventually.
VCV Rack is more of a Eurorack hardware simulator, so to speak - a musical instrument. Good for a reality check before going hardware, but you don't create apps with it.
PD does more stuff. The dataflow model takes some getting used to and tends to linger in the background even if you get down to the libpd level.
VCV/Eurorack is a simplified rather dated synthesis model for people who don't want to go that low. It's ideal for people who just want to noodle around plugging and unplugging things and turning knobs.
I've been sketching ideas for a next-gen synthesis language which is more powerful than PD/sc, keeps some elements of modular thinking, but adds some useful creative extensions.
If I'm really lucky I'll find the time to do something with the ideas in the next few years.
Ehh, I don't really think that's a good comparison. Pure Data is a visual programming environment/toolkit that gives you a set of low level building blocks from which you can build whatever you want. You can extend it by taking advantage of open source libraries that other people have made, or even code your own in C.
VCV Rack is open source, but as an end user you're not dealing with the low-level stuff out in the open the same way you would in PD. It's a completed object rather than a set of Legos.
Having said that, you're right about the GUI. The PD GUI is... primitive, to say the least.
I have been quite content with alsa-modular-synth. It supports loading LADSPA plugins as modules, and with JACK integration it can be connected to other LV2 plugins or effects.
VCV Rack directly conforms to the visual patterns and usage patterns of Eurorack hardware, so I'd guess that is the main reason that it's so popular relative to those other options. PD is also a bit harder to get going with out of the box, even with packages like Automatonism, impressive though they are.
Reaktor 6 had a similar thing happen, lots of people who hadn't used it before were brought into that ecosystem by their "blocks", which also kinda emulate Eurorack.
The Fundamental modules included with Rack and all other VCV-branded modules are designed by Grayscale, also responsible for the panel design of a dozen Eurorack manufacturers on the market. http://grayscale.info/design-services/
VST/AU support will be added with the “VCV Bridge” module in Rack v0.6.0, scheduled for release in late January 2018.
Rack itself won't be a VST/AU plugin because of the major limitations of those formats. Instead, you'll be able to connect Rack to your main DAW by adding the lightweight VST/AU Bridge plugin to as many DAW channels/tracks as you like, each of which connects to a corresponding Bridge module in Rack. This cross-application method allows you to break out of the mindset of mixing linearly in your DAW and instead use nonlinear methods like cross-modulation between tracks, enveloping audio to control filters and sidechain compressors, etc. The Bridge protocol will be fault tolerant, and it won't matter in which order you open the applications.