Let's Write a Reverb (2021) (signalsmith-audio.co.uk)
367 points by notagoodidea 78 days ago | 62 comments



Signalsmith Audio always has great posts. Their article on pitch-shifting is also worth a look! I've posted it here before:

https://news.ycombinator.com/item?id=36706588

Speaking of reverb, I previously implemented a Dattorro reverb (https://ccrma.stanford.edu/~dattorro/EffectDesignPart1.pdf) in Rust.

https://github.com/chaosprint/dattorro-vst-rs

It's based on the Glicol source code here:

https://github.com/chaosprint/glicol/blob/main/rs/synth/src/...

You can play with it here:

https://glicol.org/demo#handmadedattorroreverb

Also, you don't want to miss Tom Erbe's reverb patches in Pure Data, demonstrating several types of reverb (Schroeder, Moorer, Moore, Gerzon, Dattorro):

https://tre.ucsd.edu/wordpress/?p=625


Whoah - the author (Signalsmith) here! This was a fun way to wake up, and I'm happy to answer any questions.


Brilliant article! Thanks!

Are you also "Geraint"? (The home page https://signalsmith-audio.co.uk/ says "we" but the Geraint page https://geraintluff.github.io/jsfx/ says "I"...?)

Geraint's plugins are a great collection of excellent (and free) effects for Reaper, with a superb GUI (which isn't easy to do in Reaper/JSFX). So thanks also for this! ;-)


Yes, I'm Geraint. :D I'm the tech side of Signalsmith Audio, and my partner is the Business Brain. The JSFX plugins are mostly from before we made a proper company (back when Signalsmith was just my username) - I'm glad you're enjoying them!


Do you do anything with hardware/microcontrollers? I dabble in the Daisy/Raspberry Pi world for synthesis and effects, but I've always just used off-the-shelf reverb in the chain. Things like the Teensy audio library are fine but not especially great (specifically for things like shimmer reverb).


I'm not really a hardware person, although at this point I've done a couple of guitar-pedal projects. My favourite is to write OS-independent C++ DSP classes, and work with a team/client who handles the gnarly build/signatures/UI/embedded stuff.


That makes sense.

> I'm not really a hardware person

Neither am I; most of the fun is in the software anyway. The rest is just some amateurish solder work and amassing an array of components I may or may not use.


Ah, excellent! Congrats again!


Thanks for the enjoyable article!

Possibly off-topic, but I am a coder with a hobbyist interest in the DSP space. I have never really had a "penny drop" moment when it comes to starting from nothing and generating sound.

Even generating a simple sine wave seems like either a big chore or a completely abstract concept (depending on the tools/libraries/environment), and I have not been able to find a middle ground where I feel like I am learning but not getting completely lost in trigonometry or calculus. I am not sure if I'm not using the right tools or if I need to start even simpler and build up. GNU Radio comes close to scratching an itch for me, building signal-processing pipelines and giving a bit of intuition, but ultimately it's pretty easy to get lost in that as well (and it seems mostly focused on actual radio use-cases).

Do you have any advice for someone looking to build more familiarity or intuition on this front?


First, join a good community! If you're on Discord, TAP is great: https://discord.gg/aBghGGcfYs - it's beginner-friendly while also having some heavy-hitters in there, and it's generally wholesome. You're not the only hobbyist learner, and it's important to have a place you can ask questions without feeling awkward.

If you swap language/environment later, you'll carry your understanding/intuition with you, so you don't have to start with C++ if that's not your bag (even though it's still the industry standard). There are audio-specific languages with JIT runtimes (which can be used in Logic/Reaper/GarageBand/etc.), Rust/JS frameworks, etc. so find the one that feels good to tinker with, and keep that momentum/motivation going. :)


Thank you for the advice!


You could start by making sounds with a software synth or Pure Data. There will be no issues with generating sine waves, and there are a lot of tutorials for this.

If you aim for performance, you will have to use SIMD and a language that supports it (for example, C or Rust).
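
For a first "penny drop" moment you don't even need that, though. Here's a minimal C++ sketch (no libraries; the sample rate, frequency, and level are arbitrary choices) that fills a buffer with a sine wave and dumps it as raw PCM:

    #include <cmath>
    #include <cstdint>
    #include <fstream>
    #include <vector>

    int main() {
        const double sampleRate = 48000.0; // arbitrary
        const double freq = 440.0;         // A4
        const double pi = 3.14159265358979323846;

        std::vector<int16_t> samples(size_t(sampleRate * 2.0)); // 2 seconds
        for (size_t i = 0; i < samples.size(); ++i) {
            double phase = 2.0 * pi * freq * double(i) / sampleRate;
            samples[i] = int16_t(0.25 * 32767.0 * std::sin(phase)); // ~ -12dB
        }

        // Raw 16-bit mono PCM. Play with: ffplay -f s16le -ar 48000 -ac 1 sine.raw
        std::ofstream out("sine.raw", std::ios::binary);
        out.write(reinterpret_cast<const char *>(samples.data()),
                  std::streamsize(samples.size() * sizeof(int16_t)));
    }

Once that makes a tone, everything else (envelopes, filters, reverbs) is more arithmetic on the same buffer.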


Thanks for this! Your Reaper plugins were a great inspiration for me to learn JSFX and make my own effects. I was pleasantly surprised to see you're still sharing things with the community when I stumbled onto your talk about pitch-shifting at ADC :) Glad to have found a whole blog to explore! Thanks again :)


Thanks for writing so many blog posts. The ADC talk was also great. Have you considered doing some demos with WASM on the web? Also, have you considered porting these packages to Rust?


I haven't done anything with Rust yet, but it's on my free-time wishlist.

I'm working on WASM demos! I've been playing with WASM builds of my plugins, and it's great for prototyping/sharing, but it could definitely be set up better for demos/teaching: https://signalsmith-audio.co.uk/tmp/web-audio/?url=/tmp/basi...


FWIW, this is my implementation of this reverb in Rust: https://github.com/cornedriesprong/cp3-dsp-rs/blob/main/src/...


Not a question but, really cool articles on your page, cheers :)


Thanks for this, Geraint! I've been trying to get a better understanding in programming for DSP, and blog posts like this are a big help with that.

I have subscribed to your blog.


Thanks so much for taking the time to write this up clearly. I now have a much better understanding of how reverbs work than I did before. Really appreciate it!


Care to offer feedback on my 8-band EQ implementation for OBS?

https://github.com/phkahler/obs-studio/tree/eq8

It got rejected, but only because they don't want the feature. My implementation does seem different from most, which I think use band-pass filters instead.
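
For context, the band-based design I'm contrasting with is typically one peaking biquad per band, with coefficients straight from the RBJ Audio EQ Cookbook. A sketch of a single band (not my OBS code, just the textbook version; eight of these in series makes the usual 8-band EQ):

    #include <cmath>

    // One peaking-EQ band (RBJ Audio EQ Cookbook coefficients).
    struct PeakingBand {
        double b0 = 1, b1 = 0, b2 = 0, a1 = 0, a2 = 0; // normalised coefficients
        double x1 = 0, x2 = 0, y1 = 0, y2 = 0;         // filter state

        void setup(double sampleRate, double centreHz, double q, double gainDb) {
            const double pi = 3.14159265358979323846;
            double A = std::pow(10.0, gainDb / 40.0);
            double w0 = 2.0 * pi * centreHz / sampleRate;
            double alpha = std::sin(w0) / (2.0 * q);
            double a0 = 1.0 + alpha / A;
            b0 = (1.0 + alpha * A) / a0;
            b1 = -2.0 * std::cos(w0) / a0;
            b2 = (1.0 - alpha * A) / a0;
            a1 = -2.0 * std::cos(w0) / a0;
            a2 = (1.0 - alpha / A) / a0;
        }

        double process(double x) { // Direct Form I
            double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
            x2 = x1; x1 = x;
            y2 = y1; y1 = y;
            return y;
        }
    };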


Nothing to ask, really. Just saying thanks for a fantastic intro to reverb, that I used as one of the sources to implement my own. Hope to see more audio processing educational material.


Do you happen to know the basis of state-of-the-art reverb algorithms such as the one used in the Bricasti? Are there any clues as to how they work, or are they completely proprietary black magic?


I don't know anything about proprietary reverbs, I'm afraid - particularly hardware units! Sometimes you can tell things about a reverb's internals by looking at impulse responses, but I've always had more fun designing something from scratch.


One of my favorite things I've ever found on the internet is the "Reverb Subculture" thread on GearSpace. https://gearspace.com/board/geekzone/380233-reverb-subcultur...

It's a discussion of some of the finer (and coarser) parts of reverb design and includes comments from Casey Dowdell (Bricasti), Sean Costello (Valhalla DSP), Matt from LiquidSonics, Urs Heckmann (u-he), Chris from Airwindows, Stian (Acon Digital) and other top-notch audio DSP gurus. They're not giving away trade secrets but there are fascinating discussions around reverb design, topology, theory, and of course perception.


Easily the most approachable yet complete writeup I've seen on the topic. I've always noticed an aura of esoteric dark magic around writing good algorithmic reverbs; this makes it seem less daunting.

Since the author does a quick comparison between convolution and algorithmic reverbs, I'll mention how I often combine them: a small/medium convolution reverb, plus a long algorithmic reverb. The convolution can perfectly diffuse the signal and it can also give a precise character to the sound, depending on the impulse response. It's great for adding a "body" to a raw sound generator. The algorithmic layer then adds a subtle ambience that can be extra long if desired.


Take a look at the articles on Valhalla's site. They're another goldmine: https://valhalladsp.com/learn/


Came to write this. The guy not only makes the best algorithmic reverb plugins out there and sells them very cheap, he also shares his obsessively deep and broad knowledge on the subject. Big fan.


Very nice tutorial, and extremely nicely done!

It's worth knowing that some of those effects can also be achieved (in a much, much more simplified form, of course) in all modern browsers using the Web Audio API. I created mobbler[0] using it, and I also wrote a small tutorial on how some of the effects can be achieved using simple modules (it might seem too complex at first glance, but you can just look at the pictures)[1].

[0]: https://github.com/Megaemce/mobbler

[1]: https://github.com/Megaemce/mobbler/wiki/Tutorials


Given the number of connections/nodes in my design, I'd be interested to know what the performance would be.

For Web Audio, there's an increasing trend of compiling WASM and running it in an AudioWorkletProcessor, which is maybe 2-3x slower than native. It's actually how I do a lot of my prototyping now, because the Emscripten build times are faster than a full plugin, and I can send it to people without them having to install anything.


Fantastic article from start to finish. Great explanation, great audio samples, and from a first skim the C++ example looks very readable.

> I haven't found any good resources on this particular diffuser design. I found a couple of forum posts and a paragraph from a book

When I read this my first thought was "this is a link to a comment by mystran on KVR, isn't it?" And yep. If you're looking for some obscure DSP knowledge, all Google searches eventually lead to a comment by mystran.


I hang around the KVR DSP forums and indeed, it's always impressive to see mystran answer questions posed by world-class commercial plugin devs who are stumped or stuck somewhere. mystran appears to have one of the best understandings of both DSP theory and the tricks required to implement efficient C++ algorithms.


Haha, yeah - although their comment was only considering the 2D case, same as the book.

You can make a (very efficient!) diffuser from 2-channel rotations, but you have to tune it a bit to get it smooth without having a slow attack. With more channels, it's much easier to get right.
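
For anyone curious, one stage of the 2-channel version is roughly this (a sketch, not the article's code; the delay lengths and angle are exactly the tuning I mentioned):

    #include <cmath>
    #include <vector>

    // One 2-channel rotation-diffuser stage: delay each channel by a
    // different amount, then mix the pair with a rotation matrix. The
    // matrix is orthogonal (energy-preserving), so stages chain freely.
    struct RotationStage {
        std::vector<float> bufA, bufB; // delay lines (lengths must be > 0)
        size_t posA = 0, posB = 0;
        float c, s;

        RotationStage(size_t delayA, size_t delayB, float angle)
            : bufA(delayA, 0.0f), bufB(delayB, 0.0f),
              c(std::cos(angle)), s(std::sin(angle)) {}

        void process(float &a, float &b) {
            float da = bufA[posA], db = bufB[posB]; // read delayed samples
            bufA[posA] = a; bufB[posB] = b;         // write new input
            posA = (posA + 1) % bufA.size();
            posB = (posB + 1) % bufB.size();
            a = c * da + s * db;
            b = -s * da + c * db;
        }
    };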


> throw a random-number generator at all the delay times

Reverb algorithms are indeed random enough that a leading audio plugin brand once released a freeware randomizing reverb[0] with an embedded "submit when it sounds cool" button.

https://www.kvraudio.com/forum/viewtopic.php?t=453229
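
The randomisation really is that blunt. If I'm reading the article right, each channel's delay is picked within its own slice of a range, so no two channels end up nearly identical; something like this sketch (the millisecond range here is invented):

    #include <random>
    #include <vector>

    // One random delay length per channel, each within its own slice of
    // the overall range, so the delays stay spread out.
    std::vector<int> randomDelays(int channels, double sampleRate, unsigned seed) {
        std::mt19937 rng(seed);
        std::vector<int> delays(channels);
        const double lowMs = 20.0, highMs = 40.0; // invented range
        for (int c = 0; c < channels; ++c) {
            double sliceLow = lowMs + (highMs - lowMs) * c / channels;
            double sliceHigh = lowMs + (highMs - lowMs) * (c + 1) / channels;
            std::uniform_real_distribution<double> ms(sliceLow, sliceHigh);
            delays[c] = int(ms(rng) * 0.001 * sampleRate);
        }
        return delays;
    }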


u-he weren't the only ones either:

https://bedroomproducersblog.com/2021/11/17/baby-audio-magic...

And while it's not a one-button random reverb, MCharmVerb also relies heavily on randomization:

https://www.meldaproduction.com/MCharmVerb


I wonder where you find information like this, e.g. how a reverb is made, how those electronic drum sounds from classic drum machines are made, and so on. Is there something like a big catalogue of different tricks and algorithms?


Some stuff is around the web, some is from other people. Particularly for classic kit, someone's usually analysed it before, but it's not always in a good tutorial kind of form. Personally, I did a lot of messing around with audio/programming from my early teens onwards, and built my intuition about how the maths and sound link up from years of tinkering without many reference resources.

A lot of this stuff can be made from relatively simple building-blocks though, and you don't have to copy a previous configuration. My thought process was pretty much exactly as written in the blog-post! I just wanted to make something which didn't require any special tuning skills.


This one (making reverbs), at least, is taught in signal-processing classes in EE courses.


I have a Yamaha FX-500 multi-effects processor from 1989.

Original owner; got it in 1989.

I was fiddling with it recently to try to get a decent reverb. The main algorithms don't sound good; they have not aged well. These algorithms have a delay parameter, but it doesn't produce enough of a separation somehow.

Instead of a reverb-only block you can choose a combined delay with reverb (R->D, D->R or D+R). The delay block takes its own resources, leaving fewer for the reverb, which is simplified. You cannot choose the type of reverb (plate, hall, ...) and there are fewer parameters.

The interesting thing is that with the D->R and R->D, I can get much better sounds.

The D->R can produce a pretty lush/deep long reverb, where the main signal stands out clearly.

(I should mention that I'm using an external analog mixer for mixing the dry signal: the FX-500 is 100% wet. Thus, in general, I can get the best possible sound out of it: the full resolution is allocated to the effect, not to propagating the dry signal.)

Less can be more; a simpler reverb can sound better. A completely separate delay before the reverb can be better than playing games with built-in pre-delay.

Basically, in combining the resources of the delay and reverb into a more complicated reverb effect, Yamaha somehow made a mess.


Is it possible to model reverb using a neural network (e.g. wavenet or LSTMs) for real-time use? Is this what something like Neural DSP is doing under the hood?


The trick is not to use NNs for the DSP itself but to discover the parameters for the DSP. In other words, you hardcode the signal-flow architecture using a common technique like an FDN (feedback delay network), then train a NN to find "good"-sounding parameters, e.g. by comparing against a convolution reverb or recordings.

The thing about reverbs is that they require a lot of state, and nonlinearity is undesirable.
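
To make that concrete: the knobs such a network searches over are just the delay lengths, feedback gain, and mix of something like this 4-channel FDN sketch (fixed Householder feedback matrix; not any particular product's design):

    #include <array>
    #include <vector>

    struct FDN4 {
        std::array<std::vector<float>, 4> lines; // the "lot of state"
        std::array<size_t, 4> pos{};
        float feedbackGain; // < 1; controls decay time

        FDN4(std::array<size_t, 4> lengths, float gain) : feedbackGain(gain) {
            for (int c = 0; c < 4; ++c) lines[c].assign(lengths[c], 0.0f);
        }

        float process(float input) {
            std::array<float, 4> d;
            for (int c = 0; c < 4; ++c) d[c] = lines[c][pos[c]];
            float sum = d[0] + d[1] + d[2] + d[3];
            // Householder matrix H = I - (2/4)*ones: mixes every channel
            // into every other while preserving energy (and staying linear).
            for (int c = 0; c < 4; ++c) {
                float mixed = d[c] - 0.5f * sum;
                lines[c][pos[c]] = input + feedbackGain * mixed;
                pos[c] = (pos[c] + 1) % lines[c].size();
            }
            return 0.25f * sum; // simple output tap
        }
    };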


For reverb I don't see much practical use for NNs, mainly because you can capture a pretty-much-perfect recreation of a real space with an impulse response. No need for thousands or millions of rounds of training a network. For unrealistic reverbs, you have the problem that to get training data you'd have to invent several unrealistic reverb effects to apply to sounds. And once you've made those effects, there's not really any reason to neural-netify them.
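
(That capture works because a room is linear and time-invariant: the impulse response tells you everything, and applying it is just convolution. Naive sketch; real convolution reverbs use partitioned FFTs for speed, but the result is identical:)

    #include <vector>

    // Direct convolution of a dry signal with a measured impulse
    // response. O(N*M): fine for short IRs, too slow for long ones.
    std::vector<float> convolve(const std::vector<float> &dry,
                                const std::vector<float> &ir) {
        std::vector<float> wet(dry.size() + ir.size() - 1, 0.0f);
        for (size_t i = 0; i < dry.size(); ++i)
            for (size_t j = 0; j < ir.size(); ++j)
                wet[i + j] += dry[i] * ir[j];
        return wet;
    }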

For NeuralDSP it's a bit different, because they use NNs to simulate a guitar amp circuit, which is a nonlinear system, so there's no simple way to "capture" the effect the way you can for reverb sims or speaker sims. And while you can make a very accurate model using something like SPICE, that won't run in realtime. With traditional amp modeling you basically take the SPICE version and try to optimize and cheat as much as you can so it can run in realtime, at the cost of accuracy.

So that's NeuralDSP's goal: a system that approximates the amplifier but can also be computed in real time, done using a trained NN instead of a human-optimized variant of the SPICE circuit.

They have a couple of whitepapers on their website, though none of them go deep enough to really give away the secret sauce. But basically, according to them, making an NN model of an amplifier at a fixed setting is fairly simple. Where they had to get novel is adjustable settings/parameters, e.g. turning the drive up or the treble down. Just capturing a few hundred or thousand models at different parameter settings and cross-fading between them doesn't sound realistic, so they had to come up with a larger model architecture that can "learn" those parameter changes.

https://arxiv.org/pdf/2403.08559


It's not that hard; you just collect a lot of data. Much easier with a robot turning the knobs. Predict the next sample based on input and knob settings.


Not sure about Neural DSP or reverbs in general, but real-time neural-network-based DSP seems very possible. The open-source Neural Amp Modeler[1] would be a good place to start diving in.

[1] https://www.neuralampmodeler.com/the-code


I have tried NAM but with limited success in modeling some time-based effects (e.g. octave shifting). However, I have not tried to model reverb effects.


To handle time-based effects you need a custom architecture.

https://www.research.ed.ac.uk/en/publications/neural-modelli...

Don’t use NAM. Learn PyTorch.


NAM uses PyTorch for its NN implementation?


It is 100% possible and there are a slew of tricks you can use to get big performance boosts with negligible cost to accuracy.


Do you know what the tricks are?


1. Don't use LSTMs (4 vector-matrix multiplies) or GRUs (3 multiplies). Use a fixed HiPPO matrix to update state: just 1 multiply, and since it's fixed you can unroll during training, which is much faster than backprop through time.

2. Write SIMD intrinsics by hand. None of the libraries are as fast.

3. Don't use sigmoid or tanh functions as your nonlinear activation. Instead, approximate them with the softsign function, which is much cheaper (sketch below).

Depends on exact architecture, but these optimizations have yielded 10-30x improvement for single threaded CPU real time audio applications.
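
For (3), softsign is just an add, an abs, and a divide:

    #include <cmath>

    // Same saturating S-shape as tanh, but no exp() hiding inside.
    inline float softsign(float x) { return x / (1.0f + std::fabs(x)); }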

When GPU audio matures all this may be unnecessary.


On my personal laptop (Mac), I used AU Lab to apply effects to the music I listened to, and discovered that I like the ambience provided by adding a little reverb.

I'm interested in doing the same at work, but the office PCs run Windows and I'm somewhat limited in what I can install.

The Arduino Audio Tools library seemed like it would work, but it only applies effects on one channel.

https://github.com/pschatzmann/arduino-audio-tools/wiki/Audi...

I discovered the Wishing Well stereo reverb pedal, and am wondering whether building one will allow me to apply the effect to music from an iPod, rather than a guitar.

https://scientificguitarist.wixsite.com/home/wishing-well

If anybody would like to group-buy the Wishing Well, please let me know - if I order PCBs and parts, I'll have spares.


FTA:

> There are some interesting designs here - and they work, but they often require careful tuning. As well as finding a good compromise for the delay time, using too much inter-channel mixing can lock the delays together so they act like a cohesive unit, which isn't great for longer tails.

Made me wonder whether there’s a connection with linear feedback shift registers (https://en.wikipedia.org/wiki/Linear-feedback_shift_register)

If you have an LFSR that produces a decent but not-too-good pseudo-random signal, can you use its parameters to create a decent reverb?
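
For reference, an LFSR step is tiny; e.g. the classic 16-bit Fibonacci form with taps at 16, 14, 13, and 11, which cycles through all 65535 non-zero states before repeating:

    #include <cstdint>

    // One step of a maximal-length 16-bit Fibonacci LFSR.
    uint16_t lfsrStep(uint16_t s) {
        uint16_t bit = ((s >> 0) ^ (s >> 2) ^ (s >> 3) ^ (s >> 5)) & 1u;
        return uint16_t((s >> 1) | (bit << 15));
    }

Whether the tap polynomial translates into good delay-time ratios is the open question; my guess is you'd still want the usual mutually-detuned delay lengths on top.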


I came expecting DSP code but was pleasantly surprised by this!

If anyone has any other interesting ones to share about how audio hardware and software are built, I'd love to see them!


Sean Costello of Valhalla DSP is doing excellent work with his designs. He's got a series of blog posts that delve into various approaches for designing reverb algorithms:

https://valhalladsp.com/2021/09/20/getting-started-with-reve...

https://valhalladsp.com/2021/09/22/getting-started-with-reve...


Here's a video that just dropped, starting from very approachable first principles and coding as they go. It's also entertaining, and they promise to continue the series, which I look forward to. The code will be shared as well.

https://youtube.com/watch?v=iA6wRgwl7k0


Using convolution, I found that applying some random variation between the stereo channels made a much more satisfying stereo result, though only on later reflections; earlier, it makes for unhelpful stereo effects. Is that something you have tried?


Interesting article. However, in the drums demo the reverb sound increases after the drum is hit, which doesn't seem natural. I would expect the reverb to have only a decay phase, but here it has a slow attack as well.


*EDIT*: I just realised that the demo you're talking about doesn't include the "early reflections"! So that demo only includes echoes which have gone through the main feedback delays at least once.

If you mix in some of the diffused signal directly, it bridges the gap between the initial sound and the feedback echoes: https://signalsmith-audio.co.uk/writing/2021/lets-write-a-re...

---

You can tune the diffuser to have an almost-instant onset. I can't remember what I did last time, but at a guess, having the largest diffuser stage increase by a factor of N (number of channels) instead of 2 might do it.

But also, if you're playing acoustic drums in a big space like a concert hall (instead of a long one like a staircase), the first echoes coming back from the walls are actually a bit delayed (1 foot ~= 1ms, at the speed of sound). So if your nearest wall is 10ft away, the first wall-echoes will come 10-20ms after the initial direct sound.


I wrote a synthesizer last year. It was a lot of fun. Just through trial and error, I learned how sound waves work. It's really enjoyable to build things that aren't websites.


Is there a way to get "realistic" reverb using GPU computation?


I think a GPU is overkill. In the guitar world, there are dozens of insanely high-quality reverb pedals, and they all only have some basic chips in them; no GPU necessary.


No, but the question bears discussion. Many of us have high powered GPUs in our machines that sit idle during audio production. Could they be leveraged? Not just for reverb.


My understanding is that latency is usually a much bigger concern than raw computing power when doing audio processing. GPUs are great at doing large batches of computation, but aren't as good at doing lots of small batches with low latency, which is what audio processing tends to be.



