Mick Gordon did some fun hidden spectrogram imagery in the Doom 2016 soundtrack.
He talks about that and plenty of other cool stuff in his talk at the 2017 GDC conference. It's one of my favorite conference talks ever: he did so much cool experimentation to get the sounds he used on the soundtrack, and watching it is one of those moments where you really get to see a master of his craft let loose and explain his process.
Author here. This is a basic spectrogram visualizer that's mobile friendly. It lets you select regions on the spectrogram and play them separately. There is no grand plan behind this web app: it's just a handy basic tool to capture sounds on your phone and see what they look like.
Your spectrogram looks elongated horizontally because the FFT window size is too large. I use a window size of 1024 at a 48000 Hz sample rate, so one window covers 1024/48000 ≈ 0.021 sec. This window size looks optimal in most cases: if you change it in my web app, you'll see that all other window sizes make the spectrogram blurry in different ways, but at 1024 it comes into focus.
Of course, don't forget the window function (Hann, or raised cosine), but it looks like you've got that covered because your spectrogram looks smooth.
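Roughly, the framing looks like this (a simplified sketch, not the exact code from the app; it assumes `samples` is a Float32Array at 48000 Hz):

    // Simplified sketch: Hann-windowed frames of 1024 samples.
    const sampleRate = 48000;
    const windowSize = 1024;  // 1024 / 48000 ≈ 0.021 s per window
    const hann = new Float32Array(windowSize);
    for (let i = 0; i < windowSize; i++)
      hann[i] = 0.5 * (1 - Math.cos(2 * Math.PI * i / (windowSize - 1)));

    function frameAt(samples, start) {
      // One windowed frame; its FFT becomes one column of the spectrogram.
      const frame = new Float32Array(windowSize);
      for (let i = 0; i < windowSize; i++)
        frame[i] = (samples[start + i] || 0) * hann[i];
      return frame;
    }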
The color palette looks good in your case. FWIW, my color function is like this: pow(fft_amp, 1.5) * rgb(9, 3, 1). The pow() part brightens the low/quiet amplitudes, and the (9,3,1) multiplier displays a 10x wider amp range by mapping it to a visually long black->orange->yellow->white range of colors. Note that I don't do log10 mapping of the amplitudes.
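In code it's roughly this (a sketch; it assumes fft_amp has already been normalized to 0..1):

    // Sketch of the color mapping: pow(fft_amp, 1.5) * rgb(9, 3, 1),
    // clamped per channel. Red saturates first, then green, then blue,
    // which gives the black -> orange -> yellow -> white ramp.
    function ampToColor(amp) {
      const a = Math.pow(amp, 1.5);
      const clamp = (x) => Math.min(255, Math.round(255 * x));
      return [clamp(9 * a), clamp(3 * a), clamp(1 * a)];
    }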
It uses "audio/webm;codecs=opus" to record mic. Now it's possible to change it in the config menu in the top right. Safari probably needs audio/mp3. Edit: also consider "audio/foo;codecs=pcm" where "foo" is something compatible with Safari.
Very neat! May I suggest adding a button to switch to a log scale for frequency? I love the ability to select and play back just a particular set of frequencies. But voice uses only about 15% of the screen height [1], so it's hard to play with.
Zooming doesn't really get me what I'm after, because I was trying to hear particular bands one after the other - e.g., listening to one octave after the next. And since the octave relationship isn't linear, I'm thinking a non-linear scale would better match what I was trying to do.
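The mapping itself is simple enough - something like this sketch, where the bounds are made up and not taken from the app:

    // Sketch: map a frequency (Hz) to a 0..1 vertical position on a log scale,
    // so every octave (e.g. 110 -> 220 -> 440 Hz) gets the same height.
    function logY(freq, fMin = 55, fMax = 16000) {
      return Math.log2(freq / fMin) / Math.log2(fMax / fMin);
    }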
The WebAudio API has an analyser node that can create spectrograms in real time. The ones I've created in the past were nowhere near as detailed as this one though.
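The basic setup is something like this (a minimal sketch; drawing the columns is left out, and it needs to run inside an async function):

    const ctx = new AudioContext();
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    const analyser = ctx.createAnalyser();
    analyser.fftSize = 2048;  // 1024 frequency bins
    ctx.createMediaStreamSource(stream).connect(analyser);

    const bins = new Uint8Array(analyser.frequencyBinCount);
    function tick() {
      analyser.getByteFrequencyData(bins);  // one spectrogram column per call
      // ...draw `bins` as the next column...
      requestAnimationFrame(tick);
    }
    tick();

Part of why it tends to look less detailed is the default smoothingTimeConstant and the 8-bit output of getByteFrequencyData (getFloatFrequencyData avoids the latter).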
Can I ask what kind of use cases a spectrogram would have for radar data? I've been messing around with making my own spectrogram app as well (a Linux desktop app rather than a web app) and would be stoked to know if there are any potentially easy-to-reach use cases for it.
We basically make Doppler radars - here the frequency shift is proportional to the speed of the object. Most other radars (pulse radars) use the bandwidth of a pulse to gain range resolution (the wider the bandwidth, the better the resolution).
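For reference, the Doppler shift for a radial speed v is f_d = 2*v/λ (the factor of 2 because the wave travels out and back), so e.g. a 24 GHz radar (λ ≈ 12.5 mm) would see roughly a 1.6 kHz shift from a target moving at 10 m/s - right in the audio range, which is why a spectrogram is such a natural way to look at the returns.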
Radar signals are modulated in very specific ways, which are visible right away on a spectrogram even with plenty of noise. Classifying the modulation of radar signals is common in military contexts, since it lets you listen for emissions and tell whether they come from an enemy or an ally. I bet it has more uses than that, but it's the first one I could think of.
When I read about ultrasonic cross-device trackers in advertising [1], I installed "org.woheller69.audio_analyzer_for_android" and "hans.b.skewy1_0" (automatic ultrasonic detection) and started scanning through TV channels after running some test tones. Suffice it to say I didn't find any, but the entire process was quite fun. There's also "org.billthefarmer.scope", which is an oscilloscope with a spectrum view (not a spectrogram).
Web apps like this that access a user's data should provide sample data for users to experiment and explore with before they have to grant access to their actual data.
Brilliant work - I "get" how this works. I've just spent about half an hour playing with this (Chrome browser on my kitchen Chromebook), singing into it and letting it "listen" to the ambient background noise here (old cooker clock ticking, fridge compressor rumbling occasionally). Useful, educational, and fun too - thanks for publishing/hosting this so others can enjoy it!
I usually use Audacity to inspect the spectrogram of FLAC files and see if they really contain full-bandwidth audio, or if someone packaged a constant-rate 320 kbps MP3 encode into a FLAC file - a lossy transcode shows up as a hard cutoff with little to no energy above roughly 20 kHz.
One place I used these was on a toy AI assistant. I recorded myself saying a trigger word thousands of times, cut the audio into pieces, and converted each piece to a spectrogram image. I then fed those to a training model to help recognize the trigger word.
Before the spectrogram, I was feeding the wav file directly, which was incredibly intensive on my laptop. But the image files were easier to process in real time. This tool can be used for debugging.
How would this work with AI? Don’t you need to train the model to discriminate between the trigger word and other words? If all that’s seen during training is the trigger word, the model will just learn to say “yes” to everything, if you get what I mean.
I do have a WebGL-based implementation of FFT, but here I used good old JS. When properly written, it gets translated into really fast machine code - even faster than WebAssembly (I tried!). WebGL's problem is the high toll of the CPU-GPU bridge: when you need to transfer a block of audio data from CPU to GPU to perform calculations, you wait, and when you need to transfer the FFT data back, you wait again. These waits quickly outweigh everything else. For wavelet transforms, however, the GPU comes out ahead, because you can store some pre-computed FFTs on the GPU and reuse them across multiple runs.
iZotope, associated with MIT researchers, makes arguably the best such tool for the pro audio industry. Their RX suite is truly miraculous, allowing audio engineers to visualize frequencies in a similar manner, but also offering brush-like tools to do things such as "deleting a dog bark from a guitar take" fairly easily.
Seems like you've never seen or used SpectraLayers (a commercial tool from Steinberg) or Sonic Visualiser (an OSS project). Both have much more advanced visualization capabilities than RX. However, RX definitely has the more advanced "semi-automated" editing/repair features.
I've witnessed a large number of studios across the US and Latam using RX on a regular basis — places recording anything from indie stars to Grammy-winning artists.
Can you recommend any good references to begin understanding spectrograms? I work in DL-based noise cancellation - a major part of my work involves analyzing spectrograms - and I find it very difficult to do my work without the ability to critically analyze these images. Any help from anybody?
What do you mean by "understanding the spectrogram"? The graph itself is straightforward: the x axis is time, the y axis is frequency. The intensity of each pixel represents the intensity of a certain frequency component at a certain point in time.
If you're referring to generating spectrograms with Fourier transforms, you will need some math background to properly do the calculation by hand. It largely just boils down to "find the amount of each frequency over time"
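If it helps, the "amount of one frequency" part is just correlating the slice of audio against a sine and a cosine at that frequency - a naive sketch (a real FFT computes the same thing, just much faster):

    // Naive DFT bin magnitude. `frame` is a windowed slice of samples,
    // k is the bin index; the bin's frequency is k * sampleRate / frame.length.
    function binMagnitude(frame, k) {
      const N = frame.length;
      let re = 0, im = 0;
      for (let n = 0; n < N; n++) {
        re += frame[n] * Math.cos(2 * Math.PI * k * n / N);
        im -= frame[n] * Math.sin(2 * Math.PI * k * n / N);
      }
      return Math.hypot(re, im);
    }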
Last question: if this is the premise of your work, shouldn't you already know about it?
o The tall vertical lines reflect "plosives" - sudden releases of sound energy, often at the beginning of words, from having the mouth/airway closed and then opened, as in the first letter of "put" or "tea"
o The high frequencies come from "fricatives" like the first letter of "see" or "free" where air is being passed through the teeth or almost closed lips
o The lower frequencies are where most of the recognizable speech content is, corresponding to the way the resonant frequencies of the mouth and throat are being changed (articulation) by moving the tongue, lips and teeth. Specifically, the speech content is in changes to the "formants" - the changing resonant frequencies that show up as bright, mostly horizontal bands in the lower frequencies
Noise may show up in various ways depending on what the noise source is. A background hum with a fixed frequency spectrum will show up as one or more horizontal frequency bands across the entire spectrogram. High-frequency noise will show up as much more energy in the higher frequencies, which don't carry a lot of energy in clean speech (fricatives only).
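If you wanted to spot that kind of constant hum programmatically, a crude sketch (assuming `columns` is an array of magnitude arrays, one per time slice):

    // Bins whose average magnitude stays high across all columns are
    // constant tones (hum) rather than speech, which moves around.
    function persistentBins(columns, threshold = 0.5) {
      const avg = new Float32Array(columns[0].length);
      for (const col of columns)
        for (let i = 0; i < avg.length; i++) avg[i] += col[i] / columns.length;
      const peak = Math.max(...avg);
      return [...avg.keys()].filter(i => avg[i] > threshold * peak);
    }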
Thanks for sharing this! I didn't know about these terms before. Ever consider writing a blog post/tutorial on your knowledge of human speech in spectrograms? This is much more digestible than most of what's out there.
Thanks for your effort in sharing the link - I'm kind of comfortable with most of the theoretical aspects of STFT/FFT/mel scale etc., but when I look at the spectrogram I still feel I'm missing something.
When I look at the spectrogram I want to know how clear the speech in the audio is: is there background noise, is there reverb, is there loss anywhere? I have a feeling these things can be learned from analyzing spectrograms, but I'm not sure how to do it. Hence the question.
Look for clear and distinct frequency bands corresponding to the vocal range of human speech (generally around 100 Hz to 8 kHz).
If the frequency bands are well defined and distinct, then the speech is likely clear and intelligible.
If the frequency bands are blurred or fuzzy, then the speech may be muffled or distorted.
Note that speech, like any audio source, consists of multiple frequencies: a fundamental frequency and its harmonics.
Background noise can be identified as distinct frequency bands that are not part of the vocal range of human speech.
E.g. if you see lots of bright lines below or above the human vocal range then there's lots of background noise.
Lower frequencies in particular can have a big impact on the perceived clarity of a recording, whereas high frequencies come off as more annoying.
Noise within the frequency range of human speech is harder to spot and you should always use your ears to decide whether it's noise or not.
You can also use a spectrogram to check for plosives (e.g. "p", "k", "t" sounds) and harsh sibilance ("s" sounds), as they can also make a recording sound bad/harsh.
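If you want a rough number instead of eyeballing it, here's a crude sketch (the 100 Hz - 8 kHz band is the same rough vocal range as above; `column` is one magnitude slice and binHz = sampleRate / fftSize):

    // Fraction of a column's energy that falls inside the rough speech band.
    // Values near 1 suggest little out-of-band noise; lower values suggest
    // hum or hiss outside the vocal range.
    function speechBandRatio(column, binHz) {
      let inBand = 0, total = 0;
      for (let i = 0; i < column.length; i++) {
        const e = column[i] * column[i];
        total += e;
        const f = i * binHz;
        if (f >= 100 && f <= 8000) inBand += e;
      }
      return total > 0 ? inBand / total : 0;
    }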
Unfortunately I think the answer is "we don't know." We have loads of techniques (e.g. band-pass filters) and hypotheses (e.g. harmonic frequencies and timbre), but we haven't been able to implement them perfectly, which seems to be why deep learning has worked so well.
Personally I hypothesize that the reason it's so hard is that the sources are intermixed, sharing frequencies, so isolating certain frequencies doesn't isolate a speaker. We'd need something like beamforming to know how much amplitude of each frequency to extract. I'd also hypothesize that humans, while able to focus on a directional source, cannot "extract" a clean signal either (imagine someone talking while a pan crashes on the floor - it completely drowns out what the person said).
Speech is pretty well understood - there are two complementary aspects to it: speech production (synthesis) and speech recognition (via the changing frequency components as they show up in the spectrogram).
When we recognize speech, it's almost as if we're hearing the way the speaker is articulating words, since what we're recognizing is the changing resonant frequencies ("formants") of the vocal tract corresponding to articulation, as well as other articulation cues such as the sudden energy onset of plosives or the high frequencies of fricatives (see my other post in this topic for a bit more info).
High-quality (that is, highly intelligible) speech synthesis has been available for a long time based on this understanding of speech production/recognition. One of the earliest speech synthesizers was the DECtalk (from Digital Equipment), introduced in 1984 - a formant-based synthesizer based on the work of linguist Dennis Klatt.
The fact that most of the information in speech comes from the formants can be demonstrated by generating synthetic formant-only speech consisting of nothing but sine waves at the changing formant frequencies. It doesn't sound at all natural, but it's nonetheless very easy to recognize.
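A toy version of that experiment with the Web Audio API (a sketch; the formant tracks below are invented for illustration, not measured values):

    // Three sine oscillators gliding along made-up formant tracks.
    const ctx = new AudioContext();
    const tracks = [[700, 300], [1200, 2300], [2600, 3000]];  // F1, F2, F3 (Hz)
    for (const [from, to] of tracks) {
      const osc = ctx.createOscillator();
      const gain = ctx.createGain();
      gain.gain.value = 0.1;
      osc.frequency.setValueAtTime(from, ctx.currentTime);
      osc.frequency.linearRampToValueAtTime(to, ctx.currentTime + 1);
      osc.connect(gain).connect(ctx.destination);
      osc.start();
      osc.stop(ctx.currentTime + 1);
    }

On a spectrogram it shows up as exactly the kind of bright gliding bands a real vowel transition produces.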
The starting point for human speech recognition is similar to a spectrogram - it's a frequency analysis (cf FFT) done by the ear via the varying length hairs in the inner ear vibrating according to the frequencies present, therefore picking up the dominant formant frequencies.
Agreed in theory; however, if I gave you two spectrograms, would you be able to tell which one is clear speech and which one is garbled? I'd bet we'd be able to come up with one that wouldn't pass the sniff test.
If you know of any implementations that can look at a spectrogram and say "hey, there are peaks at 150 Hz, 220 Hz and 300 Hz with standard deviations of 5 Hz, 7 Hz, and 10 Hz, decreasing in frequency over time, so this is a deep voice saying 'ay'" and get it right every time, I'd be really interested in seeing it (besides neural networks).
Maybe an expert linguist (not me) could do a pretty good job of distinguishing noisy speech in most cases, but a neural net should certainly be able to be superhuman at this.
Some sources of noise like the constant background hum (e.g. computer fan) are easy to spot though.
https://imgur.com/sRe6Ypv
Aphex Twin did something similar, but this is more playful in my opinion.