
That spectrogram view really is phenomenal, so effortlessly fast even while being kind of 3D.

I was talking about this the other day with someone: why don't DAWs push the spectrogram view more forcefully, instead of the default waveform view? There's so much more information to be gleaned from the spectrogram view than from the waveform view.

I'm trying to learn to sing these days, and I'd been wondering if this would be a good way to practice a song: look at a vocal stem of the song I'm trying to sing, and use the spectrogram as visual feedback.




While a spectrogram shows a lot of useful information, it also kind of doesn't. Your ears tell you much of the same thing except with better subjectivity, especially as your ears improve. You can tell (with practice) if a bass guitar is too bassy, or has too much "twang", or sounds harsh and throaty. But if a spectrogram of a bass guitar shows higher-than-usual frequency content in the 800–1200 Hz range, is the bass tone too twangy for the song, or is it just right?

The waveform view, on the other hand, will always remain useful no matter how good your ears get. If you're comping together multiple takes of the same section, or shifting tracks to adjust for phase alignment in a multi-microphone setup, doing it by looking directly at the samples is way less tedious than doing it by ear.
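For the phase-alignment case, the sample-level comparison basically boils down to finding the offset between two recordings of the same source. A minimal sketch, assuming two mono numpy arrays at the same sample rate (names and numbers here are just illustrative):

    import numpy as np

    def estimate_lag(mic_a: np.ndarray, mic_b: np.ndarray) -> int:
        """Estimate how many samples mic_b lags behind mic_a (positive = later)."""
        # The peak of the cross-correlation marks the best alignment.
        corr = np.correlate(mic_b, mic_a, mode="full")
        return int(np.argmax(corr)) - (len(mic_a) - 1)

    # Toy check: mic_b is mic_a delayed by 8 samples (1 ms at 8 kHz).
    rng = np.random.default_rng(0)
    mic_a = rng.standard_normal(8_000)
    mic_b = np.concatenate([np.zeros(8), mic_a[:-8]])
    print(estimate_lag(mic_a, mic_b))  # -> 8, so shift mic_b 8 samples earlier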

Also, though it's probably not an issue today, I would guess CPU concerns are another reason why a spectrogram isn't displayed by default on all tracks.


Another issue I can think of is that the spectrogram view inherently loses temporal resolution the more precise you make it in frequency.
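That's the usual STFT trade-off: a longer analysis window gives finer frequency bins but smears detail in time, and a shorter window does the opposite. A rough illustration with scipy (the signal here is just noise as a stand-in for real audio):

    import numpy as np
    from scipy.signal import stft

    fs = 48_000                          # sample rate in Hz (assumed)
    x = np.random.standard_normal(fs)    # one second of stand-in audio

    for nperseg in (256, 1024, 4096):
        f, t, Zxx = stft(x, fs=fs, nperseg=nperseg)
        # Frequency bin width vs. step between analysis frames:
        print(f"window={nperseg:5d}  "
              f"freq resolution ~{fs / nperseg:7.1f} Hz  "
              f"time step ~{(t[1] - t[0]) * 1000:6.2f} ms")

The 4096-sample window resolves ~12 Hz bins but each column of the spectrogram now covers tens of milliseconds, which is why transients look smeared at high frequency precision.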


For everybody else who was wondering DAW = Digital Audio Workstation. https://en.m.wikipedia.org/wiki/Digital_audio_workstation


Audacity isn't really a DAW, but you can indeed switch the track display to show a spectrogram, though it takes a while to compute. You can also get spectrogram plugins for DAWs that will display a real-time spectrogram. Mastering software often includes a spectrogram tool.


Definitely click on the second to last icon on the bottom of the SPECTROGRAM page to see the spectrogram applied to an old POTS modem dialing out to another modem. Notice in particular that the DTMF touch tones are indeed composed of two tones.
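If you want to convince yourself of the two-tone part, a DTMF digit really is just two sine waves summed. A quick sketch of the "5" key, which uses the 770 Hz row tone plus the 1336 Hz column tone:

    import numpy as np

    fs = 8_000                            # sample rate (Hz)
    t = np.arange(int(0.2 * fs)) / fs     # 200 ms key press

    # DTMF "5" = 770 Hz row tone + 1336 Hz column tone, equal amplitude.
    tone = 0.5 * np.sin(2 * np.pi * 770 * t) + 0.5 * np.sin(2 * np.pi * 1336 * t)

    # A spectrogram of `tone` shows two horizontal lines at 770 and 1336 Hz.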


You might give https://pitchy.ninja/game a try for practicing vocals.


Wavesurfer.js has a nice spectrogram plugin for audio files.

I also found out that audio-based classification often works by training your models on spectrogram images!
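The usual recipe is to convert each clip to a log-mel spectrogram and then treat it like an ordinary image-classification problem. A rough sketch of just the preprocessing step, assuming librosa is available (the rest of the pipeline is whatever image model you like):

    import numpy as np
    import librosa

    def audio_to_spectrogram_image(path: str, sr: int = 22_050) -> np.ndarray:
        """Load an audio file and return a log-mel spectrogram, ready to be
        fed to an ordinary image classifier (e.g. a small CNN)."""
        y, _ = librosa.load(path, sr=sr, mono=True)
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
        log_mel = librosa.power_to_db(mel, ref=np.max)  # dB scale, image-like
        return log_mel  # shape: (128 mel bands, time frames)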


I will try Wavesurfer.

The things you're talking about sound interesting, please say more. What are some example applications?


Here is an article; I've just been learning about this myself:

https://medium.com/x8-the-ai-community/audio-classification-...



