There have been a good number of major improvements in projectM in the last few years, which is impressive considering the age of this project. Keeping software, user-contributed presets, and shaders from 20 years ago working is an effort that is never finished.
To mention a few:
* Milkdrop was a Windows-only Win32 affair, and the shaders for presets have all been written in HLSL, for DirectX. ProjectM had to incorporate a shader transpiler to convert preset shader code on the fly from HLSL to GLSL so that projectM can run on platforms other than Windows. The conversion isn't perfect and can cause a few shaders to fail compilation, but these problems do get fixed when someone takes the time to dig into them. (A toy sketch of the kind of rewriting involved follows this list.)
* Improving the FFT maths and PCM data interface
* Optimizations for preset evaluation using the LLVM JIT
* Halfway-completed port to the web with Emscripten
* Updated plugin support, first for more recent versions of iTunes and later for Music.app. macOS installer for the plugin and SDL app. Almost-working multi-bundle installer code signing and notarization.
* Text menus and preset searching in the SDL app (keys listed in README)
* Released as a standalone app for Steam
* The build system was ported from a very-broken CMake setup to autotools, and then back to CMake again, soon to be released as a new major version 4.0.0.
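To give a flavor of what that HLSL-to-GLSL conversion involves, here's a toy sketch of just the intrinsic-renaming step. This isn't projectM's actual transpiler code, which also has to handle types (float3 vs. vec3), semantics, swizzles and sampler declarations:

```cpp
#include <map>
#include <regex>
#include <string>

// Toy illustration only: rename a few well-known HLSL intrinsics to their
// GLSL equivalents. A real transpiler parses the code properly.
std::string hlslToGlslIntrinsics(std::string src) {
    static const std::map<std::string, std::string> renames = {
        {"lerp", "mix"},                    // linear interpolation
        {"frac", "fract"},                  // fractional part
        {"rsqrt", "inversesqrt"},           // reciprocal square root
        {"ddx", "dFdx"}, {"ddy", "dFdy"},   // screen-space derivatives
        {"tex2D", "texture"},               // texture sampling (GLSL 3.3+)
    };
    for (const auto& [hlsl, glsl] : renames) {
        // \b avoids rewriting substrings of longer identifiers.
        src = std::regex_replace(src, std::regex("\\b" + hlsl + "\\b"), glsl);
    }
    return src;
}
// hlslToGlslIntrinsics("float3 c = lerp(a, b, frac(t));")
//   yields "float3 c = mix(a, b, fract(t));" (float3 itself would still
//   need mapping to vec3, one of many details glossed over here).
```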
Music visualization is really underdeveloped in my opinion. I've never seen anything I find impressive. The problem is that the visualization never really looks like it relates to the music I'm listening to. Usually you just kind of see the rhythm of the music, at most. I want to see something where harmonies look like beautiful patterns, noise looks like noise, and perhaps you could even pick out individual instruments in the visualization. It's really difficult though, which I guess is why it hasn't been done. You need to re-implement much of human audio perception, and then map that to a visual language.
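For what it's worth, one building block toward "harmonies look like beautiful patterns" would be folding an FFT magnitude spectrum into the 12 pitch classes (a chromagram) and mapping those to color or geometry. A minimal sketch, assuming you already have the magnitudes:

```cpp
#include <array>
#include <cmath>
#include <vector>

// Fold FFT bin magnitudes into energy per pitch class (C, C#, ..., B).
std::array<float, 12> chromagram(const std::vector<float>& magnitudes,
                                 float sampleRate, int fftSize) {
    std::array<float, 12> chroma{};
    for (std::size_t bin = 1; bin < magnitudes.size(); ++bin) {
        float freq = bin * sampleRate / fftSize;
        if (freq < 27.5f || freq > 4200.0f) continue;  // rough musical range
        // MIDI note number: 69 = A4 = 440 Hz, 12 semitones per octave.
        int midi = static_cast<int>(std::lround(69.0 + 12.0 * std::log2(freq / 440.0)));
        chroma[((midi % 12) + 12) % 12] += magnitudes[bin];
    }
    return chroma;  // chroma[0] = C, chroma[9] = A, etc.
}
```

A consonant chord concentrates energy in a few related bins while noise smears across all twelve, which is exactly the kind of distinction a harmony-aware visual could exploit.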
In fairness, projectM is literally built on 90s tech (Winamp's Milkdrop).
For some reason, around the early 00s people's love for visualisations fell out of favour. Meanwhile, clubs would still hire dedicated VJs and/or lighting guys, so they didn't rely on math-based visualisations. Which meant fewer people contributing to developing cool new math.
Worse still, the few industries left that did care about beat detection were games like Guitar Hero, Beat Saber and those dance machines in arcades, all of which found it easier to have patterns programmed by a human due to the limited selection of songs on offer.
> For some reason, around the early 00s people's love for visualisations fell out of favour
It didn't. Music companies effectively killed it because music visualization is a combination of derivative work and performance. So, copyright, licensing etc.
I still set up both Winamp AVS and ProjectM inside foobar2000 simultaneously to this day. I don't use foobar too much these days, but when I do, I have those two things running.
I wanted something like this 4 years ago so I started writing a project called Drop [0] with the intention of making it possible to add custom visualizations through a plugin interface, providing plugin developers with tools such as beat detection, and a decent user interface. I added a lot of features and basically finished the interface but never got around to finishing the plugin portion.
Most of these range from just playing loops set at certain BPMs, at the lower end of actual responsiveness, up to a fast Fourier transform which can maybe separate things out further by frequency range, perhaps just roughly into bass/mid/treble, mapped onto the rendering (like here in projectM, which as I recall is just a fork of Milkdrop). There was some really cool research I found recently, which I think I'm linking here if I'm not mistaken:
They've gone and tweaked the basic formula from just FFT to something that actually does "peak detection", which is better explained in that article than I can manage here, as it involves a lot of math. There are some ideas out there, to be sure.
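For reference, the classic energy-based version of that pipeline (not the peak-detection method from the article) is roughly this; the window length and threshold here are ad hoc:

```cpp
#include <cmath>
#include <deque>
#include <numeric>
#include <vector>

// Sum squared magnitudes of all FFT bins falling inside [loHz, hiHz),
// e.g. bass ~ [20, 250), mid ~ [250, 4000), treble ~ [4000, 16000).
float bandEnergy(const std::vector<float>& mags, float sampleRate,
                 int fftSize, float loHz, float hiHz) {
    float e = 0.0f;
    for (std::size_t bin = 0; bin < mags.size(); ++bin) {
        float f = bin * sampleRate / fftSize;
        if (f >= loHz && f < hiHz) e += mags[bin] * mags[bin];
    }
    return e;
}

// Crude onset detector: flag frames whose bass energy jumps well above
// its recent average. Window size and threshold are made up for the sketch.
struct BeatDetector {
    std::deque<float> history;  // recent bass-band energies
    bool onFrame(float bassEnergy) {
        bool beat = false;
        if (history.size() == 43) {  // ~1 s of history at ~43 frames/s
            float avg = std::accumulate(history.begin(), history.end(), 0.0f)
                        / history.size();
            beat = bassEnergy > 1.5f * avg;  // ad-hoc threshold
            history.pop_front();
        }
        history.push_back(bassEnergy);
        return beat;
    }
};
```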
YEARS ago (so it looks kind of fuzzy now) I remember this guy who hacked together some synths he'd programmed, with visuals generated for them. It's really psychedelic stuff:
Really really difficult, and maybe not possible for many years to come.
Someone recently posted a link to Terence Tao on mathematical notation. Much the same problem. Notation is hard. Indeed I reckon notation is one of the most difficult problems we have.
That said, visualising Music is kinda like musicalising Art. Possible but not necessarily useful. Synesthesia is one way to deal with it, but is hugely reliant on very human perception and kinda iffy (not knocking those who can experience synesthetically, more suggesting https://en.wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat%3F).
A general theory of correspondence of meaning and notation is still unavailable, notwithstanding a vast amount of research in the field. If you can make any useful contribution at all you are worthy of historical recognition.
I've been thinking about this stuff for decades, and can only recommend Charles Sanders Peirce as a good beginning for the modern mind.
The gag in this song[1] is a bit crass, but one of the visualization tools in particular has always caught my eye: the hexagonally-arranged note visualization. It seems to capture a number of chord relationships in a very intriguing and intuitive way.
Thought I recognized one of the viz methods, and the hexagonal representation of the chords may be based on this idea from Euler: https://en.wikipedia.org/wiki/Tonnetz
I also saw some music software a few years ago that mentioned this type of input device. I don't remember if the software in question was a VST or a DAW or what it was, so I can't find it at the moment. The software that mentioned it may have been open source, but I don't remember.
Yep. I remember the early visualizers did indeed display something akin to frequency spectral data. I wrote a few visualizers for SoundJam (later iTunes) and the API was exactly that: left and right bucketed spectral data.
I know; bar graphs got a little tiresome, so people tried writing more interesting stuff, but at some point the "visualizer" became so disconnected from the music that even if you paused the audio, the fireworks and "flow fazing" just kept right on doing their thing.
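For anyone curious, those early host-to-plugin APIs had roughly this shape. This is a hypothetical reconstruction, not the literal SoundJam/iTunes headers:

```cpp
#include <cstdint>

// Hypothetical sketch of an early visualizer interface: the host does the
// FFT and hands the plugin coarse per-channel spectral buckets.
struct VisualData {
    static constexpr int kBuckets = 16;  // illustrative bucket count
    std::uint8_t left[kBuckets];         // 0..255 magnitudes, left channel
    std::uint8_t right[kBuckets];        // 0..255 magnitudes, right channel
};

// The plugin's whole job each frame: map buckets to bar heights.
inline int barHeight(std::uint8_t magnitude, int maxHeight) {
    return magnitude * maxHeight / 255;
}
```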
I think it's a problem similar to maps in rhythm games (thinking of Beat Saber in my case): auto-generated ones are never as good as manually crafted ones. If it were possible to extract the dominantly perceived beat/instrument/melody, it would probably lead to better map generation too.
The problem there isn't necessarily development, but rather the data accessible to developers.
What you need to have that level of visual interaction is the original stems (or something which emulates them relatively closely).
Services like https://www.lalal.ai/ are a step in the right direction, and will probably lead to what you--and many of us--crave. But as it is now, they require way too much computing to provide meaningful enough data fast enough to build a real-time visualiser.
But even then there is the whole issue of copyright which would bottleneck development even further...
try finding something that lets you do an XY plot like an oscilloscope does. left channel moves the "pen" left and right, right channel moves it up and down, and it draws a fading line as it goes. it's amazing to watch, to me.
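the idea is simple enough to sketch; drawLine() below is a stand-in for whatever drawing backend you have:

```cpp
#include <cstddef>
#include <vector>

void drawLine(int x0, int y0, int x1, int y1);  // assumed to exist

// Each stereo sample pair becomes a point (left -> x, right -> y);
// successive points are connected so the trace draws a continuous line.
void xyScopeFrame(const std::vector<float>& left,
                  const std::vector<float>& right,
                  int width, int height) {
    // Samples are in [-1, 1]; map to pixel coordinates (y flipped).
    auto toX = [&](float s) { return static_cast<int>((s * 0.5f + 0.5f) * (width - 1)); };
    auto toY = [&](float s) { return static_cast<int>((0.5f - s * 0.5f) * (height - 1)); };
    for (std::size_t i = 1; i < left.size() && i < right.size(); ++i) {
        drawLine(toX(left[i - 1]), toY(right[i - 1]),
                 toX(left[i]),     toY(right[i]));
    }
    // A real implementation would also multiply the framebuffer by ~0.9
    // each frame (or draw a translucent black quad) to get the fading trail.
}
```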
Nintendo pisses off and alienates fans so much. They're going to get steamrolled by the Steam Deck and Nickelodeon All Stars if they're not careful. They don't listen to or learn from their customers.
I... surely you're joking. The Steam Deck poses no serious threat to Nintendo.
It's not remotely possible for Steam Deck to have the sort of first-party integration Nintendo accomplishes around co-op games, multiple controllers in multiple configurations, motion controls, pairing, latency, suspend/resume...
Like, have you used a Valve product before? They're beautiful and inspiring but... when they work, it's still not Nintendo quality.
The vast, vast majority of Nintendo customers know nothing of game mods or competitive multiplayer, nor do they care about latency or netcode. I doubt the majority even play Smash or any fighting game.
I mean, people may disagree with these moves by Nintendo and this position may be short sighted, maybe not. But regardless, it's Nintendo's intellectual property to do with as they please. They certainly don't owe fans or customers anything.
If they don't want people to play their games, who am I to argue? The article taught me people actually need Nintendo's permission to host a tournament. They might get C&D'd otherwise. What an insane world we live in. I can't fathom why anyone would ever C&D an event involving their own game and its enthusiastic fans.
Intellectual property shouldn't even exist in the first place, let alone be used to extort $50k from fans who are basically advertising their game for free, making it popular and driving a lot of sales. They're the ones who owe Nintendo nothing, yet they remained loyal despite this mistreatment.
Intellectual property doesn't exist. Copyrights exist, patents exist, trademarks exist. But they are all different, and none of them are the same as property. "Intellectual property" is just a propaganda term used by people who want you to think they are.
Just to clarify my comment: what I mean by 'it exists' is in a practical sense, i.e. what people mean by 'IP' is enforced by the institutions.
It does not matter that much whether they are infringing copyright, patent or trademark law, and in such projects it is very likely all three.
The sad state of affairs is that this term is also industry standard, so it is somewhat understood by artists, as opposed to the real laws, which are understood by essentially nobody.
So in all other senses IP indeed does not exist, and it is a misleading term. But instead of arguing over these details, it seems more productive to point out that there is an already-existing alternative that is better for everybody.
Like it or not, 'IP' exists. By doing these kinds of projects, sadly, these people relinquish their rights.
They could be 1) fighting for change in these laws, or 2) investing in free culture/software. Sadly, almost none of them do.
They are happy to operate in that grey space as long as it is tolerated, but they should know that they are on borrowed time.
Big companies striking with absurd power at fan projects should help drive the point home, and should be the occasion to advocate for open standards.
I think civil disobedience is a form of protest. As you say, it helps drive the point home. If "illegal" video game tournaments are widely attended and accepted as societal good, or Nintendo lawyers anger enough people, the activism and legislation you hope for will follow naturally.
I would be afraid that this approach would instead reinforce the status quo: the needle would move slightly, so that people find themselves comfortable being serfs again, rather than being encouraged to move towards freedom.
Imagine how much more awesome these visualizations could be if music files came with more than 2 channels. I'm reminded of old tracker software which had simple visualizations for each track.
given that, alas, consumers of the world are expected to enjoy bundled finished products, it seems semi-unlikely. so perhaps this suggests a great use case for stream separators like Cassiopeia[1], posted a couple of hours ago.
ultimately i'd love for music to be more like html: an encoding of content that the user's agent then renders as it sees fit. having individual streams, or something even more complex like an ambisonic recording that encodes position, would unlock a lot of experimentation & play. i had not considered though how much such discretization could aid visualization, which right off the cuff sounds very promising.
> ultimately i'd love for music to be more like html: an encoding of content that the user's agent then renders as it sees fit.
Yeah, that would be so nice. Imagine the cool stuff people would be able to do. Disable voice tracks for instrumental versions. Play video games that perfectly synchronize to the music. Make custom visualizers for each song featuring graphics perfectly synchronized to each instrument... I always try to imagine these visualizations when I listen; even something as simple as lines being drawn in accordance with note pitch would be awesome. Pretty much impossible to do that when every sound is mixed together...
Something like this could be done with a format like WavPack [0] I imagine.
The Two Big Ears "spatial audio" format is just a repackaged WavPack file, after all. It handles 10 channels of simultaneous audio playback even in streaming environments, and its interface is low-latency enough to perform real-time panning and spatialization based on listener position, i.e., for VR environments.
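To illustrate where that could go for visualization (with a made-up decoder type, not WavPack's actual C API): if each stem arrived as its own channel group, a visualizer could drive one layer per stem.

```cpp
#include <algorithm>
#include <cmath>
#include <string>
#include <vector>

// Hypothetical: one block of decoded samples for one stem/channel group.
struct StemFrame {
    std::string name;              // "drums", "bass", "vocals", ...
    std::vector<float> samples;
};

// One visual layer reacts only to its own stem's level, so a drum hit
// never bleeds into the "vocals" layer the way it does with a full mix.
struct VisualLayer {
    float intensity = 0.0f;
    void react(const StemFrame& stem) {
        float peak = 0.0f;
        for (float s : stem.samples) peak = std::max(peak, std::abs(s));
        intensity = 0.8f * intensity + 0.2f * peak;  // smoothed envelope
    }
};
```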
Indeed! I produce music for fun using virtual synths, multi-layer sound libraries, audio plug-ins and MIDI combined in a software digital audio workstation (Cubase at the moment). I find it super enjoyable and despite not formally studying music, not reading traditional music notation nor being able to play any musical instrument well in real-time, through step recording to a MIDI multitrack timeline (and subsequent on-screen tweaking) I manage to create remarkably pleasing, quite commercial-sounding original music.
As an equally enthusiastic video hobbyist, I've long wanted to create my own original visuals to accompany my music. Yet the "visual synthesis" tools I've played with seem to remain split into two extremes: essentially "too algorithmic / Geiss-like" on one end and "too complex and art-talent-thirsty" on the other (for example Blender 3D, After Effects). A similar gap existed in music tools 20 years ago, but remarkable progress has been made in closing that gap, empowering in-betweeners like me to achieve wonderful results with only hobby-level skills, time and budgets. Sadly, that stunning progress in democratizing music synthesis has not been equally mirrored in the visual synthesis domain. The visual side mostly remains polarized into DJ/VJ-centric real-time apps and pro-level video editing and 2D/3D rendering tools.
It's odd because the musical synthesis domain seems to have been unleashed by increasing access to dedicated hardware DSPs in the 90s followed by rapid, iterative 'software-ization' as general-purpose processors became capable of replacing DSPs (MMX etc). GPUs have followed a similar march down the curve of democratization but for some reason broad access hasn't unleashed a similar hobby renaissance on the visual synthesis side like it has for visual capture and editing with mobile/DSLR imaging and desktop video editing.
However, I still remain hopeful that perhaps the visual end of the digital synthesis revolution has only been delayed. I find encouraging signs in the real-time rendering and compositing capabilities of GPUs driven by game engines, from the nascent efforts of Machinima a decade ago to, more recently, ILM using Unreal Engine to create remarkable visuals for the Disney/Star Wars series "The Mandalorian" using conceptually approachable live projection mapping techniques. The custom code ILM paid Unreal to add has since been bundled (unsupported) into all versions of the engine, including the personal/hobby-use ones, and garage experiments can already be found on YouTube.
As for providing more useful audio sources as input to the visual pipeline, there are millions of desktop music hobbyists, ranging from Garagebanders on tablets and phones to DAW users on desktop. All of them are already capable of supplying separated, uncompressed multi-track audio stems (pre or post downstream effects like EQ/reverb) along with expressive metadata in the form of MIDI 2 notes and articulations. There are even a variety of standards across Windows, Mac, iOS and Android for real-time, multi-track transfer between applications within a device. For distribution formats, new consumer digital audio interfaces, from HDMI 2.x on the interconnect side to lossless multi- and N-channel spatial audio standards on the encoding side, are democratizing the necessary I/O capabilities.
From time to time I run projectM on songs I like. It's designed to be an open-source implementation of the Milkdrop visualizer (I think), so it also runs on Linux unlike WinAmp/Milkdrop. Its visualizations are very psychedelic and mesmerizing.
It is very entertaining and I highly recommend giving it a try. You can download extra presets or use the ones that come with it. It's also worth noting that I had to adjust (increase) the beat sensitivity and fiddle with the audio output settings to get it to work on Linux. It also works better for faster-paced songs IMO.
I watched the demo video, and couldn't convince myself that the images had anything to do with the accompanying music. Cool animations, but not a "music visualizer."
Almost all of the visualizations are responsive to the bass/mid/treb amplitudes and the detected beat, but the problem with the SDL version on macOS is latency, which is unfortunately present in the demo video you are likely referring to.
SDL doesn't currently allow for requesting low-latency audio capture, so we're looking at PortAudio instead.
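For the curious, requesting low-latency capture from PortAudio looks roughly like this. These are standard PortAudio calls with error handling trimmed; how it will actually be wired into projectM is still being worked out:

```cpp
#include <portaudio.h>

static int captureCallback(const void* input, void* /*output*/,
                           unsigned long frames,
                           const PaStreamCallbackTimeInfo*,
                           PaStreamCallbackFlags, void* userData) {
    // Hand the interleaved stereo floats in `input` to the visualizer here.
    (void)input; (void)frames; (void)userData;
    return paContinue;
}

bool openLowLatencyCapture(PaStream** stream) {
    if (Pa_Initialize() != paNoError) return false;
    PaStreamParameters in{};
    in.device = Pa_GetDefaultInputDevice();
    if (in.device == paNoDevice) return false;
    in.channelCount = 2;
    in.sampleFormat = paFloat32;
    // This is the key knob SDL doesn't expose: ask for the device's
    // default low-latency input configuration.
    in.suggestedLatency = Pa_GetDeviceInfo(in.device)->defaultLowInputLatency;
    return Pa_OpenStream(stream, &in, /*output*/ nullptr, 44100,
                         /*framesPerBuffer*/ 256, paClipOff,
                         captureCallback, nullptr) == paNoError
           && Pa_StartStream(*stream) == paNoError;
}
```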
I've loved playing with this over the years - it's a great project.
I've been to a couple music shows where the VJ had a serious setup with what looked like physical mixers with buttons, knobs, and dials, and some quite complex software going. Does anyone know how to learn about the state of the art in VJing?
VJs tend to use multiple different apps and connect them using Syphon (Mac) or Spout (Windows) to share textures. I've seen Resolume (clip-heavy content) and TouchDesigner (a node-based "visual IDE") mentioned. Those are both big in the space and each offers something unique.
I wanted to add https://www.synesthesia.live to that list. It has a ton of audio reactive content and the best audio reactivity in the space.
But the key really is in using other apps + Syphon/Spout to combine all the textures into unique content, then manipulating that in something like TouchDesigner.
Full disclosure: I'm one of the creators of synesthesia.live, but I use multiple apps, including TouchDesigner every day.
Not sure about "state of the art", but if you're on Mac, VDMX (https://vidvox.net/) is a good place to start looking. It's sort of like a DJ mixer for video sources, but it lets you do other things, like control parameters of your video sources with a MIDI controller, for example.
+1 for VDMX. When you understand the fundamentals of how to build stuff with it you can almost create your own version of most of the other software available on the market. Plus the vidvox dudes are legends.
Worked with a friend to create NestDrop. It's a VJ interface wrapper for the Milkdrop engine with support for Spout output.
http://nestimmersion.ca/nestdrop.html
if you want to get into hacking on a real ntsc/pal signal there is a whole universe of very expensive eurorack gear. on the low end you could check out ch/av for a VGA synth. you probably want something a bit higher level.
for a simple raspi-based device that lets you play with video feedback and has midi-tuneable features check out video_waaaves and other software by andrei jay. combined with a basic edirol video mixer and a couple sources this could get you pretty far.
for coding visuals, learn glsl on shadertoy, p5.js and threejs from streamers like yuri artyukh, and check out what people are doing with commercial software like touchdesigner or max/msp
Can someone give the quick tutorial on how to load music files into the thing? I installed projectm-sdl on Arch, it doesn't seem to do anything when I pass a file name to the program, keyboard mashing seems to do nothing, and dragging-and-dropping from a file manager seems to do nothing.
You have to either feed it a microphone input or research how to set up a loopback monitor with pulseaudio/pipewire/alsa (whichever you use). That way you can feed it the audio output of any application. You may also need to set an environment variable in your shell (I think it's something like SDL_AUDIODRIVER, I'm not sure, and you set it to the specific driver you're using). In any case you should be able to find instructions by searching for "pulseaudio loopback stereo monitor sdl".
(Apologies for formatting i am on mobile)
I recently had to set this up on a Raspberry Pi 4 with Manjaro / KDE Plasma and it was a bit of a pain to set up (the setup is slightly different depending on your exact audio configuration), but it's very rewarding once you do.
Feel free to PM as I spent a lot of time debugging the setup and may be able to help!
This is a fantastic idea! I forked [1] projectM a while back to work on a few modifications, such as screen and audio recording (by saving raw framebuffers) and being able to select the active preset remotely from a web interface, so I became somewhat familiar with the way projectM handles A/V input/output.
I just took a look at ffmpeg’s video filter interface and it looks like linking the two could be doable!
This feels like a much more elegant solution for recording frame-perfect captures so thank you!
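For anyone else trying this: one way to get frame-perfect captures without touching ffmpeg's filter API at all is to pipe raw frames into an ffmpeg process. A sketch using standard ffmpeg rawvideo flags (not my fork's actual code):

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Spawn ffmpeg reading raw RGBA frames from stdin and encoding to H.264.
FILE* openEncoder(int width, int height, int fps, const std::string& out) {
    std::string cmd =
        "ffmpeg -y -f rawvideo -pix_fmt rgba"
        " -s " + std::to_string(width) + "x" + std::to_string(height) +
        " -r " + std::to_string(fps) +
        " -i - -c:v libx264 -pix_fmt yuv420p " + out;
    return popen(cmd.c_str(), "w");  // POSIX; write frames to ffmpeg's stdin
}

void writeFrame(FILE* enc, const std::vector<unsigned char>& rgba) {
    fwrite(rgba.data(), 1, rgba.size(), enc);
}
// After rendering each frame, grab it with
//   glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buf.data());
// and when done, pclose(enc) to finalize the file.
```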
IMO making a projectM-gstreamer plugin would accomplish this far more easily and with better results. I wanted to do this but wasn't sure how to create the GL context.
Started it a while back: https://github.com/projectM-visualizer/projectm/pull/207
Man this has come a long way since GEISS.EXE for MSDOS+VGA (320x200x256)... it didn't even synchronize, but it was 1988 and I was so stoned that it appeared to be in sync.
Really cool stuff.
I saw Jon Hopkins in concert a few years ago and his visualization was killer. Wonder if he used this.
Every time I've tried to run this it has ended up crashing and being utterly unusable on Linux. I've been trying for about 15 years on different hardware. Nowadays I just use a WebGL-based visualizer.
Works fine for me on Linux. I’ve had it running for hours on end without a crash.
Some builds are more stable than others. I think I’ve found the SDL build to be the best. The ALSA one was fine bar a few specific plugins that needed disabling but the SDL build on ArchLinux is rock solid.
I have issues as well. Each implementation (SDL/Jack/Pulse) is completely different: in some the shaders break completely, some don't respond to shortcuts or don't show the menus... it's a mess, and the funny thing is that these are the same issues I had 10 years ago. The first thing they have to fix is making every client work the same way.
There is also an Android port of this app. It is pay-to-play, but you can easily copy over presets from the original source as well as aftermarket presets. I run it on my Android head unit, which brings back the nostalgia of 90s aftermarket stereos with a simple visualization that plays on loop (except this one is interactive and fills a 9" touch screen :D)
YES. I had to look twice to be sure this is the same project we used 20 years ago to throw visuals on a wall while having awesome parties! Back in the day I routed the audio from the DJ mixer into Winamp to get this to work. Fun times! Thank you so much for those memories, and a big thumbs up for the maintainers of the project!
libprojectM is a cross-platform library designed to be part of other applications. There are some reference implementations using SDL and Qt. They run on Linux.
For best results, grab the latest master and build with CMake.
What OS are you after? It officially supports Windows, macOS and Linux. But I do know of FreeBSD, iOS and Android ports too. There’s bound to be more ports out there as well.
There's a discord now too. https://discord.gg/tpEuywB
We welcome PRs and generally respond quickly to them. It's a completely community-driven project and we're always looking for help.