Love this. A project I've had in mind for a while but realistically will never get around to building is an ambient/coding music generator that's hooked up to a metric, e.g. your project's latency, autoscale size, error rate, potentially controlling different instruments.
It would be fun to be able to subconsciously monitor your system without staring at graphs.
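A rough sketch of how that could work in the browser, assuming a hypothetical /metrics endpoint returning JSON and using Tone.js for the sound:

```javascript
import * as Tone from "tone";

// A drone whose filter cutoff tracks a latency metric: higher
// latency opens the filter, so the system "sounds" more agitated.
const filter = new Tone.Filter(400, "lowpass").toDestination();
const drone = new Tone.Oscillator(110, "sawtooth").connect(filter);

document.addEventListener("click", async () => {
  await Tone.start(); // browsers require a user gesture to start audio
  drone.start();
}, { once: true });

// Poll the (hypothetical) metrics endpoint every few seconds.
setInterval(async () => {
  const { p99LatencyMs } = await (await fetch("/metrics")).json();
  filter.frequency.rampTo(200 + p99LatencyMs * 10, 2);
}, 5000);
```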
A long time ago, a friend mentioned this exact same idea! The analogy he used was how steam engine operators became so familiar with their machines that they could tell what was going on just by listening. I'm not sure if we can really do the same with services, but it's a fun thought experiment.
You can sort of do this with a modern computer. The CPU and GPU fans spin up on increased load, and you can listen to the hard drive (the sounds correspond to some degree to the HDD indicator light). Over time you can get an idea of system activity just by listening to the machine itself.
At one place I worked back in the 1970s there was a speaker on the mainframe (it may have just been a transistor radio picking up interference), and you could get some idea of what the machine was doing from the sound.
Hansen and Rubin did something like this in 2001 [1]. I have some audio samples of their program's product from monitoring the Lucent site at 6 AM, noon, and 2:30 PM - decidedly different sounds at each time.
Edit: The page on Hansen's website about this project (Listening Post) is available on archive.org, but alas without any audio samples [2]
That's a cool idea. Something that was suggested to me with Flowful was to use some sort of body triggers, perhaps from facial recognition through your device's camera, to determine when the user was becoming less focused. The music could then adapt to help them get back on track.
XD yes! I want to jack my earbuds into the stock market: time and sales frequency denoting tempo and trade volume denoting dynamics; with a real time candle analyzer modulating between keys. Each company would be a different song!
When I was younger, I was into AutoHotkey. My PC would go down during power cuts because the UPS failed within 4-8 minutes, beeping all the while.
I found a tool ("tonedet" or something) that listens for a specific tone and triggers an action via AHK, which in my case was a proper shutdown.
That was a success.
Then I thought of a project that would give me status via `SoundBeep, 500, 400`.
I never got around to doing that but it wouldn't have been half bad.
Three short beeps in succession, repeated, could mean the internet is down, with some combination of high or low beeps to say something else.
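Something like this in Node.js (18+) would have done it, using the terminal bell in place of AHK's SoundBeep - the pattern encoding is the fun part:

```javascript
// Encode status as beep patterns with the terminal bell.
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));
const beep = () => process.stdout.write("\x07"); // ASCII bell

async function signal(count, gapMs) {
  for (let i = 0; i < count; i++) {
    beep();
    await sleep(gapMs);
  }
}

// Naive connectivity probe.
async function internetUp() {
  try {
    await fetch("https://example.com", { method: "HEAD" });
    return true;
  } catch {
    return false;
  }
}

// Three short beeps in succession, repeated each minute: internet down.
setInterval(async () => {
  if (!(await internetUp())) await signal(3, 200);
}, 60_000);
```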
I remember the first time I assembled a PC (nineties), listening to motherboard beeps and comparing against the (printed) manual to work out what was wrong.
My initial EE Senior project proposal (back in the early 90's) was a biofeedback controlled music generator. The idea was to use a Markov chain (not that I knew that term, but I'd heard of the concept) to produce tone sequences, then use biofeedback to alter the probabilities to encourage relaxation. With my luck, it probably would have devolved into a metronome, but I was young and naive.
I ended up failing out of college (I was young and stupid, too) and when I returned 5 years later I ended up changing my senior project to a leg controller for a hexapod robot.
I make algorithmic (infinite) music in a DAW (Reaper) and have been thinking about porting some of it to the browser, but didn't actually do anything about it :-(
Can you describe the architecture a little? Are you using the Web Audio API for the instruments (and timing)? Are tracks static/looping, or dynamic (created on the fly based on random parameters)?
Yeah sure - you're right, it's using a lib built on top of the Web Audio API called Tone.js. This handles scheduling and some instruments, but the sounds themselves are mostly samples which I record myself from VSTs, then process in the browser with things such as filter modulation so that they sound different over time. When a track loads for you, that's the samples being loaded in from cloud storage.
As for static vs dynamic, it's a mix of both. Some tracks are more static than others, and I have learned it actually seems better to use randomness sparingly. Almost all tracks use probabilities of loops firing, and some watch the status of other loops to create sections (e.g. always fire A if B is not firing). Note/chord selection also often involves randomness, such as a probability of picking each note from a list, then a probability for that note itself to fire.
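A stripped-down Tone.js sketch of that pattern (sample paths, probabilities, and note lists are placeholders, not real track code):

```javascript
import * as Tone from "tone";

// Two sample loops; paths are placeholders.
const padA = new Tone.Player("samples/pad-a.mp3").toDestination();
const padB = new Tone.Player("samples/pad-b.mp3").toDestination();
const synth = new Tone.Synth().toDestination();
const notes = ["C4", "E4", "G4", "B4"];

let bFiring = false;

// Loop B fires with 40% probability each bar.
new Tone.Loop((time) => {
  bFiring = Math.random() < 0.4;
  if (bFiring) padB.start(time);
}, "1m").start(0);

// Loop A watches B: always fire A if B is not firing.
new Tone.Loop((time) => {
  if (!bFiring) padA.start(time);
}, "1m").start(0);

// Note selection: pick a note from the list, then give the chosen
// note itself only a 60% chance to actually sound.
new Tone.Loop((time) => {
  const note = notes[Math.floor(Math.random() * notes.length)];
  if (Math.random() < 0.6) synth.triggerAttackRelease(note, "8n", time);
}, "4n").start(0);

document.addEventListener("click", async () => {
  await Tone.start();
  Tone.Transport.start();
}, { once: true });
```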
I quite like the way it sounds. Randomness is difficult, indeed. If you ever want to look at something less random: 1/f (pink) noise generates more interesting sequences than white or brown noise.
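For reference, a quick sketch of Voss's classic 1/f melody algorithm - a handful of "dice", each re-rolled at a different rate, whose sum indexes into a scale:

```javascript
// Voss's 1/f algorithm: die i is re-rolled roughly every 2^i steps,
// so the sum mixes fast-changing and slow-changing randomness.
function makeVossGenerator(numDice = 4, faces = 4) {
  const dice = Array.from({ length: numDice }, () =>
    Math.floor(Math.random() * faces));
  let counter = 0;
  return () => {
    const flipped = counter ^ (counter + 1); // bits that change this step
    counter++;
    for (let i = 0; i < numDice; i++) {
      if (flipped & (1 << i)) dice[i] = Math.floor(Math.random() * faces);
    }
    return dice.reduce((a, b) => a + b, 0); // 0 .. numDice*(faces-1)
  };
}

// 13 scale degrees cover the 0..12 range of sums.
const scale = ["C4", "D4", "E4", "G4", "A4", "C5", "D5",
               "E5", "G5", "A5", "C6", "D6", "E6"];
const next = makeVossGenerator();
for (let i = 0; i < 16; i++) console.log(scale[next()]);
```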
Each track is its own JS file! With a folder of corresponding samples, which the JS file pieces together as I mentioned before. There's no MIDI; it's all done with the Tone.js scheduler. :)
I'm looking for a good tool and stack for this too, along with algorithmic visuals.
I have a synesthesia-themed project (https://testfixture.presteign.com) where I currently make all the audio with generative synthesis in a DAW and the visuals by hand in a design package, but I'd love to branch out into fully-procedural at some point.
Absolutely love this concept. One issue I'm having after listening for a while to a variety of the channels is that the music is all pretty "sad."
I don't know enough about music theory to give you more detail than that, but is there a way you can add more upbeat songs, to get my mood up, energy up, motivation kicking in? Listening to these, I kind of want to take a nap.
The dissonant intervals introduced in the late 1800s and early 1900s by composers such as Rachmaninov, Ravel, Debussy, and Varèse were hardly "pure".
The most amazing feelings often wash over me, in fact, while listening to these incredible composers...after a while the "atonal" notes will suddenly "snap" into place while my brain is somehow making sense of the music without my help.
It is truly a sublime moment when this magic occurs, and I'll often laugh out loud as the sudden transition from melodic confusion to understanding occurs.
Introducing major key based songs would do a lot to improve the mood of the music.
In general, major keys are used for a lot of pop music. For example, the Beatles wrote almost all their hits in major keys (Let It Be), while ballads and more sad and "moody" music (e.g. Zeppelin's Stairway to Heaven and Metallica's The Unforgiven) tend to be written in minor keys.
Also, tempo is important to the "upbeatness" of a piece. It would be an interesting addition to this wonderful program to add switches for major/minor key and tempo.
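A sketch of what such switches might look like in a generator like this (scale spellings and tempo values are just illustrative, and I'm assuming a Tone.js transport):

```javascript
import * as Tone from "tone";

// Illustrative mood switches: mode picks the scale the note
// selector draws from, bpm sets the transport tempo.
const scales = {
  major: ["C4", "D4", "E4", "F4", "G4", "A4", "B4"],
  minor: ["C4", "D4", "Eb4", "F4", "G4", "Ab4", "Bb4"],
};

function applyMood({ mode = "major", bpm = 90 } = {}) {
  Tone.Transport.bpm.value = bpm; // faster generally reads as more upbeat
  return scales[mode];            // feed this to the note picker
}

const notes = applyMood({ mode: "major", bpm: 110 });
```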
I love ambient tunes and listen to them for 1-10 hours a day while doing stuff. Any time I need to focus (even when writing an email) I'll throw on my favorite ambient music.
How is this any different? Why would I get a "premium" account for some random website when there's an entire catalog of ambient music on Spotify/Apple/etc that I could listen to?
The biggest differentiator may be the infinite length of the tracks. Once you tune into something that fits, it can be left going as long as necessary. With other services you need to compose playlists or restart the player when a selection ends.
Such offerings have discontinuities between tracks, or beat-matched transitions; these generated tracks are constructed to continue with the same themes and sounds as long as wanted. It may or may not be desirable, but it is different.
Oh, this makes sense, I didn't really think of the transition as potentially being distracting or a cue to stop working.
I personally use a "relaxing music playlist/radio", the songs are pretty similar (meditation music), and I don't often notice the song changing, but I do often notice that suddenly I am listening to a completely different melody.
… but there's no continuity with radio … or at least no uniform, dependable continuity: the mood differs song-to-song, and there's no control over the "levers" beyond _skip_ and _like_. So "training" the radio stations involves interacting with them.
Similar to Flowful, Endel creates generative soundscapes based on mood/need: focus, read, work out, drive, you get the picture.
I appreciated that they made a concerted effort to anchor what they did in music and scientific theory, to understand what makes for good focus, what focus is, etc.
Also cool: It uses time of day, local weather, and your activity level as inputs to its generative music, so in theory, it's very contextual.
Interestingly: I was a happily paying subscriber for over a year, but lapsed because I moved out of the city, began working in a private office, and found that I no longer needed to close out the outside world to focus well :shrug: who knew?
Sure thing. Further down in another comment I went into how the music is made, so I'll stick to the tech stack here.
The frontend is built with React, Chakra UI and Tailwind CSS. It also does all of the audio generation using a scheduling library called Tone.js.
Auth / Database are handled by Firebase, and payments are by Stripe. It's fully serverless; I use cloud functions for anything server side.
The samples themselves are stored in Google Cloud Storage, although I may need to look into a different method or ways of making it more efficient, as today's traffic has absolutely smashed through the free downloads tier.
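One option I'm considering (just a sketch, with a hypothetical bucket name): setting long-lived Cache-Control metadata on the sample objects, so browsers and any CDN in front can serve repeat plays from cache instead of hitting the bucket:

```javascript
import { Storage } from "@google-cloud/storage";

// Samples never change once uploaded, so mark them immutable and
// long-lived; repeat listeners then hit cache instead of the bucket.
const storage = new Storage();
const [files] = await storage.bucket("flowful-samples").getFiles();
for (const file of files) {
  await file.setMetadata({
    cacheControl: "public, max-age=31536000, immutable",
  });
}
```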
Call me old-fashioned, but I prefer chill music that was hand-crafted by a DJ blending samples from wide influences, as in the Buddha Bar or Café del Mar collections. I'm afraid all these lo-fi channels and Spotify Focus playlists are in reality just procedurally generated under made-up labels, so no artists are actually paid for the streams; the money (what little there is anymore) goes from the platforms' right pocket to their left. Claude Challe and DJ Ravin would probably not happen in the modern streaming age.
This is quite irrelevant to the discussion. To the best of our knowledge, the OP made the tracks and programmed the sounds; if any artist should be paid it's them, not some DJ or samples maker in fashion at the Buddha Bar.
There's such a vast catalog of human-composed ambient music, that I just simply don't see why someone would prefer to listen to this algorithmically generated soulless muzak.
Same goes for the "lo-fi beats for study" channels. There are so many good instrumental hip-hop albums. MF Doom instrumentals (Special Herbs series), J Dilla instrumentals, Boards of Canada, Madlib instrumentals, etc. Why listen to that trite crap? It's the equivalent of "eating out" in McDonald's.
> There's such a vast catalog of human-composed ambient music, that I just simply don't see why someone would prefer to listen to this algorithmically generated soulless muzak.
A lot of human-composed ambient music actually has algorithmically determined elements. One of the simplest examples is using several tape loops of differing lengths, which Boards of Canada are very fond of (sketched below).
Similarly, if you dig into the composition and ideas behind some of Brian Eno's pieces, you will find they are rule-based.
Even using a computer to do this automatically isn't very new; modular synthesis enthusiasts have been doing it for ages.
Many of these ambient pieces will actually generate different ambient sounds for a very long time, and the "soulful" recordings thereof (on CD or vinyl) are merely short excerpts from the actual piece.
Generating and listening to it live is sometimes the only way to fully enjoy an ambient piece.
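For example, a couple of loops with incommensurate periods already make a slowly shifting texture; a tiny Tone.js sketch (pitches and periods arbitrary):

```javascript
import * as Tone from "tone";

// Two loops drift against each other: 7 s and 11 s periods only
// realign every 77 s, and longer with more voices.
const a = new Tone.Synth().toDestination();
const b = new Tone.Synth().toDestination();
new Tone.Loop((t) => a.triggerAttackRelease("C4", "2n", t), 7).start(0);
new Tone.Loop((t) => b.triggerAttackRelease("E4", "2n", t), 11).start(0);

document.addEventListener("click", async () => {
  await Tone.start();
  Tone.Transport.start();
}, { once: true });
```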
Is the piano intentionally detuned (which is something that rather disturbs my concentration)? Does the algorithm select the sounds itself? Is there information on how the "generator" works?
I record a bunch of samples from VSTs I have. Once I have the samples I upload them to a server, which your browser requests when you load up a song. In the browser, the generators (which is another way of saying 'tracks') then piece together these samples in ways I have coded. So for example, I might have a list of chords which sound good, and a loop which selects from that set. Or maybe a bunch of note patterns to play at a certain interval, but with only a small probability of playing.

To make it always unique (and hopefully always fairly interesting), I do things like automate filters, introduce randomness, and switch things around based on how long the track has been running. Each track has its own pre-defined set of samples and musical key; the code handles the arrangement, randomness and modulation over time. These random effects are different on every play, so each person will hear a slightly different song from the next.
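To give a flavour of the filter-automation side, here's a rough Tone.js sketch - the file names and values are placeholders, not the actual track code:

```javascript
import * as Tone from "tone";

// A looping sample run through a slowly modulated low-pass filter,
// so the same material sounds different as the track runs.
const filter = new Tone.Filter(800, "lowpass").toDestination();
const pad = new Tone.Player({ url: "samples/pad-cmaj.mp3", loop: true })
  .connect(filter);

// A very slow LFO sweeps the cutoff between 400 Hz and 2 kHz.
new Tone.LFO("0.02hz", 400, 2000).start().connect(filter.frequency);

// Switch things up based on elapsed time: after two minutes,
// slow the sample down slightly for a darker feel.
Tone.Transport.scheduleOnce(() => {
  pad.playbackRate = 0.95;
}, 120);

document.addEventListener("click", async () => {
  await Tone.start();
  pad.start(0);
  Tone.Transport.start();
}, { once: true });
```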
Interesting, thanks; so these "samples" are not just the sounds triggered by a (MIDI) track, but the music themselves. In that case the "generator" just selects and combines pre-existing musical parts; it's not an "ab initio" music generator as we saw e.g. in MuseNet or BachBot, but rather a kind of "automated DJ".
Yeah that's not a bad way of putting it. Each musical part will use the samples in a different way each time though, with some randomness added, so it hopefully stops things from being too recognisable.
I really dig this; I always have a need for background music while working, and the context switching is really bothersome! Maybe I can finally ditch the YouTube Music sub and just support your project.
Actually, I've been thinking of writing my own for baroque-era music -- those rulesets are well-defined and have all kinds of tricks for escaping resolution. Perfect for long D&D sessions! :D
Do you have any communities / resources you go to for inspiration for your work?
The fact that you'd consider making that switch is just awesome to me, so thanks! :)
For resources, there's a great site as an intro to generative systems: https://teropa.info/loop/
And for communities, I have made a Discord for Flowful where I plan to post updates and how-its-made type stuff. The link is in the top right of the Flowful app, feel free to join!
I got an error while trying this on Firefox (Librewolf):
Application error: a client-side exception has occurred (see the browser console for more information).
Checking the console doesn't produce anything helpful either.
Edit: While inspecting the page, I found a total of 28 different inline CSS styles. Some of them are empty, and some contain styles for classes with random names. I assume this is just something with React and Chakra UI, though.
I got the same error, it's related to WebGL being disabled when `privacy.resistFingerprinting` is enabled. The related error is `TypeError: this.minigl is undefined`
You might be right. It's based on a polyrhythmic style I found on YouTube. Perhaps I should just call them 'Polyrhythms' - I don't intend to make any medical claims.
Plausible deniability: Audio Delivery in High Definition.
Besides, does it count as a medical claim if certain content metadata tags happen to coincide with medical acronyms? I'm not a regulatory official, but this seems like a grey area _at worst_. No diagnosis is being made or even implied.
I work in medical device regulations.
The way ADHD appeared is clearly not a claim, and there is no regulatory concern of any sort here in any jurisdiction I can think of.
That said - regardless of regulations, you don't want to confuse your users, so it's better to leave this acronym out indeed.
FWIW, while I understand your concern, I've seen a bunch of ambient sites that list this genre as ADHD. It does seem to help me. And I don't think anyone would reasonably consider a music category to be "medical advice?"
I would pay $20/month for an API that let me download a chunk of music/sound, with the rights to use it in commercial products. With the option of fade in/out at start and end, and the ability to specify the duration of the sound downloaded.
I love the app, but consider disabling the visual stuff instead of failing with an exception when WebGL is disabled: it's not strictly necessary for the app to work.
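Something along these lines would degrade gracefully (initGradientBackground here is just a stand-in for whatever currently sets up the visuals):

```javascript
// Stand-in for whatever currently initializes the WebGL visuals.
function initGradientBackground() { /* ... */ }

// Feature-detect WebGL instead of assuming it's available.
function webglAvailable() {
  try {
    const canvas = document.createElement("canvas");
    return !!(
      canvas.getContext("webgl") || canvas.getContext("experimental-webgl")
    );
  } catch {
    return false;
  }
}

if (webglAvailable()) {
  initGradientBackground();
} else {
  document.body.classList.add("static-background"); // plain CSS fallback
}
```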
Came here to mention generative.fm … it's great, been using it for years.
The creator, Alex Bainter[1], posts a lot of interesting/great stuff about generative music[2,3], and recently published the collection of music theory utilities he uses for generative.fm: https://github.com/generative-music/theory
IANAL but that is not the impression I get from reading your terms and conditions.
> Except as expressly provided in these Terms of Use, no part of the Site and no Content or Marks may be copied, reproduced, aggregated, republished, uploaded, posted, publicly displayed, encoded, translated, transmitted, distributed, sold, licensed, or otherwise exploited for any commercial purpose whatsoever, without our express prior written permission.
A comment on HN is not "expressly provided in these Terms of Use" and I don't think it would constitute "express prior written permission." If you want to let people use it in their streams, you might want to update the language there.
I'll give it a go; I haven't used ambient music in some time. Just one picky thing: I found that even at the same volume level, some tracks are way louder than others. But maybe it's just me.
Yeah, this is something I'm still struggling with. The gain normalization is super manual currently, so I want to try to even levels out across tracks programmatically. Sorry about that!
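One approach I might try - a rough sketch using per-sample RMS rather than proper LUFS matching, with an arbitrary target level:

```javascript
import * as Tone from "tone";

// Decode each sample, measure RMS, and trim the player's gain
// toward a shared target level.
async function normalizedPlayer(url, targetRms = 0.1) {
  const buffer = await new Tone.ToneAudioBuffer().load(url);
  const data = buffer.getChannelData(0);
  let sum = 0;
  for (let i = 0; i < data.length; i++) sum += data[i] * data[i];
  const rms = Math.sqrt(sum / data.length);
  const player = new Tone.Player(buffer).toDestination();
  player.volume.value = Tone.gainToDb(targetRms / rms); // dB trim
  return player;
}
```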
Uhh, right now it's not the most user-friendly. You can delete the fields that the site places in your localStorage (e.g. by clearing this site's data).
The selections from the question just decide a) what's in your recommended section and b) what category you start on when you first load the app.
I figured people wouldn't want to keep navigating the landing page after they had already seen it once. This way you can just head straight back into the app.
An incognito tab will let you see it again though.
I prefer to see how "deep" a product will let me go (without hitting a regwall) before reading about it. After I've tested out the gatekeeping, I go back to the homepage to learn more about the product.
But my personal use case aside -- if people want to see the homepage again for any reason, like contacting you, or linking to a friend -- you are blocking that ability.
You all made good points - I've removed the redirect. You should now be able to go back to the homepage.
I was modelling it after Notion's homepage, where you just want to go back to your workspace when you go to notion.so, not the landing page. But I'll put some more thought into how it can be done less crudely.
Notion does a lot of stuff poorly, I'm not sure why their patterns are being replicated elsewhere, just because the company itself is successful?
Anyhow, what you can do is have the landing page under /home, for example, so when a visitor lands at /, you decide whether to send them to /home (not logged in) or /player (logged in). The brand logo at the top then links to /home instead.
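With react-router, for instance, the whole guard is a tiny component (route names and components here are hypothetical):

```javascript
import { BrowserRouter, Routes, Route, Navigate } from "react-router-dom";

// Placeholder pages, just so the sketch is self-contained.
const LandingPage = () => <h1>Landing</h1>;
const Player = () => <h1>Player</h1>;

// "/" decides where to send the visitor based on login state.
function RootRedirect({ user }) {
  return <Navigate to={user ? "/player" : "/home"} replace />;
}

function App({ user }) {
  return (
    <BrowserRouter>
      <Routes>
        <Route path="/" element={<RootRedirect user={user} />} />
        <Route path="/home" element={<LandingPage />} />
        <Route path="/player" element={<Player />} />
      </Routes>
    </BrowserRouter>
  );
}
```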
What problem does this solve: poor playlists and the context switching that happens when an ambient music recommendation algorithm serves you something distracting. Also, hopefully, it's just nice music to listen to.
You are listening to the music of other people - mine. I made the generators, which were not trained on anyone else's music. It does not 'process tracks that were made by real people'. The paid element of this service goes to me, without X streaming service taking any cuts.
Computer-made generative music has been around since at least the '70s. Outstanding human-made ambient works have been made since then, which people love and continue to listen to. If this draws some attention away from 'actual musicians', then I'm sure they will survive.
Our society is currently suffocating in an endless torrent of information and pointless products. Just because you aren't interested in thinking about the costs of things doesn't mean they don't have costs. So yes, I want to know what the benefit is.
Any thinking predicated on the notion that we should just gorge ourselves on endless amounts of stuff (services, products, information) without asking any questions is faulty at best.