I was just thinking this the other day. What form do you think this would take?
I have only a fuzzy understanding of how this works, but I was thinking you could run a bunch of .wav samples through Web Audio to extract their waveform data, save it as arrays, and then synthesize the sounds from a single js file (rather than lugging around the .wav files themselves)?
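Very roughly, I imagine something like the sketch below (untested, and the extractSamples/playSample names are just for illustration): decode each .wav offline with decodeAudioData to get its raw channel data, dump those arrays into a generated js file, and at runtime rebuild an AudioBuffer from them and play it back.

```js
// Rough sketch: decode a .wav into plain sample arrays (offline step),
// then rebuild and play an AudioBuffer from those arrays (runtime step).
const ctx = new AudioContext();

async function extractSamples(url) {
  const response = await fetch(url);            // grab the .wav
  const encoded = await response.arrayBuffer();
  const buffer = await ctx.decodeAudioData(encoded);
  // One Float32Array per channel -- this is the "wave shape"
  return {
    sampleRate: buffer.sampleRate,
    channels: Array.from({ length: buffer.numberOfChannels },
      (_, i) => Array.from(buffer.getChannelData(i))),
  };
}

// Later, from the generated js file: no .wav needed any more.
function playSample(data) {
  const buffer = ctx.createBuffer(
    data.channels.length, data.channels[0].length, data.sampleRate);
  data.channels.forEach((ch, i) => buffer.copyToChannel(Float32Array.from(ch), i));
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start();
}
```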
Well, I was thinking more along the lines of providing a service that takes care of all the note-sample mapping for you. You could either specify the instruments/effects you want via script-tag URL params, or perhaps through some kind of loading library (Google Web Fonts offers both of these methods). The bottom line is that you would be provided with patchable Instrument and Effect objects that expose a simple interface for triggering/releasing notes and modifying relevant parameters.
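To make that concrete, here's a purely hypothetical sketch of the kind of interface I mean: the Instrument/Effect constructors, the triggerAttack/triggerRelease methods, and the loader URL are all invented for illustration.

```js
// Everything below is hypothetical -- just illustrating the shape of the API.
// The service would be pulled in much like Google Web Fonts, e.g.:
//   <script src="https://example.com/loader.js?instruments=piano&effects=reverb"></script>

const context = new AudioContext();

const piano  = new Instrument('piano');               // note-sample mapping handled for you
const reverb = new Effect('reverb', { decay: 2.5 });  // an effect with tweakable params

// Patchable: instruments and effects chain together like audio nodes.
piano.connect(reverb);
reverb.connect(context.destination);

// Simple trigger/release interface for notes.
piano.triggerAttack('C4');
setTimeout(() => piano.triggerRelease('C4'), 500);

// Modify relevant parameters on the fly.
reverb.set('wet', 0.4);
```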
If this is possible, it would be awesome.