Last year at a demonstration I was attacked by three policemen who thought I was filming them. I told them I was listening to colours, but they thought I was mocking them and tried to pull the camera off my head.
For some reason, this strikes me as particularly awful. Not that police don't want to be filmed; that's predictably repugnant for its own clear reasons. It's that they had no problem ripping a prosthesis off someone before even bothering to try to understand it, just because it looked different. What's next, wrestling old folks to the ground and ripping out their hearing aids because they might be recording devices? Will there be an unwritten "normalcy code" that disabled people will have to follow to avoid assault?
I expected yet another “synesthesia—isn’t it interesting!” article, and was pleasantly surprised. As a synesthete (grapheme→colour and sound→colour), I find that the topic has been done to death, and non-synesthetic writers tend to romanticise it to the point of outright misrepresentation. Anyway, the brain’s peculiar propensity for conflating senses seems to have proved useful for once. Props to this guy for hacking his brain to get around a stroke of bad genetic luck.
I agree that the topic gets written about often, and usually pretty terribly - but because of that I was hoping it was another "synesthesia - isn't it interesting!" article, just a rare good one.
I don't have it myself, but did study music with someone who had it, and since then it's always fascinated me.
Somewhat related: my main research area is actually in sonification (representing data through non-speech sound) - imagine listening to changes in the stock market through changes in pitch, or loudness, or tempo. We can use sonification for the visually impaired, communicating data and patterns in new ways, as this guy has done. But we can also use it to transform how we interact with computers - we can be mobile, multitasking, visually overloaded, and still process data through sonification. IMO a potentially revolutionary technology!
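For anyone curious what the simplest flavour of this looks like, here's a minimal Python sketch of parameter-mapping sonification: each point of a toy data series becomes a short sine tone whose pitch tracks the value. The data series, frequency range, and tone length are all made up for illustration, not taken from any real system.

```python
# A minimal parameter-mapping sonification sketch: each data point
# becomes a short sine tone whose pitch tracks the value.
# All constants here are illustrative assumptions.
import math, struct, wave

RATE = 44100                  # samples per second
TONE_LEN = 0.15               # seconds per data point
LO_HZ, HI_HZ = 220.0, 880.0   # map data onto a two-octave range (A3..A5)

data = [101.2, 101.8, 100.9, 102.5, 103.1, 102.0, 104.4]  # e.g. stock prices
lo, hi = min(data), max(data)

samples = []
for value in data:
    # linear map: data range -> frequency range
    freq = LO_HZ + (value - lo) / (hi - lo) * (HI_HZ - LO_HZ)
    for n in range(int(RATE * TONE_LEN)):
        samples.append(math.sin(2 * math.pi * freq * n / RATE))

# write a mono 16-bit WAV you can actually listen to
with wave.open("sonified.wav", "w") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(RATE)
    f.writeframes(b"".join(
        struct.pack("<h", int(s * 32767 * 0.8)) for s in samples))
```

Real systems layer loudness, tempo, and timbre on top of pitch, but the core idea really is this simple.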
The field was pioneered by Paul Bach-y-Rita (http://en.wikipedia.org/wiki/Paul_Bach-y-Rita) who most notably invented a setup that allowed blind people to "see" via a camera connected to a vibrating grid attached to their backs, effectively substituting visual with haptic input.
In a nutshell, there is nothing intrinsically "visual" about neurons in the visual cortex, nor are neurons in, say, the auditory cortex exclusively tuned to sound; the brain is plastic enough to "make sense" of a new type of input signal, which typically takes a couple of weeks.
My co-founder Peter König at EyeQuant.com - a neuroscience professor at the University of Osnabrueck - is working on similar projects with his feelspace group, where they created a compass belt that vibrates on the side facing north, taking sensory substitution a step further by effectively creating a new sensory modality of direction (Wired article: http://www.wired.com/wired/archive/15.04/esp.html)
I'm curious whether, for blind people, the back-stimulation system would start to produce visuals in a way similar to normal sight, much as the ultrasound used by bats might allow them to "see."
Also, I really want to get one of those compass belts. It seems like an incredible experience. I wonder how it feels to take it off, though; losing a sense can't be the nicest experience.
This is some really cool tech. But while light frequencies can obviously be translated into other media we can perceive, "color" as such always comes attached with a ton of spatial information, so I wonder how well the eyeborg conveys that? It seems like this would feel like being extremely nearsighted, which is suggested when he mentions getting close to people's faces when doing portraits. I also wonder how much of this is a constraint of the technology, and how much is the limits of our sense perception? For example, if the device were able to encode arbitrarily specific spatial information, could one train oneself to instantly distinguish among hundreds of unique, simultaneous sounds (as we do with sight), or would the experience always be a din?
I have fully working eyes, but the concept of self-induced synesthesia is interesting, especially for the purpose of gaining extra-human senses (even if they're not that useful in practice). Of course, if your eyes can already see color, glasses with a screen filter might be more efficient.
More sensitive EM field detection could be vaguely useful, but it would probably require a bit of pre-processing (e.g. scaling the entire frequency range of the currently used EM spectrum into a range we can hear, see or feel) and maybe some protocol-specific hardware (decoding radio, video, wireless etc.).
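Just to make the pre-processing idea concrete, here's a tiny sketch of one way it could work: logarithmically compressing a wide slice of the radio spectrum into the audible band, so relative frequency ratios are preserved. The band edges are my own illustrative assumptions.

```python
# A hypothetical pre-processing step for an "EM sense": logarithmically
# compress a wide slice of the radio spectrum into the audible band.
# The band edges below are illustrative assumptions.
import math

EM_LO, EM_HI = 3e3, 3e11        # ~3 kHz .. ~300 GHz (commonly used radio)
AUD_LO, AUD_HI = 20.0, 20000.0  # nominal human hearing range

def em_to_audible(em_hz: float) -> float:
    """Map an EM frequency to an audible one, preserving relative ratios."""
    t = math.log(em_hz / EM_LO) / math.log(EM_HI / EM_LO)  # 0..1 in log space
    return AUD_LO * (AUD_HI / AUD_LO) ** t

print(em_to_audible(100e6))  # FM radio (~100 MHz) -> roughly 1 kHz tone
print(em_to_audible(2.4e9))  # Wi-Fi (2.4 GHz) -> higher-pitched tone
```

A log map seems more natural than a linear one here, since both hearing and the EM spectrum span many orders of magnitude.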
Other interesting candidates for extra-human senses beyond just increasing the range and sensitivity of existing senses would be EM/light polarization (insects can see polarization in the sky, so they can navigate by the sun's direction even when it's hidden by clouds) and magnetic field direction (which exists in bacteria, invertebrates, and birds, and may exist in some mammals.)
Is the color->note map arbitrary or is there some logic behind it? I imagine it greatly affects the associations (including emotions) he has built up over the years.
>Is the color->note map arbitrary or is there some logic behind it?
One way of doing this would be to map from the electromagnetic wavelength of the color to a corresponding audible frequency.
The audible range is something like 32 to 32768 Hz. Assuming a speed of sound in air of 343 m/s, this translates to a wavelength range of about 0.0105 to 10.72 m, which can then be mapped onto the visible spectrum of 390 to 750 nm.
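As a rough sketch of that mapping (same constants as above, with the linear wavelength-to-wavelength correspondence spelled out; a log map would preserve octave ratios better, but this follows the scheme as described):

```python
# Map the visible wavelength range (390-750 nm) linearly onto the
# audible sound wavelength range, then convert to a frequency via
# the speed of sound. Constants are the ones from the comment above.
SPEED_OF_SOUND = 343.0                       # m/s in air
AUDIBLE_LO_HZ, AUDIBLE_HI_HZ = 32.0, 32768.0
WL_SHORT = SPEED_OF_SOUND / AUDIBLE_HI_HZ    # ~0.0105 m
WL_LONG = SPEED_OF_SOUND / AUDIBLE_LO_HZ     # ~10.72 m
VISIBLE_LO_NM, VISIBLE_HI_NM = 390.0, 750.0

def colour_to_frequency(light_nm: float) -> float:
    """Map a visible-light wavelength (nm) to an audible frequency (Hz)."""
    t = (light_nm - VISIBLE_LO_NM) / (VISIBLE_HI_NM - VISIBLE_LO_NM)
    sound_wl = WL_SHORT + t * (WL_LONG - WL_SHORT)  # metres
    return SPEED_OF_SOUND / sound_wl

# violet (~400 nm) comes out high-pitched, red (~700 nm) low-pitched
for nm in (400, 550, 700):
    print(nm, "nm ->", round(colour_to_frequency(nm), 1), "Hz")
```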
Your ears decode multiple frequencies at the same time, so nothing limits things to a single color at a time.
But this is for someone who literally could not perceive color at all yet can still see. If he waves it around, he can probably quickly tell that the wall is blue or white and then focus on extracting details from things he finds interesting.
Taking on board what you said, a system could match sound volume with direction to "see" at a higher resolution.
I.e., it would superimpose all the colors in range, with those closer to the center given higher amplitude, so the colors directly ahead would be played louder and peripheral colors played more quietly.
Or perhaps this is how it works already....
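If it doesn't, here's a rough sketch of what that centre-weighted mixing could look like: every pixel in the field of view contributes a tone, with loudness falling off with distance from the centre of the frame (a crude software "fovea"). The Gaussian falloff and its width are arbitrary choices for illustration, not anything from the article.

```python
# Centre-weighted loudness sketch: weight each pixel's tone amplitude
# by its distance from the image centre. Gaussian falloff and sigma
# are illustrative assumptions.
import math

def pixel_tone_weights(width: int, height: int, sigma: float = 0.25) -> dict:
    """Return per-pixel loudness weights, 1.0 at the centre, ~0 at corners."""
    cx, cy = (width - 1) / 2, (height - 1) / 2
    weights = {}
    for y in range(height):
        for x in range(width):
            # normalised distance from centre (0 at centre, ~0.7 at corners)
            d = math.hypot((x - cx) / width, (y - cy) / height)
            weights[(x, y)] = math.exp(-(d / sigma) ** 2)
    return weights

w = pixel_tone_weights(5, 5)
print(round(w[(2, 2)], 3))  # centre pixel: full volume (1.0)
print(round(w[(0, 0)], 3))  # corner pixel: much quieter
```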
Roughly how many different notes could you hear at a time?
Aphex Twin does have synesthesia, so there is at least logic to thinking that a BBC article on the subject could be about him, though maybe not that it would be.
Definitely going to like. Inadvertently said the opposite of what I meant.
I hadn't considered artificial melding of the senses like this. It came out of left field, so the name of the first (only?) Brit I know of with synesthesia popped into my head.