Hacker News
Can't you just turn up the volume? (medium.com/amp)
545 points by varunsrin on Jan 7, 2015 | 152 comments



This is nifty.

My father has hearing loss and it's bad enough that not only does he use hearing aids, he is constantly turning them up until they feedback and start ringing. He doesn't hear the ringing, but I do, and I have some hearing damage of my own. And he wonders why his hearing aids go through batteries so fast.

Their description of how hearing loss works gives me some ideas on how I can help my father manage his hearing loss better than just constantly buying batteries.

However, I do have one complaint with the article, and that's their (mis)use of terminology, specifically "dynamic range". Dynamic range is not, as they claim, the range of frequencies one can hear, from lowest to highest (e.g. 20Hz-20kHz). That's bandwidth. Dynamic range is the ratio of the loudest to the quietest sounds possible, often expressed in dB.[1]

For example, as they mention, human hearing has about 120dB of dynamic range. An audio CD can encode a dynamic range of 96dB. The 24-bit files professional audio studios work with can represent up to 144dB of dynamic range.
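
If you want to sanity-check those figures: linear PCM gives roughly 6.02 dB of dynamic range per bit (20*log10(2) per bit, ignoring the extra ~1.76 dB term you get when measuring against a full-scale sine). A quick sketch:

    import math

    def pcm_dynamic_range_db(bits):
        # Ratio of full-scale amplitude to one quantization step, in dB:
        # 20 * log10(2**bits), i.e. about 6.02 dB per bit.
        return 20 * math.log10(2 ** bits)

    print(round(pcm_dynamic_range_db(16), 1))  # ~96.3 dB (audio CD)
    print(round(pcm_dynamic_range_db(24), 1))  # ~144.5 dB (24-bit studio files)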

Perhaps it's a pedantic distinction, but using already existing terms for what you mean to say is less likely to cause confusion than misusing one that means something else.

1. https://en.wikipedia.org/wiki/Dynamic_range


Not a pedantic distinction at all - on a second read, I completely see the confusion, and I'm going to reword the article to separate the concepts of dynamic range (which is just a dB value) and dynamic ranges across a frequency spectrum more clearly.


You're not being pedantic at all. It's a fairly egregious error for an article about hearing.

It would be like an article about a camera talking about its fast f/1.4 shutter speed.


Curiously, that sort of language is used by camera people. Aperture and shutter speed are not the same thing, but they are inextricably linked to each other (in a way that dynamic range (in the amplitude sense) and frequency response/range are not nearly so linked). In a "camera article", the phrase " ... its fast f/1.4 shutter speed." would probably be in the context of saying something about one or more of the sensor's size, sensitivity, or noise-floor.

http://en.wikipedia.org/wiki/Lens_speed

"Lens speed refers to the maximum aperture diameter, or minimum f-number, of a photographic lens. A lens with a larger maximum aperture (that is, a smaller minimum f-number) is called a "fast lens" because it delivers more light intensity (illuminance) to the focal plane, achieving the same exposure with a faster shutter speed."


Sorry to disagree, but a knowledgeable camera person would never talk about a "fast f/1.4 shutter speed". A camera article, review, or blog post that said that would be loudly criticized in the comments.

The phrase just doesn't make sense - that's why I chose it as an analogy. It especially doesn't make sense as a figure of speech referring to some characteristic of the sensor.

A more sensitive and less noisy sensor may let you use a higher ISO setting and therefore allow either a faster shutter speed or a smaller aperture for the same overall exposure.

A lens with a wider aperture would allow you to use a faster shutter speed or a lower ISO with the same exposure.

Similarly, a slow shutter speed would allow you to use a smaller aperture for more depth of field, or a lower ISO setting for less noise. Or a fast shutter speed would allow you to use a wider aperture to blur the background. Again, all with the same overall exposure.

Aperture, shutter speed, and ISO setting are closely linked - changing any one of them will affect your overall exposure, or you can change two or all three of them in combination to get the same exposure while altering other characteristics like motion blur or depth of field.

But they are three very different things, and even in casual conversation someone who knows cameras would not mix them up.

BTW this goes way back to the film days. Shutter speed and aperture work the same now as they did then, the only difference is that you changed ISO rating by buying a new roll of film instead of turning a dial.


I think you are missing a characteristic of lenses called the f-ratio, which is sometimes described as fast or slow. This is not specific to cameras; it is a general term used in optics. The f-ratio is the ratio of the focal length to the aperture diameter. Even telescope lenses and mirrors are described similarly. For example, a telescope with an f/4 mirror can be described as a fast telescope.

Not only that, a lot of the terms have been misappropriated in camera lingo. The camera folks express aperture as an f-ratio, which is extremely confusing.


Thank you, you are absolutely right! I should have said "f-ratio" everywhere I used "aperture".

I'll leave my previous comment in its incorrect state in the hope that people will see yours to correct and clarify it. :-)

> The camera folks express aperture as an f-ratio, which is extremely confusing.

Isn't it the other way around? I think the mistake we self-proclaimed "knowledgeable" camera folks make is using the term "aperture" where we really should say f-ratio.

For example, I have two Olympus Micro Four Thirds lenses that are both f/1.8: a 45mm and a 75mm. Looking at these lenses it's pretty obvious that the f/1.8 is an f-ratio: the front element of the 75mm is much wider than the 45mm, as you'd expect.

So thanks to your comment, I will endeavor to use "f-ratio" where I've been misusing "aperture".


While the phrase "fast f/1.4 shutter speed" doesn't make perfect sense, it's not uncommon to call a f/1.4 lens a "fast" lens.


Indeed that is true. Some of my lenses are much faster than others! A fast lens does let me use a fast shutter speed, and as Wikipedia describes, that's where the phrases "fast lens", "slow lens", and "lens speed" came from.

But "fast f/1.4 shutter speed" doesn't just not make perfect sense, it doesn't make any sense. That's why I chose it as an analogy. The phrase doesn't mention lenses at all, it's talking about the shutter. And shutter speeds are measured in seconds or fractions of a second, not f-stops.


It seems to me that a simple way to eliminate the feedback issue is to separate the microphone from the speaker. Hearing loss runs in my family, so I've watched my relatives struggle with hearing aids, and I've decided that when it's time for me:

1. I will build it myself and figure out the details, to hell with the audiologists and their racket.

2. I will carry the microphone and amplifier in my shirt pocket. No more feedback.

Ironically, with the advent of personal electronics, everybody wears some sort of gizmo on their body, so I think we could just persuade the elderly that hearing aids don't need to be invisible any more.


I'm not a hearing aid expert, but the microphone and earpiece may currently be tightly coupled because they have to be. That is, human ears are very sensitive to phase shifts and time delays. Placing the microphone somewhere other than your ear may throw off your hearing somewhat (possibly your ability to locate where a sound originated).


I can vouch for this. I used to have an FM system - this is a mic that transmits on a very specific FM frequency to a receiver that's paired with my hearing aids. I loved it because I could easily use it to hear people near me in loud rooms, but it sucked in meeting rooms. The reason why is that I would hear someone speaking through my hearing aids and through the mic - at slightly different times. This is the phase shift Dwolb speaks of. This reduced clarity significantly.

That said, if analog31 wants to wear the mic on his shirt, that distance is actually good enough for most 1 to 1 conversations. Just understand that you're trading off "spatial perception" (no left/right balance if only one mic).


Shortly after losing the hearing in one ear, I was lying on a beach on the Mediterranean. Holidays, sun, good times. Then I heard a strong buzzing sound; it had to be a gigantic flying insect, so close to my head! I jumped off the towel and ran a few meters flapping my arms around my head. The crowded beach and my own wife were perplexed at me, and my arm-flapping strategy did nothing for the sound, which was still just as loud and close.

Then my brain recalculated. The sound stopped and started again, and the pattern became obvious: for a good 5-7 seconds I had confused the siren of a boat half a mile away with an insect around my head.

From first-hand experience: it's amazing what the brain does triangulating the input from the two ears, and it is indeed very sensitive to the quality of those inputs. On the other hand, the brain is also great at adapting to new, lower-quality sensors. I don't see that same scene happening now, after several years of one-sided hearing.

My wife still recreates that moment from time to time.


Many modern hearing aids already have ways to do what he's looking for.

Bluetooth receivers for television or phones, for example. In other cases, I've seen lapel microphones meant to be worn by the person speaking.

If analog wants to build their own for the sake of the experience, or for cost, then I'm certainly not going to discourage them. But if all they want is the solution to that particular problem, it already exists and is being sold.


As a hearing aid user: my spatial awareness (as provided by sound information) is pretty much borked anyway. If I can't see where a sound is coming from I have no way of knowing what direction it's from or how far off it is. Even a small amount of hearing loss can cause this problem.


Another advantage of having the microphones (yes, there can be more than one!) near the ear itself is for DSP. Many hearing aids use multiple microphones to selectively amplify sounds from the direction you are facing. I think this would work best with microphones near the ear, or at least in a known position.
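
For the curious, the simplest version of that directional trick is a delay-and-sum beamformer. A rough two-mic sketch (the spacing, sample rate and whole-sample delay are illustrative assumptions, not any particular hearing aid's algorithm):

    import numpy as np

    def delay_and_sum(front, rear, mic_spacing_m=0.01, fs=48000, c=343.0):
        # A sound from straight ahead reaches the front mic first and the rear
        # mic roughly mic_spacing_m / c seconds later. Delaying the front
        # signal by that amount lines the two up, so frontal sounds add
        # coherently while off-axis sounds partially cancel. (Real devices
        # use fractional delays; whole-sample rounding here is just for show.)
        delay = int(round(fs * mic_spacing_m / c))
        front_delayed = np.concatenate([np.zeros(delay), front])[:len(rear)]
        return 0.5 * (front_delayed + rear)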


definitely. binaural hearing is very important. a single microphone in your shirt pocket would be a bad idea and not even close to the quality of hearing you can get with current hearing aids.


A bigger issue with separating the microphone and amplifier is cosmetic: hearing aids sadly have a stigma attached to them and people are more likely to wear them if they are invisible. There is a reason why hearing aids are produced in hair colors.

Also, I found that the best way to get rid of feedback was to get an ear mold rather than using an "open-fit" mold. This is a clear separation between the speaker and the microphone and pretty much solves the problem in my experience.

EDIT: I noticed late that you addressed the cosmetic issue in your post. I don't see the elderly changing but our generation just might.


"I don't see the elderly changing but our generation just might."

I'm youngish (mid-30s) and recently had my hearing aids replaced, and I realized I had a strong preference to stick with fairly visible behind-the-ear units rather than something more "discreet". I want them to be visible so that people I'm interacting with will be more sympathetic about repeating themselves and may make a (possibly unconscious) effort to speak more clearly.

The line of thinking that got me over being self-conscious was "Lots of people walk around with assistive devices for their vision... why should I be embarrassed about the same thing for my hearing?"


If I notice that somebody has hearing aids, I make sure that they can see my lips when I'm talking.


My first experience with a close friend who had partial hearing loss led me to realise how much lip reading helped her. If she wasn't looking at your face, her responses would often be nonsensical.

Also, I work in a noisy environment where hearing protection is mandatory, and I find I have less trouble understanding people if I can see their face.


That's very good reasoning. If a person with visual impairment walks around with a white stick, it's obvious and people normally cater for their needs. It shouldn't be any different with hearing.

What a good point.

I have an elderly friend who has suffered complete hearing loss in one ear after an infection and the other ear can only detect very very low frequencies, and he's constantly saying "PARDON?". It must be very difficult to hear ANYTHING going on, other than the rumble of lorries and buses. I wonder if they could put a pitch-shifting circuit in his hearing aid to shift sounds up/down so that they fall within his hearing range, whilst not shifting frequencies already in that range. That would help significantly, surely?

Just thinking out loud.


> I wonder if they could put a pitch-shifting circuit in his hearing aid to shift sounds up/down so that they fall within his hearing range, whilst not shifting frequencies already in that range. That would help significantly, surely?

If you read the article, you'll see that's more or less what most modern hearing aids do, via a technique called multi-band compression.

Edit: Actually, here anigbrowl, an audio engineer, states that this is not how multi-band compression works: https://news.ycombinator.com/item?id=8854142

Assuming he is correct, my above statement may well be wrong.


Multiband compression works by splitting the incoming audio into different bands, much like the bass/mid/treble controls on an EQ each work only on the bass/mid/treble parts of the frequency range. Compression is then applied to each band separately, and the bands are summed back together.

There is no pitch shifting in multiband compression. Pitch shifting involves moving the frequency up or down by a number of cents, semitones, octaves, etc. It's the effect used to get the "chipmunk" voice (high-pitched and squeaky), where a normal voice is fed into a pitch shifter and shifted up or down. It is also how harmonisers work: they work out the frequency you're singing at and shift it up seven notes (or an arbitrary amount) so you can sing and get a harmony of yourself.
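
A bare-bones sketch of that signal flow, in case it helps (band edges, ratios and the instantaneous gain law are made-up simplifications; a real compressor adds attack/release smoothing):

    import numpy as np
    from scipy.signal import butter, sosfilt

    def compress(x, threshold_db=-40.0, ratio=4.0):
        # Crude static compressor: attenuate anything above the threshold.
        level_db = 20 * np.log10(np.abs(x) + 1e-12)
        over = np.maximum(level_db - threshold_db, 0.0)
        return x * 10 ** (-over * (1.0 - 1.0 / ratio) / 20.0)

    def multiband_compress(x, fs=44100, edges=(300.0, 3000.0)):
        # Split into low / mid / high bands, compress each independently,
        # then sum them back together -- no pitch shifting anywhere.
        lo = sosfilt(butter(4, edges[0], 'lowpass', fs=fs, output='sos'), x)
        mid = sosfilt(butter(4, edges, 'bandpass', fs=fs, output='sos'), x)
        hi = sosfilt(butter(4, edges[1], 'highpass', fs=fs, output='sos'), x)
        return compress(lo) + compress(mid, ratio=2.0) + compress(hi, ratio=6.0)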


> fairly visible behind-the-ear units

I've had hearing aids like that for almost a year, and "fairly visible" is a stretch; they're pretty unobtrusive and don't stand out.


You're right, they're still fairly subtle and probably not the first thing someone would notice about me, but if I turn my head slightly, you're bound to notice my ear moulds/tube.


True, I do wear glasses, so they kind of blend in with those.


I too wear BTE's with glasses and most people are surprised when I tell them I wear hearing aids. They cannot see them.

This is especially true of modern "Receiver In The Ear" (RITE) models where instead of a tube carrying sound, you have a very thin wire going into your ear canal.


> hearing aids sadly have a stigma attached to them

It's probably worth distinguishing between two kinds of phenomena that might be described as carrying a stigma:

- Something might lead other people to mock or otherwise denigrate you for exhibiting it. Being fat is a good example here; fat people get a lot of messaging from society that they're worse people for being fat.

- Something might carry no real significance to the rest of society while still being viewed, by the individual, as painfully embarrassing. There's a traditional view that women don't like to wear glasses because they think the glasses ruin their looks. I don't know how well that currently corresponds to reality; I've known one girl who really hated her glasses for that reason and another who, not needing glasses of her own, liked to take other people's and wear them -- but that's the prototype of a "category two" stigma: a woman who hates wearing her glasses even though no one around her sees anything wrong with them.

I suspect that hearing aids are firmly within the second category, which means getting people to wear them "openly" should be doable.


"I suspect that hearing aids are firmly within the second category, which means getting people to wear them "openly" should be doable."

You suspect wrongly. I saw attitudes toward my father change when he wore one, ranging from outright verbal abuse to assumptions of stupidity & senility.


Maybe it's related to age? I've been wearing "behind the ears" aids since I was 7 and I never sensed any perception like that.

(I have no idea what is the correct term for "behind the ears", I hope it's understandable.)


17 years ago, I worked for a hearing aid manufacturer. The common terms in use there were BTE and ITE, for "behind the ear" and "in the ear." Frankly, I thought the initialisms were poorly conceived. ITE is three syllables, same as "in the ear," and less meaningful for the uninitiated. BTE only saves you one syllable, again at the cost of meaningfulness. But either which way, your terminology is both understandable and correct.

Off-topic: While I was there, they asked employees to submit ideas for a new hearing aid marketing slogan, with the incentive of a free vacation to Vegas going to the person who submitted the one they used. For some reason, I did not win the vacation with my suggestion: "Stick It In Your Ear!"


It would be pretty interesting to see people assume that a ten-year-old suffered from senility because he was wearing a hearing aid. By definition, it only applies to the old.


Really? Who from? That's terrible.


Just people.

At the low end: when eating out in restaurants, occasionally having wait staff ignore him and ask other folk at the table "what would he like?", or people assuming that he couldn't hear and talking about him. At the high end: having a guy shouting "deaf fuck" at him repeatedly on the street for no obvious reason.

I'm not trying to say that this happened every day — especially the outright insults. But it was enough to be noticeable.

I suspect, as @hibbelig commented, age had something to do with it.


While that sounds like a good idea at first, I think it makes sense to have the microphones at least near the ears.

The frequency content of a sound is directional, and more so as the frequency increases. As a test, listen for a difference in the highs with your (computer/stereo/home theater/whatever) speakers first aligned with your ears, and then not. Due to this directionality, a microphone in your shirt pocket would pick up a sound differently than one near your ear.

The brain does some very fancy and clever tricks based on the differences in the timing and phase of a sound as it arrives at both ears to determine things like relative position and distance. Having the hearing aid mics receive sound similarly to your ears should make it easier for the brain to continue to do these nifty tricks.
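
As a toy illustration of the timing part: you can estimate the interaural time difference by cross-correlating the two ear signals and turning the best-fit lag into an angle (the head width and the sine model below are rough assumptions, not how the brain or any hearing aid actually does it):

    import numpy as np

    def estimate_direction_deg(left, right, fs=48000, head_width_m=0.18, c=343.0):
        # Lag (in samples) at which the two ear signals line up best.
        corr = np.correlate(left, right, mode='full')
        lag = np.argmax(corr) - (len(right) - 1)
        itd = lag / fs                              # interaural time difference, s
        # Crude model: itd ~= (head_width / c) * sin(angle); clip for safety.
        s = np.clip(itd * c / head_width_m, -1.0, 1.0)
        return np.degrees(np.arcsin(s))             # 0 degrees = straight ahead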

Otherwise, yeah, put the amp and other circuitry in your shirt pocket or wherever else is convenient. It should be a lot easier and cheaper to fit an amplifier and multi-band compressor in your shirt pocket, than in a tiny sliver of plastic that has to fit behind the earlobe.


You can actually already get body units for hearing aids. Even better, you can get radio-aids, where the microphone is elsewhere. This is particularly useful for students in a classroom, for example, where the teacher may wear the microphone directly.


> 2. I will carry the microphone and amplifier in my shirt pocket. No more feedback.

This is how hearing aids used to be:

http://www.hearingaidmuseum.com/gallery/Transistor%20(Body)/...


>It seems to me that a simple way to eliminate the feedback issue is to separate the microphone from the speaker.

That won't necessarily fix the problem. Feedback comes from bad electronic/DSP design. Physical mic/speaker positioning makes certain solutions harder, but in practical designs it's not (usually) the limiting factor.

Phones of all kinds, Skype, etc. include adaptive echo/feedback cancellation already. It's a well-understood technology - adaptive cancellation has been used since the 1960s - although the fact that it exists is maybe not as well known as it could be.

(Protip - if Skype starts feeding back, you can often reset the adaptive filter by clapping once loudly.)
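
For anyone curious what that adaptive cancellation looks like: model the speaker-to-mic path with an adaptive FIR filter, predict the echo from what was just played out, and subtract it. A minimal normalised-LMS sketch (filter length and step size are arbitrary choices, not tuned values):

    import numpy as np

    def nlms_echo_cancel(mic, playback, taps=256, mu=0.1, eps=1e-8):
        # mic: what the microphone picked up (wanted signal + echo of playback)
        # playback: what the loudspeaker emitted; the echo is modelled as
        # playback convolved with an unknown speaker/room response.
        w = np.zeros(taps)                     # adaptive estimate of that response
        x = np.zeros(taps)                     # most recent playback samples
        out = np.zeros(len(mic))
        for n in range(len(mic)):
            x = np.roll(x, 1)
            x[0] = playback[n]
            e = mic[n] - w @ x                 # echo-cancelled output (and error)
            out[n] = e
            w += (mu / (eps + x @ x)) * e * x  # normalised LMS weight update
        return out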

As for hearing aids - the first papers mentioning multiband compression date from the 1980s, so there's nothing new here, except maybe a lack of research.

My mother's hearing is pretty bad now, and I had to talk a professional audiologist through tuning her aids for maximum intelligibility. I couldn't fault his personal skills, but the consonant/vowel heuristic he'd learned in training was oversimplified and not giving good results.

He'd basically set up a phone curve filter, but in fact you need some low-mid for good intelligibility, especially on male voices. Once he dialed that back in everyone was happy.

Thing is, we needed three one hour sessions to get it in the ballpark. The real problem with aids isn't the technology, it's the fact that setting up a good prescription is really difficult and time-consuming - even more so for elderly people who may have problems describing what they're hearing.


"Feedback comes from bad electronic/DSP design"

I see that you mention an adaptive filter, and that might be a good solution; however, you are limited by the CPU power of the system and other constraints (like having to work continuously).

Bonus points if you can tell me what the poles and zeros of that system need to be to avoid feedback.


> other constraints (like having to work continuously)

I'd vote that next to no delay is a stronger constraint. With Skype, you're already delayed, so adding a little processing isn't an issue. With a hearing aid, if there is too much delay it becomes off-putting and, at times, dangerous.

But yeah, CPU and Power constraints in something about the size of the first joint of my pinky (including battery) are tight and very limiting.


> Feedback comes from bad electronic/DSP design.

Absolutely wrong. Feedback comes from a loop gain of > 1. Period. Microphone and speaker proximity have everything to do with it.
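
A toy sanity check of that statement (the numbers are made up): the loop can only howl if the gain around the loop -- amplifier gain plus the (negative) attenuation of the leak from receiver back to mic -- reaches 0 dB at some frequency.

    def can_howl(amp_gain_db, leak_attenuation_db):
        # leak_attenuation_db is negative: how much the sound leaking from the
        # receiver back into the microphone is attenuated along the way.
        return amp_gain_db + leak_attenuation_db >= 0.0   # loop gain >= 1

    print(can_howl(35, -45))  # False: 10 dB of margin
    print(can_howl(50, -45))  # True: turning the aid up ate the margin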


"My father has hearing loss and it's bad enough that not only does he use hearing aids, he is constantly turning them up until they feedback and start ringing."

This may also be a sign that the hearing aid wasn't fitted properly — or the shape of your father's ear has changed a bit since it was fitted. Poor fit makes it much more likely for this to happen (coz speaker and mike have a more open channel between 'em, and more ambient noise gets in so folk turn 'em up more). (My dad had similar problems until he got a new ear canal mould taken.)

It may also be a sign that your dad's hearing loss has changed since the hearing aid was chosen and he needs it adjusting, or a different type.

Feedback is usually a sign that something needs fixing ;-)


For a while, I had the problem that I got feedback depending on the temperature: Going out into the winter cold was enough to change the shape of the hearing aid versus my ear so that the fit was not tight enough anymore.

What I mean to say is that hearing aids are a piece of technology that is tuned (pimped?) so much that there is very little buffer.

Today, my new hearing aids detect feedback and compensate, but of course that will eliminate sounds I need to hear, as well.


Did they change this after your comment?

>Dynamic range is the difference between the loudest and quietest sound you can hear.


I'm completely deaf in my left ear, and I wear a hearing aid in my right ear. What's really cool is that my hearing aid has Bluetooth, and starting with the iPhone 5S, Apple supports direct-to-hearing-aid technology. That means when I get a phone call, it streams directly from the phone to my hearing aid -- not out of a speaker, directly to my ear. Very, very cool.

Here's more info: https://www.apple.com/accessibility/ios/hearing-aids/

That said, if I had an older hearing aid or didn't have this one, I'd definitely use this app. They are spot on about hearing loss and how it's more than just a volume thing. In fact, most of hearing loss is really an understanding thing. I can hear your voice just fine -- I just can't hear 100% of it, so the words don't make sense to me right away.


I once went to a 3h long blackboard talk given by James Hudspeth [1] on the physics of hearing.

It was one of the most fascinating things I've ever heard: it turns out that not only does the cochlea perform a Fourier transform of the sounds we're hearing, but it can also selectively amplify some frequencies by vibrating the very same hairs that detect the sounds.

Sometimes the mechanism that amplifies sounds goes wrong; that's why old dogs sometimes seem to emit a high-pitched sound from their ears, and it's also the cause of some forms of tinnitus.

If you have some time to kill, do go read the Wikipedia pages on the cochlea and hair cells; it's really fascinating stuff!

[1] http://www.rockefeller.edu/research/faculty/labheads/JamesHu...


I have always wondered if that high pitched sound was real! No one else heard it, and it makes so little sense I always figured it was my hearing.


Do you hear that high-pitched sound all the time? That's tinnitus.

Or do you only hear it from dogs?


It turns out I have really decent hearing. So it's unlikely to be tinnitus; at worst I have a sort of hearing after-image when a tonal frequency suddenly cuts out (and now I'm guessing that's an effect of the ear actively filtering - I can't wait to watch that video).

But yeah, I did hear it on the dog. It's so anomalous to hear a high pitched noise coming from an ear that I was willing to question my own hearing/sanity. The best I could manage to guess was that there was some sort of small gas pocket leaking, which made a little bit of sense since my dog (samoyed) had a bubble on his ear (that eventually gave him a sadly adorable floppy ear when it eventually drained - prolly an aural hematoma); the bump never actually seemed to deflate due to the noise, though. At the time I was an early teen just sort of beginning to learn analytical techniques, so that reasoning was the best I could muster. These days I tend to trust/understand my senses more.


Interesting article. Thanks for posting, varunsrin! I've been following development of SoundFocus for awhile.

I'm profoundly deaf. This is a technical term classifying the degree of hearing loss; to give you a sense of where this fits, the typical classification range is mild, moderate, severe, profound, total.

Between a combination of hearing aids and lip-reading, I've done a reasonable job of integrating into a hearing society. Not perfect, but ok.

I've often wished for a different approach to correcting hearing. It crystallized for me after I read this article by Jon Udell: http://blog.jonudell.net/2014/12/09/why-shouting-wont-help-y...

In that article, what Jon found was that his mom would hear best if you spoke at a low to medium volume close to her ear - this worked better than any shouting at a greater distance could accomplish.

And it should be easy for you to simulate - get a friend to talk to you from 50' away - you can still hear them, but there's some detail loss that wouldn't happen if they're 3' away.

I still benefit - a lot - from MBC, but if someone could come up with a way to make the incoming sound sound as if it were right beside me, man, that would really help me understand people clearly.

One non-technical solution that people use to ensure that deaf people can understand them clearly is to enunciate consonants audibly. An example of this is the word "red" - it becomes "erREDdead". I don't know if there's a name for this, so I can't point you to a page describing how to extra-enunciate all the letters. As useful as it is, people speaking to me like that always makes me feel like I'm dumb, because they sound dumb saying it. Clearly I have issues :-)


I don't think you have issues. I wonder if they sound dumb because of the difference from "normal" speech: the way we typically say words in a daft manner to young children or those learning to speak? We associate that with reduced capabilities (in a sense) because they are children and are still learning. That isn't meant offensively, more that we know the child needs to learn.

But if it helps you hear, I think it's great!

That article you linked to was interesting. Thanks.


My wife is an audiologist, and she and her colleagues found this excellent. One suggestion, if I may (actually one my wife made), is that where you have the SoundCloud files demonstrating MBC, you add one demonstrating what a person with hearing loss would hear, before the one with the MBC.

That way a person can judge the improvement that MBC gives to a person with hearing loss, instead of just judging the reduction of quality to that of a person with perfect hearing.

But again, excellent article!


My mother, as a teenager, listened to her transistor radio all the time; it was a new thing when she was young. She held it up to her ear with the volume turned up very loud. Now she suffers from fairly profound hearing loss, but only in a specific high range; she can hear low bass normally.

If you talk to her and then turn on a tap to get a glass of water, the conversation is over; the fridge motor comes on, conversation over. Any non-verbal sound is noise that obscures all words for her. "What?" is the response to nearly everything anyone says; words have to be repeated twice except in a dead silent room. She listens to the TV at level 20, and it's very draining for everyone around her.

But she won't get a hearing aid! She's 70 years old but refuses to even discuss it. It's odd: if you tell a person who can't see that they may need glasses, it's OK, but if you tell a person who is hard of hearing that they may need a hearing aid, it's like you said the most obnoxious thing you could ever say to anyone.


I bet you'd find a lot of older people also unwilling to get new glasses or a stronger prescription. All the same, the taboo likely exists because it's common for young people to have myopia, but not poor hearing, so it's always OK to have glasses and never OK to need hearing augmentation.

I really hope this taboo goes away as new generations get more and more comfortable with augmenting their senses.


"very draining to everyone" hahaha that's made me chuckle. Thanks! I think you're right about "it's like you said the most obnoxious thing ever" hahaha

Is there a way of highlighting how often she says "what?" like a tally chart? I would try that with my mum to get a message across. She'd likely be deeply offended, but I think the message would get across.


I would have to ask my dad, but when I am there I swear it's every time I say something, except in an incredibly quiet room, so tracking isn't necessary since, from my perspective, it always occurs.

It's probably partly due to hearing loss and partly just habit: I'll say something like "Are you going to go for your walk now or later?", she says "What?", I start to repeat "Are you go..", and she interrupts as if she knew all along what I had said. Ugh! Although context helps in many situations.

I also find she mistakenly interrupts conversations between other people: if they are discussing something, she'll cut in with a new topic, since she can't hear them. She also has a hard time with conversation flow, as on a phone conference call where everyone interrupts each other because they can't follow the flow of the conversation.


For people with hearing loss who run Linux: if you have your audiogram and an idea of how your hearing loss varies by frequency, you can try doing selective boosts by frequency through a PulseAudio filter. I discuss it a bit in https://plus.google.com/103530621949492999968/posts/32qSkcQP...
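
Roughly, the idea is to turn the audiogram's per-frequency threshold losses into per-band boosts before handing them to the equalizer. A toy sketch of just that mapping (the audiogram values here are hypothetical, and the half-gain rule of thumb and 25 dB cap are illustrative choices, not from the linked post):

    # Hypothetical audiogram: hearing loss in dB HL at standard test frequencies.
    audiogram = {250: 5, 500: 10, 1000: 15, 2000: 30, 4000: 45, 8000: 60}

    def band_gains_db(audiogram, fraction=0.5, max_boost_db=25.0):
        # Classic "half-gain" rule of thumb: boost each band by roughly half
        # the measured loss, capped so loud sounds stay tolerable.
        return {f: min(loss * fraction, max_boost_db) for f, loss in audiogram.items()}

    print(band_gains_db(audiogram))
    # {250: 2.5, 500: 5.0, 1000: 7.5, 2000: 15.0, 4000: 22.5, 8000: 25.0}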


Thank you! This is pretty useful for testing on grandma ☺


Nice! Thanks a lot for the link and writing that up!


"Why can't I just turn up the volume on my iPhone?" is something I ask myself everyday and shake my fist toward Cupertino. Seriously, the gain on the phone is severely limited. Try listening to a voice call on speakerphone in even a moderately quiet environment with just a twee bit of ambient noise. It is maddening that I can't get any more volume out of this device without jailbreaking it.


If you're looking for someone to blame, I highly recommend the guy that sued Apple because the iPod could be loud enough to cause hearing damage:

http://www.ft.com/intl/cms/s/2/7bf03be0-94de-11da-9f39-00007...

Apple eventually won and the case was dismissed, but the lower maximum volume followed from that lawsuit.


And, if you're in Europe, you can thank the EU, who limited portable audio players to 150 mV max output at the phone jack and 100 dB (A) peak level with the included headphones.

This renders my iPhone almost useless as an iPod replacement when travelling with big headphones, it's just not enough juice for a train or bus ride (not to mention a flight).


wow, thanks for the info. lawyers - gah. still, no excuse for apple to limit the speakerphone...no one is at risk of hearing damage from a cell phone's speakerphone...you hear me apple?


until they put the speakerphone next to their ear, perhaps


I actually have the opposite problem: the minimum volume on my iPod is too loud. In some contexts (late at night, listening to something while falling asleep) it's slightly too loud, but they don't give you any more gradations between "off" and "slightly too loud".

I find it especially bothersome because on older, non-digital radios I was able to finely tune the volume to somewhere just on the threshold of audible, which is the perfect level for falling asleep to.


Could you get some higher impedance headphones? They would be quieter as the iPod wouldn't be able to drive them so well.
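
Rough numbers, assuming the player is voltage-limited (e.g. the 150 mV EU cap mentioned upthread) and that the two headphones have similar sensitivity per milliwatt, which is a big assumption:

    import math

    def power_mw(v_rms, impedance_ohms):
        return 1000 * v_rms ** 2 / impedance_ohms        # P = V^2 / R

    lo, hi = power_mw(0.15, 16), power_mw(0.15, 300)     # 150 mV into 16 vs 300 ohm
    print(round(lo, 2), round(hi, 3), round(10 * math.log10(lo / hi), 1))
    # ~1.41 mW vs ~0.075 mW: roughly 13 dB quieter into the 300-ohm load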


I had this with my Moto X too. I downloaded an EQ app and lowered all of the dials by an equal amount, and that seems to have worked.


I'm not trying to pull your leg here, but ..

why would you try to use the speaker if there's noise around you? I mean, why wouldn't you - like - put it up against your ear instead?


I do it all the time in mildly noisy environments:

- Impromptu "conference call" with a colleague physically next to you and another one remote.
- Need to be typing to take minutes, retrieve information relevant to the call, etc.
- Too many calls already and arm is tired of holding the phone and listening.


Any scenario where you'd want to use the speaker phone has little to do with noise. Sometimes I want it while driving because I don't have a headset (and don't plan on getting one, since I don't own a car and drive rarely). Or in the shop while drilling, soldering, or whatever. Or outside working on something. It's about using the phone while doing something else, often something that you couldn't do with something blocking one ear or dangling anything from your head. Then of course there's the case where multiple people are listening/speaking through the same phone.


Seriously.. just don't talk while driving. It's dangerous for everyone.


I would love to not talk while driving. I would also like to not listen while driving. In fact, I would like to not drive while driving! That being said, I'm much less concerned about spending a minute talking to a dispatcher than the guy/gal flying down I-95 while breaking up with someone over the phone. (Speaking of which, at least with a phone you can hang up... but some passengers, man...)

Edit: I would also posit that in some cases, a short phone call can actually do a great deal to remove distraction if it is a settleable matter. The conversation you have in your head while driving may be just as bad as the one you'd have on the phone, except that it may dwell longer.


Very true about the conversation in your head. You can sometimes drive great distances and not remember any part of the journey, which is worrying in case you missed dangerous road conditions etc.

This is particularly true if you have a "lot on your plate".

Perhaps putting on an irritating radio station would help?


If you can't multitask enough to talk while driving, you should not be driving in the first place.


All research I've seen finds that talking while driving, even if it's just to a passenger increases accidents by a large amount. Other studies show that people tend to vastly overestimate their multitasking abilities.

So in short: very few people can actually talk safely while driving, and the people that think they can probably can't.


Most interestingly, it seems that people who think they're good multi-taskers are actually the worst at it[1]. They are more likely to be impulsive and more likely to indulge in risk-taking.

1: http://www.npr.org/blogs/health/2013/01/24/170160105/if-you-...


Maybe. The study does show that people that multi-task a lot are bad at multi-tasking. But it says that they didn't find a significant correlation between perceived and actual ability, while also assuming a negative correlation in part of their write-up. They also found that most people rate themselves 'above average', but that's not surprising.


leg pulled. maybe because i'm not the only one on this end of the convo?


I've had to call 911 to report a traffic accident, and can attest that the iPhone cannot be heard over adjacent traffic whizzing by. It just doesn't go loud enough. My current Nokia doesn't either. Since the phone knows you're making an emergency call, I don't see why the volume limiter can't be removed in that case.


> It is maddening that I can't get any more volume out of this device without jailbreaking it.

Use a pocket battery-powered amp and a small portable loudspeaker? That'll get you both more gain and better sound than the tinny built-in speaker.


thanks for the suggestion. i still lament the fact that my phone can't do this out of the box. this is a silly limitation, IMO.


this is kind of like suggesting that an old phone booth is convenient for noise isolation.


I think I actually miss old phone booths, particularly the bright red BT ones you used to get here in the UK. Nice booths, although they always got beat up for no good reason.


While I appreciate the article, having hearing loss is not like losing part of an image, such as not being able to see the bear on the tricycle. It's more like the image is fuzzy, and depending on the degree of loss it might be a bear or it might just be some fuzz:

http://i.imgur.com/vKn7oTf.png

Audio compression, especially when using psychoacoustic principles, helps by lowering the level of the unwanted sounds (e.g. "probably not a human voice", or "not a bear" in this case) and boosting certain frequencies within a person's particular hearing range so they can "see" the image better.


I'd agree the fuzzy analogy is better, since a hearing-impaired person may think they understand when they don't, and sometimes they're totally wrong.

I recall reading that vowels are easier to hear than consonants, or maybe it's vice versa? "Hello, how are you today" may come across as "Hll hw r tdy", which to the hearing-impaired person may seem like "How am I tidy?" or something totally incomprehensible, but their brain makes up something close (incorrectly) by filling in the blanks.

The Monty Python sketch "I'd like to buy a hearing aid" feels like what I go through daily when trying to communicate with my mother.

I showed it to my mother and she thought it was funny. Sometimes, when she thinks she knows what I said but it's not even close, it's just like the sketch.

https://www.youtube.com/watch?v=T7UqhDs8zj4


This article would be great if they replaced their frequency-domain 'dynamic range' terminology with the standard word for it, bandwidth.


Thank you. This was the word I was looking for, but it escaped me when I wrote my own comment.


What’s the solution? Multi-Band Compression (MBC), a technique that’s been used by the $6 billion hearing aid industry to solve this specific problem.

An MBC uses intelligent design instead of a one-size-fits-all method. With the right data about your hearing pattern, it can mash the full sound into your range so that you get all the information you need.

Audio engineer here. That is patently untrue. MBC is a super-useful technique and is indeed helpful for mitigating hearing loss in relatively transparent fashion, but it does not and cannot bring sounds from outside someone's audible hearing range back within it. It will dynamically rebalance incoming audio in inverse proportion to the degree of hearing loss within a set of frequency ranges, but many kinds of sensorineural hearing loss involve the death of cilia cells (the tiny hairs that vibrate at particular frequencies, much like the bins of an FFT), which can result in a total loss of perception at or above certain frequencies.

http://en.wikipedia.org/wiki/Sensorineural_hearing_loss

To 'mash the full sound into your range' requires a technique known as frequency shifting, but that's problematic because it destroys the harmonic relationships of the incoming material and sounds disorienting, at best.
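
For the curious, a frequency shifter moves every spectral component by a fixed number of Hz, e.g. via the analytic signal; that's exactly why harmonics stop lining up (100/200/300 Hz shifted up 50 Hz becomes 150/250/350 Hz, no longer harmonic). A rough sketch:

    import numpy as np
    from scipy.signal import hilbert

    def frequency_shift(x, shift_hz, fs=44100):
        # Single-sideband shift: every component moves by shift_hz. This is
        # not pitch shifting -- harmonic ratios are destroyed, which is why
        # it sounds disorienting.
        t = np.arange(len(x)) / fs
        analytic = hilbert(x)                    # x + j * (Hilbert transform of x)
        return np.real(analytic * np.exp(2j * np.pi * shift_hz * t))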

In any case, I think the illustration of the bear on the tricycle is absurdly simplistic and makes me wonder to what degree the app designers really grasp the underlying concept. A much more appropriate parallel would have been to show an image with a severe Gaussian blur, which more closely parallels the actual experience of hearing loss in terms of both empirical measurement (higher frequencies tend to be more severely attenuated in cases of induced hearing loss) and subjective experience (blurring hinders edge detection, which is analogous to transient detection in audio, and which has a large role in speech intelligibility).

http://en.wikipedia.org/wiki/Gaussian_blur

If you're struggling with hearing loss, then you should really, really consult an audiologist, work out the basis of your hearing loss (which is sometimes as simple as impacted earwax), and work out a treatment strategy. If you're suffering from degenerative hearing loss then listening to overly-compressed music could actually accelerate it, and listening on headphones or earbuds (many of which bias the sound for increased impact) could also contribute to the problem. It's a truism in the pro audio world that most people are awful at self-measurement and tend to over-equalize in the absence of proper experimental control protocols.

I apologize for the rather negative tone of the post; I appreciate the people at SoundFocus are trying to provide people with something useful and helpful at minimal cost, by leveraging the pretty good audio hardware in their phone. However, hearing loss tends to be a one-way thing, and I think that offering a product to that market without a clinician on the team is a bad idea. There's a lot more to being an 'audio ninja' than understanding the fundamentals of DSP.


Yeah, the article confuses things with the wrong definition of dynamic range, then goes on to explain MBC using the term "range" correctly.

MBC can "mash the full sound into your range" if "range" means dB range at each freq. band. But since the author previously (ill)defined dynamic range as a frequency-related term, the reader reads that passage and thinks he's referring to frequency shifting instead.


I was about to question everything until I read this. Thanks AE. My understanding of MBC is more like a crossover (or mix of low, bandpass, and highpass filters) network followed by compression per band within each section of the frequency range.

Now, I want to go test out what it would be like to 'compress' frequencies. Something like a notch filter that shifts nearby frequencies around the target frequency away into regions above and below. It adds noise, essentially, within the compressed range, but maybe it's tolerable and is useful for someone with a narrow band hearing loss. It could potentially be interesting musically.

Maybe such a filter exists, but I am not familiar with it.


If you're interested in frequency shifting, Harald Bode was the leading engineer in this area. You can read a gentle introduction here: and if you look around there are some VST plugins that emulate the Bode designs.

I haven't tried using this for precision stuff - over a small range it might well improve intelligibility at the expense of only minor distortion. I tend to reach for it when I want to give sounds an extra weird dimension; it sounds somewhat orthogonal to the normal harmonic distributions we're familiar with.


Looks like you were going to paste a link but it didn't stick...


>I was about to question everything until I read this. Thanks AE. My understanding of MBC is more like a crossover (or mix of low, bandpass, and highpass filters) network followed by compression per band within each section of the frequency range.

This is exactly right. Basically, you separate the audio into arbitrary frequency bands, and then apply compression to each band to control its volume independent of what is going on in the rest of the spectrum.

I was incredibly frustrated by reading the article, since their explanation of multiband compression was incredibly misleading. I get what they're doing and why multiband is helpful (it sounds like they're basically bringing up the volume in the parts of the spectrum where the user's hearing is less sensitive than healthy hearing would be), but that was a poor explanation of how multiband compression works.


Well, that's probably why. Somewhat obvious. If you put something through with content in and around the range, you get a notch filter with a resonant hump... not all that interesting.


I thought the bear picture was maybe the best part of the article. Compression is tricky, multiband even more so, and the type of multiband compression they're using is even harder to wrap your head around (there are two ways (that I know of) to do what they're doing but I doubt they want to talk about which one they use.)

It's a great layman's explanation, but if you have a better one I'd love to see it.


I think the point of the bear picture isn't that you can't see half the bear, but rather that (to continue the visual analogy) you can't see half of the bear's colors. How do you fix something that you can't perceive? You can change the missing colors to something that you can see, but you end up distorting the original image.

In the soundcloud samples, if I can't hear anything above a certain frequency, making them louder isn't going to help. You can drop the frequency of those things, but my guess is that it's going to sound pretty ugly. It would be interesting to listen to a sample that has everything above a certain frequency pitch-shifted downwards.


Here's how I took it: if you're just losing sensitivity at a particular frequency then you may only hear sounds in the 40-100dB range, below it's too quiet to register and above it's painful. That's a lot of information to lose but you can smash the 1-100dB range into the 40-100dB range. If you choose to you could even smash the 1-60dB range into the 40-60dB range (or pick whatever numbers) and leave everything above that relatively untouched. This is a fairly common sound engineering technique to fill out a sound without destroying its dynamics.

So if you picture a scale next to the bear picture from 1-100, then the bottom part of the bear is what's beneath the (effective) noise floor for that frequency. To extend the analogy to multiband compression you'd have maybe 10 bears next to each other, each missing different amounts and each needing a slightly different smashing to lift the bottom of the picture into the visible range.

edit: I think people are assuming that the frequency content of the bear picture corresponds to the frequency content of sound (they're all signals, right?) but to me it's a much more basic analogy. To do it that way you'd have to be turning up the soft reds or something to that effect, but rods and cones being what they are we don't lose vision in a comparable way to how we lose hearing so I don't think there's a good, intuitive visual analog in that sense.


hey anigbrowl, OP here.

You raise some great points - it is indeed impossible to use an MBC to bring sounds back into a user's range of hearing if they have lost all sensitivity at that particular frequency. However, the loss of hearing at a particular frequency is not binary - it tends to start with a reduction in dynamic range at that frequency, as the cilia start to get worn out / destroyed.

So if you have loss @ 3 kHz, you often don't completely lose all hearing there, but your dynamic range, which normally is 0 dB -> 100 dB (an over-simplification here), might now be 30 dB -> 100 dB.

What an MBC will do here is compress the signal in that frequency band, so the original 100 dB of range is squeezed into your remaining 70 dB of range.
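
In other words, within that band it's a mapping of input level onto the listener's remaining range, something like this toy version (the linear mapping and 30 dB floor are illustrative):

    def band_level_map(in_db, hearing_floor_db=30.0, max_db=100.0):
        # Squeeze the full 0..100 dB input range into the usable
        # 30..100 dB range for this band, preserving relative level order.
        return hearing_floor_db + (in_db / max_db) * (max_db - hearing_floor_db)

    for level in (0, 20, 50, 100):
        print(level, '->', round(band_level_map(level), 1))
    # 0 -> 30.0, 20 -> 44.0, 50 -> 65.0, 100 -> 100.0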


I get what you're aiming at, and I applaud what you're trying to do - I just think that you've over-simplified a complex topic, to the point of creating quite a bit of confusion, going by this thread. To quote Einstein, “Make things as simple as possible, but not simpler.”


There are numerous interrelated mechanisms in hearing. I don't see why MBC in particular would be valuable, other than amplifying quiet parts automatically, which could in fact have the opposite effect.

However, I think there is potential for many other mechanisms to be developed, such as automatic filtering, to eliminate masking in frequency and time domains.

The question I have is why other companies, such as Apple and Spotify, don't simply add this DSP technology to their software. What can SoundFocus do that can't be copied? Proprietary algorithms?


This was an interesting read, but what to me personally is more interesting is how to prevent hearing loss over the years.

I might be wrong on this, but I recall a hearing specialist advising me not to use earbuds at all, or at least limit the use to max 1 hour at a time. Dynamic headphones, such as the good old Superlux HD 681, would be "better" in the long term. (Not trying to advertise that headphone; it's just one of the few that are cheap and good, and that I can rewire myself and even add a plug to so I can easily buy new aux cables.)

However, I cannot wear my headphones for longer than 6 hours without them getting annoying. And running with big headphones is a big no, but then again, I'm nowhere near running longer than 30 minutes in a row.

Does anyone have their own thoughts on this topic?


I might be wrong on this, but I recall a hearing specialist advising me not to use earbuds at all, or at least limit the use to max 1 hour at a time.

The issue isn't necessarily about all "earbuds," the issue is that many earbuds (including the ones included with Apple iOS products) don't seal the ear canal very well, so a listener is exposed to outside sounds in addition to sounds from the player. Since the outside sounds have a tendency to mask the sounds coming from the earbuds, the listener will often turn up the volume to better hear the audio material and therefore be exposed to SPL's that can cause hearing damage over long exposure times.

The advice to use something like the Superlux HD 681 stems from the fact that circumaural headphones offer some (not a lot, but some) shielding from outside noise, so a user won't be tempted to increase the volume level as much. Active noise canceling headsets and in-ear monitors (like Etymotics-brand) provide better sound isolation, so users can keep the volume at more moderate levels.


>the issue is that many earbuds don't seal the ear canal very well

For those wondering which earbuds do provide decent isolation, there are, as you stated, the pretty expensive "in-ear-monitors", but you can also go for cheaper earbuds based on isolating memory foam like the JVC marshmallows, which are pretty cheap and provide decent isolation.


For really high noise environments, like airplanes, I like using 30 dB dampening earplugs and then circumaural headphones over them.


Exactly. With my Etymotics in-canal buds, I can set the volume on my music player to about 25-30%. With the stock buds or even non-isolating over-the-ear headphones, the same subjective loudness requires about 60-75% on the volume control.


This is not a good measure, because different earphones have different efficiencies.

And the type of earphone changes the efficiency as well: the sound from the Etymotics all goes into your ear, but the others leak some sound out.


Be aware that many prescription medications are ototoxic (e.g. Aspirin). I recently had a routine check-up of my hearing and at 31 my actual "hearing age" is supposedly 66 for higher frequencies - permanent hearing loss most likely brought on by Wellbutrin which I took for some weeks only (I've never been exposed to loud noises and never was much of a headphones user). Just putting it out there, be wary with prescriptions of Wellbutrin especially.

https://en.wikipedia.org/wiki/Ototoxicity


I use earbuds, but I set the volume while I'm somewhere quiet. If the underground train is drowning out the music, so be it.

I've also used reusable earplugs (I spent about £15) for all gigs, nightclubs etc. Sometimes I use them in noisy pubs.

> However I cannot wear my headphone longer than 6 hours without it getting annoying.

That's a really long time. Wouldn't it be best to reduce it?

The main charity for deaf people in the UK, which seems to have renamed itself from the "Royal National Institute for the Deaf" to "Action on Hearing Loss", has a campaign called [Don't lose the music](http://www.dontlosethemusic.org.uk/).


I don't often go to clubs or places with loud music in general, but I play music through earbuds/headphones to shut out distracting noises around me. And I guess to get "in the zone". I'm going to look into whether there are earplugs that could help me with that.

And 6 hours is rare; it's the max, but on average I do go above the suggested 1 hour.


OP here. Some argue that in-ear earbuds send more high frequency energy directly into the ear than headphones, so the balance of sound is always tilted towards the highs, which over time can cause more high-frequency loss (low frequency loss is fairly uncommon). The difference between headphones and earbuds is likely minor - the volume you set your music to probably has a much greater impact. Get a pair with good isolation or cancellation, so you can keep the music volume lower.

One thing that works really well for preserving your hearing is taking breaks - this is especially true of loud concerts (step out for a bathroom break once an hour), but also holds true if you're going to listen to headphones at high volumes for many hours straight.


I remember a pair of earbuds coming with a little informational thing about the correct volume.

Their rule of thumb is if you are incapable of hearing and understanding what people are telling you without taking out your earbuds, they're too loud.

I tried this out and found that I'm actually fine with the volume bar at maybe 30% of what it used to be. Basically, put your earbuds in, play some music, and try to talk to someone. If you're unable to, then you might be playing your music way too loud.

(For people using earbuds to drown out workplace noise, I'd suggest an ambient noise program like noiz.io. You can train your brain to filter ambient background noise, and it's better than trying to one-up the sounds in your office.)


"incapable of hearing and understanding what people are telling you without taking out your earbuds"

High quality isolating earbuds should make it pretty difficult to tell what other people are saying even with no music playing! Moving to a mid-range pair of Etymotic noise isolating earbuds was an absolute revelation for me and allowed me to use significantly lower volumes when listening to music. I can't recommend high quality isolating headphones/earbuds enough.


In addition to poor isolation noted here, many cheap-o earbuds have a horrible response curve.

They deliver way too much sound in narrow frequency ranges, usually the mid-range, say 1-4 kHz, while not delivering enough in the bass, and it's a mixed bag higher up through 10 kHz+.

People will turn it up, until they are hearing everything, and they do very significant damage to those "hot spot" frequency ranges, despite the overall perception of volume seeming reasonable to them.

Add overly compressed music and/or crappy audio output, and the need to drive it loud happens to nearly everybody.

There will be a whole generation of people, some of whom we are already seeing struggle with this, requiring adaptive sound options earlier in life than is typical.


If your ears hurt or are annoyed, stop.

Quiet is better than loud. The more sound isolation your phones provide from the outside world, the quieter you can play your music and enjoy it.

If you're sitting with headphones on for six hours, maybe you should get up a few times.


> If your ears hurt or are annoyed, stop.

I'd say it's more from the pushing on the sides of my head, not so much from the volume of the music. But you are right, and I have been thinking about giving up listening to music through headphones when working and using speakers instead.

> If you're sitting with headphones on for six hours, maybe you should get up a few times.

I have to say that my current working desk and rhythm are pretty much perfect as far as I know. My eyes never get tired, the screens are a good distance away from me, and I don't have to look down. I have a decent computer chair, and I take breaks about every hour to go outside or get a new cup of coffee (but I do that with my headphones/earbuds in).

Heck, no employer can offer a better working situation if you ask me; I'm all for the "remote work - office not required" idea! :P


I've lost hearing in my left ear, and I'm pretty sure it's due to years of driving with the window down.


I wear good quality noise cancelling headphones with the music turned quite low. I feel like this is doing a lot to preserve my hearing.


A spectrogram would show what MBC is doing far more directly than the waveform plots.
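For anyone who wants to try it, here's a minimal sketch (assuming scipy and matplotlib, with "example.wav" standing in for whatever clip you want to inspect) that plots the waveform and its spectrogram side by side:

    # Waveform vs spectrogram of the same clip; "example.wav" is a placeholder.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    fs, x = wavfile.read("example.wav")       # sample rate, samples
    if x.ndim > 1:
        x = x.mean(axis=1)                    # mix down to mono if needed
    x = x.astype(np.float64)

    f, t, Sxx = spectrogram(x, fs, nperseg=2048)

    fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
    ax1.plot(np.arange(len(x)) / fs, x)       # time vs (signed) pressure
    ax1.set_ylabel("amplitude")
    ax2.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="auto")  # dB scale
    ax2.set_ylabel("frequency (Hz)")
    ax2.set_xlabel("time (s)")
    plt.show()

With a per-frequency view like this, the per-band gain changes that multi-band compression makes show up directly, rather than being inferred from the envelope of the waveform.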


Hi folks,

To summarize what I've read so far: this promotional article about SoundFocus was clearly not written with the help of a professional from within the hearing aid industry, nor of someone with clinical experience. The author appears to be good with language - probably an engineer who strings together technical terms as if he knows what he is talking about.

I find this article very misleading and no help to the hearing impaired or their relatives. It reminds me of a very useful course I once took: 'Physiology of the ear for physicists'. It would be good if the author or developers found something similar.

I realize that my post breathes some arrogance, and of course it is easy to tear something down. But yes, I know better. And yes, I could have written an article that would market SoundFocus properly (in a similar style if you like) with only useful and correct information.

Maybe I should... ?

Cheers -a professional-


My bro points out I did not mention a single flaw/example. (I am busy enough as it is)

Besides things already mentioned by others, talking about dead regions, upward spread of masking, and perhaps temporal scatter and tuning curves would have made it a lot more juicy.

One flaw: time and volume are NOT self-explanatory. Think about recruitment and such. Another flaw: multi-band compression does not do what is suggested by the picture of the bear where the bicycle is missing. That visual example fits much better for a person with a dead region to whom frequency compression is applied. And that is not a one-size-fits-all method; different techniques are available for this particular phenomenon (there is frequency shifting as well).

Anyway... let's mention a positive side. I appreciate the attempt of communicating on the topic and it was a good try to make things clearer for some. Better luck next time.

Happy bro?


I wish people would put half the effort into prevention that this article puts into mitigation.

Work beside a machine humming at a particular frequency and you will lose that frequency, even if the sound doesn't seem loud at the time. And simply jamming in a pair of earplugs doesn't make you immune. They have limits.


I might be wrong here but isn't what you call "dynamic range" usually referred to as "hearing range"? Dynamic range has a bit different connotation AFAIK.


A lot of the concepts in this article appear to be deliberately simplified and terminology deliberately abused, and I would argue too much so. Take this paragraph for instance:

> As you look at the waveform, the problem should become apparent. Sound is a 3-dimensional construct, but we can only represent 2 dimensions on a textbook or a monitor. In the waveform representation, we see Time on the x-axis plotted against Volume on the y-axis.

The two axes in most waveform plots are sound pressure and time, not volume and time. The fact that the depicted waveform is nearly symmetrical about the x-axis should hint that this is the case.
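A quick numpy check of that symmetry, using a made-up two-harmonic test signal:

    # A zero-mean signal swings as far below zero as above it, which is what
    # signed sound pressure looks like - not a one-sided "volume" scale.
    import numpy as np

    t = np.arange(44100) / 44100
    x = np.sin(2 * np.pi * 441 * t) + 0.5 * np.sin(2 * np.pi * 882 * t)
    print(round(x.mean(), 6), round(x.min(), 3), round(x.max(), 3))
    # mean ~ 0 and min ~ -max: the plot is symmetric about the time axis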


I think that the first use of "dynamic range" is clumsily worded.

> Dynamic range is the range of frequencies and volumes that are audible to the human ear

But reading on, it looks like it is used in the correct sense, albeit a specialised one. The article discusses dynamic ranges per frequency; talking about multi-band compression confirms that.


> If you know people who have hearing loss, you’ve noticed that they can’t tolerate loud noises that you’re fine with, but they also can’t hear some of things that you can hear perfectly well.

So what is the explanation of why they can't tolerate certain loud noises? I feel like the article was going to address that aspect of hearing loss as well but never did.


It may be that intolerance of loud noises isn't that the noise is painfully loud or otherwise physically irritating, but that it interferes with their ability to understand simultaneous auditory input.

I've noticed as I age that noise which I would formerly have disregarded without thought has become an irritant - that is, I can't/won't tolerate it - not because it's physically uncomfortable, but because my wetware apparently can no longer do the signal processing that earlier made such noise irrelevant to auditory comprehension.


Hearing is a range, so they have reduced range on both sides.


Hmm, I'm not sure you understand it any better than I do.

The article says:

>Well, if you get hearing damage at a specific frequency, you’ll start to lose sensitivity to the quiet sounds at this frequency. However, your sensitivity to loud sounds remains the same.

If their sensitivity to loud sounds remains the same on the one hand, why would they be unable to tolerate certain sounds on the other? Seems contradictory.


I don't have an exact answer, but it's a common problem: http://en.wikipedia.org/wiki/Hyperacusis

Hearing is complex: you have the cochlea acting as both sensor and first pass signal processing. There's a muscle that acts as a built-in gain control that can cut sounds by about 20dB, partly so your own voice doesn't deafen you.

Hearing damage isn't just loss of sensitivity—apparently it can alter the shape of cochlear filters' response too, changing the way masking works. And I'd imagine the brain tries to compensate as hearing loss progresses, which could have interesting effects.

More info:

http://en.wikipedia.org/wiki/Auditory_masking http://en.wikipedia.org/wiki/Acoustic_reflex

Edit: Less related, but a similar phenomenon that happens to me: http://en.wikipedia.org/wiki/Misophonia. Hearing has odd failure modes.


Over time, the ear loses its ability to carry very loud sounds through all the stages needed for sensation.

This manifests in two basic ways:

1. Fatigue.

That pop-music "wall of loud sound" album that was tolerable for a few hours at age 20 gets tiresome at 40, and could be almost painful at 60. Hearing is a mechanical process, and worn-out mechanical parts tend to rattle, lose response, etc. - this is fatigue.

People will respond with something like, "I'm tired of hearing", and to them, the experience of hearing is no longer transparent at higher volumes and longer times.

2. Less dynamic range possible / pain

Think of it like gamma for displays. There is a curve mapping input sound in dB to sensation. People vary a lot, but this curve gets bent away from normal into a condition much like crushed blacks and whites, where the upper limit of sound volume perception gets distorted.

An unimpaired person can tell the difference between 60, 80 and 100 dB. People who have aged and have losses may not be able to tell much difference between those levels, resulting in "all loud sounds are just loud" rather than different amounts of loud.

Also think of banding when color depth is reduced. Rather than a nice, smooth response, the listener gets coarser steps, or different steps may all present in similar ways when they should be easily distinguished.

A related thing is discrimination, and it happens with pitch in the same way that it can happen with volume. Loss of pitch discrimination means being unable to distinguish between close frequencies, hearing them as one, or perhaps just as a muddled combination, not as two distinct tones.

This is subtle, and it can happen with most other responses being normal. But the person just can't quite pick out the sounds, despite hearing them individually just fine. It's the combinations they struggle with.
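To put the gamma/"crushed" analogy above into code - a toy piecewise-linear model with made-up numbers, nothing clinical - once the threshold rises and the curve saturates early, 80 dB and 100 dB inputs land on the same perceived level:

    # Toy piecewise-linear loudness model - numbers are invented for illustration.
    # Normal: perceived level tracks input from 0 to 100 dB.
    # Impaired: nothing below a raised threshold, a steep ramp, then early
    # saturation - so 80 and 100 dB become indistinguishable ("just loud").
    def perceived(input_db, threshold_db=0.0, saturation_db=100.0, ceiling=100.0):
        if input_db <= threshold_db:
            return 0.0                               # inaudible
        if input_db >= saturation_db:
            return ceiling                           # "just loud"
        return ceiling * (input_db - threshold_db) / (saturation_db - threshold_db)

    for level in (40, 60, 80, 100):
        normal = perceived(level)
        impaired = perceived(level, threshold_db=50, saturation_db=75)
        print(f"{level:>3} dB in -> normal {normal:5.1f}, impaired {impaired:5.1f}")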


Could you add a "hearing test" to your app, which does at least rudimentary tuning? Call it "headphone calibration" and I bet you'd improve the listening experience of people who don't know they are hearing impaired.
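Even something crude would help - e.g. step a pure tone down in level at a handful of frequencies until the listener stops hearing it. A rough sketch of that idea (not SoundFocus's actual method, not calibrated audiometry, and it assumes the third-party sounddevice package for playback):

    # Rudimentary per-frequency threshold sweep - illustration only; real dB HL
    # values require calibrated hardware. Assumes the "sounddevice" package.
    import numpy as np
    import sounddevice as sd

    FS = 44100
    TEST_FREQS = [250, 500, 1000, 2000, 4000, 8000]   # Hz

    def tone(freq, seconds=1.0, gain_db=0.0):
        t = np.arange(int(FS * seconds)) / FS
        return (10 ** (gain_db / 20)) * np.sin(2 * np.pi * freq * t)

    def find_threshold(freq):
        # Step the tone down in 6 dB steps until the user reports not hearing it.
        for gain_db in range(0, -61, -6):
            sd.play(tone(freq, gain_db=gain_db), FS)
            sd.wait()
            if input(f"{freq} Hz at {gain_db} dBFS - heard it? [y/n] ") != "y":
                return gain_db
        return -60

    profile = {f: find_threshold(f) for f in TEST_FREQS}
    print("Rough per-frequency thresholds (dBFS):", profile)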


There is a drift in the SoundCloud audio: the audio runs ahead of the visual waveform, so I missed the last visual beep in both the compressed and uncompressed versions.


So it's like a CPU that can't process fast enough and starts losing signals.

Just like the choppy adobe flash player on OSX!


Spatial frequencies in the image might have been a much better analogy than simply cropping the image.


I really wish this app was available on OS X, I would very happily pay for it and use it every day.


tl;dr "hearing loss is caused by an uneven or abnormal frequency response of the ear compared to that of a normal human"


the music world already uses copious amounts of multiband compression, as if all listeners have hearing loss
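For the curious, here's roughly what a (very) stripped-down multi-band compressor looks like: split the signal into bands, reduce the gain of any band above a threshold, and sum. Band edges, threshold and ratio below are arbitrary, and real compressors track the level with attack/release envelopes rather than this static, whole-block version:

    # Minimal static multi-band compressor sketch (no attack/release smoothing).
    import numpy as np
    from scipy.signal import butter, sosfilt

    FS = 44100
    BANDS = [(20, 250), (250, 2000), (2000, 8000), (8000, 16000)]   # Hz, arbitrary

    def compress_band(band, threshold_db=-30.0, ratio=4.0):
        rms_db = 20 * np.log10(np.sqrt(np.mean(band ** 2)) + 1e-12)
        if rms_db <= threshold_db:
            return band                              # below threshold: untouched
        # reduce gain so the excess over the threshold is divided by the ratio
        reduction_db = (rms_db - threshold_db) * (1 - 1 / ratio)
        return band * 10 ** (-reduction_db / 20)

    def multiband_compress(x):
        out = np.zeros_like(x)
        for lo, hi in BANDS:
            sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
            out += compress_band(sosfilt(sos, x))
        return out

    # Example: a quiet 3 kHz tone under a loud 100 Hz tone gets relatively louder.
    t = np.arange(FS) / FS
    x = 0.8 * np.sin(2 * np.pi * 100 * t) + 0.02 * np.sin(2 * np.pi * 3000 * t)
    y = multiband_compress(x)

Because each band is compressed independently, the loud low band gets pulled down while the quiet high band is left alone, which is the "loudness war" effect when applied to music and the compensation effect when applied to hearing loss.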


Why is this post on top?


> "Sound expresses itself in three dimensions: time (seconds), volume (decibels) and frequency (Hertz)."

Is anyone else as irked by the author's choice of the word "dimensions" as I am? I can't read past it. Wouldn't "factors" be a better fit?


No. "Dimensions" is the right word, because they are three orthogonal scales along which musical notes can be measured[1]. The author could also have suggested timbre and other possible dimensions, but the three stated apply to all sound, including (importantly) sine waves, the simplest type of sound.

[1] Technically, frequency is a function of time too (and timbre a function of the interaction of multiple frequencies and envelope changes, another function of time) but these are all independent uses of time.


Technically there are two complementary sets of dimensions: time and amplitude vs frequency and phase. Both are complete encodings of the waveform.

The article is extremely muddled from a technical point of view. When dealing with perceptions it is extremely important to distinguish physics and physiology. In optics we have radiometric (physical) vs photometric (perceived) values: https://en.wikipedia.org/wiki/Photometry_%28optics%29#Photom...

It appears in the article they are doing some kind of implicit averaging over the ear's response function at each frequency, which may make sense in terms of perceptions but makes very little sense in terms of physics.

A much better visual analog would be a blurred photograph rather than a cropped one. "Turning up the volume" simply increases the brightness of the images, which doesn't do a damned thing to reduce the blurring.

One thing that people with normal hearing don't get is how much information is in the high frequencies, which are where the most loss normally occurs, although there are also "notch" losses that happen to people whose ears are routinely subject to loud noises in narrow bands.

We tend to think of "high frequency" sounds in terms of single notes, but in speech the high frequencies are most important in the unvoiced consonants, the "s" and "th" sounds and similar. Losing the high frequencies blurs the edges of speech, often making the shape of it unrecognizable. Frequency-dependent enhancement sharpens the edges and brings it back into useful focus.


> Technically there are two complementary sets of dimensions: time and amplitude vs frequency and phase. Both are complete encodings of the waveform.

Minor note, the "frequency and phase" is actually frequency and complex amplitude, which encompasses both phase and scalar amplitude as we think of it intuitively.

In the mathematical theory there is also provision for complex amplitude in the time-domain, but this is rarely needed in practice (and never found in real-world signals).
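A quick numpy round trip illustrates the point: keep only the per-frequency magnitude and phase (i.e. the complex amplitude), and the original time-domain samples come back exactly, up to floating-point error:

    # Time/amplitude <-> frequency/complex-amplitude are equivalent encodings.
    import numpy as np

    fs = 8000
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1000 * t + 0.5)

    X = np.fft.rfft(x)                 # complex amplitude per frequency bin
    mag, phase = np.abs(X), np.angle(X)

    x_back = np.fft.irfft(mag * np.exp(1j * phase), n=len(x))
    print(np.allclose(x, x_back))      # True - nothing was lost either way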


Not really. A sound can be described by one or more [time, volume, frequency] triples. I think the author's use of the word "dimensions" is perfectly suited.


Frequency is just periodic variation in volume, so it doesn't really belong on that list as if it were a separate thing and not an emergent property.


They are complementary. You can fully describe sound either as a function of volume over time or as complex amplitude (i.e. ordinary amplitude plus phase) over frequency, so you could just as easily say the volume is an emergent property of the frequency.

Regardless, it's a 1-dimensional function.


Dimensions are variables that are independent of each other. They could be spatial dimensions like (x,y,z), or color dimensions like (red,green,blue).


Do dimensions have to be orthogonal? Different spaces can be transformed...


When talking about dimensionality they kind of have to be orthogonal.

Because otherwise anything can be of arbitrarily large dimension, and thus the whole term loses meaning.


just to satisfy a possible curiosity: No

Your intuition is in the right direction, but not quite correct.

Think of a Cartesian plot. Now think of the Y axis "tilted" over. With those two "vectors" you can still describe the whole plane (for whatever "describe" means =P). But if you tilt it so far that the "Y" axis becomes parallel to the X axis, then you've lost something.

Instead of "orthogonal" what you need is a mathematical property called independence (as in linearly independent) that basically means "not redundant to express a space"


Seemed natural to me, dimensions in a phase space.


They are dimensions, but only two of them are independent: time and volume, or frequency and volume. You can't set all three arbitrarily at once.



