Can someone point to some reading material about the nature of the signal processing that needs to be done?
Do you mean how the DSP works, or what needs to happen in the 'real' analog world to create clarity of sound for your ears?
Do we need to consider compressors? For example, if the person's hearing only works between 200 and 600 Hz, do we need to compress the entire spectrum into that particular range, or do we just apply filtering?
Compressors compress dynamic range, i.e. they turn loud things down very quickly: think reshaping the sound of a snare drum hit, or leveling out the volume between bass guitar notes. I'd imagine that's precisely what you wouldn't want for hearing assistance, because it could kill off the clarity of consonants. Now, an expander plus a limiter, that might be useful: make the consonants louder and then limit them to a max loudness.
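Just to make that concrete, here's the kind of per-sample gain logic I mean, as a rough sketch in plain C. Every constant and name here is made up for illustration; a real device would do this per frequency band with properly fitted parameters:

```c
#include <math.h>

#define SAMPLE_RATE   16000.0f
#define BOOST_DB      12.0f      /* extra gain applied to low-level signal   */
#define LIMIT_LEVEL   0.5f       /* hard ceiling on output amplitude (0..1)  */
#define ATTACK_MS     1.0f       /* envelope follower attack time            */
#define RELEASE_MS    50.0f      /* envelope follower release time           */

static float envelope = 0.0f;    /* running estimate of signal level */

float process_sample(float in)
{
    /* one-pole envelope follower: fast attack, slow release */
    float a_att = expf(-1.0f / (ATTACK_MS  * 0.001f * SAMPLE_RATE));
    float a_rel = expf(-1.0f / (RELEASE_MS * 0.001f * SAMPLE_RATE));
    float level = fabsf(in);
    float coeff = (level > envelope) ? a_att : a_rel;
    envelope = coeff * envelope + (1.0f - coeff) * level;

    /* boost quiet material: more gain when the envelope is small */
    float boost = powf(10.0f, BOOST_DB / 20.0f);
    float gain  = 1.0f + (boost - 1.0f) * (1.0f - fminf(envelope / LIMIT_LEVEL, 1.0f));

    float out = in * gain;

    /* brick-wall limiter so the boosted signal can't exceed a max loudness */
    if (out >  LIMIT_LEVEL) out =  LIMIT_LEVEL;
    if (out < -LIMIT_LEVEL) out = -LIMIT_LEVEL;
    return out;
}
```

Quiet consonants get pushed up toward the boost figure, loud material passes through at roughly unity, and the limiter caps everything, which is the opposite of squashing the loud bits the way a drum-bus compressor does.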
Moving a 2000 Hz sound down to a 200 Hz sound is pitch shifting. It typically sounds kinda mucky and bizarre if you're being that aggressive with it.
Hell, I don't even know if this selective bandwidth thing even exists?
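For what it's worth, here's roughly what the naive version of that pitch shift looks like in C, assuming float samples already sitting in a buffer (the function name and the 0.1 step are just illustrative). It shows the basic problem: to drop 2 kHz to 200 Hz you read the input ten times slower, so the output comes out ten times longer, and the real-time tricks that avoid that (phase vocoder, PSOLA and friends) are where the mucky artifacts creep in:

```c
#include <stddef.h>

/* Crude pitch shift by varispeed resampling. Reading the input at a
 * fractional step of 0.1 lowers everything by a factor of ~10 in
 * frequency (2 kHz -> 200 Hz) when played back at the original sample
 * rate, but it also stretches the audio out 10x, so it's useless as-is
 * for live listening. Purely an illustration, not a real design. */
size_t pitch_shift_down(const float *in, size_t in_len,
                        float *out, size_t out_max, float step)
{
    size_t n = 0;
    float pos = 0.0f;

    if (in_len < 2)
        return 0;

    while (pos < (float)(in_len - 1) && n < out_max) {
        size_t i   = (size_t)pos;                        /* integer read index   */
        float frac = pos - (float)i;                     /* fractional remainder */
        out[n++] = in[i] + frac * (in[i + 1] - in[i]);   /* linear interpolation */
        pos += step;                                     /* e.g. step = 0.1f     */
    }
    return n;   /* number of output samples written */
}
```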
Seems to me you could hook a microphone to preliminary HP/LP/BP filters, then split the signal: send one copy to a super snappy VCA chip and the rest to a dsPIC/Blackfin/whatever running an FFT routine or something like that. Once the dsPIC detects sound in the 2 kHz to 4 kHz range, use its D/A converter to juice the gain on the VCA. Pair that with some open-air earbuds and you only amplify the frequency range that needs amplifying, and only when it is present. The whole board could be 2-3 chips plus some passive components. You might even get someone to make a Windows GUI to write amplification behavior tables to the dsPIC.
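Something in that spirit, sketched in C as it might run on the DSP side. I've used the Goertzel algorithm at a few probe frequencies as a cheap stand-in for a full FFT; every constant, name, and the gain mapping below is a guess for illustration, not a worked dsPIC design:

```c
#include <math.h>
#include <stdint.h>

#ifndef M_PI
#define M_PI 3.14159265358979f
#endif

#define SAMPLE_RATE   16000.0f
#define BLOCK_SIZE    128            /* 8 ms analysis blocks at 16 kHz    */
#define NUM_PROBES    4
#define THRESHOLD     1.0e-3f        /* band power that triggers boosting */

static const float probe_hz[NUM_PROBES] = { 2000.0f, 2700.0f, 3300.0f, 4000.0f };

/* Goertzel power estimate at one frequency over one block of samples */
static float goertzel_power(const float *x, int n, float freq_hz)
{
    float w = 2.0f * (float)M_PI * freq_hz / SAMPLE_RATE;
    float coeff = 2.0f * cosf(w);
    float s1 = 0.0f, s2 = 0.0f;

    for (int i = 0; i < n; i++) {
        float s0 = x[i] + coeff * s1 - s2;
        s2 = s1;
        s1 = s0;
    }
    return s1 * s1 + s2 * s2 - coeff * s1 * s2;
}

/* Returns an 8-bit DAC code for the VCA control voltage:
 * 0 = leave gain alone, 255 = full boost, scaled by 2-4 kHz band energy. */
uint8_t vca_code_for_block(const float *block)
{
    float band_power = 0.0f;
    for (int p = 0; p < NUM_PROBES; p++)
        band_power += goertzel_power(block, BLOCK_SIZE, probe_hz[p]);

    if (band_power < THRESHOLD)
        return 0;                        /* nothing in-band: don't boost */

    /* crude log mapping of band power to a control code */
    float db_over = 10.0f * log10f(band_power / THRESHOLD);
    float code = db_over * 8.0f;         /* ~8 DAC counts per dB, made-up scale */
    if (code > 255.0f) code = 255.0f;
    return (uint8_t)code;
}
```

The "amplification behavior table" idea from the GUI would then just be the mapping inside vca_code_for_block, stored as a lookup table instead of a hard-coded formula.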
Just a thought.