separating vowels from consonants

Discussion in 'General Electronics Chat' started by Al Mond, Nov 22, 2012.

  1. Al Mond

    Thread Starter New Member

    Mar 6, 2008
    4
    0
    A great contribution to the deafened people of this world will be accomplished when we can electronically separate consonants from vowels. Consonants are considerably quieter than vowels; in ordinary speech the vowels can be 30 dB or more louder than the consonants. The result is that a deafened person does not hear both consonant and vowel, and hearing both is a requirement for understanding what is being said. Hearing aid users very often say, "I hear you but I don't understand you." That shows that both parts of speech are not being heard: the vowel is heard (the "I hear you" part), while "I don't understand you" implies the consonant part is not. Electronically, the consonants only need amplification to create understandability. Not a terribly hard problem to resolve electronically, but a magic gift to the world of the deafened.
     
  2. crutschow

    Expert

    Mar 14, 2008
    13,000
    3,229
    I would assume that is an area where the hearing aid manufacturers are doing extensive work already.
     
  3. ramancini8

    Member

    Jul 18, 2012
    442
    118
    If it were not a terribly hard problem to solve, it would have been solved by now. The sound levels, durations, frequency content, and variability of speech from person to person make this task a real challenge. Crutschow is correct; millions have been spent investigating and trying to solve this problem.
     
  4. Audioguru

    New Member

    Dec 20, 2007
    9,411
    896
    Vowels are low voice frequencies. Consonants are high frequencies.

    I am not deaf but I have difficulty understanding the "anchors" who read the news on my local TV station. They wear a lavalier mic on their chest that picks up vowels VERY LOUDLY directly from the throat, while the weaker high-frequency consonants come from the mouth, farther from the mic and directed away from it.
    The TV station uses severe automatic gain control that turns the overall level down because of the blasting vowels, so the consonants can barely be heard.

    I think the TV station has a deaf sound man who was told that speech goes only as high as 3 kHz, so he cuts all the highs (including all the consonants). Consonants extend to at least 14 kHz.

    It is obvious that the TV station can produce normal wideband audio: it does so for reporters with a handheld mic in front of their mouth, and for commercials.

    The hearing of deaf people is frequently attenuated at high frequencies, from gunfire during war or from listening to rock music played too loud. That is why most hearing aids boost the high audio frequencies.
     
  5. Al Mond

    Thread Starter New Member

    Mar 6, 2008
    4
    0
    To the several responders to my piece on deafened persons hearing but not understanding, let me expand. There are several incorrect assumptions in the replies. Number one: treating this as a problem of frequency is, if not wholly incorrect, a practical non-issue. The coding of speech is better described as something like Morse code, built from two elements. The first element is the sound coming from the vocal cords, which is very nearly sinusoidal. The second is an entirely different set of sounds produced by the mouth, tongue, and lips working together, forming sounds that other humans can distinguish from each other and from the vocal-cord sounds. In practical terms, the vocal-cord sounds are very sinusoidal and can be displayed on an oscilloscope; not so for most of the sounds emitted by the mouth-tongue-lip combination, many of which appear on the scope as noise spikes. That is why I chose to leave frequency out of the discussion. Better still, think of the speech code as a two-part system consisting of a sinusoidal portion and a noise portion. Since this system was invented by some very ancient ape-like ancestors of Homo sapiens, they likely knew very little about the frequency spectrum, but they could differentiate between tones and noises. The originators likely thought of them as the old language and the new language.

    The great genius of this invention is that the two languages were not simply combined into one still larger set of sounds; instead, the new language was used to create a code in which adding one of the new sounds at a suitable place within an old sound gave it a very specific meaning. As an example, a simple sound from the old language might be ahoh, which to the human ear may have little or no useful meaning except to signal that the individual making it is present. If the primitive were to add one of the new sounds, like the g sound, to the middle of the ahoh sound, it would become ahgoh, or in modern speech, ago, and we modern humans, who understand the code, recognize this as something happening in the past. For example, "it happened a long time ago".
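    For what it's worth, the "sinusoidal portion versus noise portion" split described above is roughly what speech people call voiced versus unvoiced sound, and a very rough version of it can be sketched in a few lines of Python. This is only an illustration, not a hearing-aid algorithm: the frame length, thresholds, and the choice of zero-crossing rate plus short-time energy are my own assumptions.

```python
# Rough sketch: label short frames of a mono signal as "voiced" (tonal,
# vowel-like), "unvoiced" (noise-like, consonant-like), or "silence",
# using short-time energy and zero-crossing rate. All parameters are
# illustrative assumptions.
import numpy as np

def classify_frames(signal, sample_rate=16000, frame_ms=20):
    frame_len = int(sample_rate * frame_ms / 1000)
    labels = []
    for i in range(len(signal) // frame_len):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        energy = np.mean(frame ** 2)                          # frame loudness
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2    # zero crossings per sample
        if energy < 1e-5:
            labels.append("silence")
        elif zcr > 0.25:      # many crossings -> noise-like (consonant)
            labels.append("unvoiced")
        else:                 # few crossings, real energy -> tonal (vowel)
            labels.append("voiced")
    return labels

# A 440 Hz tone reads as "voiced", white noise as "unvoiced".
t = np.arange(16000) / 16000
print(classify_frames(0.5 * np.sin(2 * np.pi * 440 * t))[:3])
print(classify_frames(0.1 * np.random.randn(16000))[:3])
```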
     
  6. Audioguru

    New Member

    Dec 20, 2007
    9,411
    896
    Of course vowels and consonants have different frequency ranges.
    Eliminate the low frequencies and voices sound "tinny" but are still understood.
    But eliminate the high-frequency consonants and people are frequently left saying "What did you say?", like on the telephone.

    Un, oo, ee, or, eye, ick, even, ate, nine, en. The count from one to ten as heard by many deaf 'eople because most of the high frequency 'onsonants are mi''ing.

    Deaf people cannot hear a whisper not because it is not loud enough but because it is all high frequency consonants and they cannot hear high frequencies.
     
  7. THE_RB

    AAC Fanatic!

    Feb 11, 2008
    5,435
    1,305
    To the OP: what is it that you actually want, i.e. what is the goal?

    Even if you can detect the consonants from their high-frequency content, they will still get attenuated by the transmission processing and gain control of the medium. Or do you want to detect the consonants at the receiving end and boost them there?

    As others have said, maybe all you need is an extreme treble boost at the receiving end.

    Having said that, it might not work for many people, since hearing impairment often involves high-frequency loss severe enough that even boosting the high frequencies doesn't help; they still can't hear them.
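    For anyone who wants to try the receiving-end treble boost, a bare-bones version is easy to sketch. This is only a demonstration under assumed numbers (3 kHz crossover, 20 dB of boost, 16 kHz sample rate); real hearing-aid fitting uses per-ear prescriptions, not a fixed shelf.

```python
# Rough sketch of a "treble boost at the receiving end": split at ~3 kHz,
# amplify the high band, recombine. Crossover and gain are assumptions.
import numpy as np
from scipy import signal

def treble_boost(audio, sample_rate=16000, crossover_hz=3000, gain_db=20):
    # 4th-order Butterworth high-pass isolates the consonant-heavy band.
    sos = signal.butter(4, crossover_hz, btype="highpass",
                        fs=sample_rate, output="sos")
    highs = signal.sosfilt(sos, audio)
    gain = 10 ** (gain_db / 20)
    boosted = audio + (gain - 1) * highs        # leave lows alone, raise highs
    return boosted / np.max(np.abs(boosted))    # normalize to avoid clipping

# Example: a low "vowel-like" tone mixed with a quiet high "consonant-like" tone.
t = np.arange(16000) / 16000
mix = np.sin(2 * np.pi * 200 * t) + 0.05 * np.sin(2 * np.pi * 5000 * t)
out = treble_boost(mix)
```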
     
  8. Audioguru

    New Member

    Dec 20, 2007
    9,411
    896
    I think that many ordinary (not deaf) people do not like music because they cannot hear the high-frequency harmonics. Then all the low frequencies sound the same: boring.

    I love hearing that every musical instrument sounds different, and that many sound different depending on how they are played. I love to hear the "sizzle" of high frequencies.
     
  9. ramancini8

    Member

    Jul 18, 2012
    442
    118
    There is a system that uses a fast A/D and DSP to detect a tone in speech or music. When the detected tone is sustained it is considered to be feedback or mic squeal, etc., and the tone is suppressed with a DSP filter. The device was made by a small guitar-tuner/audio-electronics company in Gainesville, FL, but I don't remember their name. It might be adaptable to the speech problem.
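    The basic idea behind that kind of box can be sketched without knowing the company's actual design: if the same FFT bin dominates for several consecutive blocks, treat it as a sustained tone and notch it out. Block size, hold count, and notch Q below are all my own assumptions, and this filters the whole buffer offline; a real unit would track the tone and notch in real time.

```python
# Rough sketch of a sustained-tone (feedback/squeal) suppressor.
import numpy as np
from scipy import signal

def suppress_feedback(audio, sample_rate=16000, block=1024, hold_blocks=5):
    dominant_history = []
    out = audio.copy()
    for start in range(0, len(audio) - block, block):
        chunk = audio[start:start + block]
        spectrum = np.abs(np.fft.rfft(chunk * np.hanning(block)))
        dominant_history.append(int(np.argmax(spectrum)))
        # Same bin on top for `hold_blocks` blocks in a row -> sustained tone.
        if (len(dominant_history) >= hold_blocks and
                len(set(dominant_history[-hold_blocks:])) == 1):
            tone_hz = dominant_history[-1] * sample_rate / block
            if 0 < tone_hz < sample_rate / 2:
                b, a = signal.iirnotch(tone_hz, Q=30, fs=sample_rate)
                out = signal.lfilter(b, a, out)  # offline demo: notch whole buffer
    return out
```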
     
  10. thatoneguy

    AAC Fanatic!

    Feb 19, 2009
    6,357
    718
    I'm also nearly deaf. Sibilant sounds may as well not exist for me (the soft 's', the hard 't', etc.); I'm wondering if these are what the OP is looking for. The solution is a bit of compression/expansion of the dynamic range below and above about 3 kHz, plus amplification. This is done in the advanced hearing aids built in the last decade.
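    To make that concrete, here's a toy two-band version of the idea: compress the band above ~3 kHz so the quiet consonants come up in level, apply some make-up gain there, and leave the low band mostly alone. The crossover, threshold, ratio, and make-up gain are illustrative assumptions, not parameters from any commercial hearing aid.

```python
# Rough sketch of two-band dynamic-range compression for speech.
import numpy as np
from scipy import signal

def compress_band(band, threshold=0.05, ratio=3.0):
    # Static downward compression on the sample envelope (no attack/release).
    envelope = np.abs(band) + 1e-9
    gain = np.ones_like(envelope)
    over = envelope > threshold
    gain[over] = threshold * (envelope[over] / threshold) ** (1.0 / ratio) / envelope[over]
    return band * gain

def two_band_aid(audio, sample_rate=16000, crossover_hz=3000, hf_makeup_db=12):
    sos_lo = signal.butter(4, crossover_hz, "lowpass", fs=sample_rate, output="sos")
    sos_hi = signal.butter(4, crossover_hz, "highpass", fs=sample_rate, output="sos")
    lows = signal.sosfilt(sos_lo, audio)
    highs = compress_band(signal.sosfilt(sos_hi, audio)) * 10 ** (hf_makeup_db / 20)
    out = lows + highs
    return out / np.max(np.abs(out))
```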

    The TV anchorman story is also an accurate analogy. FM encoding doesn't help, nor does the post-processing some cheap systems apply to give the bass more 'punch' for their size; those systems are the enemy of the hard of hearing. To simulate it yourself, get a 10-band EQ, set all the sliders left of 1 kHz to +20 dB and all those to the right to -20 dB, and you'll quickly identify the sounds that go missing. Cymbals in music and hard consonants in speech are the first to be lost; it's like listening to people talk underwater, if that makes sense.
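    If you don't have a hardware EQ handy, the same "tilt" can be faked in software. The +/-20 dB figures and the 1 kHz split come from the description above; the filter choice and order are assumptions made just for the demo.

```python
# Rough sketch of the "simulate high-frequency hearing loss" EQ exercise:
# boost everything below ~1 kHz by 20 dB and cut everything above by 20 dB.
import numpy as np
from scipy import signal

def simulate_hf_loss(audio, sample_rate=16000, crossover_hz=1000, tilt_db=20):
    sos_low = signal.butter(4, crossover_hz, btype="lowpass",
                            fs=sample_rate, output="sos")
    sos_high = signal.butter(4, crossover_hz, btype="highpass",
                             fs=sample_rate, output="sos")
    lows = signal.sosfilt(sos_low, audio)
    highs = signal.sosfilt(sos_high, audio)
    boost = 10 ** (tilt_db / 20)
    cut = 10 ** (-tilt_db / 20)
    out = boost * lows + cut * highs     # +20 dB below, -20 dB above the split
    return out / np.max(np.abs(out))     # normalize for playback
```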
     
  11. Audioguru

    New Member

    Dec 20, 2007
    9,411
    896
    If you ever had good hearing, did you ever hear a "throat" microphone that is used underwater? It produces only the groans of vowels: A, E, I, O and U.
     
  12. thatoneguy

    AAC Fanatic!

    Feb 19, 2009
    6,357
    718
    Yeah, it helped me get used to being deaf in a way. Throat mics on land, not underwater. My hearing started going away in my mid-20s. When your ears are underwater or filled with water, those are the only sounds you hear as well.
     