
Cochlear Implant Speech Processor

This example shows how to simulate the design of a cochlear implant that can be placed in the inner ear of a profoundly deaf person to restore partial hearing. Signal processing is used in cochlear implant development to convert sound to electrical pulses. The pulses can bypass the damaged parts of a deaf person's ear and be transmitted to the brain to provide partial hearing.

This example highlights some of the choices made when designing cochlear implant speech processors that can be modeled using the DSP System Toolbox™. In particular, it shows the benefits of using a cascaded multirate, multistage FIR filter bank instead of a parallel, single-rate, second-order-section IIR filter bank.

Human Hearing

Converting sound into something the human brain can understand involves the inner, middle, and outer ear, the hair cells, the neurons, and the central nervous system. When a sound is made, the outer ear picks up acoustic waves, which are converted into mechanical vibrations by tiny bones in the middle ear. The vibrations move to the inner ear, where they travel through fluid in a snail-shaped structure called the cochlea. The fluid displaces different points along the basilar membrane of the cochlea, and these displacements encode the frequency information of the acoustic signal. A schematic of the membrane is shown here (not drawn to scale).

Frequency Sensitivity in the Cochlea

Different frequencies cause the membrane to displace maximally at different positions. Low frequencies cause the membrane to be displaced near its apex, while high frequencies stimulate the membrane at its base. The amplitude of the membrane's displacement at a particular point is proportional to the amplitude of the frequency component that excites it. When a sound is composed of many frequencies, the basilar membrane is displaced at multiple points. In this way the cochlea separates complex sounds into frequency components.

Each region of the basilar membrane is attached to hair cells that bend in proportion to the displacement of the membrane. The bending causes an electrochemical reaction that stimulates neurons to communicate the sound information to the brain through the central nervous system.

Alleviating Deafness with Cochlear Implants

Deafness is most often caused by degeneration or loss of hair cells in the inner ear, rather than by a problem with the associated neurons. This means that if the neurons can be stimulated by a means other than hair cells, some hearing can be restored. A cochlear implant does just that: it electrically stimulates the neurons directly to provide information about sound to the brain.

Signal processing helps to solve the problem of converting acoustic waves to electrical impulses. Multichannel cochlear implants have the following components in common:

  • A microphone to pick up sound

  • A signal processor to convert acoustic waves to electrical signals

  • A transmitter

  • A bank of electrodes that receive the electrical signals from the transmitter and then stimulate the auditory nerves

Just as the basilar membrane of the cochlea resolves a wave into its component frequencies, the signal processor in a cochlear implant divides an acoustic signal into component frequencies, each of which is then transmitted to an electrode. The electrodes are surgically implanted into the cochlea of the deaf person in such a way that each one stimulates the appropriate region of the cochlea for the frequency it is transmitting. Electrodes transmitting high-frequency (high-pitched) signals are placed near the base of the cochlea, while those transmitting low-frequency (low-pitched) signals are placed near the apex. Nerve fibers in the vicinity of the electrodes are stimulated and relay the information to the brain. Loud sounds produce high-amplitude electrical pulses that excite a greater number of nerve fibers, while quiet sounds excite fewer. In this way, a cochlear implant can convey to the brain information about both the frequencies and the amplitudes of the components that make up a sound.

Exploring the Example

The block diagram at the top of the model represents a cochlear implant speech processor, from the microphone that picks up the sound (the Input Source block) to the electrical pulses that are generated. The frequencies increase in pitch from channel 0, which transmits the lowest frequency, to channel 7, which transmits the highest.

To hear the original input signal, double-click the Original Signal block at the bottom of the model. To hear the output signal of the simulated cochlear implant, double-click the Reconstructed Signal block.

There are a number of changes you can make to the model to see how different variables affect the output of the cochlear implant speech processor. Remember that after you make a change, you must rerun the model to implement the change before you listen to the reconstructed signal again.

Simultaneous Versus Interleaved Playback

Research has shown that about eight frequency channels are necessary for an implant to provide good auditory understanding for a cochlear implant user. Beyond eight channels, the reconstructed signal usually does not improve enough to justify the added complexity. Therefore, this example resolves the input signal into eight frequency components, each of which drives one channel of electrical pulses.

The Speech Synthesized From Generated Pulses block at the bottom left of the model allows you to play the electrical channels either simultaneously or sequentially. Cochlear implant users often experience inferior results with simultaneous stimulation, because the electrical pulses interact with each other and cause interference. Emitting the pulses in an interleaved manner mitigates this problem for many people. You can toggle the synthesis mode of the Speech Synthesized From Generated Pulses block to hear the difference between these two modes. Zoom in on the Time Scope block to observe that the pulses are interleaved.
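
The following MATLAB sketch illustrates the difference between the two stimulation patterns by building simultaneous and interleaved pulse trains from a set of channel envelopes. The channel count matches the example, but the frame length, pulse period, and random stand-in envelopes are assumptions chosen for illustration, not values taken from the model.

    % Simultaneous vs. interleaved pulse trains (illustrative values only)
    nChan = 8;                       % number of electrode channels
    N = 160;                         % samples per stimulation frame (assumed)
    env = rand(nChan, N);            % stand-in for the subband envelopes
    pulsePeriod = 8;                 % samples between pulses on one channel
    simultaneous = zeros(nChan, N);
    interleaved  = zeros(nChan, N);
    for ch = 1:nChan
        % Simultaneous: every channel pulses at the same instants
        simultaneous(ch, 1:pulsePeriod:N) = env(ch, 1:pulsePeriod:N);
        % Interleaved: each channel is offset by one sample, so no two
        % electrodes ever fire at the same instant (pulsePeriod >= nChan)
        interleaved(ch, ch:pulsePeriod:N) = env(ch, ch:pulsePeriod:N);
    end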

Adjusting for Noisy Environments

Noise presents a significant challenge to cochlear implant users. Select the Add noise parameter in the Input Source block to simulate the effects of a noisy environment on the reconstructed signal, and observe that the signal becomes difficult to hear. The Denoise block in the model uses a Soft Threshold block to attempt to remove noise from the signal. When the Denoise parameter in the Denoise block is selected, you can listen to the reconstructed signal and observe that not all of the noise is removed. There is no perfect solution to the noise problem, and the results afforded by any denoising technology must be weighed against its cost.
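
For reference, soft thresholding shrinks each sample toward zero by a fixed threshold and zeroes anything smaller than the threshold, which suppresses low-level noise while keeping the larger speech components. This minimal MATLAB sketch shows the operation; the threshold value and the test signal are arbitrary assumptions, and the Soft Threshold block's actual parameters may differ.

    t = 0.1;                                     % assumed noise threshold
    n = 0:499;
    x = 0.5*sin(2*pi*n/50) + 0.1*randn(1, 500);  % noisy test signal
    y = sign(x) .* max(abs(x) - t, 0);           % shrink each sample toward
                                                 % zero by t (soft threshold)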

Signal Processing Strategy

The purpose of the Filter Bank Signal Processing block is to decompose the input speech signal into eight overlapping subbands. Speech signals carry more information in their lower frequencies than in their higher frequencies. To get as much resolution as possible where the most information is contained, the subbands are spaced so that the lower-frequency bands are narrower than the higher-frequency bands. In this example, the four low-frequency bands are equally spaced, while each of the four remaining high-frequency bands is twice the bandwidth of its lower-frequency neighbor. To examine the frequency contents of the eight filter banks, run the model using the Chirp source type in the Input Source block.
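
The spacing rule can be written down directly. In this MATLAB sketch the starting edge and the base bandwidth are assumed values chosen only to illustrate the layout; the shipped model's exact band edges may differ.

    f0 = 150;                             % assumed lower edge of band 0, Hz
    w  = 200;                             % assumed width of each low band, Hz
    widths = [w w w w 2*w 4*w 8*w 16*w];  % four equal bands, then each band
                                          % doubles its neighbor's bandwidth
    edges  = f0 + [0 cumsum(widths)];     % edges(k) and edges(k+1) bound band k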

Two filter bank implementations are illustrated in this example: a parallel, single-rate, second-order-section IIR filter bank and a cascaded, multirate, multistage FIR filter bank. Double-click the Design Filter Banks button to examine their design and frequency specifications.

Parallel single-rate SOS IIR filter bank: In this bank, the sixth-order IIR filters are implemented as second-order sections (SOS). Notice that the DSP System Toolbox™ scale function is used to obtain optimal scaling gains, which is particularly important for the fixed-point version of this example. The eight filters run in parallel at the input signal rate. You can look at their frequency responses by double-clicking the Plot IIR Filter Bank Response button.
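
As a sketch of how one such filter might be designed, the following MATLAB code builds a sixth-order Butterworth bandpass filter and factors it into second-order sections. The sample rate, band edges, and design method are assumptions for illustration; the shipped example additionally applies the scale function to optimize the per-section gains for fixed-point operation.

    fs = 16000;                           % assumed sample rate, Hz
    band = [350 550];                     % assumed edges of one subband, Hz
    [z, p, k] = butter(3, band/(fs/2));   % order-3 prototype gives a
                                          % sixth-order bandpass filter
    [sos, g] = zp2sos(z, p, k);           % factor into second-order sections
    y = g * sosfilt(sos, randn(1, 1000)); % run a test signal through the
                                          % SOS chain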

Cascaded multirate multistage FIR filter bank: The design of this filter bank is based on an approach that combines downsampling and filtering at each filter stage. The overall filter response for each subband is obtained by cascading its components. Double-click the Design Filter Banks button to examine how design functions from the DSP System Toolbox are used in constructing these filter banks.

Since downsampling is applied at each filter stage, the later stages run at a fraction of the input signal rate. For example, the last filter stages run at one-eighth of the input signal rate. Consequently, this design is well suited to implementation on the low-power DSPs with limited processing cycles that are used in cochlear implant speech processors. You can look at the frequency responses for this filter bank by double-clicking the Plot FIR Filter Bank Response button. Notice that this design produces sharper and flatter subband definition compared to the parallel single-rate SOS IIR filter bank. This is another benefit of the multirate, multistage filter design approach. For a related example, see "Multistage Design of Decimators/Interpolators" in the DSP System Toolbox FIR filter design examples.
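
A minimal MATLAB sketch of the multirate idea follows. Each lowpass-and-decimate stage halves the working rate, so after three stages the narrow low-frequency subbands can be filtered at one-eighth of the input rate. The filter order and cutoff are illustrative assumptions, not the model's actual stage designs.

    fs = 16000;                            % assumed input sample rate, Hz
    x  = randn(1, 4096);                   % stand-in input signal
    h  = fir1(64, 0.45);                   % lowpass, cutoff just below half
                                           % the current Nyquist rate
    s1 = downsample(filter(h, 1, x),  2);  % after stage 1 the rate is fs/2
    s2 = downsample(filter(h, 1, s1), 2);  % after stage 2 the rate is fs/4
    s3 = downsample(filter(h, 1, s2), 2);  % after stage 3 the rate is fs/8,
                                           % where the four narrow low bands
                                           % are filtered cheaply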

Available Example Versions

Floating-point version:

Fixed-point version:

