Relatively little research has examined how sound-processing strategy affects the rate and extent of perceptual learning, that is, change in performance over time with training, in listeners with cochlear implants (CIs). The question is further complicated by the fact that the best outcomes are achieved with CIs in both ears, now the standard of care for individuals with bilateral hearing loss. With two CIs, the devices can be programmed with identical or complementary frequency cues. CIs programmed with complementary cues across ears may improve spectral resolution and clarity, provided patients can integrate the two different CI signals. Understanding how variations in sound-processing strategy affect learning of speech recognition within and across the two ears will guide future clinical CI programming, enhance patient counseling, and help improve outcomes. CI simulations (vocoders) are invaluable tools for examining the effects of such processing variations in normal-hearing listeners.
My study will use different CI simulations to examine CI programming strategies and their effect on learning of speech recognition over time during a training protocol with vowels and consonants. The simulations differ in how they distribute information across the ears (complementary versus identical cues in each ear) and in whether they shift frequency ranges to different neural regions of the cochlea, a shift that is common with CIs and varies with surgical electrode placement and across manufacturers.
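The two manipulations described above (band allocation across ears and frequency shifting) can be sketched as follows. This is a minimal illustrative sketch, not the study's actual vocoder implementation: the function names, the log-spaced channel edges, the interleaved "complementary" assignment, and the use of Greenwood's place-frequency function with a hypothetical 3 mm basalward offset are all assumptions for demonstration.

```python
import numpy as np

def band_edges(n_bands, f_lo=100.0, f_hi=8000.0):
    # Logarithmically spaced analysis-channel edges in Hz
    # (a common vocoder choice; specific range is an assumption).
    return np.geomspace(f_lo, f_hi, n_bands + 1)

def assign_bands(n_bands, mode="identical"):
    # 'identical': each ear receives every channel.
    # 'complementary': channels are interleaved across the two ears,
    # so each ear carries half of the spectral information.
    channels = list(range(n_bands))
    if mode == "identical":
        return channels, channels
    return channels[0::2], channels[1::2]

def shift_edges(edges, shift_mm=3.0):
    # Simulate a basalward place shift by moving carrier bands up in
    # frequency along Greenwood's place-frequency map of the cochlea.
    # Constants are Greenwood's human-cochlea parameters; shift_mm is
    # a hypothetical electrode-insertion offset chosen for illustration.
    A, a, k, length = 165.4, 2.1, 0.88, 35.0
    place_mm = (length / a) * np.log10(np.asarray(edges) / A + k)  # distance from apex
    return A * (10.0 ** (a * (place_mm + shift_mm) / length) - k)
```

For example, with 8 channels, `assign_bands(8, "complementary")` sends channels 0, 2, 4, 6 to one ear and 1, 3, 5, 7 to the other, while `shift_edges` returns uniformly higher carrier frequencies, mimicking the apical-to-basal mismatch introduced by a shallow electrode insertion.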
This study aims to address the following research questions:
- Can normal-hearing listeners integrate CI simulations with complementary cues across ears for better speech understanding?
- Does speech perception with training differ for CI simulations with identical or complementary cues across ears?
- Does perceptual learning with training differ for CI simulations with or without frequencies shifted to different places along the cochlea (the organ of hearing) and auditory nerve?