Blog — Hearing Health Foundation


Lasting Effects From Head and Brain Injury

In our research “Patient‐Reported Auditory Handicap Measures Following Mild Traumatic Brain Injury,” published in The Laryngoscope, we examined auditory complaints following traumatic brain injury, as well as changes that occur to the peripheral vestibular system in the postmortem setting.


Developing Better Tests for Discovering “Hidden” Hearing Loss

Hidden hearing damage can hypothetically still affect hearing in everyday noisy environments such as crowded restaurants and busy streets. Therefore, it is important to develop tests to detect such damage.


Detailing the Relationships Between Auditory Processing and Cognitive-Linguistic Abilities in Children

According to our framework, cognitive and linguistic factors are included along with auditory factors as potential sources of deficits that may contribute individually or in combination to cause listening difficulties in children.


Introducing the 2018 Emerging Research Grantees

Our grantees’ research investigations seek to solve specific auditory and vestibular problems such as declines in complex sound processing in age-related hearing loss (presbycusis), ototoxicity caused by the life-saving chemotherapy drug cisplatin, and noise-induced hearing loss.


New Research Shows Hearing Aids Improve Brain Function and Memory in Older Adults

New research from the University of Maryland (UMD) Department of Hearing and Speech Sciences (HESP) shows that the use of hearing aids not only restores the capacity to hear, but can improve brain function and working memory.


New Data-Driven Analysis Procedure for Diagnostic Hearing Test

By Carol Stoll

Stimulus frequency otoacoustic emissions (SFOAEs) are sounds generated by the inner ear in response to a pure-tone stimulus. Hearing tests that measure SFOAEs are noninvasive and effective for those who are unable to participate in conventional behavioral hearing tests, such as infants and young children. They also give valuable insight into cochlear function and can be used to diagnose specific types and causes of hearing loss. Though SFOAEs are simpler to interpret than other types of otoacoustic emissions, it is difficult to separate them from the same-frequency stimulus and from background noise caused by patient movement and microphone slippage in the ear canal.

2014 Emerging Research Grants (ERG) recipient Srikanta Mishra, Ph.D., and colleagues have addressed SFOAE analysis issues by developing an efficient data-driven analysis procedure. Their new method identifies and rejects irrelevant background noise such as breathing, yawning, and subtle movements of the subject and/or microphone cable. The researchers used their new analysis procedure to characterize the standard features of SFOAEs in typical-hearing young adults and published their results in Hearing Research.

Mishra and team chose 50 typical-hearing young adults to participate in their study. Instead of using a discrete-tone procedure that measures SFOAEs one frequency at a time, they used a more efficient method: a single sweep-tone stimulus that glides smoothly between 500 and 4,000 Hz, in both upward and downward directions, over 16 or 24 seconds. The sweep tones were interspersed with suppressor tones that reduce the response to the previous tone. The tester manually paused and restarted the sweep recording upon detecting background noise from the subject's movements.
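To make the sweep-tone stimulus concrete, here is a minimal sketch (in Python) of how an exponential frequency sweep between 500 and 4,000 Hz could be generated. The sample rate, function name, and parameter choices are illustrative assumptions on our part, not the stimulus code used in the study.

    import numpy as np

    def exponential_sweep(f0=500.0, f1=4000.0, duration=16.0, fs=44100):
        # Exponential (log-frequency) sweep from f0 to f1 Hz.
        # Instantaneous phase: 2*pi*f0*(T/ln(f1/f0))*(exp(t*ln(f1/f0)/T) - 1)
        t = np.arange(int(duration * fs)) / fs
        k = np.log(f1 / f0)
        phase = 2 * np.pi * f0 * (duration / k) * np.expm1(t * k / duration)
        return np.sin(phase)

    sweep_up = exponential_sweep()                       # 500 -> 4,000 Hz in 16 s
    sweep_down = exponential_sweep(f0=4000.0, f1=500.0)  # 4,000 -> 500 Hz

Swapping the start and end frequencies gives the downward sweep, and lengthening the duration to 24 seconds gives the slower variant described above.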


The SFOAEs generated were analyzed using a least-squares fit (LSF) model and a series of algorithms based on statistical analysis of the data. This model objectively minimized the potential error from extraneous noise. Conventional SFOAE features such as level, noise floor, and signal-to-noise ratio (SNR) were described for the typical-hearing subjects.
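As a rough illustration of the least-squares idea (a sketch under assumed variable names and window handling, not the authors' published algorithm), the emission at a known instantaneous frequency can be estimated from a short recording window by solving for the best-fitting sine and cosine coefficients:

    import numpy as np

    def lsf_component(segment, freq, fs):
        # Least-squares fit of one sinusoid at `freq` Hz to a short
        # windowed segment; returns the fitted amplitude and phase.
        t = np.arange(len(segment)) / fs
        X = np.column_stack([np.cos(2 * np.pi * freq * t),
                             np.sin(2 * np.pi * freq * t)])
        (a, b), *_ = np.linalg.lstsq(X, segment, rcond=None)
        return np.hypot(a, b), np.arctan2(-b, a)

    def snr_db(signal_amp, noise_amp):
        # SNR in dB; here the noise floor is assumed to come from
        # fitting the same model at nearby off-frequency bins.
        return 20 * np.log10(signal_amp / noise_amp)

Windows whose residual noise exceeds a statistical threshold would be rejected before averaging, which is broadly the role the noise-rejection algorithms play in the published procedure.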

Overall, the results of this study demonstrate the effectiveness of the automated noise-rejection procedure for sweep-tone–evoked SFOAEs in adults. The SFOAE features characterized in this large group of typical-hearing young adults should be useful for developing tests of cochlear function for both the clinic and the laboratory.

Srikanta Mishra, Ph.D., was a 2014 Emerging Research Grants scientist and a General Grand Chapter Royal Arch Masons International award recipient. For more, see “Sweep-tone evoked stimulus frequency otoacoustic emissions in humans: Development of a noise-rejection algorithm and normative features” in Hearing Research.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.


Cortical Alpha Oscillations Predict Speech Intelligibility

By Andrew Dimitrijevic, Ph.D.

Hearing Health Foundation Emerging Research Grants recipient Andrew Dimitrijevic, Ph.D., and colleagues recently published “Cortical Alpha Oscillations Predict Speech Intelligibility” in the journal Frontiers in Human Neuroscience.

The scientists measured brain activity that originates from the cortex, known as alpha rhythms. Previous research has linked these rhythms to sensory processes involving working memory and attention, two crucial tasks for listening to speech in noise. However, no previous research had examined alpha rhythms directly during a clinical speech-in-noise perception task. The purpose of this study was to measure alpha rhythms during attentive listening in a commonly used speech-in-noise task, known as digits-in-noise (DiN), to better understand the neural processes associated with hearing speech in noise.

Fourteen typical-hearing young adult subjects performed the DiN test while wearing electrode caps to measure alpha rhythms. All subjects completed the task in active and passive listening conditions. The active condition mimicked attentive listening and asked the subject to repeat the digits heard in varying levels of background noise. In the passive condition, the subjects were instructed to ignore the digits and watch a movie of their choice, with captions and no audio.
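As a hedged sketch of how alpha activity is commonly quantified from such recordings (the band edges, filter order, and data shapes below are generic EEG conventions, not the authors' pipeline), one can band-pass each trial around 8–12 Hz and take the power of the analytic-signal envelope:

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def alpha_power(eeg, fs, band=(8.0, 12.0)):
        # eeg: (n_trials, n_samples) array; returns mean alpha-band
        # power per trial.
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)],
                      btype="bandpass")
        filtered = filtfilt(b, a, eeg, axis=-1)
        envelope = np.abs(hilbert(filtered, axis=-1))
        return (envelope ** 2).mean(axis=-1)

Comparing this per-trial output between active and passive conditions, or between correct and incorrect trials, mirrors the kinds of contrasts reported below.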

Two key findings emerged from this study regarding the influence of attention, individual variability, and the predictability of correct recall.

First, the authors concluded that the active condition produced alpha rhythms, while passive listening yielded no such activity. Selective auditory attention can therefore be indexed through this measurement. This result also illustrates that these alpha rhythms arise from neural processes associated with selective attention, rather than from the physical characteristics of sound. To the authors’ knowledge, these differences between passive and active conditions have not previously been reported.

Second, all participants showed similar brain activation that predicted when one was about to make a mistake on the DiN task. Specifically, the magnitude of one particular aspect of the alpha rhythms correlated with comprehension: a larger magnitude was observed on correct trials relative to incorrect trials. This finding was consistent throughout the study and has great potential for clinical use.

Dimitrijevic and his colleagues' novel findings advance the field's understanding of the neural activity related to speech-in-noise tasks. Their work informs the assessment of clinical populations with speech-in-noise deficits, such as those with auditory neuropathy spectrum disorder or central auditory processing disorder (CAPD).

Future research will attempt to use this alpha rhythms paradigm in typically developing children and those with CAPD. Ultimately, the scientists hope to develop a clinical tool to better assess listening in a more real-world situation, such as in the presence of background noise, to augment traditional audiological testing.

Andrew Dimitrijevic, Ph.D., is a 2015 Emerging Research Grantee and General Grand Chapter Royal Arch Masons International award recipient. Hearing Health Foundation would like to thank the Royal Arch Masons for their generous contributions to Emerging Research Grants scientists working in the area of central auditory processing disorders (CAPD). We appreciate their ongoing commitment to funding CAPD research.

We need your help supporting innovative hearing and balance science. Please make a contribution today.


Introducing HHF's 2016 Emerging Research Grant Recipients

By Morgan Leppla

We are excited to announce the 2016 Emerging Research Grant recipients. This year, HHF funded five research areas:

  • Central Auditory Processing Disorder (CAPD): research investigating a range of disorders within the ear and brain that affect the processing of auditory information. HHF thanks the General Grand Chapter Royal Arch Masons International for enabling us to fund four grants in the area of CAPD. 
     
  • Hyperacusis: research that explores the mechanisms, causes, and diagnosis of loudness intolerance. One grant was generously funded by Hyperacusis Research.
     
  • Ménière’s Disease: research that investigates this inner ear and balance disorder. One grant was funded by the Estate of Howard F. Schum.
     
  • Stria: research that furthers our understanding of the stria vascularis, strial atrophy, and/or development of the stria. One grant was funded by an anonymous family foundation interested in this research.
     
  • Tinnitus: research to understand the perception of sound in the ear in the absence of an acoustic stimulus. Two grants were awarded, thanks to the generosity of the Les Paul Foundation and the Barbara Epstein Foundation.

To learn more about our 2016 ERG grantees and their research goals, please visit hhf.org/2016_researchers.

HHF is also currently planning for our 2017 ERG grant cycle. If you're interested in naming a research grant in any discipline within the hearing and balance space, please contact development@hhf.org.


Neural Sensitivity to Binaural Cues with Bilateral Cochlear Implants

By Massachusetts Eye and Ear/Harvard Medical School

Many profoundly deaf people wearing cochlear implants (CIs) still face challenges in everyday situations, such as understanding conversations in noise. Even with CIs in both ears, they have difficulty making full use of subtle differences in the sounds reaching the two ears, known as interaural time differences (ITDs), to identify where a sound is coming from. This problem is especially acute at the high stimulation rates used in clinical CI processors.
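For readers unfamiliar with ITDs: in normal acoustic hearing, the cue can be pictured as the lag that best aligns the two ears' signals. The sketch below estimates it by cross-correlation; the sample rate and test signal are illustrative assumptions, and clinical CI processors operate on electrical pulse trains rather than raw waveforms.

    import numpy as np

    def estimate_itd(left, right, fs):
        # Lag (in seconds) that maximizes the cross-correlation of the
        # two ear signals; negative means the left signal leads.
        corr = np.correlate(left, right, mode="full")
        lag = np.argmax(corr) - (len(right) - 1)
        return lag / fs

    rng = np.random.default_rng(0)
    sig = rng.standard_normal(2400)      # 50 ms of noise at 48 kHz
    delay = 24                           # 500 microseconds at 48 kHz
    left, right = sig[delay:], sig[:-delay]
    print(estimate_itd(left, right, fs=48000))  # approx. -0.0005 s

Human listeners can resolve ITDs on the order of tens of microseconds, which is why degraded ITD sensitivity at high pulse rates matters for sound localization.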

A team of researchers from Massachusetts Eye and Ear/Harvard Medical School, including past Emerging Research Grants recipient Yoojin Chung, Ph.D., studied how neurons in the auditory midbrain encode binaural cues delivered by bilateral CIs in an animal model. They found that the majority of neurons in the auditory midbrain were sensitive to ITDs; however, this sensitivity degraded with increasing pulse rate. The degradation paralleled the pulse-rate dependence of perceptual limits in human CI users.

This study provides a better understanding of neural mechanisms underlying the limitation of current clinical bilateral CIs and suggests directions for improvement such as delivering ITD information in low-rate pulse trains.

The full paper was published in The Journal of Neuroscience. This article was republished with permission of the Massachusetts Eye and Ear/Harvard Medical School.

Yoojin Chung, Ph.D., was a 2012 and 2013 General Grand Chapter Royal Arch Masons International award recipient through our Emerging Research Grants program. Hearing Health Foundation would like to thank the Royal Arch Masons for their generous contributions to Emerging Research Grantees working in the area of central auditory processing disorders (CAPD). We appreciate their ongoing commitment to funding CAPD research.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.


Defining Auditory-Visual Objects

By Molly McElroy, Ph.D.

If you've ever been to a crowded bar, you may have noticed that it's easier to hear your friend if you watch his face and mouth movements. And if you want to pick out the melody of the first violin in a string quartet, it helps to watch the strokes of the player's bow.

I-LABS faculty member Adrian KC Lee and co-authors use these examples to illustrate auditory-visual objects, the topic of the researchers' recently published opinion paper in the prestigious journal Trends in Neurosciences.

Lee, who is an associate professor in the UW Department of Speech & Hearing Sciences, studies brain mechanisms that underlie hearing. With an engineering background, Lee is particularly interested in understanding how to improve hearing prosthetics.

Previous I-LABS research has shown that audio-visual processing is evident as early as 18 weeks of age, suggesting it is a fundamental part of how the human brain processes speech. Those findings, published in 1982 in the journal Science, showed that infants understand the correspondence between the sight and the sound of language movements.

In the new paper, Lee and co-authors Jennifer Bizley, of University College London, and Ross Maddox, of I-LABS, discuss how the brain integrates auditory and visual information—a type of multisensory processing that has been referred to by various terms but with no clear delineation.

The researchers wrote the paper to provide their field with a more standard nomenclature for what an audio-visual object is and give experimental paradigms for testing it.

“That we combine sounds and visual stimuli in our brains is typically taken for granted, but the specifics of how we do that aren’t really known,” said Maddox, a postdoctoral researcher working with Lee. “Before we can figure that out we need a common framework for talking about these issues. That’s what we hoped to provide in this piece.”

Trends in Neurosciences is a leading peer-reviewed journal that publishes invited articles from experts in the field, focusing on topics of current interest or debate in neuroscience.

Multisensory, especially audio-visual, work is important for several reasons, Maddox said. Being able to see someone talking offers huge performance improvements, which is relevant to making hearing aids that take visual information into account and to studying how people with developmental disorders like autism spectrum disorders or central auditory processing disorders (CAPD) may combine audio-visual information differently.

"The issues are debated because we think studying audio-visual phenomena would benefit from new paradigms, and here we hoped to lay out a framework for those paradigms based on hypotheses of how the brain functions," Maddox said.

Read the full paper online. This article was republished with permission of the Institute for Learning & Brain Sciences at the University of Washington.

Ross Maddox, Ph.D., was a 2013 General Grand Chapter Royal Arch Masons International award recipient. Hearing Health Foundation would like to thank the Royal Arch Masons for their generous contributions to Emerging Research Grantees working in the area of central auditory processing disorders (CAPD). We appreciate their ongoing commitment to funding CAPD research.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.
