2025

Franziska Auer, Ph.D.

New York University

Defining myelin’s role in developing vestibular circuits

The vestibular system serves a vital purpose: stabilizing posture and gaze by producing corrective head and body movements. Vestibular circuits are myelinated early in life, suggesting a crucial role in proper balance development. In addition, balance, posture, and gait deficits are common symptoms for patients affected by diseases in which myelin breaks down. Myelination alters conduction velocity and is thus crucial for circuit function. Recent studies have shown that the formation of new myelin plays an essential role in memory formation and learning. The overall goal of this project is to define a role for myelin in vestibular circuit development and postural behaviors. We will investigate the consequences of the loss of vestibular myelin on postural development. The work will also establish and validate transformative new tools to selectively disrupt the myelination of genetically defined subsets of neurons. I will test the role of myelin in different circuits for postural behavior, locomotion, and coordination in order to understand myelin’s contributions to circuit function. These novel tools will also permit future investigations into the role of myelin in auditory circuits and the consequences for hearing health.

Timothy Balmer, Ph.D.

Arizona State University

The role of NMDA receptors in vestibular circuit function and balance

The vestibular cerebellum is the part of the brain that integrates signals conveying head, body, and eye movements to coordinate balance. When this neural processing is disrupted by central or peripheral vestibular disorders, profound instability, vertigo, and balance errors result. We lack a basic understanding of the development and physiology of the first vestibular processing region in the cerebellum, the granule cell layer. This lack of knowledge is a major roadblock to the development of therapies that could ameliorate peripheral disorders such as Ménière’s disease. This project will examine a specific understudied cell type in the granule cell layer of the cerebellum, unipolar brush cells (UBCs), focusing in particular on the cells’ glutamate receptors, which control synaptic communication. It remains unclear how glutamate receptors assume their form and function during development. We hypothesize that the NMDA-type glutamate receptors expressed by developing UBCs are necessary both for the development of these cells’ remarkable dendritic brush, which slows and controls communication across the synapse, and for the cells’ function in the circuit.

Divya Chari, M.D.

Mass Eye and Ear

Auditory and vestibular phenotype characterization of a Ménière’s disease model in humans and mice with X-linked hypophosphatemia

Our group has begun to segregate the pool of Ménière’s disease (MD) patients into distinct subtypes based upon specific clinical characteristics and morphologic features of the inner ear endolymphatic sac and vestibular aqueduct. One cohort—designated MDhp—demonstrated on histopathology and radiologic imaging an incompletely developed (hypoplastic) endolymphatic sac and vestibular aqueduct and had a high comorbid prevalence of X-linked hypophosphatemia (XLH). XLH is a genetic disorder of phosphate metabolism and bone growth caused by a loss-of-function variant in the Phex gene. The high coincidence of XLH in the MDhp cohort led to the hypothesis that the two disorders may have etiologic similarities. Our preliminary studies suggest that the Phex gene-deficient XLH mouse also recapitulates clinical features of the MDhp cohort: hearing loss and balance dysfunction, endolymphatic hydrops, and hypoplasia of the endolymphatic sac and vestibular aqueduct. During this project we will determine whether the inner ear phenotype of humans with XLH generally resembles that of MDhp, and whether the XLH mouse model also exhibits an MDhp phenotype. Characterizing the MDhp phenotype within the context of patients with XLH and a Phex-deficient mouse model is a critical first step toward investigating the pathophysiology of MD and elucidating the genetic etiology of the MDhp subgroup. This research may demonstrate that the Phex gene-deficient mouse can be used as a reliable animal model of the MDhp subtype, which will pave the way for future studies of the role of the Phex gene mutation in MD patients and, more generally, the genetic basis of this complex disease.

Margaret Cychosz, Ph.D.

University of California, Los Angeles

Leveraging automatic speech recognition algorithms to understand how the home listening environment impacts spoken language development among infants with cochlear implants

To develop spoken language, infants must rapidly process thousands of words spoken by caregivers around them each day. This is a daunting task, even for infants with typical hearing. It is even harder for infants with cochlear implants, as electrical hearing compromises many critical cues for speech perception and language development. The challenges that infants with cochlear implants face have long-term consequences: Starting in early childhood, cochlear implant users perform 1-2 standard deviations below peers with typical hearing on nearly every measure of speech, language, and literacy. My lab investigates how children with hearing loss develop spoken language despite the degraded speech signal that they hear and learn language from. This project addresses the urgent need to identify predictors of speech-language development for pediatric cochlear implant users in infancy.

Amanda Griffin, Ph.D., Au.D.

Boston Children’s Hospital

Toward better assessment of pediatric unilateral hearing loss 

Although it is now more widely understood that children with unilateral hearing loss are at risk for challenges, many appear to adjust well without intervention. The range of options for audiological intervention for children with severe-to-profound hearing loss in only one ear (i.e., single-sided deafness, SSD) has increased markedly in recent years, from no intervention beyond classroom accommodations all the way to cochlear implant (CI) surgery. In the absence of clear data, current practice is based largely on the philosophy and convention at different institutions around the country. The work in our lab aims to improve assessment and management of pediatric unilateral hearing loss. This current project will evaluate the validity of an expanded audiological and neuropsychological test battery in school-aged children with SSD. Performance on test measures will be compared across different subject groups: typical hearing; unaided SSD; SSD with the use of a CROS (contralateral routing of signals) hearing aid; SSD with the use of a cochlear implant. This research will enhance our basic understanding of auditory and non-auditory function in children with untreated and treated SSD, and begin the work needed to translate experimental measures into viable clinical protocols.

Nicole Tin-Lok Jiam, M.D.

Mass Eye and Ear

Age-specific cochlear implant programming for optimal hearing performance

Cochlear implants (CI) offer life-altering hearing restoration for deafened individuals who no longer benefit from hearing aid technologies. Despite advances in CI technology, recipients struggle to process complex sounds in real-world environments, such as speech-in-noise and music. Poor performance results from artifacts of the implants (e.g., adjacent channel interaction, distorted signal input) and age-specific biological differences (e.g., neuronal health, auditory plasticity). Our group determined that children with CIs require a better signal input than adults with CIs to achieve the same level of performance. Additional evidence demonstrates that auditory signal blurring has less impact on performance outcomes in adults. These findings imply that age should be considered when programming a CI. However, current clinical practice largely adopts a one-size-fits-all approach toward CI management and uses programming parameters defined by adult CI users. Our project’s main objective is to understand how to better program CIs in children to improve complex sound processing by taking into account the listening environment (e.g., complex sound processing in a crowded room), differences between age groups, and variations in needs or anatomy between individuals.

HiJee Kang, Ph.D.

Johns Hopkins University

Age-related changes in neural mechanisms in the auditory cortex for learning complex sounds

In everyday environments, we encounter complex acoustic streams, yet we rapidly perceive only relevant information with little conscious effort, such as when having a conversation in a noisy background. With aging, this ability appears to degrade due to disrupted neural mechanisms in the brain. One of the key processes that enable efficient auditory perception is rapid and implicit learning of new sounds through their recurrences, allowing our brains to link auditory streams with relevant memories to perceive meaningful information. This process must be carried out by populations of neurons in relevant brain regions—for hearing, in the auditory cortex. This project focuses on age-related changes in implicit learning. We aim to identify how neuronal activity encodes sensory signals, detects recurring stimuli, and ultimately stores recurring sensory signals in memory. We will use optical imaging and holographic stimulation to identify changes in groups of neurons in the auditory cortex that are involved in these processes. Our goal is to acquire a comprehensive understanding of the neural circuits involved in learning new sounds in a healthy young population as well as to characterize altered neural circuits caused by aging.

Manoj Kumar, Ph.D.

University of Pittsburgh

KCNQ2/3 potassium channel activator mitigates noise-trauma–induced hypersensitivity to sounds in mice

Noise-induced hearing loss (NIHL) is one of the most common causes of hearing disorders. NIHL reduces the auditory sensory information relayed from the cochlea to the brain, including the primary auditory cortex (A1). To compensate for reduced peripheral sensory input, A1 undergoes homeostatic plasticity. Namely, the sound-evoked activity of A1 excitatory principal neurons (PNs) recovers or even surpasses pre-noise trauma levels and exhibits increased response gain (the slope of neuronal responses against sound levels). This increased gain of A1 PNs after NIHL is associated with highly debilitating hearing disorders, such as tinnitus (perception of phantom sounds), hyperacusis (painful perception of sounds), and hypersensitivity to sounds (increased sensitivity to everyday sounds). Despite the high prevalence of these hearing disorders, treatment options are limited to cognitive behavioral therapy and hearing prosthetics with no FDA-approved pharmacotherapeutic options available. Therefore, to aid in the development of pharmacotherapeutic options, it is imperative to 1) develop animal models of these hearing disorders, 2) identify the brain plasticity underlying these hearing disorders, and 3) test potential pharmacotherapy to rehabilitate hearing and brain plasticity after NIHL. Here, we aim to develop a novel mouse model of hypersensitivity to sounds, identify its underlying A1 plasticity, and test pharmacotherapy to mitigate it after NIHL.

Ben-Zheng Li, Ph.D.

University of Colorado

Alterations in the sound localization pathway resulting in hearing deficits: an optogenetic approach

Sound localization is a key function of the brain that enables individuals to detect and focus on specific sound sources in complex acoustic environments. When spatial hearing is impaired, such as in individuals with central hearing loss, it significantly diminishes the ability to communicate effectively in noisy environments, leading to a reduced quality of life. This research aims to advance our understanding of the neural mechanisms underlying sound localization, focusing on how the brain processes very small differences in the timing of sounds reaching each ear (interaural time differences, or ITDs). These differences are processed by a nucleus of the auditory brainstem called the medial superior olive (MSO), which integrates excitatory and inhibitory inputs from both left and right ears with exceptional temporal precision, allowing for the detection of microsecond-level differences in the time of arrival of sounds. By developing a computational model of this process and validating it through optogenetic manipulation of inhibitory inputs in animal models, this project will provide new insights into how alterations in inhibition and myelination affect sound localization. Ultimately, the goal of this research is to contribute to the development of innovative therapeutic strategies aimed at restoring spatial hearing in individuals with hearing impairments, including those with autism and age-related deficits.

Melissa McGovern, Ph.D.

University of Pittsburgh

Hair cell regeneration in the mature cochlea: investigating new models to reprogram cochlear epithelial cells into hair cells 

Sensory hair cells in the inner ear detect mechanical auditory stimulation and convert it into a signal that the brain can interpret. Hair cells are susceptible to damage from loud noises and some medications. Our lab investigates the ability of nonsensory cells in our inner ears to regenerate lost hair cells. We regenerate cells in the ear by converting nonsensory cells into sensory cells through genetic reprogramming: expressing key hair cell-inducing genes in non-hair cells partially converts them into hair cells. There are multiple types of nonsensory cells in the inner ear, each important for different reasons, and they occupy different locations relative to the sensory hair cells. In order to better understand the ability of different groups of cells to restore hearing, we need to be able to isolate different populations of cells. The funded project will allow us to create a new model to target specific nonsensory cells within the inner ear to better understand how these cells can be converted into hair cells. By using this new model, we can specifically investigate cells near the sensory hair cells and understand how they can be reprogrammed. Our lab is also very interested in how the partial loss of genes in the inner ear can affect cellular identities. In addition to targeting specific cells in the ear, we will investigate whether the partial loss of a protein in nonsensory cells may improve their ability to be converted into sensory cells. This information will allow us to further explore possible therapeutic targets for hearing restoration.

Anahita Mehta, Ph.D.

University of Michigan

Effects of age on interactions of acoustic features with timing judgments in auditory sequences

Imagine being at a busy party where everyone is talking at once, yet you can still focus on your friend’s voice. This ability to discern important sounds from noise involves integrating different features, such as the pitch (how high or low a sound is), location, and timing of these sounds. As we age, even with good hearing, this integration may become harder, affecting our ability to understand speech in noisy environments. Our brains must combine these features to make sense of our surroundings, a process known as feature integration. However, it’s not entirely clear how these features interact, especially when they conflict. For example, how does our brain handle mixed signals regarding pitch and sound location?
Previous research shows that when cues from different senses, like hearing and sight, occur simultaneously, our performance improves; when they are out of sync, it becomes harder. Less is known about how our brains integrate conflicting cues within the same sense, such as pitch and spatial location in hearing. Our study aims to explore how this ability changes with age and to develop a simple test of feature integration that is easy to administer, especially for older adults. This research may lead to better rehabilitation strategies, making everyday listening tasks easier for everyone.

Bruna Mussoi, Au.D., Ph.D.

University of Tennessee

Auditory neuroplasticity following experience with cochlear implants

Cochlear implants provide several benefits to older adults, though the amount of benefit varies across people. The greatest improvements in speech understanding usually happen within the first 6 months after implantation. It is generally accepted that these gains in performance result from neural changes in the auditory system, but while there is strong evidence of neural changes following cochlear implantation in children, there is limited evidence in adults with hearing loss in both ears. This study will examine how neural responses change as a function of the amount of cochlear implant use, compared with longstanding hearing aid use. Listeners who are candidates for a cochlear implant (who either decide to pursue implantation or to keep wearing hearing aids) will be tested at several time points, from pre-implantation up to 6 months after implantation. The results of this project will improve our understanding of the impact of cochlear implant use on neural responses in older adults, and of the relationship between those responses and the ability to understand speech.

Wei Sun, Ph.D.

University at Buffalo

FOXG1 gene mutation-caused hyperacusis—a novel model to study hyperacusis

Hyperacusis is a common symptom in children with neurological disorders such as autism spectrum disorder, Williams syndrome, Rett syndrome, and FOXG1 syndrome (FS). The cause of hyperacusis in these neurological disorders is not fully understood. FS, caused by FOXG1 mutation, is a recently defined, rare, and devastating neurodevelopmental disorder. MRI studies show a spectrum of structural brain anomalies in children with FS, including cortical atrophy, hypogenesis of the corpus callosum, and delayed myelination. However, the impact of the FOXG1 mutation on the central auditory system and hyperacusis is largely unknown. Children with FS show signs of hyperacusis, including becoming startled, upset, and even experiencing seizures in response to loud sounds. The mouse model of FOXG1 mutation provides a novel way to study neurological dysfunction in the central auditory system resulting in hyperacusis. In this project, we will use a mouse model developed by colleagues at the University at Buffalo that replicates the gene mutations seen in children with FS. In our preliminary studies, we found that the mutant mice showed a lack of habituation in startle tests and an aversive reaction to loud sounds in the open field test. We also found that cortical neurons showed reduced neural activity and prolonged responses to sound stimuli, suggesting hypoexcitability and a lack of adaptation to sound stimuli. These results point toward a novel neurological model of hyperacusis compared with the current “central gain” theory. Our findings will provide mechanistic insights into the role of the FOXG1 gene in hyperacusis and shed light on potential therapeutic targets to alleviate hyperacusis caused by FS and other neurological disorders.

Osama Tarabichi, M.D., MPH

University of Iowa

The role of inner ear lymphatics in the foreign body response to cochlear implantation
