Defining Auditory-Visual Objects

By Molly McElroy, PhD

If you've ever been to a crowded bar, you may have noticed that it's easier to hear your friend if you watch his face and mouth movements. And if you want to pick out the melody of the first violin in a string quartet, it helps to watch the player's bow strokes.

I-LABS faculty member Adrian KC Lee and co-authors use these examples to illustrate auditory-visual objects, the topic of the researchers' recently published opinion paper in the prestigious journal Trends in Neurosciences.

Lee, who is an associate professor in the UW Department of Speech & Hearing Sciences, studies brain mechanisms that underlie hearing. With an engineering background, Lee is particularly interested in understanding how to improve hearing prosthetics.

Previous I-LABS research has shown that audio-visual processing is evident as early as 18 weeks of age, suggesting it is a fundamental part of how the human brain processes speech. Those findings, published in 1982 in the journal Science, showed that infants recognize the correspondence between the sight and sound of speech movements.

In the new paper, Lee and co-authors Jennifer Bizley, of University College London, and Ross Maddox, of I-LABS, discuss how the brain integrates auditory and visual information, a type of multisensory processing that has gone by various terms but lacks a clear definition.

The researchers wrote the paper to give their field a standard nomenclature for what an auditory-visual object is and to propose experimental paradigms for testing it.

"That we combine sounds and visual stimuli in our brains is typically taken for granted, but the specifics of how we do that aren't really known," said Maddox, a postdoctoral researcher working with Lee. "Before we can figure that out we need a common framework for talking about these issues. That's what we hoped to provide in this piece."

Trends in Neurosciences is a leading peer-reviewed journal that publishes invited articles from prominent experts on topics of current interest or debate in neuroscience.

Multisensory research, especially on audio-visual processing, is important for several reasons, Maddox said. Being able to see a talker's face substantially improves speech understanding, which is relevant both to designing hearing aids that take visual information into account and to studying how people with disorders such as autism spectrum disorder or central auditory processing disorder (CAPD) may combine audio-visual information differently.

"The issues are debated because we think studying audio-visual phenomena would benefit from new paradigms, and here we hoped to lay out a framework for those paradigms based on hypotheses of how the brain functions," Maddox said.

Read the full paper online.

This article was republished with permission of the Institute for Learning & Brain Sciences at the University of Washington.

Ross Maddox, PhD, was a 2013 General Grand Chapter Royal Arch Masons International award recipient. Hearing Health Foundation would like to thank the Royal Arch Masons for their generous contributions to Emerging Research Grantees working in the area of central auditory processing disorders (CAPD). We appreciate their ongoing commitment to funding CAPD research.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

 
 