Impaired neural encoding of naturalistic audiovisual speech in autism
Visual cues from a speaker’s face can significantly improve speech comprehension in noisy environments through multisensory integration (MSI)—the process by which the brain combines auditory and visual inputs. Individuals with Autism Spectrum Disorder (ASD), however, often show atypical MSI, particularly during speech processing, which may contribute to the social communication difficulties central to the diagnosis.
Main Authors: | Theo Vanneau, Michael J. Crosse, John J. Foxe, Sophie Molholm |
---|---|
Format: | Article |
Language: | English |
Published: | Elsevier, 2025-09-01 |
Series: | NeuroImage |
Subjects: | Autism Spectrum Disorder; Multisensory processing; Continuous speech; Audiovisual speech processing; Neural speech tracking; Electroencephalography |
Online Access: | http://www.sciencedirect.com/science/article/pii/S1053811925004008 |
_version_ | 1839607687647592448 |
---|---|
author | Theo Vanneau; Michael J. Crosse; John J. Foxe; Sophie Molholm |
author_facet | Theo Vanneau; Michael J. Crosse; John J. Foxe; Sophie Molholm |
author_sort | Theo Vanneau |
collection | DOAJ |
description | Visual cues from a speaker’s face can significantly improve speech comprehension in noisy environments through multisensory integration (MSI)—the process by which the brain combines auditory and visual inputs. Individuals with Autism Spectrum Disorder (ASD), however, often show atypical MSI, particularly during speech processing, which may contribute to the social communication difficulties central to the diagnosis. Understanding the neural basis of impaired MSI in ASD, especially during naturalistic speech, is critical for developing targeted interventions. Most neurophysiological studies have relied on simplified speech stimuli (e.g., isolated syllables or words), limiting their ecological validity. In this study, we used high-density EEG and linear encoding and decoding models to assess the neural processing of continuous audiovisual speech in adolescents and young adults with ASD (N = 23) and age-matched typically developing controls (N = 19). Participants watched and listened to naturalistic speech under auditory-only, visual-only, and audiovisual conditions, with varying levels of background noise, and were tasked with detecting a target word. Linear models were used to quantify cortical tracking of the speech envelope and phonetic features. In the audiovisual condition, the ASD group showed reduced behavioral performance and weaker neural tracking of both acoustic and phonetic features, relative to controls. In contrast, in the auditory-only condition, increasing background noise reduced behavioral and model performance similarly across groups. These results provide, for the first time, converging behavioral and neurophysiological evidence of impaired multisensory enhancement for continuous, natural speech in ASD. 
Significance Statement: In adverse hearing conditions, seeing a speaker's face and their facial movements enhances speech comprehension through a process called multisensory integration, where the brain combines visual and auditory inputs to facilitate perception and communication. However, individuals with Autism Spectrum Disorder (ASD) often struggle with this process, particularly during speech comprehension. Previous findings using simple, discrete stimuli do not fully explain how the processing of continuous natural multisensory speech is affected in ASD. In our study, we used natural, continuous speech stimuli to compare the neural processing of various speech features in individuals with ASD and typically developing (TD) controls, across auditory and audiovisual conditions with varying levels of background noise. Our findings showed no group differences in the encoding of auditory-alone speech, with both groups similarly affected by increasing levels of noise. However, for audiovisual speech, individuals with ASD displayed reduced neural encoding of both the acoustic envelope and the phonetic features, marking neural processing impairment of continuous audiovisual multisensory speech in autism. |
format | Article |
id | doaj-art-de8baf9eb85e4eff9d3801020a88c9d9 |
institution | Matheson Library |
issn | 1095-9572 |
language | English |
publishDate | 2025-09-01 |
publisher | Elsevier |
record_format | Article |
series | NeuroImage |
spelling | doaj-art-de8baf9eb85e4eff9d3801020a88c9d9 | 2025-08-01T04:44:21Z | eng | Elsevier | NeuroImage | 1095-9572 | 2025-09-01 | Vol. 318, Article 121397 | Impaired neural encoding of naturalistic audiovisual speech in autism
Theo Vanneau: The Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine, Bronx 10461, NY, USA; Department of Pediatrics, Albert Einstein College of Medicine, Bronx 10461, NY, USA
Michael J. Crosse: The Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine, Bronx 10461, NY, USA; SEGOTIA, Galway, Ireland; Trinity Center for Biomedical Engineering, Trinity College Dublin, Ireland
John J. Foxe: The Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine, Bronx 10461, NY, USA; Department of Pediatrics, Albert Einstein College of Medicine, Bronx 10461, NY, USA; Department of Neurosciences, Albert Einstein College of Medicine, Bronx 10461, NY, USA; The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY 14642, USA
Sophie Molholm: The Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine, Bronx 10461, NY, USA; Department of Pediatrics, Albert Einstein College of Medicine, Bronx 10461, NY, USA; Department of Neurosciences, Albert Einstein College of Medicine, Bronx 10461, NY, USA; The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY 14642, USA; Corresponding author.
http://www.sciencedirect.com/science/article/pii/S1053811925004008 | Autism Spectrum Disorder; Multisensory processing; Continuous speech; Audiovisual speech processing; Neural speech tracking; Electroencephalography |
spellingShingle | Theo Vanneau; Michael J. Crosse; John J. Foxe; Sophie Molholm; Impaired neural encoding of naturalistic audiovisual speech in autism; NeuroImage; Autism Spectrum Disorder; Multisensory processing; Continuous speech; Audiovisual speech processing; Neural speech tracking; Electroencephalography |
title | Impaired neural encoding of naturalistic audiovisual speech in autism |
title_full | Impaired neural encoding of naturalistic audiovisual speech in autism |
title_fullStr | Impaired neural encoding of naturalistic audiovisual speech in autism |
title_full_unstemmed | Impaired neural encoding of naturalistic audiovisual speech in autism |
title_short | Impaired neural encoding of naturalistic audiovisual speech in autism |
title_sort | impaired neural encoding of naturalistic audiovisual speech in autism |
topic | Autism Spectrum Disorder; Multisensory processing; Continuous speech; Audiovisual speech processing; Neural speech tracking; Electroencephalography |
url | http://www.sciencedirect.com/science/article/pii/S1053811925004008 |
work_keys_str_mv | AT theovanneau impairedneuralencodingofnaturalisticaudiovisualspeechinautism AT michaeljcrosse impairedneuralencodingofnaturalisticaudiovisualspeechinautism AT johnjfoxe impairedneuralencodingofnaturalisticaudiovisualspeechinautism AT sophiemolholm impairedneuralencodingofnaturalisticaudiovisualspeechinautism |
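The abstract describes quantifying cortical tracking of the speech envelope with "linear encoding and decoding models." The sketch below illustrates that general class of model (a time-lagged ridge regression, often called a temporal response function) on simulated data only; the sampling rate, lag range, regularization value, and all signals are assumptions for illustration and do not reproduce the authors' actual EEG pipeline.

```python
# Minimal sketch of a linear encoding (TRF-style) model: predict a neural
# response from a stimulus envelope using time-lagged ridge regression.
# All data are simulated; parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
fs = 64                                  # assumed sampling rate (Hz)
n = fs * 60                              # one minute of simulated data

# Crude stand-in for a speech amplitude envelope: smoothed rectified noise.
envelope = np.convolve(np.abs(rng.standard_normal(n)),
                       np.ones(8) / 8, mode="same")

def lag_matrix(x, n_lags):
    """Column l holds the stimulus delayed by l samples (zero-padded)."""
    X = np.zeros((len(x), n_lags))
    for l in range(n_lags):
        X[l:, l] = x[:len(x) - l]
    return X

n_lags = 16                              # ~250 ms of lags at 64 Hz
true_trf = np.exp(-np.arange(n_lags) / 4.0)   # simulated neural kernel
X = lag_matrix(envelope, n_lags)
eeg = X @ true_trf + 0.5 * rng.standard_normal(n)  # simulated EEG channel

# Ridge-regularized least squares: w = (X'X + lam*I)^-1 X'y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)

# "Neural tracking" is then scored as the correlation between the model's
# prediction and the measured response.
pred = X @ w
r = np.corrcoef(pred, eeg)[0, 1]
print(f"tracking score r = {r:.2f}")
```

In studies like this one, the fitted weights `w` play the role of the temporal response function, and group differences are assessed by comparing tracking scores (`r`) across conditions and participants.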