Multimodal AI for risk stratification in autism spectrum disorder: integrating voice and screening tools

Sookyung Bae, Junho Hong, Sungji Ha, Jiwoo Moon, Jaeeun Yu, Hangnyoung Choi, Junghan Lee, Ryemi Do, Hewoen Sim, Hanna Kim, Hyojeong Lim, Min Hyeon Park, Eunseol Ko, Chan Mo Yang, Dongho Lee, Heejeong Yoo, Yoojeong Lee, Guiyoung Bong, Johanna Inhyang Kim, Haneul Sung, Hyo Won Kim, Eunji Jung, Seungwon Chung, Jung Woo Son, Jae Hyun Yoo, Sekye Jeon, Hwiyoung Kim, Bung Nyun Kim, Keun Ah Cheon

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

Early identification of Autism Spectrum Disorder (ASD) is crucial but resource-intensive. This study evaluated a novel two-stage multimodal AI framework for scalable ASD screening using data from 1242 children (18–48 months). A mobile application collected parent–child interaction audio and screening tool data (M-CHAT, SCQ-L, SRS). Stage 1 differentiated typically developing children from high-risk/ASD children by integrating M-CHAT/SCQ-L text with audio features (AUROC 0.942). Stage 2 distinguished high-risk children from those with ASD by combining task success data with SRS text (AUROC 0.914, accuracy 0.852). The model’s predicted risk categories showed strong agreement with gold-standard ADOS-2 assessments (79.59% accuracy) and correlated significantly with them (Pearson r = 0.830, p < 0.001). By leveraging mobile data and deep learning, this framework demonstrates potential for accurate, scalable early ASD screening and risk stratification, supporting timely interventions.

Original language: English
Article number: 538
Journal: npj Digital Medicine
Volume: 8
Issue number: 1
DOIs
State: Published - Dec 2025

Bibliographical note

Publisher Copyright:
© The Author(s) 2025.

