
AI Can Predict Effectiveness of Cochlear Implants at Improving Speech

Credit: satura86/Getty Images

A model using an advanced form of artificial intelligence (AI) can accurately predict how effective cochlear implants will be at improving spoken language in young deaf children.

The deep transfer learning (DTL) model, which was developed by researchers at the Chinese University of Hong Kong and Ann & Robert H. Lurie Children’s Hospital of Chicago and published in JAMA Otolaryngology–Head & Neck Surgery, predicted language outcomes in young deaf children with an accuracy of 92.4%.

More than 180,000 deaf adults and children in the U.S. have cochlear implants, but this is only a small proportion of those who could actually benefit from using these devices to improve their hearing.

The implants, which first became more widely available in the 1980s, can improve the hearing of both adults and children. However, they are normally most effective when implanted in younger children, as they give the developing brain access to sound and spoken language during early life when auditory and language circuits are still being formed in the brain.

Despite the increased potential for benefit in younger individuals, past research has shown great variability in spoken language outcomes among deaf children receiving cochlear implants. “This variability cannot be reliably predicted for individual children using age at implant or residual hearing,” write the researchers.

To assess the potential of DTL, an advanced form of machine learning, for predicting outcomes after cochlear implantation, co-lead investigator Nancy Young, MD, medical director of audiology and cochlear implant programs at Ann & Robert H. Lurie Children’s Hospital of Chicago, and colleagues followed 278 children who received cochlear implants in the U.S., Hong Kong, and Australia.

The children participating in the study had pre-implantation brain magnetic resonance imaging (MRI) scans, which the AI model used to predict how well their spoken language would improve. The children received their implants at around two years of age on average, and their language ability was assessed before implantation and again after 1–3 years of implant use.

The dataset analyzed by the AI model was complex: the children spoke three different languages—English, Cantonese, and Spanish—and the centers involved in the study used different brain-scanning protocols and outcome measures.

DTL uses a deep neural network that has already learned to recognize patterns in large image datasets and fine‑tunes it to work on a smaller, specific medical task like predicting outcomes for the children included in this study. The researchers theorized it might achieve better predictive accuracy in this study population than more standard machine learning.
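The core idea of transfer learning described above—reuse a network pretrained on a large dataset, keep its learned feature extractor frozen, and fine-tune only a small task-specific head on the smaller dataset—can be illustrated with a minimal sketch. This is not the study's actual model (the paper's architecture and training details are not given here); the "pretrained backbone" below is a hypothetical fixed feature extractor, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pretrained backbone: a fixed (frozen)
# nonlinear feature extractor. In real DTL this would be a deep CNN
# pretrained on a large image corpus.
W_frozen = rng.normal(size=(64, 8))

def features(x):
    # Frozen: W_frozen is never updated during fine-tuning.
    return np.tanh(x @ W_frozen)

# Synthetic "small target dataset": 64-dim inputs, binary outcome.
# (For illustration, the label is made linearly separable in feature space.)
X = rng.normal(size=(200, 64))
y = (features(X)[:, 0] > 0).astype(float)

# Fine-tune only a small logistic-regression head on the frozen features.
w, b, lr = np.zeros(8), 0.0, 0.5
F = features(X)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid
    grad = p - y                            # logistic-loss gradient
    w -= lr * F.T @ grad / len(y)
    b -= lr * grad.mean()

preds = (1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5)
acc = (preds == y).mean()
```

Because only the small head is trained, far fewer labeled examples are needed than training a deep network from scratch—the motivation for using DTL on a modest clinical cohort like this one.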

This proved to be the case: the DTL model outperformed standard machine learning, achieving a predictive accuracy of 92.4%, a sensitivity of 91.2%, and a specificity of 93.6%.
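For readers unfamiliar with these metrics, accuracy, sensitivity, and specificity all follow from a binary confusion matrix. The study's underlying counts are not given here, so the counts in the usage example below are purely hypothetical; the sketch just shows how the three figures relate.

```python
def binary_metrics(tp, fp, tn, fn):
    """Compute accuracy, sensitivity, specificity from confusion-matrix counts.

    tp/fn: good-outcome children predicted correctly/incorrectly,
    tn/fp: poor-outcome children predicted correctly/incorrectly.
    """
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # all correct / all cases
    sensitivity = tp / (tp + fn)                 # true-positive rate
    specificity = tn / (tn + fp)                 # true-negative rate
    return accuracy, sensitivity, specificity

# Hypothetical counts, for illustration only:
acc, sens, spec = binary_metrics(tp=90, fp=10, tn=90, fn=10)
```

A high sensitivity means few children who would do well are flagged as poor-prognosis, while a high specificity means few who need intensive therapy are missed—both matter for the "predict-to-prescribe" approach described below.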

“Our results support the feasibility of a single AI model as a robust prognostic tool for language outcomes of children served by cochlear implant programs worldwide. This is an exciting advance for the field,” said Young in a press statement.

“This AI-powered tool allows a ‘predict-to-prescribe’ approach to optimize language development by determining which child may benefit from more intensive therapy.”

