AI Shows Promise for Detecting Early Cognitive Decline through Speech Samples

[Photo: A researcher and a participant seated at a table; one reviews charts on a laptop.]

Artificial intelligence shows promise for detecting early cognitive decline by analyzing speech samples, according to new research from Washington State University’s Elson S. Floyd College of Medicine. The findings could translate to more accurate and efficient assessments of brain health.  

A pilot study presented at the American Speech-Language-Hearing Association (ASHA) Convention found that a machine learning model accurately identified individuals with cognitive decline in 75% of cases.  

Speech analysis has recently emerged as a noninvasive and cost-effective screening tool for mild cognitive impairment, a risk factor for developing Alzheimer’s disease and related dementias. Individuals with Alzheimer’s often show subtle changes in their speech patterns, such as speaking more slowly or at a higher pitch, before other signs of cognitive decline appear.  

“The goal is to see if the model is able to identify the different speech patterns we see associated with cognitive decline, and then use that not to make a diagnosis but to identify people who may be at risk,” said Department of Speech and Hearing Sciences undergraduate student Solveig Anderson, who led the study in collaboration with Assistant Professor Amy Kemp, PhD, CCC-SLP.  

Early detection of cognitive decline is essential for interventions that can improve quality of life and preserve individuals’ independence for as long as possible. Often, older adults don’t seek care until they begin to show clear symptoms like memory loss that affect their daily functioning. But a routine screening of speech features could change that. 

“Many risk factors for developing Alzheimer’s disease or dementia are modifiable,” Anderson said. “If we can develop a method to identify those at risk earlier, they can make changes before they lose their independence.” 

In the study, six older adults with mild to moderate cognitive impairment and six without impairment completed a word fluency task. The recorded speech samples were analyzed for acoustic features, including pitch, volume, and how those features varied over time. Researchers fed the resulting data to a basic machine learning model, K-Nearest Neighbors (KNN), an algorithm that classifies data points based on their similarity to other points.  
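To give a sense of how KNN classifies a new speech sample, here is a minimal sketch. The feature values and labels below are invented for illustration; the study's actual features, feature counts, and distance metric are not specified in this article.

```python
import math

# Hypothetical acoustic feature vectors: (mean pitch in Hz, speech rate in words/sec).
# These numbers are illustrative only, not data from the WSU study.
samples = [
    ((110.0, 2.4), "impaired"),
    ((118.0, 2.2), "impaired"),
    ((105.0, 2.0), "impaired"),
    ((98.0, 3.1), "unimpaired"),
    ((95.0, 3.4), "unimpaired"),
    ((92.0, 3.0), "unimpaired"),
]

def knn_classify(query, labeled_samples, k=3):
    """Label a query point by majority vote among its k nearest neighbors."""
    # Sort labeled samples by Euclidean distance from the query point.
    nearest = sorted(labeled_samples, key=lambda s: math.dist(query, s[0]))
    # Take the labels of the k closest samples and return the most common one.
    votes = [label for _, label in nearest[:k]]
    return max(set(votes), key=votes.count)

# A new speech sample whose three nearest neighbors all carry the "impaired" label.
print(knn_classify((112.0, 2.3), samples))  # -> impaired
```

In practice a screening model would use many more acoustic features and far more than twelve samples, but the classification rule is exactly this simple: a point is labeled by the company it keeps.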

The model accurately classified nine of the 12 participants, showing moderate predictive value. Building on this success with a small sample, the researchers plan to conduct a broader study to evaluate the technique and improve the model’s accuracy with a larger dataset.  

While KNN isn’t replacing health care professionals any time soon, machine learning models show potential for supplementing clinicians’ assessments and scaling screening efforts. A machine learning program could analyze samples from millions of patients located anywhere in the world, a flexibility that could also improve access to screening in rural communities.  

Media Contact

Stephanie Engle, WSU Elson S. Floyd College of Medicine Communications and Marketing, 509-368-6937, stephanie.engle@wsu.edu
