Purpose: This research note provides a performance comparison of available algorithms for the automated evaluation of oral diadochokinesis (DDK) using speech samples from patients with amyotrophic lateral sclerosis (ALS). Method: Four algorithms based on a wide range of signal processing approaches were tested on a sequential motion rate /pa/-/ta/-/ka/ syllable repetition paradigm collected from 18 patients with ALS and 18 age- and gender-matched healthy controls (HCs).
Results: The best temporal detection of syllable position for a 10-ms tolerance value was achieved for ALS patients using a traditional signal processing approach based on a combination of filtering in the spectrogram, Bayesian detection, and polynomial thresholding with an accuracy rate of 74.4%, and for HCs using a deep learning approach with an accuracy rate of 87.6%. Compared to HCs, a slow diadochokinetic rate (p < .001) and diadochokinetic irregularity (p < .01) were detected in ALS patients.
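The two outcome measures reported above can be derived from detected syllable onset times. As a minimal sketch (not the paper's implementation), DDK rate is the reciprocal of the mean inter-syllable interval, irregularity is taken here as the coefficient of variation of those intervals, and detection accuracy counts reference onsets matched within a ±10-ms tolerance; the function names and the exact irregularity definition are illustrative assumptions.

```python
import numpy as np

def ddk_rate_and_irregularity(onsets_s):
    """DDK rate (syllables/s) and irregularity from syllable onset times.

    Irregularity is sketched here as the coefficient of variation (%) of
    inter-syllable intervals -- one common choice; the study's exact
    definition may differ.
    """
    onsets = np.asarray(onsets_s, dtype=float)
    intervals = np.diff(onsets)                    # inter-syllable intervals (s)
    rate = 1.0 / intervals.mean()                  # mean syllables per second
    irregularity = 100.0 * intervals.std() / intervals.mean()
    return rate, irregularity

def detection_accuracy(detected_s, reference_s, tol=0.010):
    """Fraction of reference onsets matched by any detection within +/- tol seconds."""
    detected = np.asarray(detected_s, dtype=float)
    hits = sum(np.any(np.abs(detected - r) <= tol) for r in reference_s)
    return hits / len(reference_s)
```

For perfectly regular repetitions every 200 ms, this yields a rate of 5 syllables/s and near-zero irregularity; a detection 5 ms off its reference still counts as correct under the 10-ms tolerance.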
Conclusions: The approaches using deep learning or multiple-step combinations of advanced signal processing methods provided a more robust solution to the estimation of oral DDK variables than did simpler approaches based on the rough segmentation of the signal envelope. The automated acoustic assessment of oral diadochokinesis shows excellent potential for monitoring bulbar disease progression in individuals with ALS.
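The simpler baseline referred to above, rough segmentation of the signal envelope, can be sketched as thresholding a short-time energy envelope and marking upward threshold crossings as syllable onsets. The frame length and threshold ratio below are illustrative assumptions, not the parameters of any of the four tested algorithms.

```python
import numpy as np

def envelope_onsets(signal, fs, frame_ms=10, thresh_ratio=0.3):
    """Rough syllable onset detection by thresholding a short-time energy envelope.

    A minimal sketch of the envelope-segmentation baseline; frame length and
    threshold ratio are illustrative choices only.
    """
    frame = int(fs * frame_ms / 1000)
    n = len(signal) // frame
    # short-time energy per non-overlapping frame
    energy = np.array([np.sum(signal[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n)])
    above = energy > thresh_ratio * energy.max()
    # onsets = frames where the envelope crosses the threshold upward
    crossings = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    if above[0]:
        crossings = np.insert(crossings, 0, 0)
    return crossings * frame / fs  # onset times in seconds
```

Such a single-pass scheme is sensitive to amplitude variation and noise, which is consistent with the conclusion that multi-step signal processing pipelines and deep learning detectors estimate oral DDK variables more robustly.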