Apple’s speech recognition algorithms could help people who stutter


Apple researchers have published a new machine learning paper showing how speech recognition systems can be made to work better for people who stutter.

According to the Stuttering Foundation, stuttering is a communication disorder in which the flow of speech is broken by repetitions (li-li-like this), prolongations (lllllike this), or abnormal stoppages (no sound) of sounds and syllables.


There may also be unusual facial and body movements associated with the effort to speak. Stuttering is also referred to as stammering.
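The paper itself does not describe its error-prevention method in this article, but the dysfluency patterns defined above can be illustrated with a toy text normalizer. The function below is purely a hypothetical sketch, not the researchers' approach: it collapses part-word repetitions ("li-li-like") and letter prolongations ("lllllike") into the intended word using regular expressions.

```python
import re

def collapse_dysfluencies(text: str) -> str:
    """Toy sketch: normalize two common dysfluency patterns in a transcript.

    This is an illustrative simplification, not the method from Apple's paper.
    """
    # Part-word repetition: "li-li-like" -> "like"
    # (one or more short fragments followed by a hyphen, then the full word)
    text = re.sub(r'\b(?:(\w{1,3})-)+(\1\w*)\b', r'\2', text)
    # Prolongation: three or more repeats of the same letter -> one letter
    text = re.sub(r'(\w)\1{2,}', r'\1', text)
    return text

print(collapse_dysfluencies("li-li-like this"))   # like this
print(collapse_dysfluencies("lllllike this"))     # like this
```

Real systems must be far more careful: legitimate words contain doubled letters and hyphens, which is one reason naive post-processing falls short and dedicated research like this paper is needed.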

More than 80 million people worldwide stutter, about 1% of the population. In the United States, that's over 3 million people.

For many people who stutter, using the phone can cause a great deal of anguish, and each person must learn to cope with it in their own way.

Apple's new paper, "From User Perceptions to Technical Improvement: Enabling People Who Stutter to Better Use Speech Recognition," published in March 2023, demonstrates how technology can alleviate some of the issues people who stutter face in verbal communication.

According to the researchers, consumer speech recognition systems do not work as well for many people with speech differences, such as stuttering, and these shortcomings have so far received little detailed study.

The researchers surveyed 61 people who stutter and found that participants want to use speech recognition but are frequently cut off, or the transcriptions miss the intent of their speech.

In a second study, where 91 people who stutter recorded voice assistant commands and dictation, the researchers quantified how dysfluencies impede performance in a consumer-grade speech recognition system.

Using novel machine learning research and three different studies, the Apple researchers demonstrated that many common errors can be prevented, resulting in a system that cuts utterances off 79.1% less often and improves word error rate from 25.4% to 9.9%.
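Word error rate (WER), the metric cited above, is the word-level edit distance between the system's transcription and what was actually said, divided by the number of words spoken. A minimal sketch of how it is computed (the example phrases are invented, not taken from the paper):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance, counting word-level
    # substitutions, insertions, and deletions.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One inserted word against a four-word command: WER = 1/4 = 0.25
print(wer("turn on the lights", "turn on the the lights"))
```

A drop from 25.4% to 9.9% WER means roughly one word in ten is now mis-transcribed instead of one in four, which is a substantial usability difference for voice commands.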

Apple has long been on the leading edge of accessibility solutions. The paper was submitted by Colin Lea, a research scientist at Apple who works on machine learning and accessibility.

Sudz Niel Kar
I am a technologist with years of experience with Apple and wearOS products, have a BS in Computer Science and an MBA specializing in emerging tech, and owned the popular site AppleToolBox. In my day job, I advise Fortune 500 companies with their digital transformation strategies and also consult with numerous digital health startups in an advisory capacity. I'm VERY interested in exploring the digital health and fitness-tech evolution and keeping a close eye on patents, FDA approvals, strategic partnerships, and developments happening in the wearables and digital health sector. When I'm not writing or presenting, I run with my Apple Watch Ultra or Samsung Galaxy Watch and closely monitor my HRV and other recovery metrics.

