Apple researchers have published a new machine learning paper that shows how the company can help people who stutter use speech recognition more effectively.
According to the Stuttering Foundation, stuttering is a communication disorder in which the flow of speech is broken by repetitions (li-li-like this), prolongations (lllllike this), or abnormal stoppages (no sound) of sounds and syllables.
There may also be unusual facial and body movements associated with the effort to speak. Stuttering is also referred to as stammering.
More than 80 million people worldwide stutter, which is about 1% of the population. In the United States, that’s over 3 million Americans who stutter.
For many people who stutter, using the phone can cause a great deal of anguish, and each person must learn to cope with it in their own way.
Apple’s new paper, “From User Perceptions to Technical Improvement: Enabling People Who Stutter to Better Use Speech Recognition,” published in March 2023, demonstrates how technology can alleviate some of the difficulties people who stutter face in verbal communication.
According to the researchers, consumer speech recognition systems do not work as well for many people with speech differences, such as stuttering, yet these shortcomings have received little detailed study.
The researchers conducted a 61-person survey of people who stutter and found that participants want to use speech recognition but are frequently cut off mid-utterance, or find that the transcription misses the intent of what they said.
In a second study, in which 91 people who stutter recorded voice assistant commands and dictation, the researchers quantified how dysfluencies degrade performance in a consumer-grade speech recognition system.
Using novel machine learning research across three studies, the Apple researchers demonstrated that many common errors can be prevented, resulting in a system that cuts utterances off 79.1% less often and improves the word error rate from 25.4% to 9.9%.
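Word error rate is the standard accuracy metric for speech recognition: the number of word-level substitutions, insertions, and deletions between the recognizer’s output and a reference transcript, divided by the number of words actually spoken. The sketch below is a minimal illustration of that calculation; the example command and the way a repeated word shows up as an insertion are hypothetical and not taken from Apple’s paper.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Word-level Levenshtein distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j          # insert all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical example: a repeated word transcribed literally counts as an
# extra insertion, inflating the error rate even though the intent was clear.
print(word_error_rate("set a timer for ten minutes",
                      "set a a timer for ten minutes"))  # 1 insertion / 6 words ≈ 0.17
```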
Apple has long been on the leading edge when it comes to accessibility. The paper was submitted by Colin Lea, a research scientist at Apple who works on machine learning and accessibility-related topics.