AI models to be taught to understand speech disabilities

Researchers at the University of Illinois Urbana-Champaign (UIUC) in the US are working with the usual internet super-corps in the hope of improving AI speech recognition for people with disabilities.

Speech recognition software often struggles to process speech from people with heavy accents, and performs even worse for people with speech disabilities, whose voices are typically under-represented, or missing entirely, in training datasets.

The Speech Accessibility Project, launched Monday and supported by Amazon, Apple, Google, Meta, and Microsoft, as well as nonprofits, aims to make speech recognition models more effective for everyone. “For many of us, speech and communication are effortless,” Clarion Mendes, a clinical professor of speech and hearing science at UIUC who is working on the project, told The Register.

“However, there are millions of people for whom communication is not effortless. It’s a daily struggle. By uniting our efforts towards a common goal of improving speech accessibility for people with speech disabilities or differences, we’re not just improving technology – we’re improving quality of life and promoting independence.”

Researchers will focus on collecting a variety of audio data from English speakers with medical conditions that affect speech, such as Lou Gehrig’s disease, aka amyotrophic lateral sclerosis (ALS), Parkinson’s disease, cerebral palsy, and Down syndrome. Volunteers will be paid to record audio samples, which will be used to build a large dataset for training AI models for commercial and research applications.

There are, and have been, similar projects to this, though this one stands out for its support from the makers of today’s AI voice assistants and the like.

The industry partners backing the Speech Accessibility Project will fund it for at least two years and will work with the researchers to explore how current speech recognition models can be improved.

“By working directly with individuals with speech differences and disabilities through focus groups and our advocacy partners, we will be able to determine the strengths and limitations of current automatic speech recognition systems and the need to develop novel systems,” said Mendes.

The team will work with the Davis Phinney Foundation and Team Gleason, two non-profit organizations, to initially collect speech data from people with ALS and Parkinson’s disease before expanding to other types of disabilities.

“The ability to communicate and operate devices using voice is critical for anyone interacting with technology or the digital economy today. Voice interfaces should be available to everyone, and that includes people with disabilities,” said Mark Hasegawa-Johnson, UIUC professor of electrical and computer engineering who is leading the project.

“This task is difficult because it requires a lot of infrastructure, ideally the kind that can be supported by leading technology companies. That’s why we’ve assembled a unique interdisciplinary team with expertise in linguistics, speech, AI, security, and privacy to help us meet this important challenge.” ®

https://www.theregister.com/2022/10/04/ai_language_recognition_disabled/

