Privacy watchdog asks companies to ditch AI that analyzes emotions • The Register

Businesses should think twice before deploying AI-powered emotional analysis systems, which are prone to systemic bias and other snafus, Britain’s Information Commissioner’s Office (ICO) warned this week.

Organizations will face investigations if they go ahead and use this subpar technology that puts people at risk, the watchdog added.

Machine-learning algorithms that purport to predict a person's moods and reactions use computer vision to track gazes and facial movements, and audio processing to gauge vocal inflection and overall sentiment. As you can imagine, this is not necessarily accurate or fair, and there can be privacy issues when handling data for training and inference.
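For illustration only, here is a minimal Python sketch of the kind of pipeline described above, assuming OpenCV's stock Haar-cascade face detector; the classify_emotion function is a hypothetical stub standing in for a vendor's trained model, and nothing here reflects any actual product:

# Minimal sketch: detect faces in a frame, then hand each crop to an
# emotion classifier. The classifier is a deliberate placeholder -- real
# "emotion AI" products plug in a trained model at this step, and it is
# exactly that inference the ICO says rests on shaky science.
import cv2
import numpy as np

# OpenCV ships a stock Haar cascade for frontal-face detection.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def classify_emotion(face_crop: np.ndarray) -> str:
    # Hypothetical stub: a vendor model would emit labels such as
    # "happy" or "angry" from pixels alone.
    return "unknown"

def analyze_frame(frame: np.ndarray) -> list[str]:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [classify_emotion(frame[y:y + h, x:x + w]) for (x, y, w, h) in faces]

if __name__ == "__main__":
    # Synthetic blank frame so the sketch runs without a camera or image file.
    print(analyze_frame(np.zeros((480, 640, 3), dtype=np.uint8)))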

Indeed, ICO Deputy Commissioner Stephen Bonner said the technology is not foolproof and can lead to systemic bias, inaccuracy, or discrimination against certain social groups. “The developments in the market for biometrics and emotional AI are not yet mature. They may not work yet, or at all,” he said in a statement.

“While opportunities are there, the risks are greater right now. At the ICO, we are concerned that improper analysis of data could lead to inaccurate assumptions and judgments about an individual and could lead to discrimination,” his statement continued.

Bonner noted that emotional analysis software is often combined with biometric systems that collect and store a broader range of personal information beyond facial images, and that providers are often not transparent about how this personal data is processed or stored.

“The only sustainable biometric applications will be those that are fully operational, accountable and supported by science,” he said. “As it stands, we have not yet developed emotion AI technology that is compliant with data protection requirements and have broader questions about proportionality, fairness and transparency in this area.”

Which seems like a polite way of saying: You should only use AI models for reading emotions that work the way the ICO expects them to, and those models don’t exist, so you should just stop what you’re doing.

The UK data protection authority is expected to publish a report early next year detailing how biometric data, including facial, fingerprint, and voice recognition, should be handled by businesses.

“The ICO will continue to scrutinize the market, identify stakeholders interested in developing or deploying these technologies, and explain the importance of enhanced privacy and compliance, while promoting confidence in how these systems work,” Bonner added.

Experts have been sounding the alarm about sentiment and emotion analysis for years. Feelings are subjective and difficult to interpret accurately. Humans often cannot do this effectively for themselves or others, let alone machines. ®

https://www.theregister.com/2022/10/29/data_privacy_ai_emotions/

Rick Schindler
