Microsoft to stop selling emotion analysis technology and restrict facial recognition tools


June 22 news: On Tuesday, US local time, Microsoft announced that it would stop selling automated technology that infers a person's emotional state from facial images, and that it would restrict the use of its facial recognition tools.

For years, activists and scholars have worried that facial analysis software claiming to identify a person's age, gender, and emotional state can be biased, intrusive, or simply unreliable, and should not be sold at all.

Acknowledging these concerns, Microsoft said on Tuesday local time that it plans to remove these face detection, analysis, and recognition capabilities from its artificial intelligence (AI) services. It will stop selling the technology to new users this week and will phase it out for existing users over the course of the year.

The changes are part of Microsoft's effort to tighten control over its AI products. After a two-year review, a Microsoft team developed a "Responsible AI Standard," a 27-page document that sets out specific requirements for AI systems to ensure they do not have a harmful impact on society.

The requirements include ensuring that AI systems "provide effective solutions to the problems they are designed to solve" and "provide a similar quality of service to identified demographic groups, including marginalized groups."

Technologies that will be used to make important decisions about a person's access to employment, education, health care, financial services, or life opportunities must be evaluated before release by a team led by Natasha Crampton, Microsoft's chief responsible AI officer.

Microsoft was evidently most concerned about its emotion recognition tool, which classifies a person's facial expression as anger, contempt, disgust, fear, happiness, neutral, sadness, or surprise. "There are enormous cultural, geographic, and individual variations in the way we express ourselves," Crampton said. "That raises concerns about the reliability of the tool, along with the bigger question of whether facial expression is a reliable indicator of a person's internal emotional state."
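For context, this retired capability was exposed through the Face API's detect call, which could return per-face scores for those eight categories. Below is a minimal sketch of what such a request looked like, not Microsoft's documentation: the endpoint and key are placeholders, and under the new policy new customers can no longer request the emotion attribute.

```python
import requests

# Placeholder resource endpoint and key; the "emotion" attribute requested
# here is the capability Microsoft is retiring.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<subscription-key>"

def detect_emotion(image_url: str) -> list:
    """Ask the Face API to detect faces and score eight emotion categories."""
    resp = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        params={"returnFaceAttributes": "emotion"},
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"url": image_url},
    )
    resp.raise_for_status()
    # Each detected face carries scores for anger, contempt, disgust, fear,
    # happiness, neutral, sadness, and surprise, summing to roughly 1.0.
    return [face["faceAttributes"]["emotion"] for face in resp.json()]
```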

Crampton also said that the age and gender analysis tools, along with other tools that detect facial attributes such as smiles, could be useful in interpreting visual images for blind or visually impaired people, for example. But Microsoft decided it was problematic to make these profiling tools generally available to the public. In particular, Crampton stressed, the system's so-called gender classifier is binary, "and that's not consistent with our values."

Microsoft will also place new controls on its face recognition feature, which can be used for identity verification or to search for a particular person. Uber, the ride-hailing company, for example, uses the software in its app to verify that a driver's face matches the ID on file for that driver's account. Software developers who want to use Microsoft's facial recognition tool will need to apply for access and explain how they plan to deploy it.
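Verification itself remains available behind that access application. As a rough sketch of what an approved caller's request looks like, assuming two face IDs obtained from earlier detect calls (endpoint and key are again placeholders):

```python
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<subscription-key>"  # placeholder

def verify_faces(face_id_1: str, face_id_2: str) -> dict:
    """Check whether two previously detected faces belong to the same person."""
    resp = requests.post(
        f"{ENDPOINT}/face/v1.0/verify",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"faceId1": face_id_1, "faceId2": face_id_2},
    )
    resp.raise_for_status()
    # Response has the form {"isIdentical": true, "confidence": 0.86}; a
    # driver check like Uber's would act on the boolean and the confidence.
    return resp.json()
```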

Users will also have to apply and explain how they will use other AI systems that are open to abuse, such as Custom Neural Voice. The service can generate a voiceprint from a sample of someone's speech, so that authors, for example, can create synthetic versions of their own voice to read their audiobooks in languages they do not speak.
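Once an application is approved and a custom voice model is deployed, synthesis runs through the Azure Speech SDK by pointing the configuration at that deployment. A minimal sketch, assuming a hypothetical deployment ID and voice name taken from a Custom Neural Voice project:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials; the deployment ID and voice name come from the
# Custom Neural Voice project, which itself requires an approved application.
speech_config = speechsdk.SpeechConfig(subscription="<key>", region="<region>")
speech_config.endpoint_id = "<custom-voice-deployment-id>"
speech_config.speech_synthesis_voice_name = "<CustomVoiceName>"

# Synthesizes to the default speaker in the cloned voice, e.g. to narrate
# an audiobook passage.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Chapter one. It was a bright day.").get()
```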

Because the tool could be misused, for example to make it appear that someone said things they never said, speakers must go through a series of steps to confirm that the use of their voice is authorized, and the recordings include watermarks that Microsoft can detect. "We are taking concrete steps to live up to our AI principles, but it will be a long journey," Crampton said.

Like other technology companies, Microsoft has stumbled with its AI products before. In 2016, it released a chatbot on Twitter named Tay, designed to learn "conversational understanding" from the users it interacted with. The bot quickly began posting racist and offensive tweets, and Microsoft had to take it down.

In 2020, researchers found that speech-to-text tools developed by Microsoft, Apple, Google, IBM, and Amazon performed worse for Black users. Microsoft's system was the best of the group, but it misidentified 15% of words for white users, compared with 27% for Black users.
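Studies like this typically report a word error rate: wrongly transcribed words as a fraction of the words actually spoken, so 27% means roughly 27 misrecognized words per 100. A toy illustration of how such a rate is computed (standard word-level edit distance, not the researchers' code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein distance over words: (subs + ins + dels) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the
    # first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One substituted word out of four spoken -> 0.25
print(word_error_rate("the quick brown fox", "the quick brown box"))
```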

Microsoft had collected diverse speech data to train its AI system, but it had not fully understood just how varied language can be. So the company hired sociolinguistics experts from the University of Washington to explain the language varieties it needed to know about, going beyond demographics and regional variation to capture how people speak in formal and informal settings.

"It is misleading to think of race as the determinant of how a person speaks," Crampton said. "What we learned in consulting the experts is that a huge range of factors affect linguistic variety." The process of closing the speech-to-text gap, she added, greatly helped the company in drawing up the guidance in its new standard.

"This is a critical period for setting norms for AI," Crampton said, referring to Europe's proposed regulations that would set rules and limits on the use of AI. "We hope to use our standard to contribute to the discussion and development of the standards that technology companies should generally be held to."

The potential harms of AI have been hotly debated in the technology community for years, fueled by mistakes and errors with real consequences for people's lives, such as algorithms that decide whether people receive welfare benefits. The Dutch tax authorities mistakenly took child care benefits away from needy families when a flawed algorithm penalized people with dual nationality.

Automated software for recognizing and analyzing faces has been especially controversial. Last year, Facebook shut down its decade-old system for identifying people in photos. "There are many concerns about the place of facial recognition technology in society," the company's vice president of artificial intelligence said at the time.

Several Black men have been wrongfully arrested after flawed facial recognition matches. And in 2020, amid the Black Lives Matter protests that followed the police killing of George Floyd in Minneapolis, Amazon and Microsoft announced moratoriums on the use of their facial recognition products by police in the United States, saying clearer laws on their use were needed.

Since then, Washington and Massachusetts have passed regulations requiring, among other things, judicial oversight of police use of facial recognition tools.
