Decisions by Microsoft and Amazon to refrain from selling facial recognition software to law enforcement agencies could have negative long-term consequences if U.S. lawmakers fail to enact regulations governing the use of the technology, Microsoft CEO Satya Nadella told computer vision researchers this week.
“We need to use this moment to advocate for strong national law, otherwise we’ll see responsible companies leaving the market and others stepping in,” Nadella said, addressing the Conference on Computer Vision and Pattern Recognition (CVPR) in a prerecorded conversation Tuesday morning.
In addition, Nadella said, Microsoft will soon give other customers new ways to measure how its facial recognition technology performs in conjunction with their data, seeking to address potential bias in their implementations.
Separately, documents released by the ACLU show Microsoft repeatedly pitched its facial recognition software to the federal Drug Enforcement Administration in 2017 and 2018.
Microsoft says it has not been supplying facial recognition programs to any U.S. police force, but with the announcement, the company joined Amazon and IBM in putting new limits on the technology. Amazon announced a one-year moratorium on police use of its facial recognition technology.
The moves come as national protests sparked by the killing of George Floyd bring new attention to racial bias throughout society. Facial recognition has long caused concern over its potential use for surveillance, and over studies showing a greater potential to misidentify people of color. Evan Greer, deputy director of the digital advocacy group Fight for the Future, last week called the moves “largely public relations stunts,” but said they are “also a testament to the fact that Big Tech is realizing that facial recognition is politically toxic.”
Speaking with former Microsoft executive Harry Shum during the event this week, Nadella pointed to benchmarks from the National Institute of Standards and Technology (NIST) examining the way different facial recognition technologies identify people of different genders, ages and racial backgrounds.
“One of the first challenges was how do we ensure that there is no bias,” Nadella said. “Thanks to NIST, there are robust benchmarks now to measure the performance against a number of ethnic groups to ensure that there’s no bias in our models and to create a level of transparency that is very helpful.”
“Soon we’ll be providing guidance to our customers on how they can measure the performance relative to their own data to set the right thresholds and balance these false matches,” he said.
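The kind of evaluation Nadella describes, measuring performance on a customer's own data and choosing a match threshold, might look like the following minimal sketch. All names, similarity scores, and group labels here are hypothetical illustrative data, not Microsoft's actual methodology or API:

```python
# Hypothetical sketch: measuring false match rates per demographic group
# on a customer's own data, then picking a similarity-score threshold that
# keeps the worst-performing group within an acceptable error rate.

def false_match_rate(scores, threshold):
    """Fraction of impostor (non-matching) pairs scored at or above the threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

# Made-up similarity scores for impostor pairs, split by demographic group.
impostor_scores = {
    "group_a": [0.31, 0.42, 0.55, 0.28, 0.61],
    "group_b": [0.35, 0.58, 0.49, 0.66, 0.40],
}

# Evaluate candidate thresholds: accept the first one that keeps every
# group's false match rate at or below a target (here, 20%).
target_fmr = 0.20
for threshold in (0.5, 0.6, 0.7):
    worst = max(false_match_rate(s, threshold) for s in impostor_scores.values())
    status = "acceptable" if worst <= target_fmr else "too high"
    print(f"threshold={threshold}: worst-group FMR={worst:.2f} ({status})")
    if worst <= target_fmr:
        break
```

The key design point, reflected in Nadella's comments, is that the threshold is chosen against the worst-performing group rather than an overall average, so that aggregate accuracy cannot mask bias against any one population.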
Nadella added, “Then, on the other side, are the runtime considerations and the ethical use of AI. I think we will all have to realize that sometimes, even with all the good intentions during design time, if you don’t have safeguards in runtime protecting privacy and our democratic freedoms, for example, there could be really bad unintended consequences.”
Microsoft previously adopted ethical principles governing its use of AI, including facial recognition, and established an internal Office of Responsible AI along with a committee that advises the company’s executives on related ethical issues as new scenarios emerge for use of the technology.
Watch the full video of Nadella’s CVPR talk above. Charlie Bell, Amazon Web Services senior vice president, speaks at the event Thursday afternoon. (Day corrected since original post.)