Microsoft bans U.S. law enforcement agencies from using its generative AI tool, Azure OpenAI Service, for facial recognition

Image: Facial recognition (licensed under Canva)
UNITED STATES - As of Wednesday, May 1, language was added to the terms of service for Azure OpenAI Service, Microsoft's fully managed, enterprise-focused wrapper around OpenAI technology, reaffirming the company's ban on U.S. police departments using generative AI for facial recognition.

In its Code of Conduct, the terms state that integrations with Azure OpenAI Service's generative AI may not be used "by or for" police departments for facial recognition in the United States, a restriction that covers integrations with OpenAI's current and any future image-analyzing models.

A separate new bullet point specifically covers "any law enforcement globally" and explicitly bars the use of "real-time facial recognition technology" on mobile cameras, such as body cameras and dashcams, to attempt to "identify a person in uncontrolled, in-the-wild environments." These changes to Microsoft's terms of service come one week after Axon, a leading maker of public safety technology, training and software for police departments and the military, announced a new product that leverages OpenAI's GPT-4 generative text model to summarize audio from body cameras.

Following that announcement, critics of the technology quickly pointed out its potential pitfalls, including hallucinations and racial biases that may be introduced by the training data. According to TechCrunch, it is unclear whether Axon was planning to use GPT-4 via Azure OpenAI Service and, if so, whether Microsoft's updated policy was a response to Axon's product launch.

The complete ban on Azure OpenAI Service usage applies only to police in the United States, not to international police organizations. The new terms also do not, at the moment, cover facial recognition performed with stationary cameras in controlled environments, such as a back office, although they still prohibit any use of facial recognition by police in the United States.

The terms align with Microsoft's and OpenAI's recent approach to AI-related law enforcement and defense contracts. Back in January, Bloomberg revealed that OpenAI was working with the Pentagon on a number of projects, including cybersecurity capabilities, a change from the startup's earlier ban on providing its AI to militaries.

According to The Intercept, Microsoft has reportedly pitched using OpenAI's image generation tool, DALL-E, to help the Department of Defense (DoD) build software to execute military operations.

In February, Azure OpenAI Service became available in Microsoft's Azure Government product, with additional compliance and management features geared toward government agencies, including law enforcement. In a blog post, Candice Ling, SVP of Microsoft's government-focused division, Microsoft Federal, pledged that Azure OpenAI Service would be "submitted for additional authorization" to the DoD for workloads supporting DoD missions.

At the end of the blog, Ling wrote, "By making Azure OpenAI Service available in government clouds, Microsoft remains committed to enabling government transformation with AI. Along with delivering innovations that helps drive missions forward, we make AI easy to procure, easy to access, easy to implement. Microsoft is committed to delivering more advanced AI capabilities across classification levels in the coming months."
 