MOUNTAIN VIEW, CA — A group of 1,666 Google employees wrote a letter to Alphabet-Google CEO Sundar Pichai demanding that the company stop selling its technology to police departments, according to TechCrunch.
The letter reportedly said:
“We’re disappointed to know that Google is still selling to police forces, and advertises its connection with police forces as somehow progressive, and seeks more expansive sales rather than severing ties with police and joining the millions who want to defang and defund these institutions.
“Why help the institutions responsible for the knee on George Floyd’s neck to be more effective organizationally? Not only that, but the same Clarkstown police force being advertised by Google as a success story has been sued multiple times for illegal surveillance of Black Lives Matter organizers.”
Google highlights New York’s Clarkstown Police Department in one of its customer success stories and credits it with helping the town of Clarkstown earn a reputation as “one of the safest and best” places to live in the United States.
Experts Denounce Racial Bias of Crime-Predictive Facial-Recognition AI: An open letter signed by experts in the field from MIT, Microsoft and Google aim to stop the ‘tech to prison’ pipeline. https://t.co/vgSZ4yQbbA pic.twitter.com/QmsUdTHOs9
— CyberIOT (@IotCyber) June 24, 2020
Referring to the police department as a “catalyst for a culture shift,” Google explains how the police department was transformed by using its G Suite product:
“Cloud-based email was an obvious alternative, so the Clarkstown PD introduced Gmail, the highly secure G Suite email solution. It was the start of a major cultural change across the whole department.”
TechCrunch reported that Google employees demanded that the company stop selling its G Suite to law enforcement agencies because working with them contributes to the “racist legacy of police across the United States.”
In response, Google reportedly announced it would give $12 million to racial justice organizations.
With this, Google follows other technology giants: IBM, Amazon and Microsoft recently announced that they would limit or stop selling facial-recognition software to police.
Microsoft President Brad Smith told The Washington Post that the company does not sell facial-recognition technology to police departments:
“We will not sell facial-recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology.”
Amazon announced it would implement a one-year moratorium on police use of its facial recognition technology:
“We will continue to allow organizations like Thorn, the International Center for Missing and Exploited Children, and Marinus Analytics to use Amazon Rekognition to help rescue human trafficking victims and reunite missing children with their families.
“We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.”
Could this be why google wants to suspend sales of facial recognition equipment to LEO? https://t.co/o3I9RaNeX5
— jonathan bays (@jonathanbays3) June 25, 2020
However, Nicole Ozer, technology and civil liberties director for the ACLU, criticized the one-year moratorium, according to Business Insider:
“This surveillance technology’s threat to our civil rights and civil liberties will not disappear in a year.”
IBM CEO Arvind Krishna recently sent a letter to politicians outlining that the company would no longer offer general-purpose facial recognition or analysis software:
“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.
“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”
In a tweet, Rep. Alexandria Ocasio-Cortez (D-NY) hailed IBM’s decision:
“Shout out to @IBM for halting dev on technology shown to harm society.
“Facial recognition is a horrifying, inaccurate tool that fuels racial profiling + mass surveillance. It regularly falsely ID’s Black + Brown people as criminal.
“It shouldn’t be anywhere near law enforcement.”
So, you're telling me that if someone came into your home and murdered your entire family, but the only way they could identify the perp was through facial recognition from security camera footage, you wouldn't allow it because "it's racist." You'd rather a murderer walks free?
— Melissa Grimshaw (@mia1021) June 23, 2020
Business Insider reports that civil rights groups and artificial intelligence experts say that facial-recognition technology disproportionately affects people of color in two ways:
“Firstly, like any policing tool operated by systemically racist societies or institutions, it will inevitably be used to target people of color more often.
“Secondly, the data used to build facial recognition software ingrains it with racial bias which makes it more likely to misidentify women and people of color, which would in turn lead to more wrongful arrests. This is because the datasets used to train facial recognition algorithms are often predominantly made up of pictures of white men.”
Of a Massachusetts Institute of Technology study, Business Insider wrote:
“[It] demonstrated that while men with lighter skin were almost always positively identified, about 7 percent of women with lighter skin were misidentified and up to 35 percent of women with darker skin were falsely identified.”
Law Enforcement Today has been reporting on facial recognition software, and on the civil rights groups advocating against it, for months.
Here’s a recent report we brought you on the subject.
Facial recognition. Appearance search. Weapons detection. Head counts. Heat maps.
These things are all capabilities built into numerous video surveillance and camera analytics platforms. And they are changing the way that safety and security procedures are being enhanced.
It could help revolutionize not only forensic evidence, but also how active crime prevention policing is taking place.
And there are some who do not like that. Portland city officials are considering the strictest ban of the technology in the country, prohibiting its use not only by government agencies but also private businesses.
Author’s note: I will express some opinions in this article. They are my own, and are not necessarily the beliefs of Law Enforcement Today, nor of any of our editors, writers or contributors. I work in the physical security world and spend a lot of time working with facial recognition platforms.
According to Oregon Live, facial recognition technology typically uses a camera and software to analyze human faces to identify or verify a person’s identity. The technology can compare a scan with an already existing database of images, such as jail booking photos or government identification records.
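The comparison step described above, matching a new scan against an enrolled database, is typically done on numeric “embeddings” of each face. Here is a minimal sketch of that idea; the vectors, names, and similarity threshold are all made up for illustration (real systems derive embeddings with hundreds of dimensions from a deep neural network):

```python
import numpy as np

# Toy "database" of face embeddings. In a real system these would be
# produced by a neural network from jail booking photos or ID records;
# here they are hypothetical 4-dimensional vectors.
database = {
    "booking_photo_001": np.array([0.9, 0.1, 0.3, 0.2]),
    "booking_photo_002": np.array([0.1, 0.8, 0.5, 0.1]),
}

def cosine_similarity(a, b):
    """Similarity between two embeddings, 1.0 meaning identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe, db, threshold=0.95):
    """Compare a probe embedding against every enrolled image and
    return the best match above the threshold, or None."""
    scores = {name: cosine_similarity(probe, emb) for name, emb in db.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return (name, score) if score >= threshold else None

probe = np.array([0.88, 0.12, 0.31, 0.19])  # a new camera scan
print(best_match(probe, database))  # matches booking_photo_001
```

The threshold is the knob that trades false matches against missed matches, which is exactly where the accuracy debates discussed in this article live.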
There are no federal rules regulating facial recognition technology or what’s done with data obtained through its use, which city officials say is forcing them to follow the lead of other cities and institute their own rules.
The state of Oregon, along with California cities such as San Francisco, Oakland and Berkeley, already bans police from using body cameras with facial recognition technology.
Officials cite privacy and civil rights concerns when considering the ban.
Using facial recognition in no way removes any reasonable expectation of privacy in public.
And apparently, the technology is racist and sexist.
Yep. You read that correctly. Facial recognition and other camera analytics are geared to discriminate. At least, that is what people who do not know what they are talking about would have you believe.
“We felt a moral obligation to develop a broader approach, recognizing that any use of a surveillance technology that is biased against people of color, lacks consent, lacks due process and can be used on minors is unacceptable,” said Hector Dominguez, an open data coordinator in the city’s Bureau of Planning and Sustainability.
In July of 2019, the New York Times published an opinion piece called “The Racist History Behind Facial Recognition.” In it, the author asks:
“When will we finally learn we cannot predict people’s character from their appearance?”
The question, and most of the opinion written in the piece, is idiotic at best.
Facial recognition is not a technology used in security applications to judge a person’s character. It is no different than when TV news crews flash a police sketch of a suspect on the 6 o’clock news and say, “Please call the police if you know who this man is; he is wanted for robbing a bank.”
I do not remember anyone screaming about racism and civil rights violations when that happened.
Facial recognition means exactly that. Recognizing the face.
Case in point: there is a school district in Texas that employs facial recognition. At its graduation ceremony last year, it set up cameras in the parking lot and at the entrances to the football stadium where the ceremony was being held.
Within a half hour of opening the gates, the sheriff’s department was able to identify an individual who had previously been issued a criminal trespass warning barring him from district property.
With 5,000 people coming in and only 20 deputies working, he could have easily slipped through the crowd undetected. Instead, he was intercepted in the parking lot by deputies and arrested.
A Las Vegas casino also uses facial recognition. Last April, we were able to attend a live demonstration of the solution they had in place.
In the hour we were watching, casino security identified four different individuals who matched photos provided by local law enforcement. All four were wanted on felony warrants.
Security tracked the movements of each of the four and notified the police. When officers arrived on scene, the security team directed them to the exact location of the four subjects. Each was apprehended without incident.
But all of that is lost on Portland and Dominguez.
Portland’s proposed ban on facial recognition technology is part of a bigger effort to shape technology policy in a way that reduces harm to marginalized communities, Dominguez said.
The goal is for his bureau and the Office of Equity and Human Rights to propose drafts of both the public and private bans for the public to see in March, then final versions for the council to vote on around April.
“We are using the word ban, but we consider it more as us putting the brakes on this technology in the city for now so we can create a space for developing a capacity for better understanding all this emerging technology,” he said.
“We see this as a process and as the technology evolves, we need to evolve as well.”
And now you have people like Mayor Ted Wheeler determining the advancement of technology. He said he doesn’t think facial recognition technology should be used on a wide scale and that it hasn’t developed enough to “serve the public’s best interests.”
Wheeler, who oversees the Bureau of Planning and Sustainability, also favors the ban.
He recommended “having a community group vet organizations that want to use the technology.”
Businesses would have to prove that the technology’s current equity, privacy and data management issues are satisfactorily addressed.
Wheeler said the use of the technology by Jacksons could lead to discriminatory practices and is an example of what the city is trying to prevent. He sees a silver lining in the business’s claim that its employees feel safer and that thefts have decreased, but he believes the technology needs to be perfected.
“We’re not there yet,” Wheeler said. “None of this is possible with the technology we currently have and so we have to look to protecting the rights of our citizens above all else, especially those who’ve been historically under served.”
Um, Ted. You are completely and unequivocally wrong. Stick to things you know how to do…like protecting your city from groups like Antifa.
Oh, wait. You don’t know how to do that either.
As for Wheeler’s false assumption about what the technology can do: one of the solutions I recommend to my clients most often, and the one in use in the school district I referenced earlier, boasts a 0.1% false alarm (wrongful identification) rate at a detection speed of 0.2 milliseconds.
I have also seen the solution get a positive hit on a subject in his late 50s from a photo of him in his early 20s.
So yes, Mayor, we are there.
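For a sense of scale, here is a back-of-the-envelope calculation of what a vendor-quoted false-alarm rate means in a crowd. It assumes, purely for illustration, that the 0.1% rate applies independently to each face scanned; real vendor metrics are measured under specific test conditions and may not extrapolate this simply:

```python
# Hypothetical worked example: vendor-quoted false-alarm rate applied
# to a crowd the size of the graduation ceremony mentioned earlier.
false_alarm_rate = 0.001   # 0.1% wrongful-identification rate
faces_scanned = 5_000      # attendees passing the cameras

# Expected number of wrongful identifications if the rate is per face.
expected_false_alarms = false_alarm_rate * faces_scanned
print(expected_false_alarms)  # 5.0
```

Even under that pessimistic per-face assumption, a handful of alerts among thousands of attendees is the kind of workload human operators can verify, which is why these systems pair alerts with manual review.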
Some say that the technology isn’t foolproof. And that is true. No technology is 100% operational, efficient and effective. At the end of the day, it is a machine-based platform, and machines can break down.
But to argue that it is faulty because entering a facility with a group of people may cause someone to go undetected while others are detected is asinine. It also shows a complete lack of understanding of how the technology works.
The platform I mentioned earlier captures, accurately analyzes and validates hundreds of faces per second. If it can see your face, it will analyze it against the system.
Personally, I am OK with a hotel or restaurant using the solution and having my image and that of my family stored in the system. I am willing to let go of some of my “privacy” for safety.
The technology can revolutionize the policing world.
At a time when many departments and agencies are overworked and understaffed, what would it mean for the likelihood of apprehending wanted and dangerous criminals if local law enforcement could pass out a mug shot to local businesses?
Those businesses, in turn, could put the photo in their system.
If that individual walks in the door and the business owner gets a system message, the owner can discreetly notify law enforcement of the subject’s presence and allow officers to come make contact and arrest the subject.
That capability was demonstrated accurately at the casino last year.
But Oregon law enforcement agencies may never know, since the leadership there seems to believe that cameras are racist.