PORTLAND, OR – The Portland City Council recently passed a ban on specified uses of facial recognition technology across the city, and it happens to be the strictest such ban ever enacted in the nation.
Portland has passed the strictest ban on facial recognition technology in the nation, barring the use of the software by city bureaus and restricting its use by private businesses.https://t.co/lVPsUIxto6
— OPB (@OPB) September 9, 2020
Facial recognition technology has been a topic of heated debate where citizens’ right to privacy is concerned, and Portland is now among the cities pushing back against the tech. The software is banned from use by city bureaus, and the manner in which it can be used by private businesses is sharply limited.
This is the first ban enacted in the United States that specifically restricts how private companies use facial recognition software.
The restriction imposed on private businesses doesn’t bar business owners from having and using facial recognition software outright, but they cannot deploy the software in a way that points the technology at a public area like a sidewalk or street.
Mayor Ted Wheeler and Commissioner Jo Ann Hardesty introduced the ordinances on September 9th, and the City Council passed them unanimously.
Commissioner Hardesty cited privacy concerns as the most important factor behind the ban and restrictions:
“We own our privacy. And it’s our obligation to make sure that we’re not allowing people to gather it up secretly and sell it for profit or fear-based activity.”
During public testimony on the then-proposed ban before it passed, Darren Golden, policy strategist for the Urban League of Portland, was among those in favor of banning and restricting the use of the technology:
“The opposition to these bans will say [facial recognition technology] could be good, that we need to wait to see if this can be made better, that the algorithm can be made perfect. All of that is wrong. You cannot consent to having your facial data taken by camera on any public access way. Ever. You just can’t.”
Portland bans #FacialRecognition:
"The ordinances bar the use of facial recognition technology by city agencies and on public property within the city, but also prohibit its use "by private entities in places of public accommodation”"https://t.co/Uy02cK42XX
— Garen T. Bragg (@garenbragg) September 10, 2020
Considering Golden’s sentiment that people can’t consent to having their “facial data taken by camera on any public access way,” it’s rather interesting that Portland still uses red light cameras.
While the technology behind red light cameras is vastly different from facial recognition software, the result is somewhat similar. A red light camera isn’t using a person’s face to identify them, but it is using their license plate, which leads to the same outcome.
However, one glaring caveat is that facial recognition software has a notable problem with darker skin tones. One study revealed that while the error rate on positive matches for lighter-skinned men was 0.8%, the error rate on positive matches for darker-skinned women was 34.7%.
Basically, facial recognition software has a tendency to think that black women look fairly alike.
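To put that disparity in perspective, a back-of-the-envelope calculation shows what those error rates would mean in practice. The figure of 10,000 match attempts per group is a purely hypothetical number chosen for illustration; only the two error rates come from the study cited above.

```python
# Illustrative only: projects the study's reported error rates onto a
# hypothetical volume of match attempts. The 10,000 figure is an assumption,
# not a number from the study or the article.

error_rates = {
    "lighter-skinned men": 0.008,   # 0.8% error rate on positive matches
    "darker-skinned women": 0.347,  # 34.7% error rate on positive matches
}

attempts = 10_000  # hypothetical match attempts per group

for group, rate in error_rates.items():
    expected_errors = rate * attempts
    print(f"{group}: ~{expected_errors:.0f} expected misidentifications "
          f"out of {attempts:,} attempts")
```

Under that assumption, the gap works out to roughly 80 misidentifications versus roughly 3,470, which is the scale of disparity driving the criticism.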
But there are some concerns that this ban wasn’t due to wanting to protect the privacy of innocent Portlanders – but rather make it so that the tech can’t be used in the event of rioters in the streets.
Portland bans facial recognition technology so rioters,looters, arsonists and murderers can not be identified.https://t.co/DyltST9Fje
— Richard Lucente (@LucenteRichard) September 10, 2020
However, considering that the ban won’t go into effect until next year, it is uncertain whether that was a genuine hidden motivation of those who proposed the ban and restrictions.
The bans and restrictions on the software are reportedly set to go into effect on January 1st of 2021.
Meanwhile, in Florida, police have been getting quite the use out of the same styled technology.
According to NBC 6, Miami Police used a “controversial” facial recognition program to identify a violent protestor, leading to her arrest.
On May 30, multiple “protestors” attacked police officers with projectiles outside Miami’s downtown police headquarters.
Video shared on the NBC6 website shows police body camera footage as projectiles land on officers.
One person can be heard to shout,
“They’re throwing rocks!”
One protestor, identified by NBC6 as Oriana Albornoz, 25, can be seen standing on the trunk of a police cruiser, wearing a white shirt bearing the symbol for anarchy.
Police are seen to deploy tear gas as they continue to be pelted with objects.
Albornoz is later caught on camera throwing unidentified objects at police officers.
Miami police reported that Albornoz’s actions resulted in injury to an officer’s leg.
A month later, Albornoz was arrested. She pled not guilty to a charge of battery on a police officer.
According to NBC6, police used facial recognition technology provided by Clearview AI to identify the violent perpetrator.
As it is stated on the Clearview AI website, the technology uses publicly available images from non-protected, non-private information, including social media.
The website further says:
“Clearview AI helps to identify child molesters, murderers, suspected terrorists, and other dangerous people quickly, accurately, and reliably to keep our families and communities safe.”
The website points out that the results are subject to legal compliance and scrutiny, adding:
“Just like other research systems, Clearview AI results legally require follow-up investigation and confirmation. Clearview AI was designed and independently verified to comply with all federal, state, and local laws.”
Miami police have a detailed policy regarding the use of facial recognition technology.
The official policy states:
“This technology can be a valuable investigative tool in developing leads for a criminal or Internal Affairs investigation, detecting and preventing criminal activity, reducing an imminent threat to health or safety, and helping in the identification of deceased persons or persons unable to identify themselves.”
The document goes on to emphasize respect of rights and privacy, saying:
“This policy will ensure that all facial recognition technology uses are consistent with authorized purposes while not violating the privacy, civil rights, and civil liberties of individuals.”
Authorized use of the technology includes:
“Potential suspects, witnesses, and/or victims in a criminal investigation.”
Unauthorized use includes:
“…surveillance of persons or groups based solely on their religious, political, or other constitutionally protected activities, their race, ethnicity, gender, sexual orientation, sexual identity, or other constitutionally protected class membership.”
The MPD policy also requires monthly audits to assure compliance with regulations.
Miami Police Assistant Chief Armando Aguilar spoke highly of Clearview AI.
He told NBC6:
“While we live in a society where video seems to be everywhere, many times the challenge of video is to have a photo or a video of a suspect but not knowing who the suspect is. This has helped us go over that barrier.”
As of July, the technology had assisted Miami PD in identifying 28 people linked to crimes.
Not everyone is a fan of Clearview AI, however.
Mike Gottlieb, attorney for arrested protestor Oriana Albornoz, cited privacy concerns. He told NBC6,
“How or where they got her image from begs other privacy rights.
“Did they comb through her, let’s say, social media? And if they did, how did they get access to her social media?”
NBC6 also pointed out that the Miami PD’s policy prohibits “surveillance of people exercising ‘constitutionally protected activities’ like protesting,” from which one might infer that the news outlet was arguing that Albornoz’s activities fell into that category.
Assistant Chief Aguilar confirmed,
“if someone is peacefully protesting and not committing a crime, we cannot use it against them.”
However, Aguilar noted that throwing rocks at officers is, indeed, a crime.
He went on to say,
“We have used the technology to identify violent protestors who assaulted police officers, who damaged police property, who set property on fire, and we have made several arrests in those cases, and more arrests are coming in the near future.”
Chad Marlow, of the ACLU, is also opposed to the use of Clearview AI.
He noted that companies like IBM, Microsoft and Amazon have refused to sell facial recognition technology to police departments over concerns of privacy and race.
“I think it’s disgraceful, frankly, for police departments to say, well if Amazon and Microsoft and IBM won’t sell me facial recognition, let’s find a company who will.”
Marlow pointed NBC6 to multiple studies showing errors in identification using facial recognition technology and voiced concerns that persons of color would be more likely to be misidentified. Marlow added that police departments in South Florida “don’t care” about that possibility.
New: BuzzFeed News obtained a report given to the North Miami PD by Clearview AI attesting to the technology's accuracy using "the ACLU's facial recognition accuracy methodology."
The ACLU calls the report "absurd."https://t.co/o9Mu3kFBcE
— Logan McDonald (@_loganmcdonald) February 10, 2020
Assistant Chief Aguilar responded to concerns over racial bias by stating that the Miami PD “has protections in place to prevent the wrong person being arrested.”
“We ensure that our officers, our detectives, are aware of those algorithms biases, and we build that into the policy to ensure that our officers don’t make an arrest based solely on recognition identification.”
Clearview’s CEO, Hoan Ton-That, bolstered Aguilar’s words by asserting that wrongful identification is highly unlikely.
“Unlike other facial recognition algorithms, which have misidentified people of color, an independent study indicates that Clearview AI has no racial bias.
“As a person of mixed race, this is especially important to me. We are very encouraged that our technology has proven accurate in the field and has helped prevent the wrongful identification of people of color.”
Despite claims as to its “controversial” nature and potential for misuse, reports indicate that Miami PD’s use of Clearview AI for facial recognition has resulted in the arrests of numerous criminals, including projectile-hurling “protestor” Oriana Albornoz.
As ICE has recently inked a contract with Clearview AI, we are sure to hear more on how this facial recognition technology will be used in law enforcement work.