On Tuesday, California enacted a three-year ban on the use of facial recognition in police bodycams, a technology that turns the cameras into biometric surveillance devices.
This isn’t surprising, coming as it does from the state with the impending, expansive privacy law – the California Consumer Privacy Act (CCPA) – that’s terrifying data mongers.
As it is, in May, San Francisco became the first major US city to ban facial recognition. It may be a tech-forward metropolis in a state that’s the cradle of massive data-gobbling companies, but lawmakers have said that this status actually confers a bit of responsibility for reining in the privacy transgressions of the companies headquartered there.
When facial recognition gets outlawed, lawmakers point to the many tests that have found high misidentification rates. San Francisco, for instance, pointed to the ACLU’s oft-cited test, which falsely matched 28 members of Congress with mugshots.
The ACLU of Northern California repeated that test in August, finding that the same technology misidentified 26 state lawmakers as criminal suspects.
One of the misidentified was San Francisco Assemblyman Phil Ting, the lawmaker behind the bill, AB1215, which Gov. Gavin Newsom signed into law on Tuesday.
The law, which goes into effect on 1 January 2020 and which expires on 1 January 2023, prohibits police from “installing, activating, or using any biometric surveillance system in connection with an officer camera or data collected by an officer camera.”
The law cites the threat to civil rights posed by the pervasive surveillance of facial recognition bodycams:
The use of facial recognition and other biometric surveillance is the functional equivalent of requiring every person to show a personal photo identification card at all times in violation of recognized constitutional rights. This technology also allows people to be tracked without consent. It would also generate massive databases about law-abiding Californians, and may chill the exercise of free speech in public places.
…and noted the technology’s tendency to screw up:
Facial recognition and other biometric surveillance technology has been repeatedly demonstrated to misidentify women, young people, and people of color and to create an elevated risk of harmful ‘false positive’ identifications.
There are many cases in point when it comes to this error-prone technology. Here’s one: after two years of pathetic failure rates at Notting Hill Carnival, London’s Metropolitan Police finally threw in the towel in 2018. In 2017, the “top-of-the-line” automatic facial recognition (AFR) system they’d been trialling couldn’t even tell the difference between a young woman and a balding man.
Facial recognition failure hasn’t stopped the UK from signing up with Singapore to collaborate on developing a digital identity, mind you. As part of its Gov.uk Verify scheme, the UK Government Digital Service launched a system of biometric payment for government services earlier this year. For its part, France is set to implement a nationwide facial recognition ID program next month, in spite of protests from privacy groups and from its independent data regulator, CNIL.
Source: https://nakedsecurity.sophos.com/2019/10/10/california-outlaws-facial-recognition-in-police-bodycams/