Since I last wrote about it, people seem to be paying more attention to the dangers of facial recognition technology:
- Yesterday, two Illinois residents sued Amazon, Microsoft, and Alphabet (Google’s parent company) for allegedly violating the state’s biometric privacy law.
- Google was sued for allegedly violating this same Illinois law by failing “to obtain consent from anyone” when it introduced facial recognition to its cloud service for storing and sharing photos.
- Facebook settled a lawsuit alleging it misused its facial recognition technology.
- The EU considered a five-year ban on facial recognition technology.
- A number of cities have banned the municipal use of the technology.
- IBM wrote to Congress, stating it no longer provides general-purpose facial recognition or analysis software.
- Amazon announced a one-year moratorium on police use of its facial recognition technology.
- Microsoft declared it will not sell facial recognition technology to police departments in the United States until a law is passed governing the technology.
- In the wake of the George Floyd killing and Black Lives Matter protests, an initial version of the Justice in Policing Act of 2020 would generally prohibit using facial recognition technology with police body cameras. A later draft bill would essentially ban the technology in policing altogether.
Much of the negative attention being paid to facial recognition technology has focused on engineering issues, such as algorithmic bias, specifically the high error rates when identifying females and darker-skinned people versus males and those with lighter skin. This major issue deserves continued scrutiny.
At the same time, despite the lawsuits based on Illinois’s biometric law, attention has shifted away from how social media companies and other consumer-facing companies handle our images: the underlying data that makes facial recognition technology useful to the police.
Yet companies continue to capture, store, and share biometric data. Consumers may “consent” to sharing their images for certain limited purposes. But that data is being used now, and will be used in the future, for purposes far beyond what was contemplated when users ticked the box in a privacy disclosure (if they did at all).[^1]
This vast trove of data is the main reason police have renewed their interest in facial recognition technology — there’s enough data to actually make it useful in identifying criminal suspects.
So while facial recognition technology itself needs to be improved and more heavily regulated, we also need to regulate how the data is gathered and treated in the first instance.
Our images ought to be protected to the same extent as other personally identifying information, such as Social Security numbers or certain financial information like credit card numbers.
Voluntary moratoriums and proposed bans may temporarily halt law enforcement’s use of the technology, at least in certain countries. But as companies continue to collect and store our data, and the utility of the technology increases, pressure will build to use it again.
[^1]: The essential feature of this technology is its ability to assign a unique identifier to a previously unquantifiable object: our faces. We can thus link a human image to a field in a database, and that link can be shared and cross-referenced for any purpose.
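To make the footnote’s point concrete, here is a minimal, purely illustrative Python sketch. It assumes a recognition system has already reduced a face image to a stable byte template that can be hashed into a unique key; every name, value, and database here is hypothetical, not any vendor’s actual API.

```python
import hashlib

def face_key(face_template: bytes) -> str:
    # Hypothetical: assume the recognition system has already reduced a
    # face image to a stable byte template. Hashing it yields a unique,
    # shareable identifier for that face.
    return hashlib.sha256(face_template).hexdigest()

# Two independent datasets, keyed by the same face identifier
# (stand-ins for, say, a photo-sharing service and a police database).
photo_service = {}
watch_list = {}

# Placeholder bytes standing in for a model's output.
template = b"...bytes produced by a face-recognition model..."
key = face_key(template)

photo_service[key] = {"name": "Jane Doe", "photos": ["beach.jpg"]}
watch_list[key] = {"case_no": "2020-0417"}

# Because both records share one key, the datasets can be cross-linked
# for purposes never contemplated when the photo was uploaded.
if key in photo_service and key in watch_list:
    print(photo_service[key]["name"], "->", watch_list[key]["case_no"])
```

The point of the sketch is that once a face resolves to a stable identifier, joining records across otherwise unrelated databases is a one-line lookup; no further consent or technical barrier is involved.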
Update: On July 23, U.S. District Court Judge James Donato considered a revised proposed settlement in the Facebook case. The new amount would be $650 million, making it one of the largest class action settlements ever, yet it may still come to less than $1,000 in statutory damages per claimant, the amount prescribed by the Illinois law. To notify potential class members, Facebook plans to use some of its own “aggressive methods” for capturing user attention.