You may have good reason to be worried that police use of facial recognition might erode your privacy -- many departments are already using software with few privacy safeguards. The New York Times has learned that over 600 law enforcement agencies in the US and Canada have signed up in the past year to use software from little-known startup Clearview AI that can match uploaded photos (even those taken at imperfect angles) against over three billion images reportedly scraped from the web, including Facebook and YouTube. While it has apparently helped solve some cases, it also creates massive privacy concerns -- police could intimidate protesters, stalk people and otherwise abuse the system with few obstacles.
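For context on how this class of tool generally works (the Times piece doesn't publish Clearview's code), the core idea is to convert each face into a numeric "embedding" and then look up the nearest embeddings in a precomputed index of scraped photos. The sketch below is a minimal, hypothetical illustration of that technique; embed_face is a stand-in for a real face-embedding model, not anything Clearview has disclosed.

```python
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Placeholder for a real face-embedding model (e.g. a CNN that maps a
    face crop to a fixed-length vector). Here we just derive a deterministic
    pseudo-random unit vector from the pixel data."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    vec = rng.standard_normal(128)
    return vec / np.linalg.norm(vec)

def top_matches(probe: np.ndarray, gallery: np.ndarray, k: int = 5):
    """Return (index, cosine similarity) for the k gallery embeddings most
    similar to the probe. Rows of `gallery` are unit-normalized embeddings
    computed ahead of time for every indexed photo."""
    scores = gallery @ probe                  # cosine similarity per photo
    order = np.argsort(scores)[::-1][:k]      # highest-scoring first
    return [(int(i), float(scores[i])) for i in order]

# Usage: index a (toy) gallery of photos, then search with a probe photo.
photos = [np.full((8, 8), i % 256, dtype=np.uint8) for i in range(1000)]
gallery = np.stack([embed_face(p) for p in photos])
probe = embed_face(np.full((8, 8), 42, dtype=np.uint8))
print(top_matches(probe, gallery))
```

At real-world scale the gallery would hold billions of vectors and the brute-force dot product would be replaced by an approximate nearest-neighbor index, but the matching step is the same in spirit.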
Part of the problem stems from a lack of oversight. There has been no real public input into the adoption of Clearview's software, and the company's ability to safeguard data hasn't been tested in practice. Clearview itself remained highly secretive until late 2019. It's certainly capable of looking at search data if it wants -- police helping to test the software for the NYT's story got calls asking if they'd been talking to the media.
The software also appears to explicitly violate policies at Facebook and elsewhere against collecting users' images en masse. Facebook said it was looking into the situation and would "take appropriate action" if Clearview is breaking its rules.
Company chief Hoan Ton-That tried to downplay some of the privacy concerns. He noted that surveillance cameras are "too high" to deliver truly reliable facial recognition, and that Clearview was only notified about the reporter's inquiries because of a system built to catch "anomalous search behavior." Customer support reps don't look at uploaded photos, the company added. And while there's underlying code that could theoretically power augmented reality glasses capable of identifying people on the street, Ton-That said there were no plans for such a design.
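The report doesn't explain how that "anomalous search behavior" system works. One simple form such monitoring could take is comparing an account's current search volume against its own recent baseline; the snippet below is a hypothetical sketch of that idea, not a description of Clearview's actual mechanism.

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag an account whose search count today sits far above its own
    recent daily baseline (a simple z-score test)."""
    if len(history) < 2:
        return False                      # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu                 # any jump over a perfectly flat baseline
    return (today - mu) / sigma > threshold

# Example: an account that normally runs a few searches a day suddenly runs 40.
print(is_anomalous([3, 5, 4, 6, 2, 5, 4], today=40))   # True -> escalate for review
```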
Clearview's software has nonetheless raised alarm bells, and not just for the possible abuses of power mentioned earlier. The software is only about 75 percent accurate, and hasn't been tested by outsiders like the US government's National Institute of Standards and Technology. There's a real risk of false matches, not to mention potential gender and racial biases. However well it has worked in some instances, there's a chance it could lead to false accusations or disproportionately target non-white groups. Cities like San Francisco have rushed to ban government use of facial recognition over problems like these, and calls for further bans might grow louder in the wake of this latest news.
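To see why false matches are a structural risk at this scale, consider some purely illustrative arithmetic (the false-match rate below is an assumption, not a published Clearview figure): even a rate of one false match per ten million comparisons adds up when every search runs against roughly three billion photos.

```python
# Illustrative only: the false-match rate is an assumption, not a Clearview figure.
gallery_size = 3_000_000_000        # ~3 billion indexed photos, per the NYT report
false_match_rate = 1e-7             # assume 1 false match per 10 million comparisons

expected_false_matches = gallery_size * false_match_rate
print(f"Expected false candidates per search: {expected_false_matches:.0f}")   # -> 300
```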
Source: New York Times