People talk about gadgets they wish existed but don't, and soon the idea of "glasses with a built-in computer that tells you who you're looking at" comes up. Such "augmented reality" devices seem so obviously desirable that someone usually wonders, "When are they going to start selling them?"
In fact, two different companies have built working prototypes in the last six years. Facebook experimented with an internal version in 2017, fueled by the colossal number of profile photos on its site. The other is Clearview AI, a secretive American startup that first came to the attention of New York Times journalist Kashmir Hill in November 2019.
Neither has put its device up for sale; Facebook has carefully backed away from the idea. (The same goes for Google, which, although it was the first to offer augmented reality glasses in February 2013 and has the computing power and data, told Hill that it "will not make general-purpose facial recognition commercially available while we work through policy and technical issues.")
But Clearview AI made its basic system, capable of identifying almost anyone whose photo and name had appeared on the Internet, available first to a few Silicon Valley venture capitalists it hoped would invest, and then, perhaps sensing where the best arguments for beneficial use lay, rented it to American police departments to identify suspects. It may sound dystopian, but it's about stopping crime!
Ironically, it appears that Clearview AI was itself very wary of being recognized by the media, placing Hill's face (and, presumably, those of other journalists) on a blacklist that would return no results when searched. Hill documents how she tracked down the company's founders: it's a classic piece of the shoe-leather internet-privacy journalism she specializes in and excels at.
In recent years, powerful "machine learning" and cloud computing, combined with the growth of smartphones, selfies and social media, have made a facial recognition system capable of identifying anyone as inevitable as the atomic bomb was after the splitting of the uranium atom in 1938. Just as that breakthrough led to a cascade with an obvious outcome, the prerequisites for facial recognition – masses of images online and rapidly improving algorithms for determining what makes a face unique – awaited only someone ready to ignore the controversial social consequences.
In fact, face-naming systems have been invented repeatedly over the past decade, sometimes spilling out onto the Internet, where they are inevitably put to nefarious purposes – often by would-be stalkers. As Hill notes, state use in China goes well beyond this, with individuals and entire ethnic groups such as the Uyghurs being monitored and controlled. And an entire chapter describes the experience of an innocent Black man in Detroit who was singled out as a possible match (the ninth out of 243) for a thief, and then wrongly arrested.
Despite this, advocacy groups can find themselves conflicted: Hill points out that the American Civil Liberties Union (ACLU), which sued Clearview, could have argued in favour of facial recognition had it been used to identify police officers who removed their badges during demonstrations.
Overall, the problem is that we can't determine whether ubiquitous, instant facial recognition is a good or bad thing. Could it find kidnapped children? Hit-and-run drivers? Burglars? Spare us embarrassment at social occasions? Certainly. Would it be misused by people seeking to harm and harass, as well as by governments and police in authoritarian and democratic states alike? Again, certainly. More importantly, can it be stopped? It's hard to see how, and Hill – not unreasonably – offers no suggestions. The bomb has left the bay. The question now is where it lands.