The Messy History of Facial Recognition Company Clearview AI
Clearview AI, a provider of facial recognition capabilities powered by billions of images scraped from all over the web, has faced much media scrutiny and numerous legal challenges in 2020 and 2021.
As we’ve found in our analysis of AI news in 2021, few, if any, topics receive as much attention in AI coverage as facial recognition, and few companies have as many news stories devoted to them as Clearview AI – whose chief product is a ‘search engine for faces,’ that is, the ability to find someone’s name from a photo of their face. By scraping publicly available images on the internet, the company has built a vast database that has a good chance of including you if you live in the US. As its website states: “Our platform, powered by facial recognition technology, includes the largest known database of 10+ billion facial images sourced from public-only web sources, including news media, mugshot websites, public social media, and other open sources.” This article will provide background information on the company, and then delve into the many legal challenges the company has faced over the past few years.
Clearview AI was first formed in 2016 (with the name being chosen in 2017), but was not often discussed in the press until the New York Times published The Secretive Company That Might End Privacy as We Know It – a thorough exposé revealing much about the then obscure company – in early 2020. As covered in the article, while there were already government databases and commercial services for facial recognition, Clearview’s efforts to build a database that encompassed just about any person whose image and name could be found on websites such as Facebook, Venmo, Twitter, and more were unprecedented:
“Police departments have had access to facial recognition tools for almost 20 years, but they have historically been limited to searching government-provided images, such as mug shots and driver’s license photos. In recent years, facial recognition algorithms have improved in accuracy, and companies like Amazon offer products that can create a facial recognition program for any database of images.
Mr. Ton-That [the CEO of Clearview AI] wanted to go way beyond that. He began in 2016 by recruiting a couple of engineers. One helped design a program that can automatically collect images of people’s faces from across the internet, such as employment sites, news sites, educational sites, and social networks including Facebook, YouTube, Twitter, Instagram and even Venmo. Representatives of those companies said their policies prohibit such scraping, and Twitter said it explicitly banned use of its data for facial recognition.”
After building its database and an algorithm that made use of cutting-edge deep learning techniques to search that database, in 2017 the company decided to focus on selling its product to American law enforcement agencies. It marketed directly to officers, with the hope that those officers’ departments would then adopt it: “The company’s most effective sales technique was offering 30-day free trials to officers, who then encouraged their acquisition departments to sign up and praised the tool to officers from other police departments at conferences and online.”
By 2019 the approach had paid off, with many police departments, the F.B.I., and the Department of Homeland Security using it. It was not until the end of 2019 that such use of Clearview’s product by police agencies made headlines, with the story Florida law enforcement agencies use facial recognition to identify alleged thief being published on December 27th of that year. The New York Times’ article came out just weeks later, on January 18, 2020, and just days after that Buzzfeed released the article Clearview AI Says Its Facial Recognition Software Identified A Terrorism Suspect. The Cops Say That's Not True. Buzzfeed’s article debunks a claim Clearview made in an email which it sent to law enforcement agencies and which had the subject line “How a Terrorism Suspect Was Instantly Identified With Clearview”. It also describes Clearview’s response to the New York Times article:
“In the aftermath of revelations about its technology, Clearview has tried to clean up its image by posting informational webpages, creating a blog, and trotting out surrogates for media interviews, including one in which an investor claimed Clearview was working with "over a thousand independent law enforcement agencies." Previously, Clearview had stated that the number was around 600. …
Clearview was also prepared for the Times’ story, putting new information, claims, and promotional material up on its site to replace the sparse page that had existed for at least the last six months. At the top of its new site, a video boasted that Clearview helped capture a terrorism suspect in the New York subway.”
Then, on January 28 Buzzfeed published a second article on the subject – Clearview AI Once Told Cops To “Run Wild” With Its Facial Recognition Tool. It's Now Facing Legal Challenges. As the name of the article implies, it covers the legal fallout of this media attention; Clearview “received cease-and-desist letters from Twitter and the New Jersey attorney general. It was also sued in Illinois in a case seeking class-action status.”
These three stories were followed by a flurry of developments and media coverage.
Needless to say, a lot of attention was drawn to Clearview over the first half of 2020. Since then, the attention has rarely abated, and many more legal challenges to the company have come about. The lawsuit filed by the ACLU – the American Civil Liberties Union – is based on the Illinois Biometric Information Privacy Act (BIPA), “a 2008 law that prevents companies from collecting or storing fingerprints or scans of citizen’s faces without their consent.” A separate class action lawsuit was also filed in Illinois and requested a “temporary injunction against Clearview to prevent it from using the biometric information of current and past Illinois residents until the case reaches a conclusion.” Clearview is now seeking to fight these lawsuits, and BIPA in general, in a case presented to the Supreme Court. But its troubles go beyond this law; it has also faced lawsuits in Vermont, California, and elsewhere.
Clearview has also been challenged internationally. In July 2020, privacy regulators in the U.K. and Australia announced a joint probe into Clearview AI. Clearview also stopped offering its services in Canada following an investigation by both provincial and federal authorities, with its app later being ruled to be illegal there. In 2021 it was hit by many GDPR-related privacy complaints in Europe, and near the end of 2021 Australia, France, and the UK hit Clearview with fines and ordered it to stop collecting data and to delete the data it had already collected.
Use of Clearview’s services has also received increased scrutiny: Macy’s has been sued over its use of the tool, Immigration and Customs Enforcement (ICE) drew much press coverage by signing a contract with the company, and data analysis was published showing that Use of Clearview AI facial recognition tech spiked as law enforcement seeks to identify Capitol mob. More general scrutiny and reflection has also continued to be published via articles such as Clearview AI's plan for invasive facial recognition is worse than you think and Is there any way out of Clearview’s facial recognition database?.
Despite these many challenges, Clearview AI is still going strong. In July 2021 it announced that it had raised $30 million from investors, and just last month it was reported that it is on track to win a U.S. patent for its technology. While countries such as Australia and Canada have already ruled that it cannot operate within their borders, no such outcome has yet occurred in the US. Looking to the future, there is no doubt that Clearview will continue to be at the forefront of AI news throughout 2022, and that the outcome of the legal challenges against it will have major implications for all Americans.