A Summary of Concerns About Clearview AI's Facial Recognition Product
Is facial recognition an encroachment on individual freedom, or the next step to a technologically advanced society?
Clearview AI’s facial recognition platform has been awarded a US patent titled, “Methods for Providing Information About a Person Based on Facial Recognition” after it performed nearly perfectly in the Facial Recognition Vendor Test (FRVT) hosted by the National Institute of Standards and Technology.
Clearview is reportedly on track to have 100 billion facial photos in its database within a year, which they claim is enough to ensure “almost everyone in the world will be identifiable”.
Academics, industry practitioners, lawmakers, and the press are concerned that the startup is building such a large face image database without the subjects' consent or knowledge, and about the potential misuse of facial recognition and the prevalent gender and racial biases in its algorithms.
Temporary bans and delays in using this technology are justified. However, more updated rules and regulations are needed to match the pace of developments in facial recognition to minimize the technology’s social harms.
Clearview AI’s facial recognition platform has been awarded a US patent after it performed nearly perfectly in the Facial Recognition Vendor Test (FRVT) conducted by the National Institute of Standards and Technology (NIST) in January 2022. The patent, titled “Methods for Providing Information About a Person Based on Facial Recognition,” was issued by the US Patent and Trademark Office (USPTO).
Clearview AI is a start-up founded in 2016 that develops a facial recognition application intended to serve as a search engine for people’s identities. While there are many existing patents for facial recognition technologies, what makes this platform unique is its combination of facial recognition with information gathered from the public internet. The company has already collected over 10 billion images by scraping Facebook, YouTube, Twitter, Venmo, LinkedIn, and many other internet profiles, and it continues to expand that collection. According to a recent report, “Facial recognition firm Clearview AI tells investors it’s seeking massive expansion beyond law enforcement,” the database will reach 100 billion facial photos within the year, which the company claims is enough to identify almost every person in the world.
Clearview AI’s founder, Hoan Ton-That, maintains that the platform is intended to serve only government users for law enforcement applications. To avoid misuse of the application, the company says it will refrain from making a consumer version of Clearview AI. However, the patent application lists various other purposes beyond law enforcement where the product could be useful. The company argues that “it may be desirable for an individual to know more about a person that they meet, such as through business, dating, or other relationship.” Common ways of learning about new people, like asking them questions or checking their business cards, may be unreliable because the information they choose to share could be false, the application says.
Facial recognition systems are known to make mistakes. Multiple incidents demonstrate that innocent citizens can be drawn into criminal cases if face recognition algorithms determine they look like someone else. Not only can these algorithms be incorrect, they are also known to exhibit racial and gender bias. For instance, in 2019 Michael Oliver, a young African American man, was wrongfully accused and arrested by the Detroit police. In similar incidents in 2020, Robert Williams and Nijeer Parks, also African American, were accused and arrested because of incorrect matches by facial recognition technologies. We refer our readers to the article “Police Misuse of Facial Recognition - Three Wrongful Arrests and Counting” for a more detailed account of such flawed matches leading to wrongful arrests.
Amidst all this, Clearview’s facial recognition platform has garnered a lot of interest from private businesses, large corporations, and governments. At the same time, the company has faced a multitude of controversies and pushback since it came into the public eye, thanks to the 2020 article “The Secretive Company That Might End Privacy as We Know It.”
A brief timeline of Clearview AI:
2016 - 2017 | Inception: Clearview AI started as a very small operation. The team worked remotely on building tools to scrape images of people from social media platforms like Twitter, Facebook, LinkedIn, Venmo, and Instagram, and on building a facial recognition model based on the plethora of academic papers published in recent years.
Late 2017 | Algorithm behind the Clearview AI application: The facial recognition application, with a database of 3 billion facial images, was ready and named “SmartChecker”. This is when the founders started exploring how and to whom to market the product. The company decided to focus on selling its product to American law enforcement agencies as a tool to aid in finding culprits.
2018 - 2019 | Interest in Facial Recognition: During this period, various law enforcement agencies, government agencies, and private companies started using Clearview AI, as revealed later in a January 2020 BuzzFeed article. Clearview AI started by marketing directly to police officers, with the hope that those officers’ departments would then adopt it. “The company’s most effective sales technique was offering 30-day free trials to officers, who then encouraged their acquisition departments to sign up and praised the tool to officers from other police departments at conferences and online.”
Jan 2020 | Media reveals Clearview AI to the public: The New York Times article “The Secretive Company That Might End Privacy as We Know It” brought the company into the limelight. A week later, a BuzzFeed article titled “Your Local Police Department Might Have Used This Facial Recognition Tool To Surveil You. Find Out Here.” revealed the clients of Clearview AI.
Feb 2020 | Troubles and legal battles start for Clearview AI: After discovering that Clearview AI was scraping images from their sites, Twitter, Facebook, Google, YouTube, and Venmo sent cease-and-desist letters insisting that the company delete all scraped images, as the practice violates their policies.
May 2020 | More legal mess for Clearview AI: The American Civil Liberties Union and the law firm Edelson PC filed a lawsuit against Clearview AI alleging violation of Illinois residents’ privacy rights under the Illinois Biometric Information Privacy Act (BIPA). The case was filed in Illinois with the backing of a consortium of Chicago-based rights groups.
Jan 2021 | Clearview AI used in Capitol mob attack investigations: As reported in the article “Use of Clearview AI facial recognition tech spiked as law enforcement seeks to identify Capitol mob,” use of Clearview AI’s facial recognition increased as law enforcement sought to identify people involved in the US Capitol mob attack.
July 2021 | Clearview AI raises money: Clearview AI raised $30 million in a financing round while continuing to face backlash and legal battles. The investors were not identified publicly, but Hoan Ton-That said they “include institutional investors and private family offices.”
Jan 2022 | Clearview AI wins a US patent: Clearview AI was awarded a US patent for their facial recognition technology.
Since the company came under public scrutiny in January 2020, various debates have been sparked about the nature of facial recognition technology.
First, many are concerned that the company has violated people’s right to privacy by scraping face images off the public Internet without individuals’ consent. Second, many are concerned about the use of this application without any formal vetting for the various forms of algorithmic bias common in facial recognition technology. For more on these racial and gender biases, we refer readers to the article “Gender and racial bias found in Amazon’s facial recognition technology (again)”. Moreover, the long-term effects of widespread adoption of such technologies are also of concern. There is an immediate need for regulations governing when the use of facial recognition applications is justified.
Matt Mahmoudi, an Amnesty International researcher leading the group’s work to ban facial recognition technologies, believes that the very part the recent patent seeks to protect (building a face image dataset by scraping images from the public internet) is the most concerning:
“The part that they're looking to protect is exactly the part that's the most problematic. They are patenting the very part of it that's in violation of international human rights law.”
Ton-That defends the practice, citing a First Amendment right to make use of public material:
“All information in our datasets are all publicly available info that people voluntarily posted online — it's not anything on your private camera roll. If it was all private data, that would be a completely different story.”
Woodrow Hartzog, professor of law and computer science, and Evan Selinger, professor of philosophy, at Northeastern University in Boston, believe that Clearview AI should serve as proof that facial recognition needs to be banned altogether:
“Stopping this technology from being procured — and its attendant databases from being created — is necessary for protecting civil rights and privacy. But limiting government procurement won’t be enough. We must ban facial recognition in both public and private sectors, before we grow so dependent on it that we accept its inevitable harms as necessary for “progress.” Perhaps over time appropriate policies can be enacted that justify lifting a ban. But we doubt it.”
We refer the readers to the article, “What Happens When Employers Can Read Your Facial Expressions?” for a more detailed reading regarding the efforts required on the lawmakers and government regulators to combat such bias and ethical and privacy concerns.
Aleksander Madry, a professor at MIT, also expressed concern regarding the bias that exists in such algorithms:
“I would expect accuracy to be quite bad, and even beyond accuracy, without careful control over the data set and training process I would expect a plethora of unintended bias to creep in. Without due care, for example, the approach might make people with certain features more likely to be wrongly identified. Even if the technology works as promised, the ethics of unmasking people is problematic. Think of people who masked themselves to take part in a peaceful protest or were blurred to protect their privacy,”
Despite these concerns, the technology has clearly proven alluring and useful to public and private organizations alike. BuzzFeed News gained access to leaked documents and retrieved a list of organizations engaging with the facial recognition application, all of whose usage was tracked and logged by the company:
“For a company that maintains its tools are for law enforcement, Clearview’s client list includes a startling number of private companies in industries like entertainment (Madison Square Garden and Eventbrite), gaming (Las Vegas Sands and Pechanga Resort Casino), sports (the National Basketball Association), fitness (Equinox), and even cryptocurrency (Coinbase). The logs also show that the startup is particularly interested in banking and finance, with 46 financial institutions trying the facial recognition tool.”
“Employees at big-box retailers, supermarkets, pharmacy chains, and department stores have also tried Clearview. Company logs reviewed by BuzzFeed News include Walmart (nearly 300 searches), Best Buy (more than 200 searches), grocer Albertsons (more than 40 searches), and Rite Aid (about 35 searches). Kohl’s, which has run more than 2,000 searches across 11 different accounts, and Macy’s, a paying customer that has completed more than 6,000, are among the private companies with the most searches.”
Jolynn Dellinger, a professor of ethics at Duke University, emphasizes the need to justify what technology should be created and used.
Another notable development is that the tech giants Facebook, Microsoft, and Google, all prominently involved in developing facial recognition technologies for a significant time, have come out and clarified their stance on these technologies. All three acknowledged the potential these technologies hold, but at the same time recognized their immense scope for misuse and potential to harm society at large. They stress the importance of regulation around the technology’s use to minimize misuse.
However, Clearview AI has circulated false information and exaggerated claims since the very beginning. For example, to market its product, the pitch email sent to law enforcement agencies like the NYPD claimed that Clearview AI was responsible for catching an alleged terrorist. The company also included a similar example in a promotional video on its website; both claims were dismissed as false by the NYPD. Clearview has misrepresented its relationship with the NYPD multiple times. For more such instances, we refer our readers to the article “Clearview AI Says Its Facial Recognition Software Identified A Terrorism Suspect. The Cops Say That's Not True”. In a similar vein, Ton-That recently claimed that companies like AirBnB, Lyft, and Uber had expressed interest in the application; however, all three companies denied having any such conversations with Clearview AI.
Facial recognition technologies have been around for some time now, and so have the debates around their use. However, Clearview AI’s facial recognition application has definitely brought the issue to the forefront. We believe there are two aspects to the issue at hand:
Is the use of facial recognition technologies justified? How does this change for government agencies and for private companies?
What is the definition of ownership of data on the internet? What rights should users who post original images on the internet have over their content and facial likeness?
Facial recognition is definitely a big advancement with a multitude of use cases, such as aiding law enforcement in finding missing persons or catching criminals, replacing current security measures based on touch sensors, and streamlining many consumer processes such as clearing security at airports. However, there is immense potential for misuse given its ethical and privacy implications. It is important to ensure that the harms do not outweigh the benefits of the technology. This is only possible if there is a collective responsibility and agenda among tech companies, government agencies, lawmakers, and researchers.
It is important for tech companies and researchers to keep working on reducing bias in these algorithms.
It is important to have transparent public communication about what the technology is, what is at stake, and what it can be used for.
We need better evaluation systems to determine biases of available facial recognition products.
We need to develop ways of informing the public about what data is being captured and of obtaining consent for its use.
Lastly, governments need to lay regulations and give clarity on appropriate and inappropriate uses of facial recognition.
While countries like Canada, Australia, and the UK have completely banned the use of Clearview AI, outright banning an otherwise useful technology is not the way to go. Delaying the deployment of facial recognition tools until there is more clarity on rules and regulations is a better approach. Having said that, it is important that the appropriate debates and regulations take place now to minimize the potential harms of widespread deployment of facial recognition.
Facial recognition does have legitimate uses. But like any technology, it also has its dangers. In the wrong hands, it could yield disastrous results ranging from breaches of privacy to more dire consequences such as security fraud, theft, and wrongful verdicts in courts. It is of utmost importance to regulate its use and ensure the technology is not exploited for malicious intent. Along these lines, the USPTO’s patent award to Clearview AI should not be misunderstood as a green light for the technology to be used without checks and balances. Such technology could be beneficial, but only if the appropriate regulations are in place.