Clearview AI and evolving facial-recognition regulatory regimes

By Nicholas Clark

Clearview AI, which collates facial-recognition data, faces a substantial fine from the UK Information Commissioner’s Office for breaching data protection law.

The Information Commissioner’s Office (ICO), a UK public body reporting to Parliament, has issued a provisional fine to American software company Clearview AI, following a joint investigation with its Australian counterparts. [1] The fine is provisionally set at £17 million. [2]

Clearview AI provides a database of facial recognition data, indexing over three billion images found on the Internet, including on social media platforms. The two agencies found that Clearview AI’s database was likely to have violated UK law, including provisions of the UK General Data Protection Regulation (UK GDPR), which the UK has retained in domestic law despite Brexit.

The ICO, which seeks to “uphold information rights in the public interest” and preserve rights to data privacy, has strongly rebuked the company, which has collected the facial data of a “substantial number” of UK citizens.

Facial recognition technologies have been criticised for their potential to breach rights to privacy and their potential for abuse by both public and private bodies. In 2020, the Court of Appeal ruled that the use of automated facial recognition technology by police in South Wales was unlawful, following an expression of concern from the ICO and a private lawsuit. The use of facial recognition tools by law enforcement in other jurisdictions has provoked public outcry—indeed, Clearview AI claims that more than 2,400 agencies in the United States use its software, while the Brookings Institution has reported that more than 80 countries have adopted Chinese surveillance platforms domestically.

In Singapore, facial recognition technologies are commonplace and have yet to attract significant public or legal challenges—government services such as SingPass have made their use an everyday occurrence. Private sector governance is partially provided by the Model AI Governance Framework (Second Edition), which offers guidance on the deployment of AI by private companies. [3] However, Singapore has yet to adopt a GDPR-style regulatory regime that would significantly limit the deployment of facial recognition technologies. As governance frameworks for artificial intelligence and facial recognition evolve, a clear awareness of the risks of privacy breaches and organisational overreach remains crucial.
