Facial recognition and human rights

The Australian Human Rights Commission (AHRC) has recently released its final “Human Rights and Technology” report, an initiative that began in 2018. The 240-page report addresses issues that arise from Artificial Intelligence (AI) as an emerging technology. Among all the technologies grouped under the AI umbrella, facial recognition appears to be of particular concern; the report dedicates Section 9 to the issue: “Biometric surveillance, facial recognition and privacy”.

The AHRC goes as far as to propose a temporary ban on ‘high-risk’ facial recognition. As an Australian developer of facial recognition technology, we thought you might be interested in what we think of this recommendation.

The short answer is: We agree.

Wait… What?

To set the stage, let’s see what the AHRC recommends verbatim.

Recommendation 20: Until the legislation recommended in Recommendation 19 comes into effect, Australia’s federal, state and territory governments should introduce a moratorium on the use of facial recognition and other biometric technology in decision making that has a legal, or similarly significant, effect for individuals, or where there is a high risk to human rights, such as policing and law enforcement.

Page 117 of the report adds clarification: “This moratorium would not apply to all uses of facial and biometric technology. It would apply only to uses of such technology to make decisions that affect legal or similarly significant rights, unless and until legislation is introduced with effective human rights safeguards.”

Pumping the brakes on technology that affects our human rights is something we can all get behind. Nirovision supports the responsible use of AI. Let’s explore some of the “high-risk” issues at hand and the safeguards we’ve added to mitigate them.

Mass surveillance and the use of facial recognition in policing and law enforcement

When you think of the horrors of AI and facial recognition, your mind is immediately drawn to reports of a particular authoritarian regime that has employed mass CCTV surveillance, including facial recognition, to monitor its citizens. The least harmful (but still scary) “use-case” involves automatically issuing infringement fines to jaywalkers, while extremely horrifying examples include racially profiling minority groups and restricting their freedoms.

Australia, a democratic nation, needs to decide if such a dystopian future is what we want. The AHRC’s recommendation of a moratorium until proper human rights legislation is in place is sound and something we can stand behind.

Until such laws are in place, what can private facial recognition companies like Nirovision do?

Build purpose-driven tools for low-risk use-cases.

In other words, do not build mass surveillance tools. Nirovision is a workplace visitor management solution. We help industrial sites such as meat and food processing plants and logistics and warehousing facilities ensure that only compliant personnel are on-site at any time. Facial recognition only speeds up the entry and compliance process. Sites without Nirovision would need to collect the same data from visitors, albeit through a much slower and more error-prone process. If facial recognition fails, the consequence is trivial – revert to the old, manual verification process.

Unfortunately, purpose-built tools can still be misused. Companies like ours must carefully consider which features we build and also build in the necessary guardrails. For example, we commonly get feature requests to allow pre-recorded video to be imported into Nirovision for analysis, or to perform historical searches for a newly uploaded person of interest. These might be standard features of a “facial recognition surveillance platform”, but they are overpowered in the context of a visitor management platform.

Committing to building domain-specific software reduces the likelihood of misuse – a customer looking for mass surveillance tools would naturally buy a mass surveillance tool. To do better, companies involved in facial recognition can also opt to disclose if they supply law enforcement, policing and the like. (Nirovision currently does not.)

AI Profiling

Automated AI profiling is another commonly requested feature that Nirovision has actively pushed back on. What is it? It is the task of predicting attributes such as age, gender, ethnicity, mood (sentiment), etc., solely from an image of a face. If that sounds like a can of worms, it is. Now, these requests are never for nefarious reasons: “I want to detect underage drinkers”, or “I want to know if my customers are happy”. It’s not their fault; end-users have been continuously oversold on the reliability of AI profiling systems without understanding the issues. AI profiling is often “packaged in” with the sale of facial recognition systems as if these traits (age, gender, etc.) are automatically generated as part of the recognition process.

Profiling is not built into facial recognition. It is a separate classification-style problem that involves training an AI model to classify (or label) an image. Remember the ridiculous Not Hotdog app from the comedy series Silicon Valley? Profilers are similar. Instead of applying either a “hotdog” or “not hotdog” label, they are multi-label classifiers that attempt to apply one or many predefined labels (see the sketch after this list), such as:

  • Age range: under_21, 22_35, 36_50, 50_65, 65_over
  • Emotion/mood/sentiment: happy, sad, angry, annoyed, neutral
  • Gender
  • Race/ethnicity/skin colour
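
To make the distinction concrete, here is a minimal, hypothetical sketch of what a profiler actually is. It is written in PyTorch, and the layer sizes, class name and label sets are illustrative assumptions (this is not Nirovision code): a separate image classifier with one prediction head per attribute, trained on its own labelled data and entirely independent of the face-matching model used for recognition.

```python
# Hypothetical sketch of an AI "profiler": a classifier with one head per
# predefined label set, separate from any face-recognition (matching) model.
import torch
import torch.nn as nn

LABEL_SETS = {
    "age_range": ["under_21", "22_35", "36_50", "50_65", "65_over"],
    "sentiment": ["happy", "sad", "angry", "annoyed", "neutral"],
}

class FaceProfiler(nn.Module):
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        # Stand-in backbone; a real system would use a large pretrained CNN or ViT.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, embed_dim), nn.ReLU(),
        )
        # One classification head per attribute the vendor promises to predict.
        self.heads = nn.ModuleDict({
            name: nn.Linear(embed_dim, len(labels))
            for name, labels in LABEL_SETS.items()
        })

    def forward(self, face_image: torch.Tensor) -> dict:
        features = self.backbone(face_image)
        # Each head makes its own guess; none of this comes "for free" from the
        # recognition model. Every head must be trained on separately collected,
        # labelled data.
        return {name: head(features).softmax(dim=-1) for name, head in self.heads.items()}

# Usage: one 112x112 RGB face crop in, a probability per predefined label out.
profiler = FaceProfiler()
scores = profiler(torch.randn(1, 3, 112, 112))
for name, probs in scores.items():
    best = probs.argmax(dim=-1).item()
    print(name, LABEL_SETS[name][best], f"({probs[0, best].item():.0%})")
```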

Knowing now what these systems are meant to do, two problems arise. Firstly, assuming an AI profiler could be built to achieve perfect accuracy, would we want governments or companies collecting this information without restraint? There are only a handful of reasons why an organisation should need this type of data (like a census, for example), and it should be “given” and not “taken”.

Secondly, the assumption of a perfect or even somewhat accurate AI profiler being available is incorrect. Given today’s state-of-the-art algorithms and techniques, it is virtually impossible to train an accurate face profiler. AI classifiers are trained using large datasets, and their accuracy depends on the data that goes in. Collecting verified datasets (also known as ‘ground truth’) is extremely difficult and tedious, so companies take shortcuts. Age classifiers, for example, are often trained using data scraped from dating websites (a toy sketch of why that backfires follows below). But can we be confident that:

  • no one ever lies about their age;
  • profile photos are not from our younger heydays;
  • makeup doesn’t cover up wrinkles;
  • plastic surgery doesn’t work;
  • J-Lo is not still in her thirties?!

The same thing applies to other traits. I, for one, have a resting grumpy face (aka RBF), but I promise I’m happy on the inside!
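
As a rough illustration of the data problem, here is a toy simulation; the 25% label-error rate is an assumption for the sake of the example, not a measurement of any real dataset or product. The point it makes is simple: if a quarter of the scraped “ground truth” is wrong, even a flawless oracle looks only ~75% accurate when scored against those labels, and a model that faithfully reproduces them is wrong about real people roughly a quarter of the time.

```python
# Toy simulation (illustration only; the 25% error rate is an assumption):
# what noisy, scraped "ground truth" does to an age-bucket profiler.
import numpy as np

rng = np.random.default_rng(0)
buckets = ["under_21", "22_35", "36_50", "50_65", "65_over"]

n = 100_000
true_bucket = rng.integers(0, len(buckets), size=n)   # the real (unknown) age bucket

noise_rate = 0.25                                     # assumed share of wrong scraped labels
flip = rng.random(n) < noise_rate
scraped = true_bucket.copy()
# Wrong labels land in some *other* bucket (old photos, misstated ages, etc.).
scraped[flip] = (true_bucket[flip] + rng.integers(1, len(buckets), size=flip.sum())) % len(buckets)

# A flawless oracle that always knows the real bucket still "fails" whenever
# the scraped label is wrong, so its measured accuracy is capped at ~75%.
oracle_vs_scraped = (true_bucket == scraped).mean()

# Conversely, a model that perfectly reproduces the scraped labels is wrong
# about the real person roughly noise_rate of the time.
model_vs_reality = (scraped == true_bucket).mean()

print(f"Oracle scored against scraped labels: {oracle_vs_scraped:.1%}")
print(f"Label-matching model vs reality:      {model_vs_reality:.1%}")
```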

For these reasons, despite it being a sought-after feature, Nirovision chooses not to develop such classifiers.

Conclusion

When used responsibly, facial recognition can make our lives safer and simpler. Millions of people already use it daily to unlock their devices, approve digital payments and more. We see this technology being rapidly adopted by private facilities where health and safety are paramount. With the proper guardrails in place, the chance of facial recognition misuse is slim.

The use of facial recognition and other mass surveillance technologies in public places must be regulated by legislation as per Recommendation 19 of the report. Until such legislation is in place, I’m sure that we can all support a moratorium on the use of facial recognition in high-risk situations, such as in policing and law enforcement. At the very least, government agencies should immediately disclose what facial recognition software they use and how it’s being used.
