Facial recognition technology establishes a person's identity from a single digital image of their face. The technology is used not only to identify criminals and prevent crime, but also in a range of other commercial settings. However, issues of trust, consent and bias limit its use in some regions. Research conducted by Gary Chan, Professor of Law at Singapore Management University, investigates how to build trust in facial recognition technology through technological measures, ethical guidelines, and legislation.
Read the original article here: https://doi.org/10.1093/ijlit/eaab011
Transcript:
Hello and welcome to Research Pod. Thank you for listening and joining us today.
In this episode we are exploring research conducted by Gary Chan, Professor of Law at Singapore Management University. His research focuses on building trust in facial recognition technology by assessing its risks and benefits.
Facial recognition technology establishes a person's identity from a single digital image of their face. The technological process of facial recognition differs significantly from how our human brains identify others. First, a computer analyses the digital image, breaking it down into minute areas called pixels. In the next stage, statistical techniques detect patterns within the image. Finally, the computer can match these patterns against a database to identify the individual. The facial information captured in this process is classified as biometric data, in the same category as our DNA, fingerprints, walking gait and vocal tone.
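For readers who prefer to see the idea in code, here is a minimal, hypothetical sketch of the final matching step only. Real systems derive a numerical "template" from a face image using trained neural networks; the vector values, names and threshold below are invented purely to illustrate the compare-against-a-database idea, and are not drawn from Professor Chan's paper.

```python
# Toy sketch of the match step: compare one face "template" against a
# database of enrolled templates. The numbers are illustrative stand-ins.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two templates (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, database: dict, threshold: float = 0.95):
    """Return the best-matching identity, or None if nothing clears the threshold."""
    best_name, best_score = None, -1.0
    for name, template in database.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Hypothetical enrolled templates and the template from a new probe image.
database = {
    "person_a": np.array([0.9, 0.1, 0.3]),
    "person_b": np.array([0.2, 0.8, 0.5]),
}
probe = np.array([0.88, 0.12, 0.31])
print(identify(probe, database))  # best match: person_a
```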
Law enforcement agencies use this technology to identify criminals and prevent crime. It is also beginning to be employed in a range of other settings, including finding missing persons, monitoring school attendance, screening job candidates and even personalising our consumer experiences. During the COVID-19 pandemic, the Russian Government used facial recognition technology to monitor the whereabouts of individuals who were required to quarantine.
Since its introduction, the technology has been controversial. Certain US states have prohibited its use. In 2020, the European Commission considered implementing a five-year ban on facial recognition technology but instead opted to let individual nations decide. In the UK, legal rulings have called for a pause in the use of automated facial recognition for public surveillance.
Why is facial recognition technology restricted in so many regions, especially when its primary aim is often to reduce crime? One of the main reasons is the infringement of privacy, defined as the right to control your own personal information. This right gives individuals the choice over what they disclose to others and can protect them from adverse events. For example, fraud or identity theft using personal data can lead to financial losses, and privacy can prevent situations such as unfair dismissal arising from an employer obtaining health information without the individual's permission.
Another issue surrounding facial recognition data is lack of consent, that is, the storage and use of personal information without permission. Aggregation of that data with other publicly available information, such as profiles on social media platforms like Facebook, is a further cause for concern, as is the capture of secondary information such as a person's location or even their emotional state. And because this is biometric data, if there were a security breach the individual could not simply change their password or identification number.
Another concern relating to the use of facial recognition technology is the possibility of bias and inaccuracy. Bias can either be built into the technology or arise from the way it is used. Policymakers in the US are concerned about minorities being intimidated or oppressed through use of this technology. Inaccuracies have also proved to be an issue, with the US Department of Homeland Security calling for a temporary ban after false positive rates were found to be between 10 and 100 times higher for Asian and Black faces. There has also been apprehension towards private firms using this kind of data, especially after wide media coverage of a company that amassed around 3 billion images from the internet for financial gain.
Several safeguarding measures have been put in place to mitigate the risks associated with facial recognition technology, including regulatory controls in various countries. Professor Gary Chan from Singapore Management University argues that placing blanket bans on the use of this technology is too extreme. He believes that this kind of action overlooks the potential benefits of facial recognition technology, and that prohibiting its use may impede innovations that could open up new applications and help mitigate some of the risks.
Professor Chan advocates a calibrated approach to building trust, focusing on technological measures, ethical guidelines, and legislation. He suggests that there are three pillars which can help to build trust in facial recognition technology. The first pillar covers the stakeholders, which include members of the public as well as those who use and develop the technology. Public trust often centres on perceptions of the company or organisation employing the technology. Therefore, Professor Chan believes that for the public to accept facial recognition technology, they must first trust the institutions and organisations using it, as well as the technology itself. Perceptions also tend to be based on information in the media, including stories of misuse or security breaches.
The second pillar covers normative standards, which relate to legal and ethical issues. Professor Chan emphasises that to gain trust in this technology, it is important that ethical considerations such as privacy, consent, risk of mistake and discrimination are taken seriously. Companies or organisations using this technology need to be transparent and act responsibly, taking on board any legal considerations or rulings.
The final pillar is implementation. Professor Chan believes in regulations around notification and consent, with data used only for the purpose to which consent was given. This would alleviate concerns around function creep, where data collected for one purpose is then used for another without consent. Opinion polls have shown that the level of public trust in facial recognition is related to the context and purpose of the data capture. In the US, for example, the public were more likely to agree to the use of this technology if it was related to law enforcement rather than to commercial purposes.
As public trust is strongly related to the purpose and context of facial recognition, Professor Chan believes trust can be gained by clearly communicating the impact and scope of its use, as well as by weighing up the risks and benefits. The benefits for individuals could include speed, security and convenience, while the potential risks include loss of privacy, unfair treatment, reduced freedom, monetary loss and even loss of life. Some of these risks could be mitigated by setting a minimum level of accuracy that the technology must meet before it is deployed, as sketched below. Building human review or corroborating evidence into the process could further improve accuracy and instil trust.
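To make that mitigation concrete, here is a minimal sketch, not taken from the research, of how a minimum-accuracy rule and a human-review step might be wired into a deployment. The threshold values and the function name are hypothetical.

```python
# Hypothetical thresholds, chosen only to illustrate the idea of a minimum
# accuracy requirement plus a human-review step.
AUTO_ACCEPT = 0.99   # above this, the match is trusted (and then corroborated)
REVIEW_FLOOR = 0.90  # below this, the match is discarded outright

def route_match(confidence: float) -> str:
    """Decide what happens to a single automated match result."""
    if confidence >= AUTO_ACCEPT:
        return "accept, then corroborate with other evidence"
    if confidence >= REVIEW_FLOOR:
        return "refer to a human reviewer"
    return "reject: below the minimum accuracy required for use"

print(route_match(0.995))  # accept, then corroborate with other evidence
print(route_match(0.93))   # refer to a human reviewer
print(route_match(0.60))   # reject: below the minimum accuracy required for use
```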
Based on evaluating these risks, Professor Chan believes that facial recognition technology should be used if the benefits are high and the risks are low, particularly if its purpose is to help society, such as finding a missing child or assisting blind people with navigation. Conversely, if the risks are high and the benefit is small, he accepts there is no justification for its use. However, in a situation where both the benefits and risks are high, he feels that the user should carefully consider whether the risks can be mitigated or whether the purpose is significant enough to justify taking them.
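As a rough illustration only, that reasoning can be paraphrased as a simple decision table; the categories and the wording of the outcomes below are our own, not code or text from the research.

```python
# A paraphrase of the risk-benefit framework as a simple decision table.
def assess_use(benefit: str, risk: str) -> str:
    """benefit and risk each take the value 'high' or 'low'."""
    if benefit == "high" and risk == "low":
        return "use: e.g. finding a missing child or helping blind people navigate"
    if benefit == "low" and risk == "high":
        return "do not use: no justification"
    if benefit == "high" and risk == "high":
        return "use only if the risks can be mitigated or the purpose justifies them"
    return "low benefit, low risk: little reason to deploy"

print(assess_use("high", "low"))
print(assess_use("high", "high"))
```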
Professor Chan strongly believes that facial recognition technology can yield great benefits for society and emphasises that the development of a clear set of recommendations would help guide both users and the public. He also urges that we weigh up the risks and benefits of facial recognition technology within the specific context of its use, rather than prohibiting it entirely.
That’s all for this episode. You can find links to Professor Chan’s research paper in the show notes for this episode, and links to more of his work at his Singapore Management University staff page. Thanks for listening and be sure to stay subscribed to ResearchPod for more of the latest science.
See you again soon.
Also published on https://researchpod.org/informatics-technology/a-trust-based-approach-to-the-use-of-facial-recognition-technology
Podcast is also available on Spotify, Apple iTunes, Google Podcasts, and many more (please use search term “ResearchPod”).