Ethical Issues in Computer Vision and Strategies for Success

Who has ownership of images and videos of your person and property? How would you want them to be used, if at all? How does one give consent for the use of personal data, or even find out where it exists?

As AI applications infiltrate many areas of our lives, this unrestrained spread can feel insidious. Computer vision and facial recognition use personal information, such as physical appearance, location, residence, and behavior, to perform functions that improve our lives in previously unimagined ways. However, this use of our personal information raises ethical concerns regarding privacy, discrimination, and safety. Here we examine some of these issues and discuss strategies to mitigate them, in an ongoing effort to harness the power of computer vision with minimal collateral damage.

The Brain-Like Power of Computer Vision

Computer vision relies on deep learning, most often in the form of Convolutional Neural Networks (CNNs), to analyze and identify images. Deep learning, a branch of machine learning, trains multi-layered neural networks on labeled examples, repeatedly adjusting the network's parameters to improve its performance on new images. A CNN is a network architecture designed for visual data and loosely inspired by the human visual system: it breaks images into component pixels or objects, tags them, and runs "convolutions" (small sliding filters) over them, in a cycle of predictions, accuracy checks, and modifications, until the predictions and outcomes match. In this way, computer vision performs a number of basic tasks, such as image classification, object detection, object tracking, and content-based retrieval. These tasks are the basis of larger applications like facial recognition, self-driving cars, criminal investigations, disease detection, and augmented reality.
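
To make this prediction-and-correction cycle concrete, here is a minimal sketch of a convolutional classifier and one training step in PyTorch. The layer sizes, the ten-class output, and the random stand-in images are illustrative assumptions, not any particular production model.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A minimal convolutional classifier: convolution layers extract
    visual features, then a linear layer maps them to class scores."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3x3 sliding filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 2x
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # for 32x32 inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# One cycle of prediction, accuracy check, and parameter adjustment.
model = TinyCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 32, 32)     # stand-in batch of RGB images
labels = torch.randint(0, 10, (8,))    # stand-in ground-truth classes

optimizer.zero_grad()
loss = loss_fn(model(images), labels)  # how wrong were the predictions?
loss.backward()                        # compute parameter adjustments
optimizer.step()                       # apply them
```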

Ethical Concerns in the Use of Computer Vision

Because computer vision enables artificial intelligence systems to identify faces, objects, locations, and movements, this technology raises a variety of ethical concerns and privacy issues. These include fraud, bias, inaccuracy, and the lack of informed consent. 

For example:

  • Fraud: Hackers and fraudsters have outsmarted facial recognition technology using masks and photos in order to claim benefits or gain entry to a site under another person’s identity. According to the Wall Street Journal, there were over 80,000 attempts to fool government facial recognition systems (to claim others’ unemployment benefits) between June 2020 and January 2021, during the Covid-19 pandemic. 
  • Bias: In law enforcement and other arenas, facial recognition produces a far higher rate of false identifications among Black and Asian faces than white ones, increasing the chances of false arrest. It is also far more likely to misidentify elderly people and young children than middle-aged adults, skewing evidence and compromising investigations. As a result, the use of this technology has been banned in the City of Baltimore, the City of Portland, and many other jurisdictions.
  • Inaccuracy: In healthcare and disease detection, extraneous signals, or data noise, can lead to inaccuracy and faulty diagnoses. One CV system was found to make health predictions based primarily on the type of X-ray machine used. It made a false correlation between portable X-ray machines and a specific disease (likely because patients who use portable X-ray machines are typically in poorer health than those who can be taken to an X-ray room).
  • Legal Consent Violations: In the private sector, facial recognition has been used to collect personal data without consent, resulting in violations of privacy laws such as Illinois’ Biometric Information Privacy Act (BIPA) and the California Consumer Privacy Act (CCPA), prompting a multitude of class action lawsuits. Due to a public outcry against privacy violations, Apple delayed the launch of its controversial CV-driven software designed to detect and report Child Sexual Abuse Material (CSAM) found on personal devices. Users feared that their personal images could be misused for government surveillance or false prosecution.
  • Ethical Consent Violations: Researchers amass large data sets of facial images without consent, and this data, in addition to data scraped from the web, is often used to improve military and commercial surveillance algorithms. Unbeknownst to users, the personal data they post on the web is later used as training data for surveillance applications around the world, including in China. In academic research, the use of facial recognition to study the vulnerable Uyghur population in China triggered widespread backlash in the scientific community and calls for retraction of the studies.

How to Use Computer Vision Ethically

Because computer vision is a new technology and its implications are still poorly understood, there are few government regulations, institutional review procedures, and ethical best practices to guide and constrain its use. Until such guidelines exist, organizations, researchers, and private users must establish their own ethical frameworks for the use of computer vision. Below are some ways to get started.

Broad strategies for using computer vision ethically:

    • Improve the training data. Biases and discrimination originate in the training data fed into a machine learning system. Careful curation and annotation of the data with a view toward diversity and objectivity, together with stronger verification procedures and countermeasures, make a model less prone to bias and discrimination (a simple data-audit sketch follows this list).
    • Choose the appropriate level of technology for the problem. When the capability of the technology exceeds the requirements of the application, unintended consequences follow and ethical risks increase. For example, a camera system designed to track the number of people entering and exiting a theme park would not need to use facial recognition technology.
    • Clearly define the purpose for which the technology will be used. Set boundaries for the use of a specific CV model, document the execution of the model, and work to ensure that its application does not extend beyond its stated purpose and intent.
    • Create and/or review strong internal privacy and data protection programs. Organizations need to ensure compliance with existing (and evolving) local laws as part of a larger effort to protect personal and client data from unintended distribution and use.
    • Prioritize informed consent. To the fullest extent possible, obtain informed consent before collecting facial images and personal data. This is essential, and often mandated by law. In large scientific studies, consent could be obtained from a panel of representatives who can speak for a large population.
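
As a concrete starting point for the data curation described above, a team can audit how its annotated examples are distributed across demographic groups before any training begins. The sketch below is a minimal illustration; the labels.csv layout and its field names (age_group, skin_tone) are hypothetical assumptions about how an annotated face dataset might be organized.

```python
import csv
from collections import Counter

def audit_group_balance(annotation_file: str, group_field: str) -> None:
    """Report how labeled examples are spread across a demographic
    attribute, flagging groups far below an even share."""
    with open(annotation_file, newline="") as f:
        rows = list(csv.DictReader(f))

    counts = Counter(row[group_field] for row in rows)
    total = sum(counts.values())
    floor = 0.5 * total / len(counts)  # flag groups under 50% of an even share

    for group, n in counts.most_common():
        flag = "  <-- underrepresented" if n < floor else ""
        print(f"{group:>15}: {n:6d} ({n / total:6.1%}){flag}")

# Hypothetical annotation file with one row per labeled image, e.g.:
# image_id,age_group,skin_tone,label
audit_group_balance("labels.csv", group_field="skin_tone")
```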

Best Practices for Handling Sensitive Data

In the tech sector, companies that handle vast quantities of sensitive data and images can enhance data security and privacy for their clients using the techniques below (minimal code sketches of each follow the list):

  • Homomorphic encryption: a form of encryption that allows computation on data while it remains encrypted, producing a result that is itself encrypted. Only a client or end user holding the secret (decryption) key can read the result.

  • Secure Federated Learning: a decentralized approach in which independent nodes (devices or servers) each train on their own local data without sharing or exchanging it. Only the resulting model updates are sent back and combined into a single machine learning model; the raw data never leaves its node, so no single entity can access the pooled data. This technique was honed by Google, which uses it in its Gboard predictive keyboard and Now Playing music feature and supports it for developers through the TensorFlow Federated framework.

  • Secure Multiparty Computation: distributes computation over training data among multiple parties without relying on a trusted third-party server. It allows disparate participants to jointly compute a function of their combined inputs while keeping each participant's inputs private.
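
To make the first technique concrete, here is a minimal sketch of the homomorphic property using the open-source python-paillier library (the `phe` package). Paillier supports only addition and multiplication by plain constants, a far simpler scheme than the lattice-based systems used for heavier workloads, but the principle is the same: the server computes on ciphertexts it cannot read.

```python
from phe import paillier  # python-paillier: additively homomorphic encryption

# Client side: generate a keypair and encrypt sensitive values.
public_key, private_key = paillier.generate_paillier_keypair()
encrypted_scores = [public_key.encrypt(x) for x in (0.82, 0.91, 0.77)]

# Server side: compute on ciphertexts without ever seeing the plaintexts.
encrypted_total = sum(encrypted_scores, public_key.encrypt(0.0))
encrypted_mean = encrypted_total * (1 / len(encrypted_scores))

# Client side: only the holder of the private key can read the result.
print(private_key.decrypt(encrypted_mean))  # ~0.8333
```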
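Federated learning's core move, averaging locally computed model updates, can likewise be sketched in a few lines. The toy linear model, node count, and synthetic data below are illustrative assumptions; real deployments use frameworks such as TensorFlow Federated and add secure aggregation on top.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One node fits a toy linear model on data that never leaves it."""
    w = weights.copy()
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient step
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three nodes, each holding private data the server never sees.
nodes = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    nodes.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

global_w = np.zeros(2)
for _ in range(10):
    # Each node sends back only its updated weights, never its raw data.
    updates = [local_update(global_w, X, y) for X, y in nodes]
    global_w = np.mean(updates, axis=0)  # the server averages the updates

print(global_w)  # converges toward [2.0, -1.0]
```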
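Finally, the additive secret sharing that underlies many secure multiparty computation protocols is easy to illustrate: each party holds a random-looking share that reveals nothing on its own, yet the parties can jointly compute a sum. This is a teaching sketch under simplified assumptions, not a hardened protocol.

```python
import secrets

P = 2**61 - 1  # large prime modulus; all arithmetic is done mod P

def share(secret: int, n_parties: int) -> list[int]:
    """Split a secret into n random shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

# Two data owners (e.g., hospitals) split their private counts across
# three parties; no single party learns either original value.
shares_a = share(1234, 3)
shares_b = share(5678, 3)

# Each party adds its two shares locally...
sum_shares = [(a + b) % P for a, b in zip(shares_a, shares_b)]

# ...and only the combined total is ever reconstructed.
print(reconstruct(sum_shares))  # 6912
```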

Conclusion

Computer vision and facial recognition technology are changing the way we see the world. Information, misinformation, and surveillance are expanding at blinding rates, with positive and negative consequences. Until governing bodies are able to effectively regulate these growing technologies, organizations and individuals must take the lead in using computer vision and facial recognition ethically and responsibly.

Further reading:
For best practices specific to facial recognition privacy, see the U.S. Federal Trade Commission's published recommendations.
