AI facial recognition — a.k.a. face search — is a powerful and promising technology already being applied in healthcare, law enforcement, and air travel.
However, in the wrong hands, AI face search can serve as a tool for exploitation.
Looking to the future of AI face search, there are some obvious ethical gray areas.
So, let’s take a look at 6 of the most important ethical considerations surrounding AI facial recognition.
6 Ethical Considerations Surrounding AI Face Search
AI facial recognition is revolutionizing searches for missing persons and identity theft prevention. Yet, this technology can potentially cause privacy breaches, bias, and criminal misuse.
#1. User privacy
The first and foremost ethical consideration surrounding the topic of AI face search is an inherent lack of user privacy. Whether you consent to data collection or not, the mere existence of this technology threatens your privacy.
AI face search collects facial data en masse via:
- Social media and public databases;
- Continuous surveillance;
- Third-party data brokers;
- Behavioral tracking.
When your face can be scanned in any public space, the individual's right to remain anonymous in public is under threat.
#2. Lack of consent
So they collect all of our Facebook data… surely they ask permission? In reality, clear consent for the use of personal facial data is severely lacking, even as AI facial tracking software is adopted worldwide.
How do they get away with it? Well, many facial recognition systems operate in public spaces, where individuals simply don’t have the choice to opt out.
Additionally, companies often bury consent clauses in the fine print of their Terms of Service. As a result, you may be giving consent to have your facial data stored and shared without ever realizing it.
#3. AI bias
Even though we’ve hastily embraced this new technology, it’s far from perfect, and in some cases it can cause real harm. Here’s where the tricky subject of AI bias takes center stage.
A 2019 study by the U.S. National Institute of Standards and Technology (NIST) found that Asian and African American individuals were up to 100 times more likely to be misidentified by facial recognition algorithms than white Americans.
Due to limited training datasets and the blindspots of their creators, AI models can be incredibly biased. This raises concerns about the potential for wrongful arrests and racial bias.
#4. Regulatory challenges
Advancements in AI facial recognition have surged ahead, and legislation hasn’t kept up. Some countries have rushed out patchwork regulations, while others lack any national rules on the technology whatsoever.
The lack of clear regulation surrounding AI face search raises various ethical issues:
- Corporate loopholes: Companies claim they’re only responsible for how their AI is made, not used.
- No global regulation: International companies can move their operations to countries with no regulation on facial recognition.
- Self-regulation: Without clear legislation, corporations are claiming to self-regulate. This way, privacy policies can be scrapped at a moment’s notice.
#5. AI’s role in mass surveillance
It may sound dystopian to you, but for plenty of individuals around the world, mass surveillance is a way of life. AI facial recognition enables governments to conduct widespread surveillance on citizens.
This includes the tracking of:
- Public behavior;
- Individuals’ movements;
- Antiestablishment speech;
- Social connections.
China’s social credit system relies on public surveillance to track citizens’ movements and behaviors. Those who become “discredited” may face throttled internet connections, flight bans, and even public shaming.
#6. Potential for misuse
Beyond the implications of surveillance and regulation, facial recognition technology has a high potential to be used for unethical or even directly harmful purposes.
- Corporate responsibility: AI companies often sell their facial recognition software to law enforcement and security agencies. How these customers use or misuse the software is rarely monitored.
- Targeted cybercrime: Facial recognition can serve a far uglier purpose in cybercrime. AI-generated deepfakes can be used to bypass biometric security systems or even commit identity fraud.
- Confidentiality breaches: The storage of facial data always carries the risk of a security breach. Hospitals, workplaces, and airports regularly suffer data breaches, compromising the safety of countless individuals.
- Social control: The combination of facial recognition and mass surveillance raises the ethical dilemma of social control. In the future, free expression in public could come under threat from widespread surveillance.
Confronting the (possibly) unethical future of AI face search…
AI facial recognition has evolved at a staggering pace, promising a technologically sophisticated future. Yet, the potential for its misuse raises some alarming ethical questions.
Will corporations need to take responsibility for how their AIs are used? And will governments adopt more comprehensive AI legislation?
To avoid exploitation, AI face search requires stronger regulation, corporate accountability, and constant ethical oversight.