The dark side of Artificial Intelligence and Facial Recognition

Published January 23, 2019

At its simplest, your face is what distinguishes you from John or Jane. But as facial recognition technology makes strides with each passing day, people’s faces are taking on more complex functions. Today, they are used to access bank accounts and unlock smartphones. And that’s not all; law enforcement officers also run faces through facial scanning software to identify persons of interest in a crowd, quickly and with impressive accuracy. Various tech titans have already launched facial identification products, such as MasterCard’s pay-by-selfie, Facebook’s facial identification feature (which makes tagging easier), Amazon’s Rekognition, and Apple’s Face ID on the iPhone X. Taking these into account, it is safe to conclude that facial identification tech is big.

Now, while facial identification tech does come with many upsides, such as automated identification of people and improved security, this ever-evolving tech also opens up a whole new world of problems. In fact, facial identification has taken on an Orwellian “Big Brother” persona as more and more people view it as a threat to an open and free society.

Let’s explore the dark side:

Unwarranted mass surveillance

China and the US, two of the most technologically developed countries, have both had their share of mass citizen surveillance.

China and digital dictatorship

Perhaps the most worrying concern arising from facial identification tech is unsanctioned surveillance. Each day brings new claims of unapproved snooping on our lives by governments and other agencies.

Presently, China is at the forefront of implementing Big Brother surveillance on its 1.4 billion-strong citizenry. With the backing of the Chinese political system, the government is working to combine over 170 million security cameras with artificial intelligence and facial identification tech to create nationwide surveillance. Without citizens’ knowledge, their data is aggregated by predictive software, and individuals can be flagged when they are deemed threats.

Through this surveillance network, the government plans to introduce the Chinese social credit system, expected to be fully deployed by 2020. The setup is much like the one imagined in the “Nosedive” episode of “Black Mirror”: each person is allocated a score, often starting at 800 or 900, and those with higher social credit enjoy better service, cheaper loans, and a host of other benefits and incentives provided by the Chinese government.

The USA Stalking System

The US, like China, has experienced significant cases of unsanctioned government surveillance of citizens. Recent Congressional hearings brought to light disturbing evidence regarding the FBI’s use of its facial recognition system.

Aside from illegally gathering over 30 million mugshots, the agency has made underhand deals giving it access to other sources of identification, including driver’s licenses. Worse still, the FBI carried out this surveillance without appropriate oversight.

There have also been allegations of Google Home and Alexa spying in the US. These AI assistants record information all the time, infringing on users’ privacy. Worse still, if malicious actors access this information, they can find out a person’s location and plan robberies on the target’s property.

The negative impact of face recognition on the hiring process

Technology gave us the digital interview. Nonetheless, with the advent of facial recognition in recruiting, the process becomes quite subjective and biased. It is now possible for recruiters to use facial recognition algorithms to analyze your face against huge datasets sourced from social media and other networking sites.

Hiring managers can look you up online and use what they learn about you to make their decisions. In the past, all you had to do was send in your application and patiently await a callback. But thanks to this tech, several auxiliary factors now come into play before you are considered for an opening, making the process flawed to some extent.
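Under the hood, matching a face against such datasets typically boils down to a nearest-neighbor search over face “embeddings” (numeric vectors produced by a trained neural network). The sketch below is a rough illustration of that matching step only; the names, toy vectors, and similarity threshold are hypothetical, not any vendor’s actual pipeline.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def best_match(probe, gallery, threshold=0.8):
    """Return the gallery identity most similar to the probe embedding,
    or None if no identity clears the (hypothetical) threshold."""
    name, score = max(
        ((n, cosine_similarity(probe, emb)) for n, emb in gallery.items()),
        key=lambda t: t[1],
    )
    return name if score >= threshold else None

# Toy 4-dimensional "embeddings"; real systems use vectors with
# 128+ dimensions computed by a neural network from a face image.
gallery = {
    "alice": [0.9, 0.1, 0.3, 0.2],
    "bob":   [0.1, 0.8, 0.4, 0.3],
}
probe = [0.88, 0.12, 0.31, 0.19]  # numerically close to alice's vector
print(best_match(probe, gallery))  # prints "alice"
```

The point of the sketch is that the match is purely statistical: a probe face is identified as whichever stored identity it most resembles numerically, which is exactly why look-alikes and low-quality images can produce false identifications.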

Facial recognition software and security concerns

As our anonymity continues to dwindle thanks to facial identification software, we are increasingly exposed to security risks. David Murray, a chief technology officer, reiterates this thought, contending that this tech is likely to make crime worse. Indeed, facial identification has in various ways facilitated:

1. Fraud

While it might be quite challenging to pull off, fraudsters can replicate your face and gain access to your bank account, paid membership sites, and so forth. In turn, they can steal your money or make purchases you never authorized.

Arnie Gordon of Arlyn Scales advises staying alert to avoid falling for “I know you” type scams.

2. Break-Ins

By making it possible for people to track each other in public, this tech increases opportunities for crime. For instance, break-ins can be timed for when the owner of a house is known to be away.

3. False Identification

Relying solely on facial features can lead to errors when identifying people. In turn, this can lead to wrongful arrests, misdirected deliveries, and so forth.

4. Stalking

Case in point: last year in Russia, an app called FindFace was launched. When you take a picture of someone, the application uses facial identification technology to identify the person in the digital image. If a stalker were to get their hands on FindFace, they could learn where you are, which streets you frequent, who your friends are, and much more.

Final word on facial analysis tech

These are a select few scenarios detailing how facial identification technology and AI can negatively affect human lives. Realizing the most gain from these systems is a matter of implementing them carefully. We ought to appreciate that while these systems are automated, human intervention remains paramount. Naresh Soni, CSO at Tsunami XR, says that combining facial identification with human assistance remains the better practice. Even so, the human handlers need to be trained adequately to eliminate any bias.