4 Risks Associated With Facial Recognition Technology

As technology advances, businesses and governments keep finding new ways to weave it into our daily lives. Few of those applications have sparked as much debate in recent years as facial recognition software.

What Is Facial Recognition?

Facial recognition is a way of identifying a person, or verifying that they are who they claim to be, from an image of their face. A facial recognition system can pick people out of photos, videos, or a real-time camera feed.

Facial recognition is one of several forms of biometric security; others include voice recognition, fingerprint recognition, and iris or retina scanning. The technology is used mainly in security and law enforcement, but there is growing interest in applying it elsewhere.


How Does Facial Recognition Work?

FaceID, the technology used to unlock iPhones, is the best-known example (though it is only one application of face recognition). FaceID doesn’t rely on a vast database of photos to figure out who someone is. It simply recognizes one person as the sole owner of the device and denies access to anyone else.
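As a rough illustration of that one-to-one idea, consider the sketch below. Apple’s actual FaceID implementation is proprietary and relies on depth-sensing hardware and secure on-device storage, so everything here — the 128-dimensional embedding, the threshold value, the `unlock` function — is a hypothetical stand-in for the general pattern of verifying a single enrolled face rather than searching a database.

```python
import numpy as np

# Hypothetical enrolled template: an embedding computed from the owner's
# face at setup time. Real systems keep this in secure hardware; here it
# is just an array for illustration.
OWNER_TEMPLATE = np.random.rand(128)

# Distance threshold for "same person". The value here is arbitrary;
# real systems tune it against false-accept and false-reject rates.
THRESHOLD = 0.6

def unlock(probe_embedding: np.ndarray) -> bool:
    """One-to-one verification: compare a new face embedding against a
    single enrolled template instead of searching a photo database."""
    distance = np.linalg.norm(OWNER_TEMPLATE - probe_embedding)
    return distance < THRESHOLD
```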

Beyond unlocking phones, facial recognition works by matching the faces of people who walk past special cameras against images of people on a watch list. The watch lists can include pictures of anyone, even people who have never been accused of a crime, and the images can come from anywhere, including our social media accounts. Facial recognition systems vary, but in general they follow four steps (a short code sketch follows the list):

  • Face Detection
  • Face Analysis
  • Converting the Image to Data
  • Finding a Match
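To make those steps concrete, here is a minimal sketch using the open-source `face_recognition` Python library. This is one hobbyist-grade toolkit among many, not how any commercial or police system actually works, and the file names and watch-list setup below are invented for illustration.

```python
import face_recognition

# Steps 1-3 for a watch-list photo: detect the face, analyze it, and
# convert it to a 128-number encoding (a numeric "faceprint").
watchlist_image = face_recognition.load_image_file("watchlist_photo.jpg")
watchlist_encoding = face_recognition.face_encodings(watchlist_image)[0]

# The same three steps for a frame captured by a camera, which may
# contain any number of faces.
frame = face_recognition.load_image_file("camera_frame.jpg")
frame_encodings = face_recognition.face_encodings(frame)

# Step 4: finding a match. tolerance is a distance threshold; lower
# values are stricter and produce fewer (but more confident) matches.
for encoding in frame_encodings:
    is_match = face_recognition.compare_faces(
        [watchlist_encoding], encoding, tolerance=0.6
    )[0]
    if is_match:
        print("Possible watch-list match in this frame")
```

Note that every “match” here is probabilistic: the threshold trades false matches against missed ones, which is exactly where the risks discussed below come in.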

By analyzing many facial images, the technology can identify faces and put them to use in a wide range of ways. For example, face filters on some social media apps recognize a person’s face and overlay effects on it for fun.

Uses like that are harmless, but the spread of facial technology into fields like security has become a significant ethical issue. Users of the technology are left to guess how the software reaches its conclusions, because it is not transparent about how it makes decisions.

Facial recognition technology is dangerous, and the following ethical issues must be addressed before the software is put to any wider use.

1. Biased Data

The documentary Coded Bias shows that the technology itself is not always fair. In some cases, the software is markedly less accurate on Black faces. This is often because the data sets these algorithms are trained on consist mostly of white men, so the algorithm gets better at recognizing those faces at the expense of everyone else.
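One way auditors can surface this kind of bias is to compute error rates separately for each demographic group, broadly the approach NIST takes in its demographic studies. The sketch below is a simplified, hypothetical version of that idea: the `trials` data is invented, and a real evaluation would use thousands of labeled comparisons.

```python
from collections import defaultdict

# Hypothetical audit records: each trial notes the subject's group,
# whether the algorithm reported a match, and the ground truth.
trials = [
    {"group": "white_male",   "predicted_match": False, "same_person": False},
    {"group": "black_female", "predicted_match": True,  "same_person": False},
    # ... a real audit would include thousands of labeled comparisons
]

false_matches = defaultdict(int)   # different people wrongly matched
impostor_pairs = defaultdict(int)  # comparisons of different people

for t in trials:
    if not t["same_person"]:
        impostor_pairs[t["group"]] += 1
        if t["predicted_match"]:
            false_matches[t["group"]] += 1

# A fair algorithm would show similar false match rates across groups.
for group, total in impostor_pairs.items():
    print(f"{group}: false match rate = {false_matches[group] / total:.2%}")
```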

Black men and women are sometimes mistaken for other people, and those mistakes can lead to arrests.

A CBS News article describes how one such error led to Michael Oliver being wrongly charged with larceny. If this kind of software is deployed in areas whose residents are already over-policed and discriminated against, it can do them significant harm.

It could lead to many people being wrongly arrested, which could significantly impact communities across the country.

2. The Right to Privacy

Everyone has the right to privacy: you can choose to remain anonymous and choose with whom you share details about your life. Facial recognition technology undermines that right by capturing an image of your face whether you want it to or not. An article in The New York Times about one such tool says that “The tool could identify activists at a protest or an attractive stranger on the subway,” and that it could also reveal where they lived, what they did, and whom they knew. It is common for police departments to use technology like this.

A rogue officer could use the technology to stalk someone they dislike or someone they are romantically interested in. Governments could use it to actively watch you, blocklisting you from certain federal positions based on traits they disapprove of. And a data breach could put an image of your face and all of your personal information into the open, leaving you completely exposed. People have a right to privacy, and photographing someone’s face and putting it in a database without their permission is entirely against that right.

3. Transparency

To protect their product, many tech companies hide their code and every part of their software except its output. The facial recognition field is full of new competitors, each hoping its product will be the one that security services around the country buy, so they conceal their code from the general public and from each other.

As a result, neither police departments across the country nor tech experts know how these algorithms reach their decisions, and knowledgeable people never get the chance to vet each other’s work during the design process. Many of these products reached the market without ever being checked by experts in the field, which is how biases like the one above, among other problems, make it through.

Software called Clearview AI is being used by more and more police departments, yet it is unclear how many false matches it produces, because it has not been tested by an outside group such as the National Institute of Standards and Technology, which evaluates facial recognition algorithms. Clare Garvie, a Georgetown University researcher who has studied the government’s use of facial recognition, said: “We don’t have any data to show that this tool is correct.”

4. Micro and Macro Ethics

The individual impacts listed above are micro-ethics issues. People can be mistaken for someone else, leading to false arrest and the risk of false imprisonment. Their faces are photographed without permission, put into a database, and used to track them. This is a complete breach of privacy rights, and people deserve a say in whether an image of them is taken at all.

On top of this, the people harmed by facial recognition cannot see how the software makes its decisions, and the same opacity denies tech experts the ability to go through the data and code behind the software and point out its problems.

These micro-ethics issues link directly to macro-ethics ones. Facial recognition software has a hard time identifying people of color, so people are punished unfairly simply because they are not white men. False arrests and imprisonments based on the software’s output lead to over-policing and heighten tensions between the police and the communities the software misreads.

More stores and police departments now use this kind of software, trading away our freedom to move through the world anonymously in the name of safety. Many countries are building national databases that combine images of their citizens’ faces with other information already gathered about them.


Conclusion

To protect yourself, be aware of how the facial recognition software around you works. Check that any facial recognition feature you use does not harvest your data for the company behind it. Be careful with face filters and other convenient features, and avoid facial recognition software whenever you can. Keep an eye on privacy and technology bills moving through your government, and write to your elected officials if you can. Tell people about privacy and facial recognition software.

The more people who know, the easier it is to make significant changes. Because this kind of software can have so many different effects, both technologists and those who deploy the technology should weigh them; for example, they should refuse to use facial recognition in ways that fuel over-policing or a surveillance state. The code and data sets behind such software should also be peer reviewed to ensure it treats everyone fairly. That would turn facial recognition software from a tool that works against the public into one that serves it.
