Face Recognition CEO Says Use of This Tech by Police Is 'Irresponsible and Dangerous'


steven36

Face recognition will be used to harm citizens if given to governments or police, writes Brian Brackeen, CEO of the face recognition and AI startup Kairos, in an op-ed published by TechCrunch today. Last week, news broke that bodycam maker Axon requested a partnership with Kairos to explore face recognition. Brackeen declined, and writes today that “using commercial facial recognition in law enforcement is irresponsible and dangerous.”

 

https://img20.pixhost.to/images/327/73935750_lsnjpq8pznofu4ug0fr7.png

 

“As the Black chief executive of a software company developing facial recognition services, I have a personal connection to the technology, both culturally and socially,” Brackeen writes. Face recognition is one of the most contentious areas in privacy and surveillance studies because it raises issues of both privacy and race. A study by MIT computer scientist Joy Buolamwini published earlier this year found that face recognition is routinely less accurate on darker-skinned faces than on lighter-skinned ones. The serious problem, Brackeen reasons, is that as law enforcement relies more and more on face recognition, this racial disparity in accuracy will lead to real consequences for people of color.

 

“The more images of people of color it sees, the more likely it is to properly identify them,” he writes. “The problem is, existing software has not been exposed to enough images of people of color to be confidently relied upon to identify them. And misidentification could lead to wrongful conviction, or far worse.”

 

Law enforcement agencies in the U.S. have increasingly relied on face recognition, celebrating the tech as a public safety service. Just last week, Amazon employees rallied against the use of Rekognition, the company’s face recognition technology, by police. Face scans, once optional for U.S. citizens, are now mandatory for all international travelers at the Orlando Airport. And CBP has moved to institute face recognition at the Mexican border. In areas where identifying yourself is tied to physical safety, any inaccuracies or anomalies could lead to secondary searches and more interactions with law enforcement. If non-white faces are already more heavily scrutinized in high-security spaces, face recognition will only add to that.

 

“Any company in this space that willingly hands this software over to a government, be it America or another nation’s, is willfully endangering people’s lives,” concludes Brackeen. “We need movement from the top of every single company in this space to put a stop to these kinds of sales.”

 

More on this at [TechCrunch]

 

Source



Wow, every day it's worse and worse :( The next step will be to use a "Mental Machine" to know if you are thinking bad things :(


Archived

This topic is now archived and is closed to further replies.