
Controversial software claims to tell personality from your face


Reefa


Quote

[Image: qTKP4Gw.jpg]

Terrorists and bingo players watch out


Can software identify complex personality traits simply by analysing your face? Faception, a start-up based in Tel Aviv, Israel, courted controversy this week when it claimed its tech does just that. And not just broad categories such as introvert or extrovert: Faception claims it can spot terrorists, paedophiles – and brand promoters.

 

Faception’s algorithm scours images of a person from a variety of sources, including uploaded photos, live-streamed video and mugshots in a database. It then encodes facial features, including the face’s width-to-height ratio and key points such as the corners of the eyes and mouth.
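
For a concrete sense of what such an encoding involves, here is a minimal sketch using the open-source dlib library. The landmark indices and the width-to-height formula are illustrative assumptions based on dlib’s standard 68-point landmark model; this is not Faception’s actual pipeline.

```python
# A minimal sketch of the kind of feature encoding described above, using the
# open-source dlib library rather than Faception's proprietary pipeline. The
# landmark indices and the width-to-height formula are assumptions based on
# dlib's standard 68-point landmark model.
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Pretrained model file, downloadable from dlib.net; the local path is assumed.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def encode_face(image):
    """Encode one face as eye/mouth corner coordinates plus a width-to-height ratio."""
    faces = detector(image)  # expects an 8-bit grayscale or RGB numpy array
    if not faces:
        return None
    shape = predictor(image, faces[0])
    pts = np.array([(p.x, p.y) for p in shape.parts()], dtype=float)

    # Facial width-to-height ratio: cheek width over brow-to-upper-lip height.
    width = np.linalg.norm(pts[2] - pts[14])    # approximate bizygomatic width
    height = np.linalg.norm(pts[27] - pts[51])  # top of nose bridge to upper lip
    fwhr = width / height

    corners = pts[[36, 45, 48, 54]].ravel()     # outer eye corners, mouth corners
    return np.concatenate([corners, [fwhr]])
```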

 

So far, so uncontroversial. “Using automated feature extraction is standard for face recognition and emotion recognition,” says Raia Hadsell, a machine vision engineer at Google DeepMind.

 

The controversial part is what happens next. Faception maps these features onto a set of 15 proprietary “classifiers” that it has developed over the past three years. Its categories include terrorist, paedophile, white-collar criminal, poker player, bingo player and academic (see image below).

 

[Image: HtEIdYM.jpg]

Do you see yourself here?


To come up with these custom archetypes, Itzik Wilf, Faception’s chief technology officer, says the company trained the system on thousands of images of known examples, learning the facial features those examples have in common. The software looks only at facial features, he says, ignoring things like hairstyle and jewellery.
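
Faception’s classifiers themselves are proprietary. Purely as a generic sketch of the training step Wilf describes, here is how encoded feature vectors might be mapped onto named categories with an off-the-shelf multi-class model; the feature vectors, labels and resulting probabilities below are random placeholders, not real data.

```python
# A generic sketch of mapping face-feature vectors onto named categories.
# This uses an off-the-shelf model and random placeholder data; it is not
# Faception's method, only an illustration of the general approach.
import numpy as np
from sklearn.linear_model import LogisticRegression

LABELS = ["terrorist", "paedophile", "white-collar criminal",
          "poker player", "bingo player", "academic"]  # 6 of the 15 named

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 9))               # stand-in for encoded faces
y_train = rng.integers(0, len(LABELS), size=1000)  # stand-in category labels

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

new_face = rng.normal(size=(1, 9))                 # one freshly encoded face
for label, p in sorted(zip(LABELS, clf.predict_proba(new_face)[0]),
                       key=lambda pair: -pair[1]):
    print(f"{label}: {p:.2f}")                     # a score per category
```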

 

Wilf says this has led to notable successes. When presented with pictures of the 11 people behind the 2015 Paris attacks, the algorithm classified 9 of them as terrorists. Similarly, it spotted 25 of the 27 poker players in an image database.

 

The Faception site also lists more prosaic uses for its tech, including marketing, insurance underwriting and recruiting. “HR could use it to identify suitable candidates,” says Wilf.

 

“Faception has been working on its classifiers for more than three years now with the best team in the world to get where we are today,” says co-founder Gilad Bechar, who is now at Moburst, a marketing company in New York, but remains on the Faception board. Overall, the algorithm can class people into Faception’s categories with around 80 per cent confidence, Wilf says.

 

Many machine vision researchers are crying foul, however.

 

Arab descent?

“A classifier that tries to flag every single person of Arab descent could identify 9 out of the 11 Paris attackers at the cost of falsely flagging 370 million out of the 450 million Arabs in the world,” says Emin Gün Sirer at Cornell University in Ithaca, New York. “Such a classifier is completely useless.”
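
Sirer’s objection is plain arithmetic. Plugging in the figures he quotes shows how a classifier can have high recall and still be worthless:

```python
# The arithmetic behind Sirer's objection, using the numbers quoted above:
# flag everyone of Arab descent and you catch most of the attackers while
# drowning in false positives.
attackers_flagged = 9            # true positives among the 11 attackers
attackers_total = 11
innocents_flagged = 370_000_000  # falsely flagged, out of 450 million people

recall = attackers_flagged / attackers_total
precision = attackers_flagged / (attackers_flagged + innocents_flagged)
print(f"recall:    {recall:.0%}")     # ~82%, superficially impressive
print(f"precision: {precision:.7%}")  # ~0.0000024%, completely useless
```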

 

Jay Turcot, director of applied AI at emotion recognition firm Affectiva, also has strong reservations. “I want to ask immediately what it says about a population that is around the same age, gender, facial hair as the Paris attackers,” he says. “How many false positives will their algorithm get? What does the test set look like?”

 

Wilf says that the training set for each classifier runs into thousands of images. But for behaviours as rare as terrorism or paedophilia, even a classifier that is right most of the time will flag far more innocent people than genuine examples.
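
The underlying issue is the base-rate problem. As a sketch, applying Bayes’ rule with deliberately generous, hypothetical accuracy figures (illustrative assumptions only, not measurements of Faception’s system) shows how rare categories swamp any classifier with false positives:

```python
# Why rarity matters: Bayes' rule with generous, hypothetical accuracy
# figures. These are illustrative assumptions, not measurements of
# Faception's system.
base_rate = 1 / 1_000_000   # assume 1 person in a million is a terrorist
sensitivity = 0.80          # P(flagged | terrorist), hypothetical
false_positive_rate = 0.01  # P(flagged | innocent), hypothetical

p_flagged = (sensitivity * base_rate
             + false_positive_rate * (1 - base_rate))
posterior = sensitivity * base_rate / p_flagged
print(f"P(terrorist | flagged) = {posterior:.4%}")
# ~0.008%: even this generous classifier is wrong ~99.99% of the time.
```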

 

Wilf acknowledges the problem. “There are always accuracy issues with machine learning algorithms,” he says. For that reason, he says the algorithm won’t be deployed on its own and will always defer to human judgement.

 

However, what that would mean in practice is unclear, given that the algorithm apparently performs more accurately than humans do.

In the past few years, physiognomy – the notion that a person’s character can be assessed from their appearance – has enjoyed a mild comeback after long being relegated to pseudoscience.

 

For example, differences in testosterone in men, broadly reflected in certain facial features, might lead to differences in moral decision making. But even the more recent results have been quite broad: it is a big step from “utilitarian decision maker” to “terrorist”. And such inferences are not very accurate: at best, this kind of research has demonstrated “slight accuracy”, says David Perrett, who studies facial cues at the University of St Andrews in Fife, UK. Humans can infer personality from facial traits at a rate only slightly better than chance, he says.

 

Shadow profiles

Face recognition technology has been at the centre of many ethics debates in recent years. Facebook was criticised for creating “shadow profiles” of people who did not have accounts of their own but appeared in images uploaded by people who did. Most recently, there was an outcry over Russian app FindFace, which scraped identifying data from the social network VKontakte so that users could identify people they snapped on the street.

 

“We would never license our IP to someone who would use it for those kinds of purposes,” says Wilf. But Bechar says one of its clients is an unnamed security contractor outside of the US.

 

“This is a new idea,” says Wilf. “New ideas are often greeted with friction.”

 

source

 
