Deepfakes can fool biometric checks used by banks, research finds

    aum


Fraudsters can easily use artificial intelligence to open fake accounts online.

     

    A team of researchers has found that biometric tests used by banks and cryptocurrency exchanges to verify users’ identities can be fooled by deepfake technology.


In a report published on Wednesday, researchers with Sensity, a security firm focused on deepfake detection, demonstrated how they were able to bypass an automated “liveness test” using AI-generated faces.


Commonly known as “know your customer” or KYC tests, such verification processes often ask users to provide photographs of their identification as well as their face. A “liveness test” then captures the user’s face in real time and uses facial recognition to match it against the selfie and the identification photo.
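The matching step described above can be sketched as an embedding comparison: each capture (ID photo, selfie, live frame) is reduced to a feature vector, and the user passes only if every pair of vectors is similar enough. This is a minimal, hypothetical illustration, with stand-in vectors and an assumed threshold in place of a real face-recognition model:

```python
# Hypothetical sketch of the face-matching step behind a KYC liveness check.
# A real system would derive these embeddings from a face-recognition model;
# here they are stand-in vectors, and the 0.9 threshold is an assumption.
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def kyc_match(id_embedding, selfie_embedding, live_embedding, threshold=0.9):
    """Accept only if every pair of captures appears to show the same face."""
    pairs = [
        (id_embedding, selfie_embedding),
        (id_embedding, live_embedding),
        (selfie_embedding, live_embedding),
    ]
    return all(cosine_similarity(a, b) >= threshold for a, b in pairs)
```

The weakness Sensity exploits sits in the third capture: if a deepfake can produce a live frame whose embedding matches the stolen ID and selfie, the check passes even though no real account holder is present.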


KYC verification is used in a wide array of industries, including banking, fintech, insurance, crypto, and gambling. Sensity tweeted footage of its demonstration a week before releasing the report, which details how 9 of the top 10 KYC vendors were highly vulnerable to deepfake attacks.

     

    On Wed, May 18, we will publish our latest research about how fraudsters are leveraging #deepfakes to spoof identity verification on digital banks and crypto platforms pic.twitter.com/xIKCdkpOd5


    — Sensity (@sensityai) May 13, 2022

     

    “Despite its widespread adoption, active liveness checks are weak against attacks by Deepfakes,” the report states. “The reason is that real-time Deepfakes can reproduce faithfully facial landmark movements of the attackers.”
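The failure mode the report describes can be illustrated with a toy model: an active liveness check verifies that the requested head movements occur on camera, but a real-time deepfake driven by the attacker's own face reproduces exactly those landmark movements on the swapped face. A simplified sketch, not any vendor's actual implementation:

```python
# Toy model (hypothetical, not a vendor's real check) of why active liveness
# challenges fail against real-time deepfakes: the check only verifies that
# the requested motions occur, and a landmark-driven deepfake reproduces them.

def active_liveness_passes(observed_motions, requested_motions):
    """Pass if the observed face performed the requested motions in order."""
    return observed_motions == requested_motions

# The attacker performs the challenge in front of the camera...
attacker_motions = ["turn_left", "turn_right", "blink"]

# ...and the real-time deepfake transfers each facial-landmark movement onto
# the synthetic face, so the swapped video shows the same motion sequence.
deepfake_motions = list(attacker_motions)

requested = ["turn_left", "turn_right", "blink"]
```

Because the check never asks whether the face itself is genuine, only whether it moved as instructed, faithful landmark reproduction is enough to defeat it.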


Even with such a glaring vulnerability, KYC vendors do not appear concerned about the potential for misuse. In a statement to The Verge, which first covered the report on Wednesday, Francesco Cavalli, Sensity’s chief operating officer, said the vulnerable companies showed little interest in the findings.


    “We told them ‘look you’re vulnerable to this kind of attack,’ and they said ‘we do not care,’” he said. “We decided to publish it because we think, at a corporate level and in general, the public should be aware of these threats.”


    With massive crypto heists becoming common, it seems likely such vulnerabilities will be exploited more and more by cybercriminals as deepfake technology becomes more realistic and easier to use.

     


     

