  • Intel says it can sort the living human beings from the deepfakes in real time


    Claims to be able to spot imposters in live video feeds within milliseconds

     

    Intel claims it has developed an AI model that can detect in real time whether a video is using deepfake technology by looking for subtle changes in color that would be evident if the subject were a live human being.

     

    The chipmaking giant claims FakeCatcher can return results in milliseconds with a 96 percent accuracy rate.

     

    There has been concern in recent years over so-called deepfake videos, which use AI algorithms to generate faked footage of people. The main concern has centered on their potential use to make politicians or celebrities appear to say or do things they never actually said or did.

     

    “Deepfake videos are everywhere now. You have probably already seen them; videos of celebrities doing or saying things they never actually did,” said Intel Labs staff research scientist Ilke Demir. And it isn't just celebrities: ordinary citizens have been victims too.

     

    According to the chipmaker, some deep learning-based detectors analyze the raw video data to try to find tell-tale signs that would identify it as a fake. FakeCatcher takes the opposite approach, analyzing real videos for visual cues that indicate the subject is genuine.

     

    These cues include subtle changes in color in the pixels of a video caused by blood flow as the heart pumps. These blood flow signals are collected from all over the face, and algorithms translate them into spatiotemporal maps, Intel said, enabling a deep learning model to detect whether a video is real or not. By contrast, some detection tools require video content to be uploaded for analysis, with results taking hours to come back, it claimed.
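    To make the idea concrete, here is a minimal sketch of turning per-region color variation into a spatiotemporal map. This is an illustration of the general remote-photoplethysmography (PPG) technique the article describes, not Intel's actual pipeline, which has not been published; the function name, region format, and use of the green channel are assumptions for the example.

    ```python
    import numpy as np

    def spatiotemporal_ppg_map(frames, regions):
        """Build a (regions x time) map of subtle color variation.

        frames:  (T, H, W, 3) uint8 video clip
        regions: list of (y0, y1, x0, x1) face patches to sample
        """
        num_frames = frames.shape[0]
        m = np.empty((len(regions), num_frames), dtype=np.float64)
        for i, (y0, y1, x0, x1) in enumerate(regions):
            # Mean green-channel intensity per frame: in remote PPG the green
            # channel carries the strongest blood-volume signal.
            m[i] = frames[:, y0:y1, x0:x1, 1].mean(axis=(1, 2))
        # Remove each region's baseline so only the pulse-like variation
        # remains; a classifier would then consume this map.
        m -= m.mean(axis=1, keepdims=True)
        return m
    ```

    A real detector would feed such maps into a trained deep learning model; here the map itself is the point of the sketch.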

     

    However, it isn’t beyond the realm of possibility that anyone motivated to create video fakes could, given enough time and resources, develop algorithms capable of fooling FakeCatcher.

     

    Intel has naturally enough made extensive use of its own technologies in developing FakeCatcher, including the OpenVINO open-source toolkit for optimizing deep learning models and OpenCV for processing real-time images and videos. The developer teams also used the Open Visual Cloud platform to provide an integrated software stack for Intel’s Xeon Scalable processors. The FakeCatcher software can run up to 72 different detection streams simultaneously on 3rd Gen Xeon Scalable processors.
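    The "72 simultaneous detection streams" figure suggests a fan-out architecture in which each video stream gets its own detection worker. The following is a hedged sketch of that pattern using Python's standard thread pool; `score_stream` is a hypothetical placeholder, since Intel has not published FakeCatcher's API.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def score_stream(stream_id):
        # Placeholder for per-stream scoring: a real deployment would decode
        # frames here and run the detection model, returning a confidence.
        return stream_id, 0.96  # hypothetical confidence value

    def run_streams(stream_ids, max_workers=72):
        # Fan each video stream out to its own worker, mirroring the
        # "up to 72 simultaneous streams" figure Intel quotes for
        # 3rd Gen Xeon Scalable processors.
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            return dict(pool.map(score_stream, stream_ids))
    ```

    In practice the per-stream work would be the dominant cost, and the worker count would be tuned to the core count of the host.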

     

    According to Intel, there are several potential use cases for FakeCatcher, including preventing users from uploading harmful deepfake videos to social media, and helping news organizations to avoid broadcasting manipulated content. ®

     

    Source
