
Face-Swapping Porn: How a Creepy Internet Trend Could Threaten Democracy



Using AI to make "deepfakes" – swapping famous faces into porn – started as a Reddit trend, but the technology could have far scarier implications


"Deepfakes" – where AI swaps in different faces to videos – could have wide-ranging consequences for how we perceive truth. Chelsea Johnston

There's a popular porno on XVideos that begins with a woman standing naked in heels with her back to the camera, swishing her long, bleach blonde hair. After a few seconds she turns around and instantly becomes recognizable – it's Game of Thrones' Emilia Clarke. Or rather, it's a blurry, semi-believable version of her. Using artificial-intelligence technology, Clarke's face has been superimposed onto the body of an adult film star. It's a scuzzy new category of porn called "deepfakes," named after a Reddit user who first introduced this kind of X-rated, face-swapping handiwork.

 

Deepfakes of celebs like Emma Watson, Gal Gadot, Taylor Swift, Daisy Ridley – even Michelle Obama – started popping up in places like Pornhub and Twitter earlier this year. Public outcry ensued, and those sites, along with Reddit (where some 90,000 users had joined a deepfakes-dedicated subreddit before it was shut down), banned the practice in February, calling the content "nonconsensual." Months later, loads of doctored flicks are still floating around these platforms and others. They're invasive, mostly legal, and a terrifying snapshot of what machine-learning technology can accomplish in the wrong hands.

Or, it seems, in almost any hands. Easy-to-use desktop apps have enabled the spread of deepfakes, which can pull images from places like Google search (making celebs an easy target) but also Facebook and Instagram, meaning anyone could be at risk of unwittingly starring in their own adult film. "We could all be living in a world, as of next year, in which everybody has a fake sex video of themselves out there," says Mary Anne Franks, a technology law professor at the University of Miami and advisor for the Cyber Civil Rights Initiative. "The tiny upside, then, [is that] the authenticity of every video can be called into dispute. But I don't think any of us should celebrate a world in which we can't tell the difference between what's real and what's fake." And the implications of deepfakes could go much further than porn, potentially threatening national security and democracy itself.

Sound illegal? It isn't.

Put simply, deepfakes are created by feeding hundreds of photos into a machine-learning algorithm that teaches itself to stitch one face on top of another. That's tough to pull off without extensive computer-science training – or at least it was, until recently. "It used to be that you had to be very sophisticated and highly motivated to make something like this – a government agency or big-budget Hollywood production company had the resources to do it. But the new software that's come out in the past year or so has democratized access – now some guy in his mother's basement with a PC can create it," says Hany Farid, Ph.D., chairman of the computer science department at Dartmouth College and an image forensics expert.

FakeApp is one such program making deepfakes easy for amateur programmers: the app's creator (a programmer who has kept his identity concealed to avoid public backlash) told Motherboard, which first reported on the deepfake trend, that anyone with a graphics processing unit and CUDA support (a programming model) can run the app to create these videos using only one or two high-quality videos of the faces they want to swap. The whole process takes around eight to 12 hours. "An interface like this won't give you fantastic-looking fakes, but they're not bad," says Farid. "And, of course, the question isn't what we can do now, but where we're going." He points out that tech development follows a pretty predictable path: the first iterations of computer programs and apps don't work very well, the next couple of attempts are decent, and ultimately, with some trial and error, they become – in the case of deepfakes – freakishly seamless.
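For readers curious what "an algorithm that teaches itself to stitch one face on top of another" actually looks like, below is a minimal sketch of the shared-encoder, two-decoder autoencoder idea behind early face-swap tools. This is an illustrative toy written in PyTorch, not FakeApp's actual code; the layer sizes, 64x64 face crops, learning rate, and function names are all assumptions chosen for clarity.

```python
# Toy sketch of the face-swap autoencoder idea (assumed details, not FakeApp's code).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small feature map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a 64x64 RGB face from the shared feature map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

# One shared encoder learns a face representation common to both people;
# each person gets a private decoder that redraws the face as *their* face.
encoder = Encoder()
decoder_a = Decoder()   # trained only on person A's face crops
decoder_b = Decoder()   # trained only on person B's face crops

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=5e-5,
)
loss_fn = nn.L1Loss()

def train_step(faces_a, faces_b):
    """faces_a / faces_b: batches of 64x64 RGB face crops of each person."""
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()
    return loss.item()

def swap_b_to_a(frame_of_b):
    """The 'swap': encode person B's face, but decode it with person A's decoder,
    yielding A's face with B's pose and expression."""
    with torch.no_grad():
        return decoder_a(encoder(frame_of_b))
```

The key design choice is that the encoder never knows whose face it is looking at, so it is forced to learn pose and expression in a person-agnostic way; swapping decoders at inference time is what produces the face transplant. Training on hundreds of photos of each person, frame by frame, is why the process takes hours even on a capable GPU.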

Sound illegal? It isn't yet. Despite the progress many states have made in recent years passing laws aimed at nonconsensual "revenge porn," there isn't a whole lot deepfake victims can do to protect themselves. "The basis for nonconsensual porn laws is that it's private, true information being disclosed without your consent, and you can regulate that. But if it's created – false information – it's no longer considered a privacy violation," says Franks. In other words, even though your face stitched onto the body of a random porn star doing something explicit is horrific, it's not exactly "true." And that's hard to fight.

For starters, there's the issue of the First Amendment – most deepfake "creations" are protected as freedom of expression, which allows us to construct vulgar satires and parodies with impunity. As long as a deepfake video doesn't claim to be real (and with "deepfake" in the title of many of these videos, we know they're not), it's simply salacious entertainment, which has a long history of protection under the law, says University of Chicago First Amendment scholar Geoffrey R. Stone.

"The question isn't what we can do now, but where we're going."

One of the most famous such cases dates back to 1983, when Hustler magazine published a satirical ad suggesting televangelist Jerry Falwell had drunken sex with his mom in an outhouse. Falwell sued for libel and emotional distress, and the Supreme Court shot him down on First Amendment grounds. So what will happen when these AI-enabled pornos start passing as – and claiming to be – the real thing? "Historically the court's been very cautious about limiting what would otherwise be free speech because of technological change," says Stone. We'll have to wait and see.

There are, however, a few exceptions to that First Amendment protection: victims could draw on defamation law, says Franks, although they'd have to prove the creator intended to harm their reputation by passing the video off as real – that it wasn't just gross entertainment. Some celebrities might be able to fight back with copyright law (if they own the image being used), or they could sue for misappropriation of their likeness, but only if the video was being used for commercial purposes (for example, the porno was being used to sell beer), which is unlikely. "The problem is that any of these actions are civil, meaning you have to hire legal representation and have the time, energy, and resources to see it through. Not many of these people creating deepfakes are scared of getting sued because they're doing it anonymously, and if they're not particularly wealthy it's unlikely they can be sued in a way that's meaningful to them," says Franks.

Until the laws are updated to reflect these kinds of tech advances, Franks says private companies (which will likely face pressure from advertisers, though not from the law, to ban these videos) may be our best bet at solving the problem. "When you get Google to say, 'we're going to deindex these sites,' or Facebook to say, 'we'll take this down if you tell us it's unauthorized,' that can do a lot to keep these from gaining attention and going viral."

"New technologies will create even more compelling fakes, and the cycle will continue."

What's most concerning about deepfake tech is that seedy porn is just a preview of what's to come elsewhere. "Putting aside the fact that people are doing horrible and disrespectful things, this is worrisome on the national security front: what happens when we start seeing videos of world leaders saying controversial things? In the criminal justice system, how will we trust evidence in the court of law?" says Farid. At the moment, most doctored videos can be called out – the lips don't always match the speech, or the face flickers in and out – but that's rapidly changing. (For example, a convincing video released by BuzzFeed this week appears to show President Obama saying derogatory things about several politicians, before it reveals the real speaker.)

While forensic techniques are currently being developed to help us detect sophisticated deepfakes, we're years away from a reliable solution, says Farid. "It takes many years to refine these technologies before they are ready for wide deployment," he says. "In the meantime, of course, new technologies will be developed to create even more compelling fakes, and the cycle will continue."

Without "video proof" – our shining source of truth – the world could get messy, fast. Trolls and bots won't just be sharing incendiary articles on social media, they'll be creating videos to go with them and bolster their claims. "The harm will no longer be that someone is going to be embarrassed, the harm is that we're going to trigger a World War III," says Franks. Separating fact from fiction – within minutes, or even seconds – could be our government's next biggest crisis. Our greatest protection at the moment? Believing less of what we see.

 

Source
