
Google releases AI-powered Content Safety API to identify more child abuse images


nir


Google has today announced new artificial intelligence (AI) technology designed to help identify online child sexual abuse material (CSAM) and reduce human reviewers’ exposure to the content.

 

The move comes as the internet giant faces growing heat over its role in helping offenders spread CSAM across the web. Last week, U.K. Foreign Secretary Jeremy Hunt took to Twitter to criticize Google over its plans to re-enter China with a censored search engine when it reportedly won’t help remove child abuse content.

 

Earlier today, U.K. Home Secretary Sajid Javid launched a new “call to action” as part of a government push to get technology companies such as Google and Facebook to do more to combat online child sexual abuse. The initiative comes after fresh figures from the National Crime Agency (NCA) found that as many as 80,000 people in the U.K. could pose a threat to children online.

 

The timing of Google’s announcement today is, of course, no coincidence.

 

Neural networks

 

Google’s new tool is built on deep neural networks (DNNs) and will be made available for free to non-governmental organizations (NGOs) and other “industry partners,” including other technology companies, via a new Content Safety API.

 

News emerged last year that London’s Metropolitan Police was working on an AI solution that would teach machines how to grade the severity of disturbing images. This is designed to solve two problems: it will help expedite the rate at which CSAM is identified on the internet, and it will also alleviate the psychological trauma suffered by officers manually trawling through the images.

 

Google’s new tool should help this broader push. Historically, automated tools have relied on matching images against previously identified CSAM. But with the Content Safety API, Google said it can effectively “keep up with offenders” by targeting new content that has not previously been confirmed as CSAM, according to a blog post co-authored by engineering lead Nikola Todorovic and product manager Abhi Chaudhuri.

 

“Quick identification of new images means that children who are being sexually abused today are much more likely to be identified and protected from further abuse,” they said. “We’re making this available for free to NGOs and industry partners via our Content Safety API, a toolkit to increase the capacity to review content in a way that requires fewer people to be exposed to it.”

 

Most of the major technology companies now leverage AI to detect all manner of offensive material, from nudity to abusive comments. But extending its image recognition technology to include new photos should go some way toward helping thwart — at scale — one of the most abhorrent forms of abuse imaginable. “This initiative will allow greatly improved speed in review processes of potential CSAM,” Todorovic and Chaudhuri continued. “We’ve seen firsthand that this system can help a reviewer find and take action on 700 percent more CSAM content over the same time period.”
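To make the distinction concrete, the sketch below shows one way a triage pipeline along these lines could be organized: previously confirmed images are caught by hash matching, while new, unseen images are scored by a classifier and queued for human review in priority order. The function names, scoring stub, and data structures are assumptions made for illustration only; this is not Google's actual Content Safety API.

```python
import hashlib
from dataclasses import dataclass

# Illustrative sketch only -- names and workflow are assumptions,
# not the real Content Safety API.

@dataclass
class ReviewItem:
    image_id: str
    known_match: bool   # True if the image matches previously confirmed material
    priority: float     # score used to order human review

def fingerprint(image_bytes: bytes) -> str:
    # Exact-match digest for simplicity; production systems typically use
    # perceptual hashes that survive resizing and re-encoding.
    return hashlib.sha256(image_bytes).hexdigest()

def classifier_score(image_bytes: bytes) -> float:
    # Stand-in for a deep-neural-network classifier returning a 0..1 score.
    # A real deployment would run model inference here.
    return 0.0

def triage(images: dict[str, bytes], known_hashes: set[str]) -> list[ReviewItem]:
    """Flag known material via hashes, then rank new content by classifier score."""
    queue = []
    for image_id, data in images.items():
        if fingerprint(data) in known_hashes:
            # Previously confirmed content: surface immediately.
            queue.append(ReviewItem(image_id, known_match=True, priority=1.0))
        else:
            # New, unconfirmed content: let the classifier decide review order.
            queue.append(ReviewItem(image_id, known_match=False,
                                    priority=classifier_score(data)))
    # Highest-priority items reach human reviewers first, so fewer people
    # need to be exposed to low-likelihood content.
    return sorted(queue, key=lambda item: item.priority, reverse=True)
```

The point of the ordering step is the one the blog post makes: reviewers spend their time on the images most likely to be new abuse material, rather than trawling everything, which is how a classifier-driven queue can increase the volume actioned without increasing human exposure.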

 

Among Google’s partner organizations at launch is the U.K.-based charity the Internet Watch Foundation (IWF), whose mission is to “minimize the availability of ‘potentially criminal’ internet content, specifically images of child sexual abuse.”

 

“We, and in particular our expert analysts, are excited about the development of an artificial intelligence tool which could help our human experts review material to an even greater scale and keep up with offenders by targeting imagery that hasn’t previously been marked as illegal material,” added IWF CEO Susie Hargreaves. “By sharing this new technology, the identification of images could be speeded up, which in turn could make the internet a safer place for both survivors and users.”

 

Access to the Content Safety API is available by request only, via an application form.

 

Source
