
Facebook ‘revenge porn’ pilot proceeds despite concerns


Matrix


Pre-emptive non-consensual image blocking process was widely criticised when first announced, but will proceed unchanged regardless

Facebook says it is proceeding with its controversial ‘revenge porn’ prevention pilot, despite a slew of concerns from security experts and victim advocates.

The company announced the scheme in November last year, detailing the steps users had to take to stop intimate images being shared on Facebook, Instagram and Messenger without their consent.

The process – which required potential victims to send their private photos to Facebook to be scrutinised by “a handful of specifically trained members” of the company’s “Community Operations Safety Team” – drew much criticism regarding its security pitfalls and the considerable burden it places on users.

 

The Australian Office of the eSafety Commissioner, led by Julie Inman Grant who at the scheme’s launch said the agency was “proud to partner with Facebook”, today told Computerworld that the Office had “provided some feedback about the proposed pilot, which Facebook is working through before the pilot goes live”. 

On Wednesday night Facebook’s global head of safety Antigone Davis posted that testing for the pilot program was this week “starting in Australia”, detailing a process unchanged from the initial announcement.

“This week, Facebook is testing a proactive reporting tool in partnership with an international working group of safety organisations, survivors, and victim advocates, including the Australian Office of the eSafety Commissioner,” Davis wrote.

One further detail given by Davis about the “proactive reporting tool” potentially raises more concern, with the company revealing it will store a user’s intimate images on its servers for up to a week.

Security experts criticised the scheme’s mechanics back in November – before the Cambridge Analytica scandal revealed the personal information of up to 87 million Facebook users may have been improperly shared.

“My huge concern with this program is that it turns a vulnerable user's fear of future possible harm into an actual tremendous privacy invasion – sending an intimate picture to a corporation that's among the worst privacy abusers on the planet,” the University of Melbourne’s Vanessa Teague told Computerworld.

“At a time when the GDPR and other progressive laws are trying to improve users' opportunity to take their data and leave a relationship with a corporation they no longer trust, this solution puts even more power into the hands of those who have already shown, at least in a political context, that they have irresponsibly abused it,” she added.

The eSafety Office said it understands “that Facebook plans to go live with the pilot later this year”.

Victims pay the cost

The pilot’s process for users who fear their intimate images may be shared on Facebook is as follows:

• They first contact the Office of the eSafety Commissioner in order to “submit a form”. They then receive an email “containing a secure, one-time upload link” which they use to upload images they fear may be shared.

• “One of a handful of specifically trained members of our Community Operations Safety Team will review the report and create a unique fingerprint, or hash, that allows us to identify future uploads of the images without keeping copies of them on our servers,” Facebook explained.

• The victim is then notified by email and their images are deleted from Facebook’s servers “no later than seven days”.

• The hashes are held by Facebook, so that if someone attempts to upload “an image with the same fingerprint”, it is blocked from appearing.

Image hashing is a method of converting an image into a string of numbers, a ‘digital fingerprint’, using a perceptual hashing algorithm. The same image, even if it is in a different file format, has been resized, or has had a watermark or filter added, will tend to produce the same image hash.

This allows for faster searching (Google uses the technique for its image search) and lets Facebook detect and block matching images when they are uploaded.
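To illustrate the general idea, below is a minimal sketch of one simple perceptual hash (an ‘average hash’) in Python, assuming the Pillow imaging library is installed. Facebook has not said which algorithm it actually uses, and the file names and matching threshold here are purely illustrative.

from PIL import Image

def average_hash(path, hash_size=8):
    # Shrink the image to an 8x8 greyscale grid and record, per pixel,
    # whether it is brighter than the grid's average brightness.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits  # a 64-bit fingerprint

def hamming_distance(a, b):
    # Count the bits on which two fingerprints differ.
    return bin(a ^ b).count("1")

# Resized, re-encoded or lightly edited copies of the same photo produce
# identical or near-identical fingerprints, so a small distance threshold
# catches them without the image itself ever being stored.
if hamming_distance(average_hash("reported.jpg"), average_hash("new_upload.jpg")) <= 5:
    print("Likely a match - block the upload")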

The hashing could be done by an app or simple software downloaded by the user, meaning Facebook would never need to see the intimate image itself, only the image hash, which it could use to block matching images from being uploaded.

“The obvious technical improvement would be to let Facebook users hash their photos on their own device, rather than uploading the plain version to Facebook, who then promises to hash it,” Teague said.
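In that user-empowered version, only the fingerprint would ever leave the device. A hedged sketch of what the client side might look like, reusing the toy average_hash() helper from the sketch above; the endpoint URL and field name are invented placeholders (Facebook exposes no such API), and the requests library is assumed.

import requests

def report_fingerprint_only(path):
    # The intimate image itself never leaves the device; only the short
    # 64-bit fingerprint (16 hex characters) is submitted for future matching.
    fingerprint = format(average_hash(path), "016x")
    requests.post(
        "https://example.com/hypothetical-ncii-report",  # placeholder endpoint
        data={"fingerprint": fingerprint},
        timeout=10,
    )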

Facebook’s chief security officer Alex Stamos yesterday hit back at this suggestion, tweeting that hashes couldn’t be “shipped into client code without bad guys creating ways to manipulate images to not be caught or to create false positives”.

However, experts dispute this, arguing the algorithm doesn’t need to be secret to be effective, and its workings can be revealed anyway simply by using it and observing its outputs.

The reason Facebook isn’t allowing local hashing is a commercial one, they argue.

“Facebook has an incentive to resist this user-empowered version for commercial reasons, not because plain uploads are in the user's interest. The hashing algorithms are probably of significant commercial value, so Facebook is protecting the privacy of its own assets, at the cost of users' privacy,” Teague explained.

“Furthermore, the hash may be relatively easy to circumvent, that is for a malicious outsider to post similar photos that don't trigger Facebook's detection-and-removal algorithm, and also relatively easy to reverse, i.e. for Facebook to recover some details of the photo from the hashed version. Keeping the algorithm secret solves neither of these problems, but makes users much less likely to be aware of them,” she added.

Stamos later tweeted that “A future where much of this can be done client-side is not impossible.”

Emotional impact

Image-based abuse or ‘revenge porn’ is undoubtedly a devastating problem.

According to an October national survey by the eSafety Office, one in 10 adult Australians have experienced their nude or sexual image being shared without consent, and almost one-fifth have been bystanders to image-based abuse.

Younger adults, women, Indigenous Australians and those who identify as LGBTI are far more likely to be victims, although all sections of society are affected.

For victims the psychological and emotional impact is huge. As one told an eSafety Office qualitative study: “It made me put a lot less value in myself for a long time”.

Since October 2017, people experiencing image-based abuse have been coming to the eSafety Office for help getting images removed from websites and social media, with 200 reports to date.

Facebook already lets users report intimate images that have been shared on the site without their consent, which it then removes.

But with trust in the platform at an all-time low, it is difficult to assess how successful the pre-emptive approach, if it ever gets beyond the pilot stage, will be.

Unease at sending Facebook intimate images could be heightened by the platform’s announcement this month that it would be launching a dating service – arguably encouraging the exchange of such images (although the messaging function is understood to be text-only “for safety reasons”).

With Facebook's technical might, perhaps a less intrusive solution could be found.

“If they were more motivated to block pornographic images from their pages, they wouldn't have to ask people to show them the ones they were afraid of having posted,” Teague said.

source

EternalPurple

Facebook doesn't really care about its users; all it cares about is personal data and its money-making potential.
