How Facebook’s Anti-Revenge Porn Tools Failed Katie Hill

Twitter declined to provide any comment, and instead pointed me to the company’s nonconsensual nudity policy. The original DailyMail.com tweet—nude photo, shortened link, and all—remains online, with 1,500 retweets and 2,300 likes.

The photos will indelibly remain on the rest of the internet, too. Once they were published by RedState and DailyMail.com, they seeped across networks and platforms and forums as people republished the images or turned them into memes or used them as the backdrop for their YouTube show. (After I contacted YouTube about some examples of the latter, it removed the videos for violating the site’s policy on harassment and bullying.)

It’s one of the many brutal aftershocks that this kind of privacy violation forces victims to endure.

“You can encourage these companies to do the right thing and to have policies in place and resources dedicated to taking down those kind of materials,” says Mary Anne Franks. “But what we know about the viral nature of especially salacious material is that by the time you take it down three days, four days, five days after the fact, it’s too late. So it may come down from a certain platform, but it’s not going to come down from the internet.”

Using AI to Fight Back

Two days after Katie Hill announced she was stepping down from office, Facebook published a post titled “Making Facebook a Safer, More Welcoming Place for Women.” The post, which had no byline, highlighted the company’s use of “cutting-edge technology” to detect nonconsensual porn, and to even block it from being posted in the first place.

Facebook has implemented increasingly aggressive tactics to combat nonconsensual porn since 2017, when investigations revealed that thousands of current and former servicemen in a private group called Marines United were sharing photos of women without their knowledge. Facebook quickly shut down the group, but new ones kept popping up to replace it. Perhaps sensing a pattern, after a few weeks Facebook announced that it would institute photo-matching technology to prevent people from re-uploading images after they’ve been reported and removed. Similar technologies are used to block child pornography or terrorist content, by generating a unique signature, or hash, from an image’s data, and comparing that to a database of flagged material.
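
To make the mechanism concrete, here is a minimal sketch of how hash-based photo matching can work, using the open-source imagehash library's perceptual hash as a stand-in for Facebook's proprietary signatures; the function names, distance threshold, and in-memory "database" are illustrative assumptions, not the company's actual system.

    # Illustrative sketch of hash-based photo matching (not Facebook's code).
    # A perceptual hash from the open-source imagehash library stands in for
    # whatever proprietary signature the platform computes, and a Python set
    # stands in for its database of flagged material.
    import imagehash
    from PIL import Image

    flagged_hashes = set()  # signatures of images already reported and removed

    def register_flagged_image(path: str) -> None:
        # Hash a reported image and add its signature to the block list.
        flagged_hashes.add(imagehash.phash(Image.open(path)))

    def is_reupload(path: str, max_distance: int = 5) -> bool:
        # Perceptual hashes of near-identical images differ by only a few
        # bits, so a small Hamming-distance threshold still catches crops
        # and re-encoded copies.
        candidate = imagehash.phash(Image.open(path))
        return any(candidate - known <= max_distance for known in flagged_hashes)

The point of using a perceptual rather than cryptographic hash is that a re-upload doesn't have to be byte-for-byte identical to match: minor edits still land within the distance threshold.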

Later that year, Facebook piloted a program in which anyone could securely share their nude photos with Facebook, which would hash them preemptively so the images could be blocked automatically before they were ever posted. At the time, the proposal was met with some incredulity, but the company says it received positive feedback from victims and announced the program’s expansion in March. The same day, Facebook also said that it would deploy machine learning and artificial intelligence to proactively detect near-nude images being shared without permission, which could help protect people who aren’t aware their photos have leaked or aren’t able to report it. (Facebook’s policy against nonconsensual porn extends to outside links where photos are published, but a spokesperson says that those instances usually have to be reported and reviewed first.) The company now has a team of about 25 people dedicated to the problem, according to a report by NBC News published Monday.

“They have been doing a lot of innovative work in this space,” Mary Anne Franks says. Her advocacy group for nonconsensual porn victims, the Cyber Civil Rights Initiative, has worked with many tech companies, including Facebook and Twitter, on their policies.

Facebook will also sometimes take the initiative to manually seek out and take down violating posts. This tactic is usually reserved for terrorist content, but a Facebook spokesperson said that after Hill’s photos were published, the company proactively hashed the images on both Facebook and Instagram.

Hashing and machine learning can be effective gatekeepers, but they aren’t totally foolproof. Facebook has already been using AI to automatically flag and remove another set of violations, pornography and adult nudity, for over a year. In its latest transparency report, released Wednesday, the company announced that over the last two quarters, it flagged over 98 percent of content in that category before users reported it. Facebook says it took action on 30.3 million pieces of content in Q3, which means nearly 30 million of those were removed automatically.

Still, at Facebook’s scale, that also means almost half a million instances aren’t detected by algorithms before they get reported (and these reports can’t capture how much content doesn’t get flagged automatically or reported by users). And again, that’s for consensual porn and nudity. It’s impossible to say whether Facebook’s AI is more or less proactive when it comes to nonconsensual porn. According to NBC News, the company receives around half a million reports per month. Facebook does not share data about the number or rate of takedowns for that specific violation.
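
To see how those rounded figures hang together, here is a quick back-of-envelope calculation; the exact proactive rate is an assumption on my part, since the report only says "over 98 percent."

    # Back-of-envelope check on the transparency-report figures cited above.
    # 0.984 is an assumed proactive rate; the report says only "over 98 percent."
    total_actioned = 30_300_000   # porn/nudity posts Facebook took action on in Q3
    proactive_rate = 0.984

    flagged_proactively = total_actioned * proactive_rate           # ~29.8 million
    reported_by_users_first = total_actioned - flagged_proactively  # ~485,000

    print(f"Flagged before any user report: {flagged_proactively:,.0f}")
    print(f"Surfaced only after a user report: {reported_by_users_first:,.0f}")

Under that assumption, the math lands roughly where the article does: close to 30 million posts caught by the algorithms, and just under half a million that users had to report themselves.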