May 26, 2019, 05:13am EDT
Not a good week for Snapchat. On Thursday, Motherboard reported that “several departments inside social media giant Snap have dedicated tools for accessing user data, and multiple employees have abused their privileged access to spy on Snapchat users.” And now the Sunday Times has published an investigation into allegations that predators are “flocking” to the social media platform, which has become a “haven for child abuse.”
Motherboard’s article cited two former employees who claimed that “multiple Snap employees abused their access to Snapchat user data several years ago.” This included the use of “internal tools that allowed Snap employees to access user data, including in some cases location information, their own saved Snaps and personal information such as phone numbers and email addresses.”
SnapLion, one of the tools referenced in the Motherboard article, was designed to gather information for “valid law enforcement requests.” Claims that this tool was involved in the alleged misuse have not been verified.
A Snap spokesperson told me that “any perception that employees might be spying on our community is highly troubling and wholly inaccurate. Protecting privacy is paramount at Snap. We keep very little user data, and we have robust policies and controls to limit internal access to the data we do have, including data within tools designed to support law enforcement. Unauthorized access of any kind is a clear violation of the company’s standards of business conduct and, if detected, results in immediate termination.”
Ironically, it is this limited user data that is central to the Sunday Times investigation. The newspaper’s investigation has uncovered “thousands of reported cases that have involved Snapchat since 2014,” including “pedophiles using the app to elicit indecent images from children and to groom teenagers,” as well as “under-18s spreading child pornography themselves.” This has now resulted in U.K. police “investigating three cases of child exploitation a day linked to the app, [with] messages that self-destruct allowing groomers to avoid detection.”
The Sunday Times quotes Adam Scott Wandt from John Jay College of Criminal Justice in New York calling Snapchat a “haven” for abusers, arguing that the “self-destruct” nature of Snapchat’s messages “makes it difficult for the police to collect evidence.”
Wandt claims that in this way “Snapchat has distinguished itself as the platform where abuse of children happens… The problem was that adults realized you could do a simple Google search and find out that most Snapchat messages are unrecoverable after 24 hours, even by law enforcement with a warrant.”
The U.K. children’s charity, the NSPCC, rates Snapchat as high risk, with a spokesperson for the charity explaining that predators intent on grooming children “cast the net wide in the expectation that a small number of children will respond.”
The charity has also warned about self-generated images taken and shared by children themselves. “As soon as that image is shared or screenshotted, the child loses control over it… those images may start on a site like Snapchat, but they could very easily end up circulating among technologically sophisticated offenders, making their way onto the dark web.”
Snap told me that “we care deeply about protecting our community and are sickened by any behavior which involves the abuse of a minor. We work hard to detect, prevent and stop abuse on our platform and encourage everyone – young people, parents and caregivers – to have open conversations about what they’re doing online. We will continue to proactively work with governments, law enforcement and other safety organizations to ensure that Snapchat continues to be a positive and safe environment.”
A similar investigation in March focused on Instagram, with the NSPCC claiming that Facebook’s photo-sharing app has become the leading platform for child grooming in the country. During an 18-month period to September last year, there were more than 5,000 recorded crimes “of sexual communication with a child,” and “a 200% rise in recorded instances in the use of Instagram to target and abuse children.” The charity’s CEO described the figures as “overwhelming evidence that keeping children safe cannot be left to social networks. We cannot wait for the next tragedy before tech companies are made to act.”
This latest investigation makes the same point and comes a little over a month after the U.K. Government published proposals for “tough new measures to ensure the U.K. is the safest place in the world to be online,” claiming these to be the world’s “first online safety laws.” The proposals include an independent regulator with the “powers to take effective enforcement action against companies that have breached their statutory duty of care.” Such enforcement will include “substantial fines” as well as, potentially, the powers “to disrupt the business activities of a non-compliant company… to impose liability on individual members of senior management… and to block non-compliant services.”
The regulation of social media has been in and out of the headlines for most of this year. The prevalence of social media use by under-age children, and the risky interactions those children expose themselves to, has been one of the most disturbing aspects disclosed thus far. Regulation is coming. But the open question is how the platforms can prevent users from deliberately circumventing safety controls with little understanding of the risks they might then face.
I am the Founder/CEO of Digital Barriers—developing advanced surveillance solutions for defence, national security and counter-terrorism. I write about the intersection of geopolitics and cybersecurity, and analyze breaking security and surveillance stories. Contact me at email@example.com .