The public shaming of the photo-messaging service Snapchat by an anonymous hacking group — by leaking millions of its users' details — illustrates the troubling vulnerabilities of the world of smartphone apps, security experts say.

Particularly vulnerable are those who sign up for the latest app without really knowing how secure their personal information is. “The users have no power in this,” said Ian Goldberg, an associate professor with the University of Waterloo's Cryptography, Security and Privacy group. “The users are basically at the mercy of the company.”

In this case, the irony is that Snapchat touts itself as a “new way to share” photos and videos in a more secure fashion. A shared photo or video only lasts for up to 10 seconds after a recipient opens it. After that, it disappears from friends' phones (unless they take a screenshot).

On Wednesday, though, an anonymous group of hackers posted a database containing the user names and phone numbers of 4.6 million Snapchat users from across North America, apparently as a kind of vigilante action.

The group posted the information on a website, saying it was motivated to put public pressure on the company after it failed to fix a security problem identified months prior.

Though the site has been suspended, the leaked database continues to circulate.

Limited resources

“It is understandable that tech start-ups have limited resources, but security and privacy should not be a secondary goal,” the anonymous group said in an email to TechCrunch, a website that covers technology news and start-ups.

Phone numbers and user names don't constitute the most sensitive personal information, but cybersecurity experts note that they are key bits of information required to track somebody down or steal their identity.

In Snapchat's case, the company wanted to expand its user base quickly, so it added an optional service called Find Friends: if you entered your phone number, anyone who had that number in their address book could see your Snapchat user name. 

Hackers exploited Find Friends by querying the service with large batches of randomly generated phone numbers and harvesting the user names tied to any that matched. Snapchat issued a statement late Thursday saying it will shortly release an updated app that allows users to opt out of the Find Friends service. 
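The attack described above is a classic enumeration technique. The sketch below is purely illustrative: the function names and the stand-in "database" are invented for this example, and Snapchat's real Find Friends API is not reproduced here. It simply shows how a lookup service keyed on phone numbers can be swept with generated numbers to recover accounts.

```python
# Hypothetical sketch of phone-number enumeration against a
# Find Friends-style lookup. All names and data here are invented
# for illustration; this is not Snapchat's actual API.

# A stand-in for a service's user database: phone number -> user name.
_registered = {
    "416-555-0142": "alice_snaps",
    "416-555-0199": "bob.c",
}

def find_friends_lookup(phone_number):
    """Simulates the lookup behaviour: given a phone number,
    return the matching user name, or None if no account exists."""
    return _registered.get(phone_number)

def enumerate_range(prefix, count):
    """Try every number in a block (e.g. 416-555-0000 .. 416-555-0199)
    and collect the (phone number, user name) pairs that match."""
    matches = {}
    for suffix in range(count):
        phone = f"{prefix}-{suffix:04d}"
        name = find_friends_lookup(phone)
        if name is not None:
            matches[phone] = name
    return matches

# Sweeping just 200 numbers in one exchange recovers both accounts.
print(enumerate_range("416-555", 200))
```

At the scale of the real leak, the same loop was simply run against millions of numbers; rate limiting and abuse detection on the lookup endpoint are the usual defences against it.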

The initial problem, however, goes back to August when the Australian group Gibson Security publicly stated that Snapchat had “severe vulnerabilities,” including a function that easily allowed someone to create a database of user names and phone numbers.

Four months later, in December, the group again reported the issue, saying it hadn’t been fixed. This time, they posted detailed information about the problem. Gibson Security, however, says it did not leak the database.

Most companies pay attention when alerted about potential security or privacy gaps in their software, says Waterloo's Goldberg, though they don’t have to.

“Hopefully the company wants to fix the problem and they do,” he said. “In this case, it didn’t appear to have happened.”

400 million snaps a day

Brian Bourne, a Toronto-based cybersecurity expert, said Wednesday’s leak likely marks the first massive security incident for the nearly three-year-old company.

He notes that more mature companies such as Microsoft, Adobe and Apple have formal intake processes for security issues.

Start-ups tend to have fewer resources for security, which can be expensive and complex and can slow down product development. But Bourne notes that Snapchat is well beyond a start-up at this stage.

Since two Stanford University students created the application in early 2011, the company has risen rapidly and now competes with social media giants.

In November, CEO Evan Spiegel told TechCrunch that its users now share 400 million snaps a day, more than both Instagram and Facebook. The company won’t disclose the number of users, but estimates have ranged from just under 10 million to more than 25 million.

However, few protections exist for those millions of users when a software gap or weakness is identified. In fact, some suggest that laws work against the public interest by punishing those, such as hackers, who expose vulnerabilities.

The U.S. Federal Trade Commission can go after companies when software gaps exist that put users in danger, notes Johannes Ullrich, chief technical officer of the SANS Internet Storm Center, a global co-operative that monitors internet security. But he adds that the problem usually must be more severe than this one, such as revealing financial account details.

Plus, most user agreements contain clauses that absolve companies of responsibility in situations like Snapchat's, where the data is leaked by an outsider, said Ullrich.

Bourne says in Canada there are far fewer protections than south of the border.

“We don’t even have any disclosure laws,” said Bourne, who is president of CMS Consulting and co-founder of Toronto's annual computer security conference. “So if there’s a compromised Canadian company, you don’t even have to tell the people who were affected.”

Figuring out how to encourage app makers to better protect their users is a matter of much discussion among security experts these days, says Goldberg.

A key part of the problem is that the financial incentives favour lax security. Not only does stronger security cost money, but companies can also make money by selling users' personal information to advertisers.

“So there needs to be an incentive in the opposite direction,” said Goldberg.

Legal chill

What's more, even the public’s knowledge of potential problems with apps is not assured.

Many security researchers abide by a policy of so-called responsible disclosure, whereby they privately alert a company to a vulnerability to give it time to issue a fix, as Gibson Security did last summer. Often, the person reporting the issue works with the company until it releases the fix (and receives credit or money for the help).

But when that doesn’t work, some groups take the next step and provide full disclosure, revealing details about the security or privacy issue to the public — and potentially putting users at risk. 

However, Goldberg, for one, is troubled by recent incidents in Europe where companies have pursued lawsuits or criminal charges against security researchers who have threatened to publish details about a system’s vulnerability.

Last July, a U.K. court blocked three researchers from publishing details of how to hack a car immobilization system after Volkswagen and a defence group argued the information could be used by criminals.

Radboud University Nijmegen, which represents two of the researchers, said the ban was incomprehensible since the publication in no way described how to steal the car and the chip-maker was alerted nine months prior to publication.

However, the reality of such legal action can be chilling.

“If the user knows that there is this vulnerability, they wouldn’t use it or a new user wouldn’t sign up for it until the vulnerability is fixed. But they need to know this in order to make informed decisions,” said Goldberg.

While security researchers suggest that vigilante actions such as the one taken by the anonymous hackers against Snapchat are seldom justifiable, they can stem from frustration.

“If you don’t respond to me, then how can I force you to take action?” said Bourne. “I have no legal recourse. I can’t go to the police and say I just discovered that [your] software has a vulnerability and [you’re] not fixing it.”

“So that’s when people take things into their own hands.”

Security experts suggest there’s little those affected by the Snapchat leak can do other than delete their account, wait for a fix to be posted and be more cautious next time.

“You kind of have to choose who you put your trust in, knowing that pretty much anybody, no matter what safeguard they use, could still have an incident,” said Bourne.