Hacker News

Hi everyone. Joel here from Buffer.

Some of you might have heard about our security breach on Saturday. I just wanted to leave a quick note and clarify that the MongoHQ security breach was how the attackers obtained our users' access tokens, which led to the wave of spam on Saturday.

This is the key final piece of our investigation and brings it full circle. I'm very happy that we now have a full understanding and can be confident there is no backdoor.

I want to be clear that this is still our fault. If access tokens had been encrypted (which they are now), this would have been avoided. In addition, MongoHQ have provided great insights and have much more logging in place than we had ourselves. We're also increasing our logging significantly as a result.
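For context, the usual way to get this protection is to encrypt tokens at rest with a key that lives outside the database, so a leaked database dump alone yields nothing usable. Here's a minimal sketch using the third-party Python `cryptography` library; this is an illustration of the pattern, not Buffer's actual implementation, and the function names are made up:

```python
from cryptography.fernet import Fernet

# The key must live outside the database (app config, environment
# variable, or a secrets manager) so a dump of the DB alone is useless.
ENCRYPTION_KEY = Fernet.generate_key()  # in practice: load from config
fernet = Fernet(ENCRYPTION_KEY)

def encrypt_token(access_token: str) -> bytes:
    """Encrypt an OAuth access token before writing it to the database."""
    return fernet.encrypt(access_token.encode("utf-8"))

def decrypt_token(stored_value: bytes) -> str:
    """Decrypt a stored token only when it's needed for an API call."""
    return fernet.decrypt(stored_value).decode("utf-8")
```

Tokens can't simply be hashed like passwords, since the app still needs the plaintext to call the social network APIs, which is why reversible encryption with an external key is the fit here.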

I've updated our security breach blog post with this information. If you want to see the full set of events, take a read here: http://open.bufferapp.com/buffer-has-been-hacked-here-is-wha...

Let me know if you have any questions about this. I'll keep an eye on this thread.



Just trying to develop a timeline here.

  Buffer security breach - October 26, 2013 [1]
  MongoHQ security breach - October 28, 2013 [2]
But the Buffer security breach was via MongoHQ, so MongoHQ has likely had the issue since at least the 26th, and probably earlier, since the attackers had to have enough situational awareness to target Buffer. I guess my point is, MongoHQ likely had the issue for a while and it went undetected.

[1] http://open.bufferapp.com/buffer-has-been-hacked-here-is-wha...

[2] http://security.mongohq.com/notice


Joel and Josh have it right, we found the actual breach yesterday. You also have it right, the breach happened before Monday. We're hoping to find reasonably conclusive evidence of a start date we can share with affected customers.


Hi Justin. To clarify, from what I understand, October 28 is the date MongoHQ detected this. They've provided us with the logs of database access, and unfortunately the queries leading to our spam attack on Saturday started as early as October 19.

I can understand that MongoHQ wanted to obtain the full picture here and not put other customers at risk by exposing this information before the situation was fully locked down.


I'm not sure what the implication (if any) of your post is, but the timeline seems fairly reasonable to me.

I wouldn't be surprised if it was Buffer's investigation that tipped MongoHQ off to the breach.


Have you considered not outsourcing critical parts of your company to startups that are iterating fast and potentially breaking things? Could you share the reasoning behind Buffer choosing to store their customers' private data with MongoHQ, instead of your own secure infrastructure?


This is the elephant in the room and I'm surprised it hasn't been mentioned by any of the other comments.

When it comes to infrastructure as a service, there seems to be an imbalance between the sensitivity of the information entrusted to external systems on one hand, and the standards those systems are held to on the other.

EDIT: case in point, on http://mongohq.com I can't find a single mention of the word "security". The language is all about ease of use, performance, scalability, disaster recovery, and low cost, but as far as I can tell not a single word is spent on what procedures are in place to safeguard your data from unwanted access.

EDIT: removed inflammatory comment.


"security" is always left for after-the-fact ---- exactly as evidenced by this disclosure and their poor in-house practices.


While I can see your point of view, and the grandparent's, let's be fair. These are startups in a rapidly evolving, highly competitive marketplace and they have limited resources. If they spent months triple checking every dotted i and crossed t they might never even launch, and there'd be no company.

Like everything in software it's about tradeoffs. Maybe they erred a little too far on one side of the curve, so let's learn from that. But it's unfair to expect startups to be in the same league as banks security-wise. Do you have any idea how much a good pentest costs?


I reject the tradeoff that your internal customer support 'impersonate user' web app would be available via simple password on the open internet.


As a founder of another startup (octocall.com) choosing to host our database with another company (postgres, hosted with Heroku) the decision is a simple one: You trade infrastructure management complexity for a monthly fee. Especially when you're starting out, the team is small, and time is much more expensive than whatever your database provider is charging you per month.

It's a great trade-off for most early-stage companies, because managing databases is hard. I'd rather leave it to the experts who specialize only in managing databases. You and your product team have a thousand and one other things to think about other than managing your database. Your provider may end up making mistakes, but that's part of the risk you take.

Security breaches are a mess for everyone involved, and we're in relatively new territory in the Infrastructure-as-a-Service space. I have little doubt that IaaS is, overall, a good thing. As an industry, we'll learn and improve how we deal with all things "security", but we're clearly not there yet.


+1. Using Heroku Postgres, as in your example, gets you a lot for free, e.g. WAL-E disaster recovery.

Also farming out a piece of infrastructure and having a secure stack doesn't have to be mutually exclusive.


That's sort of a loaded question, no? A database is, in one way or another, critical to almost every business. I don't think that means every business should build one in house.


He's not talking about building the database, he's talking about hosting the database. And he's not claiming you shouldn't use third-party hosts, he's suggesting restraint in who you choose, in waiting for a track record to accumulate.


"The backdoor that was created through one of our partners, MongoHQ who are managing our database."

Looks like the database is managed by MongoHQ. That's what OP is talking about.


Security-after-the-fact is actually +EV just like certain risk distributions are +EV in finance. Companies must balance where they want to be in the secureness spectrum against the investment cost to get there and it seems that high grade security isn't worth the opportunity cost for a pretty big class of companies and customers.


  If access tokens were encrypted (which they are now) then this would have been avoided.

That answers my question! (See https://news.ycombinator.com/item?id=6619265)

Kudos for how you guys handled this during these tough few days.


That's right. In addition, Facebook has an 'appsecret_proof' method where you can require signing of all API calls with the app secret. We've now implemented this. Details: https://developers.facebook.com/docs/reference/api/securing-...
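For anyone curious, Facebook's appsecret_proof is an HMAC-SHA256 of the access token, keyed with the app secret and hex-encoded, passed alongside Graph API calls so that a stolen token alone is useless without the secret. A minimal sketch in Python (the function name is mine):

```python
import hashlib
import hmac

def appsecret_proof(access_token: str, app_secret: str) -> str:
    """Compute the appsecret_proof value Facebook's Graph API expects:
    HMAC-SHA256 of the access token, keyed with the app secret,
    returned as a lowercase hex digest."""
    return hmac.new(
        app_secret.encode("utf-8"),
        msg=access_token.encode("utf-8"),
        digestmod=hashlib.sha256,
    ).hexdigest()

# The result is sent as the `appsecret_proof` query parameter on each
# Graph API request, alongside the access token itself.
```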

Thanks for the kind words :)


So you were not storing billing, login, or other such account information in MongoHQ, just the access tokens?

Or were you somehow able to identify which fields from your MongoHQ database were actually accessed by the hacker?


It's not cool to throw your upstream vendors under the bus.


(I am one of the founders of MongoHQ)

Buffer has been exceedingly fair with us, we are fully in favor of them giving customers all the information they have.


Seems like they're pretty clear that they're still culpable for designing a system that relied on trusting a third-party vendor to protect user data.

They waited until after MongoHQ made their own disclosure, and all evidence (including comments on the post) points to a fairly good working relationship between the two.

I'm sure both parties wish this hadn't happened, but I don't see any bus throwing...


Huh? This is pretty normal, I think. When a service has downtime because AWS was having a bad day, say, they normally declare that...



