Even at 16, you should be mature enough to know that this is classless. I hope for their sake they never start their own business and never fuck up, because that'd be awfully sad if the next kids to come along decided to show them the same courtesy they've shown here.
@ElliotSpeck:
> ...I'm available for consulting if you ever want to hire a security manager for @phpfog. :)
As someone who takes security seriously, and manages shared hosting security for a living, I can't imagine what the PHPFog people are going through right now. Finding security holes in commercial systems and discreetly notifying the owners of the problem is one thing; broadcasting knowledge of the holes to the world without a reasonable wait is akin to a crime. I don't care whether they actually exploited it; they threw the door wide open without a second thought.
> Finding security holes in commercial systems and discreetly notifying the owners of the problem is one thing
Last time this happened to me, I gave 6 months free on a dedicated server which was announced in an e-mail that went out to around a thousand users (the focus was explaining why feature x was disabled for the past few days).
It was brought up in discussion that it was probably too much, but the alternative seemed terrifying to me, considering the number of tickets opened because of the preventative measures.
Yeah, in my experience, the best way to handle these things and keep goodwill is to own up to them and take responsibility for what happened: explain to your customers what happened, what went wrong, how you fixed the problem (and, hopefully, the entire class of problem), and what you've done to prevent the issue in the future. A mature and honest response goes a long way.
> broadcasting knowledge of the holes to the world without a reasonable wait is akin to a crime
I wouldn't go as far as that. It's certainly bad form, but disclosing a fact (maybe with the exception of immediate national security concerns) can't be considered a crime.
This will cost the PHPfog folks something, and they can (and should) pursue civil action against whoever caused the damage.
Exactly; as I said, the problem is disclosing "without a reasonable wait." Disclosure after a reasonable wait is fair and ethical in the security world. I'm all for full disclosure, but give the affected parties time to clean up the mess and get PR ready.
After berating one of the "d00ds" involved on Twitter, it looks to me like he told his friend how to exploit the problem, and his friend (or his friend's friend) made the site and exploited the hole.
If I show someone how to break into your house, and that person tells someone else "hey, nbpoole's house is open, let me show you," and your house gets broken into, am I completely innocent of the crime? Security knowledge is the kind of knowledge that gets things broken into, so security people necessarily need to be cautious about who they tell about security problems.
FYI, when I found out about an open ASP.NET padding oracle at Subway.com, all I did was run PadBuster to exploit it, without damaging the servers in any other way. Eventually I reported it to feedback@subway.com, and only after a week of no response did I finally post it to reddit:
http://www.reddit.com/r/netsec/comments/g9crj/open_aspnet_pa...
There are numerous facts which disclosing would be considered a crime. For one thing, copyright infringement is a crime; all that is, in essence, is disclosing a fact. Disclosing trade secrets may be a crime. Disclosing personal health records can be a crime. Disclosing insider information to a third party can be a crime. There are plenty of facts which can be criminal to disclose.
Now, this particular case may or may not be criminal, but it is at least incredibly irresponsible.
I think this is a federal crime in the US. If he was foolish enough to actually disclose his details, they can find him and extradite him from Australia for this.... Not a lawyer, but wow, he did not think this one through.
(I had to create a different account because I have no_procrast activated on my main account. It'd be awesome if no_procrast would be automatically disabled during the weekend.)
I realize that it's harder than it looks. However, it would be trivial to allow people to choose the days they don't want the procrastination setting enabled (based on a standard timezone like PST.)
I'm Elliot Speck, one of the guys (let's be realistic, the main guy) behind the phpFog hack. I guess the record needs to be set straight about exactly what went down.
phpfogsucks.com isn't mine; I never contributed directly to it, and any work credited to me there was attributed by the creator and owner of that site, not claimed by me.
My work was slightly different: I was proving that the system was horribly exploitable. Throughout the process I broke into the box, gained root access, and took a screenshot. I also gained access to the phpFog Twitter account and posted a bit. I didn't damage any files, and when I finally came into contact with Lucas, I explained my methodology directly and pointed out a few immediate causes for concern. As a result, the project is now on standby while they fix the issues that were made apparent by my break-in.
I don't consider what I did to be a bad thing. It's better that I break in and make the fact public than that someone break in silently and wipe the box, losing hundreds of hours of both the team's and clients' time. That scenario is below any moral standard I could possibly even consider upholding.
What I did not do:
- Damage or otherwise alter any of the system files
- Damage, alter or view any client files
- Post or otherwise make public the methodology behind my access
- Post or otherwise make public the engine code for phpFog; that was done by someone else, whom I showed the code to in order to investigate further potential security holes before I alerted the phpFog team.
I'm posting here to clear the air, but if you have any questions you can contact me on Twitter: @ElliotSpeck.
"It's better me break in and make the fact I did public, than someone break in silently and wipe the box, losing hundreds of hours of both the team's and clients' time."
It's better yet to break in and discreetly notify the folks involved. Show them a screenshot of Twitter.com proving you COULD have tweeted. Voila: you've done something positive.
Going public is an immature ego play that doesn't consider the feelings of lots of folks. Even if you want the ego boost, post a "How I saved PHPfog" post-mortem once the issue is resolved.
I appreciate that you discovered a security flaw and took action to get it fixed. Thank you.
However, the WAY you did this really screwed up a bunch of people. I have an app running on PHP Fog that serves 25,000 people a day, and I woke up on Sunday morning to a stream of complaints that it had been down for hours. You seem technically capable, so I'm sure you have a lot of interesting (and useful) projects and hacks to come. But next time you do something like this, model it after this:
If you're hacking to help people and make the world a better place, do it like David Chen. With your abilities you will get a lot of respect and appreciation if you do it like that. If you act destructively, some people might appreciate your technical chops but you won't get real respect in the field.
And don't worry too much if it feels like you're at the center of a cyclone right now. It'll pass, and as long as you act more deliberately in the future you'll be okay. :)
Unfortunately, that cyclone may not be as easy to get past. Yes, people won't care about phpfog forever. However, if phpFog (which was at least PARTIALLY at fault here) presses charges, that's a criminal record that will come up on every background check for the rest of his life. This affects job opportunities, visa opportunities, loans (not to mention lawyer debt from fighting it), hell, even insurance prices.
What the kids did was bad, but I think pressing charges and seriously hindering two smart sixteen-year-olds is a knee-jerk, over-zealous application of the law and of retaliation/punishment. Especially (I know I'm going to draw a lot of heat for this) when what they found was THEIR OWN irresponsible storage of sensitive data.
I am a dev. I have also worked in the computer security field for a reputable firm. What phpfog did was irresponsible (actually, stupid!) and it was relatively easily avoidable. I know this because I (along with pretty much every dev) have used the exact stopgaps and quick-fixes that phpFog did. BUT (big lesson) cleaning up after yourself is as much a part of programming as putting those quick-fixes in place. Unfortunately, it's not the "fun" part and it's not the most obvious money-maker.
Like they (pretty much) said, phpFog put off the fixes because they wanted to deliver quickly. That's THEIR decision and THEIR risk/reward assessment. I've made the same assessments in my own work. They should suck it up and learn the lesson, not hurt little kids. They're lucky it was found by these kids and not by someone who knows how to conceal their identity and/or wants to do more serious damage (for example, hurting phpFog's clients).
If I knew some dev at my hosting company was keeping system passwords on a web server, they wouldn't be my hosting company. What about the trust/confidence of the clients that phpFog was knowingly betraying?
Edit: Yes, there is a proper way to disclose information. They're kids. I'm surprised they handled it as well as they did to be honest. I was a much dumber 16 year old.
"I was proving that the system was horribly exploitable."
but I read:
"I was exploiting a horribly exploitable system that, had I notified the admins, almost certainly would have been dealt with fast by some guys who obviously care about their service. If it wasn't, I could have still released it publicly a few days later like every other pen tester anywhere. Instead I went for the lulz. Now I'm backpedaling by justifying bad behavior with worse behavior, editing posts, and blaming people who I told, instead of just admitting I handled it really, really badly."
Personally, I didn't know PHPFog beyond the name, but your jackass move makes me want to actively support them.
And don't kid yourself - nothing you did after finding the vulnerability was in the best interest of PHPFog's users. This isn't pen testing or stumbling across a vulnerability. Telling someone else who released stolen code makes it quite black hat.
The worst thing is, this guy (compwhizii) is sort of important online: he is the system administrator for facepunch.com, a very large forum. I guess he'll be losing that job.
Would you rather that I hadn't, and instead just wiped the box?
How about if I had changed every DNS record for every domain to something like goatse.cx?
In perspective, it's not a dick move at all. I'm not academically subnormal; I wouldn't do anything stupid with a public Twitter account beyond making it known that it was temporarily under someone else's control. What's more, I willingly relinquished control of it back to Lucas about an hour later.
That's a false dichotomy. You didn't have to post on their Twitter account, just like you didn't have to wipe a box or alter DNS. I hope if you learn one thing from this, it's how real responsible disclosure works.
I didn't have to at all, correct. But like you said, one doesn't have to wipe the box or redirect everything to goatse; however, if you give many people the ability, there will be 10% who will do it. In perspective, me posting on the Twitter account (which was easily remedied, and, like I said, control was willingly relinquished) wasn't much of a bad thing.
I 'think' what I did was a relatively good thing. I never claimed it was, nor would I use that sort of thing as a defence. Everything that I have a say in is under control of phpFog now, and no data was lost. Anything further is completely out of my hands, I can only do so much.
I think we're in danger of arguing in circles here, but let me just say that the mindset of "I could've done so much worse, but I showed restraint, therefore it's (relatively) okay" is very troubling to hear (and I know it's not just you who thinks this way). If you broke into a home, only broke a few lamps and changed the locks on the doors, and then told the homeowner "no hard feelings, I mean I could've burnt the place to the ground," you'd sound like a madman. Because this is virtual, the impacts of your actions aren't as immediate or as easy to feel, but they're still there, and they're still real and financial: downtime leads to loss of consumer confidence, which leads to loss of sales, which leads to loss of jobs and livelihoods, and on and on.
I realize everyone makes mistakes, especially as teens, but I just wanted to voice my opinion that the mindset people seem to have that, because they didn't {burn the server to the ground}, they shouldn't feel bad, is both naive and dangerous. If I were you, I'd do my best to drop it, learn the lesson, and move on. Best of luck.
One thing you should probably watch out for in all this is that you've used your real name and your website is personally identifiable. Depending on the laws in your jurisdiction, what you've done (getting root on the phpfog server and accessing their twitter account) could be a criminal offence.
Indeed, a quick look at Queensland's cybercrime laws turns up:
"The Queensland law introduced in 1997 uses the heading 'computer hacking and misuse' but the offence is defined as the use of a restricted computer without the consent of the computer's controller. A restricted computer is defined as one that requires a 'device, code or sequence of electronic impulses' to gain access. There is a penalty scale of two, five or 10 years maximum term of imprisonment depending on whether (1) an offender simply uses a computer, (2) causes detriment or damage, or gains or intends to gain a benefit, or (3) the detriment, damage or gain is valued at more than $5,000."
I think the takeaway you should have from this is that the person you showed this exploit to is not trustworthy; I'd avoid associating with them in the future.
Dude, are you aware that this is a federal crime in the US? They have extradition treaties with AUS. You need to get your parents to get you a lawyer - FAST
The website was allegedly posted before I obtained the engine code; however, the code only went up on the site after I gave a copy of it to someone to analyze and look for further potential exploits.
To clarify, I had no intention of hosting the files for public access and never did so. Any links to my site went dead immediately, as they were only used so that a copy of the source could be obtained for analysis. The files were deleted from the server afterwards.
Aha. That's an unfortunate situation for you. Ultimately though, it seems like you dropped the ball by leaking the code to someone else: even if you weren't responsible directly for the site or for posting the code publicly, you were the one who made it possible. Hopefully you can learn from this experience.
---
Edit: You said "To clarify, I had no intention of hosting the files for public access and never did so. Any links to my site were immediately dead as they were only used so that a copy of the source could be obtained to analyze. The files were destroyed from the server after."
The links are dead. They were the links to the original uploads, for the others to look at. The link was leaked to Andrew somehow. Judging by the timestamps, I'm very sure the files were deleted from there before Andrew posted them.
I don't know, and don't want to find out, how he obtained those links. We're all part of one big group of people, but I never shared the links with him. He's a rash and irresponsible person, as you can tell from that tweet.
2) Stole their source code and distributed it to others.
3) Unlawfully accessed and defaced their Twitter account.
Yes, you're clever, but your behavior is "rash and irresponsible." If I were you, I'd be on the phone with Lucas apologizing and getting ready for some community service.
What a dick move. Did these idiots actually publish their names in relation to this? Coming from "security experts" this is the most unprofessional thing I've ever seen.
Astonishingly classless and mean-spirited. Especially this part:
"feel free to harass the staff in their support forums..."
The founder and CEO of PHP Fog is extremely active in their support forum. He provides lightning fast responses, and is really proactive at trying to help.
I mentioned that my app was undergoing a traffic spike, and he personally on his own initiative ran some tests to help me understand how to deal with the load.
I really feel for Lucas and the team at PHP Fog. They are awesome people, so they'll get through this. But dang, it's gotta be miserable.
True. I was asking for a good hosting solution in the Kohana forums[1]; PHPFog was mentioned a few times, and Lucas came by to offer us direct access (skipping the invite-friends part). He is indeed very active and kind.
Absolutely. Especially taking over phpfog.com and their twitter account. All of the folks on that list (http://elliotspeck.com/, http://johnduhart.me/) seem to be just teenagers though.
Please don't put these guys in the same basket as security experts. They are by no means experts, from the looks of it they are just doing it for the lulz at this point...
Heroku, NodeFu and now PHPFog. All the Heroku-style clones have had security issues in the last few months. Security in this space is very, very hard work (I think NodeFu made a check-in mistake and it wasn't a 'jail/isolation breakout' scenario).
Edit- wow - they just pointed phpfog.com at phpfogsucks.com. I feel bad for the phpfog guys - they have a long weekend ahead.
This is a pretty good lesson: when you have that little niggling feeling in the back of your head about something security-related, take care of it. Otherwise, someone WILL exploit it.
Seems like they were using the load balancer as a way to obfuscate the existence of the individual EC2 instances. Also, that has gotta be really expensive to have an EC2 instance-per-customer.
Depends on the type of instance they spin up, but I would definitely tend to agree with you!
Security in shared hosting is extremely hard (I used to be a sysadmin for a hosting company in a prior life), especially since there is no good way to separate everyone from each other without making performance suck completely. FreeBSD jails alleviate some of it, but you start having scalability issues; PHP running under php-fpm works, but uses up a lot of resources keeping spare instances around; and there are a whole bunch of other approaches with their own tradeoffs.
Individual virtual machines per user isn't such a crazy idea, but it is really expensive. What I would really like to know is how Google has accomplished it, at scale, with AppEngine. How are they able to do their security separation so well that, at this point, I am not even aware of any security breaches?
There has to be a better, secure way to do it, but it may require rethinking how the entire architecture fits together: PHP, the web server, and the database engine.
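To make the isolation problem above concrete, here's a minimal PHP sketch (hypothetical paths, and a hypothetical host where every customer's code runs as the same web-server user; this says nothing about phpFog's actual layout):

    <?php
    // Hypothetical shared host: every tenant's PHP runs as the same user (e.g. www-data),
    // so filesystem permissions alone don't separate customers from one another.

    // Customer A's script can read customer B's config if the path is guessable:
    $neighbor = @file_get_contents('/var/www/customer_b/config.php');
    if ($neighbor !== false) {
        echo "Isolation failure: another tenant's file is readable.\n";
    }

    // open_basedir narrows what PHP itself will open (and since PHP 5.3 it can be
    // tightened at runtime), but it is a PHP-level restriction, not an OS boundary;
    // shell-outs and other interpreters are unaffected by it.
    ini_set('open_basedir', '/var/www/customer_a/');
    var_dump(@file_get_contents('/var/www/customer_b/config.php')); // bool(false) now

That gap is exactly why people reach for jails, per-user processes, or a VM per customer.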
>What I would really like to know is how Google has accomplished it, at scale, with AppEngine. How are they able to do their security separation so well that, at this point, I am not even aware of any security breaches?
Easy (well, it takes a lot of work, but it isn't difficult): they only allow languages that run on VMs; they then rewrite the VMs to remove or limit file system access, network access, and whatever else they don't want you to touch; they prevent access to unknown C code (since at that point you can always fiddle with the stack and other fun stuff); and they count the memory used and instructions run (so they can bill you, and so denial of service gets too expensive).
This kind of thing is easy to do if you throw 100 good developers at it. My guess is that phpfog didn't have that kind of manpower.
Ahem. There have been multiple exploits for heroku, some of which enabled access to code and data of other heroku customers (google for "heroku vulnerability").
From what I read about their virtualization (which may not be up-to-date) they seem to rely on the security of chroot(). If that is still the case then there is a big problem in their future.
chroot was not designed as a security feature but as a system testing tool. You only need a local root exploit to get out of a chroot. You need additional protection to have a proper jail; FreeBSD does this, OpenBSD used to, not sure how it is now.
That is definitely interesting, although it's slightly simpler for Ruby because, unlike PHP, there is no real way to have a single Ruby instance deal with all of the requests (I'm talking about mod_php and PHP-FastCGI, not php-cgi, which spawns a process per request).
I don't think they are obfuscating it. If I remember correctly, their pricing page made perfectly clear that they used dedicated EC2 instances.
Just for the record: the cheapest EC2 instance type is t1.micro, which amounts to ~15 USD/month (+ EBS and IP costs). I didn't see their business plan, so I can't tell what their big picture is here :-)
If they use reserved instances, it should be even less than that. They still need to control abuse in terms of bandwidth, etc. and that is the difficult part.
Not the first "you've been pwned" message on the Internet and won't be the last.
It just happens to be the first I've seen that uses Google Analytics to track the lulz, complete with CSS and @font-face. With that layout I was expecting to see a customer rant, not a "pwned" message.
On a more serious note are they going to be able to afford to have a separate EC2 instance per customer to avoid having to write a proper sandbox?
It didn't take long for someone to use that vulnerability to open up the entire server. People are posting from the @phpfog Twitter account and someone posted the entire codebase: http://twitter.com/#!/communistcake/status/49340298677075968
Edit: Actually, the links in that message appear to just be mirrors of the links at the bottom of the article.
Edit 2: Links in that last status are now dead. Wonder if the young Elliot Speck is trying to walk it back a bit.
Not particularly related to the post, but seeing "phpfog down for maintenance" - a seemingly innocuous title - on the domain phpfogsucks.com gave me an idea.
If I ever have a semi-successful site, I'm going to register sitenamesucks.com as well, and use it as a status blog to explain downtimes, etc.
1) Every environment is going to be chrooted and Apache will be running under per-user mpm
2) The dedicated EC2 servers will run with no security credentials of any sort: a walled garden with no access to anything else.
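For what it's worth, here's a tiny, hedged PHP sketch (assuming the posix extension is available) that any customer can use to see which system user their code actually executes as; it's illustrative only, not a claim about phpFog's new setup:

    <?php
    // Check which system user this PHP script runs as on a shared host.
    if (function_exists('posix_geteuid')) {
        $info = posix_getpwuid(posix_geteuid());
        echo "PHP is running as: {$info['name']}\n";
        // A generic account such as 'www-data' or 'apache' suggests every tenant
        // shares one identity; a per-customer user suggests the kind of separation
        // (chroot plus a per-user MPM) described above is actually in place.
    } else {
        echo "posix extension not available; can't determine the executing user this way.\n";
    }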
Has anyone actually read the exploit? This is not so much hacking as it is PHPFog being extraordinarily stupid. The fact is that such an obvious vulnerability (one that I'm sure many of their experienced customers have noticed) went ignored by the PHPFog team.
The phpfogsucks site is tasteless and mean-spirited, but it is good information for potential PHPFog customers to have that the service they are shipping their valuable code to is extremely poorly managed.
Not sure what the hack was for the main server, but I'm not even sure I would consider the steps mentioned at this site as hacks so much as "server administration." It's a pretty obvious thing to try. It was only a matter of time before someone decided to poke around and see what they could do.
Actually, I'm going to go sign up for an invite over at phpfog... it looks like something I could make real use of. In a way, this incident may turn out to be a boon for the folks over there.
from the @phpfog timeline : "Time for a contest! How many security enhancements were in PHP 5.3.6? First correct answer gets into the beta immediately."
4:22 PM Mar 18th via Twitter for Mac
http://twitter.com/phpfog/status/48856795669737472
Does this at all incriminate PHP itself? Tumblr, which is written in PHP, had a security issue, and now PHP Fog. Is PHP harder to secure than other languages/platforms?
Did you not RTFA? The article is about a PHP hosting company that is getting merc'd because of the security flaws inherent in PHP that led to their design decision to use Amazon EC2.
What's up with the attitude? Seriously. The arrogance and self-righteousness on HN is ridiculous sometimes and really kills the conversation.
To your point, though: no, I didn't read the article, because between it and the flamewar going on here there was so much noise that it was difficult to figure out what was even going on. However, to quote you, "The article is about a PHP hosting company that is getting merc'd because of the security flaws inherent in PHP that led to their design decision to use Amazon EC2."
Yeah, wasn't that the question I just asked? Seriously, maybe you should read the question before downvoting it and replying with a non-reply. My question was actually a serious question. I want to know if there are security flaws in PHP, as I am looking at it for a few projects and would like to know if there are issues with it before I start them.
PHP is just as secure as any other language. It's the programmer's best practices (or lack thereof) and implementation that make the code secure or insecure. The language is mature, actively maintained, and has a nice standard lib (debatable). Whether or not YOUR program will be secure depends on you the PROGRAMMER, not the language.
While there are some features of PHP which are inherently a bad idea (register_globals, for example), these are, for the most part, deprecated and removed in the most up-to-date versions.
I agree with other views that it is the programmer's code that is insecure, not the language itself.
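A worked example of that point, sketched in PHP (the DSN, credentials, and table name are made up): the same query is safe or unsafe purely depending on how the programmer writes it.

    <?php
    // Hypothetical lookup of a user row by id: first the injectable way, then the safe way.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'appuser', 'secret');
    $id  = isset($_GET['id']) ? $_GET['id'] : '';

    // Insecure: user input concatenated straight into SQL (classic injection risk):
    //   $rows = $pdo->query("SELECT * FROM users WHERE id = " . $id);

    // Secure: same language, same query, written responsibly with a prepared statement.
    $stmt = $pdo->prepare('SELECT * FROM users WHERE id = :id');
    $stmt->execute(array(':id' => $id));
    $row = $stmt->fetch(PDO::FETCH_ASSOC);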
So does that mean EC2 is insecure? Or is the flamewar about how the article is really just the writer blaming their problems on something that's not actually at fault, meaning PHP or EC2?
The exploit had nothing to do with PHP. A section of their site was allowing users to execute commands as a user they should not have had access to. This could have happened with any programming language.
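A hedged sketch of that class of bug in PHP (invented parameter names, not phpFog's actual code): unsanitized input reaching a shell, and the usual mitigations.

    <?php
    // Hypothetical "ping a host" feature on a control panel.
    $host = isset($_GET['host']) ? $_GET['host'] : '';

    // Dangerous pattern: "8.8.8.8; cat /etc/passwd" would run both commands.
    //   system("ping -c 1 " . $host);

    // Safer: validate against a strict whitelist, and escape the argument anyway.
    if (preg_match('/^[A-Za-z0-9.\-]+$/', $host)) {
        system('ping -c 1 ' . escapeshellarg($host));
    } else {
        echo "Invalid host.\n";
    }

    // Independently of escaping, the web process itself should run as an unprivileged,
    // per-customer user so that even a successful injection can't touch other tenants.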