Drupal Core – Highly Critical Public Service announcement (drupal.org)
222 points by ohashi on Oct 29, 2014 | 131 comments



Whenever I read about the latest vulnerability in a popular WCMS, I wonder why static HTML export still doesn't seem to be a prioritized feature in popular systems.

After all, most sites out there probably don't need server-side dynamic preprocessing for every request. The CMS directory could be locked using HTTP auth (implemented by the HTTP server); this way, not every little CMS bug would allow the world to compromise the whole server.
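
For the HTTP-auth idea, a minimal sketch of what that lock-down could look like with Apache (the paths and realm name are placeholders, and it assumes mod_auth_basic and AllowOverride AuthConfig are available):

    # .htaccess inside the CMS/admin directory (everything else stays static).
    # The password file is created separately, e.g. with the htpasswd tool.
    AuthType Basic
    AuthName "CMS backend"
    AuthUserFile /etc/apache2/.htpasswd-cms
    Require valid-user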

Do we really expect every parish choir with a web site to hire a CMS specialist who installs updates within hours of the release and fixes all the compatibility quirks that come with new major releases? That is an unrealistic expectation, and it is what saddles us with thousands of zombie machines.

And what happens if the CMS for some old site stops being maintained? A responsible admin would shut the site down, resulting in a loss of potentially valuable information. This issue would be solved by using static HTML export, too.

Are there any well-maintained open-source CMS out there where static HTML export is an integral part of the architecture, ideally with good usability and written in PHP (not that I like the language, but that's what is available everywhere)? (I'm not talking about command line static site generators without a user-friendly backend - those are only an option for techies.)


I was just thinking the same thing. Elsewhere, someone asked how to handle "build it and forget it" jobs of the type handled by small web contractors all the time. Drupal is a really popular engine for those kinds of contract apps.

At this point, if I was advising someone building sites for small businesses and the like, my advice would be:

Can you use a hosted service? WordPress.com, Shopify, that kind of thing? If you can make that work, that should be your first and only choice. If not... are you really sure that you can't use a hosted service? Okay, then build a site that serves static HTML to end users, with a carefully locked-down admin section. On a different domain, if you can.

If you need dynamic features for users (a shopping cart, etc.), again, see if you can integrate a hosted service with your static site. If the reason you can't use a hosted service is because your client is too cheap to pay for it, do not take that client.

Otherwise, if your client absolutely needs bespoke, dynamic features for their end users, and absolutely no hosted service will work for them, they need to invest in a support contract, and you need to tell them up front that they'll have to do that. There are actually contractors out there that do long-term support for other people's apps, if you don't want to be saddled with it yourself.


Actually, this is one of the core features of the Ruby CMS I've been working on for the last few years: http://spontaneous.io

Its template engine supports two tag types: one that gets run during the publish stage & one that gets run at the request stage.

It can then take an intelligent approach to publishing by having a publish step that renders each page to a static file. It's then trivial to test for the existence of 'run at request' tags and position the generated template accordingly: pure static files get put where a reverse proxy (nginx by preference) can see them, anything with per-request tags gets put in a 'private' directory out of nginx's paths for rendering & delivery by some front-end server.

This has many advantages, including:

- static pages can be served directly by Nginx

- the public-facing site runs in a separate process that consumes the generated templates

- the public app (if needed at all) has no admin-centric code, so it has a smaller (& separate) attack surface

- the publish step can be seen as a form of code generation, so you could, in theory, publish a dynamic PHP-powered site from this Ruby CMS
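
To make the first point concrete, here is a minimal nginx sketch of that split (the root path and upstream port are placeholders of mine, not from the project's docs):

    server {
        root /var/www/site/public;              # published static files

        location / {
            # serve the pre-rendered file if it exists,
            # otherwise fall through to the front-end app
            try_files $uri $uri/index.html @app;
        }

        location @app {
            proxy_pass http://127.0.0.1:9292;   # per-request templates rendered here
        }
    }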

I'm also gradually working towards abstracting the template 'filesystem'; currently you can render to disk, Redis, memcache or any other key-value store supported by the Moneta gem [1].

For the developer it gives the power of static site generators but provides a simple & usable editing interface for content editors.

[1]: https://github.com/minad/moneta


It's a little more complex when the site is not 100% static. Even a contact form requires a server.

But I do think there should be a clean separation between the HTML and the admin backend. Security is only one reason; there are other very important reasons:

1. Waste of resources: the machine that builds the HTML from the CMS is completely idle for 99% of the time.

2. Scalability: the HTML should be served from a storage service like S3 together with a CDN. There should be absolutely no downtime in viewing the HTML as a result of overload.

The ideal system for small websites is a machine that is turned off by default, and when an admin needs to change something it is turned on (even if that takes a whole minute). After the changes are committed, the system generates the HTML and sends it to S3. For forms, comments, and other dynamic things it's best to use third-party services (like Facebook comments and the billion form services out there), or a separate small machine that captures user input (completely separate from the powered-off admin machine).
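
As a rough sketch of that publish step (bucket name and path are invented), the admin machine could simply sync the generated HTML to S3 and power itself back down:

    # run on the admin machine after the CMS regenerates the site
    aws s3 sync ./public-html s3://example-small-site --delete
    sudo poweroff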


> The machine that builds the HTML from the CMS is completely idle for 99% of the time.

Why would that have to be a separate machine? It could easily be the same one.


A separate machine/VM is better from an availability perspective.


> A separate machine/VM is better from an availability perspective.

How so? A separate machine is one more thing that can fail and since it isn't web facing it won't help with availability if the other one fails. And if it is a VM they will both go down if the underlying hardware fails.


It's true that a separate machine is one more thing that can fail, but if its purpose is so different, as with the split between the "active CMS" and the "static hosting service", then it becomes easier to create a replacement.

E.g. the frontend can be replicated (if needed) or S3 can be used, while the backend CMS remains intact. Or the backend CMS can be run in an HA setup and the static hosting in the cloud.


Wagtail (our open-source Django CMS [1]) can generate a static HTML export of your site to the filesystem, Amazon S3 or Google App Engine, piggy-backing on the excellent django-medusa library [2]. This feature was commissioned by a security company who didn't want to risk an embarrassing exposure to new vulnerabilities in Drupal, Rails, Django etc.

Search is an issue for static HTML sites. Swiftype [3] and Algolia [4] look like solid options - has anyone used these in production?

[1] https://github.com/torchbox/wagtail

[2] https://github.com/mtigas/django-medusa

[3] https://swiftype.com/

[4] https://www.algolia.com


TL;DR: publishing static HTML is not straightforward, and dynamic publishing has huge advantages.

Compare this NYTimes article:

http://www.nytimes.com/2012/05/15/technology/facebook-needs-...

to this one:

http://www.nytimes.com/2014/10/28/business/virginia-plans-to...

Everything pre-redesign was published as a static page. Back-publishing thousands of articles is impractical and risky (for other reasons).


> Back-publishing thousands of articles is impractical and risky (for other reasons).

Would be curious as to why.

Also, while the pages clearly have different designs, it isn't immediately clear which is "better" (or that this betterness is due to its being dynamic versus static).


Funnily enough, Drupal already has this as a contributed module; I'd imagine WordPress has something similar. But it's not a massively popular approach, as a) it can lead to considerable complexity, since an update to an atom of content may need to propagate to many parts of a site, and b) the whole web is moving towards contextual dynamic delivery.


> Funnily enough Drupal already has this as a contributed module, would imagine Wordpress has similar.

Sure, plenty of static export plug-ins do exist, but I don't think that core functionality like this should be implemented as a plug-in.

First, if it isn't enabled by default, most people won't bother to enable it. Second, who knows how long and how well the plug-in will be maintained? Third, if static export is an architectural afterthought, it will probably break unusual features and workflows.

> a) it can lead to considerable complexity as an update to an atom of content may need to propagate to many parts of a site

We shouldn't let perfect be the enemy of good. Even if some blog article from months ago won't get the updated shiny new sidebar, what's the matter? The New York Times still have content from the web stone age on their server [1] - it doesn't use the latest template, but isn't it more important that this content is still available? It probably wouldn't be had they used a dynamic CMS back in 2001.

> b) the whole web is moving towards contextual dynamic delivery.

I'm not talking about amazon.com. They can probably afford proper maintenance of their fully dynamic site.

[1] http://www.nytimes.com/learning/general/onthisday/big/1014.h...


As the creator of a static export module for Drupal, and a maintainer of another, I agree that in the end if it's not built into the core, it's not going to have much long-term viability. That said, D8 is built on services, which makes an interface for static export available basically out of the box.


> b) the whole web is moving towards contextual dynamic delivery.

But not using the kind of server-side dynamism Drupal provides. It is moving towards sites serving up static files consisting mostly of JavaScript that then call data services, so-called single-page sites.


In so-called single page sites, the data services are the server side dynamism. The only difference is that you've moved the template engine client-side, and replaced one call to the server-side application with multiple calls.

SQL injection attacks are just as possible against those kinds of apps, although slightly tougher to configure because you need to read client-side JS to find the service URLs, tokens, etc.

Drupal is used to power client-side JS-based sites. The current Tonight Show website is a recent example.


+1000. If your content management system is also your content delivery system, you are doing it wrong.


Having moved a large media co from bespoke systems which worked this way, I couldn't disagree more. Separate frontend and backend systems are an absolute maintenance nightmare and kill time to market, which is the lifeblood of any online commercial endeavour. Apart from being extremely dated (cf. Vignette), that architecture is a technical conceit which doesn't put business needs first. It is easy to build and easy to secure, but that doesn't make it a useful long-term solution.


I doubt any CMS will adopt this into its core, since there are so many approaches to choose from, all with as-yet unclear benefits. These community solutions should perhaps be marketed better, and if there is indeed a need for this functionality, the market should prove that.

There is an implementation out there that I think is the most intriguing, integrating Drupal with Jekyll: http://www.gizra.com/content/dekyll-drupal-on-jekyll/


> I doubt any CMS will adopt this into its core

There is at least one open source WCMS that uses static export at its core: http://openengine.de/ (site is in German, but docs are in English) - unfortunately, it has been unmaintained since 2010. It will export .htm or .php files depending on whether you need pre-processing on every request.

As far as I know, though, Publicis still use a closed source version of openEngine (openEngine Corporate ASP) for two clients: http://www.siemens.com/ (pretty big fish) and http://www.man-finance.de/ (search for "openEngine" in the source code).

So this is definitely viable as a core approach.


If you turn on the Drupal cache you can cache pages for non-logged-in users, and for logged-in users with a module.
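
For reference, a minimal settings.php sketch for Drupal 7's anonymous page cache (values are illustrative; the logged-in case needs a contributed module, as noted above):

    <?php
    // settings.php overrides for Drupal 7 (illustrative values only).
    $conf['cache'] = 1;                      // page cache for anonymous users
    $conf['cache_lifetime'] = 0;             // minimum cache lifetime
    $conf['page_cache_maximum_age'] = 3600;  // max-age advertised to proxies/CDNs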

The problem is a lot of stuff is still dynamic even though it doesn't look like it.


By default it stores pages in the database, so this doesn't address the security vector, just performance.


Do we really expect every parish choir with a website to configure server file and directory permissions and turn on HTTP authentication? Seems to me that static site generation is only going to be more secure if the people setting it up know what they're doing.

Probably the best approach these days for a parish choir website is a hosted solution like Wordpress.com, Squarespace, Google Sites, or if we're talking Drupal specifically, Acquia Cloud Site Factory, or Pantheon.


I've been using Stacey for that purpose. It doesn't exactly export HTML, but the attack surface is much reduced, as it doesn't interact much with the outside world: http://www.staceyapp.com/


I tried Stacey but had a hard time doing a few crucial things:

1) reordering posts -- posts are displayed in _alphabetical_ order, as are directories, so the suggested nomenclature is a numerical prefix before every file name. So if I wanted to rearrange things, I had to decide whether or not a linear shift was worth the benefits of file name readability.

2) the order of posts is reversed! I want new blog posts to show up at the top of my feed, but Stacey puts higher numbers underneath. I didn't want to count backwards, so I looked around and found a php hack for reversing this.

This might be fun if you're hankering for some DIY PHP, but I wouldn't recommend it otherwise.


Stacey 3.0 allows you to reverse the order of arrays with a twig filter:

https://github.com/kolber/stacey/wiki/Templating-Language#fi...


It doesn't have a backend for editing content, does it? This would make it unsuitable for the proverbial parish choir.


Stacey is like Kirby without a web interface. You manage content by dropping text files into directories. It's not inelegant. Admin interfaces can be just as poor for representing content as anything else (see: TinyMCE/WordPress) so once you get a client used to the workflow it isn't much of a hassle.

It's not like you're forcing them to build jekyll.


The takeaway: if you run a site or application that is accessible on the Internet, you are responsible for ensuring the site, servers, and infrastructure are maintained after the initial build is complete.

If you're helping a client or company build a new site/app, and they are not ready and willing to either maintain the site themselves, or pay for a decent maintenance contract, or otherwise guarantee that security patches and hot fixes are applied in a timely manner, they don't yet value their website/app enough. It's your job—as someone who knows the consequences—to convince people of this.

This applies all the more when a project is built on a widely-used platform like Drupal, Wordpress, RoR, Django, etc. Though bespoke/lesser-used frameworks may provide a tiny amount of protection through obscurity, that's still no excuse for not educating people on the importance of maintenance in any software project.


Drupal is saying that people were being compromised only hours after the details were known. That's a very short window in which to update software. In this instance, I'm not really sure what anyone could have done to avoid the potential for compromise. Your install is practically DOA by the time you learn the news.


That's exactly the issue. Most enterprises didn't even have time to be notified and properly test/push a patch live before the attacks were already in the wild.


It's like Exploit Wednesday for Windows, except Drupal is a lot easier to reverse and find where the security issues lie, given it's open source. So instead of taking a day to reverse/find holes, it takes a matter of minutes.


In addition, the issue was leaked pre-announcement among a great many parties who had time to prepare.


Do you have more detail on that? I'd hazard a guess at irresponsible disclosure, or an internal Drupal employee leak.


Then you fall back on backups. These are all very old admin problems.


If I'm using something like Composer to manage dependencies, should I be running an update every week on every past website I've created? I feel like having this run as a cron job would be a bad idea.

If I'm not, is there a way to be notified about critical patches? Should I be signed up for a mailing list for every Github project I've incorporated?

Where I work, it seems like the standard practice is to finish the project, deploy it, and let it sit until the client requests any changes.


This really speaks to this line in the top comment "It's your job—as someone who knows the consequences—to convince people of this."

Your company should really not be taking the "set it and forget it" approach to building web-based applications. It is important to educate the client on the importance of a proactive support/maintenance plan.

As a digital agency building mid-market websites/applications, we have also found that selling clients on support/maintenance has helped increase client loyalty when they want to take on new projects. If you build a site and then disappear, the chance they'll decide to use a different firm next time increases greatly.


It would definitely be a bad idea to run Composer update via cron because unless you know for sure that future versions of packages won't break any functionality you built on top of them, your website could stop working (or worse start working in unexpected ways) without your knowledge.

Maintain a staging environment (even in a temporary virtual machine if necessary) and run your updates in that environment first, then check it, and then deploy to production once you've confirmed everything is ok.
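
A rough sketch of that cycle, assuming Composer and whatever test runner the project has (PHPUnit here):

    # on the staging copy
    composer update            # pull in new versions allowed by composer.json
    vendor/bin/phpunit         # run the project's test suite
    git diff composer.lock     # review exactly which packages changed

    # on production, once staging checks out: install exactly what the lock file pins
    composer install --no-dev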


This requires some combo of having a support contract with a budget large enough for manual QA or having built automated tests with the initial work. Vast majority of projects at build-and-forget CMS agencies simply won't. (The reason Drupal and PHP thrive in these environments is because clients can't be upsold out of their cheap LAMP shared hosting.)


You would probably have to make a few assessments, like how often this package is used in my project and how exposed it is in terms of security vulnerabilities.

An authentication package that provides HTTP endpoints on top of your framework, where the controller classes are entirely out of your control or you simply pass a Request instance to the service class, seems highly likely to be a security risk. The same goes for your ORM(s). Something like a Markdown converter seems less likely, though not impossible.

The popularity of the package could also play a role. If it's popular, it's more likely bugs/vulnerabilities will be spotted and revealed publicly on places like HN or Reddit.

Lastly, if you can ensure that all the packages you use follow semantic versioning, you can lock the version constraint in composer.json to only receive bugfix releases, which would in theory make it safe to run composer update as a cronjob. However, in my experience very few packages do this, and most of them when releasing a new minor version won't backport bugfixes/security fixes to older versions.
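
For illustration, a composer.json fragment with that kind of constraint (the package names are invented): "~1.2.3" allows patch releases from 1.2.3 up to, but not including, 1.3.0.

    {
        "require": {
            "acme/auth-package": "~1.2.3",
            "acme/markdown-converter": "1.4.*"
        }
    }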

Personally, I have an RSS reader set up to notify me of new tags on Github for the more involved packages I use, but I also put a high emphasis on writing a lot of the code myself rather than use packages.


> Something like a Markdown converter seems less likely, though not impossible.

Really? I've found serious XSS bugs in frameworks that are semi-popular for writing real-time applications with a web component. What's outputted from your Markdown converter is generally assumed to be trusted HTML. Additionally, if it's not written in a language with good string support... it has string processing, which could lead to a crash or buffer overflow easily. It seems that a Markdown converter is exactly the sort of place you'd be likely to find an attack vector.


You're right, XSS and client-side injections didn't occur to me when I wrote the comment.


You can run your composer.lock file through here https://security.sensiolabs.org/check to see if there are any vulnerabilities reported against your dependencies.


>If I'm using something like Composer to manage dependencies, should I be running an update every week on every past website I've created?

Absolutely not.

For one, an updated package can bring some subtle or crude incompatible changes when you don't expect it. Even in a minor/bugfix version.

Second, what if the Composer repo itself is what got compromised? Then you are installing backdoors on all your sites automatically.


Don't forget providing a way for security researchers to report XSS and similar bugs.


The premise that the attacks occurred after the announcement was made, and thus can be blamed on the announcement itself, is in error.

The article details how it can be practically impossible to tell if a site has been hacked. There is no reason to believe that your site has not been exploited prior to the announcement.

Whilst the post might have increased the volume of such attacks, I strongly doubt that this exploit was completely unknown prior to announcement.

In other words, if you run a Drupal site that was vulnerable to this attack prior to the announcement, there is a risk that your site was exploited before the announcement.

This is a much more realistic scenario and also a more frightening one.


All the more realistic given that the issue -- or a big fat hint at the issue, anyway -- was sitting in the public issue queue for nearly a year.

https://www.drupal.org/node/2146839


Yes, except that none of the major hosting providers who host on behalf of thousands of clients (Pantheon, Acquia, etc.) were able to find any examples of an attack signature in their logs prior to the disclosure. Just as Shellshock sat around for 25 years until somebody found it, it is highly unlikely that this vulnerability was exploited prior to that, despite the fact that it was "possible".


Apart from the fact that it is very obvious, when looking at the logs, when an attack has been made.

We saw the attacks start hours after the announcement.


That's as big as it can be. We started seeing attacks hours after the initial disclosure and shared some of them here:

http://blog.sucuri.net/2014/10/drupal-sql-injection-attempts...

This is a lot worse than Heartbleed, Poodle and others. Full database / server take over for 700,000+ sites that use Drupal.


"This is a lot worse than Heartbleed, Poodle and others."

There are many security vulnerabilities that have permitted full machine takeovers in an automated fashion for a long time now. Generally speaking though such attacks only work against very small fractions of the Internet, and, no matter how big Drupal may be in absolute terms, in relative terms, it is not that large. Heartbleed was so bad because it was so widespread. So many things use OpenSSL. That you could lose private keys was just icing on the cake; the arbitrary memory read was bad enough on its own, and the difficulty of detecting it also factored into its high rating. So, yes, I'd still rate Heartbleed far, far above this. Or the Ruby YAML attack.

(POODLE was by contrast well-named; annoying, yappy, ultimately not significant enough to warrant its own entry in the Great Security Vulnerability list. Enough to worry about and mitigate, and if it helps bury SSLv3, hey, great, but not a thing for universal panic the way Heartbleed or Shellshock were.)


You have a good point, but I was looking at these two points:

1- Extent of the damage

2- Number of points vulnerable

Heartbleed had (has) a lot more servers vulnerable, but the impact is a lot lower and it is a lot harder to exploit to extract valuable data. In fact, I doubt you will see a compromise or a major issue because of heartbleed (despite the mass drama).

Compared to this problem with Drupal, which is used by many of the top sites online, the overall damage can be a lot bigger.

Time will tell.


I disagree, the type of information that was potentially leaked by services that use openssl is much more critical than the assets you can obtain by hacking servers via the Drupal vulnerability.

My reasoning is that it's obvious (at least I hope it is!) to your system admin that the system has been compromised when he's actively looking for indicators of compromise. This is not the case with Heartbleed. So yes, you can steal keys if you hack the CMS and control the server for a brief while, but this is obvious: the keys are going to be rescinded, the users are going to be alerted, and your access to the server is going to be severed again.

In contrast the consequences of heartbleed may not be completely known even now. What if the private keys of a linux kernel dev were compromised? The attack surface was huge, and the sensitive information covers more than only cryptographic keys. There could have been all kinds of stuff in the memory of the vulnerable servers.


Drupal (in many/most setups) executes code out of its database. These machines could be told to hack internal networks and act as botnets as the result of a single POST. Definitely wormable.


wow. Drupal took a month to turn this around.

     16 Sep 2014 - Notified the Drupal devs via the security contact form
     15 Oct 2014 - Release of bugfix by Drupal core developers [1]
[1] https://www.sektioneins.de/en/advisories/advisory-012014-dru...


"""The Drupal Security Team was informed of this issue in the third week of September of 2014. Given the severity of the issue, we debated about releasing it early. Our main concern was when people would have the time to perform the upgrade. Drupalcon Amsterdam started on September 29th meaning that many of our community members were busy preparing for that event. The week after Drupalcon is typically busy catching up from being at Drupalcon and then October 15th was the first regularly planned security release Wednesday. We felt that it would be better to use the regularly scheduled date which also happened to be the first date when the Drupal community would be likely to have time to focus on the upgrade."""

https://www.drupal.org/node/2357241


This is terrible.

---

We didn't want to disrupt the busy schedule of Drupalcon attendees, who were attending Drupalcon, an event for engaging in discussion of the platform that we, the organisers, know currently has a critical vulnerability.

We also assume attendees are uninterested in critical vulnerabilities while attending Drupalcon.

We assume attendees will be unable to return to their regular roles, due to the excitement / insight / general awesomeness / other affairs unrelated to Drupal, for a full week after attending our event.

Non-attendees implicitly missed out on our fun

We have now issued a fix, which is one line of code altering a database query string. Please note, per our security advisory, that you have almost no way to know whether your site was compromised and whether it remains compromised.

---

It is more than terrible. It is arrogant, negligent and contemptuous.


The date was scheduled to allow people a quick update. It could have been released out of band, but that would have caught many people by surprise.


The surprise of an OOB patch is nothing compared to the surprise of a defaced site.


Is it becoming more common to see these types of attacks directly after patches are released?


"More common" might be a bit tricky, because it's always been a race once patches go out. But modern systems can hit every IP address and quite a ridiculous number of domains extremely easily. There are more targets, so while it might be just as easy to get N% of vulnerable machines, the rewards for a hacker doing that are much higher, so the incentives put you at substantially more risk.

There's also greater danger and incentive for attackers now that more websites are run by primarily non-technical people, as they are less likely to patch immediately.


I'm sure there are also indexers out there who catalogue known Drupal/Wordpress/RoR/etc sites in anticipation of quickly hitting them with an exploit once a new one is released.


This definitely happened. Major hosting providers were seeing attacks against all of their sites, in alphabetical order.


It depends on the complexity of the attack. This Drupal one took our team less than an hour to have a working proof of concept (just based on the diffs).

The exploit is very simple and doesn't require any interaction with the remote site, which explains why attackers started on it very fast.

On other more complex exploits, we generally see days (if not weeks), before the attacks start.


This is just the lowest hanging fruit possible:

1. Widely deployed software.

2. Internet facing on purpose.

3. Trivial to fingerprint.

4. Fairly robust exploit (doesn't crash the target or require version specific offsets, magic numbers, etc.)

And yes, when similar bugs happen that meet these characteristics, mass-scale exploitation will follow quickly for various purposes (spamming, Google ranking scams, DDoS, etc.).


If I were an attacker, I'd fingerprint sites. It should be pretty straightforward to maintain a relatively up-to-date database of web server + version, and app stack + version. Then, when a vuln hits, you know exactly whom to attack...


Looking at the patch that fixed the vulnerability [0], I think it's a pretty safe bet to say that having a hybrid array/key-value store [1] is a generally terrible idea.

[0]: https://www.drupal.org/files/issues/SA-CORE-2014-005-D7.patc...

[1]: http://php.net/manual/en/language.types.array.php
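
To illustrate the point (this is not Drupal's code, just the language behaviour that made the bug possible): PHP parses bracketed request parameters into arrays whose keys are attacker-chosen strings, so code that assumes it is iterating over a plain list can end up building strings from hostile keys.

    <?php
    // The attacker chooses the *keys* of a request array, e.g.
    //   name[0%20UNION%20SELECT%201]=x
    parse_str('name[0%20UNION%20SELECT%201]=x', $input);
    var_export($input);
    // roughly: array('name' => array('0 UNION SELECT 1' => 'x'))
    //
    // Code that treats $input['name'] as a plain 0..n-1 list and uses its
    // keys to build SQL fragments (as the pre-patch placeholder expansion
    // did) now has attacker-controlled text where it expected small integers.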


I don't agree with you; I believe it is wrong to blame the tool for your mistakes. I agree that it may be debatable how much of the screw-ups in the PHP world are actually inherited behaviour from how easy and permissive PHP is to work with (perhaps a truism, though, and nothing to debate about).


This is a really good example of why, if you have something important to your organisation on the Internet, you need to really be doing defence in depth, to either reduce the risk of compromise, and/or increase your chance of noticing when you're attacked and reacting accordingly.

So things like IDS (network and host), perhaps looking at adding a WAF to catch basic attacks, shipping all your logs off your front end servers so they can't be easily destroyed by attackers etc.

This kind of disclosure-to-attack timeline now looks to be the norm, so this will become a theme, and companies are either going to have to spend more on defense, or spend more on incident clean-up and then spend more on defense....


WAFs are a bad idea IMVHO. They can give a false sense of security and the feeling that security is being taken care of. The reality is that they are generally not very good at catching anything other than the most basic of attacks.

I much prefer

- external security monitoring (there are many vendors)

- automated testing in pre-production using skipfish/w3af/whatever

- static code analysis

- penetration testing

- responsible disclosure programmes

- hackdays


I don't see it as an either/or decision TBH. I wouldn't suggest that WAFs are a panacea, but that doesn't mean that they can't be a useful defensive layer.

A lot of companies have difficulties getting app patches applied quickly due to test cycles, so applying a WAF rule to block known issues (this one, for example) can be a fast, low-risk way of reducing the risk.


Virtual patching is the main benefit of WAFs for cases like this.

We were able to issue a virtual patching signature for our clients in less than 2 hrs after the disclosure.

Plus, our generic SQL injection signatures were already blocking this attack even without it.

That gives our clients more time to test and deploy patches without worrying about being compromised in the middle.


You'd be surprised at how many robot attacks don't have a user agent, or similar things you can trigger on.

I agree that they won't help (much) against a direct targeted attack on your site. But for stopping a lot of robotic, automated hacking attempts, they certainly have their place.


Note that the "within hours" is in the past -- if you didn't patch rapidly on 10/15, you should assume you're compromised.


You should assume you're compromised regardless. More than just the researchers could have known about the vulnerability.


Why is this posted now, more than two weeks later? Even back then, within hours, people had publicly published working examples of remote code execution via simple single HTTP POST requests.

For example: https://twitter.com/i0n1c/status/522495098630987777


The Drupal Security Team decided to post a PSA today reinforcing the criticality of this bug. Most of the big sites, including all the sites hosted on popular Drupal PaaS services like Acquia Cloud and Pantheon, were updated or otherwise protected promptly when the original vulnerability was released.

Many other sites were patched within hours.

However, there's a long tail of sites still running Drupal <7.32, and all those sites' owners should assume, at this point, that they've been compromised. As such, this PSA gives instructions for how to avert total disaster.


and by long tail we mean 75% of drupal 7 sites.


As someone who was learning drupal I was wondering this. My test site came up with a friendly notice to "upgrade core". I figured I should since it seemed like a good exercise.

But trying to figure out how was not intuitive. The Drupal web site was mum on the issue (you'd figure they'd post something on the site telling people to upgrade ASAP).

I figured it out eventually. Was disappointed how it was handled.


I recommend also subscribing to the Debian security mailing list[1], even if you're not a Debian user--they are on top of security issues that involve software in their repo (and that's a lot of software) within minutes of the advisories.

In fact, that's how I learned about most of Drupal's core security issues (got a message in my inbox) and was able to patch them up really quickly.

[1] https://lists.debian.org/debian-security-announce/


It's best to follow the security announcement list. Such announcements are also posted to https://drupal.org/security and the release notes (click through from the upgrade warning).


I am doing an in-development project; we patched to 7.32 minutes after the initial SA was sent. We do have some components sitting out on the net (not public), and they were not updated (brought in sync with the development code) for about three hours, so I am going to go ahead and conduct an audit.


I'm feeling like it's a whole new age of security, these days.

The amount of time/resources it takes to keep your apps or sites secure today is so much greater than it was even a few years ago.

Development and maintenance practices that seemed reasonable to people only a few years ago now seem impossible. Delivering an app or site based on Drupal, WordPress, Rails, etc. as a finished product to a client that does not have sufficient in-house IT staff -- you can almost guarantee they're going to run into security trouble. And what is required for 'sufficient in-house IT staff' is way more than we thought a few years ago -- even if not everyone has realized it yet (those who have not will get burned).


>Delivering an app or site based on Drupal, WordPress, Rails, etc. as a finished product to a client that does not have sufficient in-house IT staff -- you can almost guarantee they're going to run into security trouble. And what is required for 'sufficient in-house IT staff' is way more than we thought a few years ago -- even if not everyone has realized it yet (those who have not will get burned).

And in 99% of cases (unless they process credit cards and transactions), it won't matter much, if at all.


Well, it matters to the customer if their WordPress site goes down because it was infected by malware that sends out spam or makes clicks on Google Adwords.

This happened with someone I was working with, to try and rescue their WordPress.

Ironically, the site went down only because the malware -- which I'm guessing was scraping Google or making clicks on Google AdWords or something (I just skimmed the malicious code; it wasn't entirely clear to me what it did) -- had a bug in it that brought down their site. If it had been bug-free, it could have kept using their site for its malicious purposes for years without them ever noticing.


I'm concerned about all the critical websites powered by Drupal, including Whitehouse.gov.


Actually, the White House was hacked two weeks ago. [1] Not much is officially known; some think that the hackers managed to penetrate a lot of their systems (some employees leaked info before the announcement).

[1] http://www.usatoday.com/story/theoval/2014/10/29/white-house...


That's their internal network (LAN), not their web servers.


Surely you are not claiming that there is no possible connection? If/once the Drupal web server was hacked, could the hackers not have served malware to admin users, or reached into other machines?


Anything is possible, but I think it would be unwise to assume they are related.


I presume most of the bigger Drupal sites are run by teams that care about security enough to follow security lists closely and have release management in place that would allow hotfixing within hours, if not minutes, of a critical vulnerability being announced.


Whitehouse.gov is still on Drupal 6 and not affected by this vulnerability.

The petition site is D7. But last I checked they have some pretty heavy hitters on that project. They probably knew about and patched the vulnerability before it was even publicly released.


The following White House announcement seems to say that they have already transitioned the main site to Drupal 7. [1] Do you have contrary evidence or insider knowledge?

[1] http://www.whitehouse.gov/blog/2014/10/24/welcome-white-hous...


Not insider knowledge but at the DC Drupal event in early August the WH devs gave a talk about the 2014 State of the Union live stream, and at that time said that the main site was still D6. Perhaps they deployed the D7 upgrade since then.



I stand corrected!


This is going to be a nightmare for a lot of smaller shops I know who have hundreds of Drupal clients. They must be going crazy right now.

I stopped using Drupal and WordPress about a year ago and am glad I did. Myself and several clients just dodged a MASSIVE bullet!


Just because you have not found vulnerabilities, it does not mean they do not exist. This Drupal vulnerability sat there for a while before a security code audit found it.

What are you using to build sites now? When was the last time the codebase was audited by a security firm? There might be bullets out there for you...


Exactly. Not enough eyes on enough code these days. I've found obvious bugs in almost every framework or CMS I examined so far :)


What are you using in place of those? Wouldn't your clients normally pay for maintenance?

The killer here is:

"Consider obtaining a new server, or otherwise remove all the website’s files and database from the server. (Keep a copy safe for later analysis.)"


I replaced Wordpress with a home-built solution that is drastically simpler. It retains most of the URL compatibility so links wouldn't break, but it has only a tiny fraction of the functionality of Wordpress (most of which we didn't use anyway). It's entirely possible that our solution has vulnerabilities (though we designed it with security in mind, and the code base is much easier to audit due to its simplicity). But at least it's not going to get compromised due to a generic Wordpress exploit.


There would be a lot of demand for a much simpler WP alternative built with security in mind. Would you by any chance be open sourcing the project? More eyes on the source couldn't hurt.


We ourselves built a project with speed and security in mind and are working on open sourcing it in 2015


For my monthly subscribing clients, I use an MS stack (with Azure) with Visual Studio and then have them on the $20/month CloudFlare plan.

Included in their monthly sub is an update service. I maintain a 24-hour turnaround on any changes. This way, I get to control my code, their site doesn't break, and everybody plays nice.

Lately, I've been using PyroCMS or KeystoneJS for a lightweight CMS when necessary. Most of my CMS customers are one-time dev deals. I design, build and then hand it over to them so I'm not responsible for security or updates - which is something I have in the contract they sign.

This way, for the clients I need to maintain control over (in a security sense), I can do so, and I don't have to take chances with WordPress or Drupal. I've been a fan of Drupal, so it's tough to see they got hacked pretty good. Usually it's plugins which get hacked, so getting the core of your framework hacked is a huge deal.


Statically-generated site with client-side customization?


You still have server-level vulnerabilities to contend with. It might be a simpler task to maintain server-layer security and not have to worry as much about application-layer, but security is still something that needs to be dealt with, if you have a server that is powered on and connected to a network.


Host with a private repo on Github Pages.


Why? Didn't they plan for security updates?


Most of the smaller web design shops that create WordPress or Drupal sites for customers either don't do updates or offer it as an optional paid add-on that many customers don't subscribe to.


Wow. This is about as big as it can get - your site has probably been compromised, data could have been stolen, and you will have no idea if you were hit or not.


Yes, Drupal waited too long to send out this public notice; yes, it seems exploits were written very quickly. The thing is, no one can manually exploit nearly every Drupal site out there without the assistance of search engines. It's my opinion that search engines are the primary tool in a mass malicious exploit attempt such as this.

If search engines had restrictions in place for this obvious type of malicious search activity, we'd be much better off and would see a massive reduction in the huge number of infected/exploited sites/apps. Attackers are using them as a database of targets, and this must be stopped.

I feel this is a wake up call to petition search engine providers to implement and restrict these types of queries that are obviously malicious. We can't prevent direct attacks but these robots that farm search engines for mass infection, something can be done.

The other item is that these software vendors need to restrict and minimize anything that puts up a giant flag saying "Look at me, I'm a Drupal site!" E.g. CHANGELOG.txt shouldn't be in the www root, etc. Even some simple htaccess rules can provide a mountain of help.
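
For instance (a sketch of my own, not an official Drupal snippet), a few lines of Apache 2.2-style config can stop the most common fingerprinting probes:

    # deny direct requests for the tell-tale text files Drupal ships with
    <FilesMatch "^(CHANGELOG|INSTALL|UPGRADE|MAINTAINERS)\.txt$">
        Order allow,deny
        Deny from all
    </FilesMatch>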


We did a lot of Drupal work at my last job, and I sent my ex-boss a friendly warning about this when it was first announced, but I don't think he patched all of their Drupal sites. I could be wrong, but I just checked the CHANGELOG.txt on a couple of them, and they're still on 7.27.


Why on earth is CHANGELOG.txt included on a production server and publicly accessible?


I had to try to believe it.


Probably did the one line patch?

There was a one line patch that fixed this particular bug. Probably a lot easier to apply that than to do a full core update to a bunch of sites.
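
If memory serves, the change (in expandArguments() in includes/database/database.inc) was essentially the following, which is why it was so easy to hand-apply:

    <?php
    // before: caller-supplied array keys become part of the placeholder
    // names that get spliced into the SQL string
    foreach ($data as $i => $value) { /* build :name_$i placeholders */ }

    // after: only sequential integer indices are used
    foreach (array_values($data) as $i => $value) { /* ... */ }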


To make things worse, Drupal did not make the new version available through its usual update mechanism immediately after the announcement. `drush up` did not bring up the 7.32 version for many hours, if not a day.


I believe that to be a drush issue. `drush cc drush` should have fixed that.


Why on Earth is the list of available Drupal updates cached?


Beats me. This is a third-party tool. The release XML was updated immediately after release, which is the normal procedure.


Those developers still using Drupal must really not be having fun. Drupal is an EOL'd platform still sticking around from back in the day of monolithic PHP CMS frameworks that were broken at the core (Joomla, PHPNuke, etc.). They make developers' lives worse and lead to very little in the way of solid products. It was good for a time; that time has passed and there are better ways.

Keep this in mind when you pick up new monolithic/take-over frameworks: they bloat and die; microframeworks are the way to go. I am sad for developers lost in Drupal dead zones. Drupal man walking!


Have fun trying to get your microframework to satisfy the requirements of massive governmental organizations (http://australia.gov.au/misc/drupal.js), universities (http://www.harvard.edu/misc/drupal.js), and gigantic media sites (http://www.nbc.com/misc/drupal.js).

I think you haven't a clue as to what drupal's capabilities are if you've got it even in the same ballpark as Joomla and PHPNuke.


The parent's claim was that Drupal is monolithic, bloated, and behind the times -- not that it was lacking in functionality.

If anything, linking to massive governmental organizations and universities (both famous for their bureaucracy and resistance to change) that use Drupal would seem to support his claim.


Drupal 8 aims to cut down on the NIH syndrome by using high-quality Symfony 2 components instead of in house libraries.


I've been waiting to see a Drupal 8 RC for years...


Right, because there are never security vulnerabilities in custom CMS projects developed on frameworks.

Look at it this way--this Drupal vulnerability was caught because one company using Drupal invested in a code audit. Now everyone using Drupal can benefit from that audit, because the vulnerability was disclosed and patched by the community.

Now let's say you develop a CMS-like project on a microframework. Are you going to pay an outside company for a thorough code audit of the project? Is anyone else?


What microframeworks do you suggest?


My 1-project experience with Drupal is that Drupal is a clusterfuck that any experienced developer will quickly become frustrated with (much like its underlying PHP).


So glad I stopped using Drupal to build websites long ago.


Yeah, they said Drupal 7 was vulnerable to SQL injection; however, it looks bigger: http://www.techworm.net/2014/10/drupal-7-vulnerable-sql-inje...


It is a SQL-injection bug. The consequences of such a bug are pretty grave, and include remote code execution.


There is a lesson to be learned here, I believe:

Thoroughly vet any platform (i.e. audit as much code as you can) before you deploy it, even if you want to believe that there are a lot of well-meaning white-hats looking at the product. (Linus's Law, etc.)

The SQL Injection vulnerability here was a little bit more clever than the ones you'd normally see. ($query = "SELECT * FROM table WHERE id = $tainted"; mysql_query($query);) All the more reason to take a closer look at the code.
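
A rough sketch of the subtler pattern (simplified, not Drupal's actual code): the query itself uses placeholders and looks safe, but the step that expands an array placeholder builds new placeholder names from the array's keys and splices them back into the SQL text.

    <?php
    $query = "SELECT * FROM users WHERE name IN (:name)";
    $data  = array("0); DROP TABLE users;--" => "admin");  // hostile key

    $placeholders = array();
    foreach ($data as $i => $value) {
        $placeholders[] = ':name_' . $i;    // the key leaks into the SQL
    }
    $query = str_replace(':name', implode(', ', $placeholders), $query);

    echo $query;
    // SELECT * FROM users WHERE name IN (:name_0); DROP TABLE users;--)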

And if no one else will, I'm certainly up to the task. But I make no promises I will publish my findings.



