A simple loop over your certs will solve so many problems:
cd ssl/certs; for pem in *.pem; do ssl-cert-check -a -x 15 -e admin@yourdomain.com -q -c "$pem"; done
[imagine a grumpy "silly companies with millions of dollars in funding, no actual processes, and too little systems knowledge" rant here. It's like a racecar driver whose team lets him run out of petrol every two laps. This is basic stuff, kiddies.]
I recommend doing the test against the live SSL server. Accidents happen, and sometimes webservers get configured with a different SSL cert than the one you have in /etc/ssl/certs/. Heck, I've seen orgs with a whole slew of certs created in their issuer's domain-management page, none of which were ever applied to the server. Nobody would know the certs loaded in the webserver were bad until someone hit a page error.
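For a quick live check, openssl can pull the certificate straight off the running server; a minimal sketch (www.example.com is a placeholder for your own host):

# Ask the live webserver for the cert it actually serves and print its subject and expiry
echo | openssl s_client -servername www.example.com -connect www.example.com:443 2>/dev/null | openssl x509 -noout -subject -enddate

If the enddate here doesn't match what's in your cert directory, you've found exactly the kind of mismatch described above.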
Also, the Nagios plugin check_http will check the time to expiry of HTTPS certificates and warn you when you have 30 days left. http://nagiosplugins.org/man/check_http
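For example (assuming the plugin lives in the usual Nagios plugins directory; adjust the path and hostname for your setup):

# Raise a warning if the certificate on www.example.com expires within 30 days
/usr/lib/nagios/plugins/check_http -H www.example.com -C 30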
I would recommend checking all certificates on every host locally too, since that also catches non-HTTP certificates, like the ones used for IPsec and OpenVPN. I found a good plugin on GitHub that does just this; if you put your certificates in the proper directory, you can have them all checked and monitored by Nagios. I use the Nagios check_http to verify that a page is actually serving content via SSL.
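If you'd rather not depend on a plugin for the local side, stock openssl can do the expiry check itself; a minimal sketch, assuming the certs are collected under /etc/ssl/certs:

for pem in /etc/ssl/certs/*.pem; do
  # -checkend exits non-zero if the cert expires within the given number of seconds (30 days here)
  openssl x509 -checkend 2592000 -noout -in "$pem" >/dev/null || echo "$pem expires within 30 days"
done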
I wish it were only an issue that occurs in such cases. Unfortunately, this kind of stupidity happened TWICE in a row at a company I worked for previously.
The developers told the manager literally MONTHS ago, but it was just not considered important enough. (Eh, if I click on "Approve" it works, so what's the big deal?!)
It's an important reminder that most startups are run by a handful of people who are normal, fallible human beings. They generally don't have access to the infrastructure and manpower required to be on the ball and maintain near 100% uptime of their services. I think the important information to be gleaned about a company is how they deal with problems like this (not necessarily the fact that they encountered the problem in the first place - if that is of concern, pick a more established organisation).
Another issue, which probably doesn't apply to Parse, is limited funding. One of my own venture's certificates has now expired, and as a 'bootstrapped' web app I simply can't justify spending more to renew it. However, I keep the service live as a demo to potential customers; if one takes a very keen interest, I will happily renew it or seek further funding.
This is why I think it's always good to build your apps the 'good old way' instead of relying on a third party like Parse. The third party becomes a bottleneck in situations like these, where you are kind of stuck: either you rewrite your code from scratch, or you wait for them to fix it, with a lot of uncertainty.
Is there any value in companies like Parse selling their software as installable on your own servers (e.g., GitHub Enterprise)?
I ask, rhetorically, because I built a somewhat competing service that isn't doing too well and I'm looking to make it so you can install it on your own servers.
Your best bet would be to follow the OpenX model: release a community-friendly version of your software that anyone can install on their own servers (it would be good if it were open source, though not necessarily free), and at the same time target the enterprise crowd with a hefty premium plan by offering a hosted service with some extra features and service uptime guarantees. I see no reason why you wouldn't strike gold this way.
Yes, I think that is a useful business. If you could make a system that was scalable and provided an API like Parse's, that sounds like something interesting. Have virtual machine images people can host. The hard part would be figuring out how to simplify scaling it out; that is one really nice thing about hosted solutions: I can forget the IT. I'd be happy to work in a middle ground where I traded the IT effort for additional control/changes/features.
Well, as this issue with Parse shows, you can forget about the IT until you can't. If your app is down due to a mistake like this, it's even more frustrating because you have so little control over the situation.
We have a product that relies heavily on Parse; the model is a white-labeled solution. We currently have a potential client that wants to host everything, and we are kind of stuck with Parse. I could simply charge more if there were a customer-hosted solution.
Hint: the web is one thing, but if you're deploying a library that talks to your own server, consider embedding your own root certificate. In addition to verifying the traffic isn't being proxied [1], you can issue 10- or 20-year certs and be done.
[1] Users can add their own root certs and watch your traffic (typically with the intent of breaking your app), or worse, companies can deploy roots to all their devices and sniff their employees' traffic. Why be complicit?
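Generating such a private root is cheap with openssl; a minimal sketch (the filenames, subject, and 20-year lifetime are illustrative, not a recommendation):

# Create a self-signed 4096-bit root valid for roughly 20 years (7300 days)
openssl req -x509 -newkey rsa:4096 -sha256 -days 7300 -nodes \
  -keyout myapp-root.key -out myapp-root.crt -subj "/CN=MyApp Private Root"

The app then ships myapp-root.crt and rejects any chain that doesn't terminate in it, which is what defeats the proxying described above.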
If you have your own cert, wouldn't it be best to ignore expiration (and decide based on risk whether or not to implement revocation)? Unless you are saying to add the cert to the device/computer's list of trusted certs, which doesn't seem necessary to me.
All of Google's pinned certs are certainly evidence that this is a reasonable approach (I haven't looked at their expirations), though now it's kind of weird, since the traffic can't be MitM'd even to debug: no one knows what Chrome is shipping off to Google anymore without reverse-engineering binaries.
Issuing inordinately long-lived certificates might not be the best security decision. To put it another way, "infinity" is not the appropriate lifetime for a 4096-bit cert.
For a big org, SSL certs can sometimes be a bitch to manage. I recently started a project to track all the SSL-enabled devices on our medium-sized network, and the count came out somewhere in the low four digits. As you might assume, not all of them are CA-signed, and many of them expire without anyone ever realizing it.
I wrote a couple of scripts to manage bulk-checking SSL certs on a network. One of them uses curl's Mozilla root CA .pem file and follows the chain to verify a cert is really signed and not expired. https://github.com/psypete/public-bin/tree/public-bin/src/ne...
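The chain-and-expiry check can be approximated with stock tools; a minimal sketch, assuming the cert under test is saved as cert.pem (curl publishes its Mozilla CA extract at the URL below):

# Fetch curl's Mozilla root CA bundle, then verify the chain; expired certs fail with "certificate has expired"
curl -sO https://curl.se/ca/cacert.pem
openssl verify -CAfile cacert.pem cert.pem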
Well, it's a 1-year wildcard certificate from DigiCert, which costs $595 according to [1] - while their 3-year certificate costs $1425.
That's quite a bit of cash when you're a small business - and your minimum viable product doesn't need a 5-year cert; you just have to remember to renew your cert if you're still in business after a year...
Because they are new, have never run into this problem before, and are bootstrapped to the hilt. At least you know they'll never let it happen again. ;-)
I get SSL certificates from Servertastic. They are a RapidSSL reseller, who in turn sell rebranded GeoTrust certificates. The result is that you get "the same thing" (by the end of the payment process you are filling out forms served by GeoTrust's servers) for $13.95/yr.
I've only had a couple of situations where I needed enough subdomains to warrant a wildcard cert. RapidSSL through Namecheap is $10/yr, and most wildcard certs are at least $100-ish; I usually only need 5-6 subdomains, and it's cheaper to just buy the single certs at $10 each.
I started using Parse for my app, and as I code I am starting to realize how painful it will be to move away from it in the future: it would mean a complete rewrite of the app. Now, if they have problems like these (down due to certificate expiry), I am not sure what value I will get out of this service. Also, it is taking me more time than I expected to learn their SDK, as I am beginning to write complex queries and relations. I encourage people to write data services on their own.
Indeed... we just finished building our app on Parse, but this and last week make me think hard about whether it was a good decision to be dependent on their infrastructure.
In my line of work, I see this a lot (both expired certs and certs used on a subdomain they aren't valid for). It gets really difficult trying to balance security with the needs of the business when employees are begging me to whitelist a site so it isn't blocked by the corporate proxy, but I really don't want to whitelist invalid certificates. That sets a bad precedent.
There really should be another way that doesn't involve SSL certificates.