
Short answer: no real security issues. AWS itself runs everything from credit card processors to hospital software, and they have great documentation on how to design your system to comply with the major certifications like HIPAA, PCI-DSS, and FedRAMP: http://aws.amazon.com/compliance/ http://aws.amazon.com/compliance/fedramp-faqs/
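
For a concrete example, one building block that shows up all over those compliance guides is encrypting data at rest by default. Here's a minimal sketch of what that looks like with boto3 (the AWS SDK for Python); this is illustrative only, and the bucket name is a placeholder, not something from our actual system:

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket name, for illustration only.
    BUCKET = "example-claims-data"

    # Turn on default server-side encryption, so every object written
    # to the bucket is encrypted with a KMS-managed key.
    s3.put_bucket_encryption(
        Bucket=BUCKET,
        ServerSideEncryptionConfiguration={
            "Rules": [
                {
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "aws:kms"
                    }
                }
            ]
        },
    )

The point is that a control like "all data encrypted at rest" becomes a few lines of configuration you can audit, rather than a manual checklist.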

Our security audit was apparently the first one in years to pass with no "high findings" (high-priority security issues that must be fixed before the system is approved to go live).

The main barrier we ran into was that although AWS was already FedRAMP-certified, it had not gone through CMS's own internal security review process, which requires hundreds of pages of additional documentation, an audit of the code and team, and penetration testing of the live system. At the end of that process, you get an Authority to Operate, or ATO, and that's what every site in the government legally must have in order to launch.

You can get a sense of how much paperwork is involved here: http://www.cms.gov/Research-Statistics-Data-and-Systems/CMS-...

Navigating this ATO process was the single largest roadblock in developing the various parts of healthcare.gov 2.0. It meant that what we expected to take two months ended up taking more like eight.

The good news is that once we had gotten an ATO for AWS [1], any project within CMS, not just healthcare.gov, could start using AWS. [2] And within weeks of the ATO, we started hearing about other groups building their new projects on AWS, including Accenture, which is now the primary healthcare.gov contractor.

And that also means that if a startup wants to work with CMS, rather than the typical set of DC contractors, it can now use AWS too and has one fewer barrier keeping it out of the system.

[1] As a small technical point, we split the ATO into "infrastructure" and "application" portions so that the infrastructure (AWS) portion could be reused.

[2] If you're wondering why it matters that the government can use AWS, it's because most data centers that cater to the government are really terrible. In the initial data center used for healthcare.gov, the way you provisioned a new server was to send a Word document to a sales representative listing the Unix packages you wanted installed; a few weeks later they would hand you a virtual machine that might or might not have what you asked for. You can't be agile at all in that kind of environment, and you certainly can't do DevOps well, because you can't script anything.
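
For contrast, here's roughly what that same provisioning step looks like when the environment is scriptable. This is a hypothetical sketch using boto3, not our actual tooling; the AMI ID, instance type, and package list are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # The packages you'd otherwise list in a Word document become a
    # first-boot script that runs on the new machine.
    user_data = """#!/bin/bash
    yum install -y httpd git
    """

    # One API call replaces weeks of back-and-forth with a sales rep.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        UserData=user_data,
    )
    print(response["Instances"][0]["InstanceId"])

Because it's a script, it's repeatable, reviewable, and versionable, which is the whole foundation of doing DevOps well.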



I'm seconding everything said here.

I recently finished a project where it took over a year, from kickoff to handoff, to deploy two blade servers.

Government IT is often extraordinarily hamstrung in the types of solutions it can deploy, due to mandatory requirements to use certain security software or to follow specific policies. [1] While those policies come with a framework for adapting them to the needs of an environment, most organizations don't understand that framework, so you're often stuck with a computing environment that any DevOps engineer would consider practically broken. There's an enormous number of manual steps and needless shuffling between systems to accomplish even the most rudimentary tasks.

This enormous inefficiency is accepted as inevitable. Because simple tasks take so long, and making them more efficient would require enormous effort (such as getting exceptions to policies approved), the issue is often solved by adding more people. Contractors make their money through staffing, so if you can show that you can get more done with more people, that can be an easy sell. Costs, however, go up. This is where your overruns come from.

(Regarding ATOs, that sounds like the "Type" vs. "Site" accreditations I'm familiar with.)



