Rebuilding Our Infrastructure with Docker, ECS, and Terraform (segment.com)
157 points by mafro on Oct 9, 2015 | 20 comments



I just set up something similar for one of our API services as a trial to see how it would work out. My biggest gripe with ECS is that it currently does not support dynamic port mapping between ELB and a container instance. For example, if your service listens on port 8080, and ELB terminates port 443 and forwards to a container, then you can only have one container instance per EC2 cluster node, because spinning up two containers on the same node results in a port resource conflict.

This makes the whole solution fairly similar to just using auto-scaling to spin up a new EC2 instance per app.

I know this is a major gripe right now with ECS, and hopefully Amazon is working on a solution.
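To make the gripe concrete, here's a rough sketch of the kind of static mapping that causes the conflict, using boto3 (the family and image names are made up):

    # Sketch only: with hostPort pinned to 8080, ECS cannot place a
    # second copy of this task on the same container instance.
    import boto3

    ecs = boto3.client("ecs")
    ecs.register_task_definition(
        family="api",
        containerDefinitions=[{
            "name": "api",
            "image": "example/api:latest",
            "memory": 256,
            "portMappings": [{"containerPort": 8080, "hostPort": 8080}],
        }],
    )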

Other than a few other frustrating gotchas, ECS has been pretty nice. I do like the idea of putting central config in S3 and copying config files over to cluster instances with a user data script (which is how they recommend you pull from a private Docker repo):

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/...
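As a rough sketch of that pull step (the bucket name here is hypothetical, and the instance role is assumed to have read access to it):

    # Download the ECS agent config, including private registry auth,
    # from a central S3 bucket at instance boot.
    import boto3

    s3 = boto3.client("s3")
    s3.download_file("acme-ecs-config", "ecs.config", "/etc/ecs/ecs.config")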


You can get around the 1:1 ratio of Docker container to host by introducing service discovery. We are using ECS + Docker with Consul as our service discovery solution. We have an edge gateway using Zuul, with our routing filters hitting Consul to get a service endpoint and port, then forwarding the request to that service. It has been nice so far and allows us to run multiple containers (same service or different services) on a host.
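For anyone curious, the Consul lookup itself is just an HTTP call to the local agent; a minimal sketch (the service name "billing" is hypothetical):

    # Ask the local Consul agent for healthy instances of a service.
    import requests

    resp = requests.get(
        "http://127.0.0.1:8500/v1/health/service/billing",
        params={"passing": "true"},
    )
    instances = [
        (e["Service"]["Address"] or e["Node"]["Address"], e["Service"]["Port"])
        for e in resp.json()
    ]
    print(instances)  # e.g. [("10.0.1.12", 32768), ("10.0.1.13", 32771)]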


An option here is to run a local proxy on the node that knows how to forward requests to various container ports on the same node.
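A toy sketch of the idea, assuming the proxy keeps a table mapping path prefixes to the dynamically assigned container ports (the routes here are made up; a real setup would use something like nginx or HAProxy):

    # Minimal node-local reverse proxy: forward by path prefix to a
    # container port on the same host.
    import http.client
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ROUTES = {"/auth": 32768, "/api": 32769}  # hypothetical dynamic ports

    class Proxy(BaseHTTPRequestHandler):
        def do_GET(self):
            prefix = "/" + self.path.lstrip("/").split("/", 1)[0]
            port = ROUTES.get(prefix)
            if port is None:
                self.send_error(404)
                return
            upstream = http.client.HTTPConnection("127.0.0.1", port)
            upstream.request("GET", self.path)
            resp = upstream.getresponse()
            body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("0.0.0.0", 8080), Proxy).serve_forever()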


Self promotion, but I recently published an intro to ECS after doing an extensive evaluation:

http://www.infoq.com/articles/intro-aws-ecs

ECS is solid. Happy to answer any questions on it.


Great article. It focuses on the details of ECS, as well as the high level concepts of orchestration.

As someone relatively new to Docker, the latter was extremely helpful.


I can see how having separate AWS accounts for dev/stage/prod makes things easier, such as not having to think about name conflicts on ECS clusters, SQS queues, roles, and whatnot. We use a single AWS account with multiple IAM users and name our resources like Uploads-Dev, Uploads-Stage, Uploads-Prod, and so on, but I feel that it clutters the AWS console and makes it difficult to keep IAM users and roles (as well as their access policies) simple.

Maybe I have missed some developments in the AWS world, but in the screenshot of the AWS console it says you are logged in as ops-admin @ segment. So you have one AWS account with the alias set to segment, right?

Do you use an identity provider to be able to log in as calvin and is ops-account a user in your AWS account named segment? And is ops-account in the segment AWS account then allowed to assume roles in your three other AWS accounts for dev/stage/prod?

If you switch to ops-admin @ stage for example, and then go to the S3 page in the console, will you get a listing of all buckets in the stage AWS account even though your login session as calvin "belongs" to another AWS account (segment)?

Sorry if my questions are confusing. I am very interested in knowing what the relationships among the identity provider, AWS accounts, and roles look like.


They are leveraging the Cross-Account access feature of the AWS Management Console, which was released earlier this year: https://aws.amazon.com/blogs/aws/new-cross-account-access-in...

> If you switch to ops-admin @ stage for example, and then go to the S3 page in the console, will you get a listing of all buckets in the stage AWS account even though your login session as calvin "belongs" to another AWS account (segment)?

It should work like that, yes. Your privileges will be "scoped" to the role you are assuming in the "stage" account.
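The console role switch maps directly onto sts:AssumeRole; a minimal boto3 sketch, with a made-up account ID and role name:

    # Assume a role in the stage account; the returned credentials are
    # scoped to that role, so S3 calls list the stage account's buckets.
    import boto3

    creds = boto3.client("sts").assume_role(
        RoleArn="arn:aws:iam::123456789012:role/ops-admin",
        RoleSessionName="calvin",
    )["Credentials"]

    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print([b["Name"] for b in s3.list_buckets()["Buckets"]])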

> I can see how having separate AWS accounts for dev/stage/prod makes things easier.

It's also a good way of maintaining agility when you have multiple teams working in parallel on different projects (e.g., one team = one AWS account). Zalando has recently published a bunch of tools leveraging IAM to help manage multiple AWS accounts (e.g., federated SSH access): https://stups.io/


I missed the announcement of cross-account access. This seems great, thanks!

STUPS looks interesting. What scares me a bit about these suites that provide many abstractions on top of AWS is how they work in mixed environments where some resources have been set up and are managed out-of-band. I understand that the purpose of STUPS for example is to provide a higher-level interface to AWS, and that having many AWS accounts avoids these mixed environments.

Perhaps it's just me suffering from analysis paralysis. I kind of want there to be one or two leading suites of AWS PaaS tools to choose from, whereas the market today seems fragmented with new tools popping up all the time. For the moment I'm betting on HashiCorp. :)


Consider using Resource Groups in the AWS Console (when you have to use it!) to limit which environment's resources are displayed.


Using DNS for service discovery is very convenient but it also makes it difficult to change your setup quickly in the face of ops emergencies, since it's so aggressively cached.

We're in the process of transitioning from our datacenter to AWS and we landed on much the same set of technologies, minus Docker.


If done right it can really be awesome. Set up the stack using HashiCorp's Consul (consul.io), and there's the option to change the TTLs and cache limits. If you use the DNS interface, updates are relatively painless and nodes are almost instantly available when provisioned.
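To illustrate, a quick sketch of querying Consul's DNS interface with dnspython (the "web" service name and the agent's default DNS port 8600 are assumptions):

    # SRV lookup against the local Consul agent; prints each instance's
    # address, port, and the TTL Consul attached to the answer.
    import dns.resolver  # pip install dnspython

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["127.0.0.1"]
    resolver.port = 8600

    answer = resolver.resolve("web.service.consul", "SRV")
    for record in answer:
        print(record.target, record.port, "TTL:", answer.rrset.ttl)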


We ended up using Consul's K/V store for discovery.
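For reference, a small sketch of K/V-based discovery against a local agent; the key layout under services/ is our own convention, not something Consul mandates:

    # Register an endpoint, then read it back (Consul base64-encodes values).
    import base64
    import requests

    requests.put("http://127.0.0.1:8500/v1/kv/services/web",
                 data="10.0.1.12:32768")

    entry = requests.get("http://127.0.0.1:8500/v1/kv/services/web").json()[0]
    print(base64.b64decode(entry["Value"]).decode())  # "10.0.1.12:32768"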



Good for you for setting up separate AWS accounts. Companies that do this avoid many problems! Companies that instead assume their code will enforce the separation on its own often go "oooooops"!


No matter what URL I try to visit at https://segment.com I get an SSL encryption error.

I'm on Android 5.0.1 in Chrome.


It might be an issue with the certificate chain not being installed in the correct order on their servers. I've had this issue in the past. Mobile browsers sometimes do not "fill in the gaps" for missing or out-of-order certificates, but desktop browsers are much better at that. Not sure why.
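One way to reproduce this from a desktop is Python's ssl module, which (like those mobile browsers) does not fetch missing intermediates, so the handshake fails if the server's chain is incomplete; a quick sketch:

    # Strict verification against the system CA bundle; raises an SSL
    # verification error if the presented chain doesn't verify.
    import socket
    import ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("segment.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="segment.com") as tls:
            print(tls.version(), tls.getpeercert()["subject"])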


Just checked it here:

https://www.digicert.com/help/

and the chain appears to be correct.


I see you use different machines for NGINX and auth. Do you plan to combine all of them under one API gateway like Kong?


Is it possible to use ECS with something like Dogestry, rather than ECR or Docker Hub?


I think it died.



