
I’ve worked with a few developers that have been adamant about developing inside of containers. What I’ve noticed:

Terrible performance. One of my engineers recently thought that 150ms was terrific for an HTTP request. Break out of the container and it was <10ms. YMMV.

Fragile everything: Because one expects a “pristine” environment, often any slight change causes the entire stack to fall apart. This doesn’t happen at the start, but creeps in over time, until you can’t even update base images. I’ve seen it a lot. It ends up only adding an additional layer of complication.

Etc.

There are definitely reasons to do this... But when a pedantic developer that needs everything to be “just right” does it, it often becomes a disaster, leading to shortcuts and a lack of adaptability.

There’s also the developer that has no idea WTF is going on. They use a standard Rails/PHP/NodeJS/etc container and don’t understand how it works. Sometimes, they don’t even know that their system can run their stack natively. I’ve been on teams that have said “Let’s just use Docker because X doesn’t know how to install Y.”

Docker is fantastic for many things, but let’s stop throwing it at everything.


Maybe you're talking about non-native containers (i.e. not Linux), but there's no technical merit to the idea that a container by itself could introduce a 15x latency increase on a Linux host for something like a web request, unless something like network namespaces, tc, etc. was being used very improperly.

You also point to a lot of problems that are container-independent and lay them at the feet of docker, which is unfair.

Upgrading the OS is always hard unless you have some awesome, declarative config and you managed to depend on zero of the features that have changed. It doesn't matter if you're in a container or not, switching from iptables in Centos 7 to nftables in Centos 8 is going to introduce some pain.
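
To make the pain concrete, here's the same rule in both worlds; a hedged sketch, assuming a fresh table and chain (names are illustrative):

  # CentOS 7 era (iptables): allow inbound HTTPS
  iptables -A INPUT -p tcp --dport 443 -j ACCEPT

  # CentOS 8 era (nftables): same intent, new syntax
  nft add table inet filter
  nft add chain inet filter input '{ type filter hook input priority 0 ; }'
  nft add rule inet filter input tcp dport 443 accept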

And somehow we get mad at people for not knowing how to install things, but the complexity of installing them is itself a problem. More steps means more inconsistency, which means it's more likely that "it works on my machine, but breaks on yours."


They were probably running docker on Mac


Yes. They were. :)


Docker for macOS is a joke. You need to start using docker-machine again with rsync for file sync (e.g. via Vagrant), or you can try docker-sync: https://docker-sync.readthedocs.io/
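
Short of that, the usual mitigation is relaxing mount consistency on the bind mount; a minimal sketch (image and command are just for illustration):

  # 'delegated' lets the container lag the host's view of the mount,
  # which noticeably speeds up osxfs file access
  docker run --rm -v "$(pwd)":/app:delegated -w /app node:14 npm test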


Why run locally? Isn't it better to have a cloud build service?


Why build in the cloud if you can do it locally? ;-)


> often any slight change causes the entire stack to fall apart

Yes, but this is true generally; it's not specific to containers. Any dev environment naturally tends toward disorder, with unsynchronized versions, implicit dependencies, platform-specific quirks, etc. It takes effort to keep chaos at bay.

At least with containers you have a chance of fully capturing the complete list of dev dependencies and installed software. I'm interested in how Codespaces/Coder.com solve these issues.
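
A minimal sketch of what that capturing can look like with plain Docker, assuming an apt-based toolchain (image name and package list are made up):

  # build a dev image straight from stdin; no build context needed
  docker build -t devbox - <<'EOF'
  FROM ubuntu:20.04
  RUN apt-get update && apt-get install -y --no-install-recommends \
      build-essential git python3 python3-pip
  EOF

  # work inside it, with the source mounted from the host
  docker run --rm -it -v "$(pwd)":/src -w /src devbox bash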


Counter-argument(?): I've seen a product where production ran in Docker, but development was a mix of Mac, Windows, and every popular Linux distribution, on laptops, on-prem servers, and in the cloud, per each dev's preference. Components could be run separately. The product could run anywhere.

Then standardization crept into development. Two years later, it was essentially impossible to run it outside Docker images built by Bamboo and deployed by Jenkins into on-prem OpenStack. Components were tightly coupled (the database wasn't configurable anymore, the filesystem had to look a certain way, etc.), and it required very specific library versions, which have largely never been updated since and by now cannot easily be updated at all. No individual team had an overview of everything inside the container anymore (we ended up with 3 Redis, 1 Mongo, and 1 Postgres in that container; the project to split it apart again was cancelled after a while). Production and development were the same container images, but in completely different environments.

If you want code paths to work, you need to exercise them regularly through tests. Likewise, if you want a flexible codebase, you need to use that flexibility constantly. Control what goes into production, but be flexible during development.


The same mistake can be made outside of containers, though. Any software needs to be maintained and its dependencies kept up to date. Containers might give the feeling that this is no longer necessary, since they let you spin up an environment in one command, but in the end those dependencies are still there.

My experience is the opposite. I once started a job with totally outdated software that couldn't run anywhere but the old server it was on and hadn't been touched since 2008. In the end we were able to bring everything back up to date and create containers that:

- are easy to update

- let devs work on their favourite OS (Windows, Linux, or macOS)

- don't require someone to regularly help devs fix their dev environment


So much this.


Sure, keeping a stack clean is always difficult. But I think the OP's point was that programming in a container encourages a more fragile setup.

On a native setup, you get a feel for the fact that X config file might be in different places, or that Y lib is more robust and more widely available than lib Z. You end up with a more robust application because you have been "testing" it on a wide range of systems from day one.


> But I think the OP's point was that programming in a container encourages a more fragile setup.

I don't see how that point can be argued at all, particularly if the project is expected to be deployed with Docker.


I just argued that point.

When developing inside docker, you are fooled into thinking that various things about your environment are constants. When it comes time to update your base image, all these constants change, and your application breaks.
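
To make that concrete: many teams pin the base image by digest, which freezes those "constants" until someone bumps the pin, at which point they all shift at once (a sketch; the digest placeholder is hypothetical):

  docker pull ubuntu:20.04
  # print the digest to pin in the Dockerfile
  docker image inspect --format '{{index .RepoDigests 0}}' ubuntu:20.04
  # then: FROM ubuntu:20.04@sha256:<digest>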


> When developing inside docker, you are fooled into thinking that various things about your environment are constants.

No, you really aren't. You're just using a self-contained environment. That's it. If somehow you fool yourself into assuming your ad-hoc changes you made to your dev environment will be present in your prod environment although you did zero to ensure they exist then the problem lies with you and your broken deployment process, not the tools you chose to adopt.

A bad workman blames his tools. Always.


Is that a problem in practice though?

Updating libraries or the base image that one's code depends on always has the risk of breaking from API changes or regressions, and in a container, at least it's easy to reproduce the issue.


> Yes, but this is true generally; it's not specific to containers.

Always using containers makes it harder for you to tell when you're making your setup brittle. If your environment is always exactly the same, how will you notice when you introduce dependencies on particular quirks of that environment? If your developers use different operating systems, different compilers, etc., you have a better shot at noticing undesirable coupling between the system and its environment.


But why do you care? This seems a bit backwards to me. Using a container with the same image as everyone else lets you all share the same runtime environment, while each dev works on whatever host setup they want.

If you run on Linux right now but think you might one day switch to running natively on Windows Server... OK, sure, but who's in that position?


The most obvious and critical reason is because of security. You don't want your app to be stuck on Ubuntu 12.04 forever, but that's exactly what can happen. If you're not incrementally updating and fixing your stuff, you end up facing 5+ years of accumulated problems, at which point many people will take door #2: keep using the broken environment until somebody forces you not to; or door #3: start from scratch.
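
Incremental updating can be as cheap as a scheduled rebuild that refuses to reuse stale layers (image name is hypothetical):

  # --pull re-fetches the base image; --no-cache forces package
  # updates baked into RUN layers to actually re-run
  docker build --pull --no-cache -t myapp .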

The upgrade treadmill is exactly that, a treadmill: it's exercise. The alternative, not exercising, is poor health and an early death.


But then there's the guy who only eats bacon, smokes 2 packs a day, never exercises, and lives to see 105.


Opposite side of this: back in the day there was this company called Silicon Graphics (SGI). They had this API called GL; it's what you know as OpenGL.

Software was written for their workstations that ran their UNIX OS IRIX. This is where Maya and many other awesome programs were built. Maya now runs on Windows, Linux, macOS, etc.

Cross platform code is fantastic.


Is anybody writing user programs that might otherwise be cross-platform and shipping them as an image with a docker dependency?


Yes, our main application is mostly used as a cloud service deployed in Kubernetes, but also has deployments running natively on Windows.


You're arguing "containers give you a chance of keeping everything pristine", but the claim was "you end up with a more robust system if you don't think 'everything should be pristine' should be a precondition."

I'm not sure I agree with the original poster though. I both dislike doing dev inside a container and dislike complicated manual dev environment setups. Containers for deps like dbs are more reasonable. This is faster perf-wise, more friendly for fancy tooling/debuggers and such, and it introduces just enough heterogeneity that you may catch weird quirks that could bite you on update in the future.

But you should be able to spin up/down new deploys easily, without having to do manual provisioning and such, which means the env on your servers should be container-like, even if it's not directly a container. Pristine and freshly initialized. And then if you regularly upgrade the dependency versions, from the Linux version to third-party lib versions to runtime versions, you will still avoid the brittleness.


>>Terrible performance. One of my engineers recently thought that 150ms was terrific for a HTTP request. Break out of the container and it was <10ms. YMMV.

Try Linux, and all those lags, spikes, and inconsistencies are magically gone.


Eh. Worked for a big CDN. Using containers would have decimated profits. There IS overhead. At scale, you feel it.

But for dev? Yeah. Linux on Linux is less impactful if you’re building something simple like a blog.


> Terrible performance. One of my engineers recently thought that 150ms was terrific for a HTTP request. Break out of the container and it was <10ms. YMMV.

If you're not using Linux (presumably you're using macOS), your "containers" are actually VMs, so it's unsurprising that the performance suffers somewhat (not to mention that file accesses are especially slow with the Docker-on-Mac setup). The performance impact of being inside a container on Linux is as close to zero as you can get.


Both 10ms and 150ms are bad results for trivial HTTP requests to localhost.
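
Easy to check, too; e.g. (port is hypothetical):

  # one-line latency measurement for a local endpoint
  curl -o /dev/null -s -w 'total: %{time_total}s\n' http://localhost:3000/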


> often any slight change causes the entire stack to fall apart

Arguably this is one reason why containers are popular in the first place. Devs don't want to spend time dealing with dirty environments.


What’s a dirty environment? One that a dev doesn’t clean up? Makes a mess like a filthy hoarder?


Devs want to spend even less time with fragile environments that break from the slightest change.


Thankfully containers are the exact opposite then.


Agreed... there are ways to have a "pristine" environment on the host machine, anyway. Our team uses Ansible.
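
For a taste, even an ad-hoc run covers a lot of ground; a hedged sketch (module arguments are illustrative):

  # install a dependency on the local dev machine via Ansible
  ansible localhost -m apt -a "name=postgresql state=present" --become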


I prefer chef locally, but that totally works!


> Terrible performance. One of my engineers recently thought that 150ms was terrific for a HTTP request. Break out of the container and it was <10ms. YMMV.

It's not that bad for everyone.

For example, on my Windows dev box, I have HTTP endpoints in volume-mounted Flask and Phoenix applications that respond in microseconds (i.e. less than 1 millisecond). This is on 6-year-old hardware, and the source code isn't even mounted from an SSD (although Docker Desktop is installed on an SSD).

On Linux, I have not noticed any runtime differences in speed, except that starting a container with Docker takes quite a bit longer than starting the same process without Docker. Apparently there's a regression: https://github.com/moby/moby/issues/38077
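
The startup overhead is easy to see for yourself; a rough sketch:

  # container start/teardown cost vs. a bare no-op process
  time docker run --rm alpine true
  time true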


This feels very much like a YMMV situation. My own thinking is mostly the same as yours.

But for the OP it may be perfect. On the blog he indicates that he's a CS professor. I could imagine that in a research environment maybe he gets better mileage out of this than someone coding in a for-money work environment.


Dan Lemire is a prof, but also a very hands-dirty type. He measures stuff down to the CPU IPC level. Check out other stuff on his blog; he's not an ivory tower type [0].

[0] Not that there's anything wrong with that at all, it's just a different kind of person with different strengths, but DL is not one.


Oh, I thought the name looked familiar. I've used his JavaFastPFOR library; very performant and a pleasure to work with.


Ah, that makes sense. It sounds like he balances being academic with trying to work through things in a real-world way.


Docker is fantastic at precisely this use case: capturing tool and build dependencies in a reproducible way. I am not sure what performance issues you are complaining about. We run high-speed trading services with single-digit microsecond latency on Docker just fine.


Working with a 150ms handicap is not necessarily a bad thing for web developers.


> Terrible performance. One of my engineers recently thought that 150ms was terrific for a HTTP request. Break out of the container and it was <10ms. YMMV.

Did you happen to develop on Macs? Because Docker for Mac has a known network performance issue:

https://github.com/docker/for-mac/issues/3497
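
A quick way to see the VM hop on a Mac; a hedged sketch (host.docker.internal is the Docker-for-Mac alias for the host):

  # latency from inside the container back to the host
  docker run --rm alpine ping -c 3 host.docker.internal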


Yep. macOS for what I mentioned :)

... but latency is still an issue regardless. It’s why there’s a premium to go bare metal with cloud providers.


None of these problems are really a result of containers.

People now call native installs "bare metal" installs.

I've developed for Linux on the machine, and in a container.

Once you get the container mentality and start writing Dockerfiles, it creates a pretty predictable, organized haven.


Can I also add that containers were not really built with multiple services or apps running inside them in mind? There are workarounds, but honestly, they are not worth it.

Also, the author seems to be using Ubuntu. I wonder if he has considered Multipass? https://multipass.run


Docker containers were not meant to be used that way. LXD containers on the other hand are excellent for running multiple services.
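
A minimal sketch of that LXD workflow (container name and packages are illustrative):

  # a system container with an init, running several services at once
  lxc launch ubuntu:20.04 stack
  lxc exec stack -- apt-get update
  lxc exec stack -- apt-get install -y nginx postgresql redis-server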


How do you maintain build machine environments if not with docker? The CI script updates all the tools or something?


There's no conflict between using Docker for CI (and deployment) and not using Docker for development.


Oh, OK, so they still build and maintain a (fragile?) Docker image with up-to-date build tooling for the CI machines to use?


I cannot speak for the OP, but this is more or less how we use Docker at my workplace.

> (fragile?)

The Docker image isn't fragile, it's your software that risks becoming fragile if it's too strongly reliant on a specific environment.
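
Concretely, the toolchain image just gets versioned and pushed like any other artifact; a sketch (registry, path, and tag are made up):

  docker build -t registry.example.com/ci/toolchain:2021-05 ci/
  docker push registry.example.com/ci/toolchain:2021-05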


Oh, I see what they mean now. Lack of varied developer preference leads to things like hard-coded paths instead of configuration files, for example.


It’s been rare, but this has definitely been a problem: “Why do I need an ENV var, the path is always /app?” And then not supporting symlinks... ugh.
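
The fix is a one-liner, which makes it all the more frustrating; a minimal sketch for a shell entrypoint (APP_ROOT is a hypothetical variable name):

  # respect the environment, fall back to the container default
  APP_ROOT="${APP_ROOT:-/app}"
  cd "$APP_ROOT" || exit 1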


It’s both. Ruby IS slow. If you ever need to create a tree structure that isn’t available in a natively-compiled Gem, you’ll know what I mean.


Sounds like every other believer of the JVM. I’ve been reading about how Java outperforms C for 20+ years now. When it doesn’t even come close, someone faithful of the JVM insists that I try some new garbage collector or wait for the next release.


Funny, I've managed to spend a similar amount of time in this industry and cannot remember any claim of Java being faster than C.

That's not to say it doesn't exist: too many monkeys with internet-connected typewriters will eventually make every claim. But Google turns up a paltry 652 results for "Java faster than C"[0][1].

[0]: https://www.google.com/search?client=safari&rls=en&q=%22java...

[1]: On a related note, I found one of those Google searches with a single hit. Isn't there a name for that? It's "PHP faster than C"

Edit: I'm on a roll here. "Google faster than C" also gets one result: https://www.google.com/search?client=safari&rls=en&q=%22goog...


Java running on a good VM such as HotSpot can outperform C in certain cases, e.g. when the JIT can inline and devirtualize hot call sites using runtime profile information that a static C compiler doesn't have.


One of my favorites: taking a 24-bit PNG and converting it to indexed color... while keeping a full 8-bit alpha channel for smooth edges. I don’t think even GIMP or Photoshop can do 8-bit alpha channels with indexed color.
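
Command-line tools do handle it, for what it's worth; a hedged sketch with pngquant (palette PNGs carry per-entry 8-bit alpha in the tRNS chunk; filenames are illustrative, flags from memory):

  # quantize a 24-bit PNG to a 256-color palette, keeping full alpha
  pngquant 256 --output indexed.png -- input.png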


I’ve actually started to resort to Bing every now and again. It’s THAT bad.

I used to be able to get very specific and find lines in source code, for instance. Well, that probably breaks emoji search or something, so now it's near impossible.

It’s always suggesting that I’d rather see something else. Short of typos, it’s NEVER right.


Aren’t a lot of top athletes into therapies related to this?


Top athletes of all sorts seem to try treatments that aren’t scientifically proven to actually work, and yet those treatments continue to be employed.

Compression, dry needling, tape, and so on. And despite scientists rolling their eyes, the athletes seem happy.

As am I. And quite frankly, that’s good enough for me. Though I won’t claim it’s not placebo. And I won’t claim it will work for everybody.

In sports where there’s little to no money to be made, you’ll find even more of these anecdotes. It’s quite amusing, really.


If you look ahead and keep going, you can just not stop.


I used to love opening things from Fry’s and discovering that it was an unmarked return & not the actual product at all. Sometimes: empty box.


As someone who would be disciplined for doing work correctly and getting correct answers, I support this.

A handful of teachers would give really bad instructions. Like, commanding everyone how to round numbers incorrectly kind of bad. I can’t even wrap my head around it still.

I’d do the work as it’s supposed to be done. When my answers matched the book and nobody else’s did, I was accused of cheating and stealing the answer keys, or of disobedience and not following instructions.


My third-grade son's teacher insists it's spelled and pronounced "communitative" (as in "commutative"), to the point where we had to have a parent-teacher conference about my son correcting her.


A slightly different, yet more relevant example: digital music.

Microsoft had their “plays for sure” DRM. No one thought Microsoft would get out of the music game. They did. “Plays for sure” no longer works. Does not play.


Did Microsoft at some point absolutely dominate the music market? No.


The key word here is "DRM".

