
These are definitely exciting times for container networking!


Thank you. My mouth was literally agape.


Hey, I come from Python too, and I just want to put out there both that Nim is a normal language - you can use it without templates or macros (you might end up using someone else's macro because some stdlib or library feature is implemented as one, but you don't have to write them yourself) - and that Nim's macros are quite easy to understand!

They are just functions that take an AST, modify it or produce a new one, and return it. It's just normal Nim code working on an AST object. That's it! I wrote a small blog article about them when I first learned about them: http://blog.ldlework.com/a-cursory-look-at-meta-programming-...
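
For a concrete feel, here's a minimal sketch (the macro name showAst and its body are made up for illustration, not taken from the blog post):

  import macros

  # Hypothetical macro: print the AST of the block it receives at
  # compile time, then return that AST unchanged so the code still runs.
  macro showAst(body: untyped): untyped =
    echo body.treeRepr   # inspect the AST during compilation
    result = body        # hand the (possibly modified) AST back

  showAst:
    echo "hello from inside a macro"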

I encourage you to conquer your fears!


Oh, I have no fear.

I was just comparing to C++, where you have to understand everything about how templates and the STL work if you intend to understand the error messages the compiler produces when you use the STL, or any library that interacts with STL containers. I hear things on this front have improved - I myself left C++ for C a while ago.

C++ is a monstrosity. A common piece of advice is "just pick the parts you like and stick to them". But it never works that way - as soon as you interact with third-party code, you have to deal with the parts THEY liked.

In comparison, my intuition says that despite all the dark corners (and if you read the entire manual, there are quite a few of them), that's not the case with Nim. Instead, if you treat it like a dialect of Python, you get simplicity close to Python's; you'd only need to understand those dark corners if you plan to make use of them.

But I will need actual experience in Nim to find support for or against this intuition.


Hey, I'm just curious: what do you see as the dark corners of Nim?


While I'm not super familiar with the scientific computing field, I can say that Nim needs everyone's help in building out its library ecosystem. Right now, Nim seems to be in wrap-all-the-things! mode: most of the available packages are bindings to existing libraries. I'm not sure this is a bad thing, and I don't think it reflects anything other than Nim's currently small community.


I don't see a problem with that. Since Nim compiles to C, it's only natural for people to take advantage of existing C libraries.


I haven't dived deep enough to figure out whether integration with existing C/C++/Fortran libraries works out of the box, or whether you have to set up bindings a la Python. If it works out of the box, my question is moot, especially if two-way interoperability with Nim code is seamless.


You have to set up bindings but it is as simple as it could possibly be to do by hand, and there is a c2nim program to help.
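
To give a rough idea of what a hand-written binding looks like, here's a minimal sketch that wraps libm's sqrt (the Nim-side name cSqrt is arbitrary):

  # Hypothetical hand-written binding: expose C's sqrt() from <math.h> to Nim.
  proc cSqrt(x: cdouble): cdouble {.importc: "sqrt", header: "<math.h>".}

  echo cSqrt(2.0)   # 1.4142135623730951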


I certainly agree.


Thanks for that explanation


As a long time Python programmer who has struggled to cross the gap from dynamic scripting languages to modern statically typed languages, Nim is by far the most frictionless language I have tried. Before I found Nim I longed for a language like C#, where the generics "just work" and the language overall feels like it was designed at once rather than piecemeal over time. Everything just seems really "nice" in C#, and I am able to transfer my Python experience over to it. But being locked into .NET/Mono, I never really used it outside of Unity3D.

I tried Golang, because it was sold to me as something I would love as a long time Python developer. I strongly dislike Golang. It doesn't give me much in the way of modeling my programs the way I am used to; I am told "that's wrong, do it the Go way". This is too much friction. Once I am done thinking about how to solve my problem algorithmically, I do not want to then figure out how to rethink my algorithm just for the sake of the maintainers of Golang.

I tried Rust. I think Rust is beautiful (mostly). However, Rust has a far too fundamentalist view of memory safety - and that's not to downplay the importance of memory safety - but there's just too much friction. I want to sit down and implement my algorithm. I don't want to stop and spend just as much time thinking about the particulars Rust demands.

When I found Nim I almost couldn't believe it. The core language was simple, clean, and immediately absorbable. I was able to start writing basic Nim programs after just perusing the language docs for a few minutes while the compiler ran. I read that Nim had powerful meta-programming facilities, and this started to turn me off - I had heard that macros were generally a negative force in the world, but only knew them from Lisp. Then I learned that Nim's macros are just functions that take an AST, perform modifications to it, and return an AST. Wow, that's pretty simple. Oh hey, the generics "just work" like in C#. Woah, Nim even supports inheritance!
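
For anyone wondering what "generics just work" means in practice, a tiny made-up sketch:

  # Hypothetical generic proc: works for any type that supports `>`.
  proc maxOf[T](a, b: T): T =
    if a > b: a else: b

  echo maxOf(3, 7)          # ints
  echo maxOf("abc", "xyz")  # strings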

Nim is definitely the next language for me. Thinking about it, I find that I agree with one of Jonathan Blow's sentiments: we have been seeing a number of new, up-and-coming languages, but they are all Big Idea languages - Big Idea languages whose ideas have yet to be vetted and proven out over the course of a decade or two. They all incur too much friction.

Nim seems like a competent, low-level systems language with a helpful repertoire of well-implemented, modern refinements of features that are mostly established in the field. It doesn't try to revolutionize systems languages. It tries to offer a modern refinement: a highly performant yet relentlessly productive take on what has already been shown to work.

Please don't be offended if you see me around evangelizing its existence.


As I've said before: for me Nim seems to be everything I had wished Go was.


Looks fascinating. Have you used it in any real projects yet?


> You mean like the conferences they held before they even had a viable product?

The conference was awesome and there was plenty to announce at that point. It certainly helped drive home the conceptualization of containerization for many.

> we should take a good minute and consider

You're right, we should consider - except what you're doing is making charges like:

"we are head-first rushing", "perpetuated by a for-profit company", "my company will be at the whims of Docker"

And other fear-mongering in this thread, without providing your audience with the context that:

- for-profit companies have backed an innumerable number of major technologies in a variety of fields since technology was a thing
- most of the development of Docker happens outside of Docker Inc
- the project has an independent governance board
- the project has been nothing but the epitome of successful, transparent open source

If Docker ends up achieving a cohesive ecosystem of holistically designed pieces, solving one of the most difficult unsolved problems (a class of problems, really) in modern computing and benefiting so many, then is it really a problem to pay Docker Inc for first-class support for all of that? I imagine there are already companies other than Docker Inc selling support for Docker.

It seems like you're the one imagining there is going to be no competition, honestly.


It's a flag. There's not much to manage or set up.


> It's a flag. There's not much to manage or set up.

Yes, but managing multiple containers is more effort than managing one.


I don't scale out to a hundred servers and I still use Docker with separate processes because scaling is not the only advantage to having stateless single-purpose containers.


Single purpose is a role.

Single process is a single process. [e.g. Nginx]

I think you are confused by what I meant.

Stateless, single-purpose containers are good. One process per container is bad because it involves more service discovery before you actually need it.


I run several processes on each VM, each in its own container, and use Docker links, which expose environment variables into each container describing IP/port information for dependent services. It was really easy and works great.

Furthermore, you can easily set all of the containers to share the same networking namespace so they can all just listen locally, if you want a turnkey solution. The pretty trivial issue of single-host service discovery is not a very strong argument against the benefits of single-process containers, in my experience.
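
As a rough sketch of the two setups described above (image and container names are placeholders):

  # Links: expose the db's address to the app via environment variables
  docker run -d --name db postgres
  docker run -d --name app --link db:db myapp   # app sees DB_PORT_5432_TCP_ADDR etc.

  # Shared network namespace: the app can reach the db on localhost
  docker run -d --name app2 --net=container:db myapp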


In order to do things your way you need:

1) Multiple Docker containers to spin up and manage.

2) Multiple health checks.

3) Additional flags and configuration for Docker.

The fact that people consistently say "Eh, this is a non-issue" is great. It means you are much luckier and more skilled than I am, since you can manage all of that with zero additional effort.

For me, all of this is effort I don't need to expend.


I don't think I understand your point.

From the Docker Host itself, if you need to manage the state of a container, the intuition is that you need to go into the container (with SSH) in order to do so. But by externalizing your state, you can manage it without the need to enter the container. Assuming your Docker Host is secure, this doesn't make anything less secure just because you're no longer abusing SSHd in order to manage your application's state.

In the case that you need to gdb or strace the process, you can do that from the Docker Host with nsenter. Assuming your Docker Host is secure, you no longer need to abuse SSHd to carry out a debugging task that has nothing to do with needing a secure shell.
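
A rough sketch of what that looks like from the Docker Host (the container name "app" is a placeholder):

  # Find the container's init PID, then enter its namespaces from the host
  PID=$(docker inspect --format '{{.State.Pid}}' app)
  sudo nsenter --target "$PID" --mount --uts --ipc --net --pid /bin/bash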

Neither of these use cases has anything to do with the security of SSH.

In the case that you need to do these things from a remote host, the prescribed answer is indeed SSHd to access the Docker Host, at which point you switch to the previously suggested methods for managing state.

"I don't see how granting access to the host is a cleaner architecture... from a security standpoint, it seems the opposite."

Because now you only have to worry about one security layer instead of N security layers, one for each container you run. The security layer is now actually coupled to its intended purpose - granting access to the host - versus granting access to a container just so you can manage its state or debug it or whatever.

As far as being locked into Docker's APIs, I totally miss the aim of this remark. Volumes are just paths on the filesystem. If you're talking about the interoperability of standard tools to manage your state, I don't think they will have problems in this case.


> the prescribed answer is indeed SSHd to access the Docker Host, at which point you switch to the previously suggested methods for managing state. [...] As far as being locked into Docker's APIs, I totally miss the aim of this remark.

Yes, you missed the point. Please read the other response to comprehend the difference.

