
You can do this by using a dedicated syncoid user and ZFS delegated permissions: https://openzfs.github.io/openzfs-docs/man/master/8/zfs-allo...

You'll need to add the --no-privilege-elevation flag to your syncoid job.
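
A rough sketch of the delegation, assuming a user named "syncoid" and placeholder dataset names (adjust to your pool layout):

    # source host: allow snapshotting and sending
    zfs allow syncoid send,snapshot,hold tank/data

    # destination host: allow receiving into the backup dataset
    zfs allow syncoid create,mount,receive,rollback,destroy tank/backups

    # then replicate without root
    syncoid --no-privilege-elevation syncoid@source:tank/data tank/backups/data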


No, my GP is correct: if the server's RSA private key is compromised it does not allow decryption of any previously-recorded sessions.

You would need to compromise the _ephemeral session key_, which is difficult because it is discarded by both parties when the session is closed.

Compromising the RSA key backing the certificate allows _future_ impersonations of the server, which is a different attack altogether.
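
If you want to convince yourself a given server negotiates ephemeral keys, here's a quick sketch with openssl (the hostname is a placeholder):

    # offer only ECDHE key exchange; if the handshake succeeds, the RSA key
    # only authenticates the server and never encrypts the session key
    openssl s_client -connect example.com:443 -tls1_2 -cipher 'ECDHE' </dev/null \
      | grep -E 'Protocol|Cipher'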


I mean, Ansible isn't the best choice for Windows configuration, I would agree, but you're not strictly correct: https://docs.ansible.com/ansible/latest/os_guide/windows_usa...
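
For example, an ad-hoc win_ping over WinRM works fine (the inventory name and connection vars here are illustrative):

    ansible windows -i hosts.ini -m ansible.windows.win_ping \
      -e ansible_connection=winrm \
      -e ansible_winrm_transport=ntlm \
      -e ansible_port=5986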


Gross and net profit are each their own concept: https://www.investopedia.com/ask/answers/101314/what-are-dif...


What they call "gross profit" is not profit, by definition. It's certainly useful to track $revenue-$cost_of_goods, but you can't call that profit. People are free to use words incorrectly, but they shouldn't expect anyone else to go along with them.


Who chooses the "correct" use of words? Is it you? Wikipedia disagrees with you: https://en.wikipedia.org/wiki/Gross_margin. Maybe you should make your own encyclopedia.


"Gross margin" seems a suitable alternative term.


You mean some person on Wikipedia disagrees with him.



What they call "gross profit" is an accounting standard, defined in GAAP, and a standard part of every financial statement.

If we're talking about profit from the lens of a unit sale, we're usually talking about gross profit and gross margin.
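
A toy example with made-up numbers, to make the distinction concrete:

    Revenue                  $100
    - Cost of goods sold      $60
    = Gross profit            $40   (gross margin: 40%)
    - Operating expenses      $25
    - Interest and taxes       $5
    = Net profit              $10   (net margin: 10%)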


I'm sorry but this is a pet peeve of mine: drag force does not scale exponentially with velocity, it scales with the square of velocity. Your point stands, of course.
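
For reference, the standard drag equation, where rho is the fluid density, C_d the drag coefficient, and A the cross-sectional area:

    F_drag = 0.5 * rho * v^2 * C_d * A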


Yes, you’re right.

I’ve gotten used to saying this incorrectly because most people aren’t trained on (or at least don’t remember) the difference between various types of growth functions. Exponential registers much more clearly in everyday conversation.


I also use the word exponential to refer to any curve that trends upwards


But...you don't need systemd or Quadlets to run Podman, it's just convenient. You can also use podman-compose (I personally don't, but a coworker does and it's reasonable).

But yeah I already use a distro with systemd (most folks do, I think), so for me, using Podman with systemd doesn't add a root daemon, it reuses an existing one (again, for most Linux distros/users).
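
A minimal daemonless example (image and ports are arbitrary):

    # rootless and daemonless: no root process involved at any point
    podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine
    podman ps
    podman stop web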


Exactly my point.

Today I can run docker rootless and in that case can leverage compose in the same manner. Is it the default? No, you've got me there.

systemd runs as root. It's just ironic given all the hand waving over the years. And Docker, and all its tooling, are so ubiquitous and well thought out that Podman and friends are literally a reimplementation, which is the selling point.

I've used Podman. It's fine. But the arguments of the past aren't as sharp as they originally were. I believe Docker improved because of Podman, so there's that. But to discount the reality of the doublespeak by paid-for representatives from RedHat/IBM is, again, ironic.


> And Docker, and all its tooling, are so ubiquitous and well thought out that Podman and friends are literally a reimplementation, which is the selling point

I would argue that Docker’s tooling is not well thought out, and that’s putting it mildly. I can name many things I do not like about it, and I struggle to find things I like about its tooling.

Podman copied it, which honestly makes me not love podman so much. Podman has quite poor documentation, and it doesn’t even seem to try to build actually good designs for tooling.


Curious what your point is?

> I can name many things I do not like about it, and I struggle to find things I like about its tooling.

Please share.


Off the top of my head:

FROM [foo]: [foo] is a reference that is generally not namespaced (ubuntu is relative to some registry, but it doesn't say which one) and it's expected to be mutable (ubuntu:latest today is not the same as ubuntu:latest tomorrow).

There are no lockfiles to pin and commit dependency versions.

Builds are non-reproducible by default. Every default represents worst practices, not best practices. Commands can and do access the network. Everything can mutate everything.

Mostly resulting from all of the above, build layer caching is basically a YOLO situation. I've had a build end up with dependencies that were literally more than a year out of date because I built on a system that hadn't done that particular build for a while, had a layer cached (by name!), and I forgot to specify a TTL when I ran the build. But, of course, there is no correct TTL to specify.
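
As an aside, my usual workaround for the mutable-tag and stale-cache problems is to pin by digest and bypass the cache; something like this sketch (image names are examples):

    # resolve a tag to an immutable digest you can commit somewhere
    docker pull ubuntu:24.04
    docker inspect --format '{{index .RepoDigests 0}}' ubuntu:24.04

    # force fresh base layers instead of trusting the local cache
    docker build --pull --no-cache -t myimage .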

Every lesson that anyone in the history of computing has ever learned about declarative or pure programming has been completely forgotten by the build systems.

Why on Earth does copying in data require spinning up a container?

Moving on from builds:

Containers are read-write by default, not read-only.

Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.

The tooling around what constitutes a running container is, to me, rather unpleasant. I can't make a named group of services, restart them, possibly change some of the parts that make them up, and keep the same name in a pleasant manner. I can 'compose down' and 'compose up' them and hope I get a good state. Sometimes it works. And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.

I'm sure I could go on.


> Builds are non-reproducible by default. Every default represents worst practices, not best practices. Commands can and do access the network. Everything can mutate everything.

I think you're conflating software builds with environment builds - they are not the same and serve different use cases.

> Why on Earth does copying in data require spinning up a container?

It doesn't.

> Containers are read-write by default, not read-only.

I don't think you really understand containers since COW is the default. Containers are not "read-write" by default in the context of the underlying image. If you want to block writing to the file system that is trivial.
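
For instance, something like this (image name and paths are illustrative):

    # read-only root filesystem; writable locations are opted into explicitly
    docker run --read-only --tmpfs /tmp -v appdata:/var/lib/app my-image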

> Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.

Almost all of this is wrong.

> And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.

What? This gets wilder as you go on. Why would you expect compose files to be "compatible" with k8s? They are two different ways to orchestrate containers.

Pretty much everything you've outlined is, as I see it, a misunderstanding of what containers aim to solve and how they're operationalized. If all of these things were true, container usage, in general, wouldn't have been adopted to the point where it's as commonplace as it is today.


>> Builds are non-reproducible by default. Every default represents worst practices, not best practices. Commands can and do access the network. Everything can mutate everything.

> I think you're conflating software build with environment builds - they are not the same and have different use cases people are after.

They're not so different. An environment is just big software. People have come up with schemes for building large environments for decades, e.g. rpmbuild, nix, Gentoo, whatever Debian's build system is called, etc. And, as far as I know, all of these have each layer explicitly declare what it is mutating; all of them track the input dependencies for each layer; and most or all of them block network access in build steps; some of them try to make layer builds explicitly reproducible. And software build systems (make, waf, npm, etc) have rather similar properties. And then there's Docker, which does none of these.

> > Containers are read-write by default, not read-only.

> I don't think you really understand containers since COW is the default. Containers are not "read-write" by default in the context of the underlying image. If you want to block writing to the file system that is trivial.

Right. The issue is that the default is wrong. In a container:

    $ echo foo >the_wrong_path
works, by default, using COW. No error. And the result is even kind of persistent -- it lasts until the "container" goes away, which can often mean "exactly until you try to update your image". And then you lose data.

> > Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.

> Almost all of this is wrong.

I would really like to believe you. I would love for Docker to work better, and I tried to believe you, and I looked up best practices from the horse's mouth:

https://docs.docker.com/get-started/docker-concepts/running-...

and

https://docs.docker.com/get-started/docker-concepts/running-...

Look, in every programming language and environment I've ever used, even assembly, an interface has a name. If I write a function, it looks like this:

    void do_thing();
If I write an HTTP API, it has a name, like GET /name_goes_here. If I write a class or interface or trait, its methods have names. ELF files expose symbols by name. Windows IIRC has a weird old system for exporting symbols by ordinal, but it’s problematic and largely unused. But Docker images expose their APIs (ports) by number. The welcome-to-docker container has an interface called '8080'. Thanks.

At least the docs try to remind people that the whole mechanism is "insecure by default".

I even tried asking a fancy LLM how to export a port by name, and LLM (as expected) went into full obsequious mode, told me it's possible, gave me examples that don't do it, told me that Docker Compose can do it, and finally admitted the actual answer: "However, it's important to note that the OCI image specification itself (like in a Dockerfile) doesn't have a direct mechanism for naming ports."

> > And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.

> What? This gets wilder as you go on. Why would you expect compose files to be "compatible" with k8s? They are two different ways to orchestrate containers.

I'd like to have some way for a developer to declare that their software can be run with the 'app' container and a 'mysql' container and you connect them like so. Or even that it's just one container image and it needs the following volumes bound in. And you could actually wire them up with different orchestration systems, and the systems could all read that metadata and help do the right thing. But no, no such metadata exists in an orchestration-system-agnostic way.

> If all of these things were true container usage, in general, wouldn't have been adopted to the point where it's as commonplace as it is today.

Software doesn't look like this. Consider git: it has near universal adoption, but there is a very strong consensus in the community that many of the original CLI commands are really bad.


> They're not so different. An environment is just big software.

Containers are not a software development platform, but a platform that can be used in the build phase of software development. They are very different. Docker is not inherently a software development platform because it does not provide the tools required to write, compile, or debug code. Instead, Docker is a platform that enables packaging applications and their dependencies into lightweight, portable containers. These containers can be used in various stages of the software development lifecycle but are not the development environment themselves. This is not just "big software" - which makes absolutely no sense.

> Right. The issue is that the default is wrong. In a container: $ echo foo >the_wrong_path

Can you do incorrect things in software development? Yes. Can you do incorrect things in containers? Yes. You're doing it wrong. If you are writing to a part of the filesystem that is not mounted outside of the container, yes, you will lose your data. Everyone using containers knows this and there are plenty of ways around it. I guess in your case you just always need to export the root of the filesystem so you don't foot-gun yourself? I mean c'mon man. It sounds like you'd like to live in a software bubble to protect you from yourself at this point.

> If I write an HTTP API, it has a name, like GET /name_goes_here. If I write a class or interface or trait, its methods have names. ELF files expose symbols by name. Windows IIRC has a weird old system for exporting symbols by ordinal, but it’s problematic and largely unused. But Docker images expose their APIs (ports) by number. The welcome-to-docker container has an interface called '8080'. Thanks.

You clearly don't understand Docker networking. What you're describing is the default bridge. There are other ways to use networking in Docker outside of the default. In your case, again, maybe just run your containers in "host" networking mode because, again, you're too ignorant to read and understand the documentation of why you have to deal with a port mapping in a container that's sitting behind a bridge network. Again you're making up arguments and literally have no clue what you're talking about.

> Software doesn't look like this. Consider git: it has near universal adoption, but there is a very strong consensus in the community that many of the original CLI commands are really bad.

OK? Grab a dictionary - read the definition for the word: "subjective", enjoy!


> > They're not so different. An environment is just big software.

> Containers are not a software development platform, but a platform that can be used in the build phase of software development. They are very different. Docker is not inherently a software development platform because it does not provide the tools required to write, compile, or debug code.

You seem to be arguing about something entirely unrelated. GNU make, Portage, Nix, and rpmbuild also don’t provide tools to write, compile, or debug code.

> Can you do incorrect things in software development? Yes. Can you do incorrect things is containers? Yes. You're doing it wrong.

This is the argument by which every instance of undefined behavior in C or C++ is entirely the fault of the developer doing it wrong, and there is no need for better languages.

And yes, I understand Docker networking. I also understand TCP and UDP just fine, and I’ve worked on low level networking tools and even been paid to manage large networks. And I’ve contributed to, and helped review, Linux kernel namespace code. I know quite well what’s going on under the hood, and I know why a Docker container has, internally, a port number associated with the port it exposes.

What I do not get is why that port number is part of the way you instantiate that container. The tooling should let me wire up a container’s “http” export to some consumer or to my local port 8000. The internal number should be an implementation detail.

It’s like how a program exposes a function “foo” and not a numerical entry in a symbol table. Users calling the function type “foo” and not “17”, even though the actual low-level effect is to call a number. (In a lot of widely used systems, including every native code object file format I’m aware of, the compiler literally emits a call to a numerical address along with instructions so the loader can fix up that address at load time. This is such a solved problem that most programmers, even assembly programmers, can completely ignore the fact that function calls actually go to more or less arbitrary numerical targets. But not Docker users — if you want to stick mysql in a container, you need to type in the port number used internally in that particular container.)

There are exceptions. BIOS calls were always by number, as are syscalls. These are because BIOS was constrained to be tiny, and syscalls need to work when literally nothing in the calling process is initialized. Docker has none of these excuses. It’s just a handy technology with quite poorly designed tooling, with nifty stuff built on top despite the poor tooling.


> Why is the port number part of the way you instantiate the container?

Because that’s how networking works in literally every system ever. Containers don’t magically "export" services to the world. They have to bind to a port. That’s how TCP/IP, networking stacks, and every server-client model ever designed functions. Docker is no exception. It has an internal port (inside the container) and an external port (on the host), again, when we're dealing with the default bridge networking. Mapping these is a fundamental requirement for exposing services. Complaining about this is like whining that you have to plug in a power cable to use a computer. Clearly your "expertise" in networking is... Well. Another misunderstanding.

> The tooling should let me wire up a container’s 'http' export to some consumer or to my local port 8000.

Ummmm... It does. It's called: Docker Compose, --network, or service discovery. You can use docker run -p 8000:80 or define a Docker network where containers resolve each other by name. You already don’t have to care about internal ports inside a proper Docker setup.

But you still need to map ports when exposing to the host because… Guess what? Your host machine isn't psychic. It doesn’t magically figure out that some random container process running an HTTP server needs to be accessible on a specific port. That’s why port mapping exists. But you already know this because "you understand TCP and UDP just fine".

> The internal number should be an implementation detail.

This is hands-down the dumbest part of the argument. Ports are not just "implementation details." They're literally how services communicate. Inside the container, your app binds to a port (usually one) that it was explicitly configured to use.

If an app inside a container is listening on port 5000, but you want to access it on port 8000, you must declare that mapping (-p 8000:5000). Otherwise, how the hell is Docker (or anyone) supposed to know what port to use? According to you - the software should magically resolve this. And guess what? You don’t have to expose ports if you don’t need to. Just connect containers via a shared network which happens automagically via container name resolution within Docker networking.
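
A minimal sketch of that name-resolution flow, if it helps (the image name, internal port, and endpoint are hypothetical):

    # containers on the same user-defined network resolve each other by name
    docker network create appnet
    docker run -d --name api --network appnet my-api-image
    docker run --rm --network appnet curlimages/curl http://api:5000/health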

Saying ports should be an "implementation detail" is like saying street addresses should be an implementation detail when mailing a letter. You need an address so people know where to send things. I'm sure you get all sorts of riled up when you need to put an address on a blank envelope because the mail should just know... Right? o_O


I feel like we're talking right past each other or something.

Of course every TCP [0] and UDP networking system ever has port numbers. And basically every CPU calls functions with numeric addresses. And you plug in power cables to use a computer. Of course Docker containers internally use ports -- if I have a Docker image plus its associated configuration, and I instantiate it as a container, and it uses its internal port 8080 to expose HTTP, then it uses a port number.

But this whole conversation is about Docker's tooling, not about the underlying concept of containers.

And almost every system out there that has decent tooling has abstraction layers to make this nicer. In AT&T assembly language, I can type:

    1:
    ... code goes here
and that code is called "1" in that file and is inaccessible from outside. If I want to call it from outside, I type something more like:

    name_of_function:
    ... code goes here
with maybe a .globl to go along with it. And I call it by typing a name. And that call still calls the numeric address of that function.

If I plug in a power cable to use a computer, I do not plug it into port 3 on the back of the computer, such that accidentally plugging it into port 2 will blow a fuse. I plug it into a port that has a specific shape and possibly a label.

So, yes, I know that "If an app inside a container is listening on port 5000, but you want to access it on port 8000, you must declare that mapping (-p 8000:5000)", but that's not a good thing. Of course, if it's listening on port 5000, I need to map 8000 to 5000. But the fact that I had to type -p 8000:5000 is what's broken. The abstraction layer is missing. That should have been -p 8000:http or something similar.

And the really weird thing is that the team that designed Dockerfile seemed to have an actual inkling that something was needed here, which is why we have:

    EXPOSE 8080
    VOLUME ["/mnt/my_data"]
but they completely missed the variant that would have been good:

    EXPOSE 8080 "rest_http"
    VOLUME "mydata" MANDATORY
    MOUNT_VOLUME "mydata" "/mnt_mydata"
or whatever other spelling of the same concept would have passed muster.

And yes, Docker Compose helps, but that's at the wrong layer. Docker Compose is a consumer of a container image. The mapping from logical exposed service to internal port should have been handled at an abstraction layer below Docker Compose, and Compose and Quadlet and Kubernetes and the command line could all share that abstraction layer.

> ... service discovery. You can use docker run -p 8000:80 or define a Docker network where containers resolve each other by name. You already don’t have to care about internal ports inside a proper Docker setup

Can you point me at some relevant reference? Because, both in my experience and from (re-)reading the basic docs, all of the above is about finding an IP address by which to communicate with a relevant service, not about port numbers, let alone internal port numbers (which are entirely useless to discover from inside another container, because you can't use them there anyway). Even Docker Swarm does things like:

    $ docker service create ... --publish published=8080,target=80
and that's another site, external to the container image in question, where one must type in the correct internal port number.

> I'm sure you get all sorts of riled up when you need to put an address on a blank envelope because the mail should just know... Right? o_O

I will take this the most charitable way I can. Sure, it's mildly annoying that you have to use someone's numerical phone number to call them, and we all have contact lists to work around this, but that's still missing the target. I'm not complaining about how you address a docker container, and it makes quite a bit of sense that you need someone's phone number to call them. But if you had to also know that that particular phone you were calling had its microphone on port 83, and you had to tell your phone that their microphone was port 83 if you wanted to hear them, and you had to change your contact list if they changed phone models, then I think everyone would be rightly annoyed.

So I stand by my assertion: Docker's tooling is not very good.

[0] But not every networking protocol ever. Even in the space of non-obsolete protocols, IP itself has no port numbers. And the use of a tuple (name or IP, port) is actually a perennial source of annoyance, and people try to improve it periodically, for example with RFC 2782 SRV records and, much more recently, RFC 9460 SVCB and HTTPS records. This is mostly off-topic, as these are about externally visible ports, and I’m talking about internal port numbers.


systemd runs as root, yes, but services started by systemd don't unless you instruct them to.

that means your podman containers don't run as root unless you want them to.

mine run as user services
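
roughly this pattern, if anyone's curious (container name and image are just examples):

    # let user services keep running after logout
    loginctl enable-linger "$USER"

    # generate a user unit from a rootless container and enable it
    podman create --name web -p 8080:80 docker.io/library/nginx:alpine
    podman generate systemd --new --name web > ~/.config/systemd/user/web.service
    systemctl --user daemon-reload
    systemctl --user enable --now web.service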


I don't see your point. This is exactly how Docker works: containers instantiated from the Docker daemon don't need to run as root. But you can... Just like your containers started from systemd (Quadlet).

I run all my containers, when using Docker, as non-root. So where is the upside other than where your trust lies?


> So where is the upside other than where your trust lies?

The upside is political rather than technical, in that Docker has signaled multiple times that they will happily pull the rug out from under developers.

Moving away from that is the driving motivation for using podman. The fact that podman happens to be better engineered is just an added bonus.


Have you used podman compose? It's shit.

When I bring this up online the answer is invariably "well use quadlets then" (i.e. systemd).

>systemd doesn't add a root daemon, it reuses an existing one

lol the same could be said of every docker container I've ever run...


Podman runs on FreeBSD without systemd, so there you go.


yeah, it runs fine without systemd, until you need a docker compose substitute and then you get told to use quadlets (systemd), podman compose (neglected and broken as fuck) or docker compose (with a daemon! also not totally compatible) or even kubernetes...


Or you can implement a firewall on your gateway device with a default drop policy for inbound traffic. Essentially the same behavior as NAT in terms of unsolicited (usually malicious) inbound traffic, but without the downsides of one-to-many NAT.

Which is, coincidentally, exactly how it works if your LAN is made up of devices with publicly-routable IPv4 addresses as well, which happens in business/academic/military networks all the time.
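
A rough nftables sketch of that default-drop posture (table and chain names are arbitrary):

    # drop unsolicited forwarded traffic by default...
    nft add table inet filter
    nft add chain inet filter forward '{ type filter hook forward priority 0; policy drop; }'

    # ...but allow replies to connections initiated from inside
    nft add rule inet filter forward ct state established,related accept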


But most users are not business/academic/military.

They just want to watch some reels.


And? Most consumer routers also implement a stateful firewall with a deny-by-default inbound policy. My point is that NAT isn't a security feature, and that a firewall in edge network equipment is table stakes these days.


>Most consumer routers also implement a stateful firewall with deny-by-default inbound policy.

No they don't.

Most ISP boxes implement only the bare minimum of functions to make sure that YouTube is available to the users. That includes NAT, because otherwise YouTube does not work, and it does not include anything else.


Well, that's news to me. I don't use consumer routers myself, but I know lots of folks who do. Now, I won't say that I go investigating their home networks, but IPv6 is rather prevalent among the discount ISPs where I live, and I know of at least two coworkers who have an IPv6 firewall by default with their router.

Anyway, NAT is costlier than a firewall. It uses more memory, it requires rewriting packets on the fly, and typically, if you're using embedded Linux (I'll assume the vast majority of consumer devices for this are), then you're already using `iptables` or `nftables` to get NAT functionality. It is comparatively cheap to set default inbound/forward drop policies.
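
For illustration, the whole "firewall without NAT" posture is about two lines of iptables (interface matching omitted for brevity):

    iptables -P FORWARD DROP
    iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT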

But yes, I should have said "in my experience," since it's true that I only know the networking equipment of a few people in a small country with limited IPv6 rollout (my ISP does not provide it).


It is likely an option, but as per the README:

> Nebula uses Elliptic-curve Diffie-Hellman (ECDH) key exchange and AES-256-GCM in its default configuration.


Read a little further down:

> Xiaomi central hub gateway is only available in mainland China. In other regions, it is not available.

Nonstarter for many, myself included.

ETA: Yes, it does say "partial" local control can be done without the gateway software, but it is not recommended and does not work unless you are on the same local network. Better than nothing, but still a nonstarter.


It also says

> If you do not have a Xiaomi central hub gateway or other devices having central hub gateway function, all control commands are sent through Xiaomi Cloud.

Now I don't actually know what central hub gateway function entails :) but it does sound like some non-Xiaomi devices could also fulfill this role.

In the case of HASS, I think local network is a pretty decent assumption. I have my HASS on both my local VLAN as well as the IoT VLAN.

