Gentoo x86-64-v3 binary packages available (gentoo.org)
76 points by laserstrahl on Feb 4, 2024 | 21 comments



If, like me, you were unaware of the "x86-64-v.." classification, here is an overview: https://developers.redhat.com/blog/2021/01/05/building-red-h...


See also:

> In 2020, through a collaboration between AMD, Intel, Red Hat, and SUSE, three microarchitecture levels (or feature levels) on top of the x86-64 baseline were defined: x86-64-v2, x86-64-v3, and x86-64-v4.[41][42] These levels define specific features that can be targeted by programmers to provide compile-time optimizations. The features exposed by each level are as follows:[43]

* https://en.wikipedia.org/wiki/X86-64#Microarchitecture_level...
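
To check which of these levels a given machine supports, one handy trick (assuming glibc 2.33 or newer) is to ask the dynamic loader directly, since it reports the per-level hwcaps subdirectories it would search:

    # Loader path may differ per distro; /lib64/ld-linux-x86-64.so.2 is typical.
    /lib64/ld-linux-x86-64.so.2 --help | grep x86-64-v
    # Lines like "x86-64-v3 (supported, searched)" indicate support.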


Assuming APX[1] actually becomes a thing, that will be an even more significant feature level.

[1]: https://en.wikipedia.org/wiki/Draft:Advanced_Performance_Ext...


A lack of -v3 binaries is common in Docker containers. Most, if not all, development, CI, and deployment targets support -v3. And yet images like Postgres or Redis are built with the absolute lowest common denominator in mind.

How many CPU cycles are wasted globally because of this? It’s nuts
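
To make the gap concrete: with GCC 11+ (which knows the -march=x86-64-v* names) you can dump the feature macros each level enables. A quick sketch:

    # Count the AVX2/FMA/BMI2 feature macros enabled at each level
    gcc -march=x86-64    -dM -E -x c /dev/null | grep -cE '__(AVX2|FMA|BMI2)__'  # prints 0
    gcc -march=x86-64-v3 -dM -E -x c /dev/null | grep -cE '__(AVX2|FMA|BMI2)__'  # prints 3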


There are containers which are based on Gentoo and optimized for x86-64-v2 and x86-64-v3. https://github.com/rahilarious/gentoo-containers


Sure there are, but it doesn’t matter because everyone uses the stock images.

The issue is that there is no concept of a -v2 or -v3 level for the x86-64 architecture in the OCI spec, which is a shame.
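
For reference, the spec's platform object already has a `variant` field, but it is only defined for ARM (e.g. "v7", "v8"). A hypothetical index entry for a -v3 image could mirror that; the "v3" value below is not in the spec today:

    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:...",
      "platform": { "architecture": "amd64", "os": "linux", "variant": "v3" }
    }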


I wonder what happened to Arch Linux's sub-architecture support.

There was talk years ago about this. Pacman supposedly got updates, and it was just a matter of kicking the building infrastructure into gear.

But years passed, and still nothing.


The newest status update I could find is this [0], but that's over a year old too. Honestly, I would rather they worked on providing ARM64 packages. I know about and use Arch Linux ARM, but that's a completely different project with different people and even fewer resources, which often manifests in broken packages (right now, generating the initramfs fails without manual intervention [1]).

I remember that for a long time Arch Linux didn't have any build servers and packagers mostly used their own machines to build packages. I'm not sure what the current state of things is, but if that's still the case, then it's quite understandable that not everybody has an ARM64 machine to build on.

[0] https://lists.archlinux.org/hyperkitty/list/arch-dev-public@...

[1] https://archlinuxarm.org/forum/viewtopic.php?f=15&t=16672


It's all part of the same problem domain: how do we rearchitect the release process of Arch so we can support multiple architectures?

The number of people who understand this problem domain, have the time and energy to work on it, and are actually able to see it through to completion is... well, not many.

I tried to hack on Buildbot, as I wrote in that email, but I have been questioning the maintainability of trying to fit what we need on top of Buildbot, in contrast to writing something from scratch.


I wonder if something like that should be built on top of GitLab now that you have migrated to it. GitLab CI can route jobs to different runners via tags, so the workflow might be: you merge a merge request that modifies PKGBUILD/.SRCINFO; the CI picks it up and, based on labels/tags, dispatches build jobs to runners possibly running on completely different architectures; the runners then send the artifacts back and a final job publishes them. The nice thing about this is that most of the parts are already there, handled by GitLab itself; you just have to draw the rest of the owl.
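
A rough sketch of the dispatch part in `.gitlab-ci.yml` (job names and runner tags here are invented; the tag-based routing itself is stock GitLab CI):

    # Hypothetical: one build job per architecture, routed by runner tags
    build-x86_64:
      tags: [x86_64]              # only x86_64 runners pick this up
      script: [makechrootpkg]     # placeholder for the real build command
      artifacts:
        paths: ["*.pkg.tar.zst"]

    build-aarch64:
      tags: [aarch64]             # only aarch64 runners pick this up
      script: [makechrootpkg]
      artifacts:
        paths: ["*.pkg.tar.zst"]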


This doesn't help at all, as you need to coordinate so-name rebuilds across multiple architectures. That implies you need some way to orchestrate a staging repository and do rebuilds against this repo, iterating until all issues are solved.

Retrofitting this on top of GitLab sounds painful. I don't even know if we would like to be that tied to a single forge.


There's a repo at https://wiki.archlinux.org/title/unofficial_user_repositorie..., but nothing new as far as I know.

I've used it for a while, ran into some minor issues from time to time, but nothing critical.


https://lists.archlinux.org/archives/list/arch-dev-public@li...

The "building infrastructure" of Arch is just Package Maintainers building and publishing packages they maintain. There was some resistance from PMs against supporting another architecture.


The latest mention is over a year old [0]. This situation has invited third parties to release unofficial Arch derivatives built for x86-64-v3 [1], splitting the community.

0. https://lists.archlinux.org/hyperkitty/list/arch-dev-public@...

1. https://cachyos.org/


- "Fedora Optimized Binaries for the AMD64 Architecture" currently says Targeted Release: Fedora 40 but rejected: https://fedoraproject.org/wiki/Changes/Optimized_Binaries_fo... https://news.ycombinator.com/item?id=38825503


The change was withdrawn to improve it for F41 [1]. There were concerns about using $PATH to find per-level executables for v2/v3/v4, especially for containers and absolute path specifications (shared libraries are less of a problem; see the sketch below).

[1]: https://pagure.io/fesco/issue/3151
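
For shared libraries, glibc has had a dispatch mechanism since 2.33: the loader checks per-level glibc-hwcaps subdirectories first, so a distro can ship an optimized build next to the baseline one. A sketch with a made-up library name:

    /usr/lib64/libfoo.so.1                          # baseline, works everywhere
    /usr/lib64/glibc-hwcaps/x86-64-v3/libfoo.so.1   # preferred on v3-capable CPUs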


> That said, in some processor lines (i.e. Atom), support for this instruction set was introduced rather late (2021).

Just one little sentence at the end, but quoting it since it feels important.


These are bad processors born out of ridiculous market segmentation by Intel, for which we pay dearly (also worth calling out: making ECC a server-only feature for profit, exabytes of corrupted user data be damned).

Not only are they not worth targeting, it is a moral imperative not to consider them for pieces of software like PostgreSQL. If push comes to shove, a "-compat" or "-legacy" flavour of binaries can be offered to the select few users on older systems with CPUs made before 2011 (pre-Sandy-Bridge and pre-Jaguar, respectively), which would allow the overall ecosystem to get healthier while not leaving affected people behind.


The Atom-based processors are bad not because of market segmentation but because Intel is perennially unable to make more than one good architecture at a time. The ones that are bad because of market segmentation are the processors with a Core family microarchitecture lobotomized down to the i3, Pentium or Celeron product lines, leaving them years behind on feature support that's physically present but fused off.


    !!! The following binary packages have been ignored due to non matching USE:

        =sys-libs/glibc-2.38-r10 -multilib -stack-realign
        =sys-libs/glibc-2.38-r10 systemd
So in order to use binary packages, I either need to switch to systemd, or switch to no-multilib and manually enable all the desktop-related USE flags. Seems too tedious to bother right now.
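
If I ever do bother, the moving parts are roughly these (a sketch; the profile name below is an example, check `eselect profile list` for the real targets):

    # Switch to a systemd profile so USE flags match what the binhost was built with
    eselect profile list | grep systemd
    eselect profile set default/linux/amd64/17.1/systemd   # example name

    # Then let emerge prefer binary packages where available
    emerge --ask --getbinpkg --update --deep @world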


If you're currently building from source, presumably you have `-march=native` in your CFLAGS anyway, so you wouldn't gain any perf improvements from using the pre-compiled ones.
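
And if you're curious what `-march=native` actually resolves to on a given machine, GCC can tell you:

    # Show the CPU that -march=native detects on this machine
    gcc -march=native -Q --help=target | grep -- '-march='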



