I love the idea of Rye/uv/PyBi managing the interpreter, but I get queasy that these are not official Python builds. It's probably not an issue, but it only takes one subtle bug to ruin my day.
Plus there's the potential for a supply chain attack. I know official Python releases are about as good as I can expect from a free open source project, while a third-party Python distribution is probably being built in Nebraska.
We'd love it if there _were_ official portable Python binaries, but there just aren't. We're not just distributing someone else's builds, though; we're actively getting involved in the project (e.g., we did the last five releases).
We've also invested quite a bit of effort into finding system Python interpreters, and support for bringing your own Python version isn't going anywhere.
I'm from Nebraska. Unfortunately, if your Python is compiled in a datacenter in Iowa, it's more likely that it was powered by wind energy. Claim: Iowa has better Clean Energy PPAs for datacenters than Nebraska (mostly due to rational wind energy subsidies).
Anyway: software supply chain security, signing Python and package builds, and then signing the containers they ship in, too.
> For some time now, conda-forge has defined multiple "cpu-levels". These are defined for SSE, AVX2, AVX512, or ARM Neon. On the client side, the maximum CPU level is detected and the best available package is then installed. This opens the door to highly optimized packages on conda-forge that support the latest CPU features.
> We will show how to use this in practice with `rattler-build`.
> For GPUs, conda-forge has supported different CUDA levels for a long time, and we'll look at how that is used as well.
> Lastly, we also take a look at PyPI. There are ongoing discussions on how to improve support for wheels with CUDA support. We are going to discuss how the (pre-)PEP works and the possible synergies between rattler-build and cibuildwheel.
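As an aside on how the client-side detection works in practice: conda/mamba expose the detected CPU features and CUDA driver level as virtual packages, and package builds can constrain against them. A minimal, hedged sketch for inspecting what your own machine reports (the virtual-package names mentioned in the comments, `__archspec` and `__cuda`, follow current conda conventions):

```
# Show conda's detected virtual packages for this machine, typically
# including __archspec (CPU microarchitecture level) and, when an NVIDIA
# driver is present, __cuda (maximum supported CUDA version).
conda info
```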
Linux distros build and sign Python and python3-* packages with GPG keys (or similar), and the package manager then optionally checks the per-repo keys for each downloaded package. Packages should include a manifest of the files to be installed, with per-file checksums. Package manifests, and/or the package containing the manifest, should be signed, so that tools like debsums and rpm --verify can detect changes to on-disk executables, scripts, data assets, and configuration files.
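As a hedged illustration of that last point, this is roughly what the on-disk verification looks like on the two big packaging families (package names are just examples):

```
# rpm-based distros: verify checksums, sizes, and permissions of installed
# files against the metadata recorded in the rpm database.
rpm --verify python3

# deb-based distros: compare installed files against the shipped md5sums
# (debsums lives in the "debsums" package).
debsums python3

# And check that the repository signing keys themselves are the expected ones.
rpm -qa 'gpg-pubkey*'   # rpm: list imported GPG public keys
apt-key list            # deb: list trusted archive keys (deprecated; newer
                        # systems use /etc/apt/trusted.gpg.d/)
```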
Virtualenvs can be mounted as a volume at build time with -v in some container image builders, or copied into a container image with the ADD or COPY instructions in a Containerfile. Whatever is added to the virtualenv should have a signature and a version.
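A minimal sketch of both approaches, assuming podman (which, unlike plain docker build, accepts -v at build time) and a virtualenv at ./.venv created against an interpreter path that also exists inside the image; the image and tag names are hypothetical:

```
# Option 1: bake the virtualenv into the image with COPY.
cat > Containerfile <<'EOF'
FROM docker.io/library/python:3.12-slim
COPY .venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
EOF
podman build -t example-app:latest .

# Option 2: mount the virtualenv read-only only while the image builds
# (supported by podman/buildah, not by plain docker build).
podman build -v "$PWD/.venv:/opt/venv:ro" -t example-app:latest .
```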
rpm-ostree rebase ostree-image-signed:registry:<oci image>
rpm-ostree rebase ostree-image-signed:docker://<oci image>
> Fetch a container image and verify that the container image is signed according to the policy set in /etc/containers/policy.json (see containers-policy.json(5)).
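For reference, a hedged sketch of what such a policy can look like, following containers-policy.json(5); the registry name and key path are hypothetical: reject unsigned images by default, require a GPG signature for one trusted registry, and keep trusting local storage.

```
cat > /etc/containers/policy.json <<'EOF'
{
  "default": [{ "type": "reject" }],
  "transports": {
    "docker": {
      "registry.example.com/myorg": [
        {
          "type": "signedBy",
          "keyType": "GPGKeys",
          "keyPath": "/etc/pki/containers/myorg.gpg"
        }
      ]
    },
    "containers-storage": {
      "": [{ "type": "insecureAcceptAnything" }]
    }
  }
}
EOF
```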
So, when you sign a container full of packages, you should check the package signatures, and verify that all package dependencies are identified by the SBOM tool you plan to use to keep dependencies upgraded when security updates are released.
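One hedged way to do both checks with common tooling (syft for the SBOM, cosign for the signature check; the image reference and key file are placeholders):

```
# Produce a full inventory of what is inside the image, so dependency
# scanners have something complete to work from.
syft registry.example.com/myorg/app:latest -o spdx-json > app.sbom.spdx.json

# Verify the image signature before trusting or promoting it.
cosign verify --key cosign.pub registry.example.com/myorg/app:latest
```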
For example, Dependabot (if it's working) will run regularly and send a pull request when it detects that the version strings in e.g. a requirements.txt or environment.yml file are out of date and need to be changed because of security vulnerabilities reported in the ossf/osv-schema format.
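For reference, turning on Dependabot version updates for pip-style dependencies is just a small config file in the repo; a minimal sketch (the schedule and directory are arbitrary choices):

```
mkdir -p .github
cat > .github/dependabot.yml <<'EOF'
version: 2
updates:
  - package-ecosystem: "pip"   # covers requirements.txt and pyproject.toml
    directory: "/"
    schedule:
      interval: "weekly"
EOF
```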
Is there already a way, as a developer, to sign Python packages built with cibuildwheel using Twine and TUF or sigstore, so as to be https://SLSA.dev/ compliant?
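For context, the pieces named in the question can be wired together today; a hedged sketch assuming recent cibuildwheel, sigstore, and twine releases (a SLSA provenance attestation itself is a separate artifact, typically produced by the CI system, e.g. the slsa-framework GitHub generators, rather than by these tools):

```
# Build wheels with cibuildwheel (normally run in CI).
python -m pip install cibuildwheel sigstore twine
python -m cibuildwheel --output-dir dist

# Sign each wheel with sigstore; recent releases write *.sigstore.json
# bundles next to the wheels.
python -m sigstore sign dist/*.whl

# Upload the wheels with twine.
python -m twine upload dist/*.whl
```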