Hacker News | ericfrederich's comments

Today someone's pipeline broke because they were using python:3 from Docker Hub and got an unexpected upgrade ;-)

Specifically, pendulum hasn't yet released a wheel for 3.13, so pip tried to build it from source, but pendulum uses Rust and the Python Docker image obviously doesn't have Rust installed.
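The failure mode here (no cp313 wheel published yet, so pip falls back to a source build) can be sanity-checked before an interpreter upgrade. A minimal sketch that parses wheel filenames per PEP 427 to see whether a CPython tag is covered; the file list below is hypothetical, modeled on pendulum's wheel names, not a real index listing:

```python
def wheel_supports(filename: str, py_tag: str) -> bool:
    """Check whether a wheel filename advertises support for py_tag.

    Wheel names follow PEP 427: name-version(-build)-pytags-abi-platform.whl.
    Compressed tag sets use '.' as a separator (e.g. 'py2.py3').
    """
    stem = filename[: -len(".whl")]
    parts = stem.split("-")
    py_tags = parts[-3].split(".")  # the interpreter-tag field
    # Pure-Python "py3" wheels work on any CPython 3.x, including new releases.
    return py_tag in py_tags or "py3" in py_tags


# Hypothetical listing: only cp311/cp312 binary wheels available.
wheels = [
    "pendulum-3.0.0-cp311-cp311-manylinux_2_17_x86_64.whl",
    "pendulum-3.0.0-cp312-cp312-manylinux_2_17_x86_64.whl",
]
print(any(wheel_supports(w, "cp312") for w in wheels))  # True
print(any(wheel_supports(w, "cp313") for w in wheels))  # False -> sdist build
```

If the check comes back False for your target interpreter, you know the image bump will trigger a source build before CI tells you the hard way.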


Wow, that's crazy. I tried a 6 digit hash and got a 404, then I tried another 6 digit hash and got "This commit does not belong to any branch on this repository, and may belong to a fork outside of the repository."

Insane


> 1) Fork the repo. 2) Hard-code an API key into an example file. 3) <Do Work> 4) Delete the fork.

... yeah if <Do Work> is push your keys to GitHub.


R2 is only "free" until it isn't. Cloudflare hasn't gotten a lot of good press recently. Not something I'd wanna build my business around.


Aside from the casino story (high value target that likely faces tons of attacks, therefore an expensive customer for CF), did something happen with them? I'm not aware of bad press around them in general


R2 egress is free.


Why Rust? Aren't you alienating Python devs from working on it?

I see that UV is bragging about being 10-100x faster than pip. In my experience the time spent in dependency resolution is dwarfed by the time making web requests and downloading packages.

Also, this isn't something that runs every time you run a Python script. It's run once during installation of a Python package.


I actually think that Python's tooling should not be written in Python. Otherwise you end up with at least two versions of Python: one to run the tooling and one to run the project.


I'm not sure of the answer, but one thing Rust has obviously bought them is native binaries for Mac/Windows/Linux. For a project that purports to be about simplicity, it's very important to have an onboarding process that doesn't replicate the problems of the Python ecosystem.


If you are building a production app that uses python in a containerized way, you may find yourself rebuilding the containers (and reinstalling packages) multiple times per day. For us, this was often the slowest part of rebuilds. UV has dramatically sped it up.


Uv has already proven itself faster at seemingly every step, except maybe downloading. Notably, that includes unpacking and/or copying files from the cache into the new virtualenv, which it does very quickly.


It parallelizes downloads and checking of the packages.
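A rough sketch of that download-and-verify pattern (this is not uv's code; the payloads and digests are simulated in memory rather than fetched from a real index):

```python
import concurrent.futures
import hashlib

# Simulated package payloads standing in for real downloads, plus the
# SHA-256 digests an index would publish for integrity checking.
packages = {f"pkg{i}": bytes([i]) * 1024 for i in range(8)}
expected = {name: hashlib.sha256(data).hexdigest() for name, data in packages.items()}


def fetch_and_check(name: str) -> str:
    """Fetch one package and verify its hash; raise on mismatch."""
    data = packages[name]  # a real client would stream from the index here
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected[name]:
        raise ValueError(f"hash mismatch for {name}")
    return name


# Running the (fetch, verify) pairs concurrently is what hides network
# latency: slow transfers overlap instead of queueing one after another.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    done = sorted(pool.map(fetch_and_check, packages))

print(done == sorted(packages))  # True
```

The speedup comes from overlapping I/O waits, not from Rust per se; the same structure works in any language with a thread pool.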

It also doesn't compile .py files to .pyc at install time by default, but that just defers the cost to first import.
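That deferred step is just standard bytecode caching, which the stdlib exposes directly. A minimal illustration of what "compile at install time" means (uv can opt back into install-time compilation via a flag; check its docs for the exact name):

```python
import pathlib
import py_compile
import tempfile

# Compile a module to bytecode explicitly, the way an installer would,
# instead of waiting for the first `import` to do it lazily.
with tempfile.TemporaryDirectory() as d:
    src = pathlib.Path(d) / "mod.py"
    src.write_text("x = 1\n")
    pyc = py_compile.compile(str(src), doraise=True)  # returns the .pyc path
    compiled = pathlib.Path(pyc).exists()

print(compiled)  # True: bytecode was cached ahead of any import
```

Skipping this at install time makes installs faster but moves the same work to each container's first import, which matters if images are rebuilt often but each instance imports only once.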


It runs every time you build a docker image or build something in your CI


so it takes 3 seconds to run instead of 0.3? Don't get me wrong, that's a huge improvement, but in my opinion not worth switching languages over

Features should be developed and tested locally before any code is pushed to a CI system. Dependency resolution should happen once while the container is being built. Containers themselves shouldn't be installing anything on the fly; it should all be baked in exactly once per build.


Modern CI can also cache these dependency steps, through the BuildKit based tools (like Buildx/Dagger) and/or the CI itself (like GHA @cache)


Wait until you realize that "giving up the decently sized ecosystem of Powershell libraries" is a net positive ;-)


Would be nice if the "obscure in URL" feature wouldn't show the text in the textbox when you send it to someone.


Good idea! I've gone ahead and implemented this feature: if "obscure in URL" is turned on, the text won't be visible unless you focus on the textbox (e.g. to edit it).


Well noticed. Good point ...

(Or an additional "Obscure in textbox" checkbox or something along those lines ...)


Dude, let's fix spam callers first that are calling my USA number from a USA number.

This shouldn't be hard. If we can't fix that, then good luck tracking down bad actors on the interwebs.


I've nearly given up on my phone as a device for making calls because of this.


I came on here looking for an article about all of the network issues last night streaming the game. Couldn't find one so I'll rant here in the comments ;-).

In my neighborhood we have 3 ISPs, but one (Google Fiber) only just recently became available, so not many people are on it since we already had Spectrum and AT&T fiber. Lots of people were complaining across different streaming services (YouTube TV, Hulu, Paramount+, etc.) and also across different internet providers (Spectrum and AT&T... just 1 data point for Google Fiber). Lots of buffering, scaling down to extremely low bitrates where you couldn't even see how many timeouts were left and could barely make out the score.

Sending each customer their own bespoke video stream works fine for movies and shows, but apparently works terribly for popular live events.

Some sort of multicast solution would fix this... but then there's DRM.


