Hacker News | pelletier's comments

> Multi-repository pull requests should be a first class feature of any serious source code management system.

Do you have examples of source code management systems that provide this feature, and do you have experience with them? GitHub's repo-centric approach often feels limiting.


Apparently Gerrit supports this with topics: https://gerrit-review.googlesource.com/Documentation/cross-r...


It is very easy to build this yourself. Use a dependency manager of your choice that has git support, create a config listing the versions of the repos, and lock-file it. Whenever you have a change across multiple services, simply change that config in one go, so that every commit of that config describes a deployable state of the whole project.
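A minimal sketch of that idea, assuming a hypothetical `pins.json` file mapping each repo name to its URL and commit SHA (the file name and format are illustrative, not any particular tool's):

```python
import json

def checkout_commands(pins_path):
    """Turn a pins file (repo name -> {url, sha}) into the git commands
    that would materialize that exact deployable state of the project."""
    with open(pins_path) as f:
        pins = json.load(f)
    cmds = []
    for name, pin in sorted(pins.items()):
        cmds.append(f"git clone {pin['url']} {name}")
        cmds.append(f"git -C {name} checkout {pin['sha']}")
    return cmds
```

A cross-service change then becomes a single commit that bumps two or more SHAs in the pins file at once.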


I might be misunderstanding, but that seems off-topic; it's not related to the idea of "multi-repository pull requests".


At a company I used to work at, we used GitHub Enterprise, and some repos definitely seemed to have linked repos or linked commits (I don't remember exactly, but there was some way of linking repos that depended on each other).


Doesn't this license file say that most of ffmpeg is LGPL2? IANAL either, but my understanding is they are fine to distribute their application however they want assuming they did not use any of the opt-in GPL2-licensed functions, and they link to ffmpeg as a shared library.

EDIT: seems like the user has to install ffmpeg separately, so seems like they are in the clear anyway? https://hieudinh.notion.site/How-to-setup-CompressX-93a89b07...


I'm curious to see how the limitation of using Pyodide packages only will play out for non-trivial builds, thinking of all the non-pure-Python code out there that needs to be manually rebuilt to support a non-trivial production app.

Maybe Cloudflare's adoption will help bring more packages into the fold, and if there's an 80/20 rule here, that would be good enough.


I certainly think there's an 80/20 rule here. Most packages are not very hard to port, and generally the ones that are hard to build use features like threads and multiprocessing, graphics cards, raw sockets, green threads, or other capabilities that have no obvious analogue in a webassembly runtime.

As we mention in the blog post, the biggest issues are around supporting server and request packages since they are clearly useful in cloudflare workers but are difficult to port because they frequently use raw sockets and some form of concurrency.
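One rough way to gauge the 80/20 split is the wheel tag: a package that ships a `py3-none-any` wheel contains no compiled extensions and generally ports to a WebAssembly runtime unchanged, while platform-specific tags signal native code that needs a rebuild. A sketch of that check (not Pyodide's actual tooling; wheel filenames follow the `name-version[-build]-pythontag-abitag-platformtag.whl` convention):

```python
def is_pure_python_wheel(filename):
    """A wheel with ABI tag `none` and platform tag `any` contains no
    compiled extensions, so it usually runs as-is under Pyodide/WASM."""
    stem = filename[:-len(".whl")]
    # Last three dash-separated fields are python tag, ABI tag, platform tag.
    python_tag, abi_tag, platform_tag = stem.split("-")[-3:]
    return abi_tag == "none" and platform_tag == "any"
```

Packages failing this check are the ones that tend to need the porting work described above.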


As we build out support for some of these features in the Workers Runtime, we should be able to port Python modules to use them. Some features like raw sockets are already available, so we should be able to make some quick headway here.

(Hood, above, and I are the folks who implemented Python Workers.)


This is probably what they are referring to https://github.com/jart/disaster


Nice. I have been using rmsbolt for a similar feature, but it is very rough. I'll need to give this a try.


Thanks! I need to get better at googling I guess.



How does the generation of validation data for registration work? As far as I understand, this requires details from an actual Apple device (serial number, model, etc.)



They mention the generation needs a plist from an actual Apple device, and provide one of their own in the repository. I wonder what Beeper does. Maybe they have just one serial number? Maybe they have multiple and rotate?


I think it's calling a server that generates validation data (probably with pre-set hardware information so it can run on a Linux machine, which is cheaper, either with emulation as pypush does it, or by loading the macOS executable directly into memory and running the right code snippets there).


I'm curious what the implications of having pre-set hardware info are. Maybe rate-limiting? Or does it make it easier for Apple to flag those particular serial numbers and block the service if they wish?


You mention it's a passion project. If technology is the center of that passion, then pick what you're passionate or curious about!

If not: the tried and true Rails + Postgres.

Add more things only when needed. For example, Alpine.js if you need a bit of interaction that's not covered by Rails' Turbo. If the need for background processing arises, bring in the good_job gem; there may be no need to deploy it separately at first.

For hosting I'm not quite sure these days. Heroku may be on life support, but its feature set covers most bases.


> For hosting I'm not quite sure these days. Heroku may be on life support, but its feature set covers most bases.

I'd probably start with Render for Rails hosting these days. It seems to be the best Heroku descendant.


I wrote one of the Go implementations [0] when TOML was announced and have maintained it since.

As a library implementor, I wish arrays held only one type at a time, though I get that mixing can be useful for users. And as a user, I wish tables were fully defined once (so nothing could be added to them later in the file), especially when working with larger files.

[0]: https://github.com/pelletier/go-toml


I wrote a deep dive while learning how CPython prints stack traces, in order to teach our profiler to interpret them. It was a good learning experience, so I figured I would share.


I am in this situation. I've installed Tailscale on my router and use it as an exit node. Works great for me, and not just with Google.

