The raw point is valid. Sure, this is a poorly edited article and lacks the best research, but can we agree that, like almost everything, we've engineered these things to be commercially short-lived, let alone not especially sustainable to manufacture or maintain? Aside from the continual performance-oriented standardization shifts, the specialization of bicycle types is doing a disservice to sustainability. Also, the "electrification" of bicycles is a completely separate argument, from all angles (manufacture, maintenance, and longevity), and one no different than for all EVs. TBH, this article reads like a ChatGPT dialog.
Unfortunate that we've missed another opportunity for healthy dialog around the benefits of the bicycle.
Basic analysis of open source software's source - for things like "bus factor". Started as a research project, probably going to die as a research project.
That exists, it is just hidden right now. If my intent was to offer this up as a for-the-public-good service, I'd turn it back on. But like most of the other comments here, at some point it just doesn't make sense for me to foot the bill to run it. It would be one thing if it was a simple front-end with a simple back-end, but producing this insight is a little resource-intensive, which creates a cost burden. Academically it wasn't a big deal to run this on a true HPC; the costs had already been covered. Commercially it's a different calculus.
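For anyone curious what a "bus factor" metric even looks like, here's a minimal sketch of one common definition (the smallest number of authors who together account for more than half the commits). All the names and the threshold are my own assumptions for illustration; real tools weigh file ownership, recency, and more:

```python
# Naive bus factor: fewest authors covering > 50% of commit activity.
# This is an illustrative sketch, not the actual analysis described above.
from collections import Counter

def bus_factor(commit_authors, threshold=0.5):
    counts = Counter(commit_authors)
    total = sum(counts.values())
    covered = 0
    # Walk authors from most to least prolific until the threshold is passed.
    for rank, (_, n) in enumerate(counts.most_common(), start=1):
        covered += n
        if covered / total > threshold:
            return rank
    return len(counts)

authors = ["alice"] * 60 + ["bob"] * 25 + ["carol"] * 15
print(bus_factor(authors))  # alice alone covers 60% -> 1
```

A bus factor of 1 here means losing a single contributor strands the majority of the project's knowledge, which is exactly the risk this kind of analysis tries to surface.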
But...the issue is packaging/distributing a CLI built with Elixir. Compared to building a CLI in something like Rust, there is a lot of overhead that comes with a VM-based language and framework, especially if you want to target multiple OSes and processor architectures (or distributions). Not to say that it is impossible, just not as simple. It is one thing to run Mix tasks, or access the Owl API from the REPL; it is another to build an Owl-based app for macOS, Linux, and Windows and get it into users' hands.
It still has limitations (the biggest being the requirement that the OS and architecture match between the builder and the deployment target), but the result is a standalone binary which not only embeds the VM and preloads the app's bytecode, but even "trims" the stdlib to ship only the required functions.
Right, so the moral of the story centers on the target user of the CLI tool. If you're building something for the Elixir community, game on I suppose, though there is still the complexity of maintaining a build environment per OS/arch.
I wonder where WASM/container enters the discussion.
> Firefly compiles Elixir applications faster and more efficiently than the BEAM can, and introduces WASI targeting to run applications in resource-constrained environments.
Containers are already solved; it's trivial to build and boot a mix release. Whether that's appropriate for a CLI tool depends on the complexity of the tool, I guess, but it's not too far from Flatpaks etc., no?
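For what it's worth, the container route really is only a few lines. This is a generic multi-stage sketch, not anyone's actual setup; `my_cli` and the image tags are placeholders:

```dockerfile
# Hypothetical two-stage build of an Elixir mix release.
# Builder and runtime share the same OS/arch, which sidesteps
# the cross-compilation limitation mentioned upthread.
FROM elixir:1.15-alpine AS build
WORKDIR /app
COPY . .
ENV MIX_ENV=prod
RUN mix local.hex --force && mix local.rebar --force \
 && mix deps.get --only prod && mix release

FROM alpine:3.18
# Runtime libs the release's bundled ERTS expects on Alpine.
RUN apk add --no-cache libstdc++ ncurses-libs openssl
COPY --from=build /app/_build/prod/rel/my_cli /opt/my_cli
ENTRYPOINT ["/opt/my_cli/bin/my_cli"]
```

The catch for a CLI, of course, is that users now need a container runtime and a wrapper script to make `docker run` feel like a native command, which is where the Flatpak comparison comes in.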
Undeniably it's not going to be as convenient, but the divide isn't what it used to be. BEAM apps can be compiled to a binary, and as long as that binary was compiled for the platform, that should be good enough.
I'm doubtful I'll see it used much outside the BEAM community, but then again it's been a "successful despite a lack of mass usage" community for a while.
I work at NanoVMs. We don't actually do hosting. What we've found in the past is that most organizations are very wedded to their existing infrastructure (AWS/GCP/Azure, etc.), so you can deploy Nanos to any cloud. I suppose we could turn something on again in the future if we wanted to, but our users/customers are mostly on the big three clouds.
However, you are spot on that deploying Nanos to, say, GCP is very much a Heroku-like experience. I'd encourage anyone who is interested to just try it out.
I worked on a DARPA project a few years before this, where we were using the CBE as the core of a polymorphic processor (one with an FPGA attached to every IO). We were also gutting PS3s to make mission computers for early unmanned systems, running Ubuntu on top. The USAF wasn't the only one: not only were there commercial supply challenges with the PS3, various components were being hoarded by various nation states. We were pretty sure they didn't even know what to do with the parts; it was a basic attempt to prevent projects like this from getting off the ground.
Could you give an example of where Perforce is not OSS-friendly? Sure, the product core is not OSS, but parts of the periphery are, and there's at least one section of the company whose entire business model is supporting OSS:
This is awesome! Super appreciate the effort on this.
One challenge I've had is with the file-based concept, and it losing "shape" quickly. I've taken a few whacks at something different and have settled on a CLI-based kanban-y thing: https://github.com/kitplummer/clikan
But this lacks things like tags - which I appreciate as long as they are searchable in some form.
I loved nb! Until I ran it from a top-level directory and it committed an update to every git repo within that top-level. :D. Definitely operator error, but I learned my lesson and am back to separating notes from todos.
Nice! I also have this pain of the file losing shape quickly. My take is to have a CLI tool that "carries over" all todos which aren't resolved into a new heading. This way the old/resolved items move toward the back/bottom of the file.
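The carry-over idea can be sketched in a few lines. The file format (dated `##` headings, `- [ ]` checkboxes) and the function name are my assumptions for illustration, not the actual tool:

```python
# Hypothetical "carry over": copy all unchecked todos under a fresh
# dated heading at the top, so resolved items drift down the file.
import datetime

def carry_over(text, today=None):
    today = today or datetime.date.today().isoformat()
    lines = text.splitlines()
    open_items = [l for l in lines if l.lstrip().startswith("- [ ]")]
    # New heading with the still-open items, then the old content below.
    return "\n".join([f"## {today}", *open_items, "", *lines])

old = "## 2024-01-01\n- [x] ship release\n- [ ] write docs"
print(carry_over(old, today="2024-01-02"))
```

One nice property of this approach is that the file stays append-only at the top, so the freshest heading is always the working set.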
Cool, that looks quite neat! I agree with you: at some point you have to structure things one way or the other, in order to keep an overview. For me it works somewhat well to organise my data across different files and folders, which are set up in a certain structure. That either requires some manual labour and discipline, or you automate these workflows. That requires "standardised" data formats, however.