
It caught my attention big time -- right up until the point where the word "caracul" was linked to the Zapatista movement, literally, as in to the Wikipedia page on the Zapatistas, which in turn says:

> is a far-left political and militant group that controls a substantial amount of territory in Chiapas, the southernmost state of Mexico.[4][5][6][7]

I don't mean to preach political theory now, but as far as I can see we're already collectively pretty divided (divide and conquer comes to mind): for a project that seems to preach all manner of fairness and correction of a system gone wrong, and is arguably moderately anti-capitalist (in the sense of objecting to some of the status-quo products of Silicon Valley's mode of operation), do we really need to be thrown all the way to the other end of the left-right scale? Is Bonfire arguing for the analogue of a "militant revolution" in software?

Imagining the project now, I am envisioning green-clad militants writing "fair" software. While not without merit, in my opinion the explicit political associations detract from the intrinsic value something like Bonfire could have for us, who have indeed never been more firmly under the boot of the commercial IT industry than we are now.


The way I read these capitalist/anti-capitalist debates is that those for capitalism usually have an idealized version of it in their heads and those against it have some very specific issue in mind.

In Latin America, there are many communities that have suffered because of specific capitalist ventures: a banana plantation, a copper mine, etc.

You have to acknowledge these things and offer a better version of capitalism if you want to diminish this divide.


Meanwhile, those who are for socialism usually have an idealised version of it in their heads and those against it have some very specific issue in mind.

People are funny.


You seem to be recoiling from the idea of average people being armed and militant. Like it's surprising or outside the norm.

I'm going to assume you're a reasonable person and have been watching some news. You probably think, like I do, that it's good for society to follow some laws and have some checks on different groups' power.

How's that working out right now? You know, without meaningful militancy on one side of the political spectrum.

It's been my experience that when people ask for "politics" to be taken out of a thing, they implicitly ask for the politics of the status quo. Which is domination by the commercial software industry and anything else the rich own, because they write the laws.


They're very far from irrelevant -- it depends on what kind of Web development you do, I would say. I have been writing WebAssembly by hand (I mean, a lot can be said about that, but it's a thing) and the spec is authored by the W3C. There are plenty of other things they author, like, you know, any one of the many _CSS_-related specifications.

It's just that with the modern Web 7.0 (or whatever version we're on now), it's the WHATWG that's most prominent, since there's that one spec that defines 90% of what happens on the Web -- it's called "The HTML Standard" or some such. Then you have Google de-facto authoring specs, which may or may not find their way back into the HTML document, but even if they don't, they do make you feel like the W3C has been left behind.


Ironically, perhaps, I just read the article while taking a micro-break from writing a triangle rasterizer (aliased, aka pixel-art / retro) in WebAssembly. It'll probably feature rendering performance orders of magnitude slower than a graphics card from 20 years ago, and I am well aware of it, but the truth is I am not doing it because I am still nurturing a dream of becoming one of the rich rock stars of IT from an era that passed me by those same 20 years ago. No, it's just that I find it pleasurable to do these things, exactly because I don't need to stay competitive doing it -- well, not against hordes of very capable software engineers churning out all kinds of useful, _valuable_ systems. Not least because I am doing things like the above in my spare time, and I have reason to believe there's plenty of need for artistic pet projects done in spare time, on an intake of inspiration.

So yeah, just reminding everyone that not everything is about fierce competition -- if artists can chain-smoke and drink their way through the ups and downs of patronage, so can everyone else.

No one says we should stop being responsible, but all the responsibility and adulting without play is much, much worse, in my opinion, than the alternative. It just so happens that I relax by writing code.

I am still writing other things that have long been invented, and they consistently give me inspiration.

Not sure if I missed the point of the article, but I'm reacting to what I read in it, after all.


Tbh I don't understand the need for games in WebAssembly. Is it really worth the effort of developing an environment for games that runs in the web browser when installing a game through Steam or the Play Store is already very simple and quick?

I find it disturbing/puzzling that there is this fundamental physical behaviour like emission of light with a wavelength of _exactly_ 21cm -- assuming one centimeter wasn't based on any such property but was just a "random" unit of measure that stayed with us historically and through sheer volume of use (in the U.S., inches filled the same niche; they still do). I mean, what are the odds that the wavelength is _exactly_ (the word used in the article) 21cm?


The article does say "precisely 21cm" in the subtitle, repeats it in the "key takeaways" section, and then close to the end of the article there's this:

>By measuring light of precisely the needed wavelength — peaking at precisely 21.106114053 centimeters

Which I assume is the actual measurement every time "21cm" is brought up in this article.
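
For what it's worth, that last figure falls straight out of the hydrogen hyperfine transition frequency and the speed of light. A quick check (the ~1420.405751768 MHz value isn't quoted in the article excerpt above; I'm taking it as the commonly cited figure):

    c = 299_792_458          # speed of light, m/s
    f = 1_420_405_751.768    # hydrogen hyperfine transition frequency, Hz (commonly cited value)
    print(100 * c / f)       # ~21.106114 cm, matching the number quoted above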


No more probable than any other value, whole or otherwise. In particular, it's (per Wikipedia) 21.106cm.

It's funny how our brains find nice whole numbers unsettling in the natural world. I was always sort of weirded out by the distance light travels in a nanosecond: just shy of 1 foot. How weird it is that it flops between systems!
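
A quick sanity check of that "just shy of 1 foot" figure (taking the defined speed of light and 0.3048 m to the foot):

    c = 299_792_458        # speed of light, m/s (exact, by definition of the metre)
    d = c * 1e-9           # distance covered in one nanosecond, in metres
    print(d, d / 0.3048)   # ~0.2998 m, i.e. ~0.9836 ft -- just shy of a foot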


The author uses "precisely" incorrectly, which is quite surprising for an article on physics.


isn't a cm now defined based on the distance light travels in a vacuum in a very small period of time?

so it's not arbitrary really, or rather it probably goes the other way around. a cm used to be based on an arbitrary physical distance but was I think redefined to avoid needing to keep a standard meter cube in Paris.


It started with the grandfather clock. Everyone's clock pendulum needed to be the same length to have the same length of a second. So a meter also happens to be (approximately -- this was before we could easily be precise to several decimal places) the length of a pendulum that cycles at 0.5 Hz (each one-way swing takes a second, so a full back-and-forth cycle takes two) in 9.8 m/s^2 gravity.
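
A rough check of that, using the textbook small-angle pendulum period formula (my addition, not something stated in the comment above):

    import math

    g = 9.8                            # gravity, m/s^2, as above
    T = 2.0                            # full back-and-forth period in seconds (0.5 Hz)
    L = g * (T / (2 * math.pi)) ** 2   # rearranged from T = 2*pi*sqrt(L/g)
    print(L)                           # ~0.993 m -- within about 1% of a metre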


It started with the French.

https://en.m.wikipedia.org/wiki/History_of_the_metric_system

The meter was originally based on the measured dimensions of the Earth.


I think there were multiple competing suggestions at the time; the grandfather clock was one, while the standard ended up being the French-proposed one that you mention.


Ah yes, you're right. Another nice coincidence that a seconds pendulum is less than 1% away from one ten-millionth of the distance between the equator and the pole.


Tanach still does not acknowledge science. So does a 1/10-millionth error even matter in the grand scheme of things?


The standard metre was a rod 1 metre long; you might be thinking of the standard kilo, which is a compact cylinder?


It was; they made the smoothest silicon sphere (the Avogadro project). And now apparently they define it via physics, as mentioned in the OP's article: "namely a specific transition frequency of the caesium-133 atom, the speed of light, and the Planck constant".


To be honest, thinking of all the times I have read an article that at the very least mentioned Turing Machines, I don't recall many occurrences of "Turing Machines" being linked to a Wikipedia article on the subject _or_ to one on Alan Turing, or of any subsequent (in parentheses, for example) elaboration of the concept or brief excerpt on the person -- the reader's knowledge of either seems, more often than not, to be implied.


I think their specific objection was sort of wrong—it isn’t really that similar to Turing, because Turing machines are a very well known concept in CS, which is a whole big field. Lots of blog posts on CS assume you’ve at least taken the 101 level class and know who Alan Turing is.

Retail, uh, theory or whatever is not nearly as widespread (I mean, lots of people stock shelves, but as someone who did, I never thought about why things were laid out the way they were). So, most likely an article about the Gruen Transfer is introducing the idea to the reader, and some background could have been nice.


I suppose the lack of overlap in the "interface surface" (attributes, including callables) between `str` and `Template` should nip this kind of issue in the bud -- code that is passed a `Template` and needs to actually "instantiate" it, by accessing the `strings` and `values` attributes on the passed object, will likely fail at runtime when attempted on a string someone passed instead (e.g. confusing a `t`-string with an `f`-string)?
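
A minimal sketch of that failure mode, assuming Python 3.14's PEP 750 t-strings; the `Template` type with its `strings` and `values` attributes is what the PEP describes, though the exact module path here is my assumption:

    # Illustrative sketch; assumes Python 3.14 / PEP 750 t-strings.
    from string.templatelib import Template

    def render(template: Template) -> str:
        # Interleave the static string parts with the interpolated values.
        parts = [template.strings[0]]
        for value, tail in zip(template.values, template.strings[1:]):
            parts.append(str(value))
            parts.append(tail)
        return "".join(parts)

    name = "world"
    print(render(t"Hello {name}"))  # works: Template exposes .strings / .values
    print(render(f"Hello {name}"))  # AttributeError: 'str' object has no attribute 'strings'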


I've been writing [GNU] Makefiles for years, and have a love-hate relationship with the [GNU Make] tool. I tend to push tooling to the limit; I think it's in part because I believe in "soundness of scope" -- a tool should have a well-defined scope, and within that scope "all bases should be covered". In practice that would mean that with Make I am able to define the dependency graph of prerequisites and targets (files that Make makes) such that it just about handles the graph-resolution complexity for me -- _with variables_, that is.

I love Make because it largely delivers on its promise -- and I am using it almost in the _opposite_ way to what the author describes. That is, I consider phony targets to be an "illegitimate" feature of Make, and avoid them like the plague. While convenient, targets in Make are heavily geared towards being files -- certainly most of the machinery in Make was written to assume so -- and even the well-known (and documented) targets like "install" and "clean" leave a terrible taste in my mouth as of late, despite being very conventional.

The problem with phony targets is that they're hard for Make to reason about (unless you actually turn "install" and "clean" into files), and they break half of the rest of its assumptions about what targets should be and how they behave. The rest of the problem is the linguistic aspect of it -- if I `make install`, am I making an install program/script or what? These kinds of vagaries have led me firmly away from ever using phony targets.

As for the rest of it, Make is terribly archaic, but that also lends it strength, since the flip side of that archaic nature is that it stays very simple.

The "hate" part is me taking a dislike to its bare-bones, in my opinion insufficient variables facility, where only truly global variables (certainly sharing one namespace) exist and juggling these for any non-trivial Makefile is always a problem of its own.

I am no novice with GNU Make, not any longer, but occasionally I need to look back into its manual to remember how e.g. the so-called double-colon rules work (when I suspect I might need one), and what the real difference between `=` and `?=` variable assignment is, for when I want to provide default values and so on.

Lately I've been playing with the idea of _implementing_ a [GNU] Make-compatible tool just to see if the somewhat _patchy_ scope of [GNU] Make can be given more coverage -- for instance, to see whether adding locally-scoped variables and more flexible target definitions can improve Make. What I mean is to experiment with an implementation that retains the fundamental principle and mandate of Make -- dependency graph resolution and reliance on a [normally] UNIX shell -- but "upgrades" the syntax and extends the semantics. Because while Make is useful, it's also at times terribly hard to work with. To paraphrase Bjarne Stroustrup (the man behind C++), "inside Make there is a neat, sound idea struggling to get out".


How is your proposal different from CMake?


Well, most importantly, CMake can't use Makefiles, and my idea revolves around specifically being compatible with Make, in a way where a fork [of mine] would behave equivalently to [GNU] Make for Makefiles which both the fork and [GNU] Make are able to use, while not necessarily the other way around (given how the fork would have features that rely on syntactic constructs [GNU] Make wouldn't be able to parse, for example).

CMake is a different beast, really. While both have in common that they're build automation tools, you could say that, save for some shared ideas, they aren't really that similar once you zoom in past some level of detail. Meaning I can hardly choose to adopt CMake _instead_ of writing a Make derivative if my goal is to _extend_ [GNU] Make. And I have reasons to prefer Make over CMake, so I am absolutely not interested in extending CMake (or in acknowledging that it fits my needs and/or is aligned with the way I like to solve the problems I have used [GNU] Make to solve).

You could say that CMake is the same as [GNU] Make beyond their different syntax, which is true in a sense, but syntax does decide a lot for each of them, I would say. The fundamental syntactic differences between the two become larger as one walks the abstraction ladder upwards, and looking at each from the perspective of a user (tasked with, say, building a large C++ program/library), one has to adopt a slightly different set of concepts specific to each. To that end, I prefer Make's abstractions over CMake's.

Last, the value of my implementing a [GNU, henceforth implied] Make-compatible tool wasn't just in forking an improvement, but also in that once I have written a sufficiently capable fork, say, I can assess _how_ Make was made to work -- in my experience one tends to learn a lot about a piece of software by writing a compatible "emulator" of it. I _can_ read Make's source code, but I really don't want to, because what I have seen suggests the kind of "organic development" that no longer resembles something an outsider, even a C expert, would find easy to grok. It's just the way of those things, unfortunately. Instead, I could pick up Python and write a very bare-bones Make-compatible implementation that would give me a lot of answers to "why does Make work like this?" questions.
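
To give a flavour of the "very bare-bones" bit (purely my illustrative sketch, not Make's actual algorithm -- the rules here are hard-coded stand-ins rather than parsed from a Makefile):

    #!/usr/bin/env python3
    # Toy Make-alike: resolve prerequisites recursively and re-run a recipe
    # when the target is missing or older than any of its prerequisites.
    import os
    import subprocess

    # target -> (prerequisites, recipe lines); stands in for a parsed Makefile
    RULES = {
        "prog": (["main.o", "util.o"], ["cc -o prog main.o util.o"]),
        "main.o": (["main.c"], ["cc -c main.c"]),
        "util.o": (["util.c"], ["cc -c util.c"]),
    }

    def mtime(path):
        return os.path.getmtime(path) if os.path.exists(path) else None

    def build(target, visiting=()):
        if target in visiting:
            raise RuntimeError(f"circular dependency involving {target}")
        prereqs, recipe = RULES.get(target, ([], []))
        for p in prereqs:
            build(p, visiting + (target,))
        t = mtime(target)
        stale = t is None or any((mtime(p) or float("inf")) > t for p in prereqs)
        if stale and recipe:
            for line in recipe:
                print(line)
                subprocess.run(line, shell=True, check=True)  # hand the recipe to the shell

    if __name__ == "__main__":
        build("prog")

Everything interesting about real Make -- pattern rules, variables, implicit rules, parallelism -- layers on top of that skeleton, which is roughly the "neat, sound idea" I would want to keep while reworking the rest.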



