
And even then, there was still an attempted coup to stop the surrender:

https://en.wikipedia.org/wiki/Kyūjō_incident


How Japan made the decision to surrender is well covered in the book "Japan's Longest Day", originally published in 1973. Many of the major players were interviewed. There's a reasonably accurate movie version worth watching, if you're interested in this.

It's a very strange story of decision-making under extreme pressure. No one was in charge. The Navy was barely talking to the Army. The civilian government had been sidelined from control of military matters years before. The Emperor was supposed to be a figurehead. And, as pointed out above, there was an attempted coup to stop the surrender.


Is this named after the overweight cat meme?


No, it was initially developed to manage a "COMmunity INfrastructure", and the name sounds like the word "coming". I didn't know about this meme, but it's a nice coincidence, because this infrastructure is actually a kind of "CHATONS" [1] ("kittens" in French)! Thanks for the reference, which could be useful for a future logo!

[1] https://www.chatons.org/en


I don't think this is the case.

In Python, reST is just one of many supported input formats for docutils: https://docutils.sourceforge.io/README.html#purpose

The entire point of docutils is to parse these formats and convert them using an API: https://www.docutils.org/docs/index.html#api-reference-mater...
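
For example, a minimal sketch of going from a reST string to HTML through that API (the source string here is made up; writer_name is the long-standing argument name, and newer docutils also accepts writer=):

  # Convert a reStructuredText string to HTML via the docutils API.
  from docutils.core import publish_string

  rest_source = "Some *emphasized* text.\n\n- item one\n- item two\n"

  # publish_string returns encoded bytes by default.
  html = publish_string(source=rest_source, writer_name="html")
  print(html.decode("utf-8"))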


My beef is that it doesn't have a reST writer. What I want is a central hub where I can suck in documentation that was originally in different formats, manually edit it in a format that captures the essential semantics, and then feed it through the system again to produce output documents.

If it had that, it would be a 200-foot-tall giant robot; as it is, it's just another documentation generator.


Something like this, perhaps: https://soupault.app/


Ah, you want something like pandoc, but where you can manipulate the internals as needed?


Yes, and with a somewhat different focus. I wish I could throw in 1000 pages of reference documentation, align it semantically with a few pages of documentation about a particular topic, and then enrich those pages with snippets pulled out of the reference material. And things like that.


Sounds like a genuinely good use case for an LLM with RAG.
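
A minimal sketch of the retrieval half, for the curious (assumes the sentence-transformers package; the model name is a common default, and the corpus/topic strings are placeholders):

  # Rank reference-doc chunks by semantic similarity to a topic page;
  # the top hits are candidate snippets to splice in, or to hand to an
  # LLM as context.
  from sentence_transformers import SentenceTransformer, util

  model = SentenceTransformer("all-MiniLM-L6-v2")

  # Placeholder inputs: chunks split out of the 1000 pages of
  # reference material, plus the topic page to enrich.
  reference_chunks = [
      "Reference paragraph about configuring the widget pipeline...",
      "Reference paragraph about API authentication...",
  ]
  topic_page = "A few pages of documentation about a particular topic."

  ref_emb = model.encode(reference_chunks, convert_to_tensor=True)
  topic_emb = model.encode(topic_page, convert_to_tensor=True)

  # semantic_search returns, per query, a ranked list of
  # {"corpus_id": ..., "score": ...} dicts.
  hits = util.semantic_search(topic_emb, ref_emb, top_k=2)[0]
  for hit in hits:
      print(round(hit["score"], 3), reference_chunks[hit["corpus_id"]])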


The number of ads thrown randomly all over this site, including ads that pop up when you try to navigate, is too damn high.


If you are going to shove this many ads at your users, how can you say it's "free"?


This seems like a skill issue on your end. I don't see a single ad.


People are busy. Not everyone is willing to spend their time diving into the latest ad block tricks. Some have decided that it's a better use of time to just avoid sites that use dark patterns and aggressively anti-user-friendly design to get you to click on ads instead of the actual content you came to see. This doesn't mean they lack skill — it's possible they just have different priorities.


> it's possible they just have different priorities

and/or principles


It is important for us (the nerds with skills) to appreciate what the internet is like for those who aren't in the know. You and I will never see an ad; our grandparents will see thousands.


Two NICs and a giant passive heatsink (or a huge, slow, quiet fan) would make this great router material.


If you're feeling a little adventurous, you can find Intel N100 boxes with 4-6 NICs: https://forums.servethehome.com/index.php?threads/cwwk-topto...

I went down this route and haven't had any of the issues with heat, or with having to replace heatsinks, add fans, etc., that others mentioned, but your mileage may vary.

It runs OPNsense on Proxmox, along with some other containers. It's been a great little box.



I wonder if I could run OpenBSD on this.


Same.


https://wiki.odroid.com/odroid-h4/start <- Looks like a very standard x86/AMD64 setup, with nothing that OpenBSD would have any problem with.

There's also the introduction of the H4 model line, straight from the horse's mouth, with all the gory details: https://forum.odroid.com/viewtopic.php?f=168&t=48344


It has only 9 (older PCIe 3.0) lanes, though, which is a bit limiting. In theory you should be able to use 8 of those lanes and get something like 4x 10GbE, which would actually be really neat, but that requires that the lanes not be squandered on anything else. (PCIe 3.0 carries roughly 1 GB/s per lane, so x8 gives about 8 GB/s, comfortably more than the ~5 GB/s that four 10GbE ports can move.)


Just bought myself an MSI Cubi N for exactly that - 1 Gbps ports only, though.


Have you tried turning it off and on and off and on again?


Posting this as someone who owns a 2007 Toyota Matrix. Runs fine. Has modern safety features like ABS and a full complement of airbags. Holds a massive amount of stuff in the back given the size. With good and basic maintenance, will likely run fine for another few decades.

The smartest thing in it is a box that adapts the 6-disc CD changer port on the radio to an iPod 30 pin cable, and also has RCA jacks (that currently go to a $20 bluetooth audio adapter).

Replacing it with something new and electric while it's still working would likely be a net negative environmentally, given the impact of manufacturing anything new.

I could definitely buy something newer, but why? Every new car I've had as a rental just seems more complicated, with more stuff to break, software to be abandoned, or undesirable telemetry.


Running and upgrading any site involves a non-trivial amount of work, especially if it has any sort of dynamic content or CMS.

What would you propose a person or company do if they want to stop running a popular site (i.e., not fund it going forward) but still keep the content up, albeit without any changes?

This keeps happening - older tech sites like https://techreport.com also got sold off, and all the new publishing is sensationalist/cryptoscam crap.

What we need is a benevolent crap-free way to just freeze old sites in place, with all the URLs intact, and all the hosting paid for...


I'm not sure what value that provides either. See https://www.macsurfer.com/


Automattic offers this as a service. I think it’s $35,000 — so not too much, but probably out of range for a lot of places.

This is what I said to 404 Media yesterday [1]:

> “It’s gross. We have decades of this stuff built up and it keeps changing hands and changing hands. And what recourse do we have as the writers?” Warren said. “We also do such a bad job of web preservation, of preserving the history of the web as it was. So it’s kind of fucked to me that these approximations of the past are being recreated in these really blasphemous ways.”

Unfortunately, an archive wouldn’t have solved this particular issue. The problem here was threefold.

First, the original domain had been inaccessible (but still owned by AOL/Verizon/Yahoo/Apollo Global) for close to ten years. It used to redirect to Engadget.com, where most of the archives (sans images, as most of those were mangled in CMS migrations, I guess) still exist. (It is important to note they didn’t redirect individual URIs, just to the main Engadget domain.) That means that what was sold was explicitly the domain and not the content. (Many of us original authors owned the content, it turns out, and AOL and, I guess, its follow-on companies had a non-exclusive perpetual license to it.)

Second, the domain was sold (and explicitly not the content) to a shady, web-hosting-adjacent operator who runs a lot of these sorts of spam/splog operations, who then decided it would be fine to take the content from archive.org, recreate slugs (which I’m pretty sure there is no legal recourse for in any jurisdiction, which is fine), poorly rewrite the articles (this is probably copyright infringement, as many of the headlines and text are extremely close or identical, just made worse because they’ve been mangled by AI summarizers), and then attribute that content to the bylines (incorrect at that) of the old staffers.

Third, this operation then decided to start publishing automated and ripped-off content under those same bylines of previous staffers, potentially creating confusion and professional repercussions, not to mention making a mess of SEO for those authors, etc., etc.

If the owner of a domain or brand determines that there is value in that domain or brand, having archives (which we had here) isn’t enough to stave off absolute ghouls who want to use late 2010s splog techniques to try to game Google or something, attempting to steal the bylines and identities of the former contributors. (The economics of this whole thing make no sense. But whatever. Not my circus.)

I’ll quote myself again (also from my interview with 404 Media):

> “What’s worse than not having a good archive of my work is having one that is bastardized with my name but not my face and not my words on it,” Warren told 404 Media. “If they wanted to try to revive an old brand, fine, but leave the original writers out of it, and leave the old content out of it too because a lot of that old stuff has been rewritten and regurgitated and I don’t have any idea what it is, and it’s not what I actually wrote.”

[1]: https://www.404media.co/a-beloved-tech-blog-tuaw-is-now-publ...


Actually caring about another person.


How do you measure that?


TIL "caring" == "abstract reasoning".


I've been using https://github.com/whipper-team/whipper to digitize CDs, and it supports identifying Hidden Track One Audio (HTOA) when it exists and is not blank.

Add in MusicBrainz Picard and Navidrome and you have a really nice solution.
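
For anyone curious, typical usage is just a couple of commands (subcommand names per whipper's README; the offset step only needs to be run once per drive):

  whipper offset find   # determine the drive's read offset via AccurateRip
  whipper cd rip        # rip the inserted disc and verify the tracks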


Whipper user here also. If you've not yet encountered it (it's not as prevalent in distro repos as Whipper), cyanrip is very much worth a look and has come on in leaps and bounds, with recent updates adding (non-compliant) .cue sheet support.

