Interesting: the entry for the Apollo Guidance Computer (AGC) indicates it used integrated circuits—I had remembered hearing it used RTL (resistor-transistor logic).
It turns out both are true [1]. The "integrated circuits" were sort of "flat-packs" of RTL circuits. I had forgotten that early ICs were not quite what we envision today. Regardless, I suppose ICs were RTL before they were TTL (before they were CMOS, etc.).
In particular, the IBM 1401s (two of them, actually) that you can see demonstrated at the Computer History Museum in Mountain View are transistor-based, and the 1401 was a very successful computer.
Indeed, my dad was a research scientist at a large chemical company, and every scientist had a Friden mechanical calculator, which was capable of multiplying and dividing. But it was not a programmable computer.
When the HP-35 came out, it was cheaper than the annual maintenance contract for the Friden. They bought one and passed it around for a week so everyone could try it out; then all of the Fridens went into the dumpster. Of course he brought one home, and we got to play with it.
Ha ha, the rich kids in my high school physics class had these calculators. It was the first time I had seen one. At over $100 (as I recall), they were completely out of reach for me and half the class.
(And they had to either keep an extra set of batteries handy or have access to an outlet to plug in the cord, since the batteries dying during a test was a real possibility.)
I can suggest our service (previously discussed here: https://news.ycombinator.com/item?id=44849129), which might be helpful. If you want a zero-setup backend to try qqqa, ch.at could be a useful option. We built ch.at, a single-binary, OpenAI-compatible chat service with no accounts, no logs, and no tracking. You can point qqqa at our API endpoint and it should “just work”.
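As a rough sketch of hitting the API directly (assuming the standard OpenAI-style /v1/chat/completions path; point qqqa's base URL at the same place, and no Authorization header is needed since there are no accounts):

    /* Rough sketch, C + libcurl: POST to an OpenAI-compatible endpoint.
     * The /v1/chat/completions path and the model name are assumptions;
     * check the docs for the exact values. Build: gcc chat.c -lcurl */
    #include <stdio.h>
    #include <curl/curl.h>

    int main(void) {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl) return 1;

        const char *url = "https://ch.at/v1/chat/completions";  /* assumed path */
        const char *body =
            "{\"model\":\"gpt-4o\","                             /* may be ignored */
            "\"messages\":[{\"role\":\"user\",\"content\":\"hello\"}]}";

        struct curl_slist *hdrs = curl_slist_append(NULL, "Content-Type: application/json");
        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

        CURLcode rc = curl_easy_perform(curl);  /* response body goes to stdout by default */
        if (rc != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

        curl_slist_free_all(hdrs);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }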
My theory, which I've been developing for a while, is that as technology makes things easier, the perceived average quality goes down over time. I've yet to fully understand the factors that drive this trend, but I feel certain AI will put it into overdrive! I'm not a Luddite or a hater, actually - but the trend is pretty apparent...
Software typesetting/layout. Software music engraving. Hot-melt glue in bookbinding. Those are my three favourite examples of the trend. Technology has made "good enough" easier, at the cost of actually good.
Interesting - I somehow didn't realize that KVM doesn't require root access.
Also, I wonder if this could be adapted to use Apple's Hypervisor.framework. That one also doesn't require root and ought to be able to spin up and down very quickly.
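Apparently the KVM side is just ioctls on /dev/kvm, so as long as your user can open the device node (e.g. via the kvm group on most distros), no root is involved. A minimal check, as I understand it:

    /* Sketch: talk to KVM as an unprivileged user. Assumes the user has
     * rw access to /dev/kvm (kvm group membership on most distros). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/kvm.h>

    int main(void) {
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        if (kvm < 0) { perror("open /dev/kvm"); return 1; }

        printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0)); /* expect 12 */

        int vm = ioctl(kvm, KVM_CREATE_VM, 0);  /* creating a VM needs no extra privilege */
        printf("VM fd: %d\n", vm);

        if (vm >= 0) close(vm);
        close(kvm);
        return 0;
    }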
Author here, was a bit surprised to see this here. I thought there needed to be a good zero-JS LLM site for computer people, and we thought it would be fun to add various other protocols. The short domain hack of "ch.at" was exciting because it felt like the natural domain for such a service.
It has not been expensive to operate so far. If that ever changes, we can think about rate limiting it.
We used GPT-4o because it seemed like a decent general default model. We're considering adding an OpenRouter interface to a smorgasbord of additional LLMs.
One day, on a plane with WiFi (before paying for it), I noticed that DNS queries were still allowed and thought it would be nice to chat with an LLM over DNS.
One interesting thing I forgot to mention: the server streams HTML back to the client, and almost all browsers, going back to the very beginning, will render it as it streams.
However, we don't parse markdown on the server and convert to HTML. Rather, we just prompt the model to emit HTML directly.
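To illustrate the streaming part (a toy sketch, not our actual server code): an HTTP/1.1 response with Transfer-Encoding: chunked can be flushed piece by piece, and the browser renders each chunk of HTML as it arrives.

    /* Toy sketch of streaming HTML over chunked HTTP; browsers render each
     * chunk as it arrives. Not ch.at's real code. Build: gcc stream.c */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    static void send_chunk(int fd, const char *s) {
        char head[32];
        int n = snprintf(head, sizeof head, "%zx\r\n", strlen(s));
        write(fd, head, n);
        write(fd, s, strlen(s));
        write(fd, "\r\n", 2);
    }

    int main(void) {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(8080);
        bind(srv, (struct sockaddr *)&addr, sizeof addr);
        listen(srv, 1);

        int cli = accept(srv, NULL, NULL);   /* then visit http://127.0.0.1:8080 */
        char req[1024];
        read(cli, req, sizeof req);          /* read and ignore the request */

        const char *hdr = "HTTP/1.1 200 OK\r\n"
                          "Content-Type: text/html\r\n"
                          "Transfer-Encoding: chunked\r\n\r\n";
        write(cli, hdr, strlen(hdr));

        send_chunk(cli, "<html><body><p>First piece renders right away...</p>");
        sleep(2);                            /* pretend the model is still generating */
        send_chunk(cli, "<p>...later pieces show up as they stream in.</p></body></html>");
        send_chunk(cli, "");                 /* zero-length chunk ends the response */

        close(cli);
        close(srv);
        return 0;
    }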
> However, we don't parse markdown on the server and convert to HTML. Rather, we just prompt the model to emit HTML directly.
Considering the target audience it probably doesn’t matter, but it sounds like this could lead to pretty heavy prompt injection, user-intended or not. Have you considered that, and are there any safeguards?
The domain is great by the way. Congrats on getting it!
Clear Linux's performance came primarily from function multi-versioning (CPU-specific code paths selected at run time), aggressive compiler flags (-O3, LTO, AutoFDO), kernel tweaks, and a stateless design that minimized I/O overhead.
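To make the function multi-versioning part concrete, this is roughly what it looks like at the source level with GCC's target_clones attribute (a sketch of the mechanism, not Clear Linux's actual patches): the compiler emits several builds of the function and a resolver picks the best one for the host CPU at run time.

    /* Sketch of GCC function multi-versioning. Build: gcc -O3 fmv.c
     * The compiler emits avx2, sse4.2, and baseline clones of dot() and
     * dispatches to the best one for the host CPU at run time. */
    #include <stdio.h>

    __attribute__((target_clones("avx2", "sse4.2", "default")))
    double dot(const double *a, const double *b, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += a[i] * b[i];
        return s;
    }

    int main(void) {
        double a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1};
        printf("%f\n", dot(a, b, 4));  /* same source; AVX2 path on AVX2 machines */
        return 0;
    }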
Yeah, but there is something else here too... I used CachyOS for a heartbeat, and it advertises the same benefits; it just felt slower (notably on boot). Maybe it was just all the graphical loading screens.
There's something Clear had that made it feel modern, familiar, and boring (which might not be for everyone). 90% of my tasks were in VS Code devcontainers, so I kept things simple and out of the base system for the most part.