
The main purpose of alarms is to relieve automation of liability.

I have a Skoda that also has that "feature". I prefer using Maps via Android Auto, but if I'm in that interface and have to cancel it, I do it with a voice command.

Voice commands and Scottish accents are not a great combination.

I think there's an interesting idea behind Gas Town (basically, using supervisor trees to make agents reliable, analogous to how Erlang uses them to make processes reliable), but it's lacking a proper quality ratchet (agents often don't mind changing or deleting tests instead of fixing code) and an architectural function (agents tend to reinvent the wheel over and over, because the context window simply isn't big enough to fit everything in).
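To make the quality-ratchet idea concrete, here's a minimal sketch (the agent invocation, test layout, and rollback are stand-ins, not Gas Town's actual mechanism): hash the test suite before the agent runs, and refuse any run that touches it.

  import hashlib, pathlib, subprocess

  def tests_fingerprint(test_dir="tests"):
      # Hash the test suite so the supervisor can detect an agent
      # "fixing" a failure by editing or deleting tests.
      h = hashlib.sha256()
      for p in sorted(pathlib.Path(test_dir).rglob("*.py")):
          h.update(p.read_bytes())
      return h.hexdigest()

  def supervise(task, max_restarts=3):
      # Erlang-style supervision: restart the worker on failure,
      # but hold the ratchet: the test suite must not change.
      baseline = tests_fingerprint()
      for _ in range(max_restarts):
          subprocess.run(["claude", "-p", task])  # hypothetical agent call
          if tests_fingerprint() != baseline:
              # Ratchet violated: revert the tests and go around again.
              subprocess.run(["git", "checkout", "--", "tests"])
              continue
          if subprocess.run(["pytest", "-q"]).returncode == 0:
              return True  # tests pass and the suite is untouched
      return False  # give up and escalate to a human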

However, Steve Yegge's recent credulous foray into promoting a crypto coin, which (IMO) transparently leveraged his audience and buzz to execute a pump-and-dump scheme, with him as an unwitting collaborator, makes me think all is not necessarily well in Yegge land.

I think Steve needs to take a step back from his amazing productivity machine, have another look at that code, and consider whether it's really production quality.


Indeed, the Gas-Town token is down 97% from its all-time high; see https://coinmarketcap.com/currencies/gas-town/

He's obviously a smart guy, so he definitely should've known better. It's weird how these AI evangelists use AI for everything, yet somehow he didn't ask ChatGPT what all of this means and whether it might cause reputational damage. I just asked whether I should claim these trading fees, and it said:

   Claiming could be interpreted as:

   * Endorsing the token
   * Being complicit if others get rugged later
   * This matters if your X account has real followers.

and in the end it told me NOT to claim the fees unless I'm OK with being associated with that token.

When you're under a lot of stress, your internal evaluation function for what is moral can start to break down. It may have been hard for him to turn the money down, especially if he's addicted to the sense of power he's getting from his coding agent spend. As he said, his wife suggested they can't afford it.

There's another thing. A certain type of engineer seems to get sucked into Amazon's pressure culture. They either are, or end up, a bit manic: laid back and relaxed one day (especially after holidays), wound up and under a lot of internal pressure to produce the next, and a lot more of the latter. Something like Gas Town must be a crazy fix when you're feeling that pain. Combine that with the vision that if you don't keep up, you're unemployed or unemployable in 12 to 24 months, and you might feel you have no choice but to spend every waking minute at it.

It's a bit (more than a bit) rude to analyse someone at a distance. And to be honest, I think something like Gas Town is probably one of the possible shapes of things to come. I don't think what I can observe looks super healthy, is all.


> Indeed, the Gas-Town token is down 97% from its all-time high

What else could possibly have happened? Surely everyone put their money in with the express intention of participating in a pump and dump.

Not taking the money would have been the high road. I don't think basing the economy on gambling and scams is good for society. But who could realistically claim to be a 'victim' here?


> have another look at that code

So true. beads[0] is such a mess. It keeps breaking with each release. Can't understand how people rely on it for their day-to-day work.

[0] https://github.com/steveyegge/beads


That's been my experience as well. I like the idea of Beads, but it's fallen apart for me after a couple of weeks of moderate use on two different projects now. Luckily, it's easy to migrate back to plain ol' Markdown files, which work just as well and have never failed me.

> have another look at that code

That would assume he's even looked at the code in the first place - I think his whole thesis is based on you never looking at the code.


> Steve Yegge's recent credulous foray into promoting a crypto coin

I didn't notice that. Can you give me a source?



There's some related discussion here: https://news.ycombinator.com/item?id=46654878

"Quality ratchet" is such a great name. Thanks for that.

I read this post as saying he won’t take funding from VCs, but he will from (his own word) crypto-bros?

In practice you see noticeable degradation in streaming-read performance for large files written after the pool is about 85% full. Files you used to be able to read at 500+ MB/s can drop to 50 MB/s. It's fragmentation, and in my experience it's fairly scale invariant.

I scrub once a quarter because scrubs take 11 days to complete. I have an 8x 18TB raidz2 pool, and I keep a couple of spare drives on hand so I can start a resilver as soon as an issue crops up.

In the past, I've gone a few years between scrubs. One system had a marginal I/O setup and was unreliable under high streaming load; when copying the pool off it, I had to throttle the I/O to keep it reliable. No data loss, though.

Scrubs are intensive. IMO they'll provoke failure in drives sooner than not doing them would. But those are the kind of failures you want to bring forward, if you can afford the replacements (and often the drives are under warranty anyway).

If you don't scrub, you generally start seeing one of two things eventually: delays in reads and writes, because the drive's error recovery is reading and rereading to recover data; or, if you have that drive behaviour disabled via firmware flags (and you should, unless you're resilvering and on your last disk of redundancy), zfs kicking a drive out of the pool during normal operations.

If I start seeing unrecoverable errors, or a drive dropping out of the pool, I'll disable scrubs if I don't have a spare drive on hand to start mirroring straight away. But it's better to have the spares. At least two, because often a second drive shows weakness during resilver.

There is a specific failure mode that scrubs defend against: silent disk corruption that only shows up when you read a file, in files you almost never read. It's a pretty rare occurrence; it's never happened to me in about 50 drives' worth of pools over 15 years or so. The way I think about this is: how is it actionable? If it's not a failing disk, you need to check your backups. And thus your scrub interval should be tied to your backup retention.
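For what it's worth, a rough sketch of how I'd wire that up (the pool name and alerting are placeholders; the interval itself lives in cron):

  import subprocess

  POOL = "tank"  # placeholder pool name

  def pool_healthy():
      # `zpool status -x` prints "all pools are healthy" when clean
      out = subprocess.run(["zpool", "status", "-x"],
                           capture_output=True, text=True).stdout
      return "all pools are healthy" in out

  def alert():
      # placeholder: page yourself however you like
      print("pool needs attention; check `zpool status -v`")

  def scheduled_scrub():
      # run from cron at whatever interval matches your backup retention
      if pool_healthy():
          subprocess.run(["zpool", "scrub", POOL], check=True)
      else:
          alert()  # don't pile a scrub onto an already-ailing pool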


That's a fine fit of pique - and I once had an awkward file on one of my zfs pools, about three pools ago - but how does it leave you better off, if you want what zfs offers?

> That's a fine fit of pique

So you're rejecting a story about a real bug because...?

> how does it leave you better off

That's a really mercenary way to look at learning about your tools.

But presumably they take smaller risks around zfs systems than they otherwise would.


If by "block by block" you mean you stop using an IDE and spend most of your time looking at diffs, sure. Because in a well-structured project, that's all you need to do now: maintain a quality bar and ensure Claude doesn't drop the ball.


I'm like you. I get on famously with Claude Code with the Opus 4.5 2025.11 update.

Give it a first pass from a spec. Since you know how it should be shaped, you can give an initial steer, but focus on features first, and build with testability.

Then refactor, with examples in prompts, until it lines up. You already have the tests; the AI can ensure it doesn't break anything.

Beat it up more and you're done.


> focus on features first, and build with testability.

This is just telling me to do this:

> To use it the way you are using it we would instead have to allow it to replace the part that happens (or can happen) away from the keyboard: the mental processing of the code.

I don't want to do that.


I feel like some of these proponents act as though a poet's goal is to produce an anthology of poems, and that they should be happy to act as publisher and editor, sifting through the outputs of some LLM stanza generator.

The entire idea of using natural language for composite or atomic command units is deeply unsettling to me. I see language as an unreliable abstraction even with human partners I know well. It takes a lot of work to communicate anything nuanced, even with vast amounts of shared context. That's the last thing I want between me and the machine.

What you wrote further up resonates a lot with me, right down to the aphantasia bit. I also lack an internal monologue. Perhaps because of these, I never want to "talk" to a device as a command input. Regardless of whether it is my compiler, smartphone, navigation system, alarm clock, toaster, or light switch, issuing such commands is never going to be what I want. It means engaging an extra cognitive task to convert my cognition back into words. I'd much rather have a more machine-oriented control interface where I can be aware of a design's abstraction and directly influence its parameters and operations. I crave the determinism that lets me anticipate the composition of things and nearly "feel" the transitive properties of a system. Natural language doesn't work that way.

Note, I'm not against textual interfaces. I actually prefer the shell prompt to the GUI for many recurring control tasks. But typing works for me where speaking would not. I need editing to construct and proof-read commands, which may not come out of my mind and hands with the linearity the command buffer assumes. I prefer symbolic input languages where I can more directly map my intent onto the unambiguous, structured semantics of the chosen tool. I also want conventional programming syntax, with unambiguous control flow and computed expressions, for composing command flows. I do not want the vagaries of natural language interfering here.


The problem comes in when you need to flip a flag that isn't set in the default kernel build for compatibility with your hardware and configuration.


Exactly: then you are depending on that third party (be it MS, Apple, Valve, Debian, etc.) to care enough about your obscure setup to support it.


Well if you walk backwards 10 paces and look at the big picture here, what MS did enables anti-cheat attestation via TPM, and that in turn can act as a feature that structurally - via the market - reduces the appeal of Linux.

Signing your own custom-built kernel (if you need to adjust flags etc., as I do) won't result in a certificate chain that will pass the kind of attestation being sketched out by the OP article here.


Yes because you’re trying to communicate that trust to other players of the game you’re playing as opposed to yourself.

It’s why I hate the term “self-signed” vs “signed” when it comes to tls/https. I always try to explain to junior developers that there is no such a thing as “self-signed”. A “self-signed” certificate isn’t less secure than a “signed” certificate. You are always choosing who you want to trust when it comes to encryption. Out of convenience, you delegate that to the vendor of your OS or browser, but it’s always a choice. But in practice, it’s a very different equation.

