Hacker News | perrygeo's comments

Having been in this situation more than once, recreating a concept from scratch when you've already coded it once takes ~20% of the time. This also tracks with my long-term empirical observation that roughly 80% of a software project is maintenance, testing, debugging, monitoring, fixing bugs, planning, refactoring, etc.

Sitting down to an editor and typing out ASCII characters is the smallest and least consequential part of software development. And that was _before_ LLMs entered the equation - now it's not even strictly necessary. The software industry needs to get over its obsession with coding as an activity, and with code as an asset. Code is at best a necessary liability. Software systems are what we should be focused on.


> (configure to run in CI)

Every time I've believed this, I've come back to broken CI.

Depending on your language ecosystem, your CI workflows and build deps stand a greater chance of breaking than your code does.


The main thing that breaks for me is that GitHub Actions drops support for Python versions that are EOL, but usually I can fix that with a change like this one: https://github.com/simonw/datasette-scale-to-zero/commit/1ae...
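
The shape of the fix is usually just dropping the EOL versions from the test matrix in the workflow file. A generic sketch of that kind of change (not the exact commit above), assuming a standard actions/setup-python matrix:

    # .github/workflows/test.yml - hypothetical example, not the linked commit
    name: Test
    on: [push, pull_request]
    jobs:
      test:
        runs-on: ubuntu-latest
        strategy:
          matrix:
            # was something like ["3.8", "3.9", "3.10", "3.11"]; 3.8 is EOL, so drop it
            python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-python@v5
            with:
              python-version: ${{ matrix.python-version }}
          - run: pip install -e '.[test]'
          - run: pytest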

I jumped into a new-to-me Typescript application and asked Claude to build a thing, in vague terms matching my own uncertainty and unfamiliarity. The result was similarly vague garbage. Three shots and I threw them all away.

Then I watched someone familiar with the codebase ask Claude to build the thing, in precise terms matching their expertise and understanding of the code. It worked flawlessly the first time.

Neither of us "coded", but their skill with the underlying theory of the program allowed them to ask the right questions, which was infinitely more productive in this case.

Skill and understanding matter now more than ever! LLMs are rapidly pushing us from specialized technicians toward theory builders.


For sure, directing attention to valuable context and outlining problems to solve within it works way, way better than vague uncertainty.

Good LLMing seems to be about isolating the right information and instructing it correctly from there. Both the context and the prompt make a tremendous difference.

I've been finding recently that I can get significantly better results with fewer tokens by paying mind to this more often.

I'm definitely a casual though. There are probably plenty of nuances and tricks I'm unaware of.


Thank you. Fixing a vitamin deficiency is an obvious benefit to physical health. But does it "cure depression"? To the extent that your depression was caused by a vitamin deficiency, sure. But then didn't you just cure a vitamin deficiency? I'm sure a lot of things start improving once your body starts getting proper nutrients after years of neglect! That doesn't indicate causality.

As far as we can tell, "depression" is a label we put on a big bag of issues with similar symptoms but potentially different interventions. Vitamin D deficiency may very well be one of those, or lead to one, and it definitely doesn't help when it coexists with something else.

Wat? Android, Linux, and BSD (incl. iOS and macOS) absolutely dominate the market in terms of mind share and deployment. I can't think of another philosophy that is so directly, causally responsible for billions of dollars of realized value.

It sounds like you're parroting the corporate line of the early 80s. "Making money directly selling software artifacts is the only way to win." Which, as we know in retrospect, was a completely failed strategy, steamrolled by companies which... wait for it... adopted more flexible technology based on the Unix philosophy.


I guess I will never understand the microservices vs monolith debate. What about just "services"? There are 1001 reasons you might want to peel off functionality into a separate service. Just do that, without making it into some philosophical debate.

I get that Gas Town is part tongue-in-cheek, a strawman to move the conversation on Agentic AI forward. And for that I give it credit.

But I think there's a real missed opportunity here. I don't think it goes far enough. Who wants some giant, complex system of agents conceived by a human? The agents, their roles, and their relationships could be dynamically configured according to the task.

What good is removing human judgment from the loop, only to constrain the problem by locking in the architecture a priori? It just doesn't make sense. Your entire project hinges on the waterfall-like nature of the agent design! That part feels far too important, but Gas Town doesn't have much curiosity at all about changing it. These Mayors, and Polecats, and Witnesses, and Deacons are but one of infinite ways you could arrange things. Why should there be just one? Why should there be an up-front design at all? A dynamic, emergent network of agents feels like the real opportunity here.


Speaking only of written communication here: I've noticed a distinct trend of people stopping the documentation, comments, release notes, etc. intended for human consumption and devoting their writing efforts instead to the skills, prompts, and CLAUDE.md files intended for machines.

While my initial reaction was dystopian horror that we're losing our humanity, I feel slightly different after sitting with it for a while.

Ask yourself, how effective was all that effort really? Did any humans actually read and internalize what was written? Or did it just rot in the company wiki? Were we actually communicating effectively with our peers, or just spending lots of time trying to? Let's not retcon our way into believing the pre-AI days were golden. So much tribal knowledge has been lost, NOT because no one documented it but because no one bothered to read it. Now at least the AI reads it.


For me, the loneliest period of my life was when I was socially active but hanging out with people that I didn't really like or respect. Don't neglect spiritual and mental health as a strong component of loneliness. It's not always about dragging your body from one event to another to maximize the number of people in your life. You have to make sure your mind, body and spirit are present and aligned.


It's remarkable how these papers show a deep understanding of programming 50 years ago. Even with anemic hardware, the limit is always the programmer's brain - as uncomfortable as that is to admit. Half a century of new tech, AI, the cloud, etc. later, we still hit "terminal trauma" fairly quickly in the development cycle, almost like clockwork. All the tools and technical tricks don't seem to matter compared to our ability to hold the application in our heads.

