
And you missed my point.

Yes, calculators do that. I'm arguing that "AI" does not let programmers write better code faster. It lets them write worse code faster, or better code slower.



I'm sure people said the same thing about compilers. And then interpreters. Even today people complain about interpreted languages being too slow and not requiring people to understand "enough" of what's actually happening.

Turns out that really doesn't matter. I think your argument is incredibly weak; the fact that some people don't use these tools effectively doesn't mean that nobody can. Whoever figures this stuff out is going to win, that's just how it works.

That is to say, there will always be a niche for people who refuse to move up the chain of abstraction: they're actually incredibly necessary. However, as low-level foundations improve, the possibilities enabled higher up the chain grow at an exponentially higher rate, and so that's where most of the work is needed. Career-wise it might be better to avoid AI if that's what you want to do, but as a business I can't see a dogmatic stance against these tools being anything but an own goal.


> Turns out that really doesn't matter.

Except that it does!

For every level of abstraction, you lose something, and abstractions are leaky.

The lower levels of abstraction make you lose the least, and they are also the least leaky. The higher you go, the more you lose and the leakier the abstractions get.

What I'm claiming is that these "AI" tools have definitely reached the point where the losses and the leaks are too large to justify. And I'm betting my career on that.


We all rely on abstractions over layers we don't deal with directly, that's just a fact. You're not running a home-grown OS on custom-built hardware made from materials you mined out of the ground yourself. AI is just another layer. Not everyone operates on the highest, newest layer, and that's absolutely fine. You can carve your niche anywhere you like. Telling yourself that the layer above you isn't feasible isn't going to do you any favors but it does generate buzz on social media which seems like it's the goal here.

You're not betting anything because the cost for you to change your mind and start working with AI tools is exactly 0. This rhetoric is just marketing. I'm sure you'll find the customers that are right for you, but you can at least admit that this kind of talk is putting the aesthetic preference of what you want work to look like above what's actually the most effective. Again, I'm sure you'll find customers who share those aesthetic preferences, but to pretend like it's actually an engineering concern is marketing gone too far.


> We all rely on abstractions over layers we don't deal with directly, that's just a fact.

Did I ever deny that? Sure, some of those layers are worth it. That doesn't address my assertion that these "AI" tools are not.

> Telling yourself that the layer above you isn't feasible isn't going to do you any favors but it does generate buzz on social media which seems like it's the goal here.

You're halfway there.

> You're not betting anything because the cost for you to change your mind and start working with AI tools is exactly 0.

And here is where you contradict yourself.

If I'm getting loud about this bet, and winning customers because of it, then it will cost me a lot to start working with "AI" tools. Those customers will have come to me because I don't use these tools, so if I start, I could easily lose all of them!

> This rhetoric is just marketing.

Yep! But that's what makes my bet actually cost something. I'm doing this on purpose.

> I'm sure you'll find the customers that are right for you, but you can at least admit that this kind of talk is putting the aesthetic preference of what you want work to look like above what's actually the most effective.

No, I will not admit that because I believe very strongly that my software will be better, including engineering-wise, than my competitors who use these "AI" tools.


> Yes, calculators do that. I'm arguing that "AI" does not let programmers write better code faster. It lets them write worse code faster, or better code slower.

The idea isn't to write better code faster, it's to build better products faster.

Although IMO, in the future AI will probably also enable programmers to write better code (faster, with fewer bugs, more secure, more frequently refactored, etc.)


> The idea isn't to write better code faster, it's to build better products faster.

All else being equal, better code means better products.

Also, to have a better product without better code, you're implying that the design of the product is better and that these "AI" tools help with that.

Until they can reason, they cannot help with design.


I think that, all else being equal, better code means you aren’t changing the system as fast and have likely stagnated on the business or growth side. Maybe that is appropriate for where your company is, but “worse is better” wins so often.

And I would bet that AI design would help in cases where the existing designers are bad, e.g. so much open-source UI (that is, not CLI UX) written by devs, but it is still a ways off from top-tier quality like Steve Jobs's.

Maybe this is like the transition from hand-crafted things to machined things: we go from a world with some excellent design and some meh design to a world with more uniform but less great design.


I don't need my business to grow. I want to support myself and my wife. That's it. You can call that whatever you like, but stagnation isn't it, unless you think that SQLite is stagnant because SQLite had the same business model.

"AI" design will not help until we have a true AI that can reason. (I don't think we ever will.)

Why is reasoning necessary? Because design is about understanding constraints and working within them while still producing a functional thing. A next-word-predictor will never be able to do that.


GPT4 can clearly already reason IMO (I mean, it can play chess fairly well without ever being taught, or if you create a puzzle from scratch and give it to it, it can try to work it out and describe the logical approach it took). It’s definitely surprising that a next-word generator has developed the ability to reason, but I guess that’s where we are!

What is your definition of reasoning that you do not think GPT-4 would demonstrate signs of?


> What is your definition of reasoning that you do not think GPT-4 would demonstrate signs of?

Heh, there have been many attempts to define reasoning. I haven't seen a good one yet.

However, I'm going to throw my hat into the ring, so be on the lookout for a blog post about that. I've got a draft and a lot of ideas. I'm spending the time to make it good.


Well GPT4 certainly fulfils the existing definitions of reasoning, so maybe you should call your thing something else instead of redefining ‘reasoning’ to mean something different?

Otherwise it’s just moving the goalposts.


GPT4 is certainly not fulfilling the definition of reasoning. It's borrowing the intelligence of every human who wrote something that went into its model.

To demonstrate this, ask it to prove something that most or all people believe. Say some "intuitive" math thing. Perhaps the fact that the factorial grows faster than any exponential function.

And no, don't just have it explain it, have it prove it, as in a full mathematical proof. Give it a minimal set of axioms to start with.
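
For reference, here is a sketch of one standard argument, just to make concrete the kind of proof I mean (the base c, the cutoff N, and the variable n are simply the notation of this sketch):

    \textbf{Claim.} For every $c > 1$, $\lim_{n\to\infty} n!/c^n = \infty$.

    \textbf{Sketch.} Fix an integer $N > 2c$. For every $n > N$,
    \[
      \frac{n!}{c^n}
        = \frac{N!}{c^N}\,\prod_{k=N+1}^{n}\frac{k}{c}
        \;\ge\; \frac{N!}{c^N}\,2^{\,n-N}
        \;\longrightarrow\; \infty ,
    \]
    since each remaining factor satisfies $k/c > N/c > 2$. Hence $c^n = o(n!)$.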

Merriam-Webster's definition of "reasoning" [1] says that reasoning is:

> the drawing of inferences or conclusions through the use of reason

So starting GPT4 off with some axioms would give it a starting point to base its inferences on.

Then, if it does prove it, take away one axiom. Since you started with a minimal set, it should now be impossible for GPT4 to prove that fact, and it should tell you this.

Having GPT4 prove something with as few axioms as possible, and also admit that it cannot prove it with too few axioms, is a great test of whether it is truly reasoning.

[1]: https://www.merriam-webster.com/dictionary/reasoning


For an AI to be reasoning, it doesn’t have to be able to reason about everything at every level. Most humans can’t rediscover fundamental mathematical theorems from basic axioms, particularly if you keep removing axioms until they fail, but I don’t think that means most humans are unable to reason.

Take this problem instead which certainly requires some reasoning to answer:

“Consider a theoretical world where people who are shorter always have bigger feet. Ben is taller than Paul, and Paul is taller than Andrew. Steve is shorter than Andrew. Everyone walks the same number of steps each day. All other things being equal, who would step on the most bugs and why?”

I think it’s a logical error to say “AI can’t reason about this, so that proves it can’t reason about anything at all” (particularly if that example is something most humans can’t do!). The LLM’s reasoning is limited compared to human reasoning right now, although it is still definitely demonstrating reasoning.
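
(To make the deduction chain concrete: it is small enough to spell out mechanically. The following is only a toy sketch, in Python, encoding nothing beyond the premises stated above:)

    # Toy illustration of the puzzle's deduction chain (illustrative only).
    # Premise: the comparisons give the height order Ben > Paul > Andrew > Steve.
    height_order = ["Ben", "Paul", "Andrew", "Steve"]  # tallest first

    # Premise: shorter people always have bigger feet,
    # so foot size is the reverse of the height order.
    foot_size_order = list(reversed(height_order))  # biggest feet first

    # Premise: everyone walks the same number of steps, so (all else equal)
    # the biggest feet cover the most ground and squash the most bugs.
    print(foot_size_order[0])  # -> Steve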


> "Consider a theoretical world where people who are shorter always have bigger feet. Ben is taller than Paul, and Paul is taller than Andrew. Steve is shorter than Andrew. Everyone walks the same number of steps each day. All other things being equal, who would step on the most bugs and why?"

Because Ben is the tallest, his feet are the biggest, and because he takes the same number of steps as the others, the area he steps on is larger than the area the others step on.

Therefore Ben is most likely to be the one to step on the most bugs.

Easy. And I'm not brilliant.

The problem with testing these tools is that you need to ask it a question that is not in their training sets. Most things have been proven, so if a proof is in its training set, the LLM just regurgitates it.

But I also disagree: if the "AI" can't reason about that, it can't reason at all, because that one is so simple my pre-kindergarten nieces and nephews can do it.

But even if not, the LLMs should have "knowledge" of exponential functions and factorials, because the humans who wrote the material in their training sets did. So it's not a lack of knowledge.

And I claim that most humans could rediscover theorems from basic axioms; you've just never asked them to.


“In this theoretical world, shorter people have bigger feet. Given the information provided, we can deduce the following height order:

Ben (tallest)
Paul
Andrew
Steve (shortest)

Since shorter people have bigger feet in this world, we can also deduce the following order for foot size:

Steve (biggest feet)
Andrew
Paul
Ben (smallest feet)

Assuming that everyone walks the same number of steps each day and all other things being equal, the person with the biggest feet would be more likely to step on the most bugs simply because their larger foot size would cover a greater surface area, increasing the likelihood of coming into contact with bugs on the ground.

Therefore, Steve, who is the shortest and has the biggest feet, would step on the most bugs.”

GPT4 solved it correctly. You didn’t.


My bad. I would have if I hadn't gotten mixed up on the shorter vs taller. You know this too.

And GPT4 didn't solve it correctly. It's a probability, not a certainty, that the shortest person will step on more bugs.


Sure, you would have got it right if you didn't get it wrong.

At the very least, this should be evidence that the problem wasn't a totally trivial, pre-kindergarten-level problem, though, and it did manage to solve it correctly.

It required understanding new axioms (shorter = bigger feet) and inferring that people with bigger feet would crush more bugs, without this being mentioned in the challenge.

Your dismissal of the AI as having messed up because it didn't phrase the correct answer the way you liked is a little harsh IMO, as the AI's explanation does make it clear that it is speaking in terms of likelihoods ("the person with the biggest feet would be more likely...").


That mix up must be the human touch you’ve spoken so highly of.


That all makes sense.


Everyone has limited time, and if AI assistance can increase the speed at which you develop and iterate the product to better match user needs, that is how it can result in a better product.

Equally, if it can help devs launch a month earlier, that’s a huge advantage in terms of working out early product/market fit.

All things being equal, I would rather have a company with better product/market fit than one with great code (even though both are important!).


> if AI assistance can increase the speed at which you develop and iterate the product to better match user needs, that is how it can result in a better product.

That's a very big "if", and one I just don't think will pan out.

Also, that only helps at the beginning. As the product gets more complex, I believe the AI will help less and less, until velocity becomes slower than that of companies like mine.

And product/market fit is just a way for companies to cover up the fact that their founders wanted to found a company, not solve a real problem. If you solve a real problem first, founding a company is simple and you "just" have to sell your solution.



