> Such strong speculative predictions about the future, with no evidence. How can anyone be so certain about this?

The evidence is all around you. Anyone who has made a serious attempt to add AI to their current life and work process will fairly quickly notice that their productivity has doubled.

Now, do I, as a random software engineer who is now producing higher-quality code twice as fast, know how to personally capture that value with a company? No. But the value is out there for someone to capture.

> It's like everyone is in the same trance and simply assuming and repeating over and over that This Will Change Everything

It already is changing everything, in multiple fields. Go look up what happened to the online art commission market. It got obliterated over a year ago and has been replaced by people getting images from Midjourney, etc.

Furthermore, if you are a software engineer and you haven't included tools like GitHub Copilot or Cursor AI in your workflow yet, I simply don't consider you to be a serious engineer anymore. You've fallen behind.

And these facts are almost immediately obvious to anyone who has been paying attention, in the startup space at least.


> Furthermore, if you are a software engineer and you haven't included tools like GitHub Copilot or Cursor AI in your workflow yet, I simply don't consider you to be a serious engineer anymore. You've fallen behind.

That sounds like you're fresh out of college. Copilot is great at scaffolding but doesn't do shit for bug fixing, design, or maintenance. How much scaffolding do you think a senior engineer does per week?


I started teaching myself programming 40 years ago and I believe that Copilot and other AI programming tools are now an essential part of programming. I have my own agent framework which I am using to help complete some tasks automatically.

Maybe take a look at tools like aider-chat with Claude 3.5 Sonnet. Or just have a discussion with gpt-4o about any programming area that you aren't particularly familiar with already.

Unless you've literally decided you've learned everything you need and never try to solve new types of problems or use new (to you) platforms...


40+ years of coding here. I've been using LLMs all day and getting a large boost from it. The last thing I did was figure out how to change our web server to have more worker processes. It took a half dozen questions to cure a lot of ignorance and drill down to the right answer. It would have taken a lot longer with just a search engine. If you're not seeing the large economic advantage of these systems, you're not using them like I am.
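
(The server in question isn't named above, so take gunicorn as a stand-in. The kind of change those questions lead you to is tiny - a sketch under that assumption, not the actual fix:)

    # gunicorn.conf.py - sketch assuming gunicorn; the real server above isn't named
    import multiprocessing

    # Rule of thumb from the gunicorn docs: (2 x CPU cores) + 1 workers
    workers = multiprocessing.cpu_count() * 2 + 1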

> If you're not seeing the large economic advantage of these systems you're not using them like I am

I just read the manual.


do you flip to the index at the back of the book to find the pages that reference a topic, or do you use ctrl-f?

I also use the table of contents.

I think that one of the reasons there's a surprising amount of pushback is that a lot of developers don't like the sort of collaborative, unstructured workflow that chat-oriented tools push onto you.

I once worked with someone who was brilliant but fell apart when we tried to do pair programming (an actuarial major who had moved into coding). The verbal communication overhead was too much for him.


This is a really interesting observation that makes a lot of sense to me. I can relate to this and it really helps to explain my own skepticism about LLMs "helping" with programming tasks.

I've always thought of software development as an inherently solo endeavor that happens entirely inside of one's own mind. When I'm faced with a software problem, I map out the data structures, data flows, algorithms and so on in my mind, and connect them together up there. Maybe taking some notes on a sheet of paper for very complex interactions. But I would not really think of sitting down with someone to "chat" about it. The act of articulating a question "What should this data structure look like and be composed of?" would take longer than it would take to simply build it and reason about it in my own brain. This idea that software is something we do in a group socially, with one or more people talking back and forth, is just not the way I operate.

Sure, when your software calls some other person's API, or when your system talks to someone else's system, or in general you are working on a team to build a large system, then you need to write documents and collaborate with them, and have this back-and-forth, but that's always kind of felt like a special case of programming to me.

The idea of asking ChatGPT to "write a method that performs a CRC32 on a block of data" seems silly to me, because it's just not how I would do it. I know how to write a CRC function, so I would just write it. The idea of asking ChatGPT to help write a program that shuffles a deck of cards and deals out hands of Poker is equally silly because before I even finished writing this sentence, I'm visualizing the proper data structures that will be used to represent the cards, the deck, and players' hands. I don't need someone (human or AI) to bounce ideas off of.
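
(To be concrete about how little there is to "chat" about, here's roughly the sketch that forms in my head for the card-dealing case - my own illustration in Python, not model output:)

    import random

    RANKS = "23456789TJQKA"
    SUITS = "shdc"  # spades, hearts, diamonds, clubs

    # A card is a two-character string like "As" or "Td"; the deck is a list.
    deck = [r + s for r in RANKS for s in SUITS]
    random.shuffle(deck)

    # Deal five cards to each of four players, round-robin.
    hands = [deck[i::4][:5] for i in range(4)]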

There's probably room for AI assistance for very, very junior programmers, who have not yet built up the capability of internally visualizing large systems. But for senior developers with more experience and capability, I'd expect the utility to go down, because we have already built out that skill.


I consider myself to be fairly senior, and use it all the time for learning new things. I work with some brilliant senior developers who lean on it heavily, but I do think it doesn't mesh with the cognitive styles of many.

There might be something to this; however, N=1, I'm very much the kind of developer who hates pair programming and falls apart when forced to do it. But it's not the conversation that's the problem - it's other people. Or specifically, my fight-or-flight response that triggers when I am watched and have to keep up with someone (and the extreme boredom if the other person can't keep up with me). LLMs aren't people, and they aren't judging me, so they do not trigger this response.

The chat interface is annoying, though. Because it's natural language, I have to type a lot more, which is frustrating - but on the other hand, because it's natural language, I can just type my stream of thought and the LLM understands it. The two aspects cancel each other out, so in terms of efficiency, it's a wash.


Depends on what you're working on. I'm a senior engineer currently doing a lot of scaffolding for startups, and my copilot saves me a ton of time. Life's good.

It's getting better, and new UIs for it are being tested, like Claude's Artifacts.

The senior engineers adopted Copilot and sang its praises a lot faster than the junior engineers did. Especially when working in codebases with less familiar languages.


Nope. 10 years' experience working at startups and FAANG.

And yes, Cursor AI/Copilot helps with bugs as well.

It works because when you have a bug/error message, instead of spending a bunch of time on Google/searching on stack overflow for the exact right answer, you can now do this:

"Hey AI. Here is my error message and stack trace. What part of the code could be causing it, and how should I fix it".

Even for debugging this is a massive speed up.

You can also ask the AI to just evaluate your code. Or explain it when you are trying to understand a new code base. Or lint it or format it. Or you can ask how it can be simplified or refactored or improved.

And every hour that you save not having to track down crazy bugs that might just be immediately solvable, is an hour that you can spend doing something else.

And that is without even getting into agents. I haven't figured out how to use those effectively yet, and even that is making me nervous/worried that I am missing some huge possible gains.

But sure, I'll agree that if all you are doing is making scaffolding, that is a fairly simple use case.


> It works because when you have a bug/error message, instead of spending a bunch of time on Google/searching on stack overflow for the exact right answer, you can now do this:

That's not how I've worked since I stopped being a junior dev. I might google an error message/library combination if I don't understand it, but in most cases I just read the stack trace and the docs, or maybe the code.

I don't doubt that LLMs can be quite useful when working with large, especially foreign, codebases. But I have yet to see the level of "if you don't use it you're not an engineer" that some people like to throw around. On the contrary, I'd argue that if you rely on an LLM to tell you what you should be doing, you aren't an engineer, you're a drone.


Sure, if you are one of the rare engineers who wasn't using Google search, or any sort of discussion or collaboration with other engineers, in their day-to-day engineering workflow, then I can fully understand why a super-powered version of the same thing wouldn't be useful to you.

I've added "how have you incorporated generative AI into your workflow?" as an interview question, and I don't know if it is stigma or actual low adoption, but I have not had a single enthusiastic response across 10+ interviews for senior engineer positions.

Meanwhile, I have ChatGPT open in the background and go from unaware to informed on every new keyword I hear around me, all day every day. Not to mention annotating code, generating utility functions, and tracing errors.


I think it sort of depends on the work you do. If you're working in a single language and have been for a while, then I imagine that much of the value LLMs might give you already lives in your existing automation workflows.

I personally like Copilot, but I work across several languages and codebases where I seriously can't remember how to do basic stuff. In those cases the automatic code generation from Copilot boosts my efficiency, but it still can't do anything actually useful aside from making me more productive.

I fully expect the tools to become "necessary" in making sure things like JSDoc and other documentation are auto-updated when programmers alter something. Hell, if they become good enough at maintaining tests, that would be amazing. So far there hasn't been much improvement over the year we've used the tools, though. Productivity isn't even up across teams, because too many developers put too much trust in what the LLMs tell them, which means we have far more cleanup to do than we did in the previous couple of years. I think we will get a handle on this once our change management gets good enough at teaching people that LLMs aren't necessarily more trustworthy than SO answers.


Would you consider hiring on a contract basis? I use AI tools like Copilot in vim, and have my own agent framework to ask questions or even edit files for me directly, which I have been trying to use more. And I could use a new contract. You can see my email in my HN profile.

What, in your mind, is the right answer to that question?

Good question.

The best answer comes from someone who has found ways to boost their own productivity, but also understands the caveats, such as hallucinations and not pasting proprietary information into text boxes on the internet.


Twice as fast is way over-exaggerating the reality. In certain cases, sure, but more generally you are looking at a 10-50% productivity increase, more likely on the lower end. I say this as someone who has access to ChatGPT and AI code completion tools and uses them every day, and the numbers are backed up by Google's study: https://research.google/blog/ai-in-software-engineering-at-g...

Very true! From where I sit, most of the hype cycles were overestimated in the short term & underestimated in the long term: world wide web, mobile, big data, autonomous cars, AI, quantum, biotech, fintech, clean tech, space and even crypto.

It's a bit weird to claim that "quantum", autonomous cars, and crypto are "underestimated". If anything they've been overhyped and totally failed to deliver actual value.

Underestimated in the long term. It's become clear that the time constant for deep-tech innovation & adoption doesn't match the classic VC-backed SaaS adoption curve.

We're nearing general quantum supremacy for practical jobs within the next 5 years. Autonomous cars are now rolling out in select geographies without safety drivers. And crypto is literally being pushed as an alternative to SWIFT among the BRICS, IMF, and BIS, as a multi-national CBDC via the mBridge program.


Yeah, some "expert, I work in the field"-type YouTuber I remember, well over 7 months ago now, kept saying that we were going to have AGI within 7 months. It was like the big prediction he was hinging practically his whole channel on... Sorry, I don't have the name, but there's a little anecdote.


