IMHO, review is a misnomer for where software engineering is going. I'm not sure where we are going, but review implies less responsibility for the outcome.
But I do think that we will have less depth of knowledge of the underlying processes. That's the point of having a machine do it. I expect this, however, to be a good trend: the systems will need to be up to a task before it makes sense to rely on them.
This is how progress (in developer productivity) has always been made. We coded in assembler, then used macros, then languages like C and Fortran, then more of Java/Go/Python/Rust/Ruby et al. A developer writing a for loop over a list in Python doesn't necessarily need to know about linked lists and memory patterns, because Python takes care of it. This frees that developer from the abstracted details to think one level closer to the problem, at a higher speed.
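To make that concrete, here's a minimal sketch (a toy example of my own, not from the thread): the Python loop says nothing about indices, bounds, or memory layout, while the lower-level version has to manage all of them explicitly.

    # Summing a list at two levels of abstraction.
    values = [3, 1, 4, 1, 5]

    # High level: iteration hides indices, bounds checks, and memory layout.
    total = 0
    for v in values:
        total += v

    # Low level (still Python, but written the way C forces you to think):
    # explicit index arithmetic against an explicit length.
    total_manual = 0
    i = 0
    n = len(values)
    while i < n:
        total_manual += values[i]
        i += 1

    assert total == total_manual == 14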
LLMs _can_ be a good tool in the right hands. They certainly have some ways to go before becoming a reliable assistant. I suppose, in the way of LLMs, they need better training before they can get there.
Frankly, it's fine more often than we may care to admit.
As the parent comment suggested, UI elements are a great candidate for this. Often very similar (how many apps have a menu bar, side bar, etc) and full of boilerplate. And at the rate things change on the front-end, it's often a candidate for frequent re-writes, so code quality and health don't need to be as strict.
It'd be nice if every piece of software ever written had been built by wise experts with hand-crafted libraries, but sometimes it's just a job that needs to get done.
UI is a terrible example to make your point. Tell me you don’t know frontend development…
Accessibility, cross-browser and cross-platform support, design systems, SEO, consistency and polish, you name it. You are most certainly not getting that from an LLM, and most engineers don’t know how, or don’t have a good enough eye, to catch when the agent has gone astray or suggested a common mistake.
You definitely have a point, but the reality is that LLMs are about as good as an "average" UI developer in some cases -- lots of people who work on UI every day think very little about accessibility and don't know whether their code actually runs in a non-Chromium browser.
Does everything ever written need to be crafted by an artisan? An awful lot of useful stuff written “good enough” is good enough. Depth of knowledge or understanding is irrelevant to a lot of front-end UI development, where the key is the design itself and that the behavior of the design is accurate and reliable, not that the engineer *really* understands GraphQL and React at the core of their soul, with the passion of a Japanese craftsman, when they’re building a user interface for the ML backend that internal users use for non-critical tasks. There does exist a hierarchy of when depth matters, and it’s not homogeneously “literally everything you do.”
When someone is using an LLM they are still the author.
Think about it like someone who is searching through record crates for a good sample. They're going to "review" certain drum breaks and then decide if it should be included in an artwork.
The reviewing that you're alluding to is like a book reviewer who has nothing to do with the finished product.
Yup, that’s an old reviewer/author problem. The reviewer has a huge blind spot because they don’t even know what they don’t know. The author knows what he knows, but more importantly also has a better grasp of what he doesn’t know. So he understands what’s safe to do and what’s not.
Well, it is not actually review the way a PR is. It is more like you are guiding and reviewing a very fast typist, in the order you decide, and in simple cases that handles it 99 percent of the time.
Not every developer knows exactly how his modern CPU or memory hierarchy works, or how electromagnetic waves build up a signal.
People use tools to make things. It's okay. Some "hardcore folks" advance the "lower level" tooling, other creative folks build actually useful things for daily life, and mostly these two groups have very little overlap IMO.
I know of no review process that produces the same level of understanding as does authorship, because the author must build the model from scratch and so must see all the details, while the reviewer is able to do less work because they're fundamentally riding on the author's understanding.
In fact, in a high-trust system, e.g. a good engineering culture in a tech company, the reviewer will learn even less, because they won't be worried about the author making serious mistakes so they'll make less effort to understand.
I've experienced this from both sides of the transaction for code, scientific papers, and general discussion. There's no shortcut to the level of understanding given by synthesizing the ideas yourself.
> "the author must build the model from scratch and so must see all the details"
This is not true. With any complex framework, the author first learns how to use it; then, when they build the model, they are drawing on their learned knowledge. And when they are experienced, they don't see all the details; they just write them out without thinking about it (chunking). This is essentially what an LLM does: it short-circuits the learning process so you can write more "natural", short thoughts and have the LLM translate them into working code without learning and chunking the irrelevant details associated with the framework.
I would say that whether it is good or not depends on how clunky the framework is. If it is a clunky framework, then using an LLM is very reasonable, like how using IDEs with templating etc. for Java is almost a necessity. If it is a "fluent", natural framework, then maybe an LLM is not necessary, but I would argue no framework is at this level currently and using an LLM is still warranted. Probably the only way to achieve true fluency is to integrate the LLM - there have been various experiments posted here on HN. But absent this level of natural-language-style programming, there will be a mismatch between thoughts and code, and an LLM reduces this mismatch.
I believe pretty much anyone who has observed a few cycles can tell as much.
Often the major trigger for a rewrite is that the knowledge has mostly left the building.
But then there's the cognitive dissonance, because we like pretending that the system is the knowledge and thus has economic value in itself, and that people are interchangeable. Neither of which is true.
It is similar to how much a student learns from working hard to solve a problem versus from being given the final solution. The effort to solve it yourself tends to give a deeper understanding and make it easier to remember.
Maybe that's not the case in all fields, but it is in my experience at least, including in software. Code I've written I know on a personal level, while I can much more easily forget code I've only reviewed.
Also, people shouldn't be allowed to use computers unless they understand how transistors work. If you don't have the depth of knowledge you get nothing.
The person I'm responding to was gatekeeping. I responded by sarcastically doing the same to an extreme degree. A lot of people will have agreed with the person I'm responding to: "Oh yeah, of course you should understand these things, the things that I already understand", genuinely not realizing that there's no basis for that. When they read my response they realize what they were doing, and are left feeling embarrassed for their senseless (and pretentious!) gatekeeping.
As a noob I copied code from Railscasts or Stack Overflow or docs or IRC without understanding it just to get things working. And then at some point I was doing less and less of it, and then rarely at all.
But what if the code I copied isn't correct?! Didn't the sky fall down? Well, things would break and I would have to figure out why or steal a better solution, and then I could observe the delta between what didn't work vs what worked. And boom, learning happened.
LLMs just speed that cycle up tremendously. The concern trolling over LLMs basically imagines a hypothetical person who can't learn anything and doesn't care. More power to them imo if they can build what they want without understanding it. That's a cracked lazy person we all should fear.
In our startup we are short on frontend software engineers.
Our project manager started helping with the UI using an IDE (Cursor, a VS Code fork) with native ChatGPT integration. In the span of six months, they have become very proficient at React.
They had wanted to learn basic frontend coding for multiple years but never managed to pass the initial hurdles.
Initially, they were only accepting suggestions made by ChatGPT and making frequent errors, but over time, they started understanding the code better and actively telling the LLM how to improve/fix it.
Now, I believe they would have the knowledge to build simple functional React frontends without assistance, but the question is why? As a team with an LLM-augmented workflow, we are very productive.
With the number of anti-patterns in React, are you sure everything’s OK? The thing about declarative code is that it’s more like writing equations. Imperative code has a tighter feedback loop, and ultimately only code organization is an issue. But so many things can go wrong with declarative code, as it’s one level higher in the abstraction stack.
I’m not saying that React is hard to learn. But I believe buying a good book would have them get there quicker.
Yes totally. They already tried doing a few online courses and had a few books.
I'm a senior React Dev and the code is totally fine. No more anti-patterns than juniors who I've worked with who learned without AI. The contrary actually.
Not gonna fault people for learning; I think the FUD is more in the vein of being ignorant while working.
Yeah, you don't really need to know how transistors work to code, but you haven't needed that for two generations. Personally I think (and hope) LLM code tools will replace Google and SO more than the writing of SW itself.
I got my start on a no-code visual editor. I hated it because the 5% of issues it couldn't handle took 80% of my time (with no way to actually do it in many cases). I see LLM auto-generation the same way: the problems the tool doesn't just solve will be your job, and you still need to know things for that.
Oh come on, the user is talking about building UIs. I don't know how else you learn. Your attitude just reeks of high-horse. As if it was better to learn things from stackoverflow.
Who learned stuff from Stack Overflow? In my own case, it was all books, plus a few videos. Stack Overflow was mostly for figuring out why things had gone wrong (when errors weren't explicit enough) or for very specific patterns. And there was a peer review system which lent credibility to answers.
“In my own case...” something worked for you. So what?
Are you sure that would work for others? And that other approaches might not be more effective?
I’ve learned lots of things from SO. The top voted answers usually provide quite a bit of “why” content which have general utility or pointers to more general content.
Yes, there are insufferable people on there, but there are gatekeepers and self-centered people everywhere.
Maybe I am being snarky, but saying “I don’t like that” or “that’s not how I did it” just isn’t that interesting. I’d love to hear why books are so much more effective, for instance, or which books, or which YT channels were useful.
Because they’re consistent and (the good ones) follow a clear path to learn what you want to learn. The explanations may not be obvious at first glance, and that’s when you may need someone to present them from another perspective (your teacher) or to provide the required foundational knowledge that you may lack. You pair it with some practice or cross-reference with other books and you can get very far. Also, they can be pretty dense in terms of information.
> which books, or what YT channels were useful.
I mostly read manuals nowadays, instead of tutorials. But I remember starting with the “Site du Zero” books (a French platform) for C and Python. As tech moves rapidly, for tutorial-like books it’s important to get the latest edition and to know which software versions it refers to.
Now I keep books like “Programming Clojure”, “The Go Programming Language”, the “Write Great Code” series, “Mastering Emacs”, “The TCP/IP Guide”, “Absolute FreeBSD”, “Using SQLite”, etc. They’re mostly for reference purposes and deep dives into one subject.
The videos I’m talking about were courses from Coursera and MIT: algorithms, Android programming, theory of computation. There are great videos on YouTube, but they’re hidden under all the worthless ones.