
It isn't only frontend frameworks.

I currently AI-coma / tab-complete C++17 with decent results for stuff ridiculously far away from frontend, but I do wonder who is providing the training data for C++23 and onwards, as there isn't wide adoption yet.


I can chime in with a similar anecdote. I use co-pilot extensively in "fancy tab completion" mode. I don't use the conversational features - just the auto-complete to help my coding along.

I specifically found it very useful when dealing with a bunch of boilerplate C++ shim code that used some advanced template magic to declare families of signatures for typed thunks that wrapped and augmented some underlying function call with record/replay logging.

It was arcane, bespoke stuff. Highly unlikely to be imitated in training data. The underlying code was written by a colleague (the CTO) but it was hard to grok just because of all the noise the template magic added to the syntax, and carefully keeping track of what was getting substituted where.

The changes I was making were mostly to add new shims to this collection of shims, and co-pilot happily used the pattern of shims in previous lines of code to extrapolate what my new shims should look like and offer me tab-completions that were sensible.

That included some bad guesses (like inferred names for other templated forms that referred to different argument types), but overall it was structurally close enough to use as a reference point and fix up as needed.

It was really great. Took what would have been about a day's worth of work carefully figuring out from first principles how the template system was structured... and made it into about half an hour of "get it to generate the next shim, verify that the shim does the right thing".


> It was really great. Took what would have been about a day's worth of work carefully figuring out from first principles how the template system was structured... and made it into about half an hour of "get it to generate the next shim, verify that the shim does the right thing".

That also seems to highlight the disadvantage too - if you'd taken a day, you would have come away with a deeper understanding of the template system.


Fair point. In that particular circumstance I had no desire to learn the details of the system - the need of the day was to get in there, get the shims in, and get out to the other code that mattered.

If I hadn't worked for more than a decade with C++ and already been reasonably fluent in template semantics, there's a good chance I might have introduced bugs as well.

I think the issue is that these feel like incredibly safe tools because all they do is generate text, but at the same time they can lead to bad hygiene.

I personally never use AI to write prose (I'll keep my voice and my quirks thanks). And for code, I utilize the system specifically as a fancy pattern extension system. Even in well-abstracted code, about 90% of the structure of it can be inferred from the other parts. I let the AI complete that for me with tab-completions and verify that each line is what I would have otherwise typed before moving on.

I'm not comfortable engaging at a deeper level with these tools yet.


Seems plausible, especially in combination with the AI-coma that occurs when you tab-complete your way through problems at full speed.


Eyes are good at generating the signal, or we wouldn't have much use for the vision they provide us.

On a somewhat related note, due to a head injury I suffered many years ago, I started developing small-ish blind spots "once in a while" that remain anywhere from a few minutes up to a few months.

The spots are very noticeable when they first appear, grabbing my attention all the time. The ones that persist longer tend to "disappear" as my brain filters out the broken (?) signal of where the blind spot appeared, and the spot mainly becomes noticeable again if it hides or interrupts a known pattern that I'm looking at.

It's as if the brain fills the spot with the average color around it (blue sky - blue spot, white wall - white spot, etc.), which works well for single-color surfaces and such but not great for repeating, predictable patterns. And the filling "lags", which means if I quickly shift between colors, the spot will momentarily be visible as the old color shows up on the new color.

So the brain does "imagine" things, but when it does, it isn't perfect.


Love it.

Also needs to simulate getting responsibility without authority, with weekly meetings to follow up on your progress.


Was thinking the exact same thing - build real-world experience, and learn how to deal with other humans, perhaps with different goals than yours, at the same time.

Tech and the stuff around it is just one small dimension of the software engineering space.


> What are they even counting as a 'computer' to get to 70?

When I worked in the industry, we had four different CPUs of different architectures on one small (fit in your hand) circuit board, each running its own software and communicating with each other via buses on the board. This was for controlling one single function in the car.

Strictly speaking, I would say that single circuit board was 4 separate computers. It quickly adds up.


And when that macro controller computer eventually (inevitably?) dies, it will hurt the wallet immensely.


> Most programming should be done long before a single line of code is written

Nah.

I (16+ years developer) prefer to iteratively go between coding and designing. It happens way too often that when you're coding, you stumble across something that makes you go "oh f me, that would NEVER work", which forces you to approach a problem entirely differently.

Quite often you also have eureka moments with better solutions that just would not have happened unless you had code in front of you, which again makes you approach the problem entirely differently.


Iterative work is THE way to work in large legacy codebases. The minute you wade into the code, all of your planning is moot. You don't know what's lurking below the surface. No one knows what's lurking under the surface. Except maybe Dave, because he vaguely remembers about 15 years back talking to some guy who wrote some code 30 years back about it.

Greenfield, absolutely design up front you lucky devils, but iterative is the way otherwise.


Greenfield lasts only for at best 2 years or the first public release. After that it is legacy.

I am the "Dave" on my current code, since I was one of the first engineers on the project and the others before me have long since moved to management. There is a lot I don't know about how the code works. There are dark corners we just lifted completely from an earlier project where the guy who wrote it 30 years back is retired. This is normal.

I'm fighting desperately to keep this code in shape as I don't want to go to management to ask for $$$ (billions) to rewrite it. I regret many choices. I'm making other choices in response that I fear someone will regret in 15 more years. I'm hoping to retire before then - better talk to me now, because soon the people who have talked to the person who wrote the code 30 years ago will also be a memory. (The guy who wrote the code 30 years ago is still alive and someone has his phone number - they talk once a year about something weird to see if the why is remembered.)


> I'm making other choices in response that I fear someone will regret in 15 more years.

Junior dev made to maintain some code base: "wtf, all this old code sucks. People were really bad at their jobs".

Same dev 5 years later: "wellll, this looks bad but there must be a reason.". Usually the reason is someone sold a new feature without asking the implementers or even checking what the impacts could be. So it has to be ready yesterday and you'll never get approval to refactor or clean-up anything, until it breaks.


As someone who's spent 12 years working on legacy codebases, I strongly disagree with this.

Iterative work in a large legacy codebase is how you end up making your large legacy codebase larger and even less understood.

Your planning should "wade into the code" from the start. I have always gotten better results by charting out flow diagrams and whiteboarding process changes than just "diving in and changing stuff".

Frankly, I'd say it's the opposite for greenfield development. Doing iterative work to build out a new product and making changes as you discover needs you didn't account for makes a lot more sense than flailing around making holes in something you don't fully understand that is tied to active business needs.


> I have always gotten better results by charting out flow diagrams and whiteboarding process changes than just "diving in and changing stuff".

In terms of a broad population, I am not sure there is a meaningful difference, though. You can iterate on your ideas on the whiteboard or you can iterate on your ideas in code, but the intent is the same. Either way you are going to throw it all away once you have settled on what should be the final iteration anyhow.

It just comes down to where you are most comfortable expressing your ideas. Some like the visuals of a diagram, others are more comfortable thinking in code, some prefer to write it out in plain English, and I'm sure others carry out the process in other ways. But at the end of the day it is all the same.


> Either way you are going to throw it all away once you have settled on what should be the final iteration anyhow.

I think this needs to be highlighted, because while I completely agree, I think it's often implicit, taken for granted, and neglected. Far, far too often I've seen code bases bloat because this never takes place. The sentiment at a lot of places seems to be, if the tests pass, ship it. Arguably, it may even be the right decision.


I have never really thought about it that way, but you're right.

Ultimately what matters is the final changeset. How you get there doesn't really matter.


>Everyone has a plan until they get punched in their face (by landmines in legacy code)

Unless you have the old maintainer on call (I rarely did, due to them leaving the company years ago), you definitely need to move slowly. Rely on test suites if you are blessed with them. Submit small changes that pass tests.

BTW, nice and timely username.


Today's greenfield is tomorrow's legacy. So this statement still holds true ;)


> Greenfield, absolutely design up front you lucky devils, but iterative is the way otherwise.

That only works for greenfield projects where you have extensive experience with everything that is going to be used on that project. For all others you still learn as you go and all plans and designs need to be revalidated and updated multiple times.


Every development shop has a Dave.


> I (16+ years developer) prefer to iteratively go between coding and designing

I have an extra ten years on you and couldn't agree more.

There are two jokes:

- A few months of programming can save weeks of design.

- A few months of design can save weeks of programming.

Inexperience is thinking that only one of these jokes is grounded in truth.

Recognizing which kind of situation you're in is an imperfect art, and incremental work that interleaves design with implementation is a hedge against being wrong.


The difference between DRY and YAGNI is experience. Both are predicting the future, and you can only do that by having watched code evolve.


What's that saying? "Once is a mistake, twice is a coincidence, three times is a pattern".

Having to write something once, just write it.

Having to write something twice with small differences, think about it.

Having to write something three times? Consider refactoring.


I like to say:

- One hour of planning saves ten hours of programming...

- and one hour of research saves ten hours of planning.

You can also invert it:

- Ten hours of programming saves one hour of planning...

- and ten hours of planning saves one hour of research.


> - Ten hours of programming saves one hour of planning...

If the planning is done with 5 or more people in a meeting, there still might be some time savings...


Most programming is actually figuring out what already exists and what (and more importantly: why) the requirements are. This is best done long before a single line of code is written.

I think the author is taking a wider view of "programming" than the actual writing of code as the end product. Some of the most important work I've done is spend the time to argue that something doesn't need to be done at all.


And how do you figure out what the requirements are? In my 10+ professional years, I have never gotten requirements by asking for them. Almost always I had to show my interpretation of what I think the requirements are, and use the feedback I got to define the actual requirements. The quickest way to get there is by iterating.


You don't ask for the requirements. You ask what they're trying to do, or what problem they're trying to solve. Sometimes I have to ask "where is this data going" or "what do you expect the end result of this to be".


Not disagreeing here but whatever question you ask, you will only get the final answer _after_ you have implemented it, almost always after several iterations.


> Most programming is actually figuring out what already exists and what (and more importantly: why) the requirements are. This is best done long before a single line of code is written.

Calling requirements gathering "programming" is just misusing a term for no good reason. By all means, include it in "software development" but it clearly isn't "programming".


> what (and more importantly: why) the requirements are

Maybe in a startup? My experience as an IC in larger, more established companies is the requirements are dictated to you. Someone else has already thought carefully about the customer ask, your job is just to implement, maybe push back a little if the requirements they came up with are particularly unreasonable.


If you dig deep, you discover they have figured out some requirements in detail, but there is a lot missing. Is this new feature the last one in that line, or will there be more options in the future? Is this new feature really going to be used? Many times we have put large effort into features only to discover no customer used them (as evidenced by the critical bug that made the feature unusable outside of the test lab that nobody complained about until 4 years had passed). These things drive how you engineer the thing in the first place.


This makes me think it would be really cool to tie code sections to Slack conversations or emails. There are always commit messages, yes, but most product decisions on why something was done live in Slack, at least where I've worked.

Even an AI tool that takes a Slack thread and summarizes how that thread informed the code would be cool to try.


I always fight to get this stuff into JIRA (or whatever equivalent tool we’re using), and then make sure that all commits have the JIRA ID in them.


Works great until you're not using it anymore. We're on our third system, all the cases from the first one and most from the second one are long since gone. Meanwhile the commit messages survive it all, even across cvs -> svn and svn -> git migrations.


'Programming as theory-building' is an approach that has grown on me in the past few years.

Your first draft may be qualitatively an MVP, but it's still just a theory of a final product you want, which requires a lot of iterative building before you get to that.

As such, there's no way to not shift between code and design, especially when business requirements are involved and which themselves may change over time.


> Programming as theory-building

Sounds similar to https://en.m.wikipedia.org/wiki/Curry–Howard_correspondence.



Developer for 20+ years. I can't even design anything without coding something.


It's like giving an estimate for a bathroom remodel for a house you've never seen. You gotta get in there first.


I'd go one further and say it's an estimate for a bathroom remodel in a house you've never seen that turned out to actually be a garage remodel instead.


Exactly. I did a Ph.D. on software engineering and architecture before embarking on a career practicing what I preach. One thing that I realized early is that designs always lag implementations. They are aspirational at best. And people largely stopped using design tools completely when agile became a thing. Some still do. But you'll look in vain for UML diagrams on most software you ever heard of.

I now have a few decades of experience doing technical work, running startups, teams, doing consultancy, etc. Coding is my way of getting hands-on. It's quicker for me to just prototype something quickly in code than it is to do whatever on a whiteboard. I always run out of space on whiteboards, and they don't have multi-level undo, auto-completion, etc. They really suck, actually. I avoid using them.

Of course, I sometimes chin stroke extensively before coding; or I restart what I'm doing several times. But once I have an idea of what I'm doing in my head, I stub out a solution and start iteratively refining. And if I'm stuck with the chin stroking, the best way to break through that is to just start coding. Usually, I then discover things I hadn't thought about and realize I don't need to over complicate things quite as much. This progressive insight is something you can only gain through doing the work. The finished code is also the finished design; they co-evolve and don't exist as separate artifacts.

The engineering fallacy is believing that they are separate and that developers are just being lazy by not having designs. Here's a counter-argument to that: we don't build bridges, rockets, expensive machines, etc. Our designs compile to executable code. Physical things have extensive post-design project phases where stuff gets built/constructed/assembled. Changing the design at that stage is really expensive. For software, this phase is pretty much 100% automated. And continuous deployment means having working stuff pretty much as soon as your builds start passing. Of course, refactoring your design/code is still important. You risk making it hard to evolve your software otherwise.

The process of designing a bridge is actually more similar to developing software than the process of constructing one. The difference is that when you are done with the bridge design, you still have to build it. But it's a lengthy/risky process with progressive insights about constraints, physics, legislation, requirements, etc. Like software, it's hard to plan the design. And actually modern day architects use a lot of software tools to try out their designs before they hand them over.

Just some simple insights here. There is no blueprint for the blueprint, for either bridges or software. Not a thing, generally.


You could try writing an RFC or a tech spec sometimes, with different approaches, proposed solutions, pros/cons. It's basically coding and designing the system in your mind and anticipating issues and challenges. It's a good exercise to do this before writing a line of code. The more you do it, the easier it gets; the mind starts to think about different approaches and pitfalls, you get into a focused state where the brain organizes the logical flow, and then you can write a rough outline without caring about making the compiler happy or what the exact syntax is. Sometimes it also helps to translate this high-level outline into pseudocode in a comment and then fill in the blanks with actual code.


I've compared it to finding the integral of a function. Unless it's trivial or closely resembles something I've done before, how am I going to have the faintest idea what it's going to be like until I start?

Sometimes the exploration is the design process.


Always think bigger picture than what you're immediately working on. (I don't mean that you can't ever just focus on the problem you're trying to solve for, say, hours. I mean you can't focus like that for the entire time you're in that development phase.)

Think about design and code (and functionality!) before you start coding. Think about design as well as code while you're coding. Think about design, code, and functionality while you're testing.


What I think is a better way to say this is that you need a `design` phase before actually writing the first `real` implementation code.

Something I do a lot, and even more with the LLMs, is make `scratch` projects where I sketch code over and over (and maybe make mockups in Keynote or similar, make some notes, etc.), then write from scratch again in the real codebase.


The OP didn't say what it is they're talking about that should be done before writing any code.

He might have meant design, and I'm not sure about that.

But the other thing I think of is: understanding the problem.

It's hard to do too much of that before you start coding, and easy to do too little.

It overlaps with design to some extent, because once you understand the problem better, some designs will naturally seem inappropriate or better -- without having to spend time allocated to "designing" necessarily, just when you design you're going to come up with things that work a lot better the better you understand the problem you are trying to solve.

How the stakeholders see it, and what's really going on, and why it's a problem, and what would make an acceptable solution, and what the next steps down the road might be.


Then the author should have said "Most software development should be done long before a single line of code is written"

Programming is specifically about the authorship of code.


Right, by your interpretation what they suggested is logically impossible (one can't possibly write most code before writing a single line of code), so I understand you think they should have written it differently. But it's clear they meant something else; I would assume they meant programming as a synonym for software development.


I agree coding should start early. The design might look good, but might not be easy to implement, so you need to change the design.

The statement sounds like something out of a book on the waterfall method of software development.


Agree. Although I increasingly spend this iteration time on types/interfaces/documenting proposal of same. The actual implementation below that is often trivial once the boundaries are settled.


This kind of thing is incredibly context dependent.

But it basically sounds like waterfall development, which is a reasonable approach in certain contexts.

But this thinking doesn't really make sense for any of the projects I work on.


I assume there are people who are able to have those eureka moments before writing any code. I definitely write a lot of code before figuring out the final design but always think I should be designing more.


Always plan to throw one away. You will, so best to plan for it. (paraphrasing Fred Brooks)


Absolutely, though sometimes it's more about reading code or 'playing' with code than writing/committing code. I try to always be hopping around my codebase during meetings.


Yeah, validate your assumptions. Nothing ever works the first time. Quick iterations to get feedback is the way.


Yeah, the other stuff seems sensible or at least "Ok, I can see that", but I definitely disagree with this one.

You should spend time thinking about stuff beforehand, sure, but getting your hands dirty is also going to reveal things.


Very surprised at this attitude.

Or am I? The typical engineering savant, omniscient to all future, past, and present engineering roadblocks, fixed by "Just"(TM) thinking about it beforehand. I expect this from a Bay Area mid-level, not someone with credentials.

Strange, because I agree with so much more of the article.


TBH, I think it's more of a 'manager' attitude. A lot of actual "hacker" type people are very much in the "rough consensus and working code" category where you see what works by doing it.


ok that sounds bad. You should have the option to go back to design, but depending on at what point you find that issue, you may have wasted a lot of time.


It's about defining and solving small problems all the way, and avoiding trying to solve big problems.

If you manage to restrict yourself to only solving small problems (THIS is the true challenge with software engineering, in my humble opinion), then you won't ever have wasted too much time if (when) you need to reset.


That's a fantastic way to get the attention of and end up getting the stick from the US, so I think (hope) the EU would have mechanisms in place to quickly shut down tariff evasion strategies that would involve just relabeling goods.


We don't even have that in place for illegal Russian oil exports or imports of stuff they use to make rockets to shoot at Ukraine with.


It is already an established business model.


What irks me is how LLMs won't just say "no, it won't work" or "it's beyond my capabilities" and instead just give you "solutions" that are wrong.

Codeium for example will absolutely bend over backwards to provide you with solutions to requests that can't be satisfied, producing more and more garbage for every attempt. I don't think I've ever seen it just say no.

ChatGPT is marginally better and will sometimes tell you straight up that an algorithm can't be rewritten as you suggest, because of ... But sometimes it too will produce garbage in its attempts at doing something impossible that you ask it to do.


Two notes: I've never had any say no for code related stuff, but I have it disagree that something exists all the time. In fact I just one deny a Subaru brat exists, twice.

Secondly, if an LLM is giving you the runaround, it does not have a solution for the prompt you asked, and you need either another prompt, another model, or another approach to using the model (for vendor lock-in like OpenAI).


>What irks me is how LLMs won't just say "no, it won't work" or "it's beyond my capabilities" and instead just give you "solutions" that are wrong.

This is one of the clearest ways to demonstrate that an LLM doesn't "know" anything, and isn't "intelligence." Until an LLM can determine whether its own output is based on something or completely made up, it's not intelligent. I find them downright infuriating to use because of this property.

I'm glad to see other people are waking up


That’s an easily solvable problem for programming. Today ChatGPT has an embedded Python runtime that it can use to verify its own code, and I have seen times when it will try different techniques if the code doesn’t give the expected answer. The one time I can remember is with generating regex.

I don’t see any reason that an IDE especially with a statically typed language can’t have an AI integrated that at least will never hallucinate classes/functions that don’t exist.

Modern IDEs can already give you real time errors across large solutions for code that won’t compile.

Tools need to mature.


Yeah, but it would have to reason about the thing it just hallucinated. Or it would have to be somehow hard-prompted. There will be more tools and code around LLMs, to make them behave like a human, than people can imagine. They are trying to solve everything with LLMs. They have 0 agency.


Intelligence doesn't imply knowing when you're wrong though.

Hackernews has Intelligent people...

Q. E. D.

% LLMs can RAG incorrect PDF citations too


> ChatGPT is marginally better and will sometimes tell you straight up that an algorithm can't be rewritten as you suggest

Unfortunately, it very often gets this wrong, especially if it involves some multistep process.

