The Future of Programming – Interview with Richard Eisenberg (signalsandthreads.com)
106 points by Smaug123 on May 19, 2023 | 33 comments



I appreciated Richard's take on AI-assisted programming:

> It doesn’t remove the need to communicate precisely. ... In a sense, that’s almost the definition of what makes a programming language a programming language, as opposed to some other kind of language. There’s a precise semantics to everything that is said in that language.

> With the advent of AI-assisted programming, now we have sort of a new method of communication in that it’s a communication from computer back to human. In that, you might have something like ChatGPT producing the code, but a human still has to read that code and make sure that it does what you think it does. And as a medium of precise communication, it’s still very important to have a programming language that allows that communication to happen.


> but a human still has to read that code and make sure that it does what you think it does

I feel like a lot of programmers are too stuck in their work to realize that there's a huge universe of problems for which this isn't the case. If you need to build a complex web app, absolutely someone needs to validate the code, but if you just need to build a simple internal app or a script to automate something for a small business, you can just test it and make sure it does what you expect.

I think the biggest benefit of natural language programming via LLMs isn't going to be for sophisticated developers; it's going to be for kinda smart businesspeople who have problems that can be solved by code. Maybe it wasn't worth the time to find a developer to solve them (if you don't have any connection to the tech industry, not only finding but also evaluating the quality of a developer is hard!) or it would've been too expensive. Now you can just fire up GPT-4 and get your simple inventory tracking app or whatever it is you need built.

It's like the small claims court of software development. If someone owes you $500, you can't engage a lawyer to help recover it because the cost is too high. Small claims gives people the ability to get restitution at low cost and without much sophistication. GPT-4 is the equivalent of letting you solve small legal issues without bringing in a lawyer, but for programming.


"I think the biggest benefit of natural language programming via LLMs isn't going to be for sophisticated developers; it's going to be for kinda smart businesspeople who have problems that can be solved by code...Now you can just fire up GPT4 and get your simple inventory tracking app or whatever it is you need built."

I'm already preparing myself for the "kinda smart businesspeople" who will come to me and say...

"Why is it taking you so long to fix [insert sophisticated software problem here]? I built [incredibly simple script] with ChatGPT in 5 minutes. Do you need my help?"

I'm excited...


And when people start dying, like with Tesla FSD, then they might come to their senses. Or they might first attempt to ignore it, like many people tried to downplay Covid deaths.


That was the role of Visual Basic, back in the day.

And before that, it was the intended role of Cobol. It was supposed to make professional programmers obsolete. And, um, that's not the way Cobol worked out.

So how will AI-assisted programming work out? Like Visual Basic, or like Cobol? I don't even have a guess. I think it's too early to tell.


Also Cucumber test frameworks with Gherkin syntax.


That was much worse, because while business people were supposed to write the scenarios, programmers were still expected to actually implement all the "steps" and the logic behind them, and pray that combining them in all sorts of ways would still work.


The LLM can replace the kinda smart "business person" easily before it replaces the computer scientist. Generating a kinda smart prompt in human language is really easy compared to writing a good computer program. LLMs will write slide decks and spreadsheets before they write good computer programs.


I doubt it. Competent business people meet eye to eye with clients and suppliers, and that's largely based on emotion and human interaction. Some of their tasks can be automated, e.g. reading and sending some types of emails and filling in some data fields, but ultimately business is about human interaction. We software engineers deal with robots and machines and imagine everyone else does the same, but that couldn't be further from the truth. Even Facebook and Google are all about mom-and-pop shops. Those are the people spending money on ads, and the audience that, at least in the early days, had to be met eye to eye.


"It's based on emotion and human interaction" is not a moat that seems to be reassuring artists. I wouldn't count on it unless there is a technical reason.


That's not the kind of business person I mean - I'm talking about someone like the owner of an SMB.


Right, and this is already exactly the pattern we see with Excel!


> if you just need to build a simple internal app or a script to automate something for a small business, you can just test it and make sure it does what you expect.

So you end up with the oracle problem?

A small business will still feel it if an edge case screws up whatever it is you'd want to automate.


> but if you just need to build a simple internal app or a script to automate something for a small business,

That was the promise of WordPress, no-code, and boilerplate CRUD frameworks. You didn't even need to prompt them, just click a few buttons and be done. Yet no sane businessman spends time hacking scripts, even with the help of an AI. The idea that they will spend time prompting chatbots doesn't stand. They will, however, leverage it to boost productivity in their daily tasks, as with any other tool.

Indie hackers on the other hand will use them. But they are not really business people.


>Yet no sane businessman spends time hacking scripts

Huh?

>The idea that they will spend time prompting chatbots doesn't stand.

Huh?

I guess your argument depends on the "sane" part functioning as a "no true Scotsman" qualifier. That is, if someone points to businessmen who do it, and that there are plenty of them, you can always argue that they're not the "sane" ones.

Otherwise, businessmen (startup founders, people with some idea they try to test for a sidegig, etc.) do "hack on scripts", and will absolutely use GPT-style chatbots to build code/websites/etc for their businesses.

So, if we remove the "no true Scotsman"-style "sane" qualifier, your statement doesn't hold: businessmen do spend time hacking scripts, even without the help of an AI, and the idea that they will spend time prompting chatbots stands 100%.

I think your idea of a "businessman" is someone in a suit making deals and thinking only about the business part, which is not how a lot of tech/online businesses work. The distinction between "indie hackers" and "businessmen" is much blurrier.


The small claims analogy works for May 20th, 2023. I can't remember when I first heard about word vectors and king – man + woman = queen. That was so mind-blowing, and now look at what we are talking about.

However long ago a person first heard about king – man + woman = queen, project that amount of time forward. Any prediction for that date on how things are going to be is pointless. A good bet, though, is that making complex software is not going to be harder on that date than it is today.


How far have autonomous cars gotten since then?


> natural language programming via LLMs

I wonder how far this concept itself can go. One of the hardest parts of software engineering is figuring out what to build and translating that into code. Humans are not particularly good at describing what they want, nor at writing code. If human involvement is reduced in both, how will software engineering change?


> But for programming.

Why just for programming? ChatGPT writes legal documents just fine.


People are going to do a lot of stupid stuff in the next two years. "What? It thought the most likely response to your carefully crafted prompt was to say you transfer all your rights to your lawyer, because most of its training text had employees giving up rights."


> but a human still has to read that code and make sure that it does what you think it does

But an AI proficient at programming wouldn't need to spit code back to the human. It would produce binaries or usable services, and the human would refine the product, also via natural language. The human would act more like a manager than a programmer, directing the AI towards the desired goal. The code is just a means to an end, and internally the AI would be free to use whatever machine-optimized language it wants. Hell, it could write machine code directly for all I care. That would be more efficient than having to produce human-readable code and keeping a fallible human in the process.


I just can't imagine that we don't, at some point, get a new programming language that is a more precise subset of English, but closer to English than, say, Python.

I mean, previously we needed absolute precision in a programming language because there was nothing to disambiguate natural language. ChatGPT's ability to disambiguate natural language is unbelievable. At some point we will figure out a way to leverage that ability in a programming language, to the point that maybe it won't even make sense to call it a programming language.

Making software is going to get much easier. Denial is the most human of coping strategies for change.


Isn't a conversational LLM going to be really good at sussing out actual requirements from people who want software but can never really explain clearly what they want? Heck, that could be a product on its own, even if it never writes a single line of code.


Nerdwriter made an eloquent video that discusses this theme. It's called "The Real Danger of ChatGPT": https://youtu.be/AAwbvGywdOc


> but a human still has to read that code and make sure that it does what you think it does.

Why not just write tests instead? If you have 100% test coverage, would you care about the code?


If you're talking about coverage in the usual sense (% of lines that were executed), it's pretty much useless. Here's an example of 100% test coverage in that sense:

  add a b = 2              -- wrong for almost every input
  test_add = add 1 1 == 2  -- passes, yet every line was executed

A more useful notion of coverage would be the entire possible state space of the program, but checking that is tantamount to a proof, which is a really hard problem for programs in general. Property-based testing, e.g. QuickCheck[1], gets us close, but it is often hard to come up with the right properties.

[1]: https://hackage.haskell.org/package/QuickCheck
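
For concreteness, here is a minimal QuickCheck sketch that falsifies the bogus add above (assuming GHC with the QuickCheck package installed; the property name is made up for illustration):

  import Test.QuickCheck

  -- the deliberately broken add from the coverage example
  add :: Int -> Int -> Int
  add _ _ = 2

  -- any correct add must treat 0 as a right identity
  prop_rightIdentity :: Int -> Bool
  prop_rightIdentity a = add a 0 == a

  main :: IO ()
  main = quickCheck prop_rightIdentity

quickCheck falsifies the property almost immediately (e.g. with a = 0, since add 0 0 is 2), even though the 100%-coverage unit test above passes.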


Who will write the tests?


Humans, with a lot of help from LLMs.


Related ongoing thread:

In Rust, for memory, you don't pay as you go, everyone has to pay all the time - https://news.ycombinator.com/item?id=36000242 - May 2023 (58 comments)


It is still flagged, most likely because someone did not like the title; still, it has quite an interesting discussion.


Agreed. Maybe the flag should be removed, or perhaps the discussion (https://news.ycombinator.com/item?id=36000242) should be moved here under dang's post.


I'm more interested in what clearing arrangements they have, whether they have their own risk management tools, how they make sure nobody sees their traffic to and from the exchanges (including the clearing firm), how they prevent their traffic being shared with third-party application providers, etc.

And how they started out, where they got funding, and what the opening account size was.


Well, that is one way to do job ads.



