
What value does a developer deliver when their entire process is done by an LLM?

Is there no desire for a creative process?



The dev gave the initial idea to the LLM. That's the creative process. Everything after that, arguably, is just technical detail needed to realize the idea. Sure, implementation requires plenty of creativity, but of a different kind.


Believe me, only a dev can get this working. Maybe in the future, LLM wizards will conjure all our technology, but at this point, a model with a working knowledge of every API as of 2021 is an assistive technology, not a magical code machine.

I've used LLMs to generate a lot of code recently on side projects. It's a 10x jump in productivity, but they can only reliably do 50-80% of the work, and the remaining tail needs editing, verification, infrastructure setup, etc.

It won't read your mind; you need to iterate, re-create, and guide. And each of those three relies on a working knowledge of software, libraries, tech, and user experience to get right.

For now.


Exactly this. I doubt a non-programmer would be able to produce output of similar quality and completeness. Like I said, I am not losing my job yet. Maybe next year...


I hope so, speaking as someone who wants to keep their job. But at the same time I feel it's not trivial to make good arguments against LLMs taking over without resorting to "they can't take into account all the context and might make tiny mistakes". But maybe people can be retrained as verifiers/testers instead of code writers.


> [snip]

This is mind-blowing. ‘Only 80%’ is a bonkers thing to say.

> For now.

The really scary part, right here.


It seems to me the author maximised his creative process by getting a machine to do the less creative bits.


By this argument we should be writing in machine code.

Is there no desire for creative process if we use compilers?


This is not even wrong. The level of the analogies people reach for in the first few months of $hype_cycle is mind-boggling.


It's spot on - moving up a level in the abstraction hierarchy.


Human: ChatGPT, please devise an algorithm that solves the traveling salesman problem in polynomial time.

A: Certainly, as an AI language model I'm happy to oblige. Here is the algorithm.

Human: I used a higher abstraction level! I have solved P == NP!


This is a non-example, because it is not how writing code with an LLM actually works right now. You can check the linked page and see that this took the author several hours spread across three weekends to implement, with lengthy back-and-forth discussions with ChatGPT to build up the code.

And you still need some model of the code structure; you need to understand what is going on in order to go back and forth with it. It takes care of the part of the work that is fairly standard, so you end up working at a different level. The analogy is with a compiler taking care of certain optimisations so that you do not have to reinvent them every time.

If you think you can build anything meaningful with ChatGPT from one-line prompts, I would suggest trying it in a language or paradigm you are only a novice in, to see how it works; the process can be a good learning experience, engaging and entertaining. Do not use an obscure language or libraries, though, because then it will start hallucinating a lot.


I wish it were that easy.

If you look toward the end, where it generates Einstein's field equations, I had to start from real numbers (Dedekind cuts), to real manifolds, to pseudo-Riemannian manifolds, to the curvature metric, to the final thing.
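
(For reference, the endpoint of that chain is the field equations in their standard textbook form; this is the usual statement, not necessarily the post's exact notation:)

    R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu}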


Just fyi, here's how it responds to that prompt:

"As an AI language model, I'm unable to create or prove the existence of a polynomial-time algorithm for the Traveling Salesman Problem (TSP), because it is an NP-hard problem. This means that it is unlikely that such an algorithm exists, unless P = NP, which is an unsolved question in computer science.

However, I can suggest some approximation algorithms or heuristics that provide good solutions to the problem, even though they don't guarantee an optimal solution:"
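
(Its list of heuristics is snipped above, but to give a sense of what one looks like, here is a minimal nearest-neighbour sketch in Python, written for illustration rather than taken from ChatGPT's actual output:)

    # Nearest-neighbour heuristic for TSP: greedily hop to the closest
    # unvisited city. Fast and simple, but only an approximation; it
    # says nothing about solving TSP optimally in polynomial time.
    import math

    def nearest_neighbour_tour(cities):
        # cities: list of (x, y) coordinates; the tour starts at cities[0]
        unvisited = set(range(1, len(cities)))
        tour = [0]
        while unvisited:
            last = cities[tour[-1]]
            nearest = min(unvisited, key=lambda i: math.dist(last, cities[i]))
            tour.append(nearest)
            unvisited.remove(nearest)
        return tour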


It answers that in the post:

> It was more like handholding a fresh grad who had absorbed all of human knowledge but needed someone to tie various parts of that knowledge to create something useful. Also ChatGPT is bad at dealing with abstractions beyond 2 layers.

> ChatGPT is definitely a productivity multiplier. I think it is rather a differential productivity multiplier, as it would enhance more the capabilities of those who already know more. If I did not understand deep learning and FAISS, or how projects are structured, I don't think I would have been able to pull this off. On the other hand, it also has some sort of a leveling effect—I have not worked on PyTorch in a while, have no idea of FAISS's new APIs, etc., but these gaps were filled in by ChatGPT.
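
(For context, the FAISS side of what the post describes comes down to a handful of calls. A minimal sketch, assuming exact L2 search over float32 embedding vectors, with the dimension and data made up for illustration:)

    # Minimal FAISS similarity search: index a set of embedding
    # vectors, then retrieve the k nearest neighbours of a query.
    import numpy as np
    import faiss

    d = 384                                          # embedding dimension (assumed)
    xb = np.random.rand(1000, d).astype("float32")   # database vectors (dummy data)
    xq = np.random.rand(1, d).astype("float32")      # query vector (dummy data)

    index = faiss.IndexFlatL2(d)           # exact L2 index, no training needed
    index.add(xb)                          # add the database vectors
    distances, ids = index.search(xq, 5)   # top-5 nearest neighbours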


What value does a driver deliver if the entire process is done by the car?


Not much, which is why I don't need a dedicated driver to drive my car.


Until a bird shits on the camera, your kid vomits in the car, a tire gets punctured, somebody breaks a window, the police signal you to pull over, you're choking on a peanut, there's a crash nearby or involving your car, and all the other kinds of edge cases.


I think the OP meant "I don't need a dedicated driver to drive my car [because I can drive it on my own]".

The process is simplified so you can do it yourself if you have the right tool, instead of relying on dedicated professionals. The process can be traveling or designing and writing an app.

I don't understand the point you're trying to make with those edge cases, especially choking on a peanut, but driving your own car is extremely popular despite them.


People had the same thoughts about cake mixes in the 40s. Oh you can just buy a cake mix? That's not cooking anymore.

https://www.youtube.com/watch?v=r6wKaLQ66r8


Fun fact: People didn't like cake mixes when they first came out, precisely because it wasn't "really cooking". Then someone (Betty Crocker?) changed the mix (and the instructions) so that the person had to add an egg, not just water. Then the humans felt like they were actually cooking, and cake mixes were more accepted.

A really smart AI would leave enough for the humans to do that they feel like they're still in charge.


They were right. The quality of home cooking has gone down dramatically throughout the 20th century.


I kind of agree about the cake mixes, but not the developer. It's pretty clear to me what value the developer provided; he elaborated on it at length near the end and said the LLM isn't getting his job anytime soon.

The cake mix really isn't "cooking," by my standards. Neither is microwaving popcorn. But it's an arbitrary line; I wouldn't defend it very hard.


TBH, programming has lacked creativity ever since compilers got to within 90% of hand-rolled assembly.

Hand-rolled assembly wasn't really fun because you could type it and get a response instantly, rather than the creative good old days of mailing in punch cards and waiting weeks for a result.

Punch cards weren't fun either, because using a computer wasn't creative. Doing math by hand was.

Ad nauseam.

If you think that really fucking excellent portal-opening tools don't enable creativity, you just have a dim view of what creativity is.


My manager is pretty confused about what ChatGPT is, in just the same way they're confused about Python. So I'm not too worried yet.

They’re not going to be coding tomorrow.


Someone has to type the prompt.


Artists are way ahead of you on this.



