
Also if there are fewer humans involved in the code production there is a lot of room for producing code that "works", but is not cohesive or maintainable. Invariably there will be a point at which something is broken and someone will need to wade through the mess to find why it's broken and try to fix it.


This is the future imagined by A Fire Upon the Deep and its sequel. While less focused on the code being generated by AI, it features seemingly endless amounts of code and programs that can do almost anything, but the difficulty is finding the program that works for you and is safe to use.

To some extent... This is already the world we live in. A lot of code is unreadable without a lot of effort or expertise. If all code was open sourced there would almost certainly be code written to do just about anything you'd like it to. The difficulty would be finding that code and customizing it for your use.


To piggyback off the sci-fi talk, I imagine in the far future, the programmer will become some sort of literal interface between machines and humans.

I imagine some sort of segregation would happen where the "machine cities" would be somewhat removed from the general human populace. This would be to ensure the machines could use whatever information transport system they desired, unencumbered by the needs of the human populace, and vice versa.

At a certain level of compute, I prognosticate that a certain level of logistical optimization would be trivial to advanced intelligences, and could be accomplished with almost-literally no effort using left-over cycles from whatever big calculation they were doing.

This would start to define different roles for humanity and machine. With logistics essentially "solved," a programmer would be a human-machine interpreter, sometimes journeying to the machine cities to disseminate the needs of the people, or to define a good way to introduce new technology to the populace.

This could look something like: During a headlining musical act, a "programmer," recently-returned from the machine city, grabs a mic and says "Does anyone want some of this BLUE, GLOWING, NON-RADIOACTIVE SELTZER WATER?" At which point the crowd would go wild. "If you liked that, just wait until you see what's coming next week!"

So essentially the programmer role becomes a hype-man for new, emergent technologies.


Thanks for the Book Title. It looks like an interesting read.


Caution - lots of people like to talk about this "code archeology" idea as if it's a central driving point of the book, whereas in fact it's mentioned once in passing in the prologue and is never again relevant to the story.

Don't get me wrong, it's still a decent book on its own merits - but don't go into it expecting that to be the main point of the book (I did, and was disappointed as a result).


I'd argue that while it's not a core driving part of the narrative... It is central to the idea of the book and its sequel. It's a decent-sized book with a lot of ideas, and the idea of code archeology and the repercussions of it are what the book is about as much as any of the other main ideas.

But yes, if you want a book that focused only on that... This is going to disappoint.


> It is central to the idea of the book and its sequel. [..] the idea of code archeology and the repercussions of it are what the book is about as much as any of the other main ideas.

Can't speak to the sequel as I gave up on the series after that, but it's _really_ not relevant to the plot or ideas of the first book at all. All that matters for the plot is that a hostile, powerful, uncontrollable AI arises. In the book, it _happens_ to be because of a code archeologist "delving too greedily and too deep"; but the plot would not be changed one iota if it had simply arisen (and gone off the rails) as a product of general AI development.




As a counterpoint, the main nemesis of the book comes from software that is found in an archaeological expedition. While software archeology doesn't show up after the first chapter, the ramifications of what happens in that world due to so much software are pretty central.


This is certainly true, and doesn't detract from my disappointment at not having actually seen the software archeology in practice.


No problem. I've been a sci-fi reader my entire life and was shocked I hadn't stumbled across Vinge earlier. The sequel/prequel to A Fire Upon the Deep, called A Deepness in the Sky, is arguably even better, and the same idea of tech/code being used and customized long after it's written is even more central to the plot.

Two of my favorite reads of the last few years, so I highly recommend them.

Further... After some digging it looks like there is an old Slashdot discussion on the same topic: https://slashdot.org/story/06/11/04/0622246/no-more-coding-f...

Likely some spoilers for the books in there so may be worth holding off until after you've read them if you intend to.


Certainly many of us here already have a good amount of experience debugging giant legacy spaghetti code-bases written by people you can't talk to, or people who can't debug their own code. That job may not change much.


I remember one such occasion back in a previous tech boom (late 90s), and it turned out the reason I couldn't talk to the guy who wrote this particular pile of Italian nutrition was that the Feds had shown up one day and taken him to jail (something to do with pump and dump market manipulation via a faked analyst report [edit: actually a faked press release, I now remember. "SmallCapCorp (NASDAQ: SCC$) announces they have received a record-breaking order for their next gen product / acquisition offer / something like that from RandomIsraeliCompanyThatMightNotEvenHaveExisted"]).

A lot of software engineers would spend a portion of their day tracking their volatile stock / options etc. in those years.


Nah, you just throw it out and have the AI generate an all new one with different problems!


I really look forward to all programs now having strange new bugs every release. They already do, but I expect AI to make that worse, at least at first.


The hacking opportunities will be endless. Feeding AI the exploit will be new.


I don't know if AIs will ever get really good at QA in general, but I do think that AIs can get quite good quickly at regression testing.


That's how bads use GPT to code. The right way is to ask GPT to break the problem down into a bunch of small strongly typed helper functions with unit tests, then ask it to compose the solution from those helper functions, also with integration tests. If tests fail at any point you can just feed the failure output along with the test and helper function code back in and it will almost always get it right for reasonably non-trivial things by the second try. It can also be good to provide some example helper functions/tests to give it style guidelines.
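To make the granularity concrete, here is a rough sketch of the kind of small, strongly typed helper plus unit test you'd have GPT produce before asking it to compose them. The function, its behavior, and the use of Node's built-in test runner are illustrative assumptions, not details from the comment above:

    import { test } from "node:test";
    import assert from "node:assert/strict";

    // A small, strongly typed helper of the kind you'd ask GPT to generate first.
    // (Hypothetical example: name and behavior are illustrative only.)
    export function parseDurationMs(input: string): number {
      // Accepts strings like "1500ms", "2s", or "3m" and returns milliseconds.
      const match = /^(\d+(?:\.\d+)?)\s*(ms|s|m)$/.exec(input.trim());
      if (!match) {
        throw new Error(`Unrecognized duration: "${input}"`);
      }
      const value = Number(match[1]);
      const unit = match[2];
      const factor = unit === "ms" ? 1 : unit === "s" ? 1000 : 60_000;
      return value * factor;
    }

    // The matching unit test: if it fails, the failure output plus the helper
    // source go back into the prompt for the model to fix on the next pass.
    test("parseDurationMs handles each unit", () => {
      assert.equal(parseDurationMs("1500ms"), 1500);
      assert.equal(parseDurationMs("2s"), 2000);
      assert.equal(parseDurationMs("3m"), 180_000);
      assert.throws(() => parseDurationMs("soon"));
    });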


If you're already doing all of this work, then it's trivial to actually type all the stuff in yourself.

Is GPT actually saving you any time if it can't actually do the hard part?


It's not really "all this work"; once you have good prompts you can reuse them to crank out a lot of code very quickly. You can use it to produce thousands of lines of code a day that are somewhat formulaic, but not so formulaic that a simple rules-based system could do it.

For example, I took a text document with headers for table names and unordered lists for table columns, and had it produce a database schema that only required minor tuning, which I then used to generate sqlmodel classes and TypeScript types. Then I created an example component for one entity and it created similar components for the others in the schema. LLMs are exceptionally good at this sort of domain transformation; a decent engineer could easily crank out 2-5k lines/day if they were mostly doing this sort of work.
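As a rough sketch of that transformation with a hypothetical entity (the Order table, its fields, and the endpoint are my own illustration, not from the workflow above), in TypeScript:

    // Input given to the model: a plain-text document like
    //
    //   Order
    //   - id
    //   - customer_id
    //   - total_cents
    //   - created_at
    //
    // Requested output: matching TypeScript types (sibling sqlmodel classes or a
    // SQL schema would be generated the same way). Field types are the model's
    // guesses, tuned by hand afterwards.
    export interface Order {
      id: number;
      customerId: number;
      totalCents: number;
      createdAt: string; // ISO 8601 timestamp
    }

    // The same document then drives formulaic per-entity code, e.g. a typed fetch helper.
    export async function fetchOrder(baseUrl: string, id: number): Promise<Order> {
      const res = await fetch(`${baseUrl}/orders/${id}`);
      if (!res.ok) {
        throw new Error(`Failed to fetch order ${id}: ${res.status}`);
      }
      return (await res.json()) as Order;
    }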


Now your description of "good prompts" to reuse has created an abomination in my mind. I blame you.

The abomination: prompts being reused by way of yaml templating, a Helm chart of sorts but for LLM prompts. The delicious combination of yaml programming and prompt engineering. I hope it never exists.


You know, with GPT you can do these steps in a language you are not familiar with and it will still work. If you don't know some aspect of the language or its environment specifics, you can just chat until you find out enough to continue.


How do I know if a problem needs to be broken down by GPT, and how do I know if it broke the problem down correctly? What if GPT is broken or has a billing error? How do I break down the problem then?


1. Intuition built by trial and error.
2. Domain expertise backed by automated checks.
3. The old-fashioned way, and if your power is out you can even bust out a slide rule.


Maybe I'm being overly optimistic, but in a future where a model can digest hundreds of thousands of lines of code, write unit tests, and do refactors, will this even be a problem?



