Has anyone been able to use it to effectively build more than a greenfield PoC? It's absolutely great for that, but it's been a little clunky when trying to iterate once the context gets too large or when trying to add stuff to an existing system.
> Has anyone been able to use it to effectively build more than a greenfield PoC? It's absolutely great for that, but it's been a little clunky when trying to iterate once the context gets too large or when trying to add stuff to an existing system.
I'd like to see that too. Just this morning I got it to generate a whole lot of boilerplate. Then it appeared to get dementia: it forgot which language the boilerplate was for; when I reminded it, it started creating type aliases for no reason; and when I reminded it that the previous snippet (which was the first part of a single long snippet) didn't need those types, it switched languages again.
Very difficult once you get a long enough conversation going (something like 5 prompts in and it starts going haywire in random places).
I've used ChatGPT-4 to create new components in a larger game of mine. I provided an example class that does something similar, plus an interface, and it flawlessly created a new component with the new functionality, just the way I would have, in like 10 seconds.
I've also used it to create new functionality in the game, but those were mostly isolated features. Like I implemented controller rumble support yesterday in about 10 minutes thanks to its help. Probably would have taken me several hours on my own.
I don't know to what extent I can integrate it directly into the game, but I think the main limiting factors are the token limit and my willingness to manually copy and paste enough context, not limitations in its actual abilities (though it does have some: I'm currently debugging, along with it, an issue in a 3D graph class I had it create last night).
The code it created worked fine when it was just drawing thin lines, but I had it convert the drawing to 3D rects so the lines could have different widths (technically it recommended that solution when I asked for line widths), and there have been some glitches: only one team is showing, and the rest of the graph is only visible when I set the draw mode to not cull anything.
Hoping to get past that tonight or tomorrow. At least it's giving me some good ideas for approaching the debugging; I'd be pretty much at a loss on my own, as I've always struggled to debug 3D graphics issues when the graphics just don't show up on the screen.
BTW, it turns out changing the RasterizerState to CullClockwise made the graph show up again, so the vertices were in the opposite order. It noted that too and provided corrected code (basically just reversing the order of the vertices), so I could set it back to the default of CounterClockwise again.
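If anyone's curious, the fix amounts to this (sketched in TypeScript just to illustrate; the reverseWinding name and the flat index-buffer layout are assumptions for the sketch, not my actual code):

```typescript
// Illustrative sketch: flip each triangle's winding order so its front
// face passes the opposite cull mode. Assumes a flat index buffer where
// every three entries form one triangle.
function reverseWinding(indices: number[]): number[] {
  const out = indices.slice();
  for (let i = 0; i + 2 < out.length; i += 3) {
    // Swapping the last two indices of a triangle reverses its winding.
    [out[i + 1], out[i + 2]] = [out[i + 2], out[i + 1]];
  }
  return out;
}

// e.g. one clockwise triangle becomes counter-clockwise:
console.log(reverseWinding([0, 1, 2])); // [0, 2, 1]
```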
The beauty of abstraction, though, is that it liberates a person from needing to worry about lower-level complexities. The same should be true for LLMs. It shouldn’t need to know your entire system. Start a new thread with judiciously restricted context when you move on to the next task.
Yup. We're back to the real problems that matter in software dev: thinking about the problem, breaking it down, scoping things. Let an LLM do the grunt work of writing the code, sure.
They'll be the ones begging and cajoling LLMs to write the code.
Notice that even these days, senior devs aren't supposed to code much - they're supposed to train up junior devs, up to the point said juniors become competent, at which point the juniors join the ranks of seniors and begin to teach fresh junior hires.
I.e. increasingly, juniors are the only people actually coding anything, and LLMs will only reinforce this trend.
> they're supposed to train up junior devs, up to the point said juniors become competent, at which point the juniors join the ranks of seniors and begin to teach fresh junior hires.
Some of the shittiest code comes out of developers with this level of experience, but they're senior now and teach fresh junior hires?
Assuming it actually happens, how different is it from a ton of other things? How many rote sysadmin or paralegal tasks of 20 years ago still need basically a warm body to do them?
I doubt anyone has a clear enough crystal ball to give confident, actionable advice. Obviously AI, among many other things, erodes the value of most easy-to-acquire knowledge skills at the low end. But that kind of thing is constantly being eroded, and while uplevel/upskill is generally good advice, it's hardly unique to AI. Nor is there evidence yet that computer programming/software engineering will be uniquely impacted by these new technologies.
Content farms and other low-end writing almost certainly will be impacted significantly. But that was mostly not a good place to be anyway. Writing generally isn't a good bet unless it's effectively in support of something else that pays the bills or you get very lucky.
Reasonable statement. I think of programming as a discipline with three core skills: choosing what problem to solve, writing a solution, and debugging the solution. I suppose it's the writing that's gotten a lot easier with the addition of GPT; debugging has gotten slightly easier, and choosing what to work on now has a lot more competition.
From that framework, debugging is the new code monkeying... anyone have thoughts on that analysis?
I'm sure you could get code that works going this route, but I worry about the codebase accumulating lots of duplication and generally growing into a mess without any standard patterns; if that happens, it becomes a nightmare to maintain.
That will always be an issue and, moreover, always has been an issue — not only with code one obtains from other humans through forums, stack overflow, and blog posts — but also with code written entirely by oneself.
Code we get from an LLM, or someone else's fallible wetware, or our own fallible wetware, all need thoughtful consideration prior to committing.
I've been able to get it to write functions for me which I can then plug into a larger system.
As an example: "I have data representing a cylinder defined like so: {radius: x, depth: y}; give me a function which takes this object and calculates the volume of the cylinder."
It's unable to understand enough context to deliver a feature end to end, but if you can give it the signature of a function you'd like, it can implement it for you (sometimes).
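To make that concrete, here's roughly what it comes back with (TypeScript just for illustration; the Cylinder type and cylinderVolume name are mine, and I'm reading depth as the cylinder's height):

```typescript
// Illustrative only: the shape from the prompt above, with depth
// treated as the cylinder's height.
interface Cylinder {
  radius: number;
  depth: number;
}

// Volume of a cylinder: V = pi * r^2 * h.
function cylinderVolume(c: Cylinder): number {
  return Math.PI * c.radius ** 2 * c.depth;
}

console.log(cylinderVolume({ radius: 2, depth: 5 })); // ~62.83
```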
I see limited context and the lack of a sandboxed dev environment as the main obstacles to more sophisticated usage and a faster feedback loop. The value would be greater if it could know everything about the codebase, business requirements, and related documentation, and it could even avoid a lot of mistakes by simply running type checks and the tests and correcting itself before giving the answer.
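Something like this loop, sketched in TypeScript (the callLlm parameter is a hypothetical stand-in for a real model call, and this sketch only runs the type checker, not the tests):

```typescript
import { execSync } from "node:child_process";
import { writeFileSync } from "node:fs";

// Sketch of a self-correction loop: generate code, type-check it, and
// feed any compiler errors back until a candidate compiles.
async function generateWithFeedback(
  callLlm: (prompt: string) => Promise<string>, // hypothetical model call
  task: string,
  maxTries = 3,
): Promise<string> {
  let prompt = task;
  for (let attempt = 0; attempt < maxTries; attempt++) {
    const code = await callLlm(prompt);
    writeFileSync("candidate.ts", code);
    try {
      // tsc exits non-zero when the candidate has type errors.
      execSync("npx tsc --noEmit candidate.ts", { stdio: "pipe" });
      return code; // Compiles cleanly; hand it back.
    } catch (err) {
      // Relay the compiler output so the model can correct itself.
      const errors = String((err as { stdout?: Buffer }).stdout ?? err);
      prompt = `${task}\n\nYour previous attempt failed to type-check:\n${errors}`;
    }
  }
  throw new Error(`No type-correct candidate after ${maxTries} attempts`);
}
```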
I'm trying to (as a hobby-programmer / itsec(compliance :|) / admin / devops guy)
https://github.com/idncsk/canvas (early early alpha stage)
The biggest benefit for me is I can ask it design and programming-pattern questions for my particular use case (given A, should I split some methods of A into a separate module? should I implement B as a facade of C, or maybe implement some observer pattern instead? etc.)
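For instance, the difference between those two options in toy TypeScript form (all names invented for illustration):

```typescript
// Toy versions of the two designs mentioned above.

// Facade: B wraps C's lower-level surface behind a simpler method.
class C {
  rawWrite(bytes: Uint8Array): void {
    console.log(`wrote ${bytes.length} bytes`);
  }
}

class BFacade {
  constructor(private c: C) {}
  save(text: string): void {
    this.c.rawWrite(new TextEncoder().encode(text));
  }
}

// Observer: B subscribes to C's events instead of wrapping it.
class CWithEvents {
  private listeners: Array<(data: string) => void> = [];
  onChange(listener: (data: string) => void): void {
    this.listeners.push(listener);
  }
  change(data: string): void {
    this.listeners.forEach((l) => l(data));
  }
}

const facade = new BFacade(new C());
facade.save("hello"); // "wrote 5 bytes"

const source = new CWithEvents();
source.onChange((data) => console.log("observed:", data));
source.change("hello"); // "observed: hello"
```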
One strange thing I noticed is a degradation in the "quality" of the answers some time after a new version gets released, but that may well just be drift in my own expectations.
It's nice for guidance when you start a project, but you quickly run into problems if you want to rely on it entirely. I tried to use it to build a small library implementing an established algorithm from scratch, where I guide it and give it feedback but write no code myself, only copying and pasting and relaying any errors back to it so it can correct itself. It doesn't do great: it loses track of its own implementation of things and suggests inconsistent code.
Still useful for many other use cases, just not yet for "write my entire project for me and I'll only copy and paste".
I use AI to help my productivity while coding a lot. I handle the big system; it spits out code for me where I know what I want but don't know the libraries/syntax.
And of course, each project will eventually no longer be green field.
While it can help you understand code blocks, I'm not sure that it can help you understand an application as a whole.
I don't know that it can help discover edge-case types of bugs and suggest fixes. For example, the types of problems which crop up when system meets reality.
I can see this leading to a lot of spaghetti-coded projects.
Yes. You need to really focus on decomposing your problem, and use phrases like "there is", "it is", and "it has" to describe the overall system instead of relying on it remembering everything. Which honestly makes the code more organized and easier to read.
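A made-up example of the style: instead of assuming it remembers the codebase, each prompt restates the facts it needs, e.g. "There is an OrderService. It has a method placeOrder(cart). It is responsible for validating stock before saving. Write the stock-validation helper that placeOrder calls." (All names invented here just to show the shape of the prompt.)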
I imagine you could ask general architectural questions and it would give you general architectural answers. And then you ask more and more specific questions and get more and more specific answers.
Domain knowledge of a project becomes essential as the app gets complex. Asking the right questions of ChatGPT will get you answers that help you tread unknown territory. To ask the right questions, you first have to understand how your app is structured, know its major components, and know what's potentially missing. It gets technical.