The problem with this kind of experiment and "determination" is really that nowadays we expect to be "all-knowing". It shows the most plausible route at this point in time. Valid. But does it help without any sociological perspective?
The old Roman Stoa, for example, dictated that people act accordingly (the famous general comes to mind who, after being freed, voluntarily returned to captivity and death based on the values of the Stoic lifestyle). Not everyone lived according to its teachings, but some did and acted "unnaturally".
So what if those people back then had other beliefs that made them act in ways that seem "unnatural" to us nowadays? Until this can be excluded with certainty, this study is merely a shallow response and lets people believe uncertainties instead of knowing historical facts.
I totally agree with you. On the other hand, the theory behind it (combining image recognition with predicting the outcome based on specific physical impacts) does sound intriguing and like a somewhat newer idea.
But besides that, you're totally right. It's too "loose", since realizing that idea would require a very different (and properly explained) process.
But do you use it now to help you code, and if yes, how? The negative effects of relying too heavily on AI while coding are widely discussed, hence I am wondering what a "good" use case would be.
> The negative effects of relying too heavily on AI while coding are widely discussed, hence I am wondering what a "good" use case would be.
Really depends on your perspective. For some executives, a "good" use case may be the equivalent of burning goodwill to generate cash: push devs to use AI extensively on everything, realize a short-term productivity bump while their skills atrophy (but haven't fully atrophied yet), then let the next guy deal with the problem of devs that have fully realized the "negative effects of relying too heavily on AI."
That's a pretty dark perspective, but it would imply that those executives are some kind of evil geniuses who grasp the extent of this situation. I personally chalk this kind of behavior up to the statistic one of the Ig Nobels presented: 80% of surveyed university professors felt they were above average (IQ-wise).
I haven't used it directly on anything except little test projects. But my general view is that it's like being an editor as opposed to a writer. I have to have mastered the craft of writing to edit someone else's copy.
I couldn't agree more, thanks for answering! Anecdotally, I've witnessed people using and talking big about ML/LLMs while being shocked to learn that fundamentally basic statistical concepts lie behind them.
Not OP, but I specifically like to use AI to explain obtuse sections of code that would take me much longer to understand by reading.
If I have a bug reported and I'm not sure where it is, pasting the bug report into an LLM and asking it to find the bug has yielded mixed results but ultimately saved me time.
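For what it's worth, that workflow can be sketched as a simple prompt builder. This is purely illustrative: the helper name, parameters, and prompt wording are my own assumptions, not any real tool's API, and the actual call to a model is left out.

```python
# Hypothetical sketch of the workflow above: combine a bug report and
# candidate source files into one prompt to paste into (or send to) an LLM.
# The function name and prompt wording are illustrative assumptions only.

def build_bug_hunt_prompt(bug_report: str, source_snippets: dict[str, str]) -> str:
    """Merge a bug report and suspect files into a single LLM prompt."""
    parts = [
        "The following bug was reported:",
        bug_report.strip(),
        "",
        "Here are the files that might contain it:",
    ]
    for path, code in source_snippets.items():
        parts.append(f"--- {path} ---")
        parts.append(code.strip())
    parts.append("")
    parts.append("Point to the most likely location of the bug and explain why.")
    return "\n".join(parts)

prompt = build_bug_hunt_prompt(
    "Crash when the input list is empty.",
    {"utils.py": "def first(xs):\n    return xs[0]"},
)
print(prompt)
```

From there you would hand `prompt` to whatever model or chat window you use; keeping the file markers explicit makes it easier for the model to point at a concrete location.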
Interestingly enough, I was also wondering if I could improve my efficiency by condensing written text. The idea would be to remove the usual padding or "slop" you find across most of the modern web.
Wouldn't you lose a bit of that brain power if you stop making those connections yourself while trying to understand those code sections?
I appreciate the feedback. I was not trying to prove I’m human. It was rather the case of trying to get the idea out as long as it’s fresh. Being a non-native English speaker didn’t help I presume.
Boring, or the simplest solution? Occam's razor, e.g. And on top of that, I feel like problems nowadays are no longer looked at as something to be solved and forgotten, but as a quest. But I get your point!