Dawg, LLMs cannot reason; they simply return a response based on statistical inference (voting on the correct answer). If you want one to do anything correctly you need to do a thought experiment, solve the problem at a 9,000ft view, and hold its hand through implementation. If you do that, there's nothing it cannot do.
However, if you're expecting it to write an entire OS from a single prompt, it's going to fail just as any human would. Complex software problems are solved incrementally through planning. If you do all of that planning, it's not hard to get LLMs to do just about anything.
The problem with your example of analyzing something as complex and esoteric as a codebase is that LLMs cannot reason; they simply return a response based on statistical inference. So unless you followed a standard like PSR for PHP and implemented it to a 'T', it simply doesn't have the context to do what you're asking of it. If you want an LLM to be an effective programmer for a specific application, you'd probably need to fine-tune it and provide instructions on your coding standards.
Basically, how I've become successful using LLMs is that I solve the problem at a 9,000ft view, instruct the LLM to play different personas, have the personas validate my solution, and then instruct the LLM step-by-step to do all of the monkey work. That doesn't necessarily save me time upfront, but it does in the long run because the LLM makes fewer mistakes implementing my thought experiment.
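To give a flavor of the persona step, here's roughly what it looks like if you script it instead of doing it in a chat window (a minimal Python sketch using the OpenAI client; the personas, model name, and design text are placeholders, not my actual setup):

    from openai import OpenAI

    client = OpenAI()

    # The 9,000ft solution you already worked out yourself
    design = "Plan: split the importer into fetch/parse/load stages..."

    # Each persona attacks the design from a different angle
    for persona in ["skeptical security engineer", "grumpy ops lead"]:
        review = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": f"You are a {persona}. Poke holes in this design."},
                {"role": "user", "content": design},
            ],
        )
        print(f"--- {persona} ---")
        print(review.choices[0].message.content)

Only after the personas stop finding real problems do I start feeding it the step-by-step implementation prompts.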
Fair enough, I might indeed be asking too much, and may not be able to come up with an idea of how LLMs can help me. For me, writing code is easy as soon as I understand the problem, and I sometimes spend a lot of time trying to figure out a solution that fits well within the context, so I thought I could ask an LLM what different things do and mean to help me understand the problem surface better. Again, I may not understand something, but at this point I don't understand what the value of code generation is after I already know how to solve a problem.
Do you happen to have a blog post or something showing a concrete problem an LLM helped you solve?
What I've found is that the capabilities of LLMs depend on the problem-solving skills of the person writing the prompts. If you know exactly what needs to be done and can translate that into step-by-step prompts, it can do pretty much anything, but if you're looking for it to actually solve 100% of the problem, you're going to run into issues. Which is to say, you still need to find the solution; you just have the LLM do the monkey work.
I'm working on making a physical console for the Pico-8. It's pretty simple: it runs on Linux booted into kiosk mode, watches for when a 3.5" floppy is inserted or removed, and loads or closes the game via a shell script.
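The floppy-watching part is less magic than it sounds; conceptually it's just a poll loop, something like this (a rough Python sketch, not my actual script; the device path, mount point, and crude read-based detection are assumptions):

    import glob, subprocess, time

    DEVICE, MOUNT = "/dev/fd0", "/mnt/floppy"

    def disk_present():
        # Crude poll: reading the first sector fails when no disk is in
        try:
            with open(DEVICE, "rb") as f:
                f.read(512)
            return True
        except OSError:
            return False

    game = None
    while True:
        if game is None and disk_present():
            subprocess.run(["mount", DEVICE, MOUNT], check=False)
            carts = glob.glob(MOUNT + "/*.p8")
            if carts:
                game = subprocess.Popen(["pico8", "-run", carts[0]])
        elif game is not None and not disk_present():
            game.terminate()  # disk pulled: close the game
            subprocess.run(["umount", MOUNT], check=False)
            game = None
        time.sleep(1)

`pico8 -run` is the cartridge-launching flag from the Pico-8 manual; everything else is just mount/umount plumbing.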
I want it to be a C64-style keyboard with all the guts inside it but wirelessly connected to the display/TV, so what I might do is use an ESP32 to read the game floppy and wirelessly transfer and cache the file to a dongle that plugs into an open HDMI port. Not sure yet.
Amazing! I went through the same thing, but opted to implement the console itself on the ESP32[0]. It ended up being a bit too much and it's mostly shelved now.
Babysitting LLMs is already my job and has been for a year. It's kind of boring, but honestly, after nearly 20 years in the game I felt like I was approaching the endgame for programming anyways.
Average salary? Salaries are determined by the size of the company, how much value software engineers add, the supply of software engineers, and the location of the office.
That's kind of the tradeoff you make with any low-code/no-code technology. You leverage prebuilt components and string them together to achieve some kind of task. It isn't the most efficient thing in the world to do, but it does work, assuming you have enough compute resources to throw at it, and in return what you generally get is an end product that's completed faster than via the traditional development route.
You could just use SQL, but then you'd have to develop and test the entire infrastructure to support your component-oriented architecture from scratch, and at that point you're kind of just reinventing the wheel, because that's basically pandas with fewer features.
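To make that concrete, the kind of prebuilt-component equivalence I mean is roughly this (hypothetical file and column names, just to illustrate):

    import pandas as pd

    df = pd.read_csv("orders.csv")  # hypothetical dataset

    # Equivalent SQL:
    #   SELECT customer, SUM(total) FROM orders
    #   WHERE status = 'paid' GROUP BY customer;
    paid = df[df["status"] == "paid"]
    summary = paid.groupby("customer")["total"].sum()

Each line is a prebuilt component doing a chunk of work you'd otherwise have to hand-write and test in your own infrastructure.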
Low-code is kind of just Authorware for a new generation... assuming you're old enough to remember that technology.
As someone who's not neurotypical and grew up in the 1990s, I don't think we really did much for mental illness, or had much of an understanding of it, until the past 15 or so years. Growing up, the school system regarded people as disabled, learning-disabled, lazy, normal, or gifted. There was no one screening kids for social anxiety, bipolar disorder, depression, etc. unless there was an extremely serious problem with their behavior.
Within the past 40 years they used to lock people like me up, give us lobotomies, forcibly medicate us, etc. It's easy to forget how society used to treat folks with mental illness, and it's frankly no wonder that people to this day still hide it. Heck, I've had to contact the EEOC more than once. But the thing is, social media didn't cause this, and video games didn't cause this; I've always been genetically predisposed to it. In my opinion, unfettered access to the Internet in general is probably the worst environment for people with such predispositions, but simply blaming everything on the environment we've created online through video games or social media is wrong, if not irresponsible.
I think your lobotomy timeline is off? As I understand the history, lobotomies became less common in the 1950s, once antipsychotics and antidepressants were available, and by the 1970s they were rarely used. By 1984 they would not have been part of standard practice in the US.
"Not neurotypical" is a very wide category, and the vast majority of such were neither locked up nor given lobotomies.
On the other hand, ADHD kids in the 1990s were indeed forcibly medicated, as in, some schools coerced parents to give Ritalin to their child in order to attend school. IDEA 2004 included the 'Prohibition on Mandatory Medication' to prevent schools from doing that: https://www.law.cornell.edu/cfr/text/34/300.174 .