These experiments always seem to end up requiring the hand-holding of a human at the top, seemingly breaking down the idea behind the experiment in the first place. It seems better to spend the time and energy on finding better ways for AI to work hand-in-hand with the user, empowering them, rather than trying to find the areas where we could replace humans with as little quality degradation as possible. That whole part feels like a race to the bottom, instead of making it easier for the ones involved to do what they do.
> rather than trying to find the areas where we could replace humans with as little quality degradation as possible
The particular problem here is that it is very likely that the easiest people to replace with AI are the ones making the most money and doing the least work. Needless to say, those people are going to fight a lot harder to remain employed than the average lower-level person has the political capital to accomplish.
> seem to end up requiring the hand-holding of a human at the top,
I was born on a farm and know quite a bit about farming, but if I were trying to get corn grown from seed to harvest I would still contact/contract a set of skilled individuals to do it for me.
One thing I've come to realize in the race to achieve AGI: the humans involved don't want AGI, they want ASI. A single model that can do what an expert can, in every field, in a short period of time is not what I would consider a general intelligence at all.
AI hype is predicated on the popular idea that it can easily automate someone else's job, because the job they know nothing about must be easy, while my job is safe from AI because it is so nuanced.
> the ones making the most money and doing the least work. Needless to say, those people are going to fight a lot harder to remain employed than the average lower-level person has the political capital to accomplish.
They don't have to "fight" to stay employed; anyone with sufficient money is effectively self-employed. It's not going to be illegal to spend your own money running your own business if that's how you want to spend your money.
Anyone "making the most money and doing the least work" has enough money to start a variety of businesses if they get fired from their current job.
If you have a cushy job where you don't really work, and you make a lot of money (which doesn't mean you have capital), how does that translate into being suited to becoming an entrepreneur, using money you're no longer earning and effort capacity you apparently don't have?
Then they’re not going to be doing any significant lobbying, so they’re not covered by GP’s comment, which was selecting for “people who have political capital”.
Yes, there are other forms of political capital besides money, but it’s still mostly just money, especially when they’re part of the tiny voting bloc of “people who make a lot of money and don’t do much work and don’t have wealth”.
Also, I talked with the employees at my local McDonald’s last week. Not one of them had any idea who the owner was. I showed them a photo of the owner and they had never seen them. So apparently that could be an option for people who were overpaid and still want to pretend-work while making money.
Don’t dismiss the other forms of political capital so quickly. Sure, the people who are independently wealthy can independently influence political decisions, but there are so many situations in history where, once conditions worsen for the upper middle class, there is impetus to make political change, overthrow governments, etc. It’s usually when the scholar/merchant class gets annoyed that laws change.
> the humans involved don't want AGI, they want ASI
They are virtually synonymous. After all, a computer already exceeds human capabilities in some areas (for example, numeric computation). If (hypothetically; I don't believe this is possible) they were able to achieve human-level performance in all other areas, they would already have achieved ASI as well.
> The particular problem here is that it is very likely that the easiest people to replace with AI are the ones making the most money and doing the least work. Needless to say, those people are going to fight a lot harder to remain employed than the average lower-level person has the political capital to accomplish.
> I can definitely believe that in 2026 someone at their computer with access to money can send the right emails and make the right bank transfers to get real people to grow corn for you.
I think this is the new Turing test. Once it's been passed, we will have AGI and all the Sam Altmans of the world will be proven correct. (This isn't a perfect test obviously, but neither was the Turing test.)
If it fails to pass, we will still have what jdthedisciple pointed out:
> a non-farmer, is doing professional farmer's work all on his own without prior experience
I am actually curious how many people really believe AGI will happen. There's a lot of talk about it, but when can I ask Claude Code to build me a browser from scratch and get a browser from scratch? Or when can I ask Claude Code to grow corn and have it grow corn? Never? In 2027? In 2035? In the year 3000?
HN seems rife with strong opinions on this, but does anybody really know?
Researchers love to reduce everything to formulae, and believe that when they have the right set of formulae, they can simulate something as-is.
Hint: It doesn't work that way.
Another hint: I'm a researcher.
Yes, we have found a great way to compress and remix the information we scrape from the internet, and even with some randomness it looks like we can emit a set of tokens that makes sense, or search the internet the right way and emit those search results, but AGI is more than that.
There's so much tacit knowledge and implicit computation coming from experience, emotions, sensory inputs, and from our own internal noise. AI models don't work on those. LLMs consume language and emit language. The information embedded in that language is available to them, but most of the tacit knowledge is just an empty shell of the thing we try to define with a limited set of words.
It's the same with anything where we're trying to replace humans in the real world, in daily tasks (self-driving, compliance checks, analysis, etc.).
AI is missing the magic grains we can't put into words or numbers or anything else. The magic smoke, if you'll pardon the term. This is why no amount of documentation can replace a knowledgeable human.
...or this is why McLaren Technology Center's aim of "being successful without depending on any specific human by documenting everything everyone knows" is an impossible goal.
Because like it or not, intuition is real, and AI lacks it, regardless of how we derive or build that intuition.
> There's so much tacit knowledge and implicit computation coming from experience, emotions, sensory inputs and from our own internal noise.
The premise of the article is stupid, though...yes, they aren't us.
A human might grow corn, or decide it should be grown. But the AI doesn't need corn, it won't grow corn, and it doesn't need any of the other things.
This is why they are not useful to us.
Put it in science fiction terms. You can create a monster, and it can have super powers, _but that does not make it useful to us_. The extremely hungry monster will eat everything it sees, but it won't make anyone's life better.
I agree we don't have much to (physically) fear from it...yet. But the people who can't take "no" for an answer and don't get that it is fundamentally non-human? I can believe they are quite dangerous.
I mean... technically it would work this way, but, and this is a big but, reality is extremely complicated and a model that can actually be a reliable formula has to be extremely complicated. There are almost certainly no globally optimal solutions to these types of problems, not to mention that the solution space is constantly changing as the world does. This is why we as humans, and all animals, work in probabilistic frameworks that are highly adaptable. Human intuition. Human ingenuity. We simply haven't figured out how to make models at that level of sophistication. Not even in narrow domains! What AI has done is undeniably impressive, wildly impressive even. Which is why I'm so confused about why we embellish it so much.
It's really easy to think everything is easy when we look at problems from 40k feet. But as you come down to Earth, the complexity increases exponentially and what was a minor detail is now a major problem. As you come down, resolution increases and you see major problems that you couldn't ever see from 40k feet.
As a researcher, I agree very much with you. And as an AI researcher, one of the biggest issues I've noticed with AI is that it abhors detail and nuance. Granted, this is common among humans too (and let's not pretend CS people don't have a stereotype of oversimplification and thinking all things are easy). While people do this frequently, they usually don't do it in their own niche domains, and if they do, we call them juniors. You get programmers thinking building bridges is easy[0] while you get civil engineers thinking writing programs is easy, because each person understands the other's job only at 40k feet and is reluctant to believe they are standing so high[1]. But AI? It really struggles with detail. It really struggles with adaptation. You can get detail out, but it often requires significant massaging and it'll still be a roll of the dice[2]. You also can't get the AI to change course, which is necessary as projects evolve[3]. Anyone who's tried vibe coding knows the best thing to do is just start over. It's even in Anthropic's suggestion guide.
My problem with vibe coding is that it encourages this overconfidence. AI systems still have the exact same problem computer systems do: they do exactly what you tell them to. They are better at interpreting intent, but that blade cuts both ways. The major issue is that you can't properly evaluate a system's output unless you are capable of generating that output yourself. The AI misses the details. Doubt me? Look at Proof of Corn! The Fred page is saying there's an API error[4]. The sensor page doesn't make sense (everything there is fine for an at-home hobby project, but anyone who's worked with those parts knows how unreliable they are. Who's going to do all the soldering? You making PCBs? Where's the circuit to integrate everything? How'd we get to $300? Where's the detail?). Everything discussed is at a 40k-foot view.
[1] I'm not sure why people are afraid of not knowing things. We're all dumb as shit. But being dumb as shit doesn't mean we aren't also impressive and capable of genius. Not knowing something doesn't make you dumb, it makes you human. Depth is infinite and we have priorities. It's okay to have shallow knowledge, often that's good enough.
[2] As implied, what is enough detail is constantly up for debate.
[3] No one, absolutely nobody, has everything figured out from the get-go. I'll bet money none of you have written a (meaningful) program start to finish from plans, ending up with exactly what you expect, never making an error, never needing to change course, even in the slightest.
Edit:
[4] The API issue is weird, and the more I look at the code the weirder things get. For example, there's a file decision-engine/daily_check.py that has a comment to set up a cron job to run every 8 hours. It says to dump data to logs/daily.log, but that file doesn't exist; instead it writes to logs/all_checks.jsonl, which appears to have the data. So why in the world is it reading https://farmer-fred.sethgoldstein.workers.dev/weather?
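For what it's worth, a crontab entry matching what that comment describes (run every 8 hours, dump output to logs/daily.log) would look roughly like the sketch below; the repo path is a hypothetical placeholder, not something taken from the actual project:

    # hypothetical: run the decision engine every 8 hours, appending output to logs/daily.log
    0 */8 * * * cd /path/to/proof-of-corn && python decision-engine/daily_check.py >> logs/daily.log 2>&1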
I think once we get off LLMs and find something that more closely maps to how humans think, which is still not known AFAIK. So either never, or once the brain is figured out.
I'd agree that LLMs are a dead end to AGI, but I don't think that AI needs to mirror our own brains very closely to work. It'd be really helpful to know how our brains work if we wanted to replicate them, but it's possible that we could find a solution for AI that is entirely different from human brains while still having the ability to truly think/learn for itself.
> ... I don't think that AI needs to mirror our own brains very closely to work.
Mostly agree, with the caveat that I haven't thought this through in much depth. But the brain uses many different neurotransmitter chemicals (dopamine, serotonin, and so on) as part of its processing; it's not just binary on/off signals traveling through the "wires" made of neurons. Neural networks as an AI system are only reproducing a tiny fraction of how the brain works, and I suspect that's a big part of why, even though people have been playing around with neural networks since the 1960s, they haven't had much success in replicating how the human mind works. Those neurotransmitters are key in how we feel emotion, and even in how we learn and remember things. Since neural networks lack a system to replicate how the brain feels emotion, I strongly suspect that they'll never be able to replicate even a fraction of what the human brain can do.
For example, the "simple" act of reaching up to catch a ball doesn't involve doing the math in one's head. Rather, it's strongly involved with muscle memory, which is strongly connected with neurotransmitters such as acetylcholine and others. The eye sees the image of the ball changing in direction and subtly changing in size, the brain rapidly predicts where it's going to be when it reaches you, and the muscles trigger to raise the hands into the ball's path. All this happens without any conscious thought beyond "I want to catch that ball": you're not calculating the parabolic arc, you're just moving your hands to where you already know the ball will be, because your brain trained for this since you were a small child playing catch in the yard. Any attempt to replicate this without the neurotransmitters that were deeply involved in training your brain and your muscles to work together is, I strongly suspect, doomed to failure because it has left out a vital part of the system, without which the system does not work.
Of course, there are many other things AIs are being trained for, many of which (as you said, and I agree) do not require mimicking the way the human brain works. I just want to point out that the human brain is way more complex than most people realize (it's not merely a network of neurons, there's so much more going on than that) and we just don't have the ability to replicate it with current computer tech.
Nobody can know, but I think it is fairly clearly possible without signs of sentience that we would consider obvious and indisputable. The definition of 'intelligence' is bearing a lot of weight here, though, and some people seem to favour a definition that makes 'non-sentient intelligence' a contradiction.
As far as I know, and I'm no expert in the field, there is no known example of intelligence without sentience. Actual AI is basically algorithms and statistics simulating intelligence.
Definitely a definition / semantics thing. If I ask an LLM to sketch the requirements for life support for 46 people, mixed ages, for a 28-month space journey… it does pretty good, “simulated” or not.
If I ask a human to do that and they produce a similar response, does it mean the human is merely simulating intelligence? Or that their reasoning and outputs were similar but the human was aware of their surroundings and worrying about going to the dentist at the same time, so genuinely intelligent?
There is no formal definition to snap to, but I’d argue “intelligence” is the ability to synthesize information to draw valid conclusions. So, to me, LLMs can be intelligent. Though they certainly aren’t sentient.
Can you spell out your definition of 'intelligence'? (I'm not looking to be ultra pedantic and pick holes in it -- just to understand where you're coming from in a bit more detail.) The way I think of it, there's not really a hard line between true intelligence and a sufficiently good simulation of intelligence.
I would say that "true" intelligence will allow someone/something to build a tool that never existed before, while intelligence simulation will only allow someone/something to reproduce tools that are already known. I would draw a distinction between someone able to use all his knowledge to find a solution to a problem using tools he knows of, and someone able to discover a new tool while solving the same problem.
I'm not sure the latter exists without sentience.
I honestly don't think humans fit your definition of intelligent. Or at least not that much better than LLMs.
Look at human technology history...it is all people doing minor tweaks on what other people did. Innovation isn't the result of individual humans so much as it is the result of the collective of humanity over history.
If humans were truly innovative, should we not have invented, for instance, at least a stable way of organizing society and economics by now? If anything surprises me about humans, it is how "stuck" we are in the mold of what other humans do.
Circulate all the knowledge we have over and over, throw in some chance, some reasoning skills of the kind LLMs demonstrate every day in coding, millions of instances most of whom never innovate anything but some of whom do, and a feedback mechanism -- that seems like the history of human innovation to me, and it does not seem to demonstrate anything LLMs clearly do not possess. Except, of course, not being plugged into history and the world the way humans are.
I think we are closer than most folks would like to admit.
In my wild-guess opinion:
- 2027: 10%
- 2030s: 50%
- 2040: >90%
- 3000: 100%
Assuming we don't see an existential event before then, I think it's inevitable, and soon.
I think we are gonna be arguing about the definition of "general intelligence" long after these systems are already running laps around humans at a wide variety of tasks.
> That whole part feels like a race to the bottom, instead of making it easier for the ones involved to do what they do.
This is what people said while transitioning from horse carriages to combustion engines, and from steam engines to modern-day locomotives. Like it or not, the race to the bottom has already begun. We will always find a way to work around it, like we have done time and again.
lol this is not the same at all. If these tools were as good as they claim, they wouldn't be struggling so hard to make money or sell them.
The fact that they have to be force-fed to people is all the proof you need that this is an unsustainable bubble.
Something to keep in mind is that unless you can destroy something, the system is not democratic, and people are realizing how undemocratic this game truly is.
yes exactly, comma.ai is making a driver assistance product and this is similar to their stance... which is refreshing
they know they won't be able to make a fully autonomous product while navigating liability and all sorts of problems so they're using technology to make drivers more comfortable while still in control
none of this hype about full autonomy, just realistic ideas about how things can be easier for the humans in control
Using the example from the article, I guess restaurant managers need hand-holding by the chefs and servers, seemingly breaking down the idea behind restaurants, yet restaurants still exist.
The point, I think, is that even if LLMs can't directly perform physical operations, they can still make decisions about what operations are to be performed, and through that achieve a result.
And I also don't think it's fair to say there's no point just because there's a person prompting and interpreting the LLM. That happens all the time with real people, too.
> And I also don't think it's fair to say there's no point just because there's a person prompting and interpreting the LLM. That happens all the time with real people, too.
Yes, what I'm trying to get at is that it's much more vital we nail down the "person prompting and interpreting the LLM" part instead of focusing so much on the "autonomous robots doing everything".
I feel you're still missing the point of the experiment... The entire thing was based on how Claude felt empowering -- "I felt like I could do anything with software from my terminal"... It's not at all about autonomous robots... It's about what someone can achieve with the assistance of LLMs, in this case Claude