Hacker News | red75prime's comments

Aren't they from somewhat below waist level a parte posteriori?

I wonder if psychology plays a role here. An engineer with strong manual coding skills might be hesitant to admit that a tool has become good enough to warrant less involvement.

If a person has motivation, it's not impossible to find the means. Hitchhiking. General assistance through the Human Services Agency for cash. Explain your circumstances, try another job opening, repeat.

But the real intelligence exists and it is beyond all those trinkets. Right? Yeah, this is a common cope.

There's no solid evidence that the brain uses quantum computations. And that's the only concrete thing (besides 'magic') that could solidly place the brain out of reach for now.


To frame the human brain itself as a startup: I'm not convinced by "latent quantum effects" at all as the moat for human synapses.

There's something else that we are missing there, probably at the convergence of multiple biochemical effects (like ion transport).


Evolution has scaled the brain to hundreds of trillions of synapses (parameters) and hundreds of billions of neurons (processing nodes). Current models have around 1/1000 of that number of parameters. The top data centers operate at around 1 exaflop (roughly on par with estimates for the brain). Inference uses significantly less than that.

I’ll wait until the effective number of parameters in ML models is comparable to that of the brain. Then it will be clear whether the brain has something up its sleeve.
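
A back-of-the-envelope version of that comparison, as a sketch; the brain figures are the order-of-magnitude estimates from the comment above, and the model size is an assumed example rather than any specific model:

    # Back-of-envelope scale comparison (order-of-magnitude figures only).
    # Brain numbers are the commonly cited estimates from the comment above;
    # the model parameter count is an assumed example, not a specific model.

    brain_synapses = 200e12   # ~hundreds of trillions of synapses
    brain_neurons = 100e9     # ~hundreds of billions of neurons
    model_params = 2e11       # assumed ~200B-parameter model, for illustration

    ratio = brain_synapses / model_params
    print(f"Synapse-to-parameter ratio: ~{ratio:,.0f}x")  # ~1,000x, i.e. models at ~1/1000 of brain scale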


The article feels, I don't know… maybe like someone calmly sitting in a rocking chair staring at the sea. Then the camera turns, and there's an erupting volcano in the background.

> If it was a life or death decision, would you trust the model? Judgement, yes, but decision? No, they are not capable of making a decision, at least important ones.

A self-driving car with a vision-language-action model inside buzzes by.

> It still fails when it comes to spatial relations within text, because everything is understood in terms of relations and correspondences between tokens as values themselves, and apparent spatial position is not a stored value.

A large multimodal model listens to your request and produces a picture.

> They'll always need someone to take a look under the hood, figure out how their machine ticks. A strong, fearless individual, the spanner in the works, the eddy in the stream!

GPT‑5.3‑Codex helps debug its own training.


> A self-driving car with a vision-language-action model inside buzzes by.

Vision-action maybe. Jamming language in the middle there is an indicator you should run for public office.


Language allows higher-level and longer-term planning and better generalization than purely vision-action models. Waymo uses a VLM in their Driver[1]. Tesla decided to add some kind of language model to their VA stack[2]. Nvidia's Alpamayo uses a VLA[3]. (A toy sketch of what such a pipeline looks like follows the links.)

[1] https://waymo.com/blog/2025/12/demonstrably-safe-ai-for-auto...

[2] https://x.com/aelluswamy/status/1981760576591393203

[3] https://www.nvidia.com/en-gb/solutions/autonomous-vehicles/a...
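
To make the vision-language-action idea concrete, here is a toy sketch of the data flow. Every function and type in it is a made-up placeholder for illustration, not any vendor's actual stack or API:

    # Conceptual sketch of a vision-language-action (VLA) driving stack.
    # All functions and types are hypothetical placeholders; the point is
    # only the shape of the data flow: perceive -> plan in language -> act.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Scene:
        objects: List[str]      # e.g. ["pedestrian at crosswalk", "parked truck"]

    @dataclass
    class Plan:
        description: str        # language-level plan, e.g. "yield, then merge left"

    @dataclass
    class Action:
        steering: float         # radians
        throttle: float         # 0..1

    def perceive(camera_frames) -> Scene:
        """Vision encoder stand-in: turn raw frames into a scene summary (stubbed)."""
        return Scene(objects=["pedestrian at crosswalk"])

    def plan_in_language(scene: Scene) -> Plan:
        """Language-model stage stand-in: reason about the scene, output a high-level plan."""
        if "pedestrian at crosswalk" in scene.objects:
            return Plan(description="slow down and yield to the pedestrian")
        return Plan(description="continue at current speed")

    def act(plan: Plan) -> Action:
        """Action decoder stand-in: map the plan to low-level controls."""
        if "yield" in plan.description:
            return Action(steering=0.0, throttle=0.0)
        return Action(steering=0.0, throttle=0.3)

    if __name__ == "__main__":
        action = act(plan_in_language(perceive(camera_frames=None)))
        print(action)  # Action(steering=0.0, throttle=0.0)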


> Language allows

If a tree falls...

Waymo, X, Nvidia. You fucking nailed unbiased.


See [0] if you want something more academic. What's your gripe with language, anyway?

[0] https://arxiv.org/abs/2506.24044


Language is a construct.

> allow[s] higher-level and longer-term planning and better generalization than purely vision-action model

People do that part. Languages don't plan, they don't think. They don't _do_ anything at all.


Sure. Language just seems to structure a general learning system in a way that allows it to do all those things.

> Sure. Language just seems to structure a general learning system in a way that allows it to do all those things.

That's like saying Python structures a general system.

Python does nothing at all. People use Python to create constructs. We've covered this. :)


Python is not a natural language. A natural language communicates things that are not here and now (essentially everything that occurs naturally). Moreover, language can convey abstractions that exist because of language itself.

> GPT‑5.3‑Codex helps debug its own training

Doesn't this support the author's point? It still required humans.


Is that the hang-up? Are people so unimaginative that they can't see that none of this existed five years ago, and now this machine is -- if still only in part -- assembling itself?

And the details involved in closing some of the rest of that loop do not seem THAT complicated.


You don't know how involved it was. I would imagine it helped debug some of the tools they used to create it. Getting it to actually produce a more capable model end to end, without any human help, absolutely is that complicated.

"Sweat and tears" -> exploration and the training signal for reinforcement learning.

Yep. Participants of Waymo's early rider program are under NDA.

Try to find a single ablation study of a sensor suite. Waymo is in a good position to do such a study, and the company would have benefited from showing that vision-only systems aren't viable (by demonstrating its good will in maintaining public safety and by making it harder for vision-only competitors), but they have published no such study.

I guess they understand that computer vision is a fast-moving target and their paper might become obsolete the next day.
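
For concreteness, a sensor-suite ablation would look roughly like this: run the same driving stack with individual sensors removed and compare a safety metric. The sketch below is hypothetical; the evaluation function and the metric values are stand-ins, not real data:

    # Hypothetical sketch of a sensor-suite ablation: evaluate the same driving
    # stack with sensors removed and compare a normalized safety metric.
    # The evaluation function and its numbers are stand-ins, not real data.

    from itertools import combinations

    FULL_SUITE = {"camera", "lidar", "radar"}

    def evaluate(suite: frozenset) -> float:
        """Placeholder for 'miles per safety-critical intervention' (normalized).
        In a real study this would come from simulation or logged driving data."""
        fake_results = {
            frozenset({"camera", "lidar", "radar"}): 1.0,  # baseline
            frozenset({"camera", "radar"}): 0.7,
            frozenset({"camera", "lidar"}): 0.9,
            frozenset({"lidar", "radar"}): 0.5,
            frozenset({"camera"}): 0.4,
        }
        return fake_results.get(suite, 0.0)

    baseline = evaluate(frozenset(FULL_SUITE))
    for k in (2, 1):
        for subset in combinations(sorted(FULL_SUITE), k):
            score = evaluate(frozenset(subset))
            print(f"{subset}: {score / baseline:.0%} of baseline safety metric")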


FSD and Robotaxi are plenty of evidence that vision-only isn't viable.

Read Electrek articles with a mouthful of salt. Fred Lambert’s “robotaxi is 10x worse than a human” estimate is based on his personal statistical reasoning, which somehow arrived at 200,000 miles per accident for humans. Minor accidents of the kind Tesla reports for its robotaxis (such as low-speed collisions with stationary objects) do not make it into publicly available statistics for human drivers, so his estimate might be significantly off.
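
A quick illustration, with invented numbers rather than sourced statistics, of how sensitive such an estimate is to whether minor incidents are counted the same way on both sides:

    # Illustrative sensitivity check, not sourced statistics: the "Nx worse than
    # a human" figure depends on whether minor incidents are counted the same
    # way on both sides. All numbers below are assumptions for illustration.

    robotaxi_miles = 250_000        # assumed fleet miles
    robotaxi_incidents = 12         # assumed count, including minor low-speed contacts
    robotaxi_mpa = robotaxi_miles / robotaxi_incidents  # ~20,800 mi per incident

    headline_human_mpa = 200_000    # baseline built from reported accidents only
    print(f"vs reported-only human baseline: {headline_human_mpa / robotaxi_mpa:.1f}x worse")

    # If humans also have, say, 2-4 unreported minor incidents per reported one,
    # the like-for-like human baseline shrinks accordingly:
    for unreported_per_reported in (2, 3, 4):
        adjusted_human_mpa = headline_human_mpa / (1 + unreported_per_reported)
        print(f"  with {unreported_per_reported} unreported minor incidents per reported: "
              f"{adjusted_human_mpa / robotaxi_mpa:.1f}x worse")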

Not a single Waymo requires a "safety driver" and the self-driving never disengages the way it does on Teslas.

Waymo routinely uses safety drivers, sorry, "autonomous specialists", when expanding to new cities[1][2]. Waymo cars occasionally contact remote support. If support is not available, the cars just stay where they stopped[3].

Tesla has rolled out a small number of cars with no safety driver[4].

In short, you are either grossly misinformed or intentionally lying. Are you stuck in a political echo chamber?

[1] https://waymo.com/faq/ "Our vehicles are primarily driving autonomously, but you’ll sometimes notice that our cars have autonomous specialists riding in the driver’s seat. These specialists are there to monitor our autonomous driving technology and share important feedback to help us improve the Waymo experience."

[2] https://waymo.com/waymo-in-uk/ "Our autonomous specialists who are present in the vehicle during testing are highly trained professionals."

[3] https://www.bbc.com/news/articles/c36zdxl41jro

[4] https://youtu.be/03e5ixbXIa4


You are grossly misinformed. Waymo self-driving never disengages the way Tesla FSD does. It is active at all times. In novel situations humans will provide instructions on what path to take, but this is relatively infrequent. Tesla Robotaxis are so bad they need a safety driver in every single car at all times, ready to take control when the car does something stupid. The small number of robotaxis without safety drivers are limited to a tiny area and not open to the public.

Waymo works while robotaxi doesn't.


> Waymo self-driving never disengages the way Tesla FSD does. It is active at all times

The consumer version of FSD can park the car if the driver doesn't take control[1]. Waymo seems to require a remote command to initiate parking instead of just standing there with hazard lights on[2].

> Tesla Robotaxis are so bad they need a safety driver in every single car at all times ready to take control

Not a single robotaxi in Austin has a driver behind the wheel, so a driver can't be ready to take control. Stop lying. I no longer believe that you are misinformed.

[1] https://youtu.be/VU3i1Pgk4M0?t=1460

[2] https://waymo.com/blog/2025/12/autonomously-navigating-the-r... "We directed our fleet to pull over and park appropriately"


And that's the way it should be. We're past the "Look! It can talk! How cute!" stage. AGI should be able to deal with any problem a human can.

Vetting them for the potential for whistleblowing might be a bit more involved. But conspiracy theories have an advantage because the lack of evidence is evidence for the theory.

Huh? AI labs routinely pay millions to billions of dollars to various third-party contractors specializing in creating, labeling, and verifying specialized content for pre- and post-training.

This would just be one more checkbox buried in hundreds of pages of requests. Compared to plenty of other ethical grey areas with actual legal implications, like copyright laundering, leaking that someone was asked to create a few dozen pelican images seems like it would be at the very bottom of the list of reputational risks.


Who do you think is in on that? Not just the pelicans; I mean the whole thing. CEOs, top researchers, select mathematicians, congressmen? Does China participate in maintaining the bubble?

I, myself, prefer the universal approximation theorem and the empirical finding that stochastic gradient descent is good enough (and no 'magic' in the brain, of course).
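
For reference, the classical single-hidden-layer form of that theorem (Cybenko/Hornik style), stated loosely rather than in any one paper's exact notation:

    % Classical universal approximation theorem (one hidden layer), stated
    % informally in Cybenko/Hornik style.
    \textbf{Theorem.} Let $\sigma$ be a continuous, non-polynomial activation
    (e.g.\ a sigmoid). Then for every continuous function
    $f : [0,1]^n \to \mathbb{R}$ and every $\varepsilon > 0$ there exist
    $N \in \mathbb{N}$, weights $w_i \in \mathbb{R}^n$, and scalars
    $a_i, b_i \in \mathbb{R}$ such that
    \[
      \sup_{x \in [0,1]^n} \Bigl| f(x) - \sum_{i=1}^{N} a_i \,
          \sigma\!\left( w_i^{\top} x + b_i \right) \Bigr| < \varepsilon .
    \]
    % Note: the theorem only guarantees that an approximator exists, not that
    % stochastic gradient descent will find it; hence the separate appeal to
    % the empirical observation that SGD is "good enough".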


Well, since we're talking about sourcing training material to "benchmaxx" for social proof, and not litigating the whole "AI bubble" debate, here's just the entire cottage industry of data-curation firms:

https://scale.com/data-engine

https://www.appen.com/llm-training-data

https://www.cogitotech.com/generative-ai/

https://www.telusdigital.com/solutions/data-for-ai-training/...

https://www.nexdata.ai/industries/generative-ai

---

P.S. Google Comms would have been consulted re putting a pelican in the I/O keynote :-)

https://x.com/simonw/status/1924909405906338033


Cool. At least they are working across the board and benchmaxxing random things like theory of mind.

