> This doesn't even include the cost of hiring ~20 engineers to handle the buttons. ~6 people to check appearance and do testing... It doesn't include the assembly costs on the line. That 1% was just the cost of button + wire.
That doesn't add up. $1 uninstalled might be plausible for a fancy custom-molded button, even if it's too much for a generic one. (I'd rather have some generic buttons with labels than use a touchscreen, by the way.) But there's no way a few feet of signal wire and a proportional share of the power wiring get anywhere near $1 uninstalled.
Also, I can find entire car stereo units with 15 buttons on them for $15. That kind of integrated button is cheap, has been common in cars for a long time, and can control things indirectly, like a touchscreen button does, if that's cheaper than direct wiring.
Was it ever a problem to get the kind of phone SoC or camera chips you'd need for a backup camera if you were willing to pay an extra $20? I thought the issue was with more specialized parts. And you need one gigabyte of RAM or less.
You shouldn’t need any dedicated RAM. A decent microcontroller should be able to handle transcoding the output from the camera to the display and run infotainment software that talks to the CAN bus or Ethernet.
And the bare minimum is probably just a camera and a display.
Even buffering a full HD frame would only require a few megabytes.
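Back-of-the-envelope numbers (a sketch; the pixel formats are my assumptions, not anything from a spec):

    # Frame buffer sizes for a 1920x1080 display at common pixel formats.
    # Formats are assumptions for illustration; embedded UIs often use RGB565.
    width, height = 1920, 1080
    for name, bpp in [("RGB565", 2), ("RGB888", 3), ("XRGB8888", 4)]:
        print(f"{name}: {width * height * bpp / 2**20:.1f} MiB per frame")
    # RGB565: 4.0 MiB, RGB888: 5.9 MiB, XRGB8888: 7.9 MiB.
    # Double that for double-buffering; still single-digit megabytes.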
Pretty sure the law doesn’t require an Electron app running a VLM (yet) that would justify anything approaching gigabytes of RAM.
I just went on Amazon, and a 1 GB stick of DDR3 RAM is about 30% cheaper than a 128 MB stick. Why would any RAM company make tiny RAM chips when they can make standard-sized chips that work for every application that needs less?
I simply refuse to believe that a CPU with hundreds of megabytes of DRAM is cheap enough to be an appealing choice over the same chip with a gigabyte of RAM. We're not talking about a disposable vape with 3 KB of RAM; this is a car that needs to power a camera and sensors and satellite radio and matrix headlights or whatever. If it's got gigahertz of compute, there's no reason it should still have RAM sized for a computer from 30 years ago.
I tried to think of a wording that wouldn't get this response; I guess I failed. RAM is generally bought in gigabytes, and "1 or less" is as low as numbers go without getting overly detailed.
So what microcontroller do you have in mind that can drive a 1-2 megapixel screen on internal memory? I would have guessed that a separate RAM chip would be cheaper.
Back in the mists of time, we used to do realtime video from camera to display with entirely analog components. Not that I'm eager to have a CRT in my dashboard, but live video from a local camera is a pretty low bar to clear.
Your comment has basically no connection to the comment you replied to. (Which itself had a weak connection to the article, but that's a separate issue.)
I think the “one by one” part allows different interpretations of what guessmyname might have meant.
But I fail to make sense of it either way: either the nuance about lack of consent is missing, or Google is being blamed for not having done, from the very first version, what they just did.
Unless I'm missing what you mean by a mile, this isn't true at all. We have infinitely precise models for the outcomes of LLMs because they're digital. We are also able to engineer them pretty effectively.
The ML research world (so this isn't simply a matter of being ignorant or uninformed) was surprised by the performance of GPT-2 and utterly shocked by GPT-3. Why? Isn't that strange? Did the transformer architecture fundamentally change between these releases? No, it did not.
So why? Because even in 2026, never mind 2018 and 2019, the only way to really know exactly how a neural network trained with x data at y scale will perform is to train it and see. No elaborate "laws", no neat equations. Modern artificial intelligence is an extremely empirical, trial-and-error field, with researchers often giving post-hoc rationalizations for architectural decisions. So no, we do not have any precise models that tell us how an LLM will respond to any query. If we did, we wouldn't need to spend months and millions of dollars training them.
We don't have a model for how an LLM that doesn't exist will respond to a specific query. That's different from lacking insight at all. An LLM that exists is still hard to interpret, but it's very clear what is actually happening. That's better than you often get with quantum physics, where once there's a bunch of particles you can't even get a good answer out of the math.
And even for potential LLMs, there are some pretty good extrapolations for overall answer quality based on the amount of data and the amount of training.
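For example, the Chinchilla-style fit from Hoffmann et al. (2022) predicts training loss from parameter and token counts. A sketch with the published constants (this predicts loss, not per-query behavior, and the constants should be treated as illustrative):

    # Chinchilla scaling law: predicted loss from model size N (params)
    # and training data D (tokens). Constants are the Hoffmann et al.
    # (2022) fit; treat them as illustrative, not gospel.
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

    def predicted_loss(N: float, D: float) -> float:
        return E + A / N**alpha + B / D**beta

    # Roughly Chinchilla itself: 70B params, 1.4T tokens.
    print(predicted_loss(70e9, 1.4e12))  # ~1.94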
>We don't have a model for how an LLM that doesn't exist will respond to a specific query.
We don't have a model for how an LLM that does exist will respond to a specific query either.
>For an LLM that exists it's still hard to interpret but it's very clear what is actually happening.
No, it's not, and I'm getting tired of explaining this. If you think it is, write your paper and get very rich.
>That's better than you often get with quantum physics when there's a bunch of particles and you can't even get a good answer for the math.
You clearly don't understand any of this.
>And even for potential LLMs, there are some pretty good extrapolations for overall answer quality based on the amount of data and the amount of training.
> We don't have a model for how an LLM that does exist will respond to a specific query either.
Yes we do... It's math, you can calculate it.
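In the narrow sense, at least: given fixed weights and greedy decoding, the forward pass is ordinary deterministic arithmetic. A toy sketch (made-up weights standing in for a real model; real deployments add sampling and some hardware-level float nondeterminism):

    import numpy as np

    # Toy "LLM": fixed weights + greedy decoding = fully determined output.
    rng = np.random.default_rng(0)
    vocab, dim = 100, 16
    embed = rng.standard_normal((vocab, dim))
    unembed = rng.standard_normal((dim, vocab))

    def next_token(token_ids):
        hidden = embed[token_ids].mean(axis=0)  # stand-in for the real layers
        logits = hidden @ unembed
        return int(np.argmax(logits))           # greedy: no randomness

    prompt = [3, 14, 15]
    print(next_token(prompt) == next_token(prompt))  # True, every time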
> No, it's not and I'm getting tired of explaining this. If you think it is, write your paper and get very rich.
Why would I get rich for explaining how to do math?
> You clearly don't understand any of this.
Could you be more specific?
Quantum physics is stupidly hard to calculate when you approach realistic situations.
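Concretely: an exact description of n entangled particles needs a state vector whose size grows exponentially, so brute-force simulation blows up almost immediately. A quick illustration using qubits (the 16-bytes-per-amplitude figure assumes complex128):

    # Exact simulation of n qubits needs 2^n complex amplitudes.
    # Assuming complex128, that's 16 bytes per amplitude.
    for n in (10, 30, 50):
        amps = 2**n
        print(f"{n} qubits: 2^{n} amplitudes = {amps * 16 / 2**30:.3g} GiB")
    # 10 qubits: ~15 KiB; 30 qubits: 16 GiB; 50 qubits: ~16 million GiB.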
A real LLM forward pass takes a GPU a fraction of a second.
They're both hard to interpret; please realize I'm agreeing that LLMs are hard to interpret. But they're easier than QM on some other fronts.
And mentioning Copenhagen or many-worlds doesn't show that quantum mechanics is easy to interpret; that's about as useful as saying an LLM works like neuron activation.
Sure, use it. But it very much shouldn't be needed, and if there's a bug keeping you from using it, your performance outside video games should still be fine. Your average new frame only changes a couple of pixels, and a CPU can copy rectangles at full memory speed.
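Rough arithmetic on that (the sizes are assumptions for illustration, not measurements):

    # Bandwidth needed to composite in software at 60 fps.
    # Resolution and rectangle size below are illustrative assumptions.
    bpp = 4                                  # XRGB8888
    full_frame = 2560 * 1440 * bpp           # ~14 MiB per frame
    dirty_rect = 200 * 100 * bpp             # e.g. a small UI region changing
    fps = 60
    print(f"full frames: {full_frame * fps / 1e9:.2f} GB/s")   # ~0.88
    print(f"dirty rects: {dirty_rect * fps / 1e9:.4f} GB/s")   # ~0.0048

Even redrawing the entire screen every frame is under 1 GB/s, a small fraction of the tens of GB/s a modern CPU's memory bus can sustain.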