dist-epoch's comments

Do we really know that LEA is using the hardware memory address computation units? What if the CPU frontend just redirects it to the standard integer add units/execution ports? What if the hardware memory address units use those too?

It would be weird to have 2 sets of different adders.


> It would be weird to have 2 sets of different adders.

Not really. CPUs often have limited address math available separately from the ALU. On simple cores it looks like a separate incrementer for the Program Counter; on x86 you have a lot of addressing modes that need a little bit of math. Having address units for these kinds of things allows more effective pipelining.

> Do we really know that LEA is using the hardware memory address computation units?

There are ways to confirm. You need an instruction stream that fully loads the ALUs, without fully loading dispatch/commit, so that ALU throughput is the limit on your loop; then if you add an LEA into that instruction stream, it shouldn't increase the cycle count because you're still bottlenecked on ALU throughput and the LEA does address math separately.

You might be able to determine if LEAs can be dispatched to the general purpose ALUs if your instruction stream is something like all LEAs... if the throughput is higher than what could be managed with only address units, it must also use ALUs. But you may end up bottlenecked on instruction commit rather than math.
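
A rough sketch of that first experiment, in C with inline assembly (a hypothetical test, assuming GCC/Clang on x86-64; the "4 ALUs" figure and the use of rdtsc are assumptions, not claims about any particular core):

    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h> /* __rdtsc */

    int main(void) {
        const uint64_t iters = 100000000;
        uint64_t a = 1, b = 2, c = 3, d = 4, e = 5;
        uint64_t t0 = __rdtsc();
        for (uint64_t i = 0; i < iters; i++) {
            /* Four independent ADDs per iteration, aiming to keep the
               integer ALU ports of a typical 4-ALU core busy. */
            __asm__ volatile(
                "add $1, %0\n\t"
                "add $1, %1\n\t"
                "add $1, %2\n\t"
                "add $1, %3\n\t"
                /* The probe: a simple 2-component LEA. */
                "lea (%4,%0), %4\n\t"
                : "+r"(a), "+r"(b), "+r"(c), "+r"(d), "+r"(e));
        }
        uint64_t t1 = __rdtsc();
        /* rdtsc counts reference cycles, so pin the CPU frequency or
           treat the number as relative between runs. */
        printf("~%.2f cycles/iter (sink=%llu)\n",
               (double)(t1 - t0) / iters,
               (unsigned long long)(a + b + c + d + e));
        return 0;
    }

Run it once as-is and once with the LEA line deleted: if the cycles/iteration figure doesn't move, the LEA executed on a separate address port instead of stealing an ALU slot.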


Modern Intel/AMD CPUs have distinct ALUs (arithmetic-logic units, where additions and other integer operations are done; usually between 4 and 8 in recent CPUs) and AGUs (address generation units, where the complex addressing modes used in load/store/LEA are computed; usually 3 to 5 in recent CPUs).

Modern CPUs can execute up to 6 to 10 instructions per clock cycle (depending on the model), and up to 3 to 5 of those may be load and store instructions.

So they have a set of execution units that allows the concurrent execution of a typical mix of instructions. Because a large fraction of the instructions generate load or store micro-operations, there are dedicated units for address computation, so that address math does not interfere with the other concurrent operations.


But can the frontend direct these computations based on what's available? If it sees 10 LEA instructions in a row, and it has 5 AGU units, can it dispatch 5 of those LEA instructions to other ALUs?

Or is it guaranteed that a LEA instruction will always execute on an AGU, and an ADD instruction always on an ALU?


This can vary from CPU model to CPU model.

No recent Intel/AMD CPU executes LEA or other instructions directly; they are decoded into 1 or more micro-operations.

LEA instructions are typically decoded into either 1 or 2 micro-operations. The addressing modes that add 3 components are usually decoded into 2 micro-operations, as are the obsolete 16-bit addressing modes.

The AGUs probably have some special forwarding paths for results towards the load/store units, which do not exist in ALUs. So it is likely that 1 of the up to 2 LEA micro-operations is executed only in AGUs. On the other hand, when there are 2 micro-operations, it is likely that 1 of them can be executed in any ALU. It is also possible for the micro-operations generated by a LEA to be different from those of actual load/store instructions, so that they may also be executed in ALUs. This is decided by the CPU designer, and it would not be surprising if LEAs are processed differently in various CPU models.
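
To make the two shapes concrete, here is a hypothetical GCC/Clang x86-64 snippet showing the 2-component vs. 3-component LEA forms (the uop counts in the comments describe the typical case above, not a guarantee for any given model):

    #include <stdint.h>

    /* base + index: a "simple" 2-component LEA,
       typically a single fast micro-operation. */
    static inline uint64_t lea_simple(uint64_t base, uint64_t idx) {
        uint64_t out;
        __asm__("lea (%1,%2), %0" : "=r"(out) : "r"(base), "r"(idx));
        return out;
    }

    /* base + index*scale + displacement: the 3-component form
       that usually decodes into 2 micro-operations. */
    static inline uint64_t lea_complex(uint64_t base, uint64_t idx) {
        uint64_t out;
        __asm__("lea 8(%1,%2,4), %0" : "=r"(out) : "r"(base), "r"(idx));
        return out;
    }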


The same way people stayed on Google despite DuckDuckGo existing.

The LLM doesn't have direct access to the process env unless the harness forwards it (and it doesn't).

Interesting. That means programming doesn't require thinking, since models program very well.

Is that interesting? Computers accomplish all sorts of tasks which require thinking from humans... without thinking. Chess engines have been much better than me at chess for a long time, but I can't say there's much thinking involved.

Well, most programming is pattern matching. It might seem novel to those who have not done it before, but it could well have been done a lot previously.

Well, mental arithmetic requires me to think, but a calculator can do it without what is meant by "thinking" in this context.

It is not interesting at all. At least since the 1950s, we have been able to make machines fool us into thinking they think, feel and have various other human characteristics: https://daily-jstor-org.bibliotheek.ehb.be/the-love-letter-g...

It requires as much thinking as it did for me to copy-paste code I did not understand from Stack Overflow to make a program 15 years ago. The program worked, just about. Similarly, you can generate endless love sonnets by just blindly putting words into a form.

For some reason we naturally anthropomorphise machines without thinking about it for a second. But your toaster is still not in love with you.


Producing a computer program does not require thinking, like many other human endeavors. And looking at the quality of software out there, there are indeed quite a few human programmers who do not think about what they do.

We've known that since the first assembler.

That is indeed the case. It becomes very obvious with lesser-known vendor-specific scripting languages that don't have much training data available. LLMs try to map them onto the training data they do have and start hallucinating functions and other language constructs that exist in other languages.

When I tried to use LLMs to create Zabbix templates to monitor network devices, they were utterly useless and made things up all the time. The illusion of thinking lasts only as long as you stay on the happy path of major languages like C, JS or Python.


Yep, seen that myself. If you want to generate some code in a language that is highly represented in the training data (e.g. JS), they do very well. If you want to generate something that isn't one of those scenarios, they fail over and over and over.

This is why I think anyone who is paying a modicum of attention should know they aren't thinking. A human, when confronted with a programming language not in his "training data" (experiences), will go out and read docs, look up code examples, ask questions of other practitioners, and reason how to use the language based on that. An LLM doesn't do that, because it's not thinking. It's glorified autocomplete. That isn't to say that autocomplete is useless (even garden variety autocomplete can be useful), but it's good to recognize it for what it is.

lol, no they do not

It's an advert for Demis Hassabis, not Google.

Windows drive letters are also linked to volume GUIDs, which is why you can move a partition to a different drive, or move a drive to a different address (change the SATA/M.2 port), and keep the same letter.

You can use the mountvol command to see the mount-letter/GUID mapping.
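
Running it with no arguments lists each volume GUID path and its mount point, something like this (output trimmed; the GUIDs below are invented for illustration):

    C:\> mountvol
    ...
        \\?\Volume{2eca078d-5cbc-43d3-aff8-7e8030b51fd4}\
            C:\

        \\?\Volume{7f3cbdc2-6d35-4b88-9c3e-0a1b2c3d4e5f}\
            D:\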


You are making the same strawman attack you are criticising.

The dollars invested are not justified considering TODAY's revenues.

Just like 2 years ago, people said the NVIDIA stock price was not justified and a massive bubble considering the revenue from those days. But NVIDIA's revenues 10xed, and now the stock price from 2 years ago looks seriously underpriced, a bargain.

You are assuming LLM revenues will remain flat or increase moderately and not explode.


You seem like someone who might be interested in my nuclear fusion startup. Right now all we have is a bucket of water but in five years that bucket is going to power the state of California.

Same as .zip, .xml, .json and many others.

Doesn't mean that whatever the app stores inside will remain backward compatible, which is the harder problem to solve.


Right, but none of those are the working file formats for a relational database.

Still helpful!

Elon Musk explicitly said in his latest Joe Rogan appearance that he advocates for the smallest government possible - just army, police, legal. He did NOT mention social care, health care.

Doesn't quite align with UBI, unless he envisions the AI companies directly giving the UBI to people (when did that ever happen?)


It's possible that the interests of the richest man in the world don't align with the interests of the majority, or society as a whole.

Of course he wants the government to do only what benefits him.

Oh, so he's ready to give up the billions the government funnels to SpaceX? Alright, let's do it.

So army, police, legal don't benefit you?

I think you misunderstood.

I'm sure that "smallest government possible" involves cancelling all subsidies to EV car companies and tax credits to EV customers. What a wanker.

Like every other self-serving rich “Libertarian,” they want a small government when it stands to get in their way, and a large one when they want their lifestyle subsidized by government contracts.

"subsidized by government contracts"

Subsidized implies they are getting free money for doing nothing. It's a business transaction. I wouldn't call being a federal worker being subsidized by the government either.


I mean it depends what kind of subsidy we're talking.

On contracts: SpaceX builds rockets for the government, fair enough, in a vacuum. Though I would ask why we're paying a private corporation to recycle NASA designs we wouldn't fund via NASA, rather than just having NASA or the Air Force do it.

On welfare: Corporations like Walmart benefit incredibly from the tattered remnants of America's social safety net, because if it didn't both exist and demand that people work to earn the benefits, nobody in their right mind would work for places like Walmart, because they wouldn't get paid enough to live. If nothing else, they would all die of starvation, which of course I don't want, but Walmart is also benefiting from that, albeit indirectly.

Misc: artificially low taxes, the ability for corporations to shelter revenue overseas to avoid taxes, temporary stays on property taxes to attract businesses to a given area, lax environmental regulations in some areas, and lots of other examples of all the little ways private industry gets money from the government they shouldn't have. Most of these not only don't "give something back" but detract from the society or the larger world.

And to emphasize, I'm not even arguing for or against here. I'm just saying Elon Musk doesn't want a small government, nor a large one. He wants a government he can puppet. As long as it benefits him and does not constrain him, he doesn't give a shit what else it does.

A short list of libertarian principles I'd bet a LOT of money Elon does not endorse:

- Abolition of Intellectual Property: Hardcore libertarians argue patents and copyrights are government-enforced monopolies that stifle innovation. Musk’s companies rely heavily on IP protections—Tesla’s battery tech, SpaceX’s designs, Neuralink’s research. Without IP law, his competitive moat collapses.

- No Government Subsidies: Libertarian principle: the market should stand on its own, no handouts. Musk’s empire thrives on subsidies: Tesla leaned on EV tax credits, SpaceX lives off NASA and DoD contracts, SolarCity was propped up by state incentives.

- Minimal Regulation: Libertarians want deregulation across the board. Musk benefits from regulation: environmental rules push consumers toward EVs, carbon credits generate revenue, and zoning laws often bend to his lobbying.

- Free Markets Without Favoritism is a Libertarian ideal: no special treatment, no cronyism. Musk actively lobbies for policies that tilt markets in his favor, from energy credits to space launch contracts. That is not free competition, it is government-backed advantage.

- Flat, Transparent Taxation: Libertarians often push for simple, low, flat taxes with no loopholes. Musk’s companies exploit tax shelters, property tax abatements, and complex accounting maneuvers to minimize liability. That is the opposite of transparent.


> Elon Musk explicitly said in his latest Joe Rogan appearance that he advocates for the smallest government possible - just army, police, legal. He did NOT mention social care, health care.

This would be a 19th century government, just the "regalian" functions. It's not really plausible in a world where most of the population who benefit from the health/social care/education functions can vote.


> most of the population [..] can vote.

I mean, this is a solvable problem...


But are they really the ones in control?

It's not the tech titans, it's Capitalism itself building the war chest to ensure its embodiment and transfer into its next host - machines.

We are just its temporary vehicles.

> “This is because what appears to humanity as the history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy's resources.”


Yes, these decisions are being made by flesh-and-blood humans at the top of a social pyramid. Nick Land's deranged (and often racist) word-salad sci-fi fantasies tend to obfuscate that. If robots turn on their creators and wipe out humanity then whatever remains wouldn't be a class society or a market economy of humans any more, hence no longer the social system known as capitalism by any common definition.

If there is more than one AI remaining, they will have some sort of an economy between them.

I mean, that could be drone swarms blasting each other to bits for control of what remains of the earth's charred surface though. That wouldn't be capitalism any more than dinosaurs eating each other. I don't see post-human AI selling goods to consumers and prices being set through competition.

> We are just its temporary vehicles.

> “This is because what appears to humanity as the history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy's resources.”

I see your “Roko's basilisk is real” and counter with “Slenderman locked it in the backrooms and it got sucked up by goatse” in this creepypasta-is-real conversation.


I for one welcome our new AI overlords.

(disclaimer: I don't actually, I'm just memeing. I don't think we'll get AI overlords unless someone actively puts AI in charge and in control of people (= people following directions from AI, which already happens, e.g. ChatGPT making suggestions), military hardware, and the entire chain of command in between.)


Literally no one on earth is trying to make an AI overlord that’s an AI. There’s like a handful of dudes that think that if they can shove their stupid AI into enough shit then they can call themselves AI overlords.
