NXP LPC is a big one. Most of the families have an external memory controller (EMC) block that can drive an external DRAM at a small multiple of the system clock frequency. The EMC init happens in the secondary boot, usually loaded from the small on-board flash or an offboard EEPROM.
I regularly use the 1788 + a meg or two of DRAM to hold an LCD framebuffer. The LPC has a really nice interconnect between EMC and the internal LCD controller block and will drive the display all on its own (with a hardware cursor) once it is initialized.
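A minimal sketch of that handoff, assuming the secondary boot has already brought up the EMC. The register block and names here are illustrative stand-ins (the real ones are in the LPC178x/177x user manual); only the 0xA0000000 dynamic-memory window is taken from the LPC1788 memory map:

    #include <stdint.h>

    /* Illustrative register block for the on-chip LCD controller; the real
     * register names and offsets are in the LPC178x/177x user manual. */
    typedef struct {
        volatile uint32_t TIMING_H;   /* horizontal timing             */
        volatile uint32_t TIMING_V;   /* vertical timing               */
        volatile uint32_t POLARITY;   /* pixel clock / signal polarity */
        volatile uint32_t FRAME_BASE; /* framebuffer address           */
        volatile uint32_t CTRL;       /* enable, bpp, TFT mode         */
    } lcd_regs_t;

    #define LCD           ((lcd_regs_t *)0x20088000u) /* illustrative base address  */
    #define EXT_DRAM_BASE 0xA0000000u                 /* EMC dynamic CS0 on LPC1788 */

    /* 480x272, 16 bpp framebuffer placed in the EMC-attached external DRAM. */
    static uint16_t *const framebuffer = (uint16_t *)EXT_DRAM_BASE;

    void lcd_start(void)
    {
        /* EMC bring-up (SDRAM mode register, RAS/CAS latency, refresh period)
         * is assumed to have been done by the secondary boot already. */
        LCD->FRAME_BASE = (uint32_t)(uintptr_t)framebuffer; /* LCD DMA fetches straight from DRAM */
        LCD->CTRL      |= 1u;                               /* enable the controller */

        /* From here the controller refreshes the panel on its own; the CPU
         * just draws pixels into framebuffer[]. */
    }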
x86 is a classic microprocessor architecture. I'm confident it is not outdated yet, as I'm using a fairly modern x86 desktop right now. RAM and chipset are separate from the processor.
> x86 desktop right now. RAM and chipset
"chipset" is very different than a traditional microprocessor chipset though on that. E.g. you don't have an exposed system bus outside the CPU, but rather specialized interfaces. (And ARM systems have evolved the same way over their history)
Modern x86 CPUs integrate the DRAM controller directly (for Intel, ever since Nehalem, ~2008) instead of going over a bus through the northbridge. From that point on I wouldn't count it if you insist on a strict definition, no - it's roughly as if you put a socket between a current higher-end ARM chip with external memory and the connected DRAM chips. The chipset is mostly relegated to dealing with I/O to peripherals - and nowadays the CPUs also provide PCIe lanes directly, with the chipset often adding a few more, often slower ones.
Traditional microprocessors are getting really rare, even though pretty much all the modern architectures started out as them, and thus some use the term more widely. Some vendors now differentiate between microcontrollers and application processors instead, which also isn't a 100% clear line, with explicit crossover models existing, but it's more useful today and avoids the fights over "what's a microprocessor today" (where "application processor" ~ "can reasonably run a full OS like Linux"). But that's also often limited to discussions of embedded use cases, e.g. I don't know if they'd call a standard desktop CPU that or insist on some arbitrary level of "embeddedness"... Not that standard desktop CPUs don't end up embedded, but that's another discussion entirely.
For the article, I don't want to leave it standing as it is. Redefining "microprocessor" as "having an MMU" makes communication about these topics very unpleasant, especially when it's about retrocomputing. Same pains as I ranted about in https://news.ycombinator.com/item?id=30278936 .
It has an x86 microprocessor architecture, but it is not IBM PC compatible because it does not have a BIOS, instead using a more embedded-style RedBoot setup.
Maybe you should do a quick web search before talking shit.
Most of them? Pretty much every ARM board supports an SRAM boot stage, for example. If you're using U-Boot, it's pretty likely that there's an SPL running out of SRAM somewhere in there.
If you look at two examples, they may have the same "architecture", but on one, more of the architectural blocks are on-chip, and on the other, fewer. I don't think what is on or off the chip really changes the architecture, though it would probably change the performance, and the pin boundaries do drive other system considerations. But otherwise, the architecture is the architecture, or am I looking at it wrong?
It is a meaningful distinction. A microprocessor architecture would require support chips, like a southbridge and northbridge, plus RAM in additional chips.
x86 being the classic example. Here [1] is an example of a microprocessor board; you can see the MX support chips on both sides of the CPU. As you can see, the RAM (and most of the address-space-mapped things) is not on-chip. The bus is wired across the PCB.
Even x86 has on-die SRAM that can be used during boot. It's better known as L1, L2, and L3 cache, but you can configure Intel's FSP to make it usable for boot code with the so-called cache-as-RAM feature. The technical details differ from e.g. ARM because it's cache rather than properly addressable memory, but the general idea is the same.
All the RAM you see on the motherboard is DRAM. It's a totally separate thing and happens to be what the FSP initializes after it loads stuff into SRAM.
"VPN" evolved to mean proxying service, now people are denying that my tinc setup is a VPN.
"Emulation" evolved to mean virtual machine, now people are denying that wine or xterm are emulators.
"Operating System" evolved to require virtual memory and paging, now people are denying that MS-DOS was an operating system.
"Microprocessor" evolved to mean computing chips with MMU, im waiting for people to deny that the 8086 was an microprocessor because it does not have an MMU.
The infinite shitcycle of people improvising language instead of researching.
I mentioned the shift of meaning to "virtual machine" in the post you replied to, please read it more carefully.
Emulation and simulation are different in some regard: emulation only imitates some aspect of a thing, while simulation imitates a thing by using (usually physical) models.
I'd argue that wine is a bit of a stretch, considering that we're on the third implementation of win32 in Microsoft land as well (DOS/win32s, Win95, NT/Win32k). At that point win32 is a concept already abstracted away from a specific implementation.
I totally agree with your main point though that emulation is a broader topic than is generally thought.
Not exactly. Cache-as-RAM is used for stack/temp storage until proper RAM is initialized. The code itself runs from flash until optional BIOS caching is turned on in the chipset.
The TLDR is that you need to get DRAM running before you can use it (e.g. by poking at the DRAM controller's registers until it knows how to talk to your stick of RAM reliably and at a comfortable speed). Until then, you're stuck with SRAM. On ARM SoCs and Intel Apollo Lake, you use addressable SRAM. On most other x86 systems you use cache as RAM. Cache is just SRAM though, and the only difference between ARM and the typical x86 case is that the latter doesn't let you directly address it.
You can find examples of DDR DRAM configuration in the U-Boot sources, and presumably in coreboot too.
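To give an idea of the shape of that code, here's a sketch with entirely made-up register names and values - real controllers have dozens more timing and training fields, which is why U-Boot and coreboot wrap this in per-SoC drivers:

    #include <stdint.h>

    /* Entirely hypothetical DRAM controller registers, just to show the shape
     * of the bring-up; a real controller has many more timing/leveling fields. */
    #define DRAMC_TIMING  (*(volatile uint32_t *)0x4000F000u)
    #define DRAMC_MODE    (*(volatile uint32_t *)0x4000F004u)
    #define DRAMC_REFRESH (*(volatile uint32_t *)0x4000F008u)
    #define DRAMC_CTRL    (*(volatile uint32_t *)0x4000F00Cu)
    #define DRAMC_STATUS  (*(volatile uint32_t *)0x4000F010u)

    #define DRAM_BASE     0x80000000u   /* where the DRAM appears once it's up */

    /* Runs out of SRAM (or cache-as-RAM on x86): DRAM is not usable yet. */
    void dram_init(void)
    {
        DRAMC_TIMING  = 0x00221133u; /* CAS latency, tRCD, tRP, ... from the DDR datasheet */
        DRAMC_REFRESH = 1560u;       /* refresh interval in controller clock ticks         */
        DRAMC_MODE    = 0x00000032u; /* value to program into the DDR chip's mode register */
        DRAMC_CTRL    = 1u;          /* kick off the init/training sequence                */

        while ((DRAMC_STATUS & 1u) == 0u)
            ;                        /* spin until the controller reports "ready" */

        /* Only now is it safe to relocate the stack, heap, or U-Boot proper here. */
        volatile uint32_t *dram = (volatile uint32_t *)DRAM_BASE;
        dram[0] = 0xDEADBEEFu;       /* quick sanity poke */
    }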
As far as I know, all of them. It's possible that there are historical systems where DRAM configuration is fixed, but I wouldn't know about it. Any modern system requires rather complicated configuration to get the DRAM going.
Counterexample: the IBM PC was a microprocessor system that didn't do it - the BIOS runs from ROM only and sets up the DMA controller for the DRAM refresh. No DRAM, no RAM used.
Excluding the caches, this is how x86 functioned for quite a while; I don't know until when.
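Roughly what that original-PC ROM code does, sketched in C with an assumed outb() helper (on the real 8088 BIOS this is just a handful of OUT instructions). The port numbers and mode bytes follow the published IBM PC BIOS listing as far as I recall, so treat the exact constants as illustrative:

    #include <stdint.h>

    /* Stand-in for the OUT instruction; stubbed so the sketch is self-contained. */
    static inline void outb(uint16_t port, uint8_t val) { (void)port; (void)val; }

    /* DRAM refresh setup on the original IBM PC: runs from BIOS ROM with no RAM
     * in use. PIT channel 1 periodically triggers DMA channel 0, whose dummy
     * read-verify cycles refresh the DRAM rows. */
    void setup_dram_refresh(void)
    {
        /* 8253 timer, channel 1: mode 2 (rate generator), lobyte only. */
        outb(0x43, 0x54);
        outb(0x41, 0x12);   /* divisor 18 -> one refresh request every ~15 us */

        /* 8237 DMA controller */
        outb(0x0D, 0x00);   /* master clear */
        outb(0x0B, 0x58);   /* channel 0: single transfer, autoinit, read-verify */
        outb(0x01, 0xFF);   /* channel 0 count = 0xFFFF (low byte, then high byte) */
        outb(0x01, 0xFF);
        outb(0x08, 0x00);   /* command register: enable the controller */
    }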
Microprocessors don't require internal RAM to bootstrap, because they expose their bus and it's simple to map a ROM to the starting address, which can be executed from directly.
Ok, right, that really has a legacy smell to it. If you look at the pinout of a modern Intel CPU, it has dedicated pins going from the DRAM controller to the DDR. It's not on some external "all-purpose" bus; whatever bus you might have is internal to the CPU. Same thing on any modern ARM SoC with DDR support.
The question is then, should we start calling modern Intel CPUs microcontrollers? That's bound to cause even more confusion.
If anything, the things people today refer to as microcontrollers have more to do with your legacy microprocessor than they have with a high end CPU or SoC.
> Microcontrollers have SRAM, but so do processors. That's where the firmware runs before it gets the DRAM running.
Which microprocessor architecture does this?