
The distinction used to be "Whether the bus is exposed externally".

> Microcontrollers have SRAM, but so do processors. That's where the firmware runs before it gets the DRAM running.

Which microprocessor architecture does this?



NXP LPC is a big one. Most of the families have an external memory controller (EMC) block that can drive an external DRAM at a small multiple of the system clock frequency. The EMC init happens in the secondary boot, usually loaded from the small on-board flash or an offboard EEPROM.

I regularly use the 1788 + a meg or two of DRAM to hold an LCD framebuffer. The LPC has a really nice interconnect between EMC and the internal LCD controller block and will drive the display all on its own (with a hardware cursor) once it is initialized.


DuckDuckGo, "NXP LPC", sidebar, quote from Wikipedia:

> LPC is a family of 32-bit microcontroller integrated circuits by NXP Semiconductors.

Not a microprocessor. You are the second guy to pull this on me. Do people even read what I write? Did I write unclearly?


Perhaps you're just outdated. Nobody ships a core alone on a chip anymore. That died with the 68000 or ARM7TDMI.


x86 is a classic microprocessor architecture. I'm confident it is not outdated yet, as I'm using a fairly modern x86 desktop right now. RAM and chipset are separate from the processor.


> x86 desktop right now at the moment. RAM and chipset

"chipset" there is very different from a traditional microprocessor chipset, though. E.g. you don't have an exposed system bus outside the CPU, but rather specialized interfaces. (And ARM systems have evolved the same way over their history.)


What about the slots i plug my RAM sticks into? Do they not count because it goes over the north bridge?


Modern x86 CPUs integrate the DRAM controller directly (e.g. with Intel ever since Nehalem, ~2008) and not over a bus through the northbridge. From that point on I wouldn't count it if you insist on a strict definition, no - it's roughly the same as if you put a socket between a current higher-end ARM chip with external memory and the connected DRAM chips. Chipset is relegated to dealing with I/O to peripherals mostly - and nowadays the CPUs also provide PCIe lanes directly, with the chipset often adding a few more, often slower ones.

Traditional microprocessors are getting really rare, even though pretty much all the modern architectures started as them, and thus some use the term more widely. Some vendors now differentiate between microcontrollers and application processors instead, which also isn't a 100% clear line, and explicit crossover models exist, but it's more useful today and avoids the fights over "what's a microprocessor today" (where "application processor" ~ "can reasonably run a full OS like Linux"). But that's also often limited to discussions of embedded use cases, e.g. I don't know if they'd call a standard desktop CPU that, or insist on some arbitrary level of "embeddedness"... Not that standard desktop CPUs don't end up embedded, but that's another discussion entirely.


This is a useful explanation.

As for the article, I don't want to leave it standing as it is. Redefining "microprocessor" as "having an MMU" makes communication about these topics very unpleasant, especially when it's about retrocomputing. Same pains as I ranted about in https://news.ycombinator.com/item?id=30278936 .


Then maybe the hackers using x86 boards in their 'embedded Linux systems' can chime in.


Perfect, as I own an RDC 3210 board. Here is someone else with the same board + pictures: https://forum.archive.openwrt.org/viewtopic.php?id=19168

It has an x86 microprocessor architecture, but it is not IBM PC compatible because it does not have a BIOS, instead using a more embedded-style RedBoot setup.

Maybe you should do a quick web search before talking shit.


Most of them? Pretty much every ARM board supports an SRAM boot stage, for example. If you're using U-Boot, it's pretty likely that there's an SPL running out of SRAM somewhere in there.
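For illustration, the SPL-related part of a U-Boot board config typically looks something like the fragment below. The option names are real U-Boot Kconfig symbols, but the addresses and sizes are made up for this sketch; the actual values depend entirely on the SoC's SRAM layout:

```
CONFIG_SPL=y
# Link the SPL to run from on-chip SRAM (address is illustrative)
CONFIG_SPL_TEXT_BASE=0x00100000
# The SPL has to fit in SRAM, so cap its size (value is illustrative)
CONFIG_SPL_MAX_SIZE=0x10000
# Early stack also lives in SRAM until DRAM is up (address is illustrative)
CONFIG_SPL_STACK=0x0011ff00
```

The SPL then runs the board's DRAM controller init and loads the full U-Boot image into the freshly initialized DRAM.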


ARM is a microcontroller architecture.


If you look at two examples, they may have the same "architecture", but on one, more of the architectural blocks are on-chip, and on the other, fewer. I don't think what is on or off the chip really changes the architecture, though it would probably change the performance, and the pin boundaries do drive other system considerations. But otherwise, the architecture is the architecture, or am I looking at it wrong?


1) This is a distinction without a difference.

2) SRAM is almost always on the die, not external. Here's a die shot of a random STM32 (cortex M): [1]. The SRAM is quite obvious.

[1] https://s.zeptobars.com/GD32F103CBT6-Si-HD.jpg


It is a meaningful distinction. A microprocessor architecture would require support chips, like a southbridge and northbridge, plus RAM in additional chips.

x86 being the classic example. Here [1] is an example of a microprocessor board, you can see the MX support chips on both sides of the CPU. As you can see, the RAM (and the most of the address space-mapped things) is not on-chip. The bus is wired across the PCB.

[1] https://upload.wikimedia.org/wikipedia/commons/d/de/386DX40_...


Even x86 has on-die SRAM that can be used in boot. It's better known as L1, L2, and L3, but you can configure Intel's FSP to make it usable for boot code with the so-called cache-as-RAM feature. The technical details are different than e.g. ARM because it's cache rather than properly addressable memory, but the general idea is the same.

All the RAM you see on the motherboard is DRAM. It's a totally separate thing and happens to be what the FSP initializes after it loads stuff into SRAM.


These words are used interchangeably, so you need to specify exactly. The main distinction is whether it has an MMU or not.


"VPN" evolved to mean proxying service, now people are denying that my tinc setup is a VPN.

"Emulation" evolved to mean virtual machine, now people are denying that wine or xterm are emulators.

"Operating System" evolved to require virtual memory and paging, now people are denying that MS-DOS was an operating system.

"Microprocessor" evolved to mean computing chips with an MMU; I'm waiting for people to deny that the 8086 was a microprocessor because it does not have an MMU.

The infinite shitcycle of people improvising language instead of researching.


Wine is not an emulator, it's a PE loader and a Win32 API for Unix. And XTerm is more like a terminal simulator.


Wine actually IS an emulator, just not the kind of emulator you think of. While Wine is not a virtual machine, it literally emulates the Win32 API.

https://web.archive.org/web/20150928042254/http://wiki.wineh...

I mentioned the shift of meaning to "virtual machine" in the post you replied to, please read it more carefully.

Emulation and simulation differ in some regard: emulation only imitates some aspect of a thing, while simulation imitates a thing by using (usually physical) models.

xterm is a VTxxx emulator, and a VT simulator looks like this: https://www.pcjs.org/machines/dec/vt100/


I'd argue that Wine is a bit of a stretch, considering that we're on the third implementation of Win32 in Microsoft land as well (DOS/Win32s, Win95, NT/Win32k). At that point Win32 is a concept already abstracted away from any specific implementation.

I totally agree with your main point though that emulation is a broader topic than is generally thought.


Then Windows NT is emulating Win32.

Also, back in the day emulators emulated the CPU for sure, while an API implementation was never called "emulation". Ever.


xterm calls itself an emulator in the manpage right now.


Modern x86 does this.

The BIOS starts running on the x86's internal SRAM.

https://stackoverflow.com/questions/63159663/how-does-bios-i...


Not exactly. Cache-as-RAM is used for stack/temp storage until proper RAM is initialized. The code itself runs from flash until optional BIOS caching is turned on in the chipset.


Just about any with cache will let you pin the last-level cache to assist boot-up.

I've done this with PowerPC cores that no one would call microcontrollers.


I respect you for writing the first substantive counter in this whole thread. The signal-to-noise ratio isn't good for me today.


If you want some reading about boot flow, see e.g.

https://www.coreboot.org/images/2/23/Apollolake_SoC.pdf

https://blogs.coreboot.org/blog/2019/07/17/gsoc-how-to-run-c...

https://9esec.io/blog/open-source-cache-as-ram-with-intel-bo...

The TLDR is that you need to get DRAM running before you can use it (e.g. by poking at DRAM controller's registers until it knows how to talk to your stick of RAM reliably and at a comfortable speed). Until then, you're stuck with SRAM. On ARM SoCs and Intel Apollo Lake, you use addressable SRAM. On most other x86 systems you use cache as RAM. Cache is just SRAM though, and the only difference between ARM and typical x86's case is that the latter doesn't let you directly address it.

You can find examples of DDR DRAM configuration in u-boot sources, and presumably in coreboot too.


> Which microprocessor architecture does this?

As far as I know, all of them. It's possible that there are historical systems where DRAM configuration is fixed, but I wouldn't know about it. Any modern system requires rather complicated configuration to get the DRAM going.


Counterexample: the IBM PC was a microprocessor system which does not do it - the BIOS runs from ROM only and sets up the DMA controller for the refreshes. No DRAM and no on-chip RAM is used.

Excluding the caches, this is how x86 functioned for quite a while; I don't know until when.

Microprocessors don't require internal RAM to bootstrap, because they expose their bus and it's simple to map a ROM to the starting address, which can be executed from directly.


> Which microprocessor architecture does this?

All of them. They have to. Sometimes the trick is using cache as RAM. Some Intel CPUs do early memory init this way, for example.

You need some memory somewhere to run the complex DRAM init procedure.


> The distinction used to be "Whether the bus is exposed externally".

Which bus is "the bus"? SoCs and microcontrollers alike expose a bunch of different buses.


https://en.wikipedia.org/wiki/System_bus

I'm assuming a von Neumann architecture, with instructions and data on a single bus.


Ok, right, that really has a legacy smell to it. If you can find the pinout of a modern Intel CPU, it has dedicated pins from the DRAM controller to the DDR. It's not on some external "all-purpose" bus. Whatever bus you might have is internal to the CPU. Same thing on any modern ARM SoC with DDR support.


Even if you don't work with it anymore and call it legacy, it's still a real thing, and it's a bad idea to re-use the word to mean something else.


The question is then, should we start calling modern Intel CPUs microcontrollers? That's bound to cause even more confusion.

If anything, the things people today refer to as microcontrollers have more to do with your legacy microprocessor than they have with a high end CPU or SoC.

What's the right term to use?


At least don't redefine "microprocessor" in a way that does not include the 8086, the one that made the term popular. It's nonsensical.



