Compare the yields in a typical JACS (or any high-end journal) paper with those in OrgSyn and I think it's pretty clear that yields in many papers are exaggerated, often substantially. It's a single untraceable number and the outcome of your PhD depends on it - the incentive is very clear. Leave a bit of DCM in, weigh, then high-vac to get rid of the singlet at 5.30 ppm, and no one's any the wiser...
Fun fact: in the UK, low-risk nuclear plant waste (for example workers' overalls) is bundled up and buried with... coal plant ash, which is, of course, far more radioactive than the waste it is supposed to be protecting against. This was the case 15 years ago; it may have changed now that the UK has removed coal from its generation mix.
Why do you claim the coal ash is intended to be "protecting against" the nuclear plant waste and not just different types of radioactive waste being buried together?
Coal ash is not classed as radioactive, and as a useless by-product of burning coal it was abundant and cheap. The point is more that things classed as "low level" waste from nuclear plants are most often completely harmless, but because of public understanding of the word "nuclear" their disposal is heavily regulated.
Coal ash is commonly used in cement [1]. I would suspect the nuclear waste is being encased in cement to prevent leakage, with the punchline of course being that the encasement is more radioactive than its contents.
Wait until you look at the leakage current of capacitors...! It's very poorly specified, if specified at all, and can actually swamp the consumption of the active components in these low or sub-microamp situations. The dual voltage rail that msanford described is the way to go here: gate as much as you possibly can and really focus on reducing the duty cycle.
ESP's deep sleep is not great - the datasheet for the C3 says 5 uA. That's an order of magnitude above low power microcontrollers (e.g. ATSAML), and two orders of magnitude above an ultra low power timer. Not horrendous, but higher than I'd prefer for a tiny watch battery.
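To put rough numbers on it, here is a back-of-envelope sketch in C. The 220 mAh CR2032 rating is a typical figure, the 5 uA comes from the C3 datasheet as quoted above, and the other two currents are just "one and two orders of magnitude lower" as assumptions; leakage and self-discharge are ignored, so real life will be worse.

    #include <stdio.h>

    /* Rough battery-life estimate: hours = capacity / average current. */
    int main(void) {
        const double capacity_mah = 220.0;                 /* typical CR2032 rating       */
        const double sleep_ua[]   = { 5.0, 0.5, 0.05 };    /* ESP32-C3, ATSAML-class, ULP
                                                              timer (latter two assumed)  */
        for (int i = 0; i < 3; i++) {
            double hours = capacity_mah / (sleep_ua[i] / 1000.0);
            printf("%.2f uA -> %.0f hours (~%.1f years)\n",
                   sleep_ua[i], hours, hours / (24 * 365));
        }
        return 0;
    }

So 5 uA alone is already eating the battery in about five years, before the rest of the circuit draws anything.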
Not wrong, especially for microcontrollers where micro/nanosecond determinism may be important - software running on general purpose cores is not suitable for that. They can also be orders of magnitude more energy efficient than running a full core just to twiddle some pins.
I’ve got a project that uses four hardware serial modules, timers, an ADC, the event system, etc., all dedicated function. Sure, they have their quirks, but once you’ve learnt them you can reuse a lot of the drivers across multiple products, especially for a given vendor.
Of course there is some cost, but it’s finding the balance for your product that is important.
> They can also be orders of magnitude more energy efficient than running a full core just to twiddle some pins.
This used to be true, but as fabrication shrinks, you move first to quasi-FSMs (like the PIO blocks) and eventually to mini processors, since those are smaller than the dedicated units of the previous generation. When you get the design a bit wrong you end up with the ESP32, where the lack of general computation in the peripherals radically bumps memory requirements and therefore power usage.
This trend also occurs in GPUs where functionality eventually gets merged into more uniform blocks to make room for newly conceived specialist units that have become viable.
No, still true - you’re never going to beat the determinism, size, and power of a few flops and some logic to drive a common interface directly compared to a full core with architectural state and memory. E.g., just to enter an interrupt is 10-15 odd cycles, a memory access or two to set a pin, and then 10-15 cycles again to restore and exit.
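For a sense of what that software path looks like, here is a minimal sketch; the register names and addresses are placeholders rather than any real vendor API, and the cycle counts are just the rough figures above (on a typical Cortex-M, entry stacking and exit unstacking are each on the order of a dozen cycles).

    /* Placeholder register definitions, for illustration only. */
    #define GPIO_OUT    (*(volatile unsigned int *)0x40001000u)
    #define EXTI_CLEAR  (*(volatile unsigned int *)0x40002000u)
    #define OUTPUT_PIN  (1u << 3)
    #define PENDING_BIT (1u << 0)

    /* The "full core" path: ~10-15 cycles of entry before this body runs,
       one or two memory accesses inside it, then ~10-15 cycles to restore
       and return - versus a flip-flop that reacts on the next clock edge. */
    void pin_event_irq_handler(void) {
        GPIO_OUT   |= OUTPUT_PIN;   /* the useful work: a single store       */
        EXTI_CLEAR  = PENDING_BIT;  /* housekeeping: acknowledge the request */
    }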
Additionally, micros have to be much more robust electrically than a cutting edge (or even 14 nm) CPU/GPU and available for extended (decade) timespans, so the economics driving the shrink are different.
Small, fast cores have certainly eaten the lunch of, e.g., large dedicated DSP blocks, but those are niche cases where the volume is low, so eventually the hardware cost and the cost of developing on weird architectures outweigh running a general purpose core.
> No, still true - you’re never going to beat the determinism, size, and power of a few flops and some logic to drive a common interface directly compared to a full core with architectural state and memory.
But you must know what you intend to do when designing the MCU, and history shows (and some of the questioning here also shows) that this isn’t the case. As you point out expected lifespans are long, so what is a designer to do?
The ESP32 case is interesting because it comes so close, to the point that I believe the RMT peripheral probably partly inspired the PIO, given how widely it has been used for other things and how it breaks down.
The key weakness of the RMT is that it expects the data structures used to control it to be prepared in memory already, almost certainly by the CPU. This means that altering the data being sent out requires the main app processor, the DMA and the peripheral all to be involved, and we are hammering the memory bus while doing it.
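As a rough sketch of that cost: the struct below mirrors the shape of the legacy ESP-IDF rmt_item32_t as I remember it, and the tick counts are purely illustrative, so treat the details as assumptions rather than the actual driver API.

    #include <stdint.h>

    /* One RMT "item" encodes two output levels with durations; the CPU has
       to expand every payload bit into one of these 32-bit records before
       the peripheral/DMA can replay them. */
    typedef struct {
        uint32_t duration0 : 15, level0 : 1;   /* first half of the symbol  */
        uint32_t duration1 : 15, level1 : 1;   /* second half of the symbol */
    } rmt_item_t;

    /* e.g. a WS2812-style LED strip: 24 items (96 bytes) of buffer per LED,
       all built and rewritten by the main core whenever the data changes. */
    void encode_byte(uint8_t byte, rmt_item_t *out) {
        for (int bit = 7; bit >= 0; bit--, out++) {
            int one = (byte >> bit) & 1;
            out->level0 = 1; out->duration0 = one ? 52 : 14;  /* high time, ticks (illustrative) */
            out->level1 = 0; out->duration1 = one ? 52 : 90;  /* low time, ticks (illustrative)  */
        }
    }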
A similar thing occurs with almost any non-trivial SPI usage, where a lot of people end up building "big" (relatively) buffers in memory in advance.
Both of those situations are very common and bad. Assuming the tiny cores can have their own program memory they will be no less deterministic than any other sort of peripheral while radically freeing up the central part of the system.
One of the main things I have learned over the years is that people wildly overstate the cost of computation and understate the cost of moving data around. If you can reduce the data a lot at the cost of a bit more computation, that is a big win.
> But you must know what you intend to do when designing the MCU, and history shows (and some of the questioning here also shows) that this isn’t the case. As you point out expected lifespans are long, so what is a designer to do?
Designers do know that UARTs, SPIs, I2C, timers etc will be around essentially forever. Anything new has to be so much faster/better, the competition being the status quo and its long tail, that you would lay down a dedicated block anyway.
I think we'll disagree, but I'm not convinced by many of the cases given here (usually DVI on an RP2040...), as you would just buy a slightly higher-spec and better-optimised system that has the interface already built in. Personal opinion: great fun to play with, and definitely good to have a couple to handle niche interfaces (e.g. OneWire), but not for the majority of use cases.
> A similar thing occurs with almost any non-trivial SPI usage, where a lot of people end up building "big" (relatively) buffers in memory in advance.
This is neither here nor there for a "PIO" or a fixed function - there has to be state and data somewhere. I would rather allocate just what is needed for e.g. a UART (on my weapon of choice, that amounts to a heady 40 bits local to the peripheral, written once to configure it, overloaded with SPI and I2C functionality) and not trouble the memory bus other than for data (well said on data movement; it burns a lot and is harder to capture).
> Assuming the tiny cores can have their own program memory they will be no less deterministic than any other sort of peripheral while radically freeing up the central part of the system.
Agreed, but only if it's dedicated to a single function, of course; otherwise you have access contention. And, of course, we have already radically freed up the central part of the system :P
If you have a programmable state machine that's waiting for a pin transition, it can easily do the thing it's waiting to do in the clock cycle after that transition. It doesn't have to enter an interrupt handler. That's how the GA144 and the RP2350 do their I/O. Padauk chips have a second hardware thread and deterministically context switch every cycle, so the response latency is still less than 10–15 cycles, like 1–2. I think old ARM FIQ state also effectively works this way, switching register banks on interrupt so no time is needed to save registers on interrupt entry, and I think the original Z80 (RIP this year) also has this feature. Some RISC-V cores (CH32V003?) also have it.
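As a sketch of why the latency collapses, here is generic C standing in for what a PIO program or dedicated hardware thread effectively does; the register names and addresses are placeholders, not any real SDK.

    #define GPIO_IN      (*(volatile unsigned int *)0x40000000u)  /* placeholder */
    #define GPIO_OUT     (*(volatile unsigned int *)0x40000004u)  /* placeholder */
    #define TRIGGER_PIN  (1u << 0)
    #define RESPONSE_PIN (1u << 1)

    /* A dedicated state machine or hardware thread is, in effect, permanently
       parked on the equivalent of this loop: no interrupt entry/exit, so the
       response lands a cycle or two after the pin transition. */
    void follow_pin_forever(void) {
        for (;;) {
            while ((GPIO_IN & TRIGGER_PIN) == 0) { }   /* wait for rising edge  */
            GPIO_OUT |= RESPONSE_PIN;                  /* act immediately       */
            while ((GPIO_IN & TRIGGER_PIN) != 0) { }   /* wait for falling edge */
            GPIO_OUT &= ~RESPONSE_PIN;
        }
    }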
An alternate register bank for the main CPU is bigger than a PWM timer peripheral or an SPI peripheral, sure, but you can program it to do things you didn't think of before tapeout.
They are available in that quantity. They do make sense up to around that number, although around 1K is the sweet spot. Laying out and certifying a high speed design (not just a noddy A9 with DDR2) is expensive and it's all NRE cost. The cost of the components for the CM is driven down significantly by the combined volume - if you want to buy 1-10K of those by yourself, you will pay a premium over what RasPi gets them for. You might not need a full 6/8-layer board for the whole product either. Just as important, the software support is great as it's just a standard RasPi - you don't have to support your own custom image.
There’s a big difference between supporting food security and subsidising otherwise unviable land usage and farming practices. In the UK, there are subsidies for upland sheep farming, which produces a negligible amount of food at high cost (monetary and environmental) and next to no return for the farmers even after the subsidy.
Re. green subsidies: those are better characterised as investment in the technology of the future. You might also like to compare them with subsidies to the fossil fuel sector.