It's a nice theoretical implementation, but you'd find few practical examples using floating-point math, running as they do on systems that don't support it.
Unless your chip has a hardware floating-point unit, you're stuck using software float operations. They're slower, though not extraordinarily so. But the main drawback is that the floating-point library can massively inflate your code size, to the point that it no longer fits in available memory.
If you can get away with it, it's fine, but when you can't it's a huge problem.
Is this still true? It seems like most IoT stuff has chips that easily eclipse the capabilities of my Amiga 1000 and are 32-bit with hardware floats. Looking around, I'm having trouble finding something in my house that isn't a 32-bit chip with a hardware FPU. Possibly my Keurig machine? Even using digital instead of an RLC implementation in a coffee machine seems like a waste, but I'm old.
Cortex-M4/M33 parts are pretty widely deployed in microcontrollers with control-oriented hardware, for example the STM32G4 and Renesas RA series.
There are still a few reasons not to use it, though. Saving and restoring the float registers adds interrupt-handling overhead. If you can fit into 16-bit integers, you can also use the 2-way SIMD instructions and get at least double the throughput.
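As a rough sketch of what that 2-way SIMD looks like, assuming a Cortex-M4/M33 target with the DSP extension and CMSIS-Core headers (which provide the __QADD16 intrinsic); the packing helper and function names here are made up for the example:

```c
#include <stdint.h>
/* Assumes your device's CMSIS header is included and provides __QADD16,
 * which performs two saturating signed 16-bit additions in one instruction. */

/* Pack two Q15 samples (e.g. error terms for two control channels)
 * into one 32-bit word, low channel in the low halfword. */
static inline uint32_t pack2_q15(int16_t lo, int16_t hi)
{
    return (uint32_t)(uint16_t)lo | ((uint32_t)(uint16_t)hi << 16);
}

/* Accumulate the errors for both channels at once: one __QADD16 instead
 * of two separate adds, which is where the doubled throughput comes from. */
static inline uint32_t accumulate2_q15(uint32_t acc, uint32_t err)
{
    return __QADD16(acc, err);
}
```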
Finally, floats themselves have a lot of footguns. You have to be very careful about propagating NaNs and the like into control loops. It's also really easy (in C) to accidentally promote to double.
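A quick sketch of the accidental-promotion footgun, since it's easy to miss in review (the function and variable names are just for illustration):

```c
#include <math.h>

float update(float err, float kp)
{
    /* 2.0 is a double literal, so err * 2.0 is computed in double precision,
     * via soft-float on FPU-less parts (or parts with a single-precision-only
     * FPU like the M4's), then truncated back to float. */
    float doubled = err * 2.0;      /* silent promotion to double; 2.0f avoids it */

    /* Same with the double-precision libm functions: fabs() takes a double,
     * fabsf() is the float version. */
    float magnitude = fabs(err);    /* promotes; fabsf(err) stays in single precision */

    return kp * doubled + magnitude;
}
```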
Even if you could, floating point may not be the best representation for control applications. There are a lot of problems, like NaN, as somebody pointed out. Even neglecting that, a fixed-point representation may be better.
You don't need the fine precision or large range that floats offer (they get those from the exponent). Signal variations below a certain threshold are drowned out by the noise floor, and signals in control systems also have an expected maximum bound.
At the other end, stability calculations based on quantization noise may be difficult with floats, since their least count changes with the exponent. Fixed point has a uniform least count throughout its range.
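To make the uniform-least-count point concrete, here's a tiny sketch of a Q16.16 fixed-point type; the names and the choice of 16 fractional bits are just example assumptions:

```c
#include <stdint.h>

/* Q16.16: 32-bit signed value with 16 fractional bits.
 * The quantization step is 2^-16 everywhere in the representable range,
 * so quantization-noise analysis uses a single constant. A float's step,
 * by contrast, scales with the exponent and grows with magnitude. */
typedef int32_t q16_16;

#define Q16_ONE (1 << 16)

static inline q16_16 q16_from_float(float x) { return (q16_16)(x * Q16_ONE); }
static inline float  q16_to_float(q16_16 x)  { return (float)x / Q16_ONE; }

/* Multiplication needs a widening intermediate and a shift back down. */
static inline q16_16 q16_mul(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a * b) >> 16);
}
```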
Most PID applications are not IoT. Many use a Cortex-M0, which doesn't have hardware float support, or non-Arm parts, even 8-bit ones. When you are selling millions, it pays to use the cheapest part you can get away with.
What do those things go in anymore? Surely there are ASIC PID controllers that are insanely cheap? Tuning an analog one, I can see, probably doesn't make sense anymore, but there is something nice about being able to do that with a screwdriver instead of JTAG. What do these M0s go into?
I used an 8-bit PIC micro a couple of years ago for power applications (think non-IoT lighting). The specific microcontroller we used had nice peripherals for sensing and for controlling diodes, but no FPU. I remember looking into getting something external to handle the PID, but the cost and board layout constraints made it challenging.
I've had about as many floating point PID implementations running on the attached core of an embedded ASIC as I have fixed point implementations (running fully within the hardware).
I'm not a gamedev myself but have played a few games (Stormworks, Garry's Mod with Wiremod, From The Depths, at least one or two more whose names I can't recall right now) that allow PIDs to be used in player-designed contraptions. Quite useful for building stabilised gun platforms and things.
I'm sure there are plenty of applications for PIDs in a game engine itself. A quick search turns up things like self-balancing physics objects, making game entities turn to follow a mouse pointer, and so on.
The Wiremod add-on for Garry's Mod does, along with programmable chips and all sorts of neat stuff... although having a look (it has been a number of years...), the PID controller is in the "extras" extension to Wiremod.
I encountered an example recently in a Star Citizen changelog entry about the PID controller governing the interaction of an NPC character pushing a trolley, and tuning the constants for it.
Any time you are using a physics engine in a game and need to control a thing while maintaining proper physics interactions, a PID controller might be used. You could achieve the desired state by cheating and directly animating position or velocity (which gamedevs often do), but if you want to maintain proper interactions with other physics objects, the best way is to control force. How do you calculate the required force? With a PID controller.
It could be a moving enemy, a flying robot or a spaceship, a rotating gun turret, maybe even doors that need to be moved to a desired position.
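A rough sketch of what that looks like in practice, one axis only; the struct and function names are hypothetical, and a real engine would hand you the positions and take the resulting force:

```c
typedef struct {
    float kp, ki, kd;     /* gains, tuned per object */
    float integral;       /* accumulated error */
    float prev_error;
    float max_force;      /* clamp so the contraption can't explode */
} pid_ctrl;

/* Called once per physics tick; returns the force to apply along one axis. */
float pid_force(pid_ctrl *pid, float target_pos, float current_pos, float dt)
{
    float error = target_pos - current_pos;

    pid->integral += error * dt;
    float derivative = (error - pid->prev_error) / dt;
    pid->prev_error = error;

    float force = pid->kp * error
                + pid->ki * pid->integral
                + pid->kd * derivative;

    /* In a game you can simply clamp the output if things go wrong. */
    if (force >  pid->max_force) force =  pid->max_force;
    if (force < -pid->max_force) force = -pid->max_force;
    return force;
}
```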
Even without physics engines, PID controllers can be useful in games for procedural animations. Having an object suddenly start and stop a movement doesn't look natural. It looks much nicer if the object initially lags slightly behind, slows down closer to the target, and maybe even slightly overshoots it. It looks more natural because that's how things move in real life, even non-mechanical things. Try quickly swinging a hand and then stopping it; you will see a little bit of overshoot/oscillation at the end.
A PID controller isn't the only solution to the situations described above, and sometimes it's overkill. Sometimes you might just script or animate certain behaviour. Sometimes you might use a simplified control model. Sometimes you might implement only P and, intentionally or not, get the other components by adjusting physics object properties like friction (unfortunately you can't easily change physics properties in the real world). As for the animation example, a similar look can be achieved by animating it directly, and any good animator will be familiar with things like ease in, ease out and overshoot; that's fine for static animations, but games are interactive, so it's nice if things react to player movements. Procedural animations can also be achieved by simply using the physics engine and letting things dangle, but that will probably be more computationally expensive than a simple PID controller, and it's also harder to constrain and to prevent violent shaking in case of unexpected collisions with other world objects, especially when some of them are manually animated.
Each approach has its own tradeoffs, and which one gets chosen will depend on the specific situation and the skill set of the people making the game.
There are a couple of factors that make using a PID controller in games a bit easier than in real life. In games you can usually read the exact positions and speeds of all objects directly, you can generate force out of thin air, you can generate unrealistic forces, and if something goes wrong you can simply clamp the numbers. In games you also get to choose at which level the PID loop operates, trading off code complexity while still getting the benefit of nicer-looking animations. In the real world you might want to control the position of a flying quadcopter while the only thing you can directly control is the speed of the propellers, with linear velocity, linear acceleration, angle, and angular acceleration sitting in between and no way to skip them.
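A hedged sketch of what that cascading looks like, simplified to one axis and using hypothetical names: the outer loop turns position error into a velocity setpoint, and the inner loop turns velocity error into a thrust command, rather than skipping straight to an invented force the way a game can.

```c
typedef struct { float kp, ki, integral; } pi_ctrl;

static float pi_step(pi_ctrl *c, float error, float dt)
{
    c->integral += error * dt;
    return c->kp * error + c->ki * c->integral;
}

/* One axis of a cascaded controller: position -> velocity -> thrust.
 * Each stage commands only what the next stage can actually act on. */
float altitude_to_thrust(pi_ctrl *pos_loop, pi_ctrl *vel_loop,
                         float target_alt, float alt, float climb_rate, float dt)
{
    float vel_setpoint = pi_step(pos_loop, target_alt - alt, dt);
    return pi_step(vel_loop, vel_setpoint - climb_rate, dt);
}
```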
Even where the hardware is available, it can be harder to prove properties about the behavior of the floating point versions. It's no fun when everything turns to NaNs because of surprise cancellation in subtraction.
Isn't the only reason you might run into this with floating point but not with fixed point, for the same task, that you basically "waste" 10 bits or so on range that you could instead have spent on precision (assuming all quantities of interest can be sufficiently well represented in a single range)?
Also because when you have operations with very different ranges, they end up computed at the worst precision level without regard to the effects. In a fixed-point implementation you would make the two scales different types, and when doing a computation involving both you get an opportunity to preserve the precision, e.g. by dividing down the larger-range one first rather than letting the smaller-range one get crushed by the worst precision.
Often these issues can be handled with very careful floating-point order of operations (so long as -ffast-math isn't used...), but since it's all implicit it's very easy to get wrong, while in fixed point you're forced to confront the scalings of different variables explicitly.
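A small sketch of what confronting the scalings explicitly can look like in fixed point; the particular Q formats and names are arbitrary examples:

```c
#include <stdint.h>

/* Two signals with very different ranges get different formats, e.g. a
 * large-range setpoint in Q20.12 (coarse 2^-12 step) and a small correction
 * term in Q4.28 (fine 2^-28 step). */
typedef int32_t q20_12;   /* 12 fractional bits */
typedef int32_t q4_28;    /* 28 fractional bits */

/* Combining them forces an explicit decision about where to align the binary
 * point. Widening to 64 bits and aligning to the finer format keeps the small
 * term's precision instead of crushing it down to the coarse step. */
static inline int64_t add_aligned_q28(q20_12 coarse, q4_28 fine)
{
    return ((int64_t)coarse << 16) + (int64_t)fine;   /* both now 28 fractional bits */
}
```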
Absolutely not! The choice of fixed- vs floating-point for a controller should be based on domain knowledge of the problem, and the right choice may vary by operation. For example, your PID's integrator needs to accumulate a running sum. Fixed-point addition is well behaved, with only overflow to deal with (and it can be dealt with: the integrator likely needs to saturate anyway to respect a physical limit, so you choose a datatype that doesn't overflow before then). Floating-point addition has much more nuance; adding a small value to a large one can result in no change at all to the output, causing a controller's small-signal transfer function to change based on the system state. This can lead to all sorts of unexpected bad behaviour, like vibration and limit cycles, which might happen only in edge cases after the controller has been running for some time, and so are very hard to catch in testing.
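The absorption problem is easy to demonstrate, and the fixed-point alternative is small; here's a sketch with an assumed Q16.16 format and a made-up actuator limit:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Floating point: once the accumulator is large, small increments vanish,
     * so the integrator's effective behaviour depends on the system state. */
    float acc = 1.0e7f;
    acc += 0.25f;              /* less than half a ULP at 1e7f...            */
    printf("%.1f\n", acc);     /* ...so this still prints 10000000.0          */

    /* Fixed point (Q16.16): the same increment always lands; the only thing
     * to handle is saturation at the physical limit. */
    int32_t integrator = 0;
    const int32_t limit = 100 << 16;                 /* assumed actuator limit */
    int32_t increment   = (int32_t)(0.25f * (1 << 16));
    int64_t sum = (int64_t)integrator + increment;   /* widen to avoid overflow */
    if (sum >  limit) sum =  limit;
    if (sum < -limit) sum = -limit;
    integrator = (int32_t)sum;
    return 0;
}
```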
There are also many cases where floating point is the clear winner, but again, you should think about it and not just choose floating point reflexively because it seems easier. Its convenience makes it very easy to sweep errors under the rug, and they always come back to haunt you later.
I assume you're referring to applications where it's not important to do proper floating/fixed point design in the first place? Or do you have some other way of avoiding numerical problems?