My control systems prof said every engineer has done an inverted pendulum problem in school and nobody in their career has ever been asked to balance an inverted pendulum. So our final was a thermostat instead.
An unstable rocket (almost all of the big ones) is an inverted pendulum of sorts. The combined center of thrust and aerodynamic forces is usually below the C.G., and the rocket tends to tip over if not actively stabilized. To add a little, a slight margin of tipping is allowed, but the rocket tends to break up if the angle of attack crosses a certain limit at high speeds.
Liquid fuel rockets complicate matters: think "inverted pendulum with a couple of stacked, full wine glasses that you absolutely must not spill balanced on top. Oh, and there's a gremlin drinking from them so both CG and total mass are constantly moving."
> full wine glasses that you absolutely must not spill balanced
I'm not sure what you mean by spilling. The liquids are usually contained in fully closed tanks and cannot spill. The thrust usually keeps the liquid at the bottom around the engine intakes. However, liquid sloshing inside the tanks is a problem. Even with slosh dampers, sloshing liquid manages to create attitude disturbances that can cause control instability unless managed carefully.
My inner child of Doc Brown and Wernher von Kerman says, it's time to develop non-newtonian fuel sponges, which would release fuel if squeezed gently and steadily, but behave like solid if things start shaking too much, preventing sloshing.
I believe https://en.wikipedia.org/wiki/MGM-52_Lance used roughly this configuration, with a gas generator pushing the back of the pistons. As I remember, the oxidizer and fuel pistons were physically connected (concentric, with concentric propellant tanks) to simplify getting the mixture correct.
It's a moving part to go wrong. But more than that, the fuel tank has very thin walls. I can imagine you don't want a solid object inside the tank that's able to get any momentum.
You do get baffles in liquid fuel tanks to cut down the effect, and some people have tried flexible bladders inside the rigid tanks for a similar reason but they don't work well with cryogenic fuels.
This actually makes perfect sense, so I'm hoping some rocket scientist will come and explain the "devils in the details" that prevent rockets from being built like this.
I'm sure it's easy: some liquids like oxygen and methane are stored under high pressure, you would need an incredibly well built seal and powerful piston.
If it's a room temperature liquid probably there are less complicated solutions like baffles and bladders.
And then you'd have to over-engineer the rocket itself to not break when tipping over, not just the fuel tanks.
My bad, the piston is indeed just the inner moving part. My point was that we think of a syringe as easy because it is not under pressure, but if you think in terms of a car piston/cylinder you appreciate more why this can't work in a rocket fuel tank.
In an inverted pendulum the support pushes up, in a rocket it pushes along the direction the rocket is pointed. You only need to steer the rocket, not balance it.
Yeah, his professor wasn't too bright if he said that. He probably never heard of Segway, One-Wheel, "hoverboards", or motorcycles with stability control.
It's a nice theoretical implementation but few practical examples would be found using floating-point math, running as they are on systems that don't support it.
Unless your chip has a hardware floating point unit, you're stuck using software float operations. They're slower, though not extraordinarily so. But the main drawback is the floating point library can massively inflate your code size to the point that it no longer fits in available memory.
If you can get away with it, it's fine, but when you can't it's a huge problem.
Is this still true? It seems like most IoT stuff has chips that easily eclipse the capabilities of my Amiga 1000 and are 32-bit with hardware floats. Looking around, I'm having trouble finding something in my house that isn't a 32-bit part with a hardware FPU. Possibly my Keurig machine? Even using digital instead of an RLC implementation in a coffee machine seems like a waste, but I'm old.
Cortex-M4/M33 parts are pretty widely deployed on microcontrollers with control hardware, for example the STM32G4 and Renesas RA series.
There are still a few reasons not to use it, though. Saving and restoring the float registers adds interrupt handling overhead. If you can fit into 16-bit integers, you can also use the 2-way SIMD instructions and get at least double the throughput.
Finally, floats themselves have a lot of footguns. You have to be very careful about propagating NaNs etc into control loops. It's also really easy (in C) to accidentally promote to double.
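The NaN point is easy to demonstrate; since JavaScript numbers are IEEE doubles the behaviour carries over. A minimal sketch (names and gains are illustrative, not from any particular library) of why control loops need to guard their inputs:

```javascript
// Sketch: one NaN sample permanently poisons a naive PID integrator
// unless the input is guarded. Illustrative code, not a real library.
function makeGuardedPid(kp, ki) {
  let integral = 0;
  return function step(error, dt) {
    if (!Number.isFinite(error)) return 0; // drop NaN/Inf samples
    integral += error * dt;
    return kp * error + ki * integral;
  };
}

// Unguarded accumulation: one bad reading and every later sum is NaN.
let naiveIntegral = 0;
naiveIntegral += NaN * 0.01; // a single bad sensor sample
naiveIntegral += 1.0 * 0.01; // stays NaN forever
console.log(Number.isNaN(naiveIntegral)); // true

const pid = makeGuardedPid(1.0, 0.1);
console.log(Number.isFinite(pid(NaN, 0.01))); // true: the guard held
```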
Even if you could, floating point may not be the best representation for control applications. There are a lot of problems, like NaN, as somebody pointed out. Even neglecting that, a fixed point representation may be better.
You don't need the fine precision or large range that floats have (which they get using exponents). Signal variations below a certain threshold are drowned out by the noise floor. Signals in control systems also have an expected maximum bound.
At the other end, stability calculations based on quantization noise can be difficult with floats, since their least count changes with the exponent. Fixed point has a uniform least count throughout its range.
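That uniform-versus-scaling least count is easy to see in a few lines. A sketch, using Q15 (a 16-bit signed integer scaled by 2^-15, chosen here just for illustration) against IEEE doubles:

```javascript
// Sketch: quantization step of Q15 fixed point vs IEEE doubles.
// Q15 = signed integer scaled by 2^-15; purely illustrative choice.
function fromQ15(q) { return q / 32768; }
function q15Step(q) { return fromQ15(q + 1) - fromQ15(q); }

// Fixed point: the least count is 2^-15 everywhere in range.
console.log(q15Step(10) === q15Step(30000)); // true: uniform step

// Floats: the step (ulp) scales with the exponent, so quantization
// noise depends on signal magnitude.
console.log(1 + Number.EPSILON > 1);               // true near 1
console.log(2 ** 30 + Number.EPSILON === 2 ** 30); // true: step grew
```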
Most PID applications are not IoT. Many use a Cortex-M0, which doesn't have hardware float support, or non-Arm parts, even 8-bit ones. When you are selling millions it pays to use the cheapest part you can get away with.
What do those things go in anymore? Surely there are ASIC PID controllers that are insanely cheap? Tuning an analog one probably doesn't make sense anymore, but there is something nice about being able to do that with a screwdriver instead of JTAG. What do these M0s go into?
I used an 8-bit PIC micro a couple years ago for power applications (think non-IoT lighting). The specific microcontroller we used had nice peripherals for sensing, and controlling diodes, but no FPU. I remember looking into getting something external to handle the PID, but the cost and board layout constraints made it challenging.
I've had about as many floating point PID implementations running on the attached core of an embedded ASIC as I have fixed point implementations (running fully within the hardware).
I'm not a gamedev myself but have played a few games (Stormworks, Garry's Mod with Wiremod, From The Depths, and at least one or two more whose names I can't think of right now) that allow PIDs to be used in player-designed contraptions. Quite useful for building stabilised gun platforms and things.
I would be sure there's plenty of applications for PIDs in a game engine itself - a quick search turns up things like self-balancing physics objects, making game entities turn to follow a mouse pointer, and so on.
The Wiremod add-on for Garry's Mod does, along with programmable chips and all sorts of neat stuff... although having a look (it has been a number of years...), the PID controller is in the "extras" extension to Wiremod.
I encountered an example recently in a Star Citizen changelog entry about the PID controller governing the interaction of an NPC character pushing a trolley, and tuning the constants for it.
Any time you are using a physics engine in a game and need to control a thing while maintaining proper physics interactions, a PID controller might be used. You could achieve the desired state by cheating and directly animating position or velocity (which gamedevs often do), but if you want to maintain proper interactions with other physics objects, the best way is to control force. How do you calculate the required force? Using a PID controller.
It could be a moving enemy, a flying robot or spaceship, a rotating gun turret, maybe even a door which needs to be moved to a desired position.
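A minimal sketch of that force-based approach: a PD controller (P and D terms only) driving a 1-D body toward a target by applying clamped force. The gains, names, and the tiny Euler loop standing in for the physics engine are all made up for illustration:

```javascript
// Sketch: steer a physics body by force so it still interacts with the
// world. Gains and the 1-D "engine" below are illustrative only.
function makeForceController(kp, kd, maxForce) {
  return function (position, velocity, target) {
    const force = kp * (target - position) - kd * velocity; // PD law
    return Math.max(-maxForce, Math.min(maxForce, force));  // clamp
  };
}

// Tiny simulation loop standing in for the physics engine:
const control = makeForceController(40, 10, 100);
let pos = 0, vel = 0;
const mass = 1, dt = 1 / 60;
for (let i = 0; i < 600; i++) {   // 10 simulated seconds
  const f = control(pos, vel, 5); // steer toward x = 5
  vel += (f / mass) * dt;
  pos += vel * dt;
}
console.log(Math.abs(pos - 5) < 0.01); // true: settled near the target
```

The D term here is what gives the "slows down near the target" feel; with kd too low the body visibly overshoots and oscillates, which is sometimes exactly the look you want.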
Even without physics engines, PID controllers can be useful in games for making procedural animations. Having an object suddenly start and stop a movement doesn't look natural. It looks much nicer if the object initially lags slightly behind, slows down closer to the target, and maybe even slightly overshoots it. It looks more natural because that's how things move in real life, even non-mechanical things. Try quickly swinging a hand and then stopping it; you will see a little bit of overshoot/oscillation at the end.
A PID controller isn't the only solution to the situations described above, and sometimes it's overkill. Sometimes you might just script or animate certain behavior. Sometimes you might use a simplified control model. Sometimes you might implement just P, and achieve the other components (intentionally or not) by adjusting physics object properties like friction (unfortunately you can't easily change physics properties in the real world).

With regards to the animation example, a similar look can be achieved by directly animating it, and any good animator will be familiar with things like ease in, ease out and overshoot. That's fine for static animations, but games are interactive, so it's nice if things react to player movements. Procedural animations can also be achieved by simply using the physics engine and letting things dangle, but that will probably be more computationally expensive than a simple PID controller, and also harder to constrain and prevent violent shaking in case of unexpected collisions with other world objects, especially when some of them are manually animated.
Each approach has its own tradeoffs, and which one gets chosen will depend on the specific situation and the skillset of the people making the game.
There are a couple of factors that make using a PID controller in games a bit easier than in real life. In games you can usually directly read the exact positions and speeds of all objects, you can generate force out of thin air, you can generate unrealistic forces, and if something goes wrong you can simply clamp the numbers. In games you also choose at which level to operate the PID loop, trading off code complexity while still getting the benefits of nicer-looking animations. In the real world you might want to control the position of a flying quadcopter while the only thing you can directly control is the speed of the propellers, with linear velocity, linear acceleration, angle and angular acceleration sitting in between without a way to skip them.
Even where the hardware is available, it can be harder to prove properties about the behavior of the floating point versions. It's no fun when everything turns to NaNs because of surprise cancellation in subtraction.
Isn't the only reason you might run into this with floating point but not with fixed point for the same task that you basically "waste" 10 bits or so on range which you could instead have used on precision (assuming all quantities of interest can be sufficiently well represented in a single range)?
Also, when you have operations with very different ranges, they end up computed at the worst precision level without regard to the effects. In a fixed point implementation you would make the two scales different types, and when doing a computation involving both you get an opportunity to preserve the precision, e.g. by dividing down the larger-range one first rather than letting the smaller-range one get crushed by the worst precision.
Often these issues can be handled with very careful floating point order of operations (so long as -ffast-math isn't used...), but since it's all implicit it's very easy to get wrong, while in fixed point you're forced to confront the scalings of different variables explicitly.
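The order-of-operations sensitivity is easy to reproduce with IEEE doubles (here in JavaScript): algebraically identical sums give different answers once the magnitudes diverge.

```javascript
// Same three terms, two orders: with doubles, the small term can be
// absorbed before it ever contributes.
const big = 2 ** 53; // beyond this the spacing of doubles exceeds 1
console.log((big + 1) - big); // 0: the +1 was rounded away first
console.log((big - big) + 1); // 1: cancel the large terms first
```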
Absolutely not! The choice of fixed- vs floating-point for a controller should be based on domain knowledge of the problem, and the right choice may vary by operation. For example, your PID's integrator needs to accumulate a running sum. Fixed-point arithmetic is well-behaved for addition, with only the need to deal with overflow (which can be managed; the integrator likely needs to saturate anyway to respect a physical limit, so you choose a datatype that doesn't overflow before then). Floating point addition has much more nuance; adding a small value to a large one can result in no change at all to the output, causing a controller's small-signal transfer function to change based on the system state. This can lead to all sorts of unexpected bad behaviour, like vibration and limit cycles, which might happen only in edge cases after the controller has been running for some time, and so are very hard to catch in testing.
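That absorption effect takes two lines to reproduce with plain doubles (the magnitudes here are exaggerated for clarity):

```javascript
// A long-running float integrator can stop responding to small errors:
let integral = 1e16;      // state accumulated over a long run
const contribution = 1.0; // small new (error * dt) term
console.log(integral + contribution === integral); // true: no change at all

// The same update near zero registers fine, so the controller's
// small-signal behaviour depends on the integrator's own state.
let freshIntegral = 0;
console.log(freshIntegral + contribution === freshIntegral); // false
```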
There are also many cases where floating point is the clear winner, but again, you should think about it and not just choose floating point reflexively because it seems easier. Its convenience makes it very easy to sweep errors under a rug, and they always come back to haunt you later.
I assume you're referring to applications where it's not important to do proper floating/fixed point design in the first place? Or do you have some other way of avoiding numerical problems?
It's robust. I hamstrung the model solution by forcing pistonAcceleration to be 0, and forcing hingeAcceleration to be 0 during the first second, but it still "caught" and balanced the ball.
I'm still on the first level playing with various 'P's and 'D's (no 'I's yet), but I think I have a novel scheme!
function controlFunction(block)
{
  let L = Math.floor;
  let x = block.x;
  // Use Collatz. L(x+½) is the closest integer.
  // (The expression must start on the same line as `return`, otherwise
  // JS's automatic semicolon insertion makes it return undefined.)
  return -(L(x+0.5) % 2 === 0
    ? x/2      // Even, we chop it
    : 3*x + 1  // Odd, we triple-plus-one it.
  );
}
It's not very good, but it does beat or tie some of my earnest attempts to get below 8 seconds.
On the "Cruise Control Intro" challenge it's not made clear what the output of the controlFunction is. Am I returning a throttle position? A delta to the throttle position? Something else?
I didn't have any trouble until I got to "Ball on Platform: Balance" which seems to be multiple steps more difficult than the previous ones.
I think the variable limits are not always made clear. The objective is only really accessible from that first menu; otherwise you cannot re-check it. Errors in the console are mostly squashed and don't give important details like line numbers.
Don't get me wrong, I think this is really awesome! We are only talking about a few UI improvements to what is already an excellent resource.
I think what you are suggesting is similar to the "dead-beat controller" in discrete-time control literature. While a controller designed with this method can get the position error measured at the sampling times to zero very quickly, it gives no guarantees that the error stays zero between the samples. This means the block could be wildly oscillating around the arrow in reality, but oscillating in such a way that it's exactly under the arrow at the times your digital controller measures it.
This would probably not be a concern in this digital simulator, but such an error can pop up when trying the same thing out in real life.
Related: Sabine Hossenfelder made a video back in December on the use of AI/ML in chaos control, which is pretty interesting. TL;DR: scientists have known it's possible to control or steer chaotic systems since the 1990s at least, but AI/ML have recently given them new tools for doing so. One interesting example is using ML to control the plasma in a tokamak reactor in real time. More:
After a course on balancing PID controllers in university, our professor said that something like 90% of PIDs would be running on out-of-the-box factory settings.
This is very cool, I like the increasing complexity levels.
I actually made a similar thing based on writing your own autopilot for a Lunar Lander [1]. It's hard to make the difficulty increase linearly though. I like the use of different scenarios in this one.
Shouldn't this have some thrust limits? I'm not sure if they are added later (I've only seen the first example), but it would make the problems more realistic, interesting and related to control theory.
The rest of the puzzles get pretty interesting; I shouldn't have just stopped on the first one and spent all my time optimizing it.
If the author sees this thread: it would be fun if you could disturb the dynamics with the mouse after completing one. I want to watch the ball-catching piston right the ball if I poke it.
I just spent too much time messing around with this, so here's the results. The simulation has a limit on the maximum magnitude of the force of your controller, which in this example is 50. The system is described by x'' = -1.00204*x + u (found by experimenting with console.log). The theoretical solution to minimize time to the origin on a system like this is a bang-bang strategy: go full actuation and at some point slam the brakes to stop exactly at the origin. Using this strategy you can calculate that you should hit the brakes (-50) starting at x = −0.867, with the system arriving at the origin at t = 0.403. But the timestep of the simulation is dt = 0.02, so the system goes from x = -0.874060101875918 to x = -0.6686069466721608 in one timestep, missing the critical switching point. If you try to switch at -0.6686069466721608 it overshoots, and at -0.874060101875918 it's almost there when t = 0.4, but not good enough for the simulation to decide you finished the challenge. So 0.42 has to be the best you can do.
What your controller actually does, due to the actuation limits, is almost the strategy described above, staying at either 50 or -50 except for 3 time points, which make the dynamics just right to land on the origin.
There is a mistake in my first comment: the system is described by x'' = -1.00204*x' + u; I forgot a '. Also, in the second example, where there is no x' term, you can reach the origin in 0.4 using this strategy.
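For reference, the bang-bang idea is easy to sketch on the simpler undamped double integrator x'' = u (not the exact damped system above; the numbers, step size, and switching point here are illustrative):

```javascript
// Bang-bang (time-optimal) move of a double integrator x'' = u from
// x = -1 to the origin: full thrust to the midpoint, then full brakes.
const uMax = 50, dt = 1e-4;
let x = -1, v = 0, t = 0;
while (true) {
  const u = x < -0.5 ? uMax : -uMax; // switch sign at the midpoint
  v += u * dt;                       // semi-implicit Euler step
  x += v * dt;
  t += dt;
  if (u === -uMax && v <= 0) break;  // braked to a stop
}
console.log(Math.abs(x) < 0.05); // true: stopped at the origin
// Continuous-time optimum is 2*sqrt(1/uMax) ≈ 0.283 s:
console.log(Math.abs(t - 2 * Math.sqrt(1 / uMax)) < 0.01); // true
```

With the coarse dt = 0.02 of the actual simulation, the same discretization miss described above appears: the switching point falls between samples.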
An “obvious in hindsight” (I missed it) application of control theory in general and PID controllers in particular is exploring more of a corpus when the fleet is less loaded in an IR setting.
@ajtulloch might be the world’s leading expert in making 100 billion bucks in 2 months with a device that can be built out of mechanical parts.
So in many, if not most, contemporary Information Retrieval (IR) problems, there is a total document set larger than could be explored on an interactive basis, so the data structures get laid out in such a way that with some probability north of a coin flip, you'll find "better" documents in the "front" half. This is hand-waving a lot of detail away, so if you'd like me to go into some detail about multi-stage ranking, compact posting lists and so on, I'm happy to do that in a subsequent comment.
But it’s a useful fiction as a model and the key part is that there’s still “good” stuff in the “back” half: you’d like to consider everything if you had time.
A PID controller (again oversimplifying a bit) is charged with one primary task: given some observed quantity (temperature in a room) and some controlled quantity (how hard to run the AC), maintain the observed quantity as close to a target as possible via manipulating the controlled quantity.
If you hook one of these things up to an IR/search system (web search, friend search, eligible ads, you name it) where the observed quantity is e.g. the p95 or p99.9 latency of the retrieval, and the controlled quantity is how "deep" to go into the candidate set, something magical happens: you always do something close to your best even as the macro load on the system varies.
That’s again a pretty oversimplified (to the point of minor technical inaccuracies) TLDR, but I think it makes the important point.
If you’d like more depth feel free to indicate that in a comment and I’ll do my best.
So basically you're using PID as a metaheuristic to guide an optimization process? (like particle swarm, simulated annealing, genetic algorithms, etc)
Or it's more like, it's being used to fine tune the parameters of another search procedure? And in principle you could use a neural net rather than PID (that would encode a surface that matches the latency vs search depth profile that you want)
edit: just figured out those two are the same thing; the PID is optimizing the search depth for a given target latency
The number of documents searched over (denoted N) consumes resources and increases the overall latency of the search infrastructure. However, the amount of traffic ebbs and flows. Under periods of lower traffic, we can likely increase N to provide good search results without violating latency constraints. Conversely, high-traffic periods likely require lower values of N.
Let's then approximate system strain by p99 (or p99.XXX) latency.
Solution:
Use a PID controller to set N as a function of latency (p99, p99.5, etc.) of the cluster. This leads to the outcome where N reduces when p99 latency starts to spike (resource starvation), and increases when p99 is low.
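A toy sketch of that loop. The latency model, gains, and clamp bounds below are invented for illustration; a real deployment would measure p99 from the serving stack rather than compute it from a formula:

```javascript
// Integral-only controller ("the I in PID") holding p99 latency at a
// target by adjusting search depth N. latencyFor() is a stand-in model:
// latency grows with depth and with load.
const targetMs = 100;
const latencyFor = (n, load) => 0.5 * load * n;

let N = 100;
function stepController(load) {
  const error = targetMs - latencyFor(N, load);      // positive = headroom
  N = Math.min(1000, Math.max(10, N + 0.5 * error)); // integrate + clamp
}

for (let i = 0; i < 50; i++) stepController(1); // light traffic
console.log(Math.round(N)); // 200: search deeper while it's cheap

for (let i = 0; i < 50; i++) stepController(2); // load doubles
console.log(Math.round(N)); // 100: backs off to protect latency
```

The clamp matters in practice: a floor on N guarantees minimum result quality, and a ceiling bounds worst-case cost when traffic vanishes.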
Misses the target by 6.1E-16 at a velocity of 6.3E-15. But yes, this should probably randomize the initial conditions or parameters like friction and gravity a bit so that you actually have to control the system.
It's cheating because real world PID controllers only apply linear feedback, i.e. feedback expressed by a linear function. Ifs/elses etc. are not linear.
PID controller only appears in the Hacker News title; the URL just says control challenge. Also, why would you allow people to write arbitrary code if you just wanted a PID controller? You would instead just let them tune the parameters rather than write the complete thing themselves.
True. I'd still argue it's not realistic to use arbitrary code though, as most control problems need linear or at least invertible control laws, and the analysis techniques that provide you with control solutions usually assume so too.
I have no real clue about control theory, what is the advantage of linear or invertible laws?
My idea was to equip the control program with the equations of the system and some unknown parameters for friction and so on. Then estimate the parameters from the response to control inputs and drive the system through state space as hard as possible to get the fastest time, taking into account deviations from the expected state and reestimating parameters each time step. Would be really interested in knowing whether this could work or if this will just consistently slam into a wall. Response delay would probably make this a lot harder, but I did not look if any of the tasks simulates delays.
Stability: Adaptive control algorithms need to ensure stability while adapting to changing parameters. This can be a complex task, and stability guarantees are essential.