PID Without a PhD (2016) [pdf] (wescottdesign.com)
301 points by darshanrai on Jan 30, 2018 | 114 comments



Fortunately, basic PID analysis, including linear state-space representation and practical tuning methods (e.g. Ziegler-Nichols[1]), is taught in pretty much any first EE control theory course at the undergraduate level.

[1] https://en.wikipedia.org/wiki/Ziegler%E2%80%93Nichols_method


I took a class in PID and other control systems (PIP) in my 3rd year of undergraduate Electronic Systems Engineering at Lancaster University in the UK.

http://www.research.lancs.ac.uk/portal/en/publications/-(657...

I enjoyed the class and got good grades, but I've never had to use it in practice.

Now I work for a tech company in Taiwan that makes microSD cards, and some of the programming projects I worked on involved writing drivers for the testing machines. That was called "automation and control", but it's all been in software.

I like hardware, but I really don't know how to get back into it after 7 years in software.


You need some hardware projects. Designing and building something like a 3D printer or CNC machine from scratch is quite rewarding. Here is a weekend project I did using 3 RC servos and an Arduino:

https://www.youtube.com/watch?v=cbNiJKSRCpA


Cool project! I had always thought 3D printers needed steppers for more accuracy (as opposed to servos). Is it the 3 arm config of the 3d printer that makes lower accuracy servos work? Curious where you sourced the hw pieces?


The RC servo project is definitely not accurate enough for printing. It was just a fun project. It would need gears or some other mechanism to get the resolution up there.


Most delta 3D printers use steppers and a linear rail of some sort to gain the required resolution (and height for volume), but I bet you could use BLDC motors in a similar control fashion as used in stabilization gimbals for cameras on hobby drones and similar. I'm not sure if it would be better resolution than micro or continuous "stepping" or not...


I have known people working on control systems for large industrial plants (offshore in the North Sea) and they said they used basic undergraduate level control theory (PID etc.) but that the "control theory" part of any project was probably only a couple of percent of the total size of the project.

Mind you, that was a few years back!


This is true for the most part. The majority of loops out there are still PID. Most mechanical systems are fast, low-order and linearizable, and PID is robust and simple to implement; the control law itself is trivial.

Advanced control techniques like MPC are generally applied to complex plants such as chemical plants. In chemical plants, systems are slow but complex (highly multivariate with many interactions), so more complex optimal control techniques like MPC have a larger payoff. MPC has to solve a numerical optimization problem at every iteration and requires more computational horsepower (not to mention prior mathematical modeling from step-test data).

But even in chemical plants, the local loops are PID (actually, PI... D is rarely used). The MPC layer optimizes an objective function and sends setpoints to local PI loops to carry out control.


Not only that... Classic controls (including PID, Routh-Hurwitz, root-locus, etc) are NOT taught in many graduate PhD controls programs!


Those are undergrad topics. A PhD student should already know them by heart.


Not even for Masters in control programs. It's state space all day!


State-space is less relevant to regulatory and servo control loops (which is the majority of loops out there), and more of interest in advanced control systems.

Of course, in Masters programs, you are already expected to know basic PID control.

Graduate courses that cover PID control tend to focus on intricacies like loop pairing, RGA, etc. PID control is an eminently practical field of study, and has many interesting facets like bumpless transfers, split-range control, etc. There's a lot to learn about PID control that is non-theoretical.


Are those topics no longer relevant to today's state of the art research?


It's possible that you're just expected to know them at that point.


Agreed. Many state-of-the-art control strategies nowadays are based on optimization techniques (e.g. Model Predictive Control (MPC), which optimizes a convex cost function of the state, setpoint and input to your system over a certain time horizon). This has become a viable control strategy for fast (>1 kHz) processes only in the past 15 years, since the optimization can be computationally intensive.
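To make that concrete, here's a toy sketch of one MPC step (Python; the double-integrator model, weights and horizon are all made up, and a generic optimizer stands in for the dedicated QP solver a real controller would use):

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical discrete-time double-integrator plant: x = [position, velocity]
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.005], [0.1]])
    Q = np.diag([10.0, 1.0])   # state-error weight
    R = 0.1                    # input weight
    N = 20                     # prediction horizon

    def mpc_step(x0, x_ref):
        """Return the first input of the best input sequence over the horizon."""
        def cost(u_seq):
            x, J = x0.copy(), 0.0
            for u in u_seq:
                x = A @ x + B.flatten() * u      # predict one step ahead
                e = x - x_ref
                J += e @ Q @ e + R * u * u       # quadratic stage cost
            return J
        res = minimize(cost, np.zeros(N))        # generic solver; a QP solver is far faster
        return res.x[0]                          # apply only the first move (receding horizon)

    u = mpc_step(np.array([0.0, 0.0]), np.array([1.0, 0.0]))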


Or be able to derive them. PID was never explicitly taught on my course, but all the theory behind it was.


Research? Largely no, although they crop up now and then in clever ways. They are still very relevant in industry, however.


From my experience with O&G and mining processing plant control systems, many people who tune loops regularly can, after a while, basically do it by inspection, with small adjustments after the first pass.

The first rule is that you will not need any derivative action except in a handful of particular cases that are well known and identifiable.


I never had Ziegler-Nichols as part of my Control Systems syllabus. My university used Bakshi & Bakshi as the primary textbook.

Google Books link → https://books.google.co.in/books/about/Control_Systems.html?...


The method is simple and practical enough that, if anything, it would be covered as part of a supplemental lab supervised by a graduate TA, after the professor lays the foundation for PID control in lecture the week prior.


Thanks, I was pretty confused for a moment.


There was an "aha" moment for me when I realized that a PID loop only works well for linear systems (and sometimes it's not obvious whether a system is linear or not).

Real world example: imagine you have a single-axis motion stage driven by an electric motor and you want to control the position of a carriage. Usually your control output is motor voltage. Motor voltage approximately translates to current through the motor windings, which in turn approximately translates to torque exerted by the motor. Torque exerts a force (T = F * r) on the carriage. Applying force to the carriage makes it accelerate (F = m * a). Acceleration linearly increases the carriage velocity (v = v0 + a * t). The carriage having some velocity finally causes the position change (s = s0 + v0 * t + (1/2) * a * t^2).

In essence, the system turns out to be non-linear with respect to the parameter you are controlling.

To improve this system, one solution is to add a velocity sensor (or differentiate the position sensor, if its resolution is high enough) and introduce a cascaded PID loop topology: the outer loop takes the position error and outputs a velocity setpoint, which is fed to the inner loop (input = velocity error, output = acceleration). The coefficients for the loops have to be tuned starting from the innermost loop.
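A rough sketch of what that cascade looks like in code (Python; read_position(), read_velocity() and set_motor_voltage() are hypothetical, gains are made up, P outer loop / PI inner loop for simplicity):

    # Cascaded position -> velocity control, run at a fixed rate DT.
    DT = 0.001
    KP_POS = 8.0                      # outer loop: position error -> velocity setpoint
    KP_VEL, KI_VEL = 0.5, 2.0         # inner loop: velocity error -> motor command
    vel_integral = 0.0

    def control_step(pos_setpoint):
        global vel_integral
        # Outer loop: position error produces a velocity setpoint
        vel_setpoint = KP_POS * (pos_setpoint - read_position())
        # Inner loop: velocity error produces the actuator command
        vel_error = vel_setpoint - read_velocity()
        vel_integral += vel_error * DT
        set_motor_voltage(KP_VEL * vel_error + KI_VEL * vel_integral)

As noted above, you would tune the inner (velocity) loop first, then the outer (position) loop on top of it.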

Another solution is to use a different control algorithm which is suited for non-linear systems (e.g. LQR).


We actually ended up using this when writing a controller for a quadcopter[1]. Essentially, we have one PID controller operate on the absolute angle (error = desired angle - actual angle). The output of this controller is fed into the second controller as the desired rate of rotation (RoR) (error = desired RoR - actual RoR). The output of the second controller is finally fed to the motors.

Apart from being easier to tune, I just found a good article[2] for why this approach works better for problems such as this. For quadcopters, of course, this allows one to easily switch between rate/acro mode and angle mode.

[1] https://github.com/ThePinkScareCrow/TheScareCrow/blob/master... [2] https://www.controleng.com/single-article/fundamentals-of-ca...


This is also the approach PX4/Ardupilot folks use. It's actually even more elaborate in their case: position loop -> velocity loop -> acceleration/angle loop -> angular rate loop -> motor outputs.


I think it's worth noting that cascaded PIDs are no more expressive than a state-space controller. But they are probably easier to tune, as you say.

Each integrator is one dimension of state.


What do you mean LQR is suited for non linear systems? This is all misinformation. LQR is a linear systems tool.

The nice thing about linear systems is that almost every dynamical system is locally linear near some operating point.


I don't have a lot of first-hand experience using LQR, but I have definitely seen it applied to systems that are traditionally thought of as non-linear with respect to the variable you can control.

Maybe it's explained by the state space containing more terms than a PID controller, so the transfer function can be linear for each individual term?


Let's see, LQR is an optimal control technique for linear systems given a model.

Given a linear plant model and a PID controller, you can compute all the classic control metrics like phase and gain margins, settling time, and even how the controller would perform according to the LQR metric.

That's the theory. All linear. Theoretically it's as appropriate to apply LQR to a nonlinear plant as it is to apply PID.

If you tweak the four LQR matrices (say, for a second-order system), and couple the gain matrix with the output of a linear observer (Luenberger), that combination system should be able to generate any PID controller. It's an over-parameterization, however (which is why I think PID controllers are ubiquitous. Not many parameters). Many settings will produce the same controller.

If anyone knows a reference that discusses PID and LQR like this, I'd love to see it.


Well, LQR is a linear control law. It's in the name: Linear Quadratic Regulator.

But as noted in my comment above, linear controllers can be used to control nonlinear systems under certain circumstances.

p.s. btw, having more states in the model does not linearize a nonlinear system per se. Only nonlinear transformations like log-transforms are capable of linearizing a system.


> I realized that PID loop only works well for linear systems

In practice, PID is applied to nonlinear systems too. The catch is that the nonlinear system needs to be either:

1) locally linear within the region of interest (very often the case; see the sketch after this list)

2) mildly nonlinear

3) linearizable (by doing a mathematical transformation)
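For case 1, "locally linear" just means that a first-order Taylor expansion around the operating point is good enough. A toy numerical check (Python; the tank model and numbers are made up):

    import numpy as np

    # Hypothetical tank: level change depends nonlinearly (sqrt) on level, linearly on inflow u
    def f(h, u):
        return -0.5 * np.sqrt(h) + 0.2 * u

    h0, u0 = 1.0, 2.5          # operating point (roughly steady state: -0.5*1 + 0.2*2.5 = 0)
    eps = 1e-6
    a = (f(h0 + eps, u0) - f(h0, u0)) / eps   # df/dh at the operating point
    b = (f(h0, u0 + eps) - f(h0, u0)) / eps   # df/du at the operating point
    # Near (h0, u0):  dh/dt ~ a*(h - h0) + b*(u - u0), a plain linear model a PID can handle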


I have found fuzzy logic control useful for non-linear and higher-order systems in the past.

The first time I built a fuzzy logic controller for an industrial plant optimisation, I was very surprised at the performance vs. complexity, and at how imprecise the characterisations can be while still getting very good results.


Could you clarify what is nonlinear about that system? From your description, it sounds like a high (4+) order (mostly) linear system. The motor probably has nonlinear dynamics, but so does everything and treating it as linear should work ok.

I suspect the difficulties you've experienced stem not from nonlinearity, but from trying to control a system poorly approximated by a second order system with a second order controller. By cascading, both PID controllers effectively "see" a plant that looks more or less second order, so the controllers are much more effective.


I think you're saying that system is nonlinear because there's a t^2, but it's actually linear. The ODE for the system is certainly linear. You could choose a state space and describe the dynamics with linear operators.

Solutions to linear ODEs are in general exponential or polynomial in time. The solution doesn't have to be linear for the system to be linear.

(Unless I've misunderstood and you are modeling the system as actually controlling time. But I doubt that's what you meant)


PID will also work for non-linear systems provided they are not too non-linear. Even a simple steam boiler is already non-linear, but PID will work fine if you are just interested in practical results.


Sure, but for such systems even a simple on-off controller works fine, if the sensor is close enough to the heating element (and if it's too decoupled, PID won't help either).

The biggest issue for me was spending a lot of time trying to achieve a better system response (fast settling time with minimal overshoot) for a system that was just designed badly.


I was able to achieve optimal control and no overshoot with my espresso machine by using Model Predictive Control (https://en.wikipedia.org/wiki/Model_predictive_control) aided by Moving Horizon Estimation (https://en.wikipedia.org/wiki/Moving_horizon_estimation) to continually update model parameters.

Additionally, the set value for the boiler temperature is calculated using a separate model that takes the group head temperature into account.

The end result is that the controller is able to stabilize immediately, at maximum heater power and with no overshoots or undershoots, to a temperature that takes into account the state of the group head to provide the ideal water when brewing.

While PID is able to control the temperature very accurately, I found that even in the best case it took an additional 50% of the time to stabilize, and it was difficult to use 100% of the heater power until the last possible moment.

Being able to achieve stability almost immediately means I can keep a lower water temperature in the boiler and bring it up to temperature at the last possible moment. The end result is that it is now much faster to get ready.


The avoidance of overshoot when using a PID, especially for temperature control of any kind of thermal mass, can be achieved quite satisfactorily by using the posicast technique of setpoint control.

In my experience, operators in large processing plants often learn to do this instinctively from the standard faceplate.


Do you have any more info about the controller/implementation for this project? I'd love this kind of system on my espresso machine!


My controller and implementation are currently at the proof-of-concept stage.

I have slapped together a few components to create a working prototype: an STM32 Nucleo board with an STM32L452 MCU, a Texas Instruments development board with Bluetooth 5, some RTD temperature measurement development boards from Maxim Integrated, a bunch of relays, current sensors, a water flow sensor from Digmesa (actually a replacement part from one of Saeco's machines), etc.

I am using a test board to detect the existing espresso machine's operation and perform readings using a multichannel laboratory thermometer that measures the external boiler case temperature, group head temperature, brewing water temperature (inside the portafilter), tank water temperature and external temperature, along with signals (when the brewing starts, stops, etc.), to provide experimental data to build the model.

I am using Matlab and Simulink to help me generate parts of the software using gathered experimental data. This lets me play with models and observe how it could behave without actually brewing any coffee:)


Why not use a predictor, the ultimate controller?

Predictors are often too hard in practice, but in this case it seems simple, as it looks like you have the parameters you need to calculate the required energy as Q = ml (+ losses) and just put exactly that amount of electrical joules into the heating element.

"Tune" factors like thermal losses by hand, and/or have a simple heuristic learning algo that tunes factors for you based on previous result errors.

I am assuming you have the level in the tank as well to calculate volume, as how else do you protect the boiler element from coming on when there is no water in the tank?


But I do!

The predictor calculates the future evolution of the system based on the current state and system parameters. The horizon changes dynamically and is typically about 5 seconds in steady state, and up to multiple minutes when cooling from steaming to brewing temperature (it depends heavily on insulation, steaming temperature, how long it was steamed, etc.).

The parameters are not directly physical values like thermal mass or energy, but are related to them. For example, I am calculating the amount of energy lost based on water temperature and outside temperature (outside meaning inside the enclosure). This is in the units used to drive the heater (the unit is a half-cycle of AC power). I am calculating (predicting) the cooling of the water when it is pumped, based on the amount of water pumped and the temperature of the water in the tank (this is also being measured).

I calculate initial values based on step responses and then use a moving horizon estimator to continually test past predictions against actual readings and adjust the parameters to match.

I haven't yet analyzed which parameters are truly important and which can be easily compensated for without having to sense them.

As for the level of water in the tank, I could not find a suitable sensor so I built one. It can only give a low-level alarm (water falling below a set level). It is a plastic pipe with a floating magnet, attached to the inside of the tank. On the outside, not mounted to the tank (so that I can freely remove the tank) but pressed against it with a flat spring, are two reed switches. Using those reed switches I can detect when the water level falls as it is being used, or rises when it is replenished.


Good work. If you can model the system it is always easier to control it; it's just that in real life it is often too hard to model.

Also, sorry, I should have said Q = mc * delta(T) in my post.


What type of espresso machine do you use that allows you to program a controller for the heating element?


I am using a Rancilio Silvia. It is a well-built machine but has absolutely no electronics, is easy to disassemble, and has plenty of free space internally, which makes it perfect for experimentation.

My research of mods described on the Internet shows that most people fit some kind of off-the-shelf PID controller, an SSR to control the heating element, and a temperature probe mounted on the outside of the boiler.

A minority use a Raspberry Pi, Arduino or some other existing microcontroller board to allow for a nice display and more functionality.

A few people are doing completely custom boards, which is what I am doing for this project. I am currently learning electronics after almost 20 years in software and I find it a very good exercise from a design perspective.


Thanks! Sounds like a fun project.


Oh wow, this brings back memories. I almost wonder if you're the person who sponsored my senior design project.

We were given something almost identical to this, and the customer wanted a control algorithm. I had taken control theory, but there were two problems:

1. A lot of undergrad control theory assumes linear systems.

2. They almost always give you the transfer function.

Our system was likely nonlinear and we had no transfer function.

For simple physical systems, you can use basic physics to derive it. For anything else, figuring out the transfer function is a whole course in itself. I went to the control theory professor and asked for tips and he said "Forget it, it's way beyond the scope of a senior design project. I have students who do it for their MS project". Unfortunately, I couldn't convince the course's professor, and my grade suffered.

Oh well, you learn more in failure and all...


1. Yes, but linear controllers can be applied to nonlinear systems given the criteria I listed in a comment above.

2. Yes, but the transfer function is usually only needed to simulate the system in Simulink or similar software. In the real world, you never have the transfer function.

So how do people do it?

You either identify the transfer function model using step/impulse tests (covered in undergrad courses and not that hard to do -- the least sophisticated way is to just measure the response to a step and graphically estimate parameters K and tau; it gives you a good starting point), or you can tune a controller by hand starting with a tuning heuristic and then refining. https://en.wikipedia.org/wiki/PID_controller#Loop_tuning
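The graphical estimate mentioned above is easy to script, too -- something like this (Python; the step-test data is made up, and a pure first-order response with no dead time is assumed):

    import numpy as np

    # Hypothetical recorded step test: input stepped by du at t = 0, output y(t) sampled at t
    t = np.array([0, 1, 2, 3, 4, 5, 6, 8, 10, 15, 20], dtype=float)
    y = np.array([0, 1.8, 3.1, 4.0, 4.7, 5.2, 5.5, 5.9, 6.1, 6.3, 6.3])
    du = 2.0                                   # size of the input step

    dy = y[-1] - y[0]                          # total change once the response settles
    K = dy / du                                # process gain
    tau = np.interp(y[0] + 0.632 * dy, y, t)   # time to reach 63.2% of the change
    # First-order model:  G(s) = K / (tau*s + 1); a dead time can be read off the same plot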

A more advanced way of identifying the transfer function is to collect data (either using pseudorandom binary sequence tests, i.e. PRBS step tests, or just simple plant step tests) and then do "system identification", that is, fit a transfer function model via statistical methods. Matlab's System Identification Toolbox is designed for this purpose.

I suspect your course professor did not specialize in control, and/or doesn't have real world experience. This is not unusual of course. Faculty are often assigned courses to teach, and some of these courses are outside their area of expertise.


>You either identify the transfer function model using step/impulse tests

This is what I tried (step, not impulse). Did not work well. The result was too dependent on the "starting position".

The course professor probably knew less about controls than I did. He was a computer engineering professor.


> The result was too dependent on the "starting position".

PID controllers always control around a nominal operating point (which you pick), which is why deviation variables are typically used in the mathematical derivation. That's just the basis of the control algorithm.


Thanks for pointing out an obvious reason why position control for a DC motor requires the cascade topology. I've always used it, but never formally understood why it was required. Just curious: how did you come to this conclusion/explanation? Do you have any resources on the subject? I'm still struggling to find a (again, formal) rule for what the ratio between the outer and inner loop rates should be.


> how have you come to this conclusion / explanation?

Through personal suffering while working on firmware for UAVs (PX4/Ardupilot), CNC controllers (grbl, Marlin), inverted pendulum balancing exercises, RC car control etc.

Sadly, no resources. I guess the best advice I could give on this subject is: increase the visibility of the internal state of the system. I have lost track of the number of times I've seen people trying to tune a PID controller just by outside observation (hmm, looks a bit better than last time) instead of looking at real-time plots of the measured process error.

Add high-resolution logging, look at the plots of error and P/I/D terms, have a simulation where you can replay the sensor values and observe the system output.


You've just summarized the difference between hacking and engineering -- data!

Without data we're just somebody with an opinion. It should be the root of our decisions.


Can't edit my comment anymore, but as multiple commentators pointed out, what I'm calling a non-linear system is probably just a higher order linear system.


Just going to throw my 2-cents in - I should note that my knowledge of PID is very limited.

The first time I encountered an explanation of how PID worked that made sense to me, was via the Udacity CS373 course that I took in 2012 (Thrun also had a great explanation on how a Kalman filter worked as well). This explanation of PID was repeated when I took the Udacity Self-Driving Car Engineer Nanodegree (starting in 2016).

In the CS373 course, Thrun detailed a few different ways to tune a PID controller - but one way he covered was rather curious in how it worked. It wasn't perfect, but it got you "close enough". I'm not sure if it would work well for a "real system" but for the purpose of learning it seemed to work well enough.

He called it "twiddle" - I don't know if he was the original author of it, or what - but here's a video that describes how it is implemented and how it works:

https://www.youtube.com/watch?v=2uQ2BSzDvXs

It's actually pretty neat - and the concept can be applied to virtually any algorithm in which you are trying to minimize an error amount.


Twiddle, AKA coordinate ascent, definitely works in practice. It's basically the same process a human uses to tune PID parameters: cyclically tune one parameter at a time until your performance is good enough. It's closely related to gradient ascent and suffers from the same problems, namely that it can get trapped in a local optimum, but it's easy to implement and does a good job if you start with an OK guess at the parameters.
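For reference, the algorithm itself is only a few lines. A sketch (Python; run(params) is a hypothetical function that runs one trial and returns the error to minimize):

    def twiddle(run, params, deltas, tol=1e-3):
        """Coordinate search: nudge one parameter at a time, keep whatever helps."""
        best = run(params)
        while sum(deltas) > tol:
            for i in range(len(params)):
                params[i] += deltas[i]
                err = run(params)
                if err < best:                     # improvement: keep it, search more boldly
                    best = err
                    deltas[i] *= 1.1
                else:
                    params[i] -= 2 * deltas[i]     # try the other direction
                    err = run(params)
                    if err < best:
                        best = err
                        deltas[i] *= 1.1
                    else:                          # neither direction helped: back off
                        params[i] += deltas[i]
                        deltas[i] *= 0.9
        return params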


I am a little surprised that the author does not use a low-pass filter for the D part, for example 1/(1 + s*Tf), and instead tells you to modify the hardware.

Sure, you need to cut off the noise with hardware with respect to your Nyquist frequency, but you can still have a lot of high-frequency noise. And it is not always possible to use hardware filters to get rid of the noise, for example if you are tracking an object with a camera and the x,y position is your input to the PID. Of course, you can skip the D part.

What am I missing here?

I also find it easier to use a reset-based integrator anti-windup, where you specify the limits on your control signal instead of limiting the integrator.

Here is a diagram of what I usually do:

          +--------+           +------+
   -y     |    1   |           |      |
 -------->+ ------ +---------->+ Kd*s +----+
          | 1+s*Tf |           |      |    |
          |        |           +------+    v
          +--------+   +----+            +-+-+       +-------+   
   e=r-y               |    |            |   |       |    __ |      u
 --------+------------>+ Kp +----------->+ ∑ +---+-->+ __/   +---+--->
         |             |    |            |   |   |   |       |   |
         |             +----+            +-+-+   |   +-------+   |
         |                                 ^     |               |
         |   +----+   +---+    +-----+     |     |     +---+     |
         |   |    |   |   |    |  1  |     |     |   - |   | +   |
         +-->+ Ki +-->+ ∑ +--->+  -  +-----+     +---->+ ∑ +<----+
             |    |   |   |    |  s  |                 |   |
             +----+   +-+-+    +-----+                 +-+-+
                        ^                                |
                        |                                |
                        |      +----+                    |
                        |      |    |                    |
                        +------+ kt +<-------------------+
                               |    |
                               +----+


The kt gain will reduce the integrator when the output is saturated.

Does anyone know the pros and cons of the different anti-windup solutions?


I always do integrator anti-windup similar to how you do it -- apply limits to the overall controller output (which are typically necessary for the system anyway), and use that to limit the integrator. Just putting limits on the integrator is too crude, and it's not like it's any easier to program. Maybe the author introduced that because it's easier for a beginner to understand.

However, I typically do not change the state of the integrator the way you are doing here. I just hold the integrator state (i.e. don't update it) if the controller is limited. That way, noise on the P or D signals doesn't cause my integrator state to hop around. I may do it your way if the controller limits are changing over time for some reason.

I also always low-pass filter the D term, as you mentioned. Otherwise, the D term is just too noisy.

I often find that my P and D terms are limited by how much sensor noise I'm willing to let thru to my controller output.
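Putting those pieces together, one update step in this style might look like the following (Python sketch; gains, filter constant and limits are made up; derivative taken on the measurement and first-order filtered, integrator held while the output is clamped):

    KP, KI, KD = 2.0, 0.5, 0.1     # hypothetical gains
    TF, DT = 0.05, 0.01            # D-term filter time constant and sample period
    U_MIN, U_MAX = 0.0, 100.0      # actuator limits

    integ, d_filt, y_prev = 0.0, 0.0, 0.0

    def pid_step(setpoint, y):
        global integ, d_filt, y_prev
        err = setpoint - y
        # Low-pass filtered derivative, taken on the measurement to avoid setpoint kicks
        alpha = DT / (TF + DT)
        d_filt += alpha * ((y_prev - y) / DT - d_filt)
        y_prev = y
        u = KP * err + KI * (integ + err * DT) + KD * d_filt
        if U_MIN <= u <= U_MAX:
            integ += err * DT              # only update the integrator when not saturated
        return min(max(u, U_MIN), U_MAX)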


yeah


'Feedback systems: an introduction for scientists and engineers' is another nice reference [1]. I used it and another Wescott text to put together effectively a somewhat extended version of the OP's code [2].

[1] http://www.cds.caltech.edu/~murray/amwiki/index.php/Main_Pag...

[2] https://github.com/RhysU/helm


Or you can use my PID simulator, where you can play with the constants in a simulated car to reach different speeds; it updates the output graph instantly:

http://codinglab.blogspot.be/2016/04/online-pdi-trainer.html


Great, thanks!


My SW eng program did not even hint at these. Is there a good edX or Coursera course that goes through more extensive material than the PDF? I'm also curious what tools are used by professionals. MATLAB, Simulink?


Not sure if this is allowed, but I created a Udemy course on PID control where I go through the theory, and then in the assignments you write Python code to create a PID controller for an elevator (with the goal of moving the elevator to the desired height). A PID controller can be written in less than 10 lines of code, but understanding the different components is very important for tuning it and getting it to work. If you're interested, just go to Udemy and search for PID Control; you'll find it.

If you'd like a discount code (it's not worth the full price if you're a SW eng), DM me on Twitter, as I'm not sure I'm allowed to post it here.

When I did my MSc I did all my PID controllers in MATLAB/Simulink but since the actual code for a PID controller is very simple, it's easy to implement in Python or C++.


Looks interesting .. as a SW Eng, I find Udemy very useful for these hw courses. Thanks for the pointer!


Google for a class in (classical) control theory. This is a standard topic for electrical, mechanical, and chemical engineering programs.

For implementing a PID loop, you can do it in pretty much any language you want -- it's relatively straightforward once you know the math. You can even do it in hardware via an analog circuit if you desire. However MATLAB / Simulink are commonly used when experimenting with system parameters due to the presence of straightforward plotting tools and numerical ODE solvers.


The Control Lectures series by Brian Douglas on YouTube is a fantastic resource (https://www.youtube.com/user/ControlLectures). Also U. Mich. has a great set of control tutorials put together with a focus on implementations in MATLAB/Simulink (http://ctms.engin.umich.edu/CTMS).

All of my own experience in controls (both as a student and professional) uses MATLAB/Simulink to model and generate controller code. Other tools are out there, but I don't know much about them.


So here's an awfully uninformed question that will probably get me laughed at by actual engineers (as in, not 'software engineers' like myself - and yes, Canadians, please bite your tongues, I've heard your spiel by now): when you have a more powerful computer to work with, isn't it much easier to just derive some regression parameters and call it a day?

My use case: I have this home-built growhouse that I use to start my seedlings in at the end of winter (and as a fermentation chamber for wine). It's controlled by a Raspberry Pi, and as inputs I have 4 groups of LED grow lights, a 30 W heating element, a fan that draws warm air out of the box and thus brings the temperature within the box towards ambient temperature, and DHT22 sensors that measure temperature and humidity inside and outside the box. The goal is to keep the temperature as close to a certain target temperature as possible, keeping in mind that I want the lights to be on for a certain number of hours per day, and the lights give off so much heat that they will pretty much always push the temperature over the target (so the fan then needs to bring it down again). It's easy to get this working the naive way with a hysteresis of say +/- 2 degrees (which is fine for my purposes), but the nerd inside me sees it as a game to make that band smaller. So I've been reading up on PID controllers and dear baby jesus, I can't even work out how I'd get started. So the pragmatic part of my brain goes 'just run it for a few weeks with incremental inputs between 15 and 35 degrees (C, of course), measure the responses, toss all measurements into R and derive some regression parameters'.

Does anyone do this in industry? Probably not because everywhere I've asked, everybody's who's ever done real work on this uses PID controllers - but is that because they're easier to build on microprocessors (i.e., need less number crunching)?


There will almost always be external influences that change the behaviour over time. If you have a vehicle, going up a hill will require more energy to keep the speed constant. For your growhouse, if it's hot outside of the box it may require less heating, etc. Or the LEDs that you use get less efficient over time. You really want a dynamic algorithm such as PID that can deal with these changes in a near-optimal fashion.


> It's easy to get this working the naive way with an hysteresis of say +/- 2 degrees

Yeah, that's a bang-bang controller. Those actually work really well as long as you don't mind the temperature swing.

It mostly sounds like your temperature control is via the fan and the heating element.

The heating element is the simplest to control with a PID loop. Just use a PWM output to control the heater power. Use a solid state relay to switch the power to the heater.

Once you have that, you need some way to record and plot the setpoint and temperature.

Once you have that, the simplest way to get started is to just play around with straight proportional control. Then add integral control with some sort of anti-windup. You don't actually need much theory to implement and tune a PI controller.

Three tips. First, for your case you likely only have to run the control algorithm at 10 Hz. Second, when tuning, don't adjust your tuning constants linearly; instead double or halve them. Third, on a Raspberry Pi use double-precision floating point.
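As a sketch of that progression (Python; TARGET_C, read_temp_c() and set_heater_pwm() are hypothetical, and the gains are made up -- start P-only, then add a little I):

    import time

    KP, KI = 5.0, 0.05            # made-up gains; double or halve these while tuning
    DT = 0.1                      # 10 Hz loop
    integ = 0.0

    while True:
        err = TARGET_C - read_temp_c()
        integ += err * DT
        duty = KP * err + KI * integ
        duty = min(max(duty, 0.0), 100.0)        # clamp to 0-100% PWM
        if duty in (0.0, 100.0):
            integ -= err * DT                    # crude anti-windup: undo the update when pinned
        set_heater_pwm(duty)
        print(time.time(), TARGET_C, read_temp_c(), duty)   # log for plotting and tuning
        time.sleep(DT)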


What you are describing is called "system identification". If you have a simple mathematical model, it is possible (and in the case of your fermentation chamber, it should be) to measure a couple of step responses and estimate the parameters to design a control system.

Finding good Kp, Ki, Kd gains by just sampling the [Kp, Ki, Kd] space at random and trying them out on the real system is often not recommended. Firstly, because the parameter space is relatively large. Secondly, many systems can fail catastrophically if they become unstable, causing potential harm to man and machine. The second objection does not apply to a wine fermentation chamber, as far as I can see.


There's also this[1] series of blog posts by Brett Beauregard that speaks of a few improvements for real-world implementations.

[1]: http://brettbeauregard.com/blog/2011/04/improving-the-beginn...


That gets you through 90% of real world control problems. Filtering the derivative input gives you another 5% for some noisy processes. The other 5% require somebody who knows a bit more.


What I have been searching for for years is not theories or tutorials about PID, but a detailed explanation of how to tune a digital controller (instead of a continuous one) in an easy-to-set-up environment, whether that be simulation or cheap hardware. I know examples can include the inverted pendulum (IP), double IP, mountain cart, heater with thermometer feedback, line-follower, etc., but where can I easily play around with these examples?


I found this video useful: https://youtu.be/uXnDwojRb1g


Wish my high school FRC robotics team had had this when we naively attempted to control elevator levels with a PID.


I also remember tuning PID values on the old Jaguar motor controllers. It was not a pleasant or particularly fruitful experience.


The team that I mentor is about to start tuning their PID controllers... this is very timely.


The MATLAB System Identification Toolbox is nice for getting a casual feel for these systems. Note that PID controllers are mostly used for their computational rather than control performance. They are mathematically equivalent for linear systems, but that's like saying x = inv(A)*b is just as good...

The powerful idea is the linear system model, or the stochastic linear system model. Understand them and the properties of recursive filters, and PID will come naturally.


It's so sad. I studied this stuff in depth for almost a year in college... and now, after 2 years of working as a software dev, I've forgotten almost everything.


I wouldn't worry about it. In your career, you're going to forget a ton of stuff (if you're not repeating the same project over and over again). That's just how it works.

The win is that someday you will end up needing either the stuff you learned or the math that you used, and you'll find out it's way easier (and more illuminating) learning it the second time around.


509'd. Does anyone have the PDF for it? Google cache seems to just render the document, not give me the PDF (I like to download and read offline, from time to time).


He stresses regularly spaced sampling. I don't see why high-precision timestamps wouldn't work just as well, though that would complicate the computation of the I and D signals.

Actually, throughout he doesn't seem to acknowledge the different units between the P, the I, and the D, instead implicitly treating the whole system as discrete time for many purposes, e.g. in the recommendations for 100:1 ratios between P and I, and between D and P.
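For what it's worth, the timestamped version isn't much extra code -- the I and D terms just scale by the measured dt (rough Python sketch, hypothetical gains):

    import time

    integ, prev_err, prev_t = 0.0, 0.0, time.monotonic()

    def pid_step(err, kp, ki, kd):
        global integ, prev_err, prev_t
        now = time.monotonic()
        dt = now - prev_t                 # measured, not assumed, sample interval
        prev_t = now
        integ += err * dt                 # I term scales with the actual dt
        deriv = (err - prev_err) / dt     # D term divides by the actual dt
        prev_err = err
        return kp * err + ki * integ + kd * deriv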


Regarding equal spacing: the sample rate determines part of the system latency, and therefore performance and stability limitations. Eliminating the stability of the sample rate eliminates the decades of work basic controls analysis relies on. Or said another way: slow your sample rate and pay with your phase margin. Not to say it's impossible to deal with (small) changes in sample rate, but in a text that is already wildly simplifying things, this sounds like the wrong place to innovate.


Does anyone here have experience with Fliess iPID controllers?[0] The papers indicate they are superior, Fliess is a generally respected academic, but the math seems the same with slightly different terminology....

[0] https://hal-polytechnique.archives-ouvertes.fr/inria-0037232...


This is a great short overview, and timely for the First Robotics Competition season.

I took a course on this in college, and while that course mentioned the notion of sampling the D signal directly from feedback instead of from error, it never laid out why. This guide's practical explanation of why one might want to do that is great.



My sous-vide controller uses a PID to regulate the temperature super precisely.

https://github.com/aguaviva/SousVide


I have been doing an (integer) PID without a PhD over the last few years as well: meCoffee (https://mecoffee.nl), for espresso machines.


I just bought an espresso machine, and it's crazy that something as simple as a PID controller justifies a several-hundred-dollar markup on some models. The processing power needed is minuscule!


True. And if your machine does not have a pressostat, it makes a huge difference.


Does the optional dimming of the boiler make a difference in your design? Mine just fully turns on and off all the time, and I reckon dimming would probably be easier on the power grid. 2000W on/off vs ~80W average (I measured) is quite the difference.


I am using an SSR with zero-crossing switching, and it makes a big difference.

First, I am dithering the boiler power (power is applied to the boiler on only some AC cycles): I am basically deciding per cycle whether I want my boiler turned on for the next 1/100th or 1/120th of a second. I am already running my model about 50 times per second (not using PID), so that's OK.

Being able to apply fractional power to the boiler means it is quiet and there are far fewer scale deposits.

Also, due to the zero-crossing control I have much less electrical noise. Blindly turning the boiler fully on causes electrical noise which is not nice for other devices (my Baratza Sette grinder turns on spontaneously when this happens).
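The per-cycle decision can be as simple as a running accumulator (Python sketch; ssr_enable()/ssr_disable() are hypothetical and would be driven from the zero-crossing timing):

    acc = 0.0

    def half_cycle_tick(power_fraction):
        """Called once per AC half-cycle with the commanded power in [0, 1]."""
        global acc
        acc += power_fraction          # accumulate the fractional power owed to the boiler
        if acc >= 1.0:
            acc -= 1.0
            ssr_enable()               # hypothetical: let the SSR conduct this half-cycle
        else:
            ssr_disable()              # hypothetical: skip this half-cycle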


Your measurement seems to match up. It makes a difference in that the meCoffee PID can adjust its P response 100 times a second; it can react much faster.

Dimming such heavy loads is not within spec in all jurisdictions though.

It also includes normal interval behaviour.


I love my brewing water to be perfect temperature, too!

I am currently writing a model-predictive controller with a moving horizon estimator for my Silvia!

The idea of the controller is to model the boiler water temperature based on the current brew head temperature and previous heating/brewing history. Then the model-predictive part settles the boiler optimally, without any over- or undershoots.

The estimator (a general case of what is also called a Kalman filter) monitors the operation to update the model parameters. It is detecting the scale buildup over time! Kinda neat.


I like the idea of taking the brewing into account. It’s clear that the added cold water from the pump will result in cooling of the boiler water, and the amount of cooling is easy to calculate. But you probably need to add a sensor to measure the water flow.


I have a sensor for water flow (a replacement part bought from a Saeco machine) and also a sensor that measures the tank water temperature. That sensor is on a spring touching the bottom of the tank so that nothing has to be disconnected :)

I also have a water level sensor, which is a small plastic pipe mounted to the side of the tank. Inside it there is a floating magnet. On the outside of the tank, there is a flat spring which pushes two reed switches against the side of the tank where the pipe is. This allows me to detect when the water falls below an acceptable level.


Is a machine learning algorithm capable of getting better results than a PID? Is the extra effort worth it?


> Is a machine learning algorithm capable of getting better results than a PID? Is the extra effort worth it?

Potentially yes, but only as a component of a completely different control paradigm (MPC). ML itself will probably not improve on PID while still keeping it PID.

So, there are two parts to a control system:

1) The predictive model (prediction)

2) The feedback control law (action)

ML primarily concerns itself with function approximation from data, y = f(x, p), where p are parameters -- in the parametric model case. ML practitioners create predictive models but don't really consider feedback loops. That is more the domain of control theory, which is concerned with system behavior when exogenous inputs (actions) act on the system. The math is different.

ML can be used to fulfill the role of (1). It has nothing to say about (2).

In the PID case, (1) is extremely simple -- it is simply an error function (difference). (2) is where the magic happens... based on the error, the controller computes the right action. The beauty of PID control is that it is technically "model-free" (if you don't consider an error function a model). You can apply it to systems where you have very little data and still do a fairly good job. You could replace (1) in PID with an ML model, but then you would be defeating one of the attractive qualities of PID control: effectiveness under uncertainty without a model. You can use ML to guide the tuning of PIDs, but that's about it.

That said, ML models have a place. In advanced controllers like MPC, (1) is typically a dynamic (state-space, FIR/FSR, physics equations-based) causal model of the system under control. (2) is an optimization algorithm.

You could conceivably replace (1) with an ML model. This is done in many cases -- in many plants, there is a predictive model that infers states from images and sound (for instance, using near-infrared sensors to inferentially predict certain chemical properties). But that's no longer PID.


What do you mean by "better results"?

For a non-linear system, machine learning can perform much better and solve more complex tasks, like walking for a humanoid robot. A linearized PID solution cannot beat that.

For linear systems:

Depends on what you mean by better, I guess. My understanding is that with a PID you can place some of the poles of a system pretty much where you want. So for a simple linear system: no.

There also exist other controllers: an LQR can be used for linear systems to get a theoretically optimal controller that minimizes a cost on the states and control outputs (yes, several outputs).

However, maybe ML can be used to choose and tune controllers better than humans?

EDIT: Also, the discretization is not always perfect (backward or forward Euler are common approximations). So maybe ML can help with that too?


> For a non-linear system, machine learning can perform much better and solve more complex tasks, like walking for a humanoid robot. A linearized PID solution cannot beat that.

Right but you are comparing a nonlinear regression model with a linear control scheme, which isn't really fair. Plenty of nonlinear control strategies exist and are used in practice for things like humanoid walking. Moreover, many people prefer these controllers due to the theoretical guarantees they provide over a learned control policy.

That said, ML does have a place in control theory, and you can find papers going back to at least the 90s (I never looked earlier, but they probably exist) that combine the two, and not just in a reinforcement learning context.

> However maybe a ML can be used to choose and tune the controllers better than humans?

Probably not, assuming you are discussing linear systems. You would need to come up with some cost function for your ML algorithm to determine the performance of a particular control policy, but then why not just use that as the objective function for your optimal control algorithm?


What's the reason he insists on using double precision math? (If not using fixed point)


Keeping things simple: if the input data SNR is, say, 16 bits, and the PID coefficients are 16 bits, then there are 32 bits of significand at the output of the control gain multipliers. For single-precision float, this is truncated to 23 bits. So the system may settle when the transducer input is near zero (where the exponent can scale down to maintain sufficient precision), but then fail to settle well at the ends of the transducer range (where the transducer input is near +/-2^15 and the exponent in the single needs to stay high to avoid saturation). Also, consider that the numerical integrator is running an infinite series of adds, so therein lies a trap for error growth as well.

I have made this mistake before, and the sometimes strange control behavior can be frustrating. Considering the small performance penalty for using doubles on modern hardware, it is excellent advice.
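A toy illustration of the integrator trap (Python/NumPy; the magnitudes are made up but representative):

    import numpy as np

    inc = np.float32(1e-4)         # small per-sample contribution to the integrator
    acc32 = np.float32(1e4)        # integrator already holding a large value
    for _ in range(100000):
        acc32 = acc32 + inc        # each add is below float32 resolution at this magnitude and is lost
    acc64 = 1e4 + 100000 * 1e-4    # double precision handles it: 10010.0
    print(acc32, acc64)            # the float32 accumulator stays stuck near 10000.0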


Thanks for the explanation! I only have hardware with no FPU (Teensy 3.2) or a single-precision FPU (Teensy 3.5)... I suppose in that case it would be better to write a fixed-point implementation?


Fixed point often works better, as it has better resolution because it doesn't need the exponent. The flip side is that the range of a fixed-point number is very limited (because it doesn't have the exponent). Practically, that means you need to do some analysis to make sure you don't overflow or saturate, and to use the bulk of your fixed-point range to get the resolution benefits. If you do that, fixed point is great.

If you don't have time or knowledge to do that, floating point works better because you can be more ham-fisted with your scaling and still not overflow or saturate.


I have the choice between using a slower processor (96 MHz) without an FPU or a faster one (120 MHz) with a 32-bit FPU. I might use the slower one but with a fixed-point implementation. Thanks for your tips!


This reminds me of my Achilles heel during undergrad. Kalman filters.

No matter how much I read about the topic, I just could not grok it. The whole "state update" algorithm and sheer amount of different variables threw me off. Does anyone else feel the same way?


You are not alone! From what I've seen, many people struggle with Kalman filters. If you want to build an in-depth understanding of them, I've heard lots of good things about this online book: https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Pyt...


Yes, indeed!

What worked for me in the end, was to first understand state observers (e.g. Luenberger observers) and then consider a Kalman filter as an optimal observer, where the observer gain is the steady-state Kalman gain.

The whole duality between state-feedback design and observer design -- and LQR and Kalman filters. Then it made sense :-)


The book "Probabilistic Robotics" explains the Kalman filter really well by putting it in the larger context as a Bayesian filter. I understood it only after reading that book


I haven't read that book, but seeing it as a Bayesian filter made it click for me. All the control theory guys who just handwaved with "it's basically a stochastic Luenberger observer" didn't capture the essence of the filter for me. With a strong statistics background it was much better to go from Bayesian first principles to the Kalman filter.



I totally understand where you are coming from. I read multiple tutorials and even implemented Kalman filters in Python, Matlab, and C++ for a balancing robot project. I never really understood what was happening, even though I knew what the Kalman filter is used for and when I need it. The thing that "clicked" for me is that multiple measurements are always more accurate than a single measurement. That's why there is the feedback loop that derives the predicted value and the uncertainty associated with it. By combining the predicted value and the measured value (and their uncertainties), you get a more accurate estimate. Then you can use this more accurate estimate to "correct" your predictions so that the next prediction has even less uncertainty. This continual refinement is why Kalman filters converge towards the "true" state.
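That predict-then-combine loop is tiny in one dimension. A rough Python sketch (the noise values are made up), in case it helps:

    def kalman_1d(x, p, z, q=0.01, r=1.0):
        """One predict/update step: x = state estimate, p = its variance,
        z = new measurement, q = process noise, r = measurement noise."""
        # Predict: the state persists, but our uncertainty grows a little
        p = p + q
        # Update: blend prediction and measurement, weighted by their uncertainties
        k = p / (p + r)            # Kalman gain: how much we trust the measurement
        x = x + k * (z - x)
        p = (1 - k) * p            # the combined estimate is more certain than either alone
        return x, p

    x, p = 0.0, 1.0
    for z in [0.9, 1.1, 1.0, 0.95, 1.05]:    # noisy measurements of a constant ~1.0
        x, p = kalman_1d(x, p, z)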


The whole "state update" algorithm and sheer amount of different variables threw me off.

IMO it starts with the name; it tells you nothing and probably implies something different. It is the dynamic programming of control theory.


So true. "Filter" should be reserved for something like low-pass filters.



