For many applications, you're correct: logic works just fine. However, we should also be careful here, because for many machines the programming interface sits on top of a low-level controller that may in fact use a PID to provide stability.
A PID controller provides a method for real-time feedback control. The real-time portion can be important for quickly, and often smoothly, guiding the system from one point to another. The feedback portion can be important because it provides a mechanism to adjust to a changing system based on something that we can tangibly measure. Though, at this point, programming logic could still do both of these things.
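To make the real-time and feedback pieces concrete, here's a minimal sketch of a discrete PID update step. The gains `kp`, `ki`, `kd` and the setpoint here are illustrative placeholders, not tuned values; a real controller would also worry about integral windup and sensor filtering.

```python
class PID:
    """Minimal discrete PID controller (illustrative sketch, not production code)."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0       # running approximation of int(error)
        self.prev_error = None    # used to approximate diff(error)

    def update(self, measurement, dt):
        # Feedback: the control is computed from a tangible measurement.
        error = self.setpoint - measurement
        self.integral += error * dt
        # No derivative on the very first call, since we have no history yet.
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Each call to `update` would happen once per control-loop tick, feeding the returned value back to the actuator.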
That said, a PID controller can provide theoretical guarantees that would be difficult to obtain for an arbitrary piece of code. At its core, we generally use a PID controller to control a system governed by a differential equation. For example, we could say that our system is well modeled by the equation `diff(y) = f(y) + u`, where `y` represents the state, `f` the dynamics, and `u` the control. This `u` could be a piece of programming logic. It could be a PID controller. If we choose a PID controller, then we now have a new differential equation, `diff(y) = f(y) + kp y + kd diff(y) + ki int(y)`. This differential equation is now entirely in `y`. At this point, we have tools like Lyapunov stability theory that can tell us whether or not the system will converge to a particular point for all starting conditions within some region. This is a very useful and powerful guarantee. Without it, it may be possible for the system to enter a feedback loop that damages the machine.

Can we get the same for some logic? Sure. In fact, we can still use Lyapunov stability theory, but it's not always straightforward what the resulting equations look like or whether they're in a form that's easy to analyze. If the logic has time dependence, it can get hard. If there are strange nonlinearities or discontinuities, like if statements, it can get hard. If the result can't be written as a first-order system, it can get hard. If there's a time delay, so the result becomes a delay differential equation, it can get hard. Again, none of this means the logic won't work, and it doesn't mean a stability guarantee doesn't exist, but being able to verify that guarantee can be really important.
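As a small concrete picture of the closed loop `diff(y) = f(y) + u`, here's a forward-Euler simulation using an illustrative stable plant `f(y) = -y` and unoptimized proportional-integral gains (the derivative term is dropped for brevity). All the numbers are assumptions for the sketch; the point is just that the combined system settles at the setpoint.

```python
def simulate(f, kp, ki, setpoint, y0=0.0, dt=0.01, steps=2000):
    """Euler-integrate the closed loop diff(y) = f(y) + u with PI feedback."""
    y, integral = y0, 0.0
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral  # the control input u
        y += (f(y) + u) * dt            # Euler step of diff(y) = f(y) + u
    return y

# Illustrative plant f(y) = -y with hypothetical gains; y is driven toward 1.0.
final = simulate(f=lambda y: -y, kp=4.0, ki=2.0, setpoint=1.0)
```

For this linear plant the closed loop is itself linear, which is exactly why its convergence can be checked analytically (via the eigenvalues, or a quadratic Lyapunov function) rather than only by simulation.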
Now, candidly, most people don't ever model their system and check the differential equation. That said, if the underlying system is linear and time invariant (or close to it), we can still use a PID and get stability guarantees without even knowing the particulars of the system. It may perform poorly, so we still need to tune the PID parameters, but stability still holds. If we take a linear system and add nonlinear logic to it, then it again becomes a difficult question to understand how the system will behave.
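As a toy illustration of how discontinuous logic complicates the picture, consider replacing the PID with a simple if-statement (bang-bang) rule on a pure integrator plant `diff(y) = u`. This plant and rule are assumptions for the sketch; the behavior to notice is that instead of settling, the state chatters around the setpoint forever, which is exactly the kind of thing a smooth closed-loop analysis would have flagged.

```python
def bang_bang(setpoint=1.0, y0=0.0, dt=0.01, steps=500):
    """Simulate diff(y) = u with discontinuous if-statement control."""
    y = y0
    history = []
    for _ in range(steps):
        u = 1.0 if y < setpoint else -1.0  # discontinuous switching logic
        y += u * dt                         # Euler step of diff(y) = u
        history.append(y)
    return history
```

Running this, the trajectory climbs to the setpoint and then oscillates in a band around it rather than converging, a discrete-time version of the chattering that makes switching controllers harder to analyze.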
As one other side benefit, PIDs can also be built from analog components, which can give a very fast response that is often smoother than a microcontroller running an update loop. This is not always important, but it's nice when it helps.