> But the rule of thumb of good optimization (something something Jensen inequality) is that all your objectives must be growing all the time, so you just have to be sure you are not letting anything regress. It's called striking a balance. Then all objectives will grow following an S-curve simultaneously.
Does this assume no cost for context switching?
When I have tried to juggle several objectives in my life, I often have regressions in the ones I don't focus on at the time. For example, after focusing on working out for a while, I may lose interest and not follow an "optimal maintenance" regimen, meaning my abilities decay below their peak. I don't feel too bad about it, and use muscle memory as justification: when I return to the activity, I will get back to my peak much quicker than when I first got there. Doing so allows me to extend the time before I have to refocus on that activity again. (But maybe that's suboptimal in the sense that I pushed myself too hard initially, so I am fed up with that activity and don't feel like doing maintenance at all.)
> Closely related is the eternal debate between satisficer vs maximizer which is just the primal-dual representation of the optimization problem.
Maybe you're working out too intensely, or focusing too intensely.
I find that committing to just a five-minute workout easily turns into 40+ minutes of exercise a day. That's on top of my commitment to going to the gym and doing an hour's worth of training.
> But maybe that's suboptimal in the sense that I pushed myself too hard initially, so I am fed up with that activity and don't feel like doing maintenance at all.
Yep, that's what I think. If you are oscillating for no reason, you are doing it wrong. That's the typical yo-yo effect people experience when optimizing for their weight.
Humans do a really poor job at optimization in general. Quite often, following a simple PID controller makes these oscillations disappear and allows the optimization to continue instead of getting stuck.
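For the curious, here is a minimal sketch of the PID idea in Python. The plant model, the gains (kp, ki, kd), and the weight-tracking framing are all invented for illustration; a real system would need tuning:

```python
def pid_step(error, state, kp=0.5, ki=0.05, kd=0.1, dt=1.0):
    """One PID update; `state` carries the integral and the last error."""
    integral, prev_error = state
    integral += error * dt                  # I term accumulates past error
    derivative = (error - prev_error) / dt  # D term damps oscillations
    control = kp * error + ki * integral + kd * derivative
    return control, (integral, error)

# Toy plant: the value responds directly to the control effort.
target, value, state = 70.0, 80.0, (0.0, 0.0)
for _ in range(50):
    control, state = pid_step(target - value, state)
    value += control
print(round(value, 2))  # settles near 70 instead of yo-yoing around it
```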
This is a simple model (quadratic), so it has its limits. Context switching shouldn't be a problem. Ideally you would have zero cost to context switch, but if the cost is non-zero, it only increases the variance; you should still reach your optimum, as long as you remember that you are not just optimizing for the task at hand but for the combination.
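To make that concrete, here is a toy simulation (all numbers invented) with two quadratic objectives, alternating focus between them and modelling the context-switch cost as noise on every update. Both variables still end up near their optima, just with more variance:

```python
import random

# Two quadratic objectives with peaks at x = 3 and y = 5.
def grad_f(x): return -2.0 * (x - 3.0)   # gradient of -(x - 3)**2
def grad_g(y): return -2.0 * (y - 5.0)   # gradient of -(y - 5)**2

x, y, lr, switch_noise = 0.0, 0.0, 0.1, 0.3
random.seed(0)
for step in range(200):
    if step % 10 < 5:   # focus on x for five steps...
        x += lr * grad_f(x) + random.gauss(0, switch_noise)
    else:               # ...then on y for five steps
        y += lr * grad_g(y) + random.gauss(0, switch_noise)
print(round(x, 1), round(y, 1))  # both hover near 3 and 5, just noisier
```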
The other limitation is when you have non-linear effects, but those are usually adverse: big oscillations are more likely to result in an injury than in an acceleration of progress. Sometimes, though, they can be beneficial, like avoiding over-adaptation.
> Can you elaborate on this?
Which brings me to the satisficer vs maximizer point. There are often two schools of thought when you optimize a problem that has constraints.
Once the problem becomes more complicated, some people will assume a simplistic model and go with it, while others will try to find the best model before going with it. General-structure oriented vs detail oriented.
You can stay in the primal representation, where you respect the constraints and improve while remaining feasible. Or you can switch to the dual, where you give yourself some slack in the constraints while incurring a cost for violating them, which you add to the objective. You can also stay within the admissible region using some barrier method.
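Here is a toy sketch of the two views on an invented one-dimensional problem (maximize f(x) = -(x - 4)^2 subject to x <= 2); the penalty weight mu is an arbitrary choice:

```python
def grad_f(x): return -2.0 * (x - 4.0)   # gradient of -(x - 4)**2

def primal(steps=200, lr=0.05):
    """Primal view: project back into the feasible region after each step."""
    x = 0.0
    for _ in range(steps):
        x = min(x + lr * grad_f(x), 2.0)  # projection enforces x <= 2
    return x

def penalty(steps=200, lr=0.05, mu=10.0):
    """Dual-flavoured view: allow violations but pay a quadratic cost."""
    x = 0.0
    for _ in range(steps):
        g = grad_f(x)
        if x > 2.0:
            g -= mu * 2.0 * (x - 2.0)     # gradient of the -mu*(x - 2)**2 cost
        x += lr * g
    return x

print(primal(), round(penalty(), 2))  # 2.0 vs ~2.18: slack you paid for
```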
When you are working in the dual, you are already juggling multiple objectives.
You try to satisfy the constraints simultaneously, focusing on those actively being violated. Usually, once all constraints are satisfied, you are happy and stop maximizing, even though there is still something to grab. If you want to progress further, you give yourself some additional constraints and satisfy those again. Basically, you are trying to land in a good-enough(TM) region of the space.
The maximizer will aim for the peak. He will optimize for the sake of it. Instead of expanding the problem to a more interesting one, he will try his best to grab that extra performance point on his limited toy problem. Only once he has reached the optimal point, and knows how much slack he has left, does he expand the problem to another dimension.
Sometimes he finds that deep peak inside a shallow valley, but most often he is spending a lot of energy just to make the satisficer look bad.
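To caricature the two stopping rules on the same invented problem as above (the "good enough" threshold is an arbitrary stand-in):

```python
def f(x): return -(x - 4.0) ** 2
def grad_f(x): return -2.0 * (x - 4.0)

def satisficer(x=0.0, lr=0.05, good_enough=-5.0):
    """Stops as soon as the constraint holds and f is good enough(TM)."""
    while not (x <= 2.0 and f(x) >= good_enough):
        x = min(x + lr * grad_f(x), 2.0)
    return x

def maximizer(x=0.0, lr=0.05, tol=1e-6):
    """Keeps squeezing until the last epsilon of progress is gone."""
    while True:
        nxt = min(x + lr * grad_f(x), 2.0)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt

print(satisficer(), maximizer())  # roughly 1.87 vs exactly 2.0
```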