I wrote the schedule for a team of ~20 co-workers at a 24/7 support shop. This required people to change shifts / sleep schedules often.
The schedule was a point of contention for years.
After I volunteered to write the schedules, in 3 months I had everyone happy.
It was not hard... but...
The catch was that I did it manually, a month at a time, with a wide-ranging set of changing priorities that would be hard to program... and the key was COMMUNICATION, and working with people and their personal preferences.
One of the senior teachers in my school used to organise the schedule for 1000 pupils and 50+ teachers. Apart from some minor things, everything worked pretty damn well. Then they decided to get some software to do the same. It never ended up working...
Humans might be kinda slow and expensive, but their ability to handle a lot of variable data and inputs, weigh them dynamically, filter out absurd results before applying them, and so on is pretty amazing.
Like if someone comes to me and says "what if we... did X with the schedule", I could tell them right away how to do it, or why not to. The code would have to be changed / rewritten and tested, and would fail each time... so much mess.
Especially when scheduling other humans... the humans being scheduled can smell computer scheduling a mile away, and IMO they expect the worst at that point. They're not necessarily wrong.
I suspect that the big thing there is negotiation and social skills. It is not just about making the schedule; it is also about talking with people, getting them to compromise, making them happy despite not getting everything they wanted, explaining that they actually won't like what they asked for, and helping them understand the others too.
This is a good point on the social aspect: no software can go to a couple of teachers and say: look, there's an overlap and we've only got one classroom, how about you do x, y, z and then next time you get x, y, z.
I think ultimately if you're working with humans the question is if you want the schedule to solve a sort of math problem for you ... or if you want folks to be happy.
They are very much not the same thing and the dynamic nature of keeping people happy and filling a schedule is extremely difficult to account for with software.
Computers solve the legible problems. But scheduling is one where the problems lack a hard specification, and plans involve several "layers" of prioritization, so what's called for is approximate results that avoid failure modes, which is something a skilled human does pretty well.
I do think that software speeds up some parts of the process, but like so many automated processes, it's computer assistance, not computer intelligence.
I was reading some old literature a few months ago on this, related to police staffing, where union issues had to be taken into account. It seems adding social penalties is a way to do this algorithmically.
You’re giving me flashbacks to some software I worked on a while ago that had to use rules from three different collective agreements from nursing unions. Turns out writing business logic for rules that aren’t internally consistent is a challenge, and unlike most LOB projects I’ve worked on, the procedure cannot be amended until the next round of bargaining.
They are far worse at it than humans. This is the difference between narrow AI and general AI. Humans can do millions of different complex tasks, and can adapt when the situation changes. Computers are terrible at updating their models automatically based on new information, as they don't have understanding of the bigger context. Worse still, software today has no way of analyzing and adjusting the assumptions it is operating off of.
People are (currently) better than computers at updating their models, but I don’t think we’re particularly great at it. Look at how many businesses run on the “this is how we’ve always done it” model.
Assuming you can get someone highly competent to do scheduling, which isn't all that likely. More likely you'll get someone who both produces mediocre results, and favors their friends.
Even if you can get competence, you may want to automate it anyways. I'm currently working in health, and I recently visited a radiology center that claimed it took a full year to get front desk staff fully up to speed. Understanding the full scheduling requirements for all scans, some medicare rules that influence scheduling, and how to handle all the various problems, takes a long time. They live in terror of having front desk staff leave, because a quick replacement is impossible.
It was one of the hardest things I wrote: a schedule for students, teachers and classes. Worked like a dream, written in Pascal. They used it for years.
As a problem, classroom scheduling is very interesting. The kids and the teachers were well aware of the challenges anyone doing the scheduling had, and I very much doubt any software at the time could handle non-standard situations. Interestingly enough, just last year I was trying to find off-the-shelf software that could do scheduling and had a calendar feature, for one of our suppliers. After considering 10 or so options I came to the conclusion that unless someone built a custom solution, they'd have to continue using Excel for it.
Ya, scheduling tools are often pretty bad in general, and constraint optimization does too much assuming that things won’t change or that people are just automatons.
We ended up deciding we needed to build our own tool and have also started to sell it https://happy.tools/
The biggest challenge is just addressing the constantly shifting math on what is 'good'.
So late night shifts are generally undesirable but required to be covered, so you really want to spread them out. Having said that, some folks would rather do a bunch in a row, others spread them out. Some folks mind them less than others.
Just making the math decide that an even number across the whole team is 'fair' always leads to some weirdness no matter how it works out.
So a weekend night shift might be 3 days of work then 4 off. That's awesome for a guy whose vacation starts the following week because he gets more time off. He might actually WANT that. So in that case a night shift is desirable and makes the guy happier.
Another person might not want that or not have a vacation... so his point of view might differ. But maybe next weekend he feels differently.
And that's just one given weekend shift with somewhat unpredictable results.
The real key, though, is not just noting how the variables change wildly; it is that keeping those guys happy and being flexible for them makes them MORE willing to take a rough shift now and then. That's even HARDER to quantify, but EXTREMELY valuable.
And of course my ability to walk up to a dude and explain the situation, where he trusts me because I've done this before / done him favors, is just something a computer that fires off an email can't do.
This is a nice reminder that some things just aren't amenable to automation. Things shift too often and/or the number and type of exceptions just aren't worth automating.
We run into this for customer support (www.assembled.com) and one of the things we've learned is that the ability to solve the fully constrained problem in the real world is not quite enough, at least for our customers.
Quite often, people are looking for the right set of constraints to apply, so they're kind of running a meta-problem on top of the NSP (nurse scheduling problem). How many back-to-back shifts should I allow? Should people's shifts always start at the same time? Can I give some people regular 9-5 schedules? All of the above will decrease the strict optimality of the solution, but increase the happiness and long-term retention of your workforce. Thus, the question really becomes: how much does each of these cost in terms of optimality, and what are the tradeoffs?
We've found that developing a very fast heuristic algorithm for the NSP allows people to iterate quickly on these types of questions (even though it's not quite as optimal as a SAT solver). We use a greedy algorithm with a heuristic that is relatively specific to our problem domain to ensure that we have a reasonable combination of speed and optimality.
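A minimal sketch of what a greedy heuristic of this flavor can look like (the cost function here is my own illustrative choice: balance total load and never double-book a nurse within a day; a real one would encode far more domain knowledge):

```python
def greedy_schedule(num_nurses, num_days, shifts_per_day):
    """Fill each (day, shift) slot with the least-loaded eligible nurse.

    Greedy, so not optimal -- but fast enough to re-run interactively
    while experimenting with different constraint sets.
    """
    load = [0] * num_nurses            # shifts assigned to each nurse so far
    schedule = {}                      # (day, shift) -> nurse
    for day in range(num_days):
        worked_today = set()           # a nurse works at most one shift a day
        for shift in range(shifts_per_day):
            candidates = [n for n in range(num_nurses) if n not in worked_today]
            nurse = min(candidates, key=lambda n: load[n])
            schedule[day, shift] = nurse
            load[nurse] += 1
            worked_today.add(nurse)
    return schedule
```

Swapping in a richer `key` function is where the domain-specific part goes: penalties for back-to-back days, preference scores, and so on.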
Thank you for the pointers! I've been interested in crew scheduling systems for my company to make effective use of personnel. Is the problem of producing ideal time slots for appointments requiring a crew (pattern of ordered and unordered availability) very different?
Perhaps not very different, but each problem formulation has its own complication.
For column generation, specifically, the routine that generates the columns incorporates a lot of problem-specific information. The rest of the solver is more general.
We had to find a solution for doctors to manage 22 different peers distributing 30 shifts a month. The schedule had to abide by each employee's vacation requests, distribute undesirable shifts like weekends and holidays in some equitable manner, and be easily interpreted by staff. The solution was an Excel spreadsheet with a lot of auto-fill logic and formulas showing shift and burden counts for each doctor, so the person managing the sheet could move shifts around when needed and ensure the distribution remained equitable. Excel really helped where Python and other tools would require a lot more code and be a lot less flexible to operator changes.
I built a web interface for this type of nurse scheduling optimization. Mine uses CPLEX instead of the CP-SAT solver but the Python set up is very similar.
Are you referring to the first model? Only 5 of the 5184 solutions are displayed. There's no objective function, and the branching is probably defaulting to a simple systematic search with a min-domain, min-value heuristic. If these are the first 5 solutions in the search tree, it's not surprising that nurse 3 keeps getting stuck with back-to-back shifts (and does not work day 3).
By back to back, do you mean nurse 3 works shift 2 day 1 and shift 0 day 2? Or that nurse 3 always works two days in a row in the listed solutions?
The example doesn't mention back-to-back shifts, only that a nurse can work at most one shift a day, which in turn implies that no nurse can work shifts back to back (within the same day).
One could easily add the constraint that a nurse scheduled on shift 2 cannot work shift 0 the next day.
Any thoughts about the scheduling projects open sourced by staffjoy a few years ago, regarding the relevant schedule optimizers chosen? https://github.com/Staffjoy
We had a few different algorithm iterations. One of our biggest learnings was that it's hard to build a generalized auto-scheduling algorithm. While most companies have similar needs at a surface level, things like local labor laws, cross-functional roles, breaks, and business quirks mean that you end up customizing the algorithm for most customers.
For the best auto-scheduling: Kronos is the incumbent, and Legion.co is probably the most advanced disruptor.
Thanks for replying! You gave money back to investors and open sourced your work. That's commendable behavior! I never forgot that story. As usual, the solution seems straightforward from 30,000 feet but once we go into the details, it turns out to be much more complicated. Noted about Kronos and Legion.
Have you done any further scheduling related work since Staffjoy?
I have been building an employee scheduling system in a factory environment. New employees need to be trained on machines, for a certain number of shifts, with a limit on how many employees can train on a machine at a time.
A question becomes "how many new employees could we handle?".
I looked at some solutions, but landed on an Excel spreadsheet with a small macro for spreadsheet setup, conditional formatting to call out an employee being allocated to 2 positions on the same shift, and a visual way to show for each operator, how long it takes to be fully trained. HR or a manager can setup training schedules by manually allocating people to training slots.
There were some constraints the business wanted (train on machine #1 before #2, do 2 days of training if you haven't already trained on _x_ machine). They would have been hard to accommodate in a way the end user could configure, so I have a notes section they can fill in (and they have to manually observe these constraints).
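For the "how many new employees could we handle?" question, even a crude spreadsheet-style upper bound helps frame the conversation. A sketch with made-up machine names and numbers (it ignores the ordering constraints above, so the true answer can only be lower):

```python
def max_trainees(machines, total_shifts):
    """Optimistic upper bound on how many trainees can finish training.

    machines: dict of machine -> (trainees_per_shift, shifts_required).
    Assumes every trainee must train on every machine and that slots
    pack perfectly, so this is a ceiling, not a schedule.
    """
    return min(
        (capacity * total_shifts) // required
        for capacity, required in machines.values()
    )

# Over 60 shifts, the press is the bottleneck: 2 * 60 // 10 = 12 trainees.
machines = {"press": (2, 10), "lathe": (1, 4)}
print(max_trainees(machines, total_shifts=60))
```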
Totally open to other ideas people have on how to approach the problem.
This sounds like a good solution to me. I can imagine some HN readers might comment 'ew Excel!' but it truly is the right tool for something where you have to transparently demonstrate the logic to non-technical stakeholders in a way they can easily edit and adapt.
Of course, you could build or buy a web app that handles this specific scenario more powerfully and scalably, but if this is a quarterly problem rather than an everyday problem then the additional overhead of the different interface / logins may not be worthwhile.
My only comments would be
1) macros are too hard for nontechnical users in my experience, even just w/r/t setup (Excel prompts you with a scary security warning to enable macros). I think you could maybe do this using the what-if analysis tables, simple if/else formulas, and conditional formatting.
2) another improvement might be a cloud-based version where there is a link at which others can easily view + collaborate, like Google Sheets or Airtable. I think Excel is supposed to have this feature as part of the 365 subscription but I've never used it.
Using Excel sounds like a pain to maintain but if it works it works.
Your problem sounds like a JSP (job shop scheduling problem). If it is, you could use MiniZinc https://www.minizinc.org/ and look at Håkan Kjellerstrand's MiniZinc page http://www.hakank.org/minizinc/, which has a couple of different JSP models.
This looks nice but doesn't address the actual challenges of scheduling.
It's rather easy to generate a schedule, and most of the feasible variants are available online already (there aren't infinite possibilities for how to schedule shifts): you fit it to a 7-day or 28-day (or another multiple of 7) cycle and then copy-paste.
The scheduling challenges occur when people need to take unexpected vacations or during vacation period when staffing is low.
Another challenge is the skill level of individual employees, which is usually not binary (can do the work / cannot do the work), since employees come to work to perform tasks and they don't always learn all of the tasks on the first day.
I’m a bit uncertain about what Google OR-Tools is…it seems to be some sort of framework for solving optimization problems, but based on the example here I am curious how general such a tool can be, as normally I find that a “drop in” algorithms framework is generally quite difficult to make or use. Is anyone using this for something useful? Does it perform faster than a hand-rolled implementation tailored to the problem? Is it easier to use?
It is. The two main components are the CP-SAT solver and the routing library (CP + local search).
As with all discrete optimization problems, a large part of performance and scalability comes down to how well you model the problem. Using a framework gives you the flexibility to focus on modeling and the ability to easily extend your problem to satisfy new constraints without the burden of maintaining an optimization algorithm. Generally speaking, there's no point in reinventing what's more than likely a worse wheel.
OR-Tools is a state-of-the-art OR framework. The CP-SAT solver has consistently won the MiniZinc Challenge the last few years, and the routing library is a great local search framework.
It's also worth mentioning the fantastic OscaR, a Scala CP/CBLS library which I used extensively in my master's thesis on vehicle routing. Gecode is another solid CP solver. MiniZinc is an interesting project that aims to create a unified language to interface with a lot of different solvers (CP, SAT, MIP).
It's a lot of fun to play with. I used it at work to solve a partition assignment problem (you have data preintersected by property sets and a boolean query that constrains various properties, which set of preintersections are smallest for executing the query and cover all possible respondent documents?). Unfortunately it was not quite fast enough for our use case (runtimes of 50-100ms on harder queries) but it was fascinating to see provably optimal solutions to the problem.
Did you use a wrapper or work directly with the solver? If you require runtimes that are that small the compilation time into the proto-model fed into the solver is likely a bottleneck.
Usually the time horizon for solving these kinds of problems is on the scale of minutes/hours so the tool chain isn't optimized for that time span.
I tried both ways: compiling into the proto model to use the newer solver, and also using the older solver directly. But the profiling in each case showed that the solving time dominated. I don't think it was an issue of what the library is optimized for so much as it was just too big of a problem to solve in too little time. Even my hand rolled solution that gives non-optimal answers can take 5-10ms on difficult queries.
We ended up requiring users to adjust their queries a little to make it easier to spot the targeted preintersections. This seemed like a decent trade-off considering the latency impact of the alternative.
Actually, we tried both exhaustive search and sufficient search (where we had a soft deadline and accepted the best solution reached within it). Even with this, the dark launch found queries in our stream with unacceptable delays. We were in contact with the team through the development process, so we at least had some reason to believe we were using best practices. And you're right that ultimately the CP solver wasn't the right tool for this job -- simplifying the problem and giving up some capabilities was the solution. We had hoped that a general solution would be feasible, as that would allow expressing some things efficiently which we couldn't in our existing setup. But it was not to be.
I have used the or-tools VRP (vehicle routing problem) solver to optimize the drawing order of a pen plotter [1]. The Python API is a bit ugly but once I got past that, I found it had enough flexibility (e.g. asymmetric costs, disjoint routes) to represent the problem neatly.
It depends on how good your hand-rolled implementation is.
Will it find the shortest path in a graph faster than your Dijkstra implementation? No, leave the simple problems to hand-rolled code. Can it run e.g. TSP faster than a dedicated TSP solver written by an expert? No, but it sure will beat what you or I could code up for TSP in a week. These things have great heuristics and can solve/approximate a very wide variety of NP-hard problems much better than you could yourself.
I've used the CP solver to plan sports tournament schedules (make sure everybody gets equal time on the streamed field, make sure that special team that arrives with the late train doesn't have the first game, nobody has to play two games in a row, etc.). I made the mistake of using the Python frontend (which is much worse documented than the C++ API), but otherwise, it was a good experience. Constraint programming feels a bit odd at first, though.
There are a lot of optimization software tools available. One of my favorites is GLPK, a free software implementation of a subset of AMPL (GNU MathProg). I created an employee scheduling framework for GLPK; it's listed as one of the examples in the download.
<disclaimer>I work in a company that deals exclusively with OR. This means that we might be overly specialized for this product.</disclaimer>
I see OR-Tools as an educational product. The tool is easy to start tinkering with and it has superb documentation - even for people not specialized in combinatorial optimization.
Can you link to your company? I've always thought it would be cool to work 100% on LP, MIP, and nonlinear models. Do y'all use CPLEX/Gurobi for the big stuff and Prolog for more discrete items?
I'm not completely sure either about what "OR-Tools" really is; I've considered it an API and packaging around different solvers.
I've used the LP solver [0] for real problems (not on a massive scale), and it worked very nicely. Basically a scheduling problem where we very quickly could give the user a pretty decent solution as a starting point that they could then customise. It dramatically shortened that process for the user, compared to the previous random/heuristic "solution" we gave.
For the part I have used, it is a common API over various solvers (free and commercial), plus some of Google's own solvers. It is pretty neat, but for licensing reasons it requires you to recompile it yourself to access all the free solvers.
I've utilized OR-Tools in conjunction with the CBC (COIN-OR Branch and Cut) solver to solve a mold distribution optimization problem for plastic injection molding machines for a client of ours. It was free and it got the job done very well. We used it from C# and it's pretty well documented. We tried lp_solve and it was a pain to use and super slow. Also, commercial options like Gurobi and IBM's CPLEX are super expensive for smaller projects.
Using solvers for Linear Programming (LP) and Mixed Integer Linear Programming (MILP) problems can be ridiculously effective and generalizes very widely.
It can't solve every problem, but it can solve many more than you think, and there's an active research area around using them for solving problems.
My sister is in charge of scheduling physicians for shifts and on call for a large hospital and she has a really hard time. She does it manually with the help of a spreadsheet. She engaged with some software company to help and after many hours of work she still uses the spreadsheet.
Part of the problem is that some senior physicians do whatever they want and aren't interested in playing fair. They'll call her the day before a holiday and tell her they're not coming in, and my sister is left trying to figure out the least shitty way to cover the gap.
Scheduling is one of these areas where the core problem is quite well-defined, but the devil is in the details - both static and dynamic. Bit like one module of ERP.
Static details include things like local labour laws, skill constraints, unions etc.
Dynamic details include things like allowing employees to state preferences, swap shifts (while respecting labour laws), call in sick etc. without having to reshuffle everything.
There probably are quite a few submarkets (geographies/industries/sizes) that don't have tools; whether you can make money is a different question.
Many small-medium businesses wouldn't want such a tool because it eliminates managerial leverage. I'm not being snide either. In many places I've worked the boss's tyrannical power over the schedule was a form of control.
Yup. Small businesses don't need auto-scheduling. Enterprises already have functional tools that are hard to displace (particularly due to internationalization, localization, and payroll needs). Mid-size businesses often need the customized deployments of enterprises, but don't have the budget.
I run a small business and we use WhenIWork to schedule our shifts. We couldn't be happier with the software, functionality, and customer support, although it's a tad pricey.
Google OR-tools is quite fun, although a little bit of a pain to set up and get started with, at least if you want to use the JVM and support different OSes.
Just to share a little anecdote about my use case: the department I work in consists of about 40 developers, and once a year we had a day together just to form new teams in a self-organized manner. Management wanted about a third of each team to move to a different team to spread and gather knowledge, but completely backed off during that day.
We had a special coach and scrum masters supporting us but still it was very exhausting and a lot of the developers didn't like it. The devs that showed motivation to change teams were more or less forced to go to the teams which were disliked the most (either because of their domain or the already present members).
So in the end there was not much rotation, and the whole thing was quite expensive, so this year it was cancelled. That's when I started to use Google OR-tools to write a little tool that suggests teams based on some constraints (size, at least one experienced dev / software architect per team) and an optimization function based on each dev's preference for a certain domain as well as their sympathy towards other devs.
When I announced it, it got almost no attention and was dismissed as "inhuman".
I'm glad to see the domain of constraint programming and operational research in general generate interest so I thought I would take the opportunity to share some links and resources I have found useful. Personally, I've spent most of my time on the routing (TSP, VRP, VRPTW) side of things using CP (OscaR-CP).
Discrete optimization and constraint programming can be a bit difficult to get into. There are two great free online courses on Coursera that do a good job at introducing the topic.
This is a great course by Peter Stuckey that uses MiniZinc.
The MiniZinc programming language is a great starting point that teaches you to think of solving discrete optimization problems only in terms of modelling the problem.
Håkan Kjellerstrand's blog is another great resource.
It contains hundreds of different models for a large variety of different solvers.
Chances are that he has created a model for your solver for a similar problem.
My personal favorite. An OR framework written entirely in Scala. As such it's an absolute joy to work with and the internals are (relatively speaking) easier to understand.
An extremely lightweight CP solver, designed for educational purposes to expose how a CP solver works without obfuscating performance optimizations found in research/production solvers.
Especially the work of Pierre Flener, Gustav Björdal and Justin Pearson, whom I had the pleasure to work with during my master's thesis on vehicle routing using CP.
My team used to have an absolutely wild script for oncall scheduling. People would enter their availability weeks in advance, and then we'd run a genetic algorithm, scoring the fitness of each schedule "offspring" by the number of availabilities violated and then making random mutations to the most promising. It would slam the CPU for several hours and still produce clearly ridiculous solutions. Brute force would have been better.
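For reference, the scheme described boils down to something like this (a toy elitist version; names and parameters are mine). On a toy instance it converges fine; at real team sizes the search space explodes, which is presumably why the real thing thrashed:

```python
import random

def fitness(schedule, availability):
    """Number of days assigned to someone who isn't available (lower is better)."""
    return sum(1 for day, person in enumerate(schedule)
               if day not in availability[person])

def evolve(people, availability, num_days, generations=200, pop_size=30, seed=0):
    rng = random.Random(seed)
    # Random initial population: one person per day.
    pop = [[rng.choice(people) for _ in range(num_days)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, availability))
        survivors = pop[: pop_size // 2]          # keep the most promising
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(num_days)] = rng.choice(people)  # one mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda s: fitness(s, availability))
```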
Maybe a little OT, but why aren't OR tools in general more focused on (or more "marketed" towards) GPU/TPU clouds, since those seem like exactly the problems they were made for? The focus seems to be on TF/PT/NNs rather than the real problems that OR-tools and solvers in general tackle.
Typical algorithms for combinatorial problems are heavily branching, so not very vectorizable. For example, in a branch-and-cut MIP solver you could run the LP relaxations on the GPU, but most of the logic lives in branching and cutting heuristics that are much harder to parallelize.
A lot of OR stuff can be parallelized, but it isn't a panacea. Those techniques don't necessarily get you the best answer sometimes. My company does some massive optimization problems and although we have lots of cores for all the simulations, any one is pretty much single core + a lot of RAM. It doesn't work well on GPU like some heuristic optimization problems can.
Edit: Someone smarter than me says it better below :)
For starters, CP and SAT are based on integer and boolean variables. Secondly, local search and systematic search are just nowhere near parallelizable to the degree required to see gains running on GPU/TPU.
I see their documentation also covers routing vehicles across multiple deliveries, with time windows.
Can someone who's tested this stuff out advise on how many deliveries it can handle at a time? The examples only list trivial problems, but that might just be to keep the examples short.
I can chime in. Googler here, who knows the OR team very well. They are solving EXTREMELY large problems for customers. I can't tell you who, and I can't tell what problems, but they are among the biggest OR problems you can imagine.
(No, I don't mean within Google, although they solve scheduling problems there as well.)
The articles are more a statement of ambition than an indication of actually "solving problems". I will follow whatever information gets out about this project with great interest.
The big challenge in making something like this actually work is not the fancy high-tech bits. Dealing with humdrum data integrations and torrential rates of change in rules and constraints, sourced from hundreds of mostly non-technical people across dozens of independent departments, is where it gets hard.
I am curious if some of the ITA folks landed (no pun intended) there, not because of the airline domain but because of expertise in dealing with complicated constraints.
Interesting project. I'd like to see it take into account decades of ambiguous union contracts, past practices, seniority, preferences, time off requests, training and certification levels, not to mention the various local, state and federal labor laws.
Scheduling isn't an easy problem to solve in the real world vs the lab, where there are far fewer constraints and consequences for getting it wrong. Kronos (kronos.com) is the recognized leader in this space, and even their software has a hard time keeping it all straight.
Can someone chime in on how this compares with the software that takes part in the yearly planning competition, e.g. Fast Downward? What subset of PDDL is supported?
The CP-SAT solver is for combinatorial optimization problems. These solvers are very different from the ones you'd use for planning problems à la PDDL.
Great find, thank you. I ended up playing with the tools they provide. I always loved solving optimization problems and this page is a gem; I did not know it existed.
WOW! There is still some interest in optimization? I'm shocked! For a long time the message was "Operations Research is dead."
Gee, I got started in OR (operations research) at FedEx, covered a lot from some of the best people in grad school, taught it as a prof in B-school, and applied it to US national security and some commercial problems.
Good to see the interest in some of the flight operations work for Lufthansa -- I was considering (no great progress) some of those problems at FedEx!
One commercial problem was allocating marketing resources for some banks. I was given the formulation, 0-1 integer linear programming with 600,000 variables and 40,000 constraints. I derived some non-linear duality theory, used the IBM OSL (Optimization Subroutine Library), and in 500 iterations found a feasible solution with the objective function guaranteed to be within 0.025% of optimality!
Another commercial problem was some more marketing optimization. It was also 0-1 integer linear programming, but surprisingly it was just a problem in least cost network flows. That is also linear programming, but there the simplex algorithm takes on a special form where a basis is a spanning tree of arcs in the network. A simplex pivot adds an arc to the tree, yielding a circuit, runs flow around the circuit in the direction that saves money, and then removes from the circuit one of the first arcs where the flow goes to zero, making a tree again. There is a cute variation of the algorithm due to W. Cunningham (one of my profs), based on his strongly feasible bases, that avoids cycling.
A super nice point about the simplex algorithm on least cost network flows is that if all the arc capacities are whole numbers and you start with an integer basic feasible solution, then the algorithm will automatically maintain an integer basic feasible solution and terminate with an integer solution. It is good to suspect that a lot of integer linear programming problems are actually just such network flow problems, or nearly so.
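The integrality point is easy to demonstrate in code. The sketch below uses the successive-shortest-paths method rather than network simplex, but it shows the same property: with integer capacities, every augmentation, and hence the final flow, is integral (the network in the comments is invented):

```python
def min_cost_flow(n, arcs, source, sink, amount):
    """Send `amount` units from source to sink at minimum cost.

    arcs: list of (u, v, capacity, unit_cost), all integers. Because
    capacities are integers, every push is an integer, so the final
    flow is integral -- no rounding needed.
    """
    INF = float("inf")
    cap = [[0] * n for _ in range(n)]   # residual capacities
    cost = [[0] * n for _ in range(n)]  # arc costs (reverse arcs negated)
    for u, v, c, w in arcs:
        cap[u][v] += c
        cost[u][v] = w
        cost[v][u] = -w
    total_cost = 0
    while amount > 0:
        # Bellman-Ford shortest path by cost in the residual graph
        # (handles the negative-cost reverse arcs).
        dist = [INF] * n
        parent = [-1] * n
        dist[source] = 0
        for _ in range(n - 1):
            for u in range(n):
                if dist[u] == INF:
                    continue
                for v in range(n):
                    if cap[u][v] > 0 and dist[u] + cost[u][v] < dist[v]:
                        dist[v] = dist[u] + cost[u][v]
                        parent[v] = u
        if dist[sink] == INF:
            raise ValueError("not enough capacity")
        # Bottleneck along the path, then augment.
        push, v = amount, sink
        while v != source:
            push = min(push, cap[parent[v]][v])
            v = parent[v]
        v = sink
        while v != source:
            u = parent[v]
            cap[u][v] -= push
            cap[v][u] += push
            v = u
        total_cost += push * dist[sink]
        amount -= push
    return total_cost

# Four nodes, arcs (u, v, capacity, cost): sending 3 units from 0 to 3
# routes 2 units along 0-2-3 (cost 3 each) and 1 along 0-1-3 (cost 4).
arcs = [(0, 1, 2, 1), (0, 2, 2, 2), (1, 3, 2, 3), (2, 3, 2, 1)]
print(min_cost_flow(4, arcs, 0, 3, 3))
```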
Once I used that stuff to say, for NASA, how to assign signals to satellite channels to minimize some signal interference -- it was an example of the bottleneck assignment problem which should be solvable very quickly by some post optimality tweaking with the network simplex algorithm.
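The bottleneck assignment problem mentioned here minimizes the worst single cost rather than the total. A minimal stdlib sketch (binary search on the interference threshold plus a bipartite matching feasibility test; this is a standard approach, not the post-optimality network simplex tweak described above):

```python
def bottleneck_assignment(cost):
    """Assign n signals to n channels minimizing the worst single
    interference value: binary search the threshold and test whether a
    perfect matching exists using only edges at or below it."""
    n = len(cost)
    values = sorted({c for row in cost for c in row})

    def has_perfect_matching(limit):
        match = [-1] * n               # channel -> signal currently assigned

        def try_assign(sig, seen):
            # Kuhn's augmenting-path step restricted to cheap-enough edges.
            for ch in range(n):
                if cost[sig][ch] <= limit and ch not in seen:
                    seen.add(ch)
                    if match[ch] == -1 or try_assign(match[ch], seen):
                        match[ch] = sig
                        return True
            return False

        return all(try_assign(s, set()) for s in range(n))

    lo, hi = 0, len(values) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if has_perfect_matching(values[mid]):
            hi = mid
        else:
            lo = mid + 1
    return values[lo]
```

The matrix entries stand in for pairwise interference levels; the return value is the best achievable worst case.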
I enjoyed using the IBM OSL, but IBM withdrew it from marketing in about 2003, maybe in favor of marketing C-PLEX. But it appears that at about 2004 IBM donated the source of the OSL (with lots of pieces, network flow, stochastic programming, etc.) to Riverware which might distribute the open source.
I got started with the OSL when I was in an AI group at IBM's Watson lab where the OSL was written. The Watson lab long had some high expertise in optimization -- Gomory (cutting planes, Gilmore-Gomory column generation, etc.) was head of the lab; there was Phil Wolfe (the Wolfe dual, etc.), Ellis Johnson (group theory and integer programming), etc.
So, in particular, if you still want to do some OR and optimization, get some quite well written software, and save some money, then you might look into getting all the IBM OSL code open source.
Of course optimization went back at least to Newton and his work to find the shape of a frictionless wire that would let a bead slide down in least time. That problem was a seed for the field of calculus of variations and later deterministic optimal control.
But the optimization of operations research was different, e.g., with less central use of calculus. Some of the surprising history is in a book review at
And that is not really the beginnings of such things since there was the Kantorovich work on the cutting stock problem.
In that book review you will see that linear programming (LP) had a launch with a big splash. Broadly it appeared that LP was so close to so many important practical problems -- military, commercial, maybe more -- that it was bound to become a big thing for a long time. And, one can check: IIRC several Nobel prizes in Economics were given for LP and other topics in optimization applied to economics.
And there are connections with two sided optimization, i.e., the theory of two person games (e.g., paper, scissors, rock) since the basic saddle point result, IIRC first proved by von Neumann, follows easily from the duality part of basic theory of LP. So, again LP looked good.
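For a concrete check of that saddle point: in rock-paper-scissors the uniform mixed strategy guarantees each side an expected payoff of at least 0 against any opponent strategy, so the game's value is 0 -- the primal and dual LP bounds meeting. A small stdlib verification:

```python
# Row player's payoff matrix for rock-paper-scissors
# (rows and columns ordered rock, paper, scissors; 1 = win, -1 = loss, 0 = tie).
A = [[ 0, -1,  1],
     [ 1,  0, -1],
     [-1,  1,  0]]

def guaranteed_value(mix, payoff):
    """Worst-case expected payoff of mixed strategy `mix`, taken over
    all of the opponent's pure strategies."""
    n = len(payoff)
    return min(sum(mix[i] * payoff[i][j] for i in range(n)) for j in range(n))

uniform = [1/3, 1/3, 1/3]
row_value = guaranteed_value(uniform, A)
# The column player faces the negated transpose of A.
col_value = guaranteed_value(uniform, [[-A[i][j] for i in range(3)] for j in range(3)])
```

Both guaranteed values come out to 0, so neither side can do better: that matching pair of bounds is exactly what LP duality delivers in general.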
Can't leave out the Hugh Everett work: He had a company around DC working on military problems -- best allocation of resources. Everett's theorem is a cute version of Lagrange multipliers and supposedly the foundation of his Lambda Corporation. Yes, that Everett, physics student of J. Wheeler where Everett worked up the many worlds interpretation of quantum mechanics.
Operations research had a more general big push during WWII, e.g., with some of the work of F. Dyson.
So, by the 1950s, it looked like operations research (OR) and its tools of optimization, especially LP, were to be big things.
Problems:
(1) Real Problems. What real problems were to be solved? Too commonly the problems were not really linear, or were linear but integer (which led to the question of P versus NP).
(2) Data. Too often it was a struggle to get the needed real data.
(3) Software. The IBM OSL I mentioned was good software, and so are C-PLEX, Gurobi, and more, but in the 1950s the software situation was grim.
(4) Computing. The 1950s computers, e.g., the IBM 709, were clumsy to use, slow, and expensive.
Then (1)-(4) too easily resulted in projects that fit reality poorly, ran over budget, and delivered later than promised. Reputation fell.
Another problem, no surprise, was that the real business problem commonly evolved faster than the optimization software could keep up.
But for selected and sufficiently valuable applications, e.g., running an oil refinery, by 1970 LP could be a big money maker.
There were claims that companies that mixed animal feed did well with LP in, say, the 1970s or so.
In the 1980s the situation on LP software and computing was good enough to permit some better -- project time and budget -- results on real problems.
By the 1980s, B-school accreditation called for a course in LP. So, a lot of B-school students got a good introduction to LP.
But there was very little, down to essentially no, instruction to turn out well skilled professionals in OR or LP. The college profs concentrated on publishing papers, e.g., about P versus NP, and were discouraged from working on real, non-academic problems from outside academia, i.e., were not like MDs in medical teaching/research hospitals who are also clinical professionals. So, the OR/LP workers who were available still needed some years of hands-on experience to become competent professionals for important practical applications.
The standard attitude in the C-suites was a special case of C-suite attitudes more generally -- don't spend money on chancy, risky stuff. Instead, if there was to be a development project, then time, budget, good ROI, etc. needed to be reliable. So, in the C-suites, LP and OR got reputations as "dead fields", flops. Or maybe no middle manager ever got fired for declining to pursue an OR/LP project. Neither the C-suite nor a middle manager wanted to have their career at risk from an OR/LP project they didn't understand.
The problem of P versus NP also hurt: The dream was that in business we should need only to formulate an optimization problem, type it into some software to get the problem ready for some solver, and then have the solver produce an optimal solution quickly and reliably. Well, with the P versus NP issue, we saw that, at least for now, such a solver was asking for too much.
[My experience was that it was usually necessary and effective to take the problems one by one and exploit special structure.]
Giving up that way was mostly a mistake. Suppose the real business problem is a $100 million project, and an optimal solution might save $15 million. The problem is a case of NP-complete, so saving all $15 million to the last penny, guaranteed, in reasonable time, is (in this particular case, not necessarily the worst case of the P versus NP issue) not reasonable -- but saving $14 million is reasonable. Still, commonly people regarded the challenge of P versus NP as showing that any optimization for the real problem was unreasonable, and believed that the $14 million was also beyond hope. People just gave up.
But in broad terms from 100,000 feet up, the original observation that business needs optimization was correct then and still, now. For such problems, a lot of what has long been known in LP and optimization more generally is uniquely powerful, relevant stuff.
So, there's money to be saved.
The main question is, do C-suites want to fund the relevant projects?
My guess is that the C-suites and middle managers have not changed much and, no, rarely want to fund the relevant projects.
I would imagine that this tool would be helpful if you connect it with a social graph, i.e. self-organizing Facebook groups which help each other take care of each other's children during the Corona outbreak, or schedule incident response teams in an organization, like: "Who has to work together? Are there any hidden connections between employees?"
The problem with employee schedules where employees work in shifts is communication. At one place it was done using Excel as I'm sure it's still done today in many organizations.
You always have two people trading shifts but never telling anyone. Then the one who was supposed to work doesn't show up because they forgot they traded shifts.
Are there any good projects that implement this in a user-friendly system that allows schedulers to choose from multiple options and make adjustments based on day-to-day conditions?
Algorithms like this look cool but can end up being too rigid for many real-world scenarios. You can’t just take the output and force everyone to follow that schedule.
A good iterative approach taken by many hourly jobs is to have some scheduling horizon, say 1 week in advance.
You run the scheduler software, then send each employee a text message saying "We're offering you a shift Monday at 9am for 8 hours. Accept?". After 24 hours, rerun the scheduler with accepted/denied shifts as constraints, and repeat till all shifts are filled.
Obviously you may not end up with optimal schedules when people start denying shifts, but you get pretty close.
When a particular shift gets denied by more than ~3 people, they normally give a ~30% pay boost for that shift to get it filled.
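That offer/rerun loop can be sketched in a few lines, with accepted and denied shifts carried forward as constraints. A minimal stdlib sketch (the `responds` callback is a hypothetical stand-in for the text-message reply, and for simplicity each employee takes at most one shift; a real system would allow several and add the pay-boost escalation):

```python
def fill_schedule(shifts, employees, responds):
    """Iterative shift offering: each round, offer every unfilled shift to
    one eligible employee, record the accept/deny reply, and rerun with
    those replies as constraints until every shift is filled or nobody
    is left to ask.

    responds(employee, shift) -> bool models the "Accept?" text reply."""
    assigned = {}                      # shift -> employee (accepted offers)
    denied = set()                     # (employee, shift) pairs declined
    while len(assigned) < len(shifts):
        progress = False
        for shift in shifts:
            if shift in assigned:
                continue
            taken = set(assigned.values())
            for emp in employees:
                if (emp, shift) in denied or emp in taken:
                    continue
                if responds(emp, shift):
                    assigned[shift] = emp
                else:
                    denied.add((emp, shift))
                progress = True        # either way we learned something
                break                  # one offer per shift per round
        if not progress:
            break                      # remaining shifts unfillable as-is
    return assigned
```

For example, if one employee declines a shift, the next round simply offers it to someone else, and the earlier acceptance stays fixed.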
One option for real world use is to take the rigid schedule, and tell people that if they don't like it they need to find swaps with other people. Make the full schedule visible to all so people can easily find swaps by hand. Always pay the original shift owner for the shift, and preferably, don't even keep records of who really did the shift (doing so opens you up to liability if employees swap in some way that is illegal, such as doing too few shifts to qualify for medicare/benefits/whatever).
Tell them that it is critical someone fills every one of their shifts, but it needn't be them. Employees will end up bartering, swapping, paying for others to fill their shift, etc. Overall, most shifts will get filled, and most people will be happy.
You run the small risk that an employee will deliberately use the ability to swap to work some illegal shift pattern, and then prosecute you over it. So far, it's never gone far.
Admitting you pressure your employees (not independent (sub)contractors) to pay off part of their wage to do their job? Offloading your logistical planning work to staff and not paying them for the time spent?
This is begging for a NLRB lawsuit and I hope someone wises up.
True, although from what I've seen it's actually rare for money to change hands. There seems to be quite an honor code - "I'll do your shift this time, as long as you remember that next time I want the day off.".
The only time people get upset is if someone trades a bunch of shifts, then quits. People who don't like that have the option of just not swapping shifts though.
I don't believe that's "the only time people get upset". Even if no one ever quits, this has got to be causing lots of drama. In most places I've worked, managers did management and hourly workers did not. When you want to abuse an hourly worker like this, you have to move them to a salary. Traditionally that salary would be a bump up from their 40-hour wage.
>Always pay the original shift owner for the shift, and preferably, don't even keep records or who really did the shift
Usually the sorts of places that need shift scheduling also require you to check in before your shift and will only pay for the hours that you worked. Often those places have some sort of biometric check-in mechanism, so your suggestion doesn't seem very practical in light of that.
Really nice. OR is not my field, and when I must dabble, in the past, I always used MiniZinc constraint solving language (sometimes with Python bindings).
On the page above the linked page, on Google's OR tools, there is mention that their tools won four gold medals in the MiniZinc competitions.
"OR-Tools is open source software for combinatorial optimization, which seeks to find the best solution to a problem out of a very large set of possible solutions."
No it doesn't. It's an extremely basic model to introduce the CP-SAT solver to people with little to no experience in scheduling problems or CP. Pairing up people based on such criteria requires turning the problem into a MOCO or incorporating it with the current objective function.
It's an example model of how to use the or-tools CP-SAT solver to solve employee scheduling problems. It's part of the or-tools library, which is a framework for solving scheduling, routing, and other NP-hard discrete optimization problems.
https://developers.google.com/optimization/introduction/over...
> Slurm is the workload manager on about 60% of the TOP500 supercomputers.[1]
Slurm uses a best fit algorithm based on Hilbert curve scheduling or fat tree network topology in order to optimize locality of task assignments on parallel computers.[2]
> [...] the Hilbert curve scheduling method turns a multidimensional task allocation problem into a one-dimensional space filling problem using Hilbert curves, assigning related tasks to locations with higher levels of proximity.[1] Other space filling curves may also be used in various computing applications for similar purposes.[2]
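For illustration, here is the standard iterative index-to-coordinate conversion for a Hilbert curve (the textbook construction, not Slurm's implementation). The property that makes it useful for scheduling is that consecutive indices always land on adjacent grid cells, so tasks with nearby ranks get nearby nodes:

```python
def d2xy(n, d):
    """Map index d along the Hilbert curve to (x, y) in an n-by-n grid,
    where n is a power of two."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                    # reflect/rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```

Walking d from 0 to n*n - 1 visits every cell exactly once, each step moving to a cell at Manhattan distance 1, which is the locality the quoted passage refers to.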