
Geoffrey, thanks for sharing your story with us. OR sure is a weird niche. Good-enough algorithms for solving these problems have existed for decades [1], but we still see low adoption, and your estimate that 95% of companies don't optimize their operations rings true.

Similarly to you, I spent a short while trying to sell VRP optimization with an API business model, and what dawned on me was that most companies do not have the necessary in-house expertise to integrate optimization into their existing tools even if the API is well designed. There also doesn't seem to be any urgency to do so, and most logistics companies just offload their inefficiencies onto their customers. Your routes are not efficient? No problem, just bill more.

Some years ago I heard about a Swedish team of optimization experts who got so fed up with selling optimization to unwilling transportation companies that they founded their own—just to mop the floor with their ineffective competition. :D

I agree that ease of use is key here. In my PhD dissertation, I tried to address the issue by adding self-adaptivity within transportation management systems, mostly through automatic parameter tuning and algorithm selection. Such approaches remove some amount of fiddling when the optimization tool is adapted to a new optimization problem. Worth a look, perhaps, if you're interested.

Many thanks again for the interesting article and all the best with Timefold.

[1] E.g., already by the '90s, we had quite capable algorithms for the VRP. I have open-sourced a library of classical VRP algorithms called VeRyPy, containing simple and not-so-simple heuristic algorithms. It has enjoyed modest success among VRP researchers and practitioners. Nowhere near the success of OptaPlanner, but also, the purpose is different—OptaPlanner is production-ready, whereas VeRyPy is more geared towards education and research purposes.
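For readers who haven't met those classical heuristics, here is a minimal sketch of the Clarke-Wright savings idea for the capacitated VRP. It's a simplified variant (no route reversal) written purely for illustration, not VeRyPy's actual API:

    import itertools

    def clarke_wright(dist, demand, capacity):
        """Simplified parallel savings heuristic for the capacitated VRP.
        dist: symmetric distance matrix, index 0 is the depot.
        demand: demand per node (demand[0] == 0).
        capacity: vehicle capacity.
        Returns routes as lists of customer indices (depot omitted)."""
        n = len(dist)
        routes = {i: [i] for i in range(1, n)}    # route id -> customer sequence
        route_of = {i: i for i in range(1, n)}    # customer -> route id
        load = {i: demand[i] for i in range(1, n)}

        # Saving of serving i and j on one route instead of two separate ones.
        savings = sorted(
            ((dist[0][i] + dist[0][j] - dist[i][j], i, j)
             for i, j in itertools.combinations(range(1, n), 2)),
            reverse=True)

        for _, i, j in savings:
            ri, rj = route_of[i], route_of[j]
            if ri == rj or load[ri] + load[rj] > capacity:
                continue
            # Merge only when i and j are exposed route endpoints.
            if routes[ri][-1] == i and routes[rj][0] == j:
                src, dst = rj, ri
            elif routes[rj][-1] == j and routes[ri][0] == i:
                src, dst = ri, rj
            else:
                continue
            routes[dst].extend(routes[src])
            load[dst] += load[src]
            for c in routes[src]:
                route_of[c] = dst
            del routes[src], load[src]

        return list(routes.values())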


If nobody in a whole industry does something, perhaps it's because it's not a differentiating factor. Do you have a link to the Swedish company? It just sounds like the typical "software engineers know better" story, where they go bankrupt after a few years because it turns out the important bits are in other parts of the business.


I am interested in that company as well.

A recent example of "optimization experts who know better" in my local area involves garbage trucks. An expensive company was hired to optimize the routes, yet they apparently forgot some real-world factors: with snow and ice not all the trucks can go everywhere, some roads are too narrow, and so on. The result was chaos, and the workers successfully fought to be allowed to keep influencing their own working schedules; they managed fine without the external experts.

People do inefficient things, and they keep doing them for years, thinking it is a universal law. But usually an outside expert will still not know better how things should be done.


The company is probably Airmee; they have a delivery scheduled at my door in an hour.

https://www.kth.se/en/om/innovation/alumner/airmee-levererar...

https://www.airmee.com/en/


Yes, this story is far too soothing to the software engineering ego to actually be true! Generally, real-world problems are messy, hard, and full of human problems. It's a little aggravating when 'software engineers know everything' stories like this are taken at face value and reinforce that mistaken idea.


True stories like this happen, but generally only when a domain expert on the business side was incorporated from the beginning—and that may not be mentioned by the software side.


There is truth in this.

Without involving the human planners, the project won't succeed. We're empowering them, with PlanningAI assistance, so they can focus on what the planning must do, and for whom, instead of how it gets there. So when the plan goes off the rails - 5 people call in sick - they can get it back on the rails in seconds with Real-Time Planning.

The engineering work is only half the work, or less. Fitting the technology into the human processes is another big chunk. Half of my videos on YouTube deal with such cases: Continuous Planning, Real-Time Planning, Non-disruptive Replanning, Pinning, ... Not code, not technology, but design patterns.

And even then, this is far from 100% of the solution. Technology and education are still not enough.

That human planner with 30 years of business knowledge in his/her head is still critical: he/she will always need to tweak, oversee, and sometimes overrule the planning solution in production.


Sorry to disappoint you and the few other curious ones: this was some 10 years ago, and details such as the name of the company have fallen out of my overfilled brain long ago. While the person who told me the story is a reputable fellow, I must admit they are still a secondhand source. Being a Finn, I tend to trust people and take their word for it, and hence I don't recall doing any research to fact-check.

Still, it _is_ a good story, and plausible given what I saw of the state of the industry back then. Your run-of-the-mill last-mile courier services were really badly organized, the problems were about as simple as they get from the mathematics and optimization side, and the ability to build a robust optimizing transportation management system would've given a serious competitive edge.

(edit: removed repeated words)


I, too, would like the reference.

However, I can tell you that this kind of thing really does happen. One of the injection molding conferences I attended had several presentations where companies were contemplating building more injection molding lines, but instead hired consultants (of course, rolls eyes) to re-optimize their injection mold programs. After tweaking all the parameters to speed up injection rates, it turned out the company had about 50% more capacity than they thought.

Now, I suspect this was shit management more than anything. I strongly suspect that the people on the line told their superiors that they needed to fix the programs and got ignored.

However, you couldn't sell anything to the management chain until they were staring at having to spend cash. Selling people on "saving money" is always super difficult as it requires them to change something that is nominally "working". Selling people on "not having to spend money they are staring at imminently" is always way easier. Obviously the easiest sell is "spend money to make a lot more money", but that doesn't happen all that often.


I always assumed it was due to (a lack of) scale for most companies? A 1% improvement in container planning at Maersk is worth hundreds of millions of dollars per year, so they can have an entire team dedicated to it. Meanwhile, at the other end of the scale, a 1% improvement in job scheduling at $DAYJOB probably wouldn't even pay for its own ongoing costs, let alone the cost of setting it up.


A 10% productivity gain is a lot for any company, regardless of whether they operate a fleet of 50 or 50 000 vehicles.

However, the cost and risk to achieve that productivity gain is typically huge. Many Operations Research projects fail. And when they do, they are very expensive failures. "Managers getting fired" expensive.

With our technology, we're making OR projects easy and quick to put into production.


I did OR in my Bachelor's thesis, and then in my first project at my first job. It worked like magic, even though the business side was sure it wouldn't. It's still one of the projects I always talk about: it was for organizing screens onto pallets onto trucks. But I haven't done anything with OR since. Even for a few early AI products 8 years ago, in the end they didn't go for it. I think you are spot on about the expertise, even though quite a lot of businesspeople heard about it at university.


Thanks for sharing your story too, Yorak.

Yes, having the in-house expertise to integrate optimization into their existing tools is hard. Especially if they use low-level solver APIs (and especially if it's math equations).

We're working on making that easier with high-level REST APIs (Timefold Field Service Routing, etc.), and with education (Timefold Academy), by creating videos and articles on how to integrate real-time planning, continuous planning, labor law constraints, fairness, cost reduction, etc.

See https://www.youtube.com/@timefold/videos


Symbolic math equations aren't a low-level solver API; they're the high-level interface. A low-level interface expects you to provide the optimization problem as raw matrices.

What you're talking about is known as a problem reduction.
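For anyone unfamiliar with the distinction, here is a minimal sketch of what such a low-level, matrices-in interface looks like, using SciPy's linprog; the numbers are made up purely for illustration:

    from scipy.optimize import linprog

    # Low-level interface: the whole problem is handed over as raw
    # coefficient matrices and vectors, with no symbolic modeling layer.
    #   maximize 4x + 3y  ->  minimize -4x - 3y
    #   subject to 2x + y <= 10,  x + 3y <= 15,  x, y >= 0
    c = [-4.0, -3.0]
    A_ub = [[2.0, 1.0],
            [1.0, 3.0]]
    b_ub = [10.0, 15.0]

    result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(result.x)  # optimal (x, y), here [3, 4]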


I would argue that inputting a loadBalance(sum(shift.duration)) function per employee is a higher level abstraction than inputting a quadratic math equation to accomplish the same thing.

Think Java/C++/Python vs Assembly.

Ironically, talking about problem reduction... most math-equation-based solvers can't scale quadratic equations, so users "relax" the business constraints (to the point that projects fail in production).
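To make the comparison concrete, here is a rough sketch of what the explicit quadratic formulation looks like in plain Python; the high-level loadBalance call is only paraphrased in the final comment, it is not Timefold's literal API:

    # Explicit quadratic objective a math-programming solver would need:
    # penalize uneven workloads by summing the squared total duration
    # per employee (minimizing this pushes the loads towards each other).
    def quadratic_imbalance(shifts):
        totals = {}
        for employee, duration in shifts:   # shift = (employee, duration)
            totals[employee] = totals.get(employee, 0) + duration
        return sum(t * t for t in totals.values())

    print(quadratic_imbalance([("ann", 8), ("ann", 8), ("beth", 4)]))  # 272

    # Versus the high-level abstraction described above (paraphrased):
    #   loadBalance(sum(shift.duration)) per employee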


I like the anecdote about the Swedish team of optimization experts! Did they succeed with their transportation company?

Edit: vasco was faster with their question haha


At least to me, the argument does not hold water. My fear is that, humans being human, an imaginary simulated coral reef or future holodeck-like experiences will not save the actual coral reefs. They might even hasten their demise (as fewer people would be interested in the "ugly" reality of the reefs). The tech is cool, but the environmental angle, at least here, feels like an afterthought.


I agree and have been saying for a while that an AI you control and run (be it on your own hardware or on a rented one) will be the Linux of this generation. There is no other way to retain the freedom of information processing.


Similarly, I think an open model running on local hardware will be a must-have component in any web browser of the future. Browsing a web full of bots on your own will be a big no-no, like walking without a mask during COVID. And it must be local for reasons of privacy and control: it will be like your own brain, something you want physical possession of.


I kinda think the opposite: blockchain's true use case is to basically turn the entire internet into one giant botnet that's actually an AI hive mind of processing and storage power. For AI to thrive it needs a shit ton of GPUs AND storage for the training models. If people rent out their desktops for cryptocurrency and discounted access to the AI tools, then it'll bring down costs for everyone and perhaps at least affect income inequality on a small scale.

Most of the crypto I've seen so far seems like grifters/scams/etc., but this is one use case I could see working.


Nice. The resulting graphics style reminds me of A Short Hike, which is a family-friendly, bite-sized 3D adventure and exploration game. I'm not affiliated in any way, just enjoyed the game with my kid. It has a nice balance of story, challenges, and some platformer action. https://ashorthike.com/

Edit: IIRC the shader A Short Hike used was discussed in the GDC talk by the game's author: https://www.youtube.com/watch?v=ZW8gWgpptI8


Yes, rather than the exchange rate, I'd be more worried about corvids carrying trash from the landfill to exchange it for food. Or, in the case of cigarette stubs, emptying well-contained ashtrays onto the tables and floors just to get to those valuable stubs.


This is more likely actually. Corvids tend to lean towards gaming the system in other ways, like breaking into the trash store and re-dumping the trash repeatedly (like the Indians did with snakes during the British Raj and the Vietnamese did with rats during French rule).


I blame being an operations researcher for always (pessimistically) first and foremost seeing how a system can be gamed. You have to think very, very carefully about what the objective function is and which kinds of undesired solutions need to be forbidden using constraints.


Guess Corvids are operations researchers then. :)

Once they figure out what the reward system is, they often try to figure out how it can be gamed.


Yes, obviously this is possible. The extent to which it's actually a problem is easy to test. Where I live, it's already necessary to tightly close trash can lids to keep crows and raccoons from rooting through them for food.


Aka the Chinese room argument. However, I'm not so sure we people are much more than just pattern-matching machines. When I start to talk (or write, as I'm doing now), the words kind of just flow out. I can make the argument that I understand the "real" world, but do I really?


> I'm not so sure we people are much more than just pattern-matching machines

Yes, that's what leads to multiple definitions of what knowledge really is. Yann LeCun believes we are more than just that, hence why he says GPT-3 would have no knowledge.


What is most impressive here, and which I think other commenters in the thread have not pointed out, is its ability to have an inner dialogue (monologue?) with itself in this sample. For me, that property of the generated text (or should I write, thought process) gave me the chills. Now, given this, AGI seems quite a few steps closer indeed.


This is an interesting and promising take on the problem. Despite being introduced already in the '60s, delivery route optimization is still not used as widely as it should be. I'd argue that this is mostly due to the complexities and challenges inherent in adapting such optimization technology to real-world delivery route planning tasks, and, on the other hand, the high cost and low availability of operations researchers with a relevant software engineering background.

In my recent PhD dissertation I tried to address the challenges from a different angle: I proposed using machine learning to predict the most suitable heuristic algorithm and its parameter values for a specific logistics planning problem. This way the developer or the user does not need to worry about the details of the optimization solver. The book is freely available for download from: https://jyx.jyu.fi/handle/123456789/65790
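As a rough illustration of the per-instance algorithm selection idea (not the actual pipeline from the dissertation), one can train a classifier that maps cheap-to-compute instance features to the heuristic that performed best on similar instances; the feature names and labels below are made up:

    from sklearn.ensemble import RandomForestClassifier

    # Each row describes one VRP instance with illustrative features:
    # [number of customers, demand variance, depot centrality, clustering]
    X_train = [
        [50,  0.2, 0.9, 0.1],
        [500, 1.5, 0.3, 0.8],
        [120, 0.7, 0.5, 0.4],
    ]
    # Label: which heuristic won on that instance in earlier benchmark runs.
    y_train = ["savings", "sweep", "savings"]

    selector = RandomForestClassifier(n_estimators=100, random_state=0)
    selector.fit(X_train, y_train)

    # Predict the most promising heuristic (and, by extension, its
    # parameterization) for a new, unseen instance.
    print(selector.predict([[300, 1.1, 0.4, 0.7]]))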


Thank you so much for sharing this! I will read it.

I suspect there's a lot of potential for these sorts of techniques in production systems, especially combined with decision diagrams. We've been looking at DDs partly because they are capable of optimizing problems without a lot of restrictions on problem type or representation, and also because they seem like a very nice metaheuristic structure.


I've personally not seen this happen. Instead, I've seen many oversold ML/AI tools that offer no advantages over using open and freely available tooling that the data scientist (with or without a PhD) is already familiar with. I know this must be frustrating to ML solution vendors, but as with any product, the value proposition has to be there and easy for the domain expert to see. And the value has to be great, because the downside is the vendor lock-in of a proprietary solution. Hence, ask yourself: given a greenfield real-world ML project, would you use your tool (with the same learning algorithms, data manipulation methods, etc. under the surface as everybody else), or resort to some battle-proven free and open stack?


Very true; my mind immediately wandered to Random Forests: https://en.wikipedia.org/wiki/Random_forest

