This is at the very least a clever and involved optimization. Let me tell you the story of my similar 10h-to-10min fix. We had this cronjob that was supposed to run every hour, detect all changes to the customers table and its related tables, and sync them with the marketing SaaS tool. It was written in Rails and took 10 minutes at first. As we grew, the time taken by this job also grew linearly, to the point where it took 10 hours and we could only sync the data twice a day.
As it turned out, the job was loading each user individually and then scanning for changes. We replaced that with a somewhat complex SQL query, and MySQL ran it in a few minutes. Add in the additional processing and the whole job now took 10 minutes.
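A minimal sketch of the shape of that rewrite, with made-up table and column names (the real query was considerably hairier): instead of loading every user and diffing in Ruby, let the database find the changed rows in one set-based pass.

    -- Hypothetical schema: customers(updated_at), orders(customer_id, created_at).
    SET @last_sync_at = '2024-05-01 00:00:00';  -- timestamp of the last successful sync

    -- One pass over the data instead of one query per user.
    SELECT c.id, c.email, c.plan, o.last_order_at
    FROM customers c
    LEFT JOIN (
        SELECT customer_id, MAX(created_at) AS last_order_at
        FROM orders
        GROUP BY customer_id
    ) o ON o.customer_id = c.id
    WHERE c.updated_at > @last_sync_at
       OR o.last_order_at > @last_sync_at;

Everything that changed since the last sync comes back in a single result set, ready to be batched into the marketing tool's API.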
This is when being the guy who knows a few database tricks is fun.
We had one reporting query that ran for 13 hours or so, in a normal web request. Not very successful, but it tried its best. Running this through EXPLAIN while thinking of Daleks revealed some dependent subqueries. Those are the devil in MySQL because they are evaluated per row of the outer query, which adds up very quickly. Eventually we wrangjangled the dependency into a terrifying GROUP BY such that the dependent query could be precomputed and the outer query just joins with the result. Boom, runtime down to 2 minutes.
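The general shape, with invented tables (the real thing was far uglier):

    -- Dependent subquery: MySQL re-evaluates this for every row of users.
    SELECT u.id,
           (SELECT MAX(e.created_at)
            FROM events e
            WHERE e.user_id = u.id) AS last_event
    FROM users u;

    -- Precompute it once with a GROUP BY, then join with the result.
    -- LEFT JOIN keeps users with no events, matching the NULL the
    -- subquery version would have produced.
    SELECT u.id, le.last_event
    FROM users u
    LEFT JOIN (
        SELECT user_id, MAX(created_at) AS last_event
        FROM events
        GROUP BY user_id
    ) le ON le.user_id = u.id;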
In another case everyone panicked ... until one of my dudes added an index - which took about a day to build - and slashed runtimes from 18 hours to 12 minutes. Learning to optimize queries on your RDBMS of choice is very powerful.
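For anyone who hasn't seen it, the fix is often this small (hypothetical table and column):

    -- Building the index on a huge table takes a while, but only once.
    CREATE INDEX idx_orders_customer_id ON orders (customer_id);

    -- EXPLAIN confirms whether the planner actually uses it.
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;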
Saw one of these recently in a query that was ported from PostgreSQL to MySQL. Same fix: a join with a table expression took it from 6+ hours to seconds. MySQL will sometimes consider a subquery to be 'dependent' even when there are actually no dependent terms and it could have been evaluated once.
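The classic trigger is an IN subquery: older MySQL versions rewrote it into a correlated EXISTS and EXPLAIN would report DEPENDENT SUBQUERY even though nothing in it references the outer query. A made-up example of the workaround:

    -- EXPLAIN may flag this as a DEPENDENT SUBQUERY and re-run it per row,
    -- despite the subquery not touching the outer query at all.
    SELECT * FROM orders
    WHERE customer_id IN (SELECT id FROM customers WHERE region = 'EU');

    -- Joining against a derived table forces it to be materialized once.
    SELECT o.*
    FROM orders o
    JOIN (SELECT id FROM customers WHERE region = 'EU') eu
      ON o.customer_id = eu.id;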
My first big win came as an intern, when I reduced the big-O complexity of an algorithm written by a (very smart) statistician for our paper from O(n^4) to O(n^2). This was driven simply by realizing that a couple of data lookups in an inner loop were themselves O(n^2), and so could be precomputed outside the loop.
If I recall correctly, that raised the "feasible" value of K from about 4 to about 14 on the hardware at the time, which was more than enough for our purposes. I wanted to keep optimizing, but my boss correctly reminded me that shipping the paper was more important.
Or more likely, the classic “fetch an entire table from the database and work on it locally” problem that seems endemic in code written by people who don’t understand the capabilities of SQL.
Works great in dev with a local SQLite instance and ten rows. Not so great in prod.
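A made-up but representative example: the app-side version drags every row across the wire and aggregates in a loop, while the database can do the filtering and aggregation itself.

    -- The anti-pattern, in spirit:
    --   SELECT * FROM orders;   -- then filter and sum in application code
    -- What the database was built to do instead:
    SELECT customer_id, SUM(total) AS lifetime_value
    FROM orders
    WHERE status = 'paid'
    GROUP BY customer_id;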