Honestly I'd make the case for writing a simple python script for this kind of thing.
`requests.get(url)` is a lot harder to mis-type as `requests.delete(url)`.
At $dayjob we would sometimes do this sort of one-off request using Django ORM queries in the production shell, which could in principle do catastrophic things like delete the whole dataset if you typed `qs.delete()`. But if you write a one-off 10-line script, and have someone review the code, then you're much less likely to make this sort of "mis-click" error.
Obviously you need to find the right balance of safety rails vs. moving fast. It might not be a good return on investment to turn the slightly-risky daily 15-min ask into a safe 5-hour task. But I think with the right level of tooling you can make it into a 30 min task that you run/test in staging, and then execute in production by copy/pasting (rather than deploying a new release).
I would say that the author did well by having a copilot; that's the other practice we used to avoid errors. But a copilot looking at a complex UI like Postman is much less helpful than one looking at a small bit of code.
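As a concrete illustration of the kind of one-off script being described, here is a minimal sketch. The endpoint, base URL, and record IDs are invented for the example; the point is that the exact URLs and the HTTP verb are spelled out in reviewable code, with a dry-run default so the reviewer can see what would happen before anything fires:

```python
# Hypothetical one-off cleanup script: everything it will touch is visible
# in the diff, which is what makes the PR review meaningful.

BASE_URL = "https://staging.example.com/api"  # swap to prod only after review
RECORD_IDS = [101, 102, 103]  # made-up records to clean up


def build_urls(base, ids):
    """Spell out the full URL for each record before any request is made."""
    return [f"{base}/records/{i}" for i in ids]


def main(dry_run=True):
    for url in build_urls(BASE_URL, RECORD_IDS):
        if dry_run:
            # Reviewable output, no side effects.
            print(f"would DELETE {url}")
        else:
            # Only imported/used on a real run; the verb is right there in code.
            import requests
            resp = requests.delete(url, timeout=10)
            resp.raise_for_status()


if __name__ == "__main__":
    main(dry_run=True)
```

Running it in staging first, then flipping `dry_run` (or the base URL) after a second pair of eyes has looked at it, is the whole workflow.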
I've written Python scripts like that as well, using `requests` delete and post actions. I went a step further and hard-coded the URLs, too.
And because the main branch was protected, getting the script to run meant being forced to do a PR review.
But it made things easier to not fuck up. The URL that's being modified is right there, in the code. The action being performed is a delete, it's in the code.
Makes things slightly more inconvenient but adds extra safety checks.
It's funny when the safety checks aren't enough, though. Back in the olden days, I had to drop thousands of records from a prod database because they were poisoning some charts.
Well, I was smart, you see. I first did a SELECT on the records I expected to be safe. I looked at the results, everything is okay, I'm in the right db and this is the right table. Then I did a SELECT of the records I wanted to delete. Only a couple thousand records, everything looks good.
Now all I have to do is press the up arrow key once, modify the SELECT to DELETE, and run the command.
So I pressed the up arrow key, but nothing happened. I must not have pressed it hard enough to register, so I pressed it again and it worked. I see the SELECT command, change it to DELETE, run it, aaaand it deleted hundreds of thousands of records.
What must've happened is there was a bit of lag and all my up arrows registered at once, taking me back to the select command where I looked at the good records.
Obviously I wasn't being safe enough, because I should've double-checked the DELETE command I was about to run. But I thought I was being safe enough.
I had done a backup before all that, so everything was fine. But I'm still traumatized like 10 years later. I quadruple-check commands I'm about to run that will affect things in a major way.
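One way to make that check mechanical instead of manual (a sketch, using sqlite3 and an invented table for the demo) is to count with a WHERE clause, delete with the *same* WHERE clause inside a transaction, and roll back unless the delete hit exactly the expected rows:

```python
# Sketch: SELECT-count and DELETE share one WHERE clause, so the arrow-key
# mishap from the story can't swap in a different predicate between them.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (id INTEGER PRIMARY KEY, poisoned INTEGER)")
conn.executemany("INSERT INTO metrics (poisoned) VALUES (?)",
                 [(1,), (1,), (0,), (0,), (0,)])
conn.commit()


def safe_delete(conn, where_clause, params):
    """Delete matching rows, rolling back if the count differs from the SELECT."""
    (expected,) = conn.execute(
        f"SELECT COUNT(*) FROM metrics WHERE {where_clause}", params
    ).fetchone()
    cur = conn.execute(f"DELETE FROM metrics WHERE {where_clause}", params)
    if cur.rowcount != expected:
        conn.rollback()  # something changed under us: undo the delete
        raise RuntimeError(f"expected {expected} rows, hit {cur.rowcount}")
    conn.commit()
    return expected


deleted = safe_delete(conn, "poisoned = ?", (1,))
print(deleted)  # the 2 poisoned rows; the 3 clean ones survive
```

It doesn't replace a backup, but it turns "I thought I was looking at the right query" into an assertion the database enforces.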
For tasks like these I have either an input prompt ("is this testing or production?", "write 'yes' to confirm") or an explicit flag like `--production`.
DataGrip also allows you to color-code the entire interface of a connection, so even my cowboy-access read-write connection is brightly red and hard to miss.
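The flag-plus-prompt combination can be sketched in a few lines of `argparse`; the flag name and prompt wording here are just examples of the pattern, not any particular tool:

```python
# Sketch: production is opt-in via an explicit flag, and even then the script
# demands a typed "yes" before proceeding.
import argparse


def confirm_target(args, ask=input):
    """Return the environment to run against, or abort if not confirmed."""
    if not args.production:
        return "staging"
    answer = ask("This will modify PRODUCTION. Type 'yes' to continue: ")
    if answer.strip() != "yes":
        raise SystemExit("aborted: production run not confirmed")
    return "production"


parser = argparse.ArgumentParser()
parser.add_argument("--production", action="store_true",
                    help="actually run against production (off by default)")

print(confirm_target(parser.parse_args([])))  # no flag: staging
print(confirm_target(parser.parse_args(["--production"]),
                     ask=lambda _: "yes"))    # flag + typed yes: production
```

The nice property is that the dangerous path requires two deliberate acts (passing the flag, then typing "yes"), so a stray up-arrow or mis-click can't reach production on its own.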