Being one click away from a DELETE vs a GET sounds like a serious foot-gun that I would wrap a check around. “Are you sure? This operation will delete 17M entries.”



This is the Postman HTTP method selection dropdown that you can see on the screenshots on this page (“GET”): https://learning.postman.com/docs/sending-requests/requests/...

Postman doesn’t know that sending a single DELETE request to that URL will delete 17 million records.

Arguably, REST interfaces shouldn’t allow deleting an entire collection with a single parameterless DELETE request.


I work in an environment where Postman is an administrative and testing tool for our developers, and it worries me.

How do you produce repeatable test results when you're just passing around Postman configurations (they don't want to commit them to GitHub in case there are embedded credentials)? How do you know your services are configured correctly?


I agree. I feel like deleting a collection should require at least two requests, one to get a deletion authorization token, and another to perform the deletion with the token. The RESTful equivalent of "are you sure?". It's terrifying to think millions of records are just one DELETE request away from oblivion.
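
A rough sketch of that two-request flow in Python with Flask (the endpoint names, header, and token lifetime are all invented for illustration):

```python
import secrets
import time

from flask import Flask, abort, request

app = Flask(__name__)

# token -> expiry; a real service would store these somewhere durable
pending_deletions = {}
TOKEN_TTL_SECONDS = 60

@app.post("/collections/<name>/deletion-token")
def issue_deletion_token(name):
    # Request 1: ask for permission to delete.
    token = secrets.token_urlsafe(16)
    pending_deletions[token] = time.time() + TOKEN_TTL_SECONDS
    return {"deletion_token": token, "expires_in": TOKEN_TTL_SECONDS}

@app.delete("/collections/<name>")
def delete_collection(name):
    # Request 2: the actual delete, which fails without a fresh token.
    token = request.headers.get("X-Deletion-Token", "")
    expiry = pending_deletions.pop(token, 0)
    if time.time() > expiry:
        abort(403, description="Missing or expired deletion token")
    # ... actually delete the collection here ...
    return {"deleted": name}
```

A mistyped or mis-clicked DELETE then fails harmlessly with a 403 instead of wiping the collection.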


I'm going to remember this approach!


I'd be seriously scared of putting any production credentials with write access into my Postman/Insomnia/whatever. Those tools are meant for quickly experimenting with requests, they don't have any safety barriers.


I mean, it shouldn't really be very easy to even get a read-write token to a production database, unless you're a correctly-launched instance of a publisher service. This screams to me that they're ignorant of, and probably very sloppy with, access control up and down their stack.


This is actually discussed in the article. Basically, at least in older versions of Elasticsearch, you didn't get granular permissions without X-Pack. Either you had access or you didn't.


This is when you put a gateway-type layer on top of a datastore that enforces your own company-specific authn/authz.

In this case, the datastore uses a REST API, so that should be fairly easy to implement. You could even do it in Nginx or Envoy.
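
As a toy illustration, such a gateway can be little more than a filtering proxy. Here's a sketch in Python with Flask and requests (the upstream URL, environment variables, and header name are my assumptions, not anyone's real setup):

```python
import os

import requests
from flask import Flask, Response, abort, request

app = Flask(__name__)

# Assumptions for the sketch: upstream address and admin secret
# come from the environment.
UPSTREAM = os.environ.get("UPSTREAM_URL", "http://localhost:9200")
ADMIN_TOKEN = os.environ["GATEWAY_ADMIN_TOKEN"]

@app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def proxy(path):
    if request.method == "DELETE":
        # Company-specific authz: deletes require an admin token...
        if request.headers.get("X-Admin-Token") != ADMIN_TOKEN:
            abort(403, description="Deletes require an admin token")
        # ...and deleting a whole index (a bare top-level path) is
        # refused outright, no matter who asks.
        if "/" not in path:
            abort(400, description="Refusing to delete an entire index")
    upstream = requests.request(
        request.method,
        f"{UPSTREAM}/{path}",
        params=request.args,
        data=request.get_data(),
        headers={"Content-Type": request.content_type},  # None is dropped
    )
    return Response(upstream.content, status=upstream.status_code)
```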


Honestly I'd make the case for writing a simple Python script for this kind of thing.

`requests.get(url)` is a lot harder to mis-type as `requests.delete(url)`.
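
Something like this, say (the URL is made up; the shape loosely follows a document API, but treat the details as illustrative):

```python
#!/usr/bin/env python3
"""One-off: remove a single stale document. Meant to be reviewed, then run."""
import requests

# Hard-coded on purpose, so the reviewer sees exactly what gets touched.
URL = "https://search.internal.example.com/my-index/_doc/stale-id-123"

resp = requests.get(URL)  # look before you leap
resp.raise_for_status()
print("About to delete:", resp.json())

if input("Type 'delete' to proceed: ") == "delete":
    requests.delete(URL).raise_for_status()
    print("Deleted.")
else:
    print("Aborted.")
```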

At $dayjob we would sometimes do this sort of one-off request using Django ORM queries in the production shell, which could in principle do catastrophic things like delete the whole dataset if you typed `qs.delete()`. But if you write a one-off 10-line script, and have someone review the code, then you're much less likely to make this sort of "mis-click" error.

Obviously you need to find the right balance of safety rails vs. moving fast. It might not be a good return on investment to turn the slightly risky daily 15-minute ask into a safe 5-hour task. But I think with the right level of tooling you can make it a 30-minute task that you run/test in staging, and then execute in production by copy/pasting (rather than deploying a new release).

I would say that the author did well by having a copilot; that's the other practice we used to avoid errors. But a copilot looking at a complex UI like Postman is much less helpful than looking at a small bit of code.


I've written Python scripts like that as well, using `requests` delete and post actions. I went a step further and hard-coded the URLs, too.

And because the main branch was protected, getting the script to run meant being forced to do a PR review.

But it made things harder to fuck up. The URL that's being modified is right there, in the code. The action being performed is a DELETE; it's in the code.

Makes things slightly more inconvenient but adds extra safety checks.

It's funny when the safety checks aren't enough, though. Back in the olden days, I had to drop thousands of records from a prod database because they were poisoning some charts.

Well, I was smart, you see. I first did a SELECT on the records I expected to be safe. I looked at the results, everything is okay, I'm in the right db and this is the right table. Then I did a SELECT of the records I wanted to delete. Only a couple thousand records, everything looks good.

Now all I have to do is press the up arrow key once, modify the SELECT to DELETE, and run the command.

So I pressed the up arrow key, but nothing happened. I must've not pressed it hard enough to register, so I pressed it again and it worked. I see the SELECT command, change it to DELETE, run it, aaaand it deleted hundreds of thousands of records.

What must've happened is there was a bit of lag and all my up arrows registered at once, taking me back to the SELECT command where I had looked at the good records.

Obviously I wasn't being safe enough, because I should've double checked the DELETE command I was about to run. But I thought I was being safe enough.

I had done a backup before all that, so everything was fine. But I'm still traumatized like 10 years later. I quadruple check commands I'm about to run that will affect things in a major way.


Transactions are your friend! Even inside a transaction deletes scare the hell out of me, but at least I have that extra layer of defense.
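
For example, with sqlite3 in Python (the table name and expected count are invented), nothing is permanent until the commit:

```python
import sqlite3

conn = sqlite3.connect("reporting.db")  # hypothetical database
cur = conn.cursor()

# sqlite3 implicitly opens a transaction before the DELETE, so nothing
# is permanent until commit(); that's the extra layer of defense.
cur.execute("DELETE FROM metrics WHERE source = ?", ("bad-import",))
print(f"{cur.rowcount} rows deleted (not yet committed)")

EXPECTED = 2500  # what the earlier SELECT count said
if cur.rowcount <= EXPECTED:
    conn.commit()
else:
    conn.rollback()
    print("Rowcount looks wrong; rolled back.")
```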


For tasks like these I have either an input prompt (“is this testing or production?”, “write ‘yes’ to confirm”), or an explicit flag like `--production`.
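
A bare-bones version of that pattern with argparse (everything besides the flag itself is placeholder):

```python
import argparse

parser = argparse.ArgumentParser(description="Purge stale records (sketch)")
parser.add_argument("--production", action="store_true",
                    help="run against production; the default is staging")
args = parser.parse_args()

target = "production" if args.production else "staging"
print(f"Target environment: {target}")

# The second gate: production runs also require typed confirmation.
if args.production and input("Write 'yes' to confirm: ") != "yes":
    raise SystemExit("Aborted.")

# ... do the actual work against the chosen environment ...
```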

DataGrip also allows you to color-code the entire interface of a connection, so even my cowboy-access read-write connection is brightly red and hard to miss.



