
> How do you correlate those errors at the application level? If only 1% of sproc calls are the new sproc, any elevated error rates in the application will be hard to see over typical noise.

By using the DB's error reporting features to report errors back to the application, including information about which implementation ultimately handled the request.
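
For instance, in PostgreSQL the dispatching wrapper could tag both its results and its errors with the implementation that handled the call. A minimal sketch, assuming hypothetical _v1/_v2 implementations and an int argument:

    CREATE OR REPLACE FUNCTION some_procedure(arg int)
    RETURNS TABLE (impl_version text, result int) AS $$
    DECLARE
      -- Route ~1% of calls to the new implementation.
      v text := CASE WHEN random() < 0.01 THEN 'v2' ELSE 'v1' END;
    BEGIN
      IF v = 'v2' THEN
        RETURN QUERY SELECT v, some_procedure_v2(arg);
      ELSE
        RETURN QUERY SELECT v, some_procedure_v1(arg);
      END IF;
    EXCEPTION WHEN OTHERS THEN
      -- Tag the error with the version that was handling the request,
      -- so the application can correlate error rates per implementation.
      RAISE EXCEPTION '%', SQLERRM USING HINT = 'impl_version=' || v;
    END;
    $$ LANGUAGE plpgsql;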




That only covers errors that occur inside the sproc, not errors that are caused by the sproc. For example: What if the new version of the sproc succeeds, but the data it returns causes the application code to fail in some way? The sproc metrics would report 100% success and the application error rate wouldn't get above noise until a significant fraction of traffic was using the new buggy version of the sproc.

It's possible to add more epicycles to try to correct for these deficiencies, but the truth is that sprocs are dangerously double-edged. Yes, your code runs next to your data, but to get parity with application code (canary deploys, debugging, logging, etc.) you must reinvent lots of tooling.


@ggreer, a few comments further up the thread, wrote:

> there's no way to do a gradual deploy of the code

There could be version numbers?

    some_procedure_v1()
    some_procedure_v2()
and the new v2 application server code calls the v2 stored procedures. And if the v2 app server code gets rolled back / cancelled, then the v2 stored procedures automatically won't be used any more.
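
A minimal sketch of how that could look in PostgreSQL (the users table and the queries are made up):

    -- v1: existing behaviour, kept around for old app servers.
    CREATE FUNCTION some_procedure_v1() RETURNS SETOF users AS $$
      SELECT * FROM users;
    $$ LANGUAGE sql;

    -- v2: new behaviour. Only v2 app server code calls this, so
    -- rolling the app back automatically stops using it.
    CREATE FUNCTION some_procedure_v2() RETURNS SETOF users AS $$
      SELECT * FROM users WHERE deleted_at IS NULL;
    $$ LANGUAGE sql;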


Version numbers work great as schema names, too: you could have v1.proc and v2.proc.

Also, for an API schema, only exposing views + stored procedures and hiding tables in private schemas makes the SQL easier to refactor. This is something we recommend at postgrest.org: http://postgrest.org/en/v7.0.0/schema_structure.html#schema-...
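
Concretely, the pattern could look something like this (the schema, table, and role names are just examples):

    -- Tables live in a private schema the API role can't see...
    CREATE SCHEMA private;
    CREATE TABLE private.accounts (
      id    serial PRIMARY KEY,
      email text NOT NULL
    );

    -- ...and the exposed schema contains only views over them, so
    -- the tables can be refactored without breaking the contract.
    CREATE SCHEMA api;
    CREATE VIEW api.accounts AS
      SELECT id, email FROM private.accounts;

    GRANT USAGE ON SCHEMA api TO web_user;
    GRANT SELECT ON api.accounts TO web_user;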

Sharing in case this is useful to you.


Interesting idea — I had never thought about that (I mean, looking at tables as internals, and procedures + views as the public interface). I'll think about that the next time I'm doing a data migration.

Thanks for sharing. PostgREST looks like it's intended for people building a web app who don't want to write application server code, and instead just install a database?

Also, I imagine PostgREST is nice for internal admin apps? Then one "just" needs to write JavaScript and connect to PostgREST via REST?

Actually, it'd be nice if the https://postgrest.org homepage explained some different use cases :-) I had to think for a little while, and I'm still not sure I understand all the cases where PostgREST can be useful.

Edit: Now I see there are explanations here: https://postgrest.org/en/v7.0.0/ ("Motivation: Using PostgREST is an alternative to manual CRUD programming", etc.) if I just scroll down a bit. I didn't do that until now; I probably stopped reading at the "Sponsors" text & images previously, or I thought it was a documentation page.

This is quite cool: "Creating new views happens in SQL with known performance implications. A database administrator can now create an API from scratch with no custom programming".

It looks a bit like GraphQL also: https://postgrest.org/en/v7.0.0/api.html#resource-embedding

Nice that it's written in Haskell; it was my favorite language long ago :-)

(PS. http://postgrest.org redirects to http://postgrest.org/en/v6.0/ which is not the latest version (& not https); maybe you'd like to redirect to https://postgrest.org/en/v7.0.0/ instead)


> PostgREST looks like it's intended for people building a web app who don't want to write application server code, and instead just install a database?

Besides usual web apps, I've also used it for building an IoT project (sensor logic in the DB) and a GIS project (PostGIS/pgRouting). Some users have also used it for open data projects (OpenAPI is provided).

It's also great for internal admin apps and there are ready-made libs like https://github.com/Wellers0n/react-admin-postgrest-client.

Those are some of the use cases off the top of my head. In general, I think it's good for cases where you'd like to use the database as a "processing engine" instead of as "dumb storage".

The docs definitely need a lot of improvement. Thanks a lot for your feedback!


Exactly this. Treat the DB schema as you would any typical API schema. A lot of the techniques used for evolving application APIs can be used for sprocs and views as well, e.g. versioning for breaking changes, adding optional parameters or new result fields for non-breaking changes. Fundamentally I don't think there's much difference between say, a DB schema defined with sprocs/view or an HTTP schema defined with OpenAPI. Both describe an API contract between two remotely communicating processes, the former case just happens to use a SQL dialect/transport to do it.
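
For example, a non-breaking change to a sproc can be made the same way you'd add an optional field to an HTTP API. A sketch in PostgreSQL (the function, table, and columns are illustrative):

    -- Adding a parameter with a default is a non-breaking change:
    -- existing callers of get_orders(42) keep working unchanged.
    DROP FUNCTION IF EXISTS get_orders(int);
    CREATE FUNCTION get_orders(
      customer_id    int,
      include_closed boolean DEFAULT false
    ) RETURNS SETOF orders AS $$
      SELECT * FROM orders o
      WHERE o.customer_id = get_orders.customer_id
        AND (include_closed OR o.status <> 'closed');
    $$ LANGUAGE sql;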


Interesting that you mention doing this (version numbers) with views too; I hadn't thought about that ...

... Maybe that could be a way to "preview" a database migration, before running the migration for real. There could be a view 'Some_table_view_v2' that shows how that table would look after an upcoming data migration, and then the v2 app server code would use that new view. Then one could be more certain that the data migration will work fine.

(At the same time, one might need to be careful to let just a small fraction of the requests use the new view, in case the view is a bit, or a lot, slower than the real table.)
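
A rough sketch of that preview idea in PostgreSQL (the migration itself, and the table and column names, are invented):

    -- The upcoming migration will split 'name' into two columns.
    -- This view previews the post-migration shape without touching
    -- the real table; v2 app server code reads from the view.
    CREATE VIEW some_table_view_v2 AS
    SELECT
      id,
      split_part(name, ' ', 1) AS first_name,
      split_part(name, ' ', 2) AS last_name
    FROM some_table;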



