Check out this simple, easy API replacement: just set up Docker, APISIX, Grafana, Prometheus, etc.
Does anyone else miss the old days of just editing your site's code and adding new columns to your MySQL database while people were surfing your website? That was fun.
Tip: Another way to turn your DB into an API is to just connect to it and then write SQL queries.
I miss aspects of those days, because I made a good bit of money from people who used MyISAM because "it's more performant" (or just because it was the default, yes, I'm dating myself here), and then corrupted their data because MyISAM didn't enforce constraints.
Other than the easy money, nah, don't miss the old days that much.
Having to use Reflector to decompile a core .NET library to figure out wtf it was doing because MSDN was inadequate, and the source was very much not open.
PHP 4 code bases that heavily leaned on dangerous globals.
I remember reading "High-Performance MySQL" in the mid-2000s. It was a real eye-opener: all the things you needed from a DB were somewhat randomly available across the different storage engines, but not in any consistent form.
Something like: Oh, you need query optimisation? Use one. You need constraints? Use the other. You need fast indexes? Use the first one again. And so on.
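To make that concrete, here's a sketch of what that era looked like (hypothetical tables; MySQL parsed FOREIGN KEY clauses on MyISAM tables but silently ignored them, and full-text indexes were MyISAM-only before MySQL 5.6):

```sql
-- Constraints and transactions? Use InnoDB.
CREATE TABLE orders (
  id      INT PRIMARY KEY,
  user_id INT,
  FOREIGN KEY (user_id) REFERENCES users(id)    -- enforced here
) ENGINE=InnoDB;

-- Fast full-text search? That meant MyISAM back then.
CREATE TABLE articles (
  id        INT PRIMARY KEY,
  author_id INT,
  body      TEXT,
  FULLTEXT INDEX (body),                         -- MyISAM-only at the time
  FOREIGN KEY (author_id) REFERENCES users(id)   -- parsed, silently ignored
) ENGINE=MyISAM;
```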
Grafana and Prometheus aren't necessary, and Docker is at this point pretty close to a Linux platform feature. APISIX is a reasonable thing to complain about --- if the "poor man's API" is "use a big open source project that provides generic APIs", sure, that's not all that clever.
You can still do it; the main feature is that it just works. You don't have to follow the hype. One problem could be that new devs never experience the simplicity of such a setup, but that is not your problem.
> Creating a full-fledged API [...]. You need to think about [...] the REST principles
I can assure you you don't need to do that. And most people don't.
It's perfectly possible to create a useful API without adhering to the REST principles.
[Edit, having read the rest of the article]: Case in point, the PostgREST tool promoted in the article appears very useful, without worrying about the REST principles.
If you're talking about HATEOAS (which many zealots equate one-to-one with REST) you may be pleased to know there is no reference to it whatsoever in this article.
Or is it the other way around? I seem to remember that Roy Fielding did actually mean HATEOAS when saying REST, that REST isn't only CRUD but a discoverable and self-describing API.
EDIT: And the corollary of the above is that most people, when saying REST, actually mean HTTP with correct method use.
If not, then is there a good source on what you mean by REST and HATEOAS?
REST is defined by four interface constraints: identification of resources; manipulation of resources through representations; self-descriptive messages; and, hypermedia as the engine of application state.
"When I use a word," Humpty Dumpty said in rather a scornful tone, "it means just what I choose it to mean — neither more nor less."
To me, like most people, REST means an HTTP-based API where we use GET, POST, PUT, DELETE etc. to interact with resources. This in full understanding that under a literal interpretation of the enlightened one's (Roy Fielding's) utterances, REST mandates HATEOAS, meaning that there are only a few true REST APIs to be seen in the wild.
I also usually use the term RESTful; this seems to be a somewhat widely accepted middle ground.
But that makes it hard to refer to, search for or have a discussion about real REST, the architectural style. Why use a term that means a different thing when you can just invent/use a new term describing what you want to describe? REST, as Roy Fielding defined it, is an extremely interesting concept to me and I'd love to explore it more, but good luck doing any research on it without getting flooded with countless junk about "REST" or "RESTful" APIs.
Sort of like how YouTube is filled with videos about "damascus steel", which is nothing more than steel of random, irrelevant composition with a certain visual pattern akin to actual Damascus steel. But again, good luck finding a video about the real deal.
Same can be said about ASMR, and many other misappropriated terms people throw around for clickbait.
It seems like when taken to an extreme, strong HATEOAS would mean you could write a universal client that works across all APIs.
I don't think service developers take HATEOAS super-seriously. From what I've seen, returning resources with a URI rather than an identifier is about all many services do to be HATEOAS-ish. Clients are coupled to the server, but that's often okay.
Exactly. HATEOAS implies as much flexibility as a browser, and as much understanding of the server data as a browser. So if you want a HATEOAS API, just make it output HTML and go with the crowd.
What is the acronym for when something needs to be done, so you create an endpoint to do that thing, maybe return HTML if convenient (if it is a page request) or JSON if it is XHR, and then move on to the next user story?
The schema is ad-hoc and requires a human to interpret it. Link relations (at the bottom) are expressed the wrong way: a machine could understand the "next" or "license" relations, but not an arbitrary "next_url" or "license_url" JSON key.
Recommendation: adopt relevant IETF/IANA standards and reformulate the response to take advantage of them.
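For illustration only (a made-up payload), that recommendation could look like carrying IANA-registered link relations in a dedicated `links` array, in the style of RFC 8288 web linking, instead of ad-hoc `*_url` keys:

```json
{
  "items": ["..."],
  "links": [
    { "rel": "next",    "href": "/articles?page=3" },
    { "rel": "license", "href": "/licenses/mit" }
  ]
}
```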
Postgres is amazing, but there are some issues you quickly hit with systems like this:
1. Side effects: want to send an email when something happens?
2. Conditional permissions: need access to data under certain circumstances not modelled by the security language?
3. Subset access: need access to only parts of the row/document?
4. Aggregate access: need access to, e.g., a count of rows you cannot access?
These are usually solved using serverless functions – or an API written in code.
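That said, Postgres itself can cover a surprising amount of this. A minimal sketch, with hypothetical table and function names, of in-database answers to points 1, 3 and 4:

```sql
-- 1. Side effects: notify an external worker, which sends the email.
CREATE FUNCTION notify_signup() RETURNS trigger
  LANGUAGE plpgsql AS $$
BEGIN
  PERFORM pg_notify('signups', NEW.email);
  RETURN NEW;
END $$;

CREATE TRIGGER on_signup AFTER INSERT ON users
  FOR EACH ROW EXECUTE FUNCTION notify_signup();

-- 3. Subset access: expose only some columns through a view.
CREATE VIEW public_profiles AS
  SELECT id, display_name FROM users;   -- hides email, password_hash

-- 4. Aggregate access: count rows the caller can't read directly.
CREATE FUNCTION user_count() RETURNS bigint
  LANGUAGE sql SECURITY DEFINER         -- runs with the owner's privileges
AS $$ SELECT count(*) FROM users $$;
```

Point 2 is roughly what row-level security policies are for, though complex conditions can get unwieldy there.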
Personally I err on the side of just writing some code, maybe because I enjoy that part of the project. Then I might be more inclined to use no-code solutions for the frontend, where others want the freedom and flexibility.
Does Supabase do anything in particular to address those, or just hand you Postgres's RLS? It's been a minute since I used Supabase, but when I last did you were basically on your own with that.
Which is not necessarily bad; RLS is sophisticated enough to handle all the real-world cases I've come across. But if it's just that, I think it's more correct to say Postgres already handles all of this.
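For anyone unfamiliar, a minimal sketch of the kind of policy in question (hypothetical `todos` table; assumes the PostgREST convention of exposing JWT claims through the `request.jwt.claims` setting):

```sql
ALTER TABLE todos ENABLE ROW LEVEL SECURITY;

-- Each user sees only their own rows.
CREATE POLICY owner_only ON todos
  USING (owner_id =
         (current_setting('request.jwt.claims', true)::json->>'sub')::int);
```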
The problem with event driven systems is that they can end up being a kind of spaghetti. It can be hard to track down all the side effects. There's no place where you can read some code to see what happens sequentially.
Nice, and by the looks of it you can still do joins to what is inside the jsonb. I presume indexes are possible too (or if not just pull that data out to actual columns).
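Indexes are indeed possible; a sketch assuming a hypothetical `events(payload jsonb)` table:

```sql
-- A GIN index speeds up jsonb containment queries (@>).
CREATE INDEX ON events USING gin (payload jsonb_path_ops);

-- Joining on a value inside the jsonb still works.
SELECT u.name, e.payload->>'action'
FROM events e
JOIN users u ON u.id = (e.payload->>'user_id')::int;
```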
No contradiction at all. Imagine a relational database where the only column type is a variant (union) of the usual datatypes, and where new tables and new columns are auto-created whenever an insert/update references them (and, possibly, unused tables and columns are garbage-collected). You’d still use regular SQL and have joins and everything, the tables would still be relations, but there’s no defined schema.
Hm, using Apache APISIX for DDoS protection on a single node? That won't really stop a real DDoS. There's not much you can do on a single server if a botnet is saturating the network links to your server(s), at least not without help from your infra provider.
This setup can be used to keep the backends from being overloaded, which one can probably already do from a single host. And depending on how much work the backends do per request, it doesn't take a lot of bandwidth to overload a system with limited request-processing capacity.
I would argue that this is load management/shedding though, and not DDoS protection.
The root of the issue has always been the definition of DDoS.
Network engineers define it as one thing: a massive amount of abnormal traffic generated by a large number of sources (distributed) that you need adequate bandwidth to soak up without impacting normal traffic.
Software engineers define it as another thing: unwanted traffic that causes resource exhaustion and should be blocked. They're almost always thinking of DoS but refer to it as DDoS.
That's why when TFA talks about DDoS, the example immediately attached to it is rate limiting at what to me (network engineering background) seems like an absurdly low limit (1 request every 5 seconds).
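For reference, a limit like that would be expressed along these lines in APISIX (a sketch using the `limit-count` plugin; the route, upstream, and exact plugin the article uses are assumptions on my part):

```json
{
  "uri": "/api/*",
  "plugins": {
    "limit-count": {
      "count": 1,
      "time_window": 5,
      "key_type": "var",
      "key": "remote_addr",
      "rejected_code": 429
    }
  },
  "upstream": {
    "type": "roundrobin",
    "nodes": { "127.0.0.1:3000": 1 }
  }
}
```

One request per 5-second window per client IP: load shedding, not DDoS absorption.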
Databases are huge security vulnerabilities and should always have some kind of shim over them. Never expose your relational DB publicly if you want any control over it.
The folks created an almost full-fledged Firebase replacement by clipping pieces together (similar to the article, replacing APISIX with Kong) and it allows them to iterate super fast.
Pragmatic, and nice if you're on the receiving end of it.
I find it a bit ironic that they don't support Apache2 as the web server, you know, it being an Apache project and all, instead going for Nginx or OpenResty (though admittedly they're great projects).
Oh if you have a database already and want to expose it, then yeah, it does make sense to go with the API adapter bolted onto it.
If you're starting from scratch (as the post is, since PostgreSQL is advertised as part of the solution), then you have to manually create the DB schema. I would argue that a Node developer would find it easier to do that in Prisma, or a Django developer in Django ORM, than to write raw CREATE TABLE statements by hand.
This is far from what I'd call "poor man's" thing.
I'd want to host it for free, so Vercel and Netlify come to mind. But to go real poor-man style, I'd use Google Sheets as a backend and throw some caching layer on top of it. Firebase or one of its competitors is also a good idea to get some API up and running for free.
It's more than just a migrating issue, it's thinking about the structure of your data from the beginning. Your data is your application. Web framework flavor of the day should be a secondary consideration.
Supabase has a free tier with all this included, plus more. It's the best install of Postgres I've seen, and I'm obsessed with how triggers, auth, etc. all come included.
Did a user group presentation on Supabase; no one in the audience had ever heard of it. I came away quite impressed. Plan on building a complex API with it soon. If I were currently employed as a backend developer I'd be worried.
This just removes the need for some backend engineering, but leaves the actual hard work (schema design, etc.) for you to do.
More and more, developers need to add value to the business, instead of being the key holders of arcane invocations. It used to be that knowing how to use a computer was a technical skill; now it's evolved into understanding the business.
Don't worry, somebody still has to design all the database tables/relations, views, triggers, etc. Which means we all become better database developers, which is a very good thing.
Also, these tools can't handle certain back-end logic (e.g., API interactions and integrations, long-running processes, etc.). Which is the complex bit, anyways.
Agree on the backups piece; that should even be in the free plan, as that much space is not likely to be expensive if they used cold storage.
Otoh, if you aren't sending 1 request per week to an installation, it's really not active at all. The overhead of supporting this niche is probably too much for a company that is clearly in ruthless prioritization mode.
Neon is fresh out of private beta (or should be soon). I can't wait to play with it. However they're all about Hasura lately rather than PostgREST, which this article talks about.
Nikita, their CEO, was a guest on the Changelog[0] podcast a month or so ago. Well worth a listen, great episode.
Interesting. I assume Supabase requires it because supabase/realtime depends on it?
Personally I'm more interested in vanilla PostgREST than Supabase's implementation of it, or their realtime implementation. It seems really cool but I'm not 100% sold on their ability to evaluate complex RLS rules, which is an important one for me.
That's right (regarding listening to the WAL - replication). I actually just stole that part of the Supabase realtime code for my own lib (so I can do logic in Elixir).
I haven't used their role based stuff but it looks pretty good.
> I'm not 100% sold on their ability to evaluate complex RLS rules
You don’t need to be sold on our ability - all rules are run on the database itself. Supabase is just Postgres, we don’t run any forks. We run vanilla PostgREST too (behind a proxy)
I've always known that you run vanilla PostgREST, and that it evaluates RLS in-pg, and it all works great. However, when I first looked at walrus[0] a while back (18 months ago?), I couldn't quite figure out what the mechanism of action was, i.e. whether the RLS rules were being evaluated in the database or being parsed out of the DDL and then (partially?) reinterpreted by your WAL subscriber. I hope that explains my comment.
I've had another brief look at that repo, and either you've clarified a few things since I last looked at it, or I didn't look at it closely enough in the first place. It makes far more sense to me now, the impersonation + re-query mechanism puts me at 100%.
Doesn't look like you're the owner, but are there any notable clients using this service? With Heroku it was legit enough that you knew enough enterprise users were subsidizing the free tier to where it won't just disappear overnight.
No idea. This is an early stage startup so who knows. But if you need guarantees (it ain’t a hobby project) then best to just pay for the service from any of the big or small clouds.
Sorry, what I meant by "not managed" was they just support PostgreSQL as in "yet another executable running on their containers", so they don't actually optimise or configure PostgreSQL for you.
I find Eloquent's approach to disconnect models from migrations a bit dangerous, but it could be me as I don't have much experience with Laravel. PlanetScale/Vitess doesn't support foreign keys, which is nice for distributed (planet scale, ha!) apps, but could be a problem for certain types of applications. I'd rather stick with (stock-ish) PostgreSQL.
Many projects grow to the point that the limitations of writing everything in views and stored procedures becomes more painful than writing the CRUD code in Python, Node, Ruby, PHP, or whatever.
For me, using a similar system (Hasura), it's migrating things that depend on views. Feels like this should be a solved problem (it's a database-migration issue, not one caused by Hasura).
At some point CRUD goes from being pure CRUD to mostly CRUD with a few special requirements - e.g. complex RBAC rules, decorating data, API versions, integrating third-party APIs, scaling. They're easy to do in most app frameworks (Laravel, Django, Rails), tricky in something like PostgREST. I haven't used Hasura.
It's not necessarily my preference but the last few years a lot of my work has been in systems that are just a thin API wrapper around Postgres.
In practice the biggest issues with it are that the tooling is significantly behind the state of the art of mainstream dev tooling, both for working engineers and ops. It's also not built with this style of working in mind, so you accumulate a lot of little struggles that are hard on engineering morale and confidence in the deployed system. DBAs have their own practices, but admin is a different discipline from dev and they aren't always directly compatible.
The other problem is that for most modestly sized money-making applications, the database is the single most resource-intensive and expensive necessary part of the stack. Putting a bunch of business logic in there will never help that, and query optimization is a difficult skill to acquire. You pretty much always end up with just a couple of people who can talk the query planner down off a cliff when you get into the shit.
You can get really far with database views (decorating data, especially), functions (all sorts of biz logic) and Postgres roles for RBAC. But I hear you on integrating with 3rd party APIs, etc. Fwiw, there's a number of PostgREST backend client libraries to help with that.
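A sketch of the roles piece (hypothetical role and table names; assumes the usual PostgREST setup where the server connects as an `authenticator` role and switches to the role named in the JWT):

```sql
CREATE ROLE author NOLOGIN;
GRANT author TO authenticator;   -- lets PostgREST SET ROLE to it

GRANT SELECT, INSERT, UPDATE ON articles TO author;
GRANT SELECT ON public_profiles TO author;
```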
Totally agree that it's possible. But at some level, it may just be easier to use something else. If you're a strong Postgres DBA, I can see how you'd come down on the side of PostgREST. If you're a strong Rails dev, it would make more sense to use that.
Just to clear things up a bit (as I remember not knowing about this side of "RESTfulness" a while ago myself): an important part of REST is relying on hypertext, that is pointing to other resources by using URLs. Just returning an ID of a referenced object/resource as a plain number isn't very RESTful, but returning an URL of that object/resource is.
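A made-up illustration: instead of returning `{ "author_id": 42 }` and making the client know how to turn 42 into a URL, the representation links to the related resource itself:

```json
{ "title": "Some post", "author": { "href": "/users/42" } }
```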
Right. The problem is not difficulty, but motivation. The responsible programmer, begriffs, doesn't see why it would be a good idea to do so, despite bug reporters on GitHub having spent years in discussions.
To be fair, begriffs never shut down any discussion of HATEOAS, previous contributors did - because at the time users were asking for more JSON capabilities from PostgREST and we needed to focus on those.
Depends on what you need from an API Gateway, Ingress Controller, or Service Mesh.
According to Apache APISIX Slack[1] channel discussions and GitHub Issues/Discussions, most people focus on security, feature richness, and performance.
1. Security: as an Apache project, it has a lot of users and maintainers watching project activity, and the security team follows up very quickly.
2. Feature richness: just join the Slack channel and you will find that lots of questions are related to "features".