We're using Supabase for a client project and, even three sprints in, we're severely hobbled by a couple of architectural choices that Supabase and Supabase Auth force on you. Any time-savings benefit has already been wiped out by fighting those choices, by the subtle differences between the local dev environment and the hosted one, and by implementing RLS without good tooling.
On an architectural level, it is also terrifying to have your database exposed to the public internet, hidden only behind authentication and RLS policies (which are much harder for most developers to reason about, and for which the tooling is very immature: a recipe for disaster).
Plus the local dev experience leaves a lot to be desired. We spent days trying to get file uploads working, only to find that the issue was with local Supabase; we had to create a remote Supabase database just to upload files successfully. But then we couldn't "reset" the remote database to run our migrations.
The product is definitely much less smooth to develop on, and I've had to provide dozens of hours of free consulting work to untangle my own recommendation of Supabase (it didn't feel right to charge the client to fix a product choice I steered them toward).
Right now I'm firmly in the "never again" camp on this set of technologies, at least for a long while until the tooling improves.
But it seems like there are some happy customers, so I would love to hear a counterpoint on how they overcame the issues we've faced:
- how are your security teams OK with exposing your PG server to the internet, relying mainly on RLS? And RLS isn't turned on by default, so out of the box full tables are exposed to the public internet behind a rather nice REST API.
- Reliance on RLS for a public service seems risky because RLS quickly gets harder to maintain: the policies you need grow more complex, and there isn't much tooling to help you.
- how about the fact that some of your most important code, the RLS policies, is hard to unit test with today's tooling? The official recommendation is pgTAP, and who knows how big that community is?
That combination seems highly risky to me; the sketches below show what I mean.
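To make the first two bullets concrete, here's a minimal sketch (the "todos" table and its columns are hypothetical, not from our real project, and the claim about default grants is how I understand Supabase's setup). The point is that the safe posture is opt-in: until you run the "alter table" statement, anything holding the public anon key can read the table through the auto-generated REST API.

    -- Hypothetical table; names are illustrative.
    create table todos (
      id      bigint generated always as identity primary key,
      user_id uuid not null default auth.uid(),
      title   text not null
    );

    -- Without this line RLS is off and, given Supabase's default grants,
    -- every row is readable through the REST API with the public anon key.
    alter table todos enable row level security;

    -- Minimal owner-only policy. auth.uid() is Supabase's helper that
    -- reads the user id out of the request's JWT.
    create policy "owners only" on todos
      for all
      using (auth.uid() = user_id)
      with check (auth.uid() = user_id);

And for the third bullet, here's roughly what unit testing that policy with pgTAP looks like. Treat this as a sketch too: it assumes pgTAP is installed, and the way we impersonate a user (setting the role plus the request.jwt.claims variable that auth.uid() reads) is a Supabase implementation detail, not a stable API.

    begin;
    select plan(2);

    -- Impersonate an authenticated user; auth.uid() resolves the JWT
    -- "sub" claim, so we set it directly on the transaction.
    set local role authenticated;
    set local request.jwt.claims to '{"sub": "11111111-1111-1111-1111-111111111111"}';

    insert into todos (title) values ('mine');

    select is(
      (select count(*) from todos),
      1::bigint,
      'a user sees exactly their own rows'
    );

    -- Switch to a different user: the first row should become invisible.
    set local request.jwt.claims to '{"sub": "22222222-2222-2222-2222-222222222222"}';

    select is(
      (select count(*) from todos),
      0::bigint,
      'other users'' rows are invisible'
    );

    select * from finish();
    rollback;

It works, but compare that to the test ergonomics you get for ordinary application code and you can see why I call the tooling immature.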
Our best bet right now has been to just install Prisma and implement more traditional server-side filtering on top of RLS, and not rely at all on client-side connections to Postgres. In a nutshell: moving away from the Supabase-specific architecture to a more traditional one. The real "it's just Postgres".
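Concretely, the shape we've settled on looks something like the sketch below, replacing the JWT-based policy from the earlier sketch (the "app.current_user_id" setting is our own naming, nothing standard). The backend verifies the session, pins the user id inside a transaction (with Prisma, a raw "set local" executed inside $transaction), and still filters every query explicitly, so RLS becomes a second line of defense rather than the only one:

    -- Backstop policy: rows are only visible when the backend has pinned
    -- an identity on this transaction. current_setting(..., true) returns
    -- NULL when the variable is unset, so an unpinned connection sees nothing.
    create policy backend_backstop on todos
      using (user_id = current_setting('app.current_user_id', true)::uuid);

    -- What the backend runs per request:
    begin;
    set local app.current_user_id = '11111111-1111-1111-1111-111111111111';
    -- The explicit filter is the primary control; the policy is the backstop.
    select * from todos where user_id = '11111111-1111-1111-1111-111111111111';
    commit;

The nice part is that this degrades safely: if someone forgets the explicit where clause, the policy still scopes the query to the pinned user instead of leaking the whole table.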
> how are your security teams OK with exposing your PG server to the internet, relying mainly on RLS? And RLS isn't turned on by default, so out of the box full tables are exposed to the public internet behind a rather nice REST API.
Tooling is improving constantly, and security really is top of mind for us. We've got some cool announcements this week: tooling that will keep reminding you if you're doing something sketchy!
I really am trying to provide constructive criticism (even if my tone isn't great; the pain from the platform is still fresh).
I do think you guys are in a great position to improve all of that tooling around RLS: tools that can analyze your policies, that can visually map the recursive way policies execute, and so on.
I fully agree that some of these issues (like poor RLS tooling) don't necessarily fall on Supabase's shoulders. But this is the path that Supabase strongly recommends if you use their tech stack.
So you kind of can't have it both ways and say "Supabase is just Postgres" but then say "this is not our problem, it's Postgres", right?
I actually think Supabase is in a GREAT position to build some of this missing tooling. They're probably now the single largest beneficiary of more people using RLS.
So I do think they will tackle this problem; it's a smart team. I just think that, because of these issues, it doesn't yet feel fully baked as a cohesive platform (or ready for "generally available" status).