This looks pretty interesting! If anyone is looking for a tool to build a full-fledged back office, I warmly recommend react-admin (https://github.com/marmelab/react-admin/). We've been using it for a year now and went from no back office to a feature-rich one in no time!
React Admin is awesome. Combine it with Hasura (automatic GraphQL on top of PostgreSQL) and you can build an entire back office admin suite (API endpoints and admin front end) in a matter of hours. You end up writing more SQL than React, as react-admin is basically a CRUD form generator.
"PostgREST serves a fully RESTful API from any existing PostgreSQL database. It provides a cleaner, more standards-compliant, faster API than you are likely to write from scratch."
PostgREST is really hard to figure out how to use, though. I tried it a year or two ago because I was really excited about the possibility, but the setup was super unintuitive and the docs were hard to follow.
I got a simple case working, but in the end I gave up on using it for anything serious - I simply wouldn't trust myself to be able to debug it if something went wrong.
Thanks! Have you tried both systems? I've never heard of Hasura, so I don't know the extent to which they compare. At a glance it looks like they solve largely the same problem.
We’re using Hasura. Not affiliated but we do pay for their support (great team to work with).
Hasura have taken a few unusual steps that actually turn out to be good choices. They have their own version of row-level security that you configure via the admin console. It plays very nicely with their subscription queries, so there's very little load on the DB (they effectively poll the DB and build a temp table of the config params of each connected client so they can pull back the data for every subscription in a single query). It works really well in practice.
We kicked the tyres on most of the products in the space (and even had our own implementation using Socket.IO). There were a couple of things about Hasura that didn't seem to make sense, but after a quick call with them we decided that, between their answers and the team themselves, it was the product we liked and trusted the most.
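That multiplexing trick is easy to imitate in miniature. A loose sketch in Python with sqlite3 - the table names and shape are invented, and this is in no way Hasura's actual implementation, just the "one temp table of client params, one query" idea:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE messages (room TEXT, body TEXT);
    INSERT INTO messages VALUES ('a', 'hi'), ('b', 'yo'), ('a', 'hey');
""")

# Params of every connected subscriber, collected into one temp table
# so that a single poll can serve all of them at once.
conn.execute("CREATE TEMP TABLE subscribers (client_id INTEGER, room TEXT)")
conn.executemany("INSERT INTO subscribers VALUES (?, ?)", [(1, "a"), (2, "b")])

# One query pulls back the data for every subscription.
rows = conn.execute("""
    SELECT s.client_id, m.body
    FROM subscribers s JOIN messages m ON m.room = s.room
    ORDER BY s.client_id, m.body
""").fetchall()
print(rows)  # [(1, 'hey'), (1, 'hi'), (2, 'yo')]
```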
The Hasura team are incredible. Not only are they great people and responsive, they have one of the most absurdly talented dev teams I've ever seen (especially their Haskell devs).
I have tried both, but only a bit and for a personal project. In the end I went with Hasura because I like the admin interface, and I think the way it maps GraphQL to SQL makes for easier-to-read (and write) GraphQL queries.
I actually tried PostgREST first, because I had never used GraphQL and was more familiar with RESTful APIs. But I would go straight to Hasura if I was starting today.
When you combine GraphQL with graphql-codegen you can get typechecked queries, which is really nice.
Disclaimer: my startup. We will do a public launch soon, but the RESTful component of Supabase is a PostgREST instance, and we are wrapping it with some libraries to make it a bit quicker to get started.
Great work! I was once a fan of custom, well-tailored DSLs for each tool. Nowadays, though, I really like the current direction of providing a lot of tooling that requires only SQL, even with the dialects differing a bit.
I've made my own contribution here too: OctoSQL, a tool to join and analyse data in various databases and file formats (JSON, CSV, Excel) using plain SQL: https://github.com/cube2222/octosql
I'm working on a desktop application with a feature that lets users query data across a bunch of different data sources, like CSV, Excel, and some others. The way I've implemented this is by first scanning all applicable files and populating a SQLite database with their data (essentially a table per file), and then letting users execute queries against that database. The database is updated whenever any of these files change on disk, but writes to the database are not persisted back into the files, so it's really a one-way flow. This was all implemented mostly as a proof of concept, but it's been massively useful, so we're probably going to expand on it this year, perhaps by allowing bi-directional workflows.
Even if the use cases and approach is different, it's nice and validating to see other projects with similar thoughts and ideas. Again, thanks for sharing!
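For the curious, the core of the flow I described can be sketched in a few lines of Python with sqlite3 (file contents are inlined here and all names are invented for illustration):

```python
import csv
import io
import sqlite3

def load_csv_into_table(conn, table_name, csv_text):
    """Create a table from a CSV's header row and insert every row as text."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    cols = ", ".join(f'"{c}"' for c in header)
    placeholders = ", ".join("?" for _ in header)
    conn.execute(f'CREATE TABLE "{table_name}" ({cols})')
    conn.executemany(f'INSERT INTO "{table_name}" VALUES ({placeholders})', data)

conn = sqlite3.connect(":memory:")
load_csv_into_table(conn, "orders", "id,product,qty\n1,apple,3\n2,pear,5\n")
load_csv_into_table(conn, "products", "name,price\napple,0.5\npear,0.8\n")

# Users can now join across what were originally separate files.
total = conn.execute(
    "SELECT SUM(qty * price) FROM orders JOIN products ON product = name"
).fetchone()[0]
print(total)  # 3*0.5 + 5*0.8 -> prints 5.5
```

A real implementation would also watch the files for changes and rebuild the affected tables, which is where most of the actual work lives.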
I think I've seen an open source project that takes the approach you describe, too.
However, I think philosophically we differ in that one of our goals is to push down as much work as possible to the underlying databases and our next big upcoming milestone is streaming (Kafka, possibly database change streams).
But yes, it's great that a kind of ecosystem is forming around those ideas!
Absolutely agree regarding the difference in approach – and I'm by no means suggesting that yours is in any way worse! In fact, I believe yours is much, much better given the more general use cases it supports. Ours is inherently stateful and requires careful coordination, which works for us because the system is pretty much self-contained and it makes things easier, but I don't think it's a good generic solution, to be honest.
My point was really that it's nice and validating to see that the whole idea of a unified way to "query all the things!", as it were, isn't all that novel. :o)
By the way, if you can think of the name (or a URL, even!) of that project you mention, I'd be very interested to take a look at it too. Much obliged!
Haha, I'm sorry – I didn't mean to sound accusatory or anything like that. Text based communication can be tricky! :o) I very much appreciate the comparison!
Also, many thanks for the link and Apache Drill mention, will definitely look into these as well. Much obliged!
That looks quite useful. The name's pretty clever, too.
Tools like these are a viable, low-code (to use a hyped-up term) alternative to creating full-blown back-ends with all the boilerplate code that comes with them. Often these back-ends add little more than a REST API or a simple CRUD interface on top of an SQL database.
RDBMSes, on the other hand, are immensely powerful tools that unfortunately, more often than not, are used for little more than simple data storage.
I was stunned the first time I saw web frameworks like Django and Symfony automatically build a full-featured admin panel based on model definitions.
I wonder if there aren't projects aimed at building just an administration GUI based on SQL introspection mixed with simple configuration files. Just point it at a DB, and there you go.
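The introspection half is straightforward to prototype. A minimal sketch against SQLite's catalog in Python (a real tool would layer the configuration files and a GUI on top of this):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL);
    CREATE TABLE posts (id INTEGER PRIMARY KEY,
                        author_id INTEGER REFERENCES users(id),
                        title TEXT);
""")

def describe_schema(conn):
    """Introspect the database: one entry per table, listing (column, type)."""
    schema = {}
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    for (table,) in tables:
        # PRAGMA table_info returns (cid, name, type, notnull, dflt_value, pk)
        cols = conn.execute(f'PRAGMA table_info("{table}")').fetchall()
        schema[table] = [(name, ctype) for _, name, ctype, *_ in cols]
    return schema

schema = describe_schema(conn)
print(schema["users"])  # [('id', 'INTEGER'), ('email', 'TEXT')]
```

From a structure like this, rendering a generic list-plus-edit-form UI per table is mostly templating.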
Side note: what is the current state of codegen based on API and model specifications? I feel like 90% of most projects could be automatically generated just based on that.
> building just an administration GUI based on SQL introspection
I built exactly this. The idea was to make databases look and feel like a file manager: databases are shown as folders, tables as subfolders, and rows as files that, once opened, show a fully editable form that looks like this: https://archive.kerjean.me/public/2020/screenshot_20200117_2...
It understands foreign keys and creates relevant links to navigate through easily. The entire code is there: https://github.com/mickael-kerjean/filestash
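The foreign-key navigation boils down to plain schema introspection. A rough Python/sqlite3 sketch of the idea (not the actual Filestash code, just an illustration with made-up tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT);
    CREATE TABLE comments (
        id INTEGER PRIMARY KEY,
        body TEXT,
        user_id INTEGER REFERENCES users(id)
    );
""")

def foreign_key_links(conn, table):
    """Map each foreign-key column to the (table, column) it points at,
    which is all a UI needs in order to render a navigation link."""
    fks = conn.execute(f'PRAGMA foreign_key_list("{table}")').fetchall()
    # Rows are (id, seq, ref_table, from_col, to_col, on_update, on_delete, match)
    return {row[3]: (row[2], row[4]) for row in fks}

links = foreign_key_links(conn, "comments")
print(links)  # {'user_id': ('users', 'id')}
```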
As advice from a marketing POV: I wouldn't start by explaining that it lets you navigate FTP servers using a file abstraction. This is very confusing, because most people already access FTP servers using tools like FileZilla that do exactly that.
The DB -> file browser is a very interesting idea that I think should stand on its own (even if the underlying tech is similar).
> I built exactly this. The idea was to make databases look and feel like a file manager where databases are shown as folder, tables as subfolders and rows are shown as files that once open shows a fully editable form ... It understands foreign keys and create relevant links to easily navigate through.
I thought that was the whole point of e.g. LibreOffice Base and the like? (including Paradox, Access, etc.) So you built a web-based clone?
It's more of an elaborate answer to the infamous FTP comment from Dropbox's 2007 launch here on HN (https://news.ycombinator.com/item?id=8863). At its root, the project tries to solve the Dropbox problem by abstracting the storage aspect so that you can bring your own backend by implementing a simple interface.
The MySQL plugin is just an implementation of that model to enable non-nerds to CRUD a database without breaking anything. I've done a few of those implementations: FTP, SFTP, S3, ... MySQL is just a "for the sake of science" one, to be put in the same bag as the LDAP one: https://www.filestash.app/ldap-browser.html
There are plenty of database administration tools that do just that. I guess they could use work in the UX department to be more user-friendly, though.
But beyond pure data access you'll want features like access rights (SQL servers will offer that to a point), validations, and data/context specific visualizations.
For my current project, we're auto-generating a relatively full-featured CRUD interface with granular access controls and the works. [one detail to add there is that while it's primarily used as an admin interface now, it's not meant to be restricted to that - it has access control and workflow support built in that we're building product flows on, where the UI is still generated the same way]
It's not just from the SQL: we use Sequel (a Ruby ORM) to introspect the DB as a starting point, coupled with a Sequel plugin that lets us annotate the models with more precise information. Our Sinatra-based API introspects that and returns JSON to our React frontend, which drives how the UI is presented. This extends to, e.g., declaring which fields are most useful for users when searching for relations that should be linked via foreign keys (users are linked to by id, but when linking a user from another record, they are searchable by name and e-mail). As much as practical, we auto-generate validations etc. from the database schema as well.
I think doing this by SQL alone would be quite painful unless you sacrifice usability, because a typical database schema has too little information on how users think about the data.
E.g. we dialled back on using Postgres enum types because they're annoying to update and deal with if you frequently change the allowed values - but you need relatively little extra config to generate very full-featured interfaces, and you can make most of that extra information entirely declarative. Foreign keys are another issue, as described above - a normal database schema just tells you how things are linked together, not how users would like to see them. E.g. knowing a comment was written by user 42 is much less useful than knowing it was written by bob@example.com; to allow user-friendly linking etc. you'll want more information. We refer to this additional layer of information about the structure of our data as our meta schema.
You certainly could put the parts that don't fit perfectly in your database schema in your database anyway as regular data, and to some extent we do - e.g. the meta information that we augment our Sequel model classes with can be overridden by rows in a table.
But the main reason we've not fully embraced that vs. annotating the model classes in code is that it makes version control painful, and if you update it with migrations it's really not saving that much vs. just putting it in the model classes (this might be different if you have other applications accessing the database and the database in effect needs to be a versioned API in itself accessible by multiple consumers, but in this case all the database access happens via the API exported from the model classes).
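To make the meta-schema idea concrete, the layering could look something like this hypothetical Python sketch - the real thing uses Sequel model annotations, and every name and key here is made up for illustration:

```python
# Defaults derived by introspecting the database schema: a bare
# foreign-key column gets an unhelpful label and a raw-id display.
introspected = {
    "comments": {
        "user_id": {"label": "User id", "display": "raw_id"},
        "body":    {"label": "Body",    "display": "text"},
    }
}

# The declarative "meta schema" supplied by the developer: show the
# linked user's e-mail instead of the bare foreign-key value.
meta_schema = {
    "comments": {
        "user_id": {"label": "Author", "display": "lookup:users.email"},
    }
}

def effective_schema(introspected, meta):
    """Merge introspected defaults with per-column overrides."""
    merged = {}
    for table, cols in introspected.items():
        merged[table] = {
            col: {**defaults, **meta.get(table, {}).get(col, {})}
            for col, defaults in cols.items()
        }
    return merged

ui = effective_schema(introspected, meta_schema)
print(ui["comments"]["user_id"])  # {'label': 'Author', 'display': 'lookup:users.email'}
```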
I think this is the long way around to simply making a DB procedure? Those have parameters, can easily be run by external tools like phpMyAdmin or an IDE, and if the database constraints are sane will help avoid invalid input.
That is true, and I considered this approach. But DB procedures cannot link to each other and the target users are different - giving some people a DB procedure is much harder than giving them a web interface.
So I made something similar to this at a couple of my jobs. We could basically add a SQL script as a file to a folder and it would automatically become a new report - downloadable, formatted, run on demand. Even a pretty basic one is handy, as they only take a day or two. I never got round to adding variables; I like your solution.
One of the things I added was column formatting in the field names, so a $c would turn a column into a currency field, another would turn one into a boolean checkbox, etc.:
SELECT o.amount, p.price [price$c], p.price * o.amount [total$c]
FROM orders o
JOIN products p
It had other features too like you could set column widths by using @ like [amount@75] or even column grouping by dot notation [online.sales], [online.total_value$c], [instore.sales], [instore.total_value$c].
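Those header conventions parse easily. Here's a sketch in Python - the exact grammar ($c for currency, @N for width, dots for grouping) is my reading of the description above, so treat it as a guess:

```python
import re

def parse_column(raw):
    """Split a header like 'online.total_value$c' or 'amount@75' into parts."""
    width = None
    m = re.search(r"@(\d+)$", raw)  # trailing @N sets the column width
    if m:
        width = int(m.group(1))
        raw = raw[: m.start()]
    fmt = None
    if raw.endswith("$c"):          # $c marks a currency column
        fmt = "currency"
        raw = raw[:-2]
    group, _, name = raw.rpartition(".")  # dot notation groups columns
    return {"name": name, "group": group or None, "format": fmt, "width": width}

print(parse_column("online.total_value$c"))
# {'name': 'total_value', 'group': 'online', 'format': 'currency', 'width': None}
print(parse_column("amount@75"))
# {'name': 'amount', 'group': None, 'format': None, 'width': 75}
```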
Authentication is not handled at all. Given it is a project with a front and back end (using a REST interface between) I think the correct approach is to do authentication on a load balancer. The approach is fully documented here: https://github.com/forbesmyester/esqlate/issues/5
It looks excellent! But I know https://redash.io already and used it in my last business - does this do something different? I'm always hungry for better SQL business admin tools though.
Yeah. It sends the key/values through in an HTTP POST, and then you have the SQL and K/C's on the server; the SQL itself is not sent. The server passes them through as parameters to the Node.js `pg` module, so that takes care of defeating SQL injection.
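The same principle in Python's sqlite3, for illustration (the actual project uses Node.js and `pg`, but parameterised queries work the same way everywhere):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# The SQL text stays on the server; user-supplied values arrive only as
# parameters, so the driver never splices them into the SQL string.
hostile = "alice' OR '1'='1"
hostile_rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (hostile,)
).fetchall()
print(hostile_rows)  # [] -- the injection attempt is treated as a literal string

rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", ("alice",)
).fetchall()
print(rows)  # [('alice',)]
```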
This is cool. I built something similar a while back for a side project. I had a folder that I would dump files into, and each file would have a title, description, and SQL statement.
When I visited a specific page on my site (password protected), a bit of Go code would read all the files, execute all the (read-only) SQL statements, and then dump the results into an HTML template that created a table for each one.
Made it dead simple to keep track of some simple metrics like daily sign ups, number of users currently on the site, churn, etc.
Hopefully I can just reach for this next time and not write my own hacky tool :)
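The whole pattern fits in a short sketch - here in Python with sqlite3 rather than my original Go, with the report definitions inlined and every name invented:

```python
import sqlite3
from html import escape

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE signups (day TEXT, n INTEGER);
    INSERT INTO signups VALUES ('2020-01-01', 12), ('2020-01-02', 9);
""")

# Stand-ins for the files dumped in a folder: each "file" carries a title
# and a read-only SQL statement.
reports = [
    ("Daily sign-ups", "SELECT day, n FROM signups ORDER BY day"),
]

def render_dashboard(conn, reports):
    """Run every report and render one HTML table per report."""
    parts = []
    for title, sql in reports:
        cur = conn.execute(sql)
        headers = [d[0] for d in cur.description]
        cells = "".join(f"<th>{escape(h)}</th>" for h in headers)
        body = "".join(
            "<tr>" + "".join(f"<td>{escape(str(v))}</td>" for v in row) + "</tr>"
            for row in cur.fetchall()
        )
        parts.append(f"<h2>{escape(title)}</h2><table><tr>{cells}</tr>{body}</table>")
    return "".join(parts)

html = render_dashboard(conn, reports)
print("<td>12</td>" in html)  # True
```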
Nice work. I have also built a (MySQL) SQL-based dashboard for our internal use. It mainly shows aggregated lists of data and lets you click on many items to see more details.
We use it to get more insight in our data. It does not allow a user to enter anything.
Basically my app only uses a list of queries. To make any selected field clickable, the result field should look like 'report|name|field,field', so the app knows this is a special link ('report') referring to another defined query ('name') that should be passed some record-specific values (here 'field,field').
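Parsing that link convention is nearly a one-liner; a Python sketch based on my reading of the format above (the example values are made up):

```python
def parse_link_field(value):
    """Interpret a result value like 'report|orders_detail|id,region':
    link kind, target query name, then comma-separated parameter fields."""
    kind, name, fields = value.split("|", 2)
    return {"kind": kind, "target": name, "params": fields.split(",")}

link = parse_link_field("report|orders_detail|id,region")
print(link)  # {'kind': 'report', 'target': 'orders_detail', 'params': ['id', 'region']}
```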
Yeah, sounds like you had a similar revelation to mine :-) This can also do the linking between "reports" or "forms". Happy you built one - it makes me think this has at least some chance of success.
I believe admin panels are the MVP implementation for this idea.
Almost all business software is select/insert/update. If you can do it over a view, or you have a trigger on insert/update/delete, you can have a customer-facing product with minimal manual operation.
You write the SQL into a JSON file. Users can then execute that SQL with their own parameters. So the many-to-many is within your capability... A key reason for this is that the tool itself should not be clever.
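To make that concrete, here is a hypothetical sketch of the idea in Python with sqlite3. I'm guessing at the exact file format, so treat the JSON shape, the `$status` placeholder style, and the table names as illustrative only:

```python
import json
import sqlite3

# A hypothetical definition file: the SQL lives in JSON alongside its
# declared parameters; users supply only the values at run time.
definition = json.loads("""
{
  "name": "orders_by_status",
  "statement": "SELECT id, status FROM orders WHERE status = $status ORDER BY id",
  "parameters": [{"name": "status", "type": "string"}]
}
""")

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, status TEXT);
    INSERT INTO orders VALUES (1, 'open'), (2, 'closed'), (3, 'open');
""")

# Translate the $name placeholder to the driver's named-parameter style,
# then bind the user's value so it is never spliced into the SQL text.
sql = definition["statement"].replace("$status", ":status")
rows = conn.execute(sql, {"status": "open"}).fetchall()
print(rows)  # [(1, 'open'), (3, 'open')]
```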
Guys on my team have struggled with Apex daily since we started using it, which has led me to look for a better, FOSS alternative. This looks like it might be the one.
This is awesome! It saves so much work on building simple admin panels. I have experience in custom software development, and that's such a huge pile of work for developers it's insane.
I wonder if there is anything like this but for a larger swath of tools, like say anything that is JDBC or ODBC compliant. More like data virtualization tools like dremio, denodo, atscale.
The install instructions might be a bit off, but I've tested on macOS and Ubuntu (both Bash)... Which method is your install? If it's Docker Compose ( https://github.com/forbesmyester/esqlate#docker-compose-meth... ) there's a Bash line in the README that does the checkout for you.
You may want to revisit how you do this. At the very least use git submodules so a particular version of your code pulls the correct version of a dependency.
Does anyone else find it humorous that the majority of top-level comments in this thread start with, "Cool! I built something just like this <insert link to example> because <commercial solution> sucks!"
This tells me that (a) it is a common problem, and (b) it is not solved well. I'm guilty too: I manage a large MySQL DB and the admin panels are essentially CRUD UI rendered from the schema. The alt solutions I've clicked on here try to add more functionality, but it seems that's where things break, because every solution is different beyond the 1st-order CRUD UI. This seems to indicate there is no generic solution beyond that?
Feel free to laugh at me but I often miss MS Access layout tools.
MS Access is totally unscalable, and due to its proprietary licence I can't recommend it to anyone... but it was super freaking easy to stitch together a graphical UI usable by non-expert users.
I'm a SQL/GIS dev based on PostgreSQL/PostGIS, and I would totally harass my boss to buy such a solution if it existed. I have no time to learn the latest front-end flavor, so being able to quickly deploy a customizable AND simple GUI between my SQL-illiterate colleagues and our database would be priceless!
If such a solution does exist, please have a laugh at me for being ignorant - and I would really appreciate it if you could send some links to that product!
> Feel free to laugh at me but I often miss MS Access layout tools
Hell, I miss Paradox. In DOS. There was a direct relationship between the tables and the UI so you got CRUD as a side effect.
I built several real applications using it, but the best was a true "enterprise" application used across three facilities. The "backend" was a NetWare file system, and the key to making it scale was to cache necessary data locally and batch writes to shared tables. Contention had to be avoided - not because Paradox couldn't multiplex readers and writers, but because the performance was so poor.
It was still spinning when I moved on. I have no idea how long they maintained it, but I'll bet whatever they replaced it with cost an order of magnitude more. It was part of every dollar of revenue that outfit earned, employing ~1000 people.
There are decent top-to-bottom solutions like Quickbase and Google's Appmaker. I think "generic" is just hard to mix well into other code. Works better if you control the whole stack.