Indeed it is; TBH, having seen how easy this was to make, I wonder if there would be any merit to a stripped-down Postgres package that is just daemon + SQL parser + FDW interface (so no built-in storage engine). The FDW interface is amazing for building SQL interfaces over arbitrary data sets, but the whole Postgres bundle seems a little heavyweight and complex to install if that's the only thing you want it for...
Or I guess you could start from the other direction and make an FDW/Multicorn-compatible API for SQLite? (Facebook's OSQuery is SQLite-based, but the virtual table interface over there looks way more complex.)
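For what it's worth, a minimal Multicorn wrapper really is only a few dozen lines of Python. A rough sketch: the ProcessFDW class and the /proc walking below are made up for illustration, but the ForeignDataWrapper base class and the execute() hook are Multicorn's actual API, and the registration SQL in the comment is the usual Multicorn incantation:

    import os
    from multicorn import ForeignDataWrapper

    # Registering this on the Postgres side looks something like:
    #   CREATE EXTENSION multicorn;
    #   CREATE SERVER proc_srv FOREIGN DATA WRAPPER multicorn
    #       OPTIONS (wrapper 'procfdw.ProcessFDW');
    #   CREATE FOREIGN TABLE processes (pid integer, comm text)
    #       SERVER proc_srv;

    class ProcessFDW(ForeignDataWrapper):
        def __init__(self, options, columns):
            super(ProcessFDW, self).__init__(options, columns)
            self.columns = columns

        def execute(self, quals, columns):
            # Each dict yielded here becomes one row of the foreign table.
            for pid in os.listdir('/proc'):
                if not pid.isdigit():
                    continue
                try:
                    with open('/proc/%s/comm' % pid) as f:
                        comm = f.read().strip()
                except IOError:
                    continue  # process exited between listdir() and open()
                yield {'pid': int(pid), 'comm': comm}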
apsw's interface is not quite as nice as Multicorn's, but it's certainly nicer than SQLite's raw C/C++ interface. You could probably create a wrapper around apsw so it looked similar to Multicorn.
FYI, you can use ":memory:" as the sqlite data source, instead of having to conditionally drop the tables (unless you actually do want to store some persistent data on disk).
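Combining those two suggestions, here's a rough sketch of what a Multicorn-ish wrapper over apsw's virtual-table protocol might look like, assuming a reasonably recent apsw and using ":memory:" so nothing persists. The class names and the /proc reading are mine (same toy source as above, for comparison); the Create/BestIndex/Open/Filter/Eof/Column/Next methods are the shape apsw expects:

    import os
    import apsw

    class ProcSource:
        columns = ("pid", "comm")

        def get_rows(self):
            # The Multicorn-ish part: just yield dicts.
            for pid in os.listdir("/proc"):
                if not pid.isdigit():
                    continue
                try:
                    with open("/proc/%s/comm" % pid) as f:
                        yield {"pid": int(pid), "comm": f.read().strip()}
                except IOError:
                    continue  # process exited while we were reading

        # Everything below is apsw virtual-table plumbing.
        def Create(self, db, modulename, dbname, tablename, *args):
            return "CREATE TABLE x(%s)" % ", ".join(self.columns), self
        Connect = Create

        def BestIndex(self, constraints, orderbys):
            return None  # let SQLite do all the filtering itself

        def Open(self):
            return ProcCursor(list(self.get_rows()), self.columns)

        def Disconnect(self):
            pass
        Destroy = Disconnect

    class ProcCursor:
        def __init__(self, rows, columns):
            self.rows, self.columns, self.pos = rows, columns, 0
        def Filter(self, indexnum, indexname, args):
            self.pos = 0
        def Eof(self):
            return self.pos >= len(self.rows)
        def Rowid(self):
            return self.pos
        def Column(self, n):
            return self.rows[self.pos][self.columns[n]]
        def Next(self):
            self.pos += 1
        def Close(self):
            pass

    conn = apsw.Connection(":memory:")
    conn.createmodule("pysource", ProcSource())
    cur = conn.cursor()
    cur.execute("CREATE VIRTUAL TABLE processes USING pysource()")
    for pid, comm in cur.execute("SELECT pid, comm FROM processes ORDER BY pid"):
        print(pid, comm)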
That would remove some of my hesitancy about letting this run as root for full process information. At least it limits, to some extent, what code in the daemon process could be exploited.
I can see we are going to get one of these DB/OS adapters each day now. Looking forward to BerkDBOSQuery -- because, well, Berkeley DB needs some love too.
Run PostgreSQL as root, make an interface for the filesystem, hook it up to a Django ORM, and you have a webpage with the power of bash? (Yes, a bad idea on so many levels, but still, the potential is... staggering.)
Postgres can't be run as root. From src/backend/main/main.c:
    if (geteuid() == 0)
    {
        write_stderr("\"root\" execution of the PostgreSQL server is not permitted.\n"
                     "The server must be started under an unprivileged user ID to prevent\n"
                     "possible system security compromise. See the documentation for\n"
                     "more information on how to properly start the server.\n");
        exit(1);
    }
Cool, I got inspired by this system too -- the apsw library and examples made it extremely easy to get started. I happen to have a lot of non-relational yet tabular data sitting around, and the interface to read it is Python. It took only a short time before I could join up several of my proprietary databases and run queries on them.
One thing I am planning to do is let the user mix in just about any data source: really, any list of objects with properties/values can be used. So you can take your database (in a proprietary format) and join it up with a text file (quite a useful thing for me). But you could also mix in an API call (again, made using your identity). I've been thinking about how many REST APIs support a subset of SQL by letting you select which fields to output, how to order things, etc. -- if you instead let the user process the results via SQL/SQLite, you give them quite powerful remixing/aggregation facilities.
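As a toy illustration of the mixing idea (the endpoint URL, file name, and schemas below are made up), any list of dicts -- whether it came from a REST call, a proprietary reader, or a parsed text file -- can be loaded into an in-memory SQLite database and joined against the rest:

    import csv
    import json
    import sqlite3
    import urllib.request

    conn = sqlite3.connect(":memory:")

    # Rows from a (hypothetical) REST API -- any list of dicts will do.
    with urllib.request.urlopen("https://api.example.com/users") as resp:
        users = json.load(resp)  # e.g. [{"id": 1, "name": "alice"}, ...]
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (:id, :name)", users)

    # Rows from a plain text file on disk.
    conn.execute("CREATE TABLE groups (user_id INTEGER, grp TEXT)")
    with open("groups.csv") as f:
        conn.executemany("INSERT INTO groups VALUES (?, ?)", csv.reader(f))

    # Now the user gets full SQL over the remixed sources.
    for name, grp in conn.execute(
            "SELECT u.name, g.grp FROM users u "
            "JOIN groups g ON g.user_id = u.id ORDER BY u.name"):
        print(name, grp)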
Complex joining of multiple data sets, with filters and conditions, using a query language that has been put to great effect for longer than SNMP has existed.
Truthfully, I'm not entirely sure of the capabilities of snmptable, but from your statement I assume it allows joining different sets of data in ways that show specific correlated data (I'm not seeing that when searching, but I'll assume my search skills are to blame here). In that case, the real benefit of this is SQL the language, which, while it would be overkill if it had been designed for just this application, is well understood, widely used, and has a large existing userbase.
I'm not sure how to use snmptable, as I've never had the need, and while I suspect that in the simple cases it may be easier to learn and use than SQL, I'm highly skeptical that the medium-complexity cases SQL handles are even possible in it (unless it has a fairly well fleshed-out query language of its own, as opposed to specifying parameters on a command line).
shawn-butler is technically correct -- he's referring to standard SQL (first standardized in 1986), and standard SQL and SNMP do date from about the same time, both in the late 1980s.
Of course, SQL was around quite a while before being standardized.
I looked it up before posting and saw the same thing: SQL was standardized the year before the SNMP RFCs were put out, but it had been in use since the 70s. That made the statement correct by either a little or a lot, depending on how you wanted to look at it, so I felt safe in making it.
While this looks nice, it still depends on Postgres. Why not just import a set of key/values into any SQL database? Really dumb 5-minute example below.
~/$ ps af | perl -lne 's/(^\s+|\s+$)//g;next if(!@h&&(@h=split(/\s+/,$_)));@_=map{s/"/\\"/g;$_}split(/\s+/,$_,@h);print "INSERT INTO process_table (".join(",",@h).") VALUES (".join(",",map{"\"$_\""}@_).");"'
INSERT INTO process_table (PID,TTY,STAT,TIME,COMMAND) VALUES ("19789","pts/8","Ss+","0:00","/bin/bash");
INSERT INTO process_table (PID,TTY,STAT,TIME,COMMAND) VALUES ("30616","pts/23","Ss+","0:00","/bin/bash");
INSERT INTO process_table (PID,TTY,STAT,TIME,COMMAND) VALUES ("29661","pts/22","Ss+","0:00","/bin/bash");
The reason is obvious - the postgres-driven version lets you query the system in realtime, rather than repeatedly updating the database through a manual process every so often.
Ohhhh, I didn't realize that was the end goal. In that case, yeah, this is useful -- if you want to run Postgres on every node in your network (which, yikes). I would still rather separate the query engine from the data feed, though. My ideal would be a database application that sends out a request to update the database and returns new records as they come in (or sorted/ordered, if preferred).
In the 30 minutes where the example was at the bottom of the README: 7 upvotes.
In the 30 minutes where the example was moved to the top, so you could see it without scrolling: 30 upvotes.
Given the fast-paced nature of HN, getting to the point ASAP seems to have significant merit. All the other projects I've announced on HN -- projects with weeks of work put into them, and websites that expect the user to spend 5 minutes reading before they get the point -- have had no upvotes or comments at all :-P
I was tl;dring before tl;dring was cool. In fact, it was called "Executive Summary" and was greatly appreciated by people who make more in a year than I will make in my life.
Wow. That does look easy.