Recently I've been exploring the idea that small programs should consist of two parts: a "backend" that you communicate with via a strict message protocol like protobuf. It can handle multiple requests because it is a mini server; the message format is locked to a "type", and you communicate with it via RPC.
The second part is a "frontend", a command-line client that does all the unixy stuff with text streams etc., but is just another RPC client to the backend.
This provides flexibility: for quick duct-tape situations you use the front end and pipe and filter to your heart's content. Then when stuff needs to get serious, you can bypass the command-line front end and connect directly over RPC, using a strict message format, to a backend that can handle a couple of concurrent requests (it doesn't have to be many).
Of course this is early doors yet, but I think it might have legs in the text-streams-versus-strict-message-format debate, offering flexibility for very little additional work.
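For the flavour of it, here's a minimal sketch of such a backend using Go's standard net/rpc. The `Backend` type, its method, and the port are all made up for illustration, and gob encoding stands in for protobuf to keep it dependency-free:

    package main

    import (
        "log"
        "net"
        "net/rpc"
    )

    // Backend is a hypothetical service; the message format is locked to
    // these request/reply types, and net/rpc handles concurrent requests.
    type Backend struct{}

    type StartArgs struct{ ID int }
    type StartReply struct{ OK bool }

    func (b *Backend) Start(args StartArgs, reply *StartReply) error {
        log.Printf("start requested for id %d", args.ID)
        reply.OK = true
        return nil
    }

    func main() {
        rpc.Register(new(Backend))
        ln, err := net.Listen("tcp", "127.0.0.1:9000")
        if err != nil {
            log.Fatal(err)
        }
        rpc.Accept(ln) // serves each incoming connection in its own goroutine
    }

The frontend half is then little more than `rpc.Dial("tcp", "127.0.0.1:9000")` plus a `client.Call("Backend.Start", args, &reply)`, wrapped in whatever text-stream plumbing you like.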
You're sort of describing the design idea, common in unix-y circles, of building libfoo that does all the work, then a foo frontend that makes those tasks usable from the command line. I don't know how widely used this model is, but it's certainly out there.
The difference is that in what I've described, the ABI is the common interface and an RPC server would be just another consumer of the library.
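To make that concrete, here's a sketch in Go terms, with `dowork` standing in for the library entry point (all names invented): the frontend is a thin caller, and an RPC server would call the very same function.

    package main

    import (
        "fmt"
        "os"
    )

    // dowork plays the role of libfoo's ABI: all the real logic lives here.
    func dowork(arg string) (string, error) {
        return "processed " + arg, nil
    }

    // main is the foo frontend; an RPC server would be just another
    // caller of dowork, not a different code path.
    func main() {
        if len(os.Args) < 2 {
            fmt.Fprintln(os.Stderr, "usage: foo <arg>")
            os.Exit(2)
        }
        out, err := dowork(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(out)
    }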
> Recently I've been exploring the idea that small programs should consist of two parts: a "backend" that you communicate with via a strict message protocol like protobuf. It can handle multiple requests because it is a mini server; the message format is locked to a "type", and you communicate with it via RPC.
I like this idea! It seems very Unix-philosophy-y (sorry) itself; break up the programme even further, from "do one thing" to "figure out what to do" + "do it".
That's how a lot of well-designed software works, including Windows (in general) and old-school Mac Classic with AppleEvents (not sure if OS X still does that).
Yeah, it kinda reminds me of the "plumbing and porcelain" approach of some programs like git. It's also nice that a lot of modern init systems can fire up the backends on demand and kill them after a while.
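With systemd, for example, socket activation can do the on-demand part. A hypothetical pair of units (names, paths, and the --serve flag invented; the daemon would have to accept the listening fd systemd hands it, e.g. via sd_listen_fds) might look like:

    # foo.socket - systemd holds the listening socket until a client connects
    [Socket]
    ListenStream=127.0.0.1:8080

    [Install]
    WantedBy=sockets.target

    # foo.service - started on the first connection; the daemon can exit
    # when idle and systemd will start it again on the next request
    [Service]
    ExecStart=/usr/local/bin/foo --serve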
Surely a lot of that could be done with a command-line switch?
tape_robot start -tapeid=1
as the quick hacky version, and:
tape_robot --protobuf 'msg:start;tapeid:1'
(replace the 'single:quote;string:thing' with your protobuf message), or:
tape_robot --protobuf_file filename
The filename could be a fifo or socket, of course. Then use a general-purpose server which runs the programs:
proto_serve tape_robot 127.0.0.1:8080
or whatever.
This (to me) is closer to the unix idea - one server program, one actual processing app. Why should my tape_robot program actually have a server embedded in it?
If, in the future, I want to add authentication, I only have to add it (once) to the proto_serve program, rather than to every single application that is a 'server', for instance.
It would also allow a version which 'pre-forked' the processes, and left them waiting for the data on the socket/filehandle, or whatever.
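For what it's worth, a toy proto_serve along these lines (inetd-style, one process per connection, no pre-forking or auth yet; the name and invocation follow the hypothetical example above) fits in a page of Go:

    package main

    import (
        "log"
        "net"
        "os"
        "os/exec"
    )

    // proto_serve <command> <addr>: run <command> once per connection,
    // with the socket wired up as its stdin and stdout.
    func main() {
        if len(os.Args) != 3 {
            log.Fatal("usage: proto_serve <command> <addr>")
        }
        command, addr := os.Args[1], os.Args[2]
        ln, err := net.Listen("tcp", addr)
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                log.Fatal(err)
            }
            go func(c net.Conn) {
                defer c.Close()
                cmd := exec.Command(command)
                cmd.Stdin = c  // the app reads requests from the socket...
                cmd.Stdout = c // ...and writes replies straight back
                cmd.Stderr = os.Stderr
                if err := cmd.Run(); err != nil {
                    log.Print(err)
                }
            }(conn)
        }
    }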
You could do a bunch of this already using nc or similar, I suspect.
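For instance, something roughly like this (OpenBSD netcat syntax; flag spellings differ between nc variants, and tape_robot's flags are the hypothetical ones from above):

    mkfifo /tmp/robot.in
    nc -kl 127.0.0.1 8080 > /tmp/robot.in &
    tape_robot --protobuf_file /tmp/robot.in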
Interesting. This seems very non-unix-y to me. The `tape_robot` doesn't "do one thing"; it does many things, including parsing protobuf from shell strings and files, which seems error-prone and way outside its core competency.
Maybe I misunderstood @jalfresi's idea - as I understood it, each command would not only parse protobuf and unix-style flags, but also contain an RPC server of some sort.
My reinterpretation was to suggest factoring out the server part, leaving the command understanding only flags or protobuf commands - which could be delivered to it either as an arg, or as a file given to it by an arg.
You could go a stage further by having all commands only accept protobuf (or similar), and distribute a spec/human-mapping to go with it. Then your shell would parse the args that you give to the command using the spec, and actually call the command using protobuf.
This would allow very awesome shell completion / highlighting / etc. It should also allow much simpler endpoints/commands, as they'd hardly have to do any type checking / re-parsing, since input would arrive as protobuf.
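A sketch of the shell's half of that bargain, with JSON standing in for protobuf and the "spec" reduced to a single hard-coded flag (all names hypothetical):

    package main

    import (
        "encoding/json"
        "flag"
        "os"
    )

    // StartRequest is a made-up message type; JSON stands in for
    // protobuf so the sketch needs no generated code.
    type StartRequest struct {
        Msg    string `json:"msg"`
        TapeID int    `json:"tapeid"`
    }

    // The shell-side wrapper: human-friendly flags in, one strict
    // message out, which the command then consumes without re-parsing.
    func main() {
        tapeid := flag.Int("tapeid", 0, "tape to start")
        flag.Parse()
        req := StartRequest{Msg: "start", TapeID: *tapeid}
        json.NewEncoder(os.Stdout).Encode(req)
    }

Run with `-tapeid=1`, it emits {"msg":"start","tapeid":1}, which is the only thing the command itself ever has to understand.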
This is actually a step toward how PowerShell (is it still called that?) works, where you pipe objects (i.e. data conforming to a schema) around instead of arbitrary strings.