Sounds interesting, care to give an example of how you use it exactly? You dump everything to JSON and then simply run some RecordStream queries on the data? Or do you generate something visual?
If you're lucky enough to log directly to JSON, you can pipe straight into recs. Otherwise, recs-frommultire is a great way to get the stream started.
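For instance, something like this turns raw log lines into a record stream (the field-list=regex syntax is from memory, and the regex itself is just a placeholder for whatever your log format is, so check recs-frommultire --help):

    # each match becomes a JSON record, with the named capture
    # groups as fields: {"url":"/index.html","code":"200"}
    recs-frommultire --re 'url,code="\S+ (\S+)[^"]*" (\d+)' access.log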
From there, I typically end up using recs-grep or recs-xform to filter or transform the raw data. This is where you isolate the records of interest or exclude the data you don't particularly want, and where you can transform dates and times or convert units of measure.
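Both tools take a small snippet of Perl with the current record available as $r. A minimal sketch, assuming upstream stages produced code and latency_us fields (those names are made up here):

    # keep only successful responses
    recs-grep '$r->{code} == 200'

    # derive a friendlier unit from a microseconds field
    recs-xform '$r->{latency_ms} = $r->{latency_us} / 1000'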
The actual analytics tend to be done with recs-collate. This lets you aggregate in almost any way you desire and get a ton of great stats out.
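The aggregator syntax is roughly name=aggregator,field (flags from memory; recs-collate --help lists everything available):

    # per-URL request count and average latency
    recs-collate --key url -a count -a avg_latency=avg,latency_ms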
For presentation, recs-totable gives you a nice, quick glance at results, and recs-tognuplot gives you quick graphs. And if you want to hand results off to a spreadsheet or another tool, recs-tocsv is a good way to go.
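Since every stage is just one JSON record per line on stdin/stdout, presentation is one more pipe. A minimal sketch (both tools have options worth checking via --help):

    echo '{"url":"/a","avg_latency":12.3}' | recs-totable           # quick console table
    echo '{"url":"/a","avg_latency":12.3}' | recs-tocsv > out.csv   # hand off to a spreadsheet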
A really, really simple example (spelled out as an actual pipeline after the list):
1) cat your webserver log to recs-frommultire, defining an expression to pull out URLs, page latencies, and response codes
2) pipe to recs-grep and filter out URLs that aren't of interest, like favicon gets
3) pipe to recs-grep again to select only 200s
4) pipe to recs-xform to translate dates into something like UTC milliseconds
5) pipe to recs-collate to build a histogram of average latencies for each unique URL
6) pipe to recs-totable for prettified output
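Strung together, steps 1-6 come out looking something like this. Treat it as a sketch: the regex assumes an Apache-style access log with a trailing latency-in-microseconds field (%D), and the exact flags are from memory, so verify against each tool's --help:

    cat access.log \
      | recs-frommultire \
          --re 'date,url,code,latency_us=^\S+ \S+ \S+ \[([^\]]+)\] "\S+ (\S+)[^"]*" (\d+) \S+ (\d+)$' \
      | recs-grep '$r->{url} !~ m{favicon}' \
      | recs-grep '$r->{code} == 200' \
      | recs-xform 'use HTTP::Date; $r->{ts_ms} = str2time($r->{date}) * 1000' \
      | recs-collate --key url -a avg_latency=avg,latency_us \
      | recs-totable

(HTTP::Date is just one way to parse common-log dates; swap in whatever date parser you already have installed.)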
The real power of the system lies in the ability to chain transforms. You can bucket out percentiles, dice by many dimensions, and collate along many axes. Best of all, since it's just shell commands, you can mix in perl or python for extra-complex steps.
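And since each record is one JSON object per line, "mixing in perl" can literally be a one-liner sandwiched between recs stages. A sketch, with slow and latency_ms as made-up field names and records.json standing in for an earlier pipeline:

    cat records.json \
      | perl -MJSON::PP -nE '
          my $r = decode_json($_);
          $r->{slow} = ($r->{latency_ms} // 0) > 500 ? 1 : 0;   # arbitrary threshold
          say encode_json($r)' \
      | recs-collate --key slow -a count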