Wow! My eyes popped out for a second when I saw my name there in the README. I wrote an alternative router for Rails (https://github.com/stevegraham/rails/pull/1) because I wanted to be able to generate links to parent and child resources automatically by reflecting on the routing table. I think this would be a useful feature for hypermedia APIs. I'm currently working on implementing this in a project called Restafarian (https://github.com/stevegraham/restafarian/blob/master/examp...). I couldn't see a way to do this with Journey (the existing Rails router), which is a deterministic finite state automaton, so I wrote one using a modified trie structure where the nodes also have a reference to their parent. It also happened to be a bit faster, which was a nice bonus, but it looks like they've since made changes that bring Journey to nearly the same performance.
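For anyone curious what the parent back-link buys you, here's a minimal sketch in C (hypothetical, not the actual structure from my router or this library): child links do the matching, and the parent pointer is what makes upward reflection, e.g. generating links to ancestor resources, possible.

    /* Minimal sketch of a parent-linked trie node for routing.
     * Hypothetical; not the actual structure from either router. */
    #include <stdio.h>

    struct route_node {
        const char        *edge;    /* path segment, e.g. "users" or ":id"   */
        struct route_node *parent;  /* back-link enabling upward reflection  */
        void              *handler; /* non-NULL marks an endpoint node       */
    };

    /* Walk the parent links of a matched node to enumerate its
     * ancestor resources, e.g. to generate links to them. */
    static void print_ancestors(const struct route_node *n) {
        for (n = n->parent; n != NULL; n = n->parent)
            printf("ancestor segment: %s\n", n->edge);
    }

    int main(void) {
        struct route_node root  = { "",      NULL,   NULL };
        struct route_node users = { "users", &root,  NULL };
        struct route_node id    = { ":id",   &users, (void *)1 };
        print_ancestors(&id); /* -> "users", then the root */
        return 0;
    }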
Hi! Your implementation was an inspiration to me, so I mixed in some other ideas to implement this (for fun and experimentation).
This router library can be used for different purposes (beyond URL dispatching), although the performance gain per individual request isn't large.
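As a sketch of what use outside URL dispatching could look like (everything here is hypothetical and stubbed with a linear scan to stay self-contained; it's not this library's actual API), you can route on command or job names just as well as on HTTP paths:

    /* Sketch: using a path-style router for non-HTTP dispatch.
     * A real router would match with a compiled trie; a linear
     * prefix scan keeps the example short and self-contained. */
    #include <stdio.h>
    #include <string.h>

    typedef void (*handler_fn)(const char *arg);

    static void handle_encode(const char *arg) { printf("encode: %s\n", arg); }
    static void handle_resize(const char *arg) { printf("resize: %s\n", arg); }

    static struct { const char *prefix; handler_fn fn; } table[] = {
        { "/job/encode/", handle_encode },
        { "/job/resize/", handle_resize },
    };

    static int dispatch(const char *key) {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
            size_t n = strlen(table[i].prefix);
            if (strncmp(key, table[i].prefix, n) == 0) {
                table[i].fn(key + n); /* remainder becomes the argument */
                return 0;
            }
        }
        return -1; /* no route matched */
    }

    int main(void) {
        /* Dispatch on job/command names rather than HTTP requests: */
        return dispatch("/job/encode/webm");
    }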
(Pre-disclaimer: I think it's great that the author spent some time benchmarking and hacking, and I don't mean for this comment to tear down their work; I'm just answering the above question, which was also the first thing that came to my mind when I saw this.)
The slowest URL router benchmarked there[1] does 10462.0 runs per second, which is just under 0.1ms per run.
I'm not sure what performance is expected of Rails handlers, but the first Stack Overflow thread[2] that came up on a simple Google search had Twitter doing 600 req/s across 180 instances, which comes to 300ms per request per instance.
By that math, the slowest URL router accounts for about 0.033% of the per-request time (arithmetic spelled out below).
Caveat: 300ms/req is slower than you'd like for an interactive website; real websites may have more complicated routing tables than this benchmark. Counter-caveat: 0.1ms is not enough to matter pretty much anywhere except for the stock market.
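For anyone who wants to check the arithmetic (the 0.033% comes from rounding the router cost up to 0.1ms):

    /* Back-of-the-envelope check of the numbers above. */
    #include <stdio.h>

    int main(void) {
        double router_ms  = 1000.0 / 10462.0;       /* ~0.0956 ms per run */
        double request_ms = 180.0 / 600.0 * 1000.0; /* 180 instances / 600 req/s
                                                       = 300 ms per request */
        printf("router %.4f ms, request %.0f ms, share %.3f%%\n",
               router_ms, request_ms, 100.0 * router_ms / request_ms);
        return 0; /* prints: router 0.0956 ms, request 300 ms, share 0.032% */
    }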
The linked SO answer is fairly dated (2011); Twitter was able to handle bursts of up to 140K req/s (with Scala) as of last year.[1]
I'd assume Rails has improved hugely since then; either way, 600 req/s over 180 instances is absolutely abysmal performance O_o
I'd expect Rails to clock in well under 300ms per request now, and looking at Basecamp's site[2], that seems to be the case: impressively snappy page loads. (If it's caching the kitchen sink and not hitting the live Rails stack, then we'd need to look at a different example, but this site has pretty much optimal load time.)
I wrote the trie implementation referenced in the benchmarks. I didn't do it for speed, I did it because I couldn't add the router functionality I needed using the existing underlying implementation.
Cool, nice work! I didn't actually look at the benchmark and was just making (perhaps poor) assumptions. I'd imagine it does come with a nice little performance boost though?
Probably not. But it's a very nice library for use in C applications with a small embedded webserver.
This + libev/libuv + the Node.js HTTP parser makes a nice little application server; rough sketch below.
You shouldn't go replacing your Ruby apps with C, but there are many cases where embedding Ruby/Python/etc. into your C app just doesn't make sense, and this is a good way of bringing nice features from those more webby frameworks to C.
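Here's roughly how those pieces snap together (on_url and http_parser_execute are the Node.js parser's real callback API; route_dispatch is a hypothetical stand-in for whichever router you embed):

    /* Sketch: joyent/http_parser feeding a router. The on_url callback
     * and http_parser_execute() are the parser's real API; route_dispatch()
     * is a hypothetical stand-in for the embedded router. */
    #include <stdio.h>
    #include <string.h>
    #include "http_parser.h"

    static void route_dispatch(const char *path, size_t len) {
        printf("would route: %.*s\n", (int)len, path); /* router hook */
    }

    static int on_url(http_parser *p, const char *at, size_t length) {
        (void)p;
        route_dispatch(at, length); /* hand the request path to the router */
        return 0;
    }

    int main(void) {
        const char *req = "GET /users/42 HTTP/1.1\r\nHost: x\r\n\r\n";
        http_parser parser;
        http_parser_settings settings;

        memset(&settings, 0, sizeof settings);
        settings.on_url = on_url;
        http_parser_init(&parser, HTTP_REQUEST);
        /* In a real server, libev/libuv would call this per read() chunk. */
        http_parser_execute(&parser, &settings, req, strlen(req));
        return 0;
    }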
Please don't use autotools. This project is a prime example of why: the autotools machinery bundled with it is larger than the project itself. An alternative is a simple shell configure script that checks the environment. A few other things I don't understand: why did you write strndiff/strdiff, str_repeat, and str_split? They're the same as strncmp/strcmp, strcpy, and strtok; you've just duplicated libc for some reason.
The behavior of strndiff/strdiff is different from strncmp/strcmp: strn?cmp returns a sign, not the offset at which the strings differ (see the sketch below). They are different functions.
And in order to ship this package as a deb and deploy it to different platforms, using autotools to test for C features is a requirement. (We used to build the project with CMake before we switched to autotools.)
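Roughly, strdiff behaves like this (a simplified sketch of the idea, not the exact code in the repo):

    #include <stdio.h>
    #include <stddef.h>

    /* Simplified sketch: return how far two strings agree,
     * instead of strncmp's -1/0/+1 sign. */
    static size_t strdiff_sketch(const char *a, const char *b) {
        size_t i = 0;
        while (a[i] != '\0' && a[i] == b[i])
            i++;
        return i; /* offset of the first differing byte */
    }

    int main(void) {
        printf("%zu\n", strdiff_sketch("/user/10", "/user/11")); /* 7 */
        return 0;
    }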
Reading it again, strdiff is the same as strspn, but yours doesn't actually handle C strings and reads past the NUL terminator. You can do feature tests yourself without pulling in the whole autotools mess: http://git.musl-libc.org/cgit/musl/tree/configure
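A typical hand-rolled feature test is just "try to compile this, check the exit status", e.g. a probe for strndup:

    /* try-strndup.c: if this compiles and links, the configure
     * script defines HAVE_STRNDUP. */
    #define _GNU_SOURCE
    #include <string.h>

    int main(void) {
        char *p = strndup("abc", 2);
        return p == NULL;
    }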
I too reliably avoid libraries that depend on these old, bloated tools. I appreciate that autotools is helpful for packaging, but the script complexity and lack of portability are an undeniable trade-off of using automake. The usefulness of this library for me would be in a static embedded use case, where automake is of questionable value.
Well, implementing these feature-testing scripts is kind of a waste of time, so I'd rather pick up autotools, which provides an integrated set of features.
To use it in a static embedded use case, you can write your own build script to compile the library (it's not hard to write one). The cflags are described in src/Makefile.am,
and you can use pkg-config to list the flags you need.
One thing you might want to consider (or might not) is looking at Mongrel 2. It uses ANSI C and supports a variety of platforms, using only gmake and standard macros. I haven't looked at the code very much myself, but it's a pretty neat accomplishment.
And then package maintainers hate you because they have to patch your shell scripts when building for a non-standard platform or environment. Good job.
Almost all of those aren't part of a 'configure' check: when you pass CFLAGS, CC, and the prefix, they're just passed through to make. Cross-compiler workarounds have no place in a build system.