I'm interested enough in the idea to pick up the (partially-completed) book, but I wonder if this isn't far too complex a solution for 95% of cases. By the time your system lands in the other 5%, you're looking at a massive rearchitecting effort anyway.
Of course, to be fair, you're probably already looking at a rearchitecting effort at this point!
However, looking at the architecture diagram, much of the complexity could be hidden behind the scenes by a dedicated toolset built on top of Postgres, rather than by trying to cobble together Kafka, Thrift, Hadoop, and Storm.
Sometimes, one big tool beats a lot of small tools.
For that matter, I wonder whether a cluster of Postgres-XC (or Storm) servers couldn't do the same thing without anyone having to learn a stack of complex tools.
Step 1: Everything goes through a stored procedure. Deletes, updates, and creates are code-generated through a DSL. Migrations would be a pain under this system, but that's mitigated by the fact that the tool is handling the complexity.
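To make that concrete, here's a minimal sketch of what the DSL's generated output might look like; the users table, columns, and procedure name are all hypothetical, not anything from the book:

    -- Hypothetical output of the DSL's code generator: one create
    -- procedure per entity, writing to the table of record.
    CREATE OR REPLACE FUNCTION create_user(p_name text, p_email text)
    RETURNS bigint AS $$
    DECLARE
        new_id bigint;
    BEGIN
        -- The table of record gets the write first.
        INSERT INTO users (name, email)
        VALUES (p_name, p_email)
        RETURNING id INTO new_id;
        RETURN new_id;
    END;
    $$ LANGUAGE plpgsql;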
This stored procedure writes to the database of record and then sends a series of updates to what is essentially a system of real-time materialized views, serving the same purpose as the standard NoSQL schema.
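As a rough sketch of that fan-out step (again with made-up table names, and assuming Postgres 9.5+ for the ON CONFLICT upsert), the generated procedure could maintain a denormalized table that plays the role of a real-time materialized view:

    -- Sketch of the view-maintenance step: keep a denormalized
    -- per-user order count in sync with the table of record.
    CREATE OR REPLACE FUNCTION refresh_user_order_count(p_user_id bigint)
    RETURNS void AS $$
    BEGIN
        INSERT INTO user_order_counts (user_id, order_count)
        SELECT p_user_id, count(*)
        FROM orders
        WHERE user_id = p_user_id
        ON CONFLICT (user_id)
        DO UPDATE SET order_count = EXCLUDED.order_count;
    END;
    $$ LANGUAGE plpgsql;

Because the record write and the view refresh can run in the same transaction, the "views" stay consistent with the table of record, which is a big part of what the lambda architecture spends all that machinery on.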
The goals of the lambda architecture would still be met, with lower data-side complexity. After I read through Big Data I'll revisit this with a more nuanced view, but I wonder if Hadoop and NoSQL really give you anything.