This is a reasonable architecture, but it introduces complexity of its own - you have to implement all your view-creation logic in two places, which should go against every programmer's instincts. If you have a system capable of streaming all the events and writing out the reporting views, why not use it for everything?
At my previous^2 job, we had a distributed event store that captured all data and could give you all events (or events matching a very limited set of possible filters) either in a given time range, or streaming from a given time onwards. For any given view we'd have four instances of the database containing it, each populated by its own streaming inserter; if we discovered a bug in the view-creation logic, we'd delete one database and re-run the (newly updated) inserter until it caught up, then repeat with the others (queries automatically went to the most up-to-date view, so this was transparent to the query client - they'd simply see bugged data (some of the time) until all the views were rebuilt).
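The inserter pattern looks roughly like the sketch below; the API names (stream_from, apply_event, last_applied_timestamp) are hypothetical stand-ins, not the system we actually had:

```python
# Rough sketch of a streaming inserter; event_store / view_db APIs are
# hypothetical stand-ins, not the real system described above.
import time

def run_inserter(event_store, view_db, poll_interval=0.5):
    """Fold events into one view-database instance, resuming from a checkpoint.

    To rebuild a view after fixing a bug: drop the view's tables, reset its
    checkpoint to zero, and run this same loop until it catches up.
    """
    checkpoint = view_db.last_applied_timestamp()  # 0 for a fresh rebuild
    while True:
        for event in event_store.stream_from(checkpoint):
            view_db.apply_event(event)   # the only place view-building logic lives
            checkpoint = event.timestamp
        time.sleep(poll_interval)        # a little latency is acceptable
```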
The event system guaranteed consistency at the cost of a bit of latency (generally <1 second in practice, good enough for all our query workloads); if an event source hard-crashed, all event streams would stop until it was manually failed (ops were alerted if events got out of date by more than a certain threshold). The same could happen if someone forgot to manually close the event stream after taking a machine out of service (but at least that only happened in office hours); hard crashes were thankfully pretty rare. Rebuilding a full view after discovering a bug was obviously quite slow, but there was no way to avoid that (and again, it was quite rare).
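The staleness alert amounts to something like the following sketch; the threshold and helper names are made up for illustration:

```python
# Sketch of the "events out of date" alert; names and threshold are illustrative.
import time

STALENESS_THRESHOLD_SECONDS = 60  # made-up value, not the real threshold

def check_view_staleness(view_db, page_ops):
    lag = time.time() - view_db.last_applied_timestamp()
    if lag > STALENESS_THRESHOLD_SECONDS:
        # A hard-crashed event source, or a stream never closed after a machine
        # was taken out of service, shows up here as ever-growing lag.
        page_ops(f"view is {lag:.0f}s behind the event stream")
```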
In use it was a very effective architecture: we handled consistency of the event stream in one place, and building the view in another. We only had to write the build-the-view code once, and we only had one view store and one event store to maintain. And we built it all on MySQL.