We did something similar at a previous job, except we set up an AWS Lambda function triggered (via DynamoDB Streams) on every insert or update to a DynamoDB table. The Lambda function flattened the updated record and inserted it into our Redshift cluster, which gave us a real-time ETL pipeline for our DynamoDB data. That let us report on our DynamoDB data just like our relational data.
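For anyone curious, here's a minimal sketch of that kind of handler, assuming a DynamoDB Streams trigger and a psycopg2 connection to Redshift. The `events_flat` table, the `flatten` helper, and the `REDSHIFT_DSN` env var are all hypothetical names for illustration; a production version would batch loads (COPY via S3) rather than issue row-by-row INSERTs.

```python
import os
import psycopg2
from boto3.dynamodb.types import TypeDeserializer

deserializer = TypeDeserializer()

def flatten(item, parent_key="", sep="_"):
    """Flatten nested dicts one level at a time: {"a": {"b": 1}} -> {"a_b": 1}.
    Lists/sets are left as-is here; a real pipeline would handle them too."""
    flat = {}
    for key, value in item.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten(value, new_key, sep))
        else:
            flat[new_key] = value
    return flat

def handler(event, context):
    # Triggered by a DynamoDB Stream; event["Records"] holds the changes.
    conn = psycopg2.connect(os.environ["REDSHIFT_DSN"])  # hypothetical env var
    with conn, conn.cursor() as cur:
        for record in event["Records"]:
            if record["eventName"] not in ("INSERT", "MODIFY"):
                continue
            # NewImage is in DynamoDB's typed JSON ({"S": ...}, {"N": ...});
            # deserialize it into plain Python values first.
            image = record["dynamodb"]["NewImage"]
            item = {k: deserializer.deserialize(v) for k, v in image.items()}
            row = flatten(item)
            cols = ", ".join(row)
            placeholders = ", ".join(["%s"] * len(row))
            cur.execute(
                f"INSERT INTO events_flat ({cols}) VALUES ({placeholders})",
                list(row.values()),
            )
```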
We have another GR team that does that and it works well. It’s complicated by the fact that at Amazon every team uses its own AWS account. Concretely, a service’s DynamoDB tables don’t live in the same AWS account, let alone the same VPC, as the Redshift cluster. You can work out the cross-account permissions, but it’s real friction.
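The usual shape of that cross-account wiring is an STS assume-role hop: the pipeline’s account assumes a role the service team has created in theirs. A sketch, with the role ARN and table name made up for illustration:

```python
import boto3

sts = boto3.client("sts")
# Assume a role the table-owning account has granted us (hypothetical ARN).
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/CrossAccountDynamoRead",
    RoleSessionName="etl-pipeline",
)["Credentials"]

# A DynamoDB client that acts under the assumed role, so reads hit the
# other account's table.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
resp = dynamodb.scan(TableName="ServiceTeamTable", Limit=10)  # hypothetical table
```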
We’re trying to get to a place where the data lives in S3 for engineers to build products on and for on-calls to do sanity checks, and in Redshift for our BI needs.
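A sketch of that split, assuming flattened records landed in S3 as JSON lines and then bulk-loaded into Redshift with COPY. Bucket, prefix, table, and IAM role names here are placeholders:

```python
import json
import boto3
import psycopg2

s3 = boto3.client("s3")

def land_in_s3(rows, bucket="etl-landing", key="events/batch-0001.json"):
    # One JSON object per line; Redshift's COPY ... FORMAT AS JSON 'auto'
    # and most ad-hoc tooling can read this format directly from S3.
    body = "\n".join(json.dumps(row) for row in rows)
    s3.put_object(Bucket=bucket, Key=key, Body=body.encode("utf-8"))

def copy_into_redshift(dsn, bucket="etl-landing", prefix="events/"):
    # Bulk-load everything under the prefix into the BI table.
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(
            f"""
            COPY events_flat
            FROM 's3://{bucket}/{prefix}'
            IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftCopyRole'
            FORMAT AS JSON 'auto'
            """
        )
```

Keeping S3 as the source of truth and treating Redshift as a downstream copy is what makes both audiences happy: engineers and on-calls get cheap, direct access to the raw files, and BI gets fast SQL.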