Compute/storage separation, instant shutdown and scale up/down (horizontally or vertically), multi-warehouse, semi-structured queries, change streams on tables and external tables, external tables (data lake), stored procedures, UDFs/UDTFs, cloud agnostic (AWS, Azure, Google), data exchange/data sharing, CLI and drivers, external functions (remote inference engine invocation), Snowpipe (ingest files even when your warehouses are down), tasks (DAGs), I could go on...
Scaling Redshift up and down was a nightmare. Tracking files on ingestion was a nightmare. Turning semi-structured data into structured data was a nightmare. I could also go on...
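To make that last nightmare concrete: without native semi-structured support, you end up writing preprocessing code yourself to flatten nested JSON into tabular columns before loading. A minimal stdlib-Python sketch of that kind of step (the record shape and column-naming scheme here are invented for illustration, not anyone's actual pipeline):

```python
import json

def flatten(record, prefix=""):
    """Recursively flatten nested dicts into dot-separated column names."""
    row = {}
    for key, value in record.items():
        col = f"{prefix}{key}"
        if isinstance(value, dict):
            # Recurse into nested objects, extending the column prefix.
            row.update(flatten(value, prefix=f"{col}."))
        else:
            row[col] = value
    return row

raw = json.loads('{"id": 1, "user": {"name": "a", "geo": {"city": "x"}}}')
print(flatten(raw))
# → {'id': 1, 'user.name': 'a', 'user.geo.city': 'x'}
```

And this toy version doesn't even touch arrays, type drift, or schema evolution, which is where the real pain lived.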
I'm a very early customer, and a big fan of SF if you couldn't tell.
- Performance on tables not cached on the warehouse instance is awful. That's the price you pay for shutting a warehouse down.
- I wish it were cheaper. If you run queries against a warehouse 24/7, preventing it from auto-suspending, you'd better hope it's tiny. And even then, the cost might incentivize you to employ a different strategy entirely.
The other stuff you mention is them building a moat around your data lake. They're pretty good at that. I'd happily get locked into Snowflake for the moment. Redshift really does look like an amateur-hour product compared to the tools you get with Snowflake.
Even then, as good as Snowflake is, our internal users went from complaining about the performance of Looker and Redshift to complaining about the performance of Tableau and Snowflake. I don't know if you can ever please anyone in this space...