Well, you might want to tell Facebook, Twitter, Netflix, Yahoo, Spotify, eBay, etc. that they don't know how to design software systems, because all of them have a long history (check their GitHubs) of creating and adopting pretty cutting-edge technologies.
For me the best software systems are those that are well architected and use the best available technology. This doesn't mean we should all be building on Tomcat/Oracle/Apache stacks just because they are less shiny.
Yes, please do look at them, because Google, Facebook, Twitter, Yahoo, and eBay use MySQL pretty heavily for many of their core storage needs (look at the contributor list for WebScaleSQL). Spotify uses Postgres for the same. All of them use other things as well, but only in cases where they are willing to make performance or reliability trade-offs, such as for colder or analytical data. Of course they also use things like Sherpa, Cassandra, HBase, etc., but they give up consistently low latency, consistency, or availability when they do so.
The point is, if you are going to bet your business on a technology, it helps if it has been tested with production workloads in many different conditions and at many scales; you want to know about as many of its shortcomings as you can. For many of the use cases people reach for things like Cassandra for, they can tolerate 30ms+ reads and potential read inversions. Redis is used pretty heavily, but it is relatively simple code: you can trace through the entire write path pretty easily and get a sense of its limitations (being single-threaded is a blessing and a curse; you really need to be careful about bad tenants, because a single slow query will cause an availability event for everyone). HBase is used in a few places, but usually only for cold data, once they expect it to be read only occasionally and don't want to keep using space for it on their MySQL PCI flash devices. There are a bunch more, but they all have some latency, consistency, or availability downside compared to a traditional sharded, B+-tree-backed transactional store.
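To make the Redis point concrete, here's a minimal sketch (my own illustration, not from the parent; it assumes a local Redis instance on the default port and the redis-py client) of how one bad tenant's slow query stalls everyone on the single-threaded server:

    # Hypothetical demo (assumes local Redis on the default port, redis-py
    # installed): one expensive Lua script occupies the single-threaded
    # server, so even a trivial GET from another client stalls behind it.
    import threading
    import time

    import redis

    slow_client = redis.Redis(host="localhost", port=6379)
    fast_client = redis.Redis(host="localhost", port=6379)

    # Redis executes Lua scripts on its one main thread; nothing else is
    # served until this busy loop returns.
    BUSY_SCRIPT = "local i = 0 while i < 100000000 do i = i + 1 end return i"

    def bad_tenant():
        slow_client.eval(BUSY_SCRIPT, 0)  # the "single slow query"

    t = threading.Thread(target=bad_tenant)
    t.start()
    time.sleep(0.1)  # give the slow script a head start

    start = time.monotonic()
    fast_client.get("some_key")  # a cheap read for an innocent tenant...
    print(f"GET took {time.monotonic() - start:.2f}s")  # ...waits on the script

    t.join()

This is also why bigger deployments tend to isolate tenants onto separate Redis instances: the single thread that makes the write path easy to audit is the same thread every client shares.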
All the big guys do have a history of creating new/novel infrastructure pieces. They create them because they don't (can't really) trust any brand-new infrastructure software they didn't have at least a major hand in creating. You'll notice that if they use a new thing, it's after extensive testing and patching and contributions.
As a small startup, you might not have the time for the extensive testing and patching of hot new technologies that Facebook, Twitter, and Netflix do.
For the big guys it's not about trust. They just hit the limits of the current tried-and-true before anyone else does, and as a result they have to forge new ground.
If you don't run at the same scale as those guys, you won't hit those same limits. But if you do reach their scale, you will find that your needs are suddenly very much a unique snowflake, requiring you either to create something new or to heavily tweak something that already exists.
Plenty of time for that when you reach the scale that justifies it, though.
This is simply not true. Many companies in the world (including on this list) do use brand-new software that they had no hand in creating. A classic example is the Big Data space: there are plenty of very early adopters of most of the Hadoop stack, e.g. Spark. Or look at how many companies started using Nginx or Go even though less shiny solutions already existed.
And I'm not sure if you've worked for a large company, but they largely comprise lots of little startup-sized teams. The same principles apply regardless, e.g. spiking technologies out, managing risk, etc.
What I have an issue with is these stupid generalisations: less shiny = good, shiny = bad. The merits of the architecture and technology seem to be completely ignored.
The common rule of thumb seems to be: less shiny = battle-tested (hey, if it has bullet holes, even better), shiny = hasn't even gone through pre-flight testing.
By less shiny I mean that it's old, not that it's crummy quality-wise. Old code is not like wine: if it was crap then, it will be crap now. Old code is like an old house: if it's well made, tended to, and built on solid principles, it can last generations.
Extensible domain logic is something that generally does not age well. Old utility libraries with well-defined interfaces, on the other hand, are invaluable in technical computing.
It looks to me like they use bleeding-edge solutions or develop their own when "standard" tech isn't doing the job well enough. And they start slow, using it in non-critical pieces of software first.