This is simply because most high-volume sites don't use these things anyway; they become very hard to scale later on.
If you start with the assumption that "you don't need a DBA," then that's probably true.
Just to give you an idea of my background, I work on a system using a commercial RDBMS that "scales" to thousands of commits/sec and tens of terabytes of data. We expect to take it to tens of thousands of commits (we already do that many reads!) and hundreds of terabytes with no major structural changes. One thing you learn in this game is that database agnosticism is a wild goose chase. To really scale, you need to choose a technology intelligently, use its features to the fullest, and just accept that you will be "locked in". We couldn't port to another RDBMS if we tried, because certain things, like our chosen database's locking strategy, are baked into the way we do things. It's not a matter of SQL syntax; we'd be starting again from scratch, and we'd need new algorithms. But we can do things, and take things for granted, that most of the Internet peanut gallery considers impossible, because they start from the assumption that abstracting the database actually helps anything.
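To make the lock-in point concrete, here's a rough sketch of the kind of thing I mean. It uses PostgreSQL's FOR UPDATE SKIP LOCKED via psycopg2 purely as an illustration (not our actual engine, and the table and column names are made up): many workers can claim jobs concurrently without blocking each other, but only because of that engine-specific locking behaviour. On a database without it, you'd need a different algorithm, not just different syntax.

    # Illustrative sketch only: a job-queue "claim" step that leans on
    # PostgreSQL's FOR UPDATE SKIP LOCKED. The concurrency pattern exists
    # *because* of this engine-specific locking feature; porting it is a
    # redesign, not a syntax change. Table/column names are hypothetical.
    import psycopg2

    def claim_next_job(conn):
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT id, payload
                FROM jobs
                WHERE status = 'pending'
                ORDER BY id
                LIMIT 1
                FOR UPDATE SKIP LOCKED  -- skip rows other workers have locked
                """
            )
            row = cur.fetchone()
            if row is None:
                return None  # nothing claimable right now
            job_id, payload = row
            cur.execute(
                "UPDATE jobs SET status = 'running' WHERE id = %s", (job_id,)
            )
            conn.commit()  # releases the row lock and publishes the claim
            return job_id, payload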
That's what I got from it too. The framework being this flexible should help a DBA fully use the database to his advantage. The project I work on has had to do some non-standard things with Rails, and its flexibility has made that much easier. In our view, Rails and its defaults are a starting point providing the basic infrastructure to get the project going, not an end-all-be-all solution. I see the same thing in NoSQL solutions: they try to give the developers and the DBA as much flexibility as possible, but they don't really seem to offer any defaults. An RDBMS is basically the sensible defaults without the flexibility. Maybe the in-between can be found.