Technical writing is boring. But having an artsy trendy programming guide as part of a set of introductory materials is a way to "take a break" mentally without having to go off-topic.
Would you rather have one type of car, made by the government, that works pretty reliably and is a middle-ground car, or would you rather have a selection of cars to choose from: sports car, truck, sedan, minivan, etc.?
Everyone has different needs. I use MySQL extensively for its memory-based storage engine, MyISAM for quick & dirty non-essential data I/O, and InnoDB for when data needs to be managed securely for processes.
I would use PG for transaction-based data processing in a high-volume situation with multi-processing clusters. Otherwise MySQL works fantastic for all my needs.
Would you rather have a car that runs into a wall and explodes and kills everyone inside or no car at all? The car is like using car analogies on Slashdot and not having a car is like not using car analogies on YC.
In conclusion and in summary, go back to Slashdot. KTHXBAI.
It isn't about better vs. worse. It's about the right tool for the job.
For example, if you needed to store key => value pairs, how would you do that? If you use an array of pairs (array[0] = ('key', 'value')), you have to iterate over the entire array to find the key you're looking for (O(n)), but if you use a hash table, you can do key look-ups in constant time (O(1)).
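That trade-off is easy to see in a quick Python sketch (all the names and data here are just for illustration): a list of pairs has to be scanned, while a dict hashes straight to the entry.

```python
# Key lookup in a list of (key, value) pairs: O(n) linear scan.
pairs = [("a", 1), ("b", 2), ("c", 3)]

def list_lookup(pairs, key):
    for k, v in pairs:          # may have to touch every element
        if k == key:
            return v
    raise KeyError(key)

# The same data in a hash table: O(1) average-case lookup.
table = dict(pairs)

print(list_lookup(pairs, "c"))  # scans all three entries
print(table["c"])               # one hash computation
```

With three entries the difference is invisible, but the scan grows linearly with the data while the hash lookup stays flat.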
MySQL's different storage engines provide similar opportunities to use different ways of storing data for different ends. MyISAM has amazingly fast read speeds, but is really poor under write conditions. Sounds like a crappy storage engine, right? Well, what if you have a table of zip codes and their geo-coordinates? How often are you going to be writing new data to that table? Every couple months in a batch update? So, if you put your postal codes in a MyISAM table you can take advantage of high read rates and ignore the fact that MyISAM is terrible for writes since you don't really write to the table.
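The zip-code case might look something like this in MySQL DDL (a minimal sketch; the table and column names are made up for illustration):

```sql
-- Read-mostly lookup table: MyISAM's fast reads matter here,
-- and its poor write performance doesn't, since the table is
-- only refreshed in an occasional batch update.
CREATE TABLE zip_codes (
    zip       CHAR(5) PRIMARY KEY,
    latitude  DECIMAL(8, 5) NOT NULL,
    longitude DECIMAL(8, 5) NOT NULL
) ENGINE = MyISAM;
```

A transactional orders table in the same database could use `ENGINE = InnoDB` instead; the engine is chosen per table, which is the whole point of the argument above.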
I think the argument is that different domains impose different requirements on the storage engine. The type of data organization I would like to use for a read-mostly web application vs. an OLTP-type transaction processing workload vs. an OLAP-type analysis workload might all differ fairly substantially.
It's not. I spent a week setting up and learning to checkout, update, add files, and commit. The only workflow change is an occasional trip to the shell to commit, but this is usually at the very tail of a session where it doesn't break flow anyway.
Then (if you put it on a public server, of course) your work is accessible from anywhere. Run into a friend at Starbucks and want to show him the module you wrote today? Just get into the shell and check out your repository.
Not only that, it's a perfect record of your project. Occasionally I need to assess the status of the project vis-a-vis some point in the past (for myself when strategizing, for my boss when he needs a report, etc). Being able to look at the commit logs saves time and produces better reports (commit logs don't forget).
So even if you use only the most basic features, you still get an immense benefit from using a VCS. And the hours it's saved me in trivial tasks made it well worth the week it took to hammer down.
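The basic cycle described above can be sketched as a shell session. The comment never names its VCS, so this uses Git purely for illustration (the checkout/update/add/commit vocabulary is much the same in CVS and Subversion); the file names, identity, and messages are all made up:

```shell
mkdir demo-project && cd demo-project
git init -q                               # create the repository
git config user.name "Dev"                # placeholder identity for the sketch
git config user.email "dev@example.com"
echo "print('hello')" > module.py         # write some code
git add module.py                         # add the new file
git commit -q -m "Add module"             # commit: the permanent record
git log --oneline                         # the log your status reports come from
```

Those last two commands are the "occasional trip to the shell" the comment mentions; everything else is a one-time setup per project.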
(At least, that's why they appeal to me...)