My brother drove his beater as an unlicensed hack in Baltimore for a year. I observed a number of issues.
I think a barrier between the passenger and the cabbie is generally good.
I think a meter system or a zone system that is externally verifiable is better than some dude's cell phone.
After you spend 40-60 hours in one week schlepping people in your un-designed hack, you'll understand why modern cabs have fairly universal accoutrements.
I don't know about your friend, but I'm happily using the latest version of Sparrow with no issues whatsoever; maybe he has some other unrelated problems...
They didn't choose Python, or Ruby, or Java, because none of those languages are as expressive or shut-up-and-get-out-of-my-way as PHP. Just because a high schooler can write PHP doesn't mean it's only a toy; it's also meant for world-class engineers.
I worked as a DBA somewhere that did everything in 5NF. It was a nightmare. I only stayed there 3 months. In my last week there, I remember having to debug a 35-table join. I'm so glad I left.
I left before they were willing to performance-test the application, but I heard that after they launched it was awful.
The Oracle optimizer quits optimizing after the first ~7 joins, so we had to do manual optimization beyond that. Almost every query was at least 7 joins. The guy running the DBA group was following the Data Model Resource Book to a "T", which I think is appropriate for an OLAP database but is not cool for OLTP.
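A back-of-envelope sketch of why an optimizer might cap exhaustive optimization at a handful of joins: even counting only left-deep join orders, the search space for n tables is n!, which explodes quickly.

```python
import math

# Number of left-deep join orderings for n tables is n!, so an
# exhaustive search over orders blows up fast -- one plausible reason
# an optimizer would stop doing full optimization after ~7 joins.
for n in (7, 12, 35):
    print(n, math.factorial(n))
# 7 is already 5040 candidate orders; 35 is astronomically large.
```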
If you want to see something hilarious, get a 12 join query and add "AND 1=0" on the end of the WHERE clause.
Oracle doesn't look at the query and immediately return no rows; instead the optimizer goes on to generate a full plan. Apparently, according to Oracle support, they have no intention of detecting "AND false" predicates to short-circuit query plan compilation.
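The shape of the trick, sketched against throwaway SQLite tables (the schema is made up for illustration). SQLite obviously isn't Oracle, so this only shows that the appended predicate logically empties the result; the punchline in the comment above is that Oracle still spends time building a full plan for it anyway.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A chain of tiny tables to join (hypothetical schema).
for i in range(12):
    cur.execute(f"CREATE TABLE t{i} (id INTEGER PRIMARY KEY, next_id INTEGER)")
    cur.execute(f"INSERT INTO t{i} VALUES (1, 1)")

# Build a 12-table join: t0 JOIN t1 ON t0.next_id = t1.id JOIN t2 ...
joins = " ".join(f"JOIN t{i} ON t{i-1}.next_id = t{i}.id" for i in range(1, 12))
query = f"SELECT t0.id FROM t0 {joins} WHERE 1=1"

print(len(cur.execute(query).fetchall()))  # 1 -- the join chain matches

# Append the always-false predicate: trivially an empty result, but
# (per the anecdote) Oracle's optimizer still compiles a full plan.
print(len(cur.execute(query + " AND 1=0").fetchall()))  # 0
```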
That could work in PostgreSQL with the genetic join optimizer (GEQO). You would not get the best plan, or necessarily a stable one, but it could be fast enough. I believe the worst reporting queries I have written were almost as bad as 35 joins. Of course they were not fast, but not as slow as one might think.
Over the years, I have come to realise that even third normal form is perhaps too much overhead for web apps. I guess that's why we have all these modern alternative DBs.
I think 3NF is a good starting point; then, as I said in another comment, you start profiling to find where expensive queries are coming into play and hurting your performance, and denormalize those issues away.
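A tiny illustration of what "denormalize those issues away" can mean, using a made-up schema in SQLite: start normalized, and if profiling shows a hot read is paying for a join, copy the needed column into the querying table, accepting that writes now have to keep the duplicate in sync.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized (3NF-ish): the customer name lives only in customers.
cur.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY,
                     customer_id INTEGER REFERENCES customers(id),
                     total REAL);
INSERT INTO customers VALUES (1, 'Acme');
INSERT INTO orders VALUES (100, 1, 9.99);
""")

# The hot query pays for a join on every read.
normalized = cur.execute("""
    SELECT o.id, c.name, o.total
    FROM orders o JOIN customers c ON o.customer_id = c.id
""").fetchall()

# Denormalized: duplicate the name onto orders so reads skip the join.
# The cost: every customer rename now has to touch orders too.
cur.executescript("""
ALTER TABLE orders ADD COLUMN customer_name TEXT;
UPDATE orders SET customer_name =
    (SELECT name FROM customers WHERE id = orders.customer_id);
""")
denormalized = cur.execute(
    "SELECT id, customer_name, total FROM orders").fetchall()

print(normalized)    # [(100, 'Acme', 9.99)]
print(denormalized)  # [(100, 'Acme', 9.99)]
```

Same answer either way; the difference only shows up in the query plan and in the write path, which is why profiling should come first.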
There must be a better answer than denormalizing. I'd try to get a DBA or DBE involved to look at the situation if possible. Re-structuring the query might be all that's needed.
I agree; I think denormalization is one of those things you do after trying to rewrite the query. After all, if you can get the optimiser to choose a quicker access plan by rewriting the query to be more efficient, then you're better off overall. There are costs to denormalization too... not to mention you need to ensure that your application code keeps the duplicated data consistent on writes.
It depends what you are doing. If you have a lot of semi-structured data, like documents, then go with a document-oriented database. If you have semi-structured data that you don't want to do much analytics on, consider a key/value store. But if you require full ACID properties, and want to reduce your application code and not worry so much about integrity, then an RDBMS is still the way to go.
That's a process called de-normalisation, covered here: http://en.wikipedia.org/wiki/Denormalisation You should only consider doing that after thorough profiling shows your joins are causing the performance loss.