There was a (crazy-talk) rationale for it, though: fsync wasn't thought to be "reliable" enough, so the order-of-magnitude slowdown was for our own good. Doubt bless() really "needed" the slowdown for RHEL.
Fsync() causes all modified data and attributes of fildes to be moved to a permanent storage device. This normally results in all in-core modified copies of buffers for the associated file to be written to a disk.

Note that while fsync() will flush all data from the host to the drive (i.e. the "permanent storage device"), the drive itself may not physically write the data to the platters for quite some time and it may be written in an out-of-order sequence.

Specifically, if the drive loses power or the OS crashes, the application may find that only some or none of their data was written. The disk drive may also re-order the data so that later writes may be present, while earlier writes are not.

This is not a theoretical edge case. This scenario is easily reproduced with real world workloads and drive power failures.
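For anyone who hasn't seen it in code: the distinction being drawn there (data handed to the drive vs. data actually forced out of the drive's cache) maps onto two different calls on Darwin, fsync() and fcntl(F_FULLFSYNC). A minimal sketch; the filename and the fall-back-to-fsync() policy are my own choices, not anything Apple prescribes:

    /* Sketch: fsync() gets the data to the drive; on Darwin,
       fcntl(F_FULLFSYNC) additionally asks the drive to flush its own
       write cache before returning. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static int flush_to_stable_storage(int fd)   /* name is made up */
    {
    #ifdef F_FULLFSYNC
        /* Darwin-only: can fail (e.g. unsupported filesystem or drive),
           in which case fall back to a plain fsync() below. */
        if (fcntl(fd, F_FULLFSYNC) == 0)
            return 0;
    #endif
        return fsync(fd);
    }

    int main(void)
    {
        int fd = open("commit-journal.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }
        if (write(fd, "commit\n", 7) != 7) { perror("write"); return 1; }
        if (flush_to_stable_storage(fd) != 0) { perror("flush"); return 1; }
        close(fd);
        return 0;
    }

The fallback is there because F_FULLFSYNC can fail on filesystems or drives that don't support it, and at that point plain fsync() semantics are all you get anyway.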
It's still Apple's fault on some level (after all, they control everything from the fsync implementation to the hard drives they choose to ship in Apple hardware), but from the perspective of the guy configuring sqlite, the full filesystem sync makes sense.
Apple totally fucked sqlite for a while (it may still be the case; I compile from source now) by doing a full filesystem flush (not fsync) on every commit:
http://adiumx.com/pipermail/adium-devl_adiumx.com/2008-April...
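What's being described there is, as far as I can tell, sqlite's F_FULLFSYNC path. If you're compiling your own sqlite (or stuck with Apple's), the flush behaviour is at least settable per connection through the fullfsync pragma; a quick sketch, with the database path made up:

    /* Sketch: turning the F_FULLFSYNC path off explicitly for one
       connection, whatever the build's default happens to be. */
    #include <sqlite3.h>
    #include <stdio.h>

    int main(void)
    {
        sqlite3 *db;
        char *err = NULL;

        if (sqlite3_open("app.db", &db) != SQLITE_OK) {   /* path is made up */
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            sqlite3_close(db);
            return 1;
        }
        /* fullfsync=OFF -> plain fsync() on commit;
           fullfsync=ON  -> fcntl(F_FULLFSYNC) on systems that support it. */
        if (sqlite3_exec(db, "PRAGMA fullfsync=OFF;", NULL, NULL, &err) != SQLITE_OK) {
            fprintf(stderr, "pragma failed: %s\n", err);
            sqlite3_free(err);
        }
        sqlite3_close(db);
        return 0;
    }

Whether Apple flipped that default in their build or patched it some other way I don't know, but the pragma is the knob stock sqlite exposes for it.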