We run an architecture somewhat similar to the one described in the article, and many of the issues you raised we either dealt with up front or realized we needed to address very soon after rolling it out.
Regarding extra latency and SQL SELECT result marshaling, we cache extensively and invalidate through message queues. The few exceptions include data that are involved in transactional contexts, like actual order placement and fulfillment.
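To make that concrete, here's roughly what the read/invalidate path looks like, boiled down to a toy Python sketch. A dict and queue.Queue stand in for the real shared cache and message broker, and all the names are invented for illustration; it's the shape of the pattern, not our actual code:

    import queue
    import threading

    # Toy sketch of cache-aside reads plus invalidation via messages.
    # In production the cache is a shared store and the queue is a real
    # broker; a dict and queue.Queue stand in here so this runs standalone.
    cache = {}
    invalidations = queue.Queue()

    def get_product(product_id, load_from_db):
        """Cache-aside read: serve from cache, fall back to the database."""
        if product_id in cache:
            return cache[product_id]
        value = load_from_db(product_id)
        cache[product_id] = value
        return value

    def publish_invalidation(product_id):
        """Writers publish an invalidation instead of touching caches directly."""
        invalidations.put(product_id)

    def invalidation_worker():
        """Each API node runs a consumer that evicts the keys named in messages."""
        while True:
            key = invalidations.get()
            cache.pop(key, None)
            invalidations.task_done()

    threading.Thread(target=invalidation_worker, daemon=True).start()

    # Read populates the cache; a later write publishes an invalidation.
    print(get_product(42, lambda pid: {"id": pid, "price": 10}))
    publish_invalidation(42)
    invalidations.join()   # wait for the consumer to evict the key
    print(42 in cache)     # False: the next read goes back to the database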
We have effectively solved the debugging/logging problem by generating and chaining request identifiers, which turns out not to be as computationally expensive as one would think.
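The mechanism is nothing fancier than this kind of thing (illustrative Python only; the header name, logger setup, and function names are made up, not our real code): mint an id at the edge, reuse one if the caller already sent it, stamp every log line with it, and chain it onto outbound calls.

    import contextvars
    import logging
    import uuid

    # Sketch of request-id chaining. All names here are invented for the
    # example; the point is that the cost is one uuid per request plus a
    # header on each hop.
    request_id = contextvars.ContextVar("request_id", default="-")

    class RequestIdFilter(logging.Filter):
        def filter(self, record):
            record.request_id = request_id.get()
            return True

    logging.basicConfig(format="%(asctime)s [%(request_id)s] %(message)s",
                        level=logging.INFO)
    log = logging.getLogger("api")
    log.addFilter(RequestIdFilter())

    def handle_request(incoming_headers):
        # Reuse the upstream id if present, otherwise generate a fresh one.
        rid = incoming_headers.get("X-Request-Id") or str(uuid.uuid4())
        request_id.set(rid)
        log.info("order lookup started")
        # The same id is chained onto every call to the next service.
        return {"X-Request-Id": rid}

    handle_request({})
    handle_request({"X-Request-Id": "req-from-upstream"})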
YAGNI is, of course, the elephant in the room. Some aspects of the architecture have turned out to be quite beneficial, but the traffic scalability afforded by shared-nothing, API-driven architectures is not something we have had the luxury of really exercising as much as we'd like :)
Yes, the caching at the API layer has proven critical for us, too. Although, of course, this introduces a cache coherency problem, and we all know what fun those can be.
You bring up a very good point that I left out:
> The few exceptions include data that are involved in transactional contexts...
It's sad how few people even understand how to write transactional code when you hand them a direct interface to SQL. (I like to think I do, but I may be fooling myself.) And trying to implement, document, test, and maintain a stateless RESTful HTTP protocol that properly supports transactions on the underlying data store is even harder.
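For what I mean by transactional code, here's a minimal Python sketch with sqlite3 standing in for the real database; table and column names are invented for the example. Decrement stock and insert the order atomically, or not at all:

    import sqlite3

    # sqlite3 is only a stand-in database; schema and names are made up.
    def place_order(conn, product_id, qty):
        try:
            with conn:  # commits on success, rolls back on any exception
                cur = conn.execute(
                    "UPDATE stock SET on_hand = on_hand - ? "
                    "WHERE product_id = ? AND on_hand >= ?",
                    (qty, product_id, qty),
                )
                if cur.rowcount == 0:
                    raise ValueError("insufficient stock")
                conn.execute(
                    "INSERT INTO orders (product_id, qty) VALUES (?, ?)",
                    (product_id, qty),
                )
        except ValueError:
            return False
        return True

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE stock (product_id INTEGER, on_hand INTEGER)")
    conn.execute("CREATE TABLE orders (product_id INTEGER, qty INTEGER)")
    conn.execute("INSERT INTO stock VALUES (1, 5)")
    print(place_order(conn, 1, 2))   # True
    print(place_order(conn, 1, 99))  # False, and nothing was written

Exposing the same guarantee through a stateless RESTful HTTP interface is where it gets genuinely hard.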