
> The level of complexity this introduces seems to be way higher than anything in the original article.

I don't find it very complex at all. You send a message to a service. If you want to get some state afterwards, you query for it.

> but anything that deals with the cart - things like checkout, removing from cart, updating quantities, etc. Adding to cart has to be mindful of queued checkout attempts.

How so? Those pages just query to get the cart's state. You'd do this even in a sync system. The only difference is that on the backend this might be implemented via a poll. On subsequent pages you'd only poll the one time, since the 'add-to-cart' call was synchronous.

> But by using asynchronous comms you're actually introducing more state into your application than synchronous comms.

I don't see how. Again, with the cart example, there is always the same state - the 'cart'. You mutate the cart, and then you query for its state. If you have an expectation of its state, due to that mutation, you just poll it. You can trivially abstract that into a sync comm at your edge.

    import time

    def make_sync(mutation, query, expected_state, timeout=5.0, interval=0.1):
        mutation()                          # fire the async mutation
        deadline = time.monotonic() + timeout
        while not expected_state(query()):  # poll until the expected state shows up
            if time.monotonic() >= deadline:
                raise TimeoutError("mutation not observed before timeout")
            time.sleep(interval)
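For the cart example, that'd look something like this (add_to_cart/get_cart/cart_id are just placeholder names):

    make_sync(
        mutation=lambda: add_to_cart(cart_id, item_id),
        query=lambda: get_cart(cart_id),
        expected_state=lambda cart: item_id in cart,
    )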



Your solution seems to assume only one thing will be accessing what is being mutated at a time. If another thread comes in and reads the cart (e.g. the user reloads the page), it isn't waiting on the queued operation anymore, so it sees the cart without that mutation applied. If you remove the operation from the queue after a few seconds of failure, then fine. But if the point is "self healing", it presumably hangs around for a while.

You have to deal with this to some extent in any webapp that has more than 1 OS thread or process. But if you're keeping actions around for minutes or hours instead of seconds, you're going to have to account for a lot of weird stuff you normally wouldn't.

If you really wanted something like this, I would think you would want a concept of "stale" data and up-to-date data. If a process is OK with stale data, the service can just return whatever it sees. But if a process isn't OK with it (like, say, checkout), you probably need to wait on the queue to finish processing.

And since the front end may care about these states, you probably need to expose this concept to clients. It seems like a client should be able to know if it's serving stale data so you can warn the user.
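Roughly the shape I have in mind (a toy in-memory sketch; the names and the require_fresh flag are made up, and in practice the pending work would live in your actual queue):

    import time
    from dataclasses import dataclass, field

    @dataclass
    class CartService:
        items: list = field(default_factory=list)
        pending: list = field(default_factory=list)   # queued mutations not applied yet

        def read(self, require_fresh=False, timeout=5.0):
            # Checkout-style callers insist on fresh data; everyone else takes what's there.
            if require_fresh:
                deadline = time.monotonic() + timeout
                while self.pending and time.monotonic() < deadline:
                    time.sleep(0.05)
                if self.pending:
                    raise TimeoutError("queued cart operations still unprocessed")
            # Expose staleness so the client can warn the user.
            return {"items": list(self.items), "stale": bool(self.pending)}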

Maybe I'm mistaken.



