> In the case of [backend for frontend] we’re adding one extra hop before the client gets their data.
> You either have the client call 20 service/data?client_type=ios or you have the frontend backend call 20 different service/data?client_type=ios
The article touches on this point, and it mirrors what I've seen as well: the time from client -> backend can be significant, for reasons completely outside of your control.
By using this pattern, you have 1 slow hop that's outside of your control followed by 20 hops that are in your control. You could decide to implement caching a certain way, batch API calls efficiently, etc.
You could do that on the frontend as well, but I've found it more complex in practice.
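To make the "20 hops in your control" point concrete, here's a minimal sketch of a BFF aggregation endpoint that fans out to its upstream services in parallel and memoizes responses. This is illustrative only, not from the article: the `Fetcher` type, `bffAggregate` function, and the caching strategy are all assumptions, and a real BFF would need TTLs, error handling, and per-request cache scoping.

```typescript
// Hypothetical BFF aggregation sketch. Names and shapes are invented for
// illustration; a production version needs cache expiry and error handling.

type Fetcher = (path: string) => Promise<unknown>;

// Naive process-wide cache; fine for a sketch, wrong for user-scoped data.
const cache = new Map<string, unknown>();

async function bffAggregate(
  paths: string[],
  fetch: Fetcher
): Promise<Record<string, unknown>> {
  // The client paid one slow hop to reach us; from here the fan-out runs
  // in parallel over the fast internal network, and repeat paths are served
  // from cache instead of hitting the upstream service again.
  const entries = await Promise.all(
    paths.map(async (p): Promise<[string, unknown]> => {
      if (cache.has(p)) return [p, cache.get(p)];
      const data = await fetch(p);
      cache.set(p, data);
      return [p, data];
    })
  );
  return Object.fromEntries(entries);
}
```

The point of the sketch is that parallelism and caching policy live in one server-side place you control, rather than being re-implemented in each client.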
Also a note: I'm not really a BFF advocate or anything, just pointing out that the network hops aren't equal. I did a spike on a BFF server implemented with GraphQL and it looked really promising.