
A couple of weeks ago I was listening to a podcast and the guest worked at this data integration company. To keep their integrations with external APIs working, they needed a way to catch vendors changing an endpoint without backwards compatibility. They would monitor the API docs for any change so they could see whether something had to be updated on their side. I thought it was kind of "brute", but then I realized there's no good way of doing this today.



Hey, Tim here. I was the one talking about this on the podcast. As I mentioned there, monitoring the APIs was a method we quickly found out didn't really scale or help. It suffered from a few issues, mainly that the vendors were updating their APIs but not their docs. The way that we tackled it in the end was to let things fail. We have actually taken this approach with many things, i.e. fail, but have a mechanism that cleans up after the new changes have been deployed.

The way we handle it now is that our integration will start throwing serialisation errors. Our platform then sends a probe to get a raw response from the system, and we let the admin see, side by side, the old data and the new data. This allows the developers to schedule a new deployment to make the fixes. The good thing is that when the new deployment is made, the orchestration around it will handle fixing the data that couldn't be resolved while the serialisation was failing.
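
Roughly, the pattern looks something like this. To be clear, this is a simplified sketch and not our actual code; all the names (FailedRecord, ReviewQueue, ingest, and so on) are made up for illustration:

  import json
  from dataclasses import dataclass

  @dataclass
  class FailedRecord:
      source: str        # which integration produced this payload
      raw_payload: str   # untouched response body, kept for side-by-side review
      error: str         # the serialisation error that triggered the capture

  class ReviewQueue:
      # Holds records that could not be deserialised until a fixed mapping ships.
      def __init__(self):
          self.items = []

      def add(self, record):
          self.items.append(record)

      def replay(self, deserialise):
          # After the new deployment, re-run everything that previously failed.
          fixed, remaining = [], []
          for record in self.items:
              try:
                  fixed.append(deserialise(record.raw_payload))
              except Exception:
                  remaining.append(record)  # still broken, keep for the next round
          self.items = remaining
          return fixed

  def ingest(source, raw_payload, deserialise, queue):
      try:
          return deserialise(raw_payload)
      except (json.JSONDecodeError, KeyError, TypeError) as exc:
          # Don't crash the pipeline: park the raw response so an admin can
          # compare the old and new shapes and a developer can schedule a fix.
          queue.add(FailedRecord(source, raw_payload, str(exc)))
          return None

The point is that the failing records aren't lost; they sit with their raw payloads until the fixed deserialiser is deployed, and then they get replayed.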

We do get other benefits out of this, including the ability to better handle integrations where you have absolutely no idea what to expect from the API, e.g. old Oracle and IBM products that don't have discovery endpoints the way Dynamics, Salesforce, etc. do.

Our recommendation after managing so many integrations is "let things fail". Embrace a data integration pattern that allows things to fail.


I think you mentioned in the podcast that managing integrations is one of the toughest parts of your whole operation. (Apologies if I am misrepresenting what you said.) I can 100% believe that embracing failure is the only realistic way to handle this at scale.

But as someone responsible for producing API docs, it really pains me to say this! I know my team goes to great lengths to ensure that our API docs are up to date; we even maintain a changelog of every single doc modification, in large part because we have people integrating directly with our Swagger/OpenAPI-based docs.

After a bit of Googling, it looks like CluedIn has an integration with us (Zuora). In case it can help you in any way, our changelog is available at [0].

Full disclosure: I work at Zuora, but am speaking for myself only.

[0]: https://community.zuora.com/t5/Developers/API-Changelog/gpm-...


Swagger/OpenAPI goes a fair way towards solving this problem by recommending semver, but it's far from widely adopted...
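
As a very rough sketch of what a consumer could do with that: check the spec's declared version up front and fail fast when the major number moves (the URL and expected version here are placeholders, not any particular vendor's API):

  import json
  from urllib.request import urlopen

  EXPECTED_MAJOR = 2  # major version our generated client was built against

  def check_spec_version(spec_url):
      # Fetch the published OpenAPI document and read its semver version string.
      with urlopen(spec_url) as resp:
          spec = json.load(resp)
      version = spec["info"]["version"]  # e.g. "2.4.1"
      major = int(version.split(".")[0])
      if major != EXPECTED_MAJOR:
          raise RuntimeError(
              "API spec is now v%s, expected major v%d; review breaking changes "
              "before deploying." % (version, EXPECTED_MAJOR))
      return version

  # check_spec_version("https://api.example.com/openapi.json")

Of course that only helps when the vendor actually bumps the version on breaking changes, which is exactly the part that's far from widely adopted.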


I'd be really interested in listening to that podcast. Are you able to share a link?


Sure, here you go: https://www.dataengineeringpodcast.com/cluedin-data-fabric-e.... He talks about that around minute 40.


API versioning is the contract by which this is supposed to be managed...


Breaking changes to production APIs with little to no notice occur more often than you'd think. Fun at scale!


Haha, I never revealed how much I think this happens; I just stated what the theoretical solution to this problem is.



