I always found that a simple merge style patch is enough, and a lot simpler. You just have to know that there is a difference between null and undefined. For large arrays, it's usually some sort of bulk operation that can be solved by an array of patch/delete requests, which is both safer and simpler. Maybe I've just not hit a proper use case for this yet
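For reference, the null-vs-undefined distinction is exactly what JSON Merge Patch (RFC 7386) pins down: null deletes a key, an absent key leaves it alone. A minimal sketch in JavaScript (my own toy implementation, just to illustrate the semantics):

```javascript
// JSON Merge Patch (RFC 7386) sketch: null deletes a key, absent keys
// are untouched, arrays and scalars replace the target wholesale.
function mergePatch(target, patch) {
  if (patch === null || typeof patch !== 'object' || Array.isArray(patch)) {
    return patch; // non-objects (including arrays) replace wholesale
  }
  const base =
    typeof target === 'object' && target !== null && !Array.isArray(target)
      ? target
      : {};
  const result = { ...base };
  for (const [key, value] of Object.entries(patch)) {
    if (value === null) delete result[key]; // null means "remove this key"
    else result[key] = mergePatch(result[key], value);
  }
  return result;
}

mergePatch({ a: 1, b: 2 }, { b: null, c: 3 }); // → { a: 1, c: 3 }
```

Note that this is why a patch can't distinguish "set b to null" from "delete b" — the usual trade-off you accept with merge-style patches.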
The problem with this standard is all the free text and bank specific fields that banks will use instead of the standard. One bank I integrated with had the equivalent of "Our fee is 5.65" in a text field which you had to parse, instead of the field for fees. Of course, the language of that string could also change. Fun times
I work in banking in the EU. We process SEPA messages only (not SWIFT), and the standards for interbank communication are very strict and top-down. For example (fictional): if you want to charge a fee when you return money after you received an investigation, it MUST be put in "field xyz", and if you do so, "field abc" MUST contain the code "ABC1" or "DEF2", etc.
The times when the standards are expanded or updated are fun (https://www.europeanpaymentscouncil.eu/what-we-do/epc-paymen...): translating hundreds of pages of PDF into working code, and then having hundreds of banks implement those changes in the same nightly hour during a weekend...but once it is working, there is no ambiguity or (horror) manual intervention in payment messages. Either you as a bank send valid messages and they are processed, or you don't and they get rejected.
In my experience at the frontlines (banks allowing users to submit SEPA XMLs), the situation is a lot messier. I ended up building an exporter from Xero (globally renowned cloud accounting software) to SEPA for both payments and direct debits, and we have several bespoke export templates for a handful of banks that want it this-not-that way.
That does sound really really fun..
What's great about XML is that free text / bank specific fields can be handled elegantly with XML's extensible structure. That is why I think ISO20022 is here to stay.
That said, this library is made to be extensible. One day I think it will even be able to encapsulate any type of bank. For example, imagine bofaISO20022.createACHPaymentInitiation or something
You can have extensible structure and fields with JSON Schema, gRPC, Cap’n Proto, etc. There’s nothing XML-specific about that.
The only thing XML gives you over any of those formats is unstructured mixing of text and data, which is more a foot-gun than anything. Oh, and of course, being significantly more verbose.
It can be very helpful when trying to figure out why one machine won't understand another, for instance.
You can put meta-data in for debugging without compromising anything, schema wise.
Or in the case of config files, there can be detailed instructions on what fields are what they should contain.
The thing about XML is that it strikes a sweet-spot between machine readable files and human readable files. (I can't believe I'm coming out as an XML apologist!)
If it were only "by machines for machines", we wouldn't consider JSON, YAML or XML as much, we'd all go for Protocol Buffers or Parquet or something.
It's the extensible nature of XML that gives it an advantage. You can add custom elements and attributes whilst conforming to the base schema.
Granted, XML isn't the only format where this is possible. You can sort of achieve it with JSON, though XML's namespace system helps deal with name collisions. Adding bank-specific messages wouldn't be possible (or would be difficult) with fixed-column formats, for example, unless they had been specifically designed to be extended.
Banks add their own features to the spec - imagine they want to add a new "Bank only" attribute that makes their XML schema differentiated and better in some way.
ISO 20022 / XML allows this without breaking anything. In the past, payment formats were fixed-width text files, whose functionality was impossible to change or improve.
Excellent example clearly from a fellow soldier from the trenches!
As somebody who has built several instances of both payments- and travel booking systems, I have seen things in systems that "adhere to published schemas" (often because the schemas were beastly, design-by-committee hellscapes of extensibility) that defy belief.
While there is a strong argument to be made that strict type systems in programming languages like Haskell and Rust make it very difficult to play outside of the rules, this is unfortunately not the case in practice when it comes to APIs - both at present, where you have a JSON Schema or OpenAPI spec if you are lucky, and in the past (XML Schema, SOAP).
I wish that the ability to express a tight, fit-for-purpose schema or API actually resulted in this commonly being done. But look at the average API of any number of card payment gateways, hotels, or airlines, and you enter a world where each integration with a new provider is a significant engineering undertaking to map the semantics - and the errors, oh, the weird and wonderful errors... - to the semantics of your system.
I am glad to work in the space-adjacent industry now, where things are ever so slightly better.
(Note the lack of sarcastic emphasis - it really is only _slightly_ better!)
This has me a little dumbfounded as either really profound or slightly misguided. How do you mean?
As I read this, you think a custom schema won't affect an implementation, but how do you expect to implement an external service (an API, for example) without the required defined schema? That's kind of the definition of a schema in this scenario.
Extending the schema might be another thing. But implementation can't work without adhering to the defined schema of the provider? Right?
Other than bank-specific custom extensions, another problem with this standard is its scope and, consequently, its size – it is vast. ISO 20022 breaks down into over 700 of what it calls «messages», covering pretty much everything from interbank settlements to bank-to-customer account statements.
Another challenge is that different banks may use slightly different versions of the standard messages, indicated via the implementation-specific concrete XML namespace in the xmlns attribute of the message envelope.
Overall, ISO 20022 is an improvement over MT940/MT942 and friends, although it is not easy to use.
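To illustrate the version-in-namespace point above, here's a minimal JavaScript sketch that pulls the message version out of the envelope's xmlns (the envelope string is a made-up fragment, and a real implementation would use a proper XML parser rather than a regex):

```javascript
// Sketch: ISO 20022 messages carry their concrete version in the
// envelope's default namespace, e.g. pain.001.001.09.
function messageVersion(xml) {
  const m = xml.match(/xmlns="urn:iso:std:iso:20022:tech:xsd:([\w.]+)"/);
  return m ? m[1] : null;
}

const envelope =
  '<Document xmlns="urn:iso:std:iso:20022:tech:xsd:pain.001.001.09"></Document>';
messageVersion(envelope); // → "pain.001.001.09"
```

Dispatching on this string is often the first thing an integration layer has to do, since two banks can both be "ISO 20022 compliant" while speaking different message versions.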
Same thing happens with ISO8583. Plenty of firms have an ISO8583-compatible spec, except anything remotely interesting happens in vendor-specific fields with a galaxy of different architectures.
Then you should store the time as well, because the number of decimals in a currency can change (see ISK). Also, some systems disagree on the number of decimals, so be careful. And of course prices can have more decimals. And then you have cryptocurrencies, so make sure you use bigints
I agree, and I just want to highlight what you said about generating a config file. It's extremely useful to constrain the config itself to something that can go in a json file or whatever. It makes the config simpler, easier to consume, and easier to document. But when it comes to _writing_ the config file, we should all use a programming language, and preferably a statically typed language that can check for errors and give nice auto complete and inline documentation.
I think aws cdk is a good example of this. Writing plain cloudformation is a pain. CDK solves this not by extending cloudformation with programming capabilities, but by generating the cloudformation for you. And the cloudformation is still a fairly simple, stable input for aws to consume.
People who are more into it usually prefer human-made puzzles, since they often have a logical path that you're supposed to find, which can be quite satisfying.

Generating sudoku puzzles is actually quite easy: just put random numbers on the board until you have a unique solution. It runs surprisingly fast. The tricky part is deciding on the difficulty. I made a program that would identify all the different sudoku techniques needed to solve each puzzle (x-wings, pairs, all the way up to chains), then set the difficulty based on what techniques were required. Code is here for anyone interested: https://github.com/magnusjt/sudoku/
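For the curious, here is a toy JavaScript sketch of the "add random digits until the solution is unique" approach described above (this is my own minimal solver, not code from the linked repo):

```javascript
// Board: flat array of 81 cells, 0 = empty.
// Check whether digit d can go at (row, col) without a row/col/box clash.
function isValid(board, row, col, d) {
  for (let i = 0; i < 9; i++) {
    if (board[row * 9 + i] === d) return false;            // row
    if (board[i * 9 + col] === d) return false;            // column
    const r = 3 * Math.floor(row / 3) + Math.floor(i / 3); // 3x3 box
    const c = 3 * Math.floor(col / 3) + (i % 3);
    if (board[r * 9 + c] === d) return false;
  }
  return true;
}

// Count solutions by backtracking, stopping early once `limit` is reached.
function countSolutions(board, limit) {
  const idx = board.indexOf(0);
  if (idx === -1) return 1; // board full: exactly one completion
  const row = Math.floor(idx / 9), col = idx % 9;
  let count = 0;
  for (let d = 1; d <= 9 && count < limit; d++) {
    if (isValid(board, row, col, d)) {
      board[idx] = d;
      count += countSolutions(board, limit - count);
      board[idx] = 0; // undo
    }
  }
  return count;
}

// Drop random digits onto the board until the solution is unique.
function generate() {
  const board = new Array(81).fill(0);
  while (true) {
    const cell = Math.floor(Math.random() * 81);
    if (board[cell] !== 0) continue;
    const digit = 1 + Math.floor(Math.random() * 9);
    if (!isValid(board, Math.floor(cell / 9), cell % 9, digit)) continue;
    board[cell] = digit;
    const n = countSolutions(board.slice(), 2);
    if (n === 0) { board[cell] = 0; continue; } // made it unsolvable, undo
    if (n === 1) return board;                  // unique solution: done
  }
}
```

Capping the solution count at 2 is the key trick — you never need to know how many solutions there are, only whether there is more than one.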
Sadly I don't think anyone would pay millions for this anymore
Has anyone had any success in code generation? I feel like ChatGPT usually completely fails to write even a small function correctly unless it's a very trivial or well-known problem. I usually have to go back and forth for a good long while explaining all the different bugs to it, and even then it often doesn't succeed (but often claims it has fixed the bugs). The types of things it gets wrong make it a bit hard to believe it could improve enough to really boost dev productivity this year.
This is a pretty hard problem. And I haven't found anyone that's too good at this, but here are some interesting players:
- https://www.phind.com/ is a custom model fine-tuned on code, and pretty damn good
- https://codestory.ai is a VSCode fork with an assistant built in. One of the things it does for you is write code, but imo that's not its biggest strength yet.
- https://sweep.dev have a bot where you create a GitHub comment and it writes the PR to fix it. They have between 30% and 70% success rate. This is pretty bad but they're one of the best today
- https://sourcegraph.com is pivoting and building a copilot application (named Cody). This is pretty good, since sourcegraph is great at understanding your code
Have you tried Cody (https://cody.dev)? Cody has a deep understanding of your codebase and generally does much better at code gen than just one-shotting GPT4 without context.
It's not perfect, but I have had success running postgres in Docker and running integration tests against that. Usually you can trigger the problem by running a handful of queries in parallel.
If you have different currencies you need to keep track of the number of decimals used, e.g. JPY has 0 decimals, Bitcoin has 8, etc. It could even change over time, like the Icelandic ISK did in 2007. If you have different services with different knowledge about this, you're in big trouble. Also, prices can have an arbitrary number of decimals up until you round them to an actual monetary amount. And if you have enough decimals, the integer solution might not have enough bits anymore, so make sure you use bigints (also when JSON parsing in JavaScript).
Example in js: Number(9999999.999999999).toString() // => 9999999.999999998
And make sure you're not rounding using Math.round
Math.round(-1.5) // => -1
or toFixed
(2090.5 * 8.61).toFixed(2) // => 17999.20 should have been 17999.21
8.165.toFixed(2) // => 8.16 should be 8.17
The better solution is to use arbitrary precision decimals, and transport them as strings. Store them as arbitrary precision decimals in the database when possible.
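A minimal sketch of the strings-on-the-wire side of this, assuming the currency's decimal count is tracked explicitly (the function name and shape are made up for illustration):

```javascript
// Sketch: parse a decimal string into integer minor units as a BigInt,
// with the currency's exponent (e.g. 2 for EUR, 0 for JPY) passed in.
function toMinorUnits(amount, decimals) {
  const [intPart, fracPart = ''] = amount.split('.');
  if (fracPart.length > decimals) {
    throw new Error('more decimals than the currency allows');
  }
  // "-5" + "65" → BigInt("-565"); padding handles short fractions.
  return BigInt(intPart + fracPart.padEnd(decimals, '0'));
}

toMinorUnits('5.65', 2); // → 565n
toMinorUnits('12', 2);   // → 1200n
```

Because the amount travels as a string and only becomes a number inside code that knows the exponent, no JSON parser along the way gets a chance to mangle it into a float.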