I've always heard this process described as the "inner platform effect". Also, I wrote a system like that back before I was skilled enough to use parser generators. Database-driven rules engines are teh suck.
It powers some pretty heavy duty stuff for a Fortune 20 and I still await the call from the company in question if they need to update those rules. Luckily, there is no user interface for this system so they can't screw it up. The funny thing is the program actually digests the whole database of rules and then builds an AST to execute the rules.
edit: Isn't the system described by the article essentially the entire basis of SAP?
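For the curious, the shape of it is roughly this (a minimal sketch with an invented schema and node types, not the actual system): the whole rules table is read once and linked into an expression tree, which then gets evaluated against records.

    import sqlite3

    # Sketch only -- invented schema, not the real system described above.
    # rules(id, parent_id, kind, field, op, value): each row is one node, and
    # parent_id hangs comparison rows under and/or rows, so the table is
    # really a serialized expression tree.
    class Node:
        def __init__(self, kind, field, op, value):
            self.kind, self.field, self.op, self.value = kind, field, op, value
            self.children = []

        def eval(self, record):
            if self.kind == "and":
                return all(c.eval(record) for c in self.children)
            if self.kind == "or":
                return any(c.eval(record) for c in self.children)
            actual = record.get(self.field)        # leaf comparison node
            return actual == self.value if self.op == "eq" else actual > self.value

    def load_ast(conn):
        """Digest the whole rules table once and wire the rows into a tree."""
        rows = conn.execute(
            "SELECT id, parent_id, kind, field, op, value FROM rules").fetchall()
        nodes = {rid: Node(kind, field, op, value)
                 for rid, _, kind, field, op, value in rows}
        roots = []
        for rid, parent, *rest in rows:
            (nodes[parent].children if parent else roots).append(nodes[rid])
        return roots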
This came up on Reddit. IMO, inner platforms are a superset of duck programming: “inner platform” describes the implementation, while “duck programming” describes the process.
If we make a DSL that encodes business rules, I think you could say it’s an inner platform of sorts. If that DSL is updated live in production by design without being wrapped in risk management process, I think you have duck programming.
"Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."
- Greenspun's Tenth Rule
One of the ways I've seen these systems born is when the user requirements are ill-defined.
"Okay so you are not sure - you may need either foo or bar, so we will put this little foo-to-bar knob here so you can tune it just as you like, however you like, whenever you like". Neither of the parties involved in the project definition can be bothered to figure out what actually has to be done, and next thing you know you end up with an overpriced-overengineered specialized turing complete language interpreter that has the customer do all the actual work implementing the business logic.
My theory is that this often stems from a communication issue. And some laziness.
And it's fun for novice programmers to implement this kind of overengineered, specialized Turing-complete language... Until one has been burned by one, it's very easy to get lost in overarchitecting and be very happy with one's own cleverness.
When I look back at the projects I started way back when I first began programming, they all tended to have this characteristic and were completely unmaintainable (which is why they were never finished).
And I spent quite a bit of my early years as a team lead dragging novice programmers back off the slippery slope of the configuration system. You can tell someone's edging out onto the slippery slope when they extract things out of the source code and into a configuration file. Then they realise they need to expose values from the runtime environment into the configuration file, so you get symbolic values added to the language. From there it's a short slide to implement expression evaluation and conditionals (one usually comes before the other, but they fit together so well that I've never seen someone fail to implement both 'to cover future scenarios').
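A made-up example of where that slide ends up, for anyone who hasn't watched it happen (the config format, the ${} substitution, and the if() syntax are all invented here):

    import os, re

    os.environ.setdefault("CPU_COUNT", "4")   # pretend these come from the runtime
    os.environ.setdefault("ENV", "dev")

    # Stage one: values pulled out of the source into a "configuration file".
    # Stage two: symbolic values from the runtime environment.
    # Stage three: expressions and conditionals -- at which point it's a program.
    CONFIG = """
    threads   = ${CPU_COUNT} * 2
    log_level = if("${ENV}" == "prod", "warn", "debug")
    """

    def expand(text):
        return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), text)

    def evaluate(expr):
        # A hand-rolled "evaluator" -- abusing eval() to keep the sketch short;
        # the real ones grow into hundreds of lines of ad hoc parser.
        expr = re.sub(r"if\((.+),(.+),(.+)\)", r"(\2) if (\1) else (\3)", expr)
        return eval(expr)

    settings = {}
    for line in expand(CONFIG).strip().splitlines():
        key, _, raw = (part.strip() for part in line.partition("="))
        settings[key] = evaluate(raw)
    # settings is now {"threads": 8, "log_level": "debug"}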
A lot of configuration systems reach a complexity maximum at this point, where the ability of the implementor (or their memory of the compiler construction class) runs out before their ambition for their configuration language does. Some particularly resourceful individuals press ahead regardless though and find a way out of this sticking point. At this point they will often also separate their configuration system out into a library so that others can gain the benefits of a poorly thought-out programming language being embedded directly into their software. For some companies, the configuration library would be the only reusable component shared between different applications because the guy who wrote it was the one everyone else looked up to.
The other route out of this sticking point came through the trendiness of XML and the availability of general-purpose XML parsers. This meant that not knowing how to build a proper parser (with decent error messages and the like) was no longer an obstacle, just as long as you could convince people that writing programs as XML elements was a step forward. XML parsers also turned your configuration file into something close to an AST (if you made your language look like a serialized AST), so programmers who had never even done the compiler course could stumble across the solution without too much effort.
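For anyone who hasn't seen one in the wild, a toy version of the "programs as XML elements" idea (invented element names); the parsed tree is already more or less an AST, so the interpreter is just a tree walk:

    import xml.etree.ElementTree as ET

    # Toy example with invented element names -- the XML parser hands back a
    # tree you can interpret directly, no compiler course required.
    PROGRAM = ET.fromstring("""
    <config>
      <if>
        <equals var="region" value="EU"/>
        <then><set var="currency" value="EUR"/></then>
        <else><set var="currency" value="USD"/></else>
      </if>
    </config>
    """)

    def run(node, env):
        if node.tag in ("config", "then", "else"):
            for child in node:
                run(child, env)
        elif node.tag == "if":
            cond = node.find("equals")
            hit = env.get(cond.get("var")) == cond.get("value")
            run(node.find("then") if hit else node.find("else"), env)
        elif node.tag == "set":
            env[node.get("var")] = node.get("value")

    env = {"region": "EU"}
    run(PROGRAM, env)      # env["currency"] is now "EUR"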
At least configuration files could be checked into some source control system and managed properly. Don't get me started on storing business rules in relational databases...
I've seen a very bad case of this that was a direct result of the 'second system effect'. The devs were told that they could have a re-write, but it would be the last re-write, so everything became configurable.
This is exactly why I prefer tools that take a stance on what the right way to do something is. Even if I don't like some aspect, I at least have a little confidence the author thought hard about a corner case and made a well considered decision.
At DevDays in Austin a couple years ago Joel Spolsky complained that at Freebirds ordering a burrito consists of telling them the recipe for a burrito. I've got work to do, I don't want to tell my tools how to make a burrito.
The problem is that a developer is not always in a position to decide what the right way to do things is.
In order for a developer to understand the requirements of the software, he needs a deep understanding of how the business works, which is probably at least as deep as that of the person specifying the software or managing the developer.
Even if the developer has a better understanding than the manager of what is required, he may still have rank pulled on him by the manager.
A developer who understands the business well enough to write good software for it probably understands the business well enough to run it, which makes the manager superfluous anyway.
Yes, this is a problem that has existed for a long, long time. Consider the origin of HLLs. Waaaay back when COBOL was born, at least some of the people involved seemed to think that, instead of writing a new program each time one was needed, they only needed one program: a compiler. Non-programmers could interact with an easy, human-readable description of the software (in COBOL), and the compiler would do the actual coding.
For those who prefer to think in terms of the "inner platform" effect, the above is a clear example. The two platforms are the HLL and the machine code. But then we can see that the idea of having multiple platforms might not be a bad thing (unless you'd prefer to do all your coding in machine code?).
A lesson to be learned from all this is that duck programming is not a problem with the tools or the code itself, but with the way we think about and manage them.
> ... that is thought to be “not programming” ....
Quite right. Specifically, rules engines stored in a database can be very cool. Just recognize that the rules are code and use an appropriate interface and management (e.g., Prolog or something similar).
What mystifies me is when large enterprise-y programs intended for developers require large amounts of duck programming to get them configured, when regular programming would work just as well.
This is often true because you want to be able to change the configuration without recompiling the program, which would require installing all the build tools on a production system and mean increased downtime.
When I first tried to use a J2EE system I was literally amazed by the amount of XML configuration required.
While one of the main points of the article seems to be "duck programming is bad because it circumvents proper workflows", I suspect you could also reverse it and say "people write duck programs to avoid terrible workflows".
Not to say you should avoid best practices, but I think a lot of these systems come into being because the overhead required to do a task becomes more cumbersome than the task itself.
Exactly. I have a colleague who worked on part of the Space Shuttle's control software in the 70s, and he tells me that at some point NASA's management decided to freeze the source code so that no more modifications to it were permitted (sensible, if it's actually adhered to), but they still wanted to be able to change the behavior of the software. The inevitable result was that someone wrote their own DSL for patching the object files generated by the original source code directly.
I've run into this a number of times. The worst cases are when the configuration is under the users' control. My catch phrase is: if you have a user-controlled configuration file, you don't have a single program, you have 2^N programs, where N is the number of configurable items (and the base is larger than 2 when the settings aren't binary). With just ten boolean settings, that's already 1,024 distinct programs to test.
I've had to develop a system like this before. We were a data-centric, mostly SQL shop. The idea was that the BA, the CEO, and other devs could all implement the system for different customers with slightly, but meaningfully, different business rules. This is how we did it. Thankfully it was rewritten and the few customers migrated to the new system before it had a chance to fall on its face.
Many years ago I spotted a related anti-pattern, and explained it to people this way: "Yes, you don't have to be a programmer to customize this system. But you do have to be a rocket scientist. Just programming would be easier." Usually just writing the code as a "one off" is far simpler and more maintainable than soft-coding everything.
I wish I liked the name better. Whenever I encounter these sorts of systems, I think of a plug-board, or a breadboard.
The wiring is in a separate file/database, and everything is supposed to be better because it is infinitely changeable; the problem is that it is infinitely changeable.
There's another point of view regarding this issue. Complex applications tend to evolve towards an 'inner platform'/'duck programming' state because serious power users want to avoid repetitive point/click operations. They demand a programmatic way to macro common tasks that they wish to apply to large amounts of data.
One might say at this point, OK, the application should provide an API, and the power user should hire a part-time programmer to write a plugin. But that may not be feasible, and it misses the point that the user may not want to hire programmers and learn about APIs.
The deeper question, it seems to me, is: is it always a sin to allow an end user access to a Turing-complete, user-domain set of tools that are effectively a programming platform?
An answer of yes to this question smacks a bit of elitism: we are the priests of code; users must forever be relegated to pointy-clicky automatons.
The article does not suggest that users should not have the power to be programmers, it suggests that the organization benefits from wrapping programming in certain risk management practices and that this is just as applicable to duck programming as it is to code programming.
Heh. I'm fighting against the introduction of a duck programming system at the moment. My cunning plan is to make the system generate source code from the config files. That makes it clear that modifying XML is the same thing as modifying source code, because it explicitly makes modified source code that then needs to be deployed. I may have to post the results on HN :)
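Something along these lines, if anyone wants to try the same trick (the schema and names are made up; the point is just that the output is an ordinary source file that has to be reviewed, built, and deployed like any other):

    import sys
    import xml.etree.ElementTree as ET

    # Made-up rule schema: <rule name=".." field=".." op=".." threshold=".." action=".."/>
    TEMPLATE = '''def rule_{name}(order):
        if order["{field}"] {op} {threshold}:
            return "{action}"
        return None
    '''

    def generate(xml_path, out_path="generated_rules.py"):
        """Turn each <rule> in the config into a plain function in a generated
        source file, so changing the XML visibly means changing source code."""
        root = ET.parse(xml_path).getroot()
        with open(out_path, "w") as out:
            out.write("# GENERATED from %s -- do not edit by hand\n\n" % xml_path)
            for rule in root.findall("rule"):
                out.write(TEMPLATE.format(**rule.attrib) + "\n")

    if __name__ == "__main__":
        generate(sys.argv[1])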
Business workflow engines are a special case of this, with the defining qualities that a) it's something you do intentionally, and b) humans are supposed to be "in the loop".
I am not convinced whether workflow engines are genius or pure evil yet. You can certainly perpetrate evil with them, but that's true of anything.
In the days when people were calling it 'meta-level architecture' there was this sense that highly configurable systems were defined by what you couldn't configure. If everything was configurable, it was a bloody mess.
One of the basic purposes of system architecture is to partition a system along lines of roles and responsibilities.
For instance, software belongs to the ISV, configuration belongs to technicians, and business rules belong to analysts. Since each has its own core competencies and access to tools, one would not expect the systems analyst to write the rules out in C++ even if the system was developed in that language.
However, the author is right in that traditional programming processes still ought to apply, and many in non-programming fields will have to learn this skill before they can execute this properly.
Rules engines like Tibco actually provide tools for users to perform regression tests. In fact, in the game world, Chris Crawford had an interactive story engine that would perform regression tests on its storylines, which I find very cool.
I apologize because this is related to the medium, not the message, but the contrast is too low on the text and it makes it so I don't want to read it.
Alas, I have no control over the styles. But this and so many other anti-readability designs on the Internet have me reflexively clicking the “reader” button in Safari. I rarely bother with a web site’s native look and feel for content.
This is a problem caused by the differing world outlooks of technical people and "business people".
I have fallen down this rabbit hole plenty of times myself as well as seeing others do so.
Technical people have a desire for maintainable and fast code, coherent architectures, DRY principles, and designing a robust, well-tested system with the long term in mind.
Business people are interested in cashflow, agility, and being able to get their ideas into production quickly. They are worried about waiting months for a requested change to make it into production and not having it be exactly what they wanted/needed.
Managers often have a belief that programming should be unnecessary and that there should be some way to drag and drop functionality into their applications. For example, anybody who implements simple email capabilities in their web app will soon have users demanding to know why it does not have a spellchecker, contacts database, spam filtering, mail merge, mail folders, etc. (basically everything that MS Outlook has).
This conflict leads to problems where business will ask for what they feel should be a simple change and will want it working by the next business day. The technical team will take this change and promise to include it in the next large update. For example the small change may interface with parts of the system that are undergoing bigger changes upon the next release.
Developers do not want to mess up their code base doing these half assed changes without proper testing so they attempt to anticipate everything that might be required and simply add more and more options to their program making it increasingly complex.
When this becomes too much to manage they will eventually create a small DSL or a "rules engine" of some kind. This starts off simple enough that the secretary can use it, and everyone is happy, since the developers get to keep their nice codebase separate from the messy code developed by the users.
Of course, this rules system becomes more and more complicated in itself, and it becomes a full-time job to maintain the rules engine, while more and more of the actual live code is being written by people who get a baptism-by-fire introduction to programming, with no testing facilities or version control.
Eventually the developers have to step in, view the programming atrocities that have been created in the higher-level engine, and somehow re-integrate it all back into the original codebase and start again.
Basically you are moving the mess out of your area of responsibility and into someone else's.
This isn't always a bad idea provided you limit the power of the rules system and make sure expectations are clear as to what it will be able to perform.