Gall's Law: "A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system."
Conway's Law: "organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations"
Also, Kernighan's Law: "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."
Notice that this can be interpreted in two ways. The hacker's interpretation is that if you want to improve your smarts, you have to code as cleverly as possible; otherwise you cannot learn anything when debugging! This is called Kernighan's lever: http://www.linusakesson.net/programming/kernighans-lever/
Zawinski's Law of Software Development: Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.
Atwood's Law: Any application that can be written in JavaScript, will eventually be written in JavaScript.
But there is perhaps no greater legal scholar than Norman Augustine:
Law XVI: Software is like entropy. It is difficult to grasp, weighs nothing and obeys the Second Law of Thermodynamics: i.e., it always increases.
Law XXX: The optimum committee has no members.
Law XXXV: It is true complex systems may be expensive, but it must be remembered that they don't contribute much.
Do what is simplest, and remember, everyone is attacking you with stupidity. Most of your simple work will be done quickly, a small amount will be done in the time remaining. You will think you are better at doing all this than you are. To wit, seek the help of others to talk you down-- accept things from others. You'll need it because eventually you become a stranger even to yourself. And from there you'll rise to the level of stupidity you detested not too long ago. Or maybe some unwise fool will intercede on your behalf, dragging you up to their level of idiocy. This will all take longer than you think, because all along the way you underestimated your skills, but don't worry there'll be plenty of work to fill the time. Most of it will be petty, and you'll fight over the least essential things, but that's because you chose to work in a field full of smart people who continually underestimate their abilities.
In RFC 3117, Marshall Rose characterized several deployment problems when applying Postel's principle in the design of a new application protocol.[3] For example, a defective implementation that sends non-conforming messages might be used only with implementations that tolerate those deviations from the specification until, possibly several years later, it is connected with a less tolerant application that rejects its messages. In such a situation, identifying the problem is often difficult, and deploying a solution can be costly. Rose therefore recommended "explicit consistency checks in a protocol ... even if they impose implementation overhead".
The other big problem with Postel's law is that it makes enforcing security guarantees a nightmare. When each component in a system is "liberal in what it accepts", it's easy to fall into a trap where component A thinks component B is performing the security checks while component B thinks component A is, and as a result the check is either never performed or trivially sidestepped.
A good example of this was the vulnerability with null characters in SSL certificates [0]. In that vulnerability, both the browser and the certification authority interpreted the domain with the null character as "liberally" as possible, but their differing interpretations of what constituted "forgiving" made it trivial to spoof SSL certificates merely by including a null character in your domain.
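The mismatch can be sketched in a few lines (the attacker domain below is made up for illustration): the certification authority read the full length-prefixed string, while validation code built on C-style strings stopped at the first NUL byte.

```python
# Common Name as encoded in the certificate. ASN.1 strings are
# length-prefixed, so the CA validated the whole value -- which, read in
# full, is a subdomain the attacker legitimately controls.
cn = "www.paypal.com\x00.attacker.example"

# A validator built on C-style NUL-terminated strings stops at the first
# zero byte and sees only the spoofed prefix.
c_string_view = cn.split("\x00", 1)[0]

print(c_string_view)  # www.paypal.com -- what the browser matched against
print(len(cn))        # the CA saw (and signed) all of it
```

Both sides were "liberal": the CA accepted an embedded NUL in a hostname, and the browser accepted whatever its C string routines handed back.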
Another example is guarding against XSS vulnerabilities. Browsers try to be liberal in what constitutes valid JavaScript, and will attempt to execute even the most malformed scripts they encounter. This makes it unnecessarily difficult to shield against XSS - one must block a whole range of vectors, rather than just script tags.
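A hypothetical sketch of why filtering script tags alone is not enough: the regex below models a naive sanitizer, and the second payload is a vector it never touches.

```python
import re

def naive_sanitize(html: str) -> str:
    # Strips <script> blocks only -- a common but insufficient defence.
    return re.sub(r"(?is)<script\b.*?>.*?</script>", "", html)

print(naive_sanitize("<script>alert(1)</script>"))       # stripped to ""
print(naive_sanitize('<img src=x onerror="alert(1)">'))  # passes through untouched
```

Event handlers, javascript: URLs, SVG, and malformed markup the browser "helpfully" repairs are all further vectors, which is why output encoding and CSP beat blocklists.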
That may be true, but it's only a problem if you believe we are capable of designing sane, useful standards initially. Postel's law allows a limited form of evolution for standards. Sometimes that allows for useful extensions to be adopted into later revisions, other times it can run rampant and create monstrous mutations (which might still be accepted. I'm looking at you, IMAP).
I would argue that if you're strictly defining the accepted variations in the grammar, then all you've done is define a new grammar that you're conservatively adhering to. And that's fine! There's no rule that says that standards cannot evolve to match new use cases. However, I would argue that having a strictly defined (yet evolving) standard is much different than the common interpretation of Postel's Law, which is that undefined behavior should "do the right thing", with the specific interpretation of "the right thing" being left to the implementers. Predictably, different implementers interpret that in different ways, which leads to security holes. But if the specification defines what the right thing to do is, then the behavior is no longer undefined, and therefore Postel's Law no longer applies.
Yeah. I agree that this is a slippery slope. SMTP followed this, and there were a lot of wacky things; then qmail came along and rejected the bad implementations.
Perhaps it's better to use this law for UI, where being liberal with accepting things by trimming white space is better than rejecting, which would confuse an end user.
Perhaps, but we introduce a new implicit specification by doing this, possibly closing the door for situations where white-space is relevant.
The common appearance of the ignorant user seems to stem from him generalizing patterns as a holdover from one UI to another. Perhaps we should better study these patterns and steer the user to a consistent path. But instead, it seems we're trying to out-smart him time and time again, creating a jungle of user interactions which merely belittle and confuse.
This is better observed as having a separate standards compliance check and 'reference implementation'.
The standards compliance check evaluates if a given program or module complies with the written standard. The reference implementation is free to be more fault tolerant or otherwise assume that invalid input is allowed to produce undefined results (as long as they are not clearly the wrong behavior).
It took me two days to read it, so I'll provide a summary:
In a corporate hierarchy are the losers (bottom), clueless (middle), and sociopaths (top).
Losers: Economic losers, in that they prefer a fixed monthly paycheck over a fair share of the value they contribute. Risk-averse. They know they made a bad deal and don't do more than expected. They play social validation games with each other to feel better, in order to endure their fate.
Clueless: They believe in the corporate story and work more than they're paid for. They're promoted into middle management, where they serve as a buffer between the top management (sociopaths) and the workers (losers). They execute all ideas from top management, from the stupid through the risky to the good. If something succeeds, the sociopaths take the credit. If something fails, the clueless take the blame.
Sociopaths: Nihilists who broke emotionally under the realization that no true values exist in this universe, so they're free to make up values and goals as they like. Those stories are given to the clueless to believe in. Different sociopaths appear at each stage of the corporate life cycle. In the beginning they're the risk-takers with a vision and the willpower to push something new through. In established companies they kill off new and threatening ideas to protect the successful products. During decline they milk the company for its remaining value, then leave the sinking ship and hand responsibility over to the clueless to let them take the blame.
One important part are the different languages of the three classes. Losers have "game language", the social games they play to validate each other, but the talk and the content are unrelated to reality and don't advance them in the hierarchy. The clueless try to speak like the sociopaths but don't realize that one has to have stakes in the game (i.e. something to lose or something valuable to offer) in order to be taken seriously, so their language is full of empty phrases and threats. The sociopaths' "power talk" is full of hints (without commitments) that are only understood by equals and when they talk about something they have to lose or gain something -- it's about the real things, not the empty phrases the clueless use.
Though I had adopted the maxims contained within as worthy of remembrance, I had searched in vain to find this page again after my first encounter with it several years ago; thank you for restoring it to my bookmarks list.
My favorite one from him is: "How does a project get one year late? One day at a time." as it captures the priority decisions that you make consciously or unconsciously all the time.
Over on the education side of things, we have a lot of problems with Campbell's law. Once people focus on a quantitative indicator to evaluate things of any importance, it essentially gets corrupted. Like test scores, university rankings... On the software side of things, there might be things ranging from website hits, ad clicks, page rank, karma points, etc. More on the development side there might be things like commits, github stats, etc.
Greenspun's Tenth Rule: "Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."
Dijkstra's Law: "Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better."
For any given reasonable estimate for an item of software, add 2^N days to that estimate, where N = the number of date/datetime fields involved on that form or database table.
E.g., you have a complex data entry screen to add to a webpage, and you KNOW for certain that you can complete it thoroughly in 2-3 days tops, testing every possible edge case. Good stuff.
However, if that screen has 2 date fields involved, then it's going to take 2^2 = 4 extra days over and above the reasonable estimate to get it done properly, and handle NULL dates, timezone differences, comparison of dates for equality, parsing date inputs, converting internal representations between front end / back end / storage ... and every other unexpected abomination.
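A few of those abominations in miniature, using only Python's standard library (the dates are arbitrary): naive and aware datetimes refuse to compare, "equal" wall-clock times are a different question from equal instants, and NULL dates need explicit guarding.

```python
from datetime import datetime, timezone, timedelta

naive = datetime(2024, 3, 10, 12, 0)                       # no tzinfo attached
aware = datetime(2024, 3, 10, 12, 0, tzinfo=timezone.utc)

# Pitfall 1: comparing naive and aware datetimes raises; it doesn't coerce.
try:
    naive < aware
except TypeError as err:
    print("can't compare:", err)

# Pitfall 2: equality across zones compares instants, not wall clocks.
utc_noon  = datetime(2024, 3, 10, 12, 0, tzinfo=timezone.utc)
est_seven = datetime(2024, 3, 10, 7, 0, tzinfo=timezone(timedelta(hours=-5)))
print(utc_noon == est_seven)  # True: same instant, different wall clocks

# Pitfall 3: a NULL date from the database arrives as None, which fails
# every comparison path unless you guard it explicitly everywhere.
due = None
print(due is not None and due < naive)  # False, but only because we guarded
```

Each date field multiplies these cases against every other field it's compared to, which is roughly where the 2^N feeling comes from.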
And that neatly applies to his law of argumentative comprehension, which should really read "The more people _think they_ understand something, the more willing they are to argue about it, and the more vigorously they will do so."
The Hanlon's Razor portion vastly underestimates the sheer amount of contempt and vitriol that many people in business harbor for their technology staff.
Sometimes (oftentimes, in my experience) they really do hate you.
Atwood: "Any application that can be written in JavaScript will eventually be written in JavaScript"
Zawinski: "Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can." s/mail/API/, etc.
And I can't find this one (IBM study?) "Adding 10% of features increases cost by 100%"
Good software will grow smaller and faster as time goes by and the code is improved and features that proved to be less useful are weeded out. [from the programming notebooks of a heretic]
If I were to quote the above to a layman--someone well outside the world of software development--the response would be, "Well of course that's what would happen, it's obvious."
Linus' Law holds less water when you consider using formal specifications, model checking, and -- if you have the budget and time -- proof. These systems and languages have come a long way and are far better than any human at checking your work and telling you where you are making mistakes.
Model checking in particular, in my limited experience using it, is effective at this.
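The core idea fits in a page: an explicit-state model checker is breadth-first search over every reachable state, asserting an invariant at each one. The toy lock protocol below is made up for illustration; real tools (TLC, SPIN, Alloy) add symbolic states, fairness, and counterexample traces, but the search is the same.

```python
from collections import deque

IDLE, TRY, CRIT = "idle", "trying", "critical"

def with_pc(pcs, i, pc):
    """Return the program-counter tuple with process i's counter replaced."""
    return tuple(pc if j == i else pcs[j] for j in range(2))

def successors(state):
    """Every state reachable in one atomic step (either process may move)."""
    pcs, lock = state
    for i in range(2):
        if pcs[i] == IDLE:                    # start competing for the lock
            yield (with_pc(pcs, i, TRY), lock)
        elif pcs[i] == TRY and lock is None:  # acquire only if the lock is free
            yield (with_pc(pcs, i, CRIT), i)
        elif pcs[i] == CRIT:                  # leave the critical section, release
            yield (with_pc(pcs, i, IDLE), None)

def check(init, invariant):
    """BFS over the reachable state space; return a violating state or None."""
    seen, frontier = {init}, deque([init])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return state                      # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None                               # invariant holds everywhere

init = ((IDLE, IDLE), None)
mutual_exclusion = lambda s: not (s[0][0] == CRIT and s[0][1] == CRIT)
print(check(init, mutual_exclusion))  # None: no reachable state violates mutex
```

Delete the `lock is None` guard and `check` returns the bad state instead, which is exactly the "telling you where you are making mistakes" part.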
You can eliminate many (not all... nothing's perfect) of these rules of thumb with a little math and analysis[0].
Even when you're using some base that already exists (often a highly configurable thing that's been sold to you by someone else), you STILL have the frequent stories of nightmares involving either an inability to configure it to the desired specification or cost and time overruns of epic proportions.
Note also that some "laws" may remain as "laws", even in the face of experimental evidence which directly contradicts the predicted results of that law.
Which gives rise to "The Law of Research Funding", in which the amount of funding that can be attracted to support a given law is proportional to the amount of real world experimental evidence which contradicts that law. :)
Gall's Law: "A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system."
https://en.wikipedia.org/wiki/John_Gall_(author)#Gall.27s_la...
Conway's Law: "organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations"
https://en.wikipedia.org/wiki/Conway%27s_law
Cunningham's Law: "the best way to get the right answer on the Internet is not to ask a question, it's to post the wrong answer"
https://meta.wikimedia.org/wiki/Cunningham%27s_Law