The central thesis of the post “Optimizing for your software project becoming big is the same as optimizing a car to hit a rock wall - you are optimizing for failure” is garbage.
I can't imagine any sane engineer not realizing that writing, say, an operating system is on a different order of magnitude from writing some HTML peppered with a little JavaScript.
Edit:
"This is the reason I like agile. It emphasizes small, working pieces all the time. If you work with code this way, you can’t really become big. Instead, your project will be forced to be modularized and divided into smaller, more logical components that are highly cohesive and decoupled from each other."
Using Agile really doesn't guarantee you any of this. You're still very free to abandon separation of concerns and end up with a tightly coupled mess.
"Using Agile really doesn't guarantee you any of this. You're still very free to abandon separation of concerns and end up with a tightly coupled mess."
This is what I find usually happens with agile development. The software tends to start out small, but because it's not well thought out and tends to be rushed, even the small stuff becomes spaghetti startlingly quickly -- as happened on a current project, where after less than three months we managed to cram THREE separate object models into a UI with a whopping TWO screens. (Imagine my frustration when what had been six lines of code the last time I worked on it was now over 500.)
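For a miniature picture of the two shapes being argued about -- a toy sketch with entirely invented names, not anyone's real code -- nothing in an iterative process by itself forbids the first one:

```python
# Hypothetical sketch (all class names made up for illustration).

class OrderStore:
    """Pretend storage layer with a private row format."""
    _rows = [("a1", "widget", 30.0), ("a2", "gadget", 12.5)]

# Tightly coupled: the report digs into the store's private row format.
class CoupledReport:
    def render(self, store):
        # magic column index, reaching into a private attribute
        return sum(row[2] for row in store._rows)

# Separation of concerns: the report only sees a narrow interface.
class OrderSource:
    def totals(self):
        return [row[2] for row in OrderStore._rows]

class DecoupledReport:
    def __init__(self, source):
        self.source = source

    def render(self):
        # no knowledge of how orders are stored
        return sum(self.source.totals())
```

Both render the same number today; only the second survives the storage layer changing shape, and no methodology stops you from writing the first.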
I'm guessing you were modded down for implied snark, but it's well worth asking how a small project becomes a big project without anyone noticing, or with people noticing but not taking appropriate action.
It happens to the best of us: the feature creep, the must-haves, the rush to ship. So does agile actually fail to address this, or is it something subtle that manages to evade detection despite best practices and intentions?
but it's well worth asking how a small project becomes a big project without anyone noticing, or with people noticing but not taking appropriate action.
Simple: nobody pays you to notice how much of a problem the software that works perfectly has become. They pay you to improve feature x by a very small amount, or to add feature y.
So, sure, Agile works fine, in the same world where most every other methodology works fine.
It's not the methodology that makes good software, it's the people behind it.
No, that's the sad part. But when you have 10 people tromping around the same code in a hurry it's hard to keep things clean and consistent even with code reviews... and there's been such a push for new stuff that there hasn't been time for refactoring.
This article embodies what I hate about the Agile / TDD crowd. They have a bunch of good ideas that appear to be helpful in their problem domain(s). But then they make these grand pronouncements as if their problem domain represented all of programming.
It's particularly bad here because in the process of telling every programmer with a different problem domain that they don't know what they're doing, he appears to concede that dynamic languages are not good for handling big projects...
Do you disagree with those who say dynamic languages are bad for big projects, which the author appears to concede by arguing big projects are bad in the first place?
If we're applying the argument to Perl, then sure.. I get it. With Perl, it's more a question of syntax. By supporting so many ways to accomplish the same thing, it can be very unwieldy when you have many programmers all exercising all of those options.
It's not a question of dynamic in that case. It's a question of a (in my opinion) fundamental shortcoming in the language.
However, languages like Python or Ruby both seem pretty well suited for large scale development. We've seen large scale projects written in both, and it works just fine.
So no, I don't agree with the fundamental premise that dynamic languages are bad for 'big' projects. If by 'big' we mean memory/processor intensive... well, let's have THAT debate instead.
Having never tried it, I can't really say. I'm willing to at least seriously consider that the right dynamic language might be an excellent choice for certain big projects.
For instance, I've been doing a good bit of playing around with Perl 6 this year. I think by this time next year Rakudo will be stable enough to make it an excellent choice for developing large projects where execution speed is not of the essence. But that's just a hypothesis, I certainly haven't done enough Perl 6 programming to feel confident in that yet.
Well, cars are engineered to protect the occupants from trauma if they hit a brick wall at moderate speeds, but that isn't the same thing as optimizing them for hitting walls, so the analogy doesn't do anything for me.
And likewise some software is big, period, regardless of the language used. The easiest way to measure the size of a piece of software in advance is to imagine the size of the test suite. I have a poorly thought-out idea about the minimal complexity of a software program being the complexity of the automated test suite that validates the program...
Anyways, some software is bigger than other software, but again that doesn't necessarily mean that given a large test suite to be satisfied and anticipating a larger program or collection of programs you necessarily need to discard certain tools that work well for smaller programs.
Steve Yegge made this point with a much more entertaining analogy: He talked about pushing dirt, which illustrated the fallacy with some tools that are espoused as being appropriate for large projects: namely that they are really good at solving the problems introduced by the tools themselves.
Just because the product is big doesn't mean it's a monolithic codebase. Amazon is powered by a lot of different pieces. Decoupling lets you keep the pieces small and yet grow and scale to huge sizes.
It's also a maintenance nightmare. A LOT of that code is terrible; there's a lot of tight coupling, and a heavy reliance on developers with pagers to keep the mess operating.
There's a reason they have such insane turnover: so few people stick around after they collect their signing bonuses...
Optimizing for your software project becoming big is the same as optimizing a car to hit a rock wall - you are optimizing for failure
No you're not. You're optimizing a car that can survive running into a rock wall. If that's what the car needs to do, you're doing exactly the right thing. If, instead, it needs to get people places, or be fuel efficient, or drive over 120mph, then you're probably doing the wrong thing.
Yeah... true, but indeed the best scenario is not to hit the wall at all: drive slower, be more energy efficient, take the best route, and try not to fall asleep while you're at it.
I like the old mantra "Fast, good, and cheap. Pick two." If you build one big project all at once, it may not do anything you want until it does everything you want. Whereas if you start small and keep your features separate and cooperative you are more likely to get some of what you want right away, and over time get everything you want working together.
The danger here is that in optimizing for fast and cheap you may be prematurely optimizing yourself away from being able to do "good" without a complete rewrite. Fast and cheap has a seductive way of getting good feedback on broad concepts, but it also tends to lead to very shallow implementations of those concepts. Like a lot of evolutionary algorithms, it is easy to get caught in a local maximum and not realize you have only reached the top of a foothill, while a competitor who can see where the mountain really is will have made less progress but will end up miles ahead if you discover that you climbed the wrong slope...
I agree that whatever you optimize for early, you are probably stuck with later.
There are more than three vectors. "Good" could mean reliable, secure, highly functional, or many other things. Picking any three vectors and analyzing the tradeoffs helps to figure out how to apply resources.
Having an intuition for the scope of a task, and for the likelihood of being able to complete it using existing components, is part of being a good programmer. His post would seem to imply that the only acceptable way to write big software is to stumble into it by accident.
It's not like there isn't close to three-quarters of a million lines of code in JRuby... Nothing quite like practicing what you're preaching. (The naive code count is something like 250,000 lines of Java, 560,000 lines of Ruby, 14,000 lines of YAML (?!), and about 2,400 lines of XML. About half the Ruby is in tests. I just piped 1.4rc1 through "wc -l".)
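A rough Python equivalent of that naive `wc -l` count, for anyone who wants to reproduce it (the directory name is whatever you unpacked the source tarball to; `"jruby-1.4rc1"` below is just a placeholder):

```python
import os
from collections import Counter

def count_lines(tree):
    """Tally line counts per file extension, like piping files through `wc -l`."""
    counts = Counter()
    for root, _dirs, files in os.walk(tree):
        for name in files:
            ext = os.path.splitext(name)[1]
            if ext in (".java", ".rb", ".yml", ".xml"):
                with open(os.path.join(root, name), errors="ignore") as f:
                    counts[ext] += sum(1 for _ in f)
    return counts

# e.g. count_lines("jruby-1.4rc1")  # placeholder path to an unpacked tree
```

Like the original one-liner, this is a crude measure: it counts comments and blank lines the same as code.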
It's almost like he's trying to prove the converse of his thesis. Not to be mean here, but he's a fairly junior developer, a younger guy who has done some things but not that much. I've worked on some very large, highly profitable projects, and we always knew the use cases and requirements long before we got into the project. There are tools that allow not-great engineers to produce good stuff. There are also great engineers who are capable of dealing with the astronomic complexity of a large project, and it turns out that people pay a lot more money for software that solves problems they can't otherwise solve. If there is a simple solution that satisfies the requirements and beats the competition, then more power to you. Unfortunately, many complex problems have complex solutions.
"Big" or "small" software is relative to the problem at hand. The author essentially posits that "big == tightly coupled", which doesn't have to be true whether the team uses agile or not.
I think the real thing that the author denounces is blanket statements like "this project will be big so we must use Java" without really thinking about the project's potential structure or the best language.
I don't agree with the post, because there are projects where you know in advance that the project will be big. You can probably avoid making it HUGE, but you cannot keep it small. Trivial examples:
- Writing a web browser
- Writing a professional image manipulation software like Photoshop
- Writing a professional 3D modeler like 3D Studio
- Writing an IDE like Eclipse, IDEA, or NetBeans
Yes, there are projects that you know beforehand will be huge, but that doesn't mean that they should be made, or that that is the best way to go about them.
For instance:
Browser - if you're going to start building a web browser, it doesn't have to be huge... unless you're going to have your own rendering engine, your own JS engine, etc. You'd probably just be addressing some set of features that were missing/misdone in other browsers.
Writing Photoshop - what, are you going to remake all the features of Photoshop? And do you think your software will be better? That's a fail. You'd probably start with some image manipulation program covering a limited set of features that Photoshop can't do, etc.
I get your point, and I totally agree that if you _WERE_ going to remake Photoshop, it's totally going to be huge... but that doesn't mean that it's the best path to go down.
You're making a marketing argument, not a technical one. Trying to build a new Photoshop competitor from scratch is probably a losing proposition for a variety of reasons, regardless of how you run the project or structure the code.
However, there is a large class of problems out there which do require a huge chunk of software if you want to address them in any useful way. No matter what you do to reduce coupling between modules, there is a certain level of irreducible complexity. For those cases you need to do a lot of up-front design work or you're going to end up with a huge mess. And no, refactoring is not the solution here.
This is a concise yet necessary shared observation on software design. The problem invariably becomes feature creep, or inheriting someone else's working code.
Not every problem can be optimally fast and also optimally sized for simplicity. As an algorithms coder of quite a few years, of course I prefer and love simple code that's easy to read, remember, and modify. But then someone comes along and says: we need this fairly quick code parallelized, ported to these other platforms, and faster than real time. Dynamically size and share memory as needed to scale to any number of available nodes, and have it driven by a GUI, but make those buttons overlay an elliptical earth model...
As you can imagine, writing simple (and reusable) code is a desirable thing for programmers. Unfortunately, we don't dictate the terms and requirements; real physical systems do. Sometimes deadlines trump both of the above, and getting something to work means messy throwaway code. I guess we could all agree never to write software like that; good luck getting consensus (herding cats made easy).
<i>Dynamically typed languages are fine for smaller programs and simple web applications, but if you’re building something big, something that will be several millions of lines of code, you really need all the tools you can only get from a statically typed language</i>
From my experience I can only speak to the benefits provided by a statically typed language as you try to modularize the components. That isn't to say that dynamically typed languages don't provide an alternative or similar benefit; I just don't have large-project experience with them.
I'm not certain how he draws a causal link from using agile to the resulting software being modularized and cohesive. It's just a correlation on the projects he has worked on.
Both the arguments for dynamic and static languages as presented here are vacuous.
yeah. and if you write small, anybody else can likely do it, too. so: you write big and complex, you have a barrier to entry. but it's hard. you write small and easy, you don't have a barrier to entry. which is hard. welcome to planet earth.
He said "optimizing a car to hit a rock wall." Believe me, if your car was optimized for hitting rock walls, it would look very different than it does now.