A somewhat similar lesson I learned when developing games (and other 3D apps): make sure your debug output is so good that you already have an idea of what's going on before you ever start the debugger. Simple stuff, like always being able to print your 2D and 3D cursor positions on screen. You can add lines and polygons to the scene for debugging with a single line of code (and afterward you only comment those calls out rather than delete them, because you will need them again). When developing things like A*, make sure you can print every value on screen easily. Adding all those kinds of easy debug outputs pays off over and over again, and yet it usually only gets added to a project after people have been stuck on the algorithms for too long. So the right problem in 3D development: debug output first.
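To make the "add a debug primitive with a single line" idea concrete, here's a rough sketch; the names (debug_line, debug_text, flush_debug_overlay) are made up and the drawing backend is stubbed out with print:

    # Queue-based debug overlay: any game code can add a primitive with one call.
    _debug_primitives = []

    def debug_line(start, end, color=(255, 0, 0)):
        # One-liner to drop a line into the overlay from anywhere.
        _debug_primitives.append(("line", start, end, color))

    def debug_text(text, screen_pos=(10, 10)):
        # e.g. print the 2D/3D cursor position every frame.
        _debug_primitives.append(("text", text, screen_pos))

    def flush_debug_overlay(render_fn=print):
        # Called once per frame after the normal render pass; a real renderer
        # would draw the primitives, here we just print them.
        for primitive in _debug_primitives:
            render_fn(primitive)
        _debug_primitives.clear()

    # In the game loop:
    debug_text("cursor3d = (1.0, 2.0, 0.5)")
    debug_line((0, 0), (100, 100))
    flush_debug_overlay()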
If/then guards are better than commenting out code: you can toggle debug output at runtime, or hardcode the flag to a constant so an optimized production build skips the if-check entirely.
You need both. The thing with easy debug-output is that you start using it so much that you simply get too much stuff on the screen at the same time if you keep it all enabled.
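A minimal sketch of that combination, with made-up names (DEBUG_BUILD, debug_print, toggle): per-category runtime toggles keep the overlay from flooding, and one hardcoded constant lets a production build skip the debug work entirely.

    DEBUG_BUILD = True                      # set to False for production builds
    _enabled_categories = {"pathfinding"}   # e.g. only show A* output right now

    def debug_print(category, message):
        # Guard instead of commented-out code: a cheap check, nothing gets deleted.
        if DEBUG_BUILD and category in _enabled_categories:
            print(f"[{category}] {message}")

    def toggle(category):
        # Bind to a debug key so categories can be flipped while the game runs.
        _enabled_categories.symmetric_difference_update({category})

    debug_print("pathfinding", "open set size = 42")   # shown
    debug_print("physics", "contact count = 7")        # suppressed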
"The problem was the problem. Paul realized that what we needed to be solved was not, in fact, human powered flight. That was a red-herring. The problem was the process itself, and along with it the blind pursuit of a goal without a deeper understanding how to tackle deeply difficult challenges."
I think this is untrue: people were solving the right problem, but doing so with insufficient tools. MacCready's solution was to improve the tools, but other alternatives (for example, hiring a thousand engineers and building a thousand planes a year) would have found the same answer. Of course that answer would have come at a much higher cost and for this problem it would have been uneconomical, but there are plenty of domains where MacCready's approach would have been suboptimal.
"Iterate faster" is a good lesson that I think is probably applicable for all of us, but the temptation to solve tangential problems is a hard one to resist and often unnecessary.
"but other alternatives (for example, hiring a thousand engineers and building a thousand planes a year) would have found the same answer"
Before MacCready came along, there were a thousand (well, a large number of) engineers building a thousand planes a year. But because these attempts were disjoint, they were all making the same mistakes and were slow to learn from them.
The point of the article (as I understood it) is that when you're working towards a goal, you usually don't fully understand why that goal may be difficult to achieve -- what the real problems are. This is why it's important to get early feedback about whether or not you're on the right track, and to adjust your attempts accordingly.
You are right: the problem was not the problem, but the available tools, namely the inability to "fail fast and cheap". However, I am not sure what alternative to building better tools first would be viable - throwing more resources into broken processes rarely, if ever, leads to results.
Check out _The Mythical Man-Month_ for a refutation of this.
Quoting from memory:
- Nine women cannot have a baby in one month.
- Adding more developers to a late project makes it later.
The problem is that in order for throwing a thousand engineers at the problem to be time-effective, each one must communicate what he has learned to all of the others. That takes time and adds friction.
Your examples are only true because gestation and many software projects are not easily parallelizable. It's a rule of thumb that applies to many scenarios, but obviously not all. If it takes 2 hours for one person to mow his lawn, then 2 people could mow it in about 1 hour. 100 people really can lift 100 times the weight of one person. Not to mention that nine women can have nine babies in nine months.
Making many different aircraft prototypes simultaneously is certainly parallelizable, although it obviously would be enormously expensive.
In experimental tasks, parallelizing does not provide the same gain in success probability as accelerating the process of prototyping.
Building prototypes in parallel does not provide the benefit of a feedback loop, as the iterative process does.
Let's say you have a 10% probability of success on the first attempt, which improves by 40% after each experiment (to 14%, 19.6%...), and you have resources for 5 attempts. The chance of getting at least one successful solution is: parallel ~ 40.95%; iterative ~ 72.19%
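A quick check of those numbers, under exactly the assumptions stated above (10% initial success probability, multiplied by 1.4 after each failed attempt, 5 attempts in total):

    p0, growth, attempts = 0.10, 1.4, 5

    # Parallel: five independent attempts, all stuck at the initial 10% success rate.
    p_parallel = 1 - (1 - p0) ** attempts

    # Iterative: each attempt learns from the last, so its success rate is 40% higher.
    p_fail = 1.0
    p = p0
    for _ in range(attempts):
        p_fail *= (1 - p)
        p *= growth
    p_iterative = 1 - p_fail

    print(f"parallel:  {p_parallel:.2%}")   # ~40.95%
    print(f"iterative: {p_iterative:.2%}")  # ~72.19%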
A message from Guido van Rossum on the Python-Dev mailing list confirms this (http://mail.python.org/pipermail/python-dev/2006-December/07...). YouTube was launched in 2005 and the message is from 2006, so it’s a safe bet they’ve been Python-based right along.
> “Python is fast enough for our site and allows us to produce maintainable features in record times, with a minimum of developers”—Cuong Do, Software Architect, YouTube.com
In this case, the problem domain is still very well understood and success is easily defined in concrete terms: human powered flight across a distance.
In startups, you often start with one problem in mind but stumble on another (and another, and another...). And even when you solve the technical problem, you still have to solve the business model problem.
Yes, you may solve the problem of finding the cheapest flight through search, but can it make money? And how long will it take to break even? etc etc
So redefine the problem as "Use my software skills to address an unmet need in a sufficiently large market" or "find a scalable business model" or something else along those lines.
It's true that this is the problem, but it's not like people haven't been trying to find a way to make the process quicker. It's only recently that we've been able to start applying computational techniques effectively to aid in drug development; before that it was mostly wet lab work.
Of course that's only going to help address the first stage of the pipeline; eventually you still need to do a bunch of research into manufacturability of the drugs, clinical trials, etc. Those all take time. Maybe they could be sped up as well, but I can't say that's an aspect of it I know much about :)
I know you have good intentions, but that documentary wasn't very realistic -- it wasn't really well-grounded in science.
For example, they keep presenting an argument about how milk protein is harmful because lab rodents developed liver cancer when fed with a diet of 20% milk protein, as opposed to rats fed with a diet of 5% milk protein. What they don't mention is that many of the rats fed 5% milk protein died.
In terms of meat, they don't make a logically sound argument against eating lean white meat or fish. Fish can be extremely healthy and nutritious. Every argument presented in that movie applies mainly to red meats or unhealthily-cooked white meats. They just use red meats as an example, and then make generalizations.
And like most documentaries, they include anecdotes to keep your attention.
It's a movie with good intentions (it's a lot easier for the average American to be healthy just by cutting down on meat consumption), but exaggerated claims (meat -- especially fish and lean white meat -- and milk protein aren't going to kill you, and can actually be very nutritious).
You're kidding right? Steve Jobs delayed his cancer treatment because he believed this sort of stuff. We do not know how to cure cancer by changing your diet.
Can someone explain to me how one can use the "iterate faster" method to build highly reliable systems?
What would GMail look like if Google had released it with an "iterate faster" mentality? Your INBOX just got accidentally erased because of our UI bug. Oops, sorry, we'll release a fix in 5 minutes.
How about Windows? or a generic OS driver? Oops, your machine is just blue-screening now. Don't worry, we'll just release an update in an hour. It'll certainly make your OS drivers very reliable.
Don't confuse the Web 2.0 "iterate faster" era with "the right way". Building "iterate faster" systems is easy. Building systems "the right way" is hard.
> Find a faster way to fail, recover, and try again. If the problem you are trying to solve involves creating a magnum opus, you are solving the wrong problem.
I bet GMail and Windows do iterate and fail fast, internally. They probably often do fix bugs within 5 minutes, and have continuous integration tests running at least hourly. This is "the right way", and yes, it's hard. You don't see the fast iterations in the final product, but that doesn't mean they're not there.
Iterate rapidly in test environments, and have more controlled releases to production.
You can also use techniques like feature switches to roll out experimental features to only part of your userbase, although even then you'd ensure basic reliability first.
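As a rough illustration (not any particular library's API; ROLLOUT_PERCENT and feature_enabled are made-up names), a percentage-based feature switch can be as simple as hashing the user id into a stable bucket:

    import hashlib

    ROLLOUT_PERCENT = {"new_compose_ui": 5}   # hypothetical feature at 5% rollout

    def feature_enabled(feature, user_id):
        if feature not in ROLLOUT_PERCENT:
            return False
        digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100        # stable bucket in [0, 100)
        return bucket < ROLLOUT_PERCENT[feature]

    # The same user always lands in the same bucket, so their experience stays
    # consistent while the rollout percentage is ramped up release by release.
    print(feature_enabled("new_compose_ui", user_id="alice@example.com"))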
To be honest, it sounds like you've confused 'iterate faster' with 'write crap code'. That's not what it means at all. It's HARDER to iterate faster with crap (presumably untested/untestable) code since you're piling technical debt upon technical debt.
Clean, tested, maintainable code is essential for quick iterations to succeed.
> If the problem you are trying to solve involves creating a magnum opus, you are solving the wrong problem.
I disagree. If your goal is to create something great, you aren’t necessarily so misguided—how you go about realising that goal is what’s relevant. The Gossamer Condor was no less a masterpiece for having been created iteratively.
In solving the "right" problem, more constraints were added to solving the "wrong" problem, and those constraints helped solve it. Often constraints, self-imposed or not (i.e. limited resources, etc.), produce innovative solutions.
I think it's time for a new word to describe that concept. The word "agile" has been co-opted by process dweebs for their latest bureaucratic nightmare.