I can give you a counterexample anecdote from my own career: I have spent my entire career writing code mainly in Common Lisp. I have a long list of successfully executed projects, some of which were done in the face of directly competing efforts written in other languages which failed spectacularly. I have a "profound belief" in Lisp, and I can justify that belief with a litany of empirical data and theoretical explanations. Nonetheless, every single time I have ever expressed that belief it has led to my career advancement being cut off and ultimately to the loss of my position and having to start over.
So no, it's not enough to have "profound beliefs". You have to have the right kind of profound beliefs (unless you are extremely lucky -- see below). They have to not conflict too much with the profound beliefs of your management and co-workers because if they do you're sunk no matter how much data you have to back them up. That just turns out to be how the world works. I learned this lesson the hard way, and far too late in life for it to do me much good, but I thought I'd pass it along.
There is one exception to this rule, and that is if you just happen to have iconoclastic beliefs that are also correct, and you somehow manage to acquire the resources to act on those beliefs, and the results you produce happen to find a large market. Steve Jobs is the poster child for this, and even he is a cautionary tale because his career very nearly ended when he was first fired from Apple. It was only the good fortune of Apple management being utterly incompetent that gave him a second chance, which he seized on to spectacularly good effect. But Jobs was literally one in a billion.
Steve Blank's lessons are mostly for founders and executives. Of course employees don't get to just choose different beliefs than management! Management sets the direction and you either agree, disagree and commit, or disagree and quit.
Yes, but I'd extend that to really anyone who is responsible for making decisions about something.
If it's your job to choose the development tools, you should have some profound beliefs about that. If it's your job to use the chosen tools to implement things, you really don't need to have any profound beliefs about the tools, and though it is helpful to understand the beliefs and motivations of those who are making the decisions, having your own (especially if contrary) is a recipe for "how to be frustrated at work."
If you are hired to follow, then you are supposed to follow. You can give advice and suggestions but you have to fall in line when an executive decision is made. That's how a team works, otherwise you aren't doing your job.
If you're hired to lead, then you need to make decisions, and you're responsible for the success or failure of those decisions. You should have profound beliefs behind the decisions you make, and you should also be able to take advice and suggestions, but at the end of the day the success or failure is your responsibility, not the advice-givers' -- thus you always have the right not to act on others' advice and suggestions.
It's important to know what your role is in any organisation.
I actually like the 1995 movie Crimson Tide, which illustrates these principles while exploring moral ambiguities and procedural questions under an enormous amount of stress, where not only the lives of the men in the sub are at stake but the lives of whole nations -- both in making decisions and in knowing when it is the right time to challenge those decisions.
I think Steve is approaching this from much more of a business angle. Looking at the Business Model Template slide in the article, programming language choice may not affect the business model much, except perhaps in rare cases where the language/tech achieves a business moat. Founders can definitely hold random theories about the business market/strategy, as long as they are willing to invalidate them (profound beliefs that are loosely held).
Lisp has a profound effect on the business model because it can give you a dramatic productivity boost (like an order of magnitude or more), but it makes staffing more difficult: there aren't many experienced Lisp programmers because very few organizations use it, which makes for a vicious cycle. But this is precisely the sort of situation where, if enough people simply changed their minds, that by itself could change the underlying reality.
It's not just Lisp. There's a similar thing happening today with Rust, which is clearly superior to C from a technical point of view, but which very few people use simply because there are very few people using it. But Rust might be one of the rare exceptions where the technical superiority is enough to allow it to break this cycle.
> An order of magnitude "or more" is an extraordinary claim. The evidence just isn't there.
Let me be clear: I am claiming that these kinds of productivity gains are possible, not that using Lisp will automatically give you a 10x improvement under all circumstances. And yes, I can give you concrete examples of demonstrable >10x productivity improvements which resulted in products succeeding where they otherwise would undoubtedly have failed. These are generally found in niche applications where there is a lot of domain knowledge that needs to be brought to bear. So you're not going to see big wins in, say, commodity consumer products, which is the reason that the wins don't get much press. But the evidence is definitely there if you look in the right places.
> I can give you concrete examples of demonstrable >10x productivity improvements which resulted in products succeeding where they otherwise would undoubtedly have failed.
Well, I would definitely be interested in these examples. I occasionally write Lisp (admittedly, Emacs Lisp rather than Common Lisp) and while I appreciate having macros and other metaprogramming tools at my fingertips, I've never encountered a situation in which their use was critically important. I can always replicate the thing I wanted to do in Python with a bit of boilerplate; if I had to choose, I would certainly take Python's huge ecosystem over Lisp's metaprogramming. Frankly, I don't think there have been any language silver bullets after structured programming and garbage collection. So I'm very skeptical of the claims of extreme Lisp productivity. I'm open to being convinced otherwise though.
I think a couple factors are at play here. First, most developers never really learn metaprogramming or use it, even in languages with native facilities for it. You don't need it to get the job done, strictly speaking, and it is a difficult skill to acquire. Second, many software applications don't benefit that much from metaprogramming even when you have those skills. The benefits aren't universal, which brings the costs into question.
Nonetheless, for some types of software, writing code without using metaprogramming will have several-fold the LoC, complexity, etc of the equivalent with metaprogramming. But if you never developed metaprogramming skills, you are unlikely to recognize when these opportunities arise. In these cases, you do see large productivity multipliers. I see this pattern all the time in C++; most C++ developers have no idea how much concision (and type safety) metaprogramming enables in contexts where it is perfectly suited for the job because they never learned metaprogramming in C++, so they write vast amounts of brittle boilerplate instead.
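To make the boilerplate point concrete in this thread's native language rather than C++, here's a toy Common Lisp sketch. The macro and all the names in it (define-record, validp) are made up for illustration, not from any real codebase. One macro call generates both the class definition and a validity check that would otherwise be hand-copied for every record type:

    ;; define-record is a hypothetical macro: one call expands into a
    ;; defclass plus a per-type validity method, written once here
    ;; instead of once per record type.
    (defmacro define-record (name &rest fields)
      `(progn
         (defclass ,name ()
           ,(loop for f in fields
                  collect `(,f :initarg ,(intern (symbol-name f) :keyword)
                               :accessor ,f)))
         (defmethod validp ((obj ,name))
           ;; a record counts as valid when every slot has been bound
           (every (lambda (slot) (slot-boundp obj slot)) ',fields))))

    ;; Each call below stands in for a defclass plus a hand-written method:
    (define-record point x y)
    (define-record user name email)
    (validp (make-instance 'point :x 1 :y 2))   ; => T
    (validp (make-instance 'user :name "Ada"))  ; => NIL (email unbound)

Scale that from one generated method to a dozen (printers, serializers, comparison functions) and the LoC multiplier becomes obvious.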
I've used metaprogramming in enough languages and contexts to recognize it as solving a broad class of problems in a general way, but you still want to pick your moments because it isn't free. Similarly, garbage collection is the right choice for many software applications but it isn't free and there are contexts in which garbage collection introduces far more complexity than is justified by the benefits.
Recognizing these situations and being able to take advantage of them is a market opportunity.
The big wins are in niches that involve a lot of domain-specific knowledge. The two best examples that I was personally involved with were the NASA Deep Space One Remote Agent and the Meta chip design tool from Barefoot Networks (acquired by Intel in 2019). In the former case, an attempt was made to do the implementation in C++, which failed outright. In the latter case you can do a pretty direct apples-to-apples comparison of the design cycle time relative to off-the-shelf design tools. Meta lets you iterate in minutes what would take hours using standard tools. (To be fair, Meta does not do everything that the standard tools do, and before you can tape-out you have to do a few iterations on standard place-and-route and timing verification. But it's still a huge win over just using those for the entire design.)
I have experience with EUROPA, which has gone on to have a long life independent of RAX (e.g. I believe it's still used via SACE for the ISS solar arrays). EUROPA is C++, not Lisp, even though Lisp isn't an unusual choice in AI planning.
I don't doubt that RAX used Lisp, but they replaced it with C++ within a few years, somewhere between HSTS and the second generation of EUROPA. The C++ versions are the ones that have been in 'production', so to speak, for a couple decades now. The early 2000s Mars Exploration Rovers might've been on an old enough version to still be running Lisp, though.
That's a good question without an easy answer, but there are two leading theories. One is that languages are infrastructure and it's really hard to replace infrastructure once it gets established (look at how much time it's taking for electric cars to replace gas-powered ones). The other is that Lisp's productivity boost allows individuals to get things done by themselves and so it tends to attract people who aren't good at collaborating (the famous "Lisp curse"). So on an individual level it's a win, but at an organizational level it might not be unless you manage it very carefully.
> So on an individual level it's a win, but at an organizational level it might not be unless you manage it very carefully.
This is why I often choose Go over other languages. It's comparatively easy to keep a codebase on the rails since the language is so restrictive (at the cost of repetitive, explicit verbosity).
Historically I think there's a very strong case that Symbolics did outperform others with their software productivity, especially in graphics. They had an ability to wade into certain domains and produce legitimately shockingly competitive products, which really should not have been possible.
But I also think Lisp leads to spectacular burnout as I think it imposes a greater cognitive requirement on the part of the developer.
"But I also think Lisp leads to spectacular burnout as I think it imposes a greater cognitive requirement on the part of the developer."
I have never heard about this as a speciality of Lisp.
> there's a very strong case that Symbolics did outperform others with their software productivity
Symbolics used object-oriented programming in the form of Flavors. It had an integrated, GUI-based development environment. Development was mostly incremental: one could write some piece and run it immediately inside the application under development. Everything effectively ran in debug mode, without needing an explicit debug mode. The whole system, the whole documentation, and the whole source code were always available. The applications were large, but by today's standards nothing shockingly large.
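For readers who haven't experienced that style, here's a minimal illustration in plain portable Common Lisp (nothing Symbolics-specific, and greet is just a throwaway example function): you redefine a function at the REPL and the running application picks up the new definition immediately, with no stop/recompile/restart cycle.

    ;; Start with a first version and call it from the live system:
    (defun greet (name)
      (format t "Hello, ~a~%" name))
    (greet "world")    ; prints: Hello, world

    ;; Later, at the same REPL, without restarting anything:
    (defun greet (name)
      (format t "Greetings, ~a!~%" name))
    (greet "world")    ; prints: Greetings, world!

On a Lisp machine this style presumably applied to the whole stack, editor and OS included, which is where much of the productivity story comes from.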
But there were also downsides. Mostly this: hardware developed much faster outside, and the software had to move there.
First the Symbolics graphics suite was ported over to SGIs running Allegro CL, then also Windows NT systems. A refresh of the graphics suite was developed under the name Mirai.
I developed my first 3D graphics system on a Symbolics back in the mid-80's, before Symbolics released their S-Geometry package.
Though the software was powerful and made use of the extensibility provided by Lisp, it failed to gain much traction in the industry because the hardware was not suited to 3D graphics. Silicon Graphics (SGI) workstations took that market by the early 90's.
I do think it's the relatively superficial uniformity of it, regardless of the level of abstraction at which you're operating, which eventually makes things crack.
My hunch is top performing lisp devs are capable of holding more of this in their head at once than mere mortals. A key benefit of more conventional languages is you can limit the scope of your concerns and reasoning (within limits), however, that does prevent you from being able to make whole system optimisations, which is notably what Symbolics (at least in graphics) were outperforming everyone else at. What the Symbolics devs achieved seems to me only viable by having an unusually capable team focused on a specific domain for a long time, without too much diversion/distraction. The bus factor must have been horrific.
This thread made me go off and look more into this, and I couldn't help thinking the software industry has really lost something with the commercial failure of these very deeply specialised and incredibly high end tools. (ICAD was the other). It's odd how people could justify spending huge amounts on the hardware just to run the software, but didn't want to pay for the software when it became possible to run it on just about anything, and so the software stops being developed.
> Lisp ... can give you a dramatic productivity boost
... if you are smart enough. If it were that easy to get more productivity, everyone would be using Lisp. But you need to hire very smart developers to get that productivity, and most developers are by definition average. Your average developer will be frustrated, not more productive, with Lisp.
Those very smart developers would be just as effective in a mainstream language, and probably more so in an appropriately chosen mix of languages. It seems like the real advantage of lisp is that it self-selects for smart people who program for fun, and these people typically are also good developers.
The difference I would point out is that Rust has corporate sponsorship. I don't recall any large corporations sponsoring e.g. CMUCL/SBCL to the same level.
There are plenty of counterexamples here, with languages that had corporate sponsorship but did not succeed (e.g. Go) and vice versa (Perl, Python).
In the case of Lisp, it was done in by two things: AI winter, and the fact that the Lisp community was never able to organize itself. This is the famous "Lisp curse": it is precisely the fact that Lisp is a productivity multiplier that seals its fate because it allows individuals to get things done without collaborating.
I always thought this essay was a great take on Lisp(ers), but it's interesting how that sort of statement can eventually self-perpetuate a couple of negative events into a state of learned helplessness.
Re-read the article, it is truly great.
It is not about "profound beliefs", but about "profound beliefs, loosely held".
It is not about sticking with your beliefs, but about validating them, and changing them based on data.
Now, talking as the technical CEO of a midsize company, who coded in Lisp and still codes:
You are missing the following pieces of data to justify your belief that Lisp would be a good language for any project for most companies:
1. Long list of successful companies with large Lisp codebases (are there examples of successful projects/companies based on such codebases)
2. Long list of examples of long term successful projects written in Lisp (how maintainable the code base will be if written in such language)
3. Long list of job positions for Lisp developers (ability to find more talent to expand the project when needed)
4. List of people in the company who can join the project (bus factor)
Management does care about how long it takes to write the code. Yet it is only one of many aspects. So, while your beliefs might be profound, they are ... short-sighted.
> your belief that Lisp would be a good language for any project for most companies
That's a straw man. I have never made that claim. I think Lisp could be effective in a lot more areas than those in which it is currently deployed, but I have never said that it would be good for "any" project or "most" companies.
What I said was that 1) in the past I have advocated for my "profound belief" that Lisp would be effective for particular projects on which I happened to be working, 2) in those circumstances in which I was able to persuade the decision-makers to actually use it, it worked spectacularly well and 3) despite 1 and 2, the net result was a net negative impact on my career.
This is not sour grapes, just a cautionary tale. Holding and advocating for profound beliefs is not always a good idea.
> It was only the good fortune of Apple management being utterly incompetent that gave him a second chance
Not only that: before that second chance, he established its plausibility himself, through his willingness to test the idea that there was a growing market for widespread Unix 'workstations'.
His hardware was too expensive for a huge hobbyist audience (which I know because I had one), but NeXTstep, its tooling, and the resulting apps gained a solid foothold in a somewhat grotesque early-90s landscape (starting from 1988 or so, it took years even to get that foothold).
I had one, had been using Suns before that, very early Microsoft before that, and an Apple ][ and an Atari 800 before those. I think his getting fired is something his brazen young self sublimated into a determination to prove an idea's viability, such that an evolution of his NeXTstep effort not only brought him back to a hardware company better able to execute on the hardware portion, but turned his post-firing gamble into things that are ubiquitous now.
How do you define a well executed project? I can think of some criteria that might be relevant:
- Are the projects still supported? How easy is it to find/hire people to maintain them? Is it easy to add new features? What is the quality like?
- Are they small/stand-alone or have they evolved into large systems with large teams working on them and many new features?
- How would you characterize their performance and scalability? Can they expand to run larger workloads? Do they make good use of the underlying hardware?
- Is there a good ecosystem, libraries, packages, drivers, etc. for the domain you're targeting?
I don't think you necessarily need to avoid conflict completely, but you need to decide which hill you are willing to die on. If your co-workers, team, or management disagree with you, then you can either try to convince them or find consensus somehow. They could be right and you could be wrong. I can think of many examples where a single person introduced new technologies, languages, or methodologies to an organization (including myself) ... Most good organizations welcome this sort of contribution. Of course, big decisions require compelling arguments. I've also had what I think were really good, compelling proposals for change end up being rejected; that's tough, but life goes on.
EDIT: This is mostly unrelated to the article, but I think the principles in the article do apply more broadly. Having a belief is good; it is what helps us drive in the right direction, but we should constantly be open to new information that challenges our belief and adjust it if needed. The initial "belief" also has context: given some circumstances, it is what we think is the right approach. Then as we move on, circumstances change and we get more data, and we adjust our belief. This is different from what I'd call religion. Religion never changes; pretty much by definition it cannot change. You're not going to start working on Shabbat if you're Jewish, regardless of what new information comes up or the situation. If you have no belief, however, you're at the other extreme: you're doing random things without any strategy or plan for the future (which is sort of where the author was when he was scolded by his CEO in the story).
If your belief is that strong, you should found your own company, build your software with Lisp, and beat competitors by launching features faster and with fewer engineers. Ruby got popular for a while because several successful startups used it; Python for similar reasons.
Steve (Jobs, not Blank) was an unparalleled visionary. I give him credit for the Apple II [1], the Mac, Objective C, NeXTOS/OSX/MacOS, and the whole i-series from his second stint at Apple (iMac, iPod, iPhone, iPad). To call him an overachiever would be quite the understatement. He is truly in a class by himself. I can't offhand think of anyone else in his league. (Elon Musk comes closest, but I'd rate him a very distant second.)
---
[1] Yes, I know Woz actually designed and built it, but Burrell Smith designed the original Mac, and neither of those things would have been possible without Jobs.
Glorification doesn't have to be intentional. Steve Jobs left a trail of havoc in the lives of the people he used to build Apple. He did objectively bad things to achieve objectively good things, so to speak. The good result is tightly coupled to a bad technique. Omitting the bad technique is tantamount to glorification.
The language in your posts is also very...enthusiastic?
Speaking of hard workers, have you read about Niklas Luhmann?
> Omitting the bad technique is tantamount to glorification.
The reason I left it out is not because I want to "glorify" him but rather that I have no specific knowledge of his "bad technique". All I know is that he had a reputation for being an asshole. I have no reason to doubt that this reputation was well deserved, but I don't have any specific knowledge one way or the other.
> The language in your posts is also very...enthusiastic?
I have no idea what that means. What does "un-enthusiastic" language look like?
> the whole i-series from his second stint at Apple (iMac, iPod, iPhone, iPad)
Don’t forget the iBook and iSight!
And a whole pile of software and services like iTools (with its iDisk and iCards), iWork, and most of the iLife suite (GarageBand somehow escaped being named iBand).
1. Developing beliefs and acting on them is a key part of leadership.
2. Without it, you limit career advancement.
3. It's a skill that can be learned.
I really like Blank's idea of a profound belief -- a vision of the future tempered with constant checks against evidence. With that picture, the word "bet" in the business context makes a lot more sense to me. Companies operate in the present, but they spend for tomorrow, laying track for where the company will be in the future.
Nice article; it can be applied to the scientific enterprise.
A scientist should have a strong enough belief in a paradigm that they can work within it, but hold it loosely enough to see the cracks in the theory and find the right problem that will change the field.
I really wish vague no context titles like this wouldn't be allowed. I don't know who votes for two words with no other information, but it seems silly.
It does have context -- it's on Steve Blank's blog. Now, if you have never heard of Blank, I suppose it is context-free in that sense. But I was expecting an essay riffing on the idea of "profound beliefs", and I got exactly that.
I have the same problem with other titles -- the world of Javascript front-end frameworks seems to very commonly have a name collision with some word or phrase from a context that I care about, and then I click through and find web front-end stuff, which I care about much, much less than what the cafe is serving for lunch. It's just part of the overhead of surfing HN. You'll be fine.
Obviously, they have information in them, perhaps not information that's to your taste. "Why are titles that are not to my taste allowed" is at least a question that's relatively simple to figure out.
First you complain that the title doesn't have enough information -- more information please! Then you imply there should be no titles at all -- less information please! Do you see the impossible double standard?
They're saying 'what's the point of titles if they don't tell me what's inside the tin.' There's no double standard there at all. It's an impossible one (unless promoted to Mystic Master of All Titles in The Universe or At Least on HN) but there's nothing double about it.