Every time I spin up a new project, I try to answer the following question honestly:
"Am I using this project as an excuse to learn
some new technology, or am I trying to solve a problem?"
Trying to learn some new technology? Awesome, I get to use one new thing. Since I already understand every other variable in my stack, I'll have a much easier time pinning down those 'unknown unknown' issues that invariably crop up over time.
Trying to solve a problem? I'm going to use what I already know. For web stuff, this'll be a super-boring, totally standard Rails web app that generates HTML (ugh, right? How last century), or maybe a JSON API if I'm trying to consume its output in a native app. For mobile stuff, this'll be an Objective-C iOS app.
Waffling about it and saying 'well, I am trying to solve a problem, and I think maybe a new whiz-bang technology is the best way to do it' is the surest path to failing miserably. I've watched incredibly well-funded startups with smart people fail to deliver a solution on time because an engineer was able to convince the powers that be that a buzzword-laden architecture was the way to go.
You don't know what the 'right' solution is unless you understand the tools and technology you'll use to deliver that solution. Anything else is just cargo-culting.
Comments here are geared against picking a technology just because it is brand new and exciting, but sometimes you need to pick up something that is just different from what you or your team know well.
In a project I worked on once, we went with "what we knew" (a standard normalized SQL schema) to build an analytics engine. The problem with "going with what you know" is that you are likely to badly reinvent well-established patterns. If we had stopped for a minute and learned about star schemas, the project would have ended in much better shape than it did, and the effective time to release might well have been shorter.
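For those who haven't seen one: the gist is a single wide fact table joined to small denormalized dimension tables, instead of a fully normalized web of tables. A minimal sketch in Python/SQLite, table names invented:

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Star schema: one fact table of measurements, keyed to small
    # denormalized dimension tables. Analytics queries become one
    # fan-out of joins from the fact table instead of a long chain
    # of joins across a normalized schema.
    conn.executescript("""
    CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, day TEXT, month TEXT, year INTEGER);
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
    CREATE TABLE fact_sales  (date_id INTEGER, product_id INTEGER, units INTEGER, revenue REAL);
    """)

    # A typical analytics question: revenue by category per year.
    rows = conn.execute("""
        SELECT d.year, p.category, SUM(f.revenue)
        FROM fact_sales f
        JOIN dim_date d    ON d.date_id = f.date_id
        JOIN dim_product p ON p.product_id = f.product_id
        GROUP BY d.year, p.category
    """).fetchall()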
BTW, learning "new things" is almost always useful but isn't always precisely exciting. Data warehousing is one example :-).
Becoming proficient with a selected set of technologies is still a good idea, but I'm willing to learn and try new things all the time. First thing I ask myself is if a problem was already solved by someone else, and how.
FWIW, maybe the approach in this case is traditional normalization to get the product out the door. When it becomes untenable, hire someone for whom star schemas are boring.
Put another way, the star schema was known to be a better approach only after the fact. Had your team researched “exotic” (unfamiliar) approaches early on, there is no guarantee you would have landed on a star schema.
Eehhh... I knew about star schemas already, although I've never used one. I do think my cursory knowledge would have caused me to research them when starting a project like that though. Obviously it's all got to be a balance, but I think that the OP is probably a little bit too conservative.
Especially if you are working for other people: we are paid to innovate, and we are paid to learn stuff until it is boring. We've got to stick up for ourselves and learn on the job when we can. One new thing per project sounds great to me.
That's a great approach. By restricting yourself to one new thing, you can evaluate it in isolation.
Then, when you make your decision about that new thing, you'll know why you like/dislike it. Your decision won't be clouded by arbitrary things like library interdependencies.
Using what you already know may not always be the best approach, because of the "when your only tool is a hammer, every problem looks like a nail" phenomenon. I would do my research and use what makes sense and what is best in the long run, regardless of whether I'm experienced in the technology. When you're a software engineer with years of experience under your belt, picking up the next one will not be a big challenge.
Yes, I think this is where experience comes into it. Knowing when it will likely pay off in the long run. As a junior it's difficult to differentiate between hype and genuine usefulness.
I generally agree with your post, but I think there is a critical difference between your argument and that of the blog post. Of course teams are more productive with technologies they know, but that isn't necessarily an arbitrarily-defined "boring" technology.
To pick on one specific example in the post: Node.js is popular enough that there are lots of teams and engineers that are most comfortable and productive working with it. For these teams, choosing Node certainly wouldn't cost an innovation token, while deciding to build some service in Python, Ruby or PHP (if we take at face value that this is more "boring") may end up being more costly.
> For these teams, choosing Node certainly wouldn't cost an innovation token, while deciding to build some service in Python, Ruby or PHP (if we take at face value that this is more "boring") may end up being more costly.
It absolutely does if it is only one team in the organization. If the entire organization is using PHP and, let's say, you acqui-hire a team based on NodeJS, unless they are doing something absolutely fundamentally different they should learn PHP and push code in your existing infrastructure. This way you have one way to deploy, one type of application server to support, one set of gotchas relevant to your domain, one set of QA tools etc. Building good products is about far more than just shipping the product, it's also about the cost of long term support. Because what you are doing is fundamentally automation, the less you have to manage the more benefit of the automation you are getting, the more you can forget about it and focus on shipping other things.
What you are describing is pretty much definitionally local optimization and is exactly what you shouldn't do in large engineering organizations.
> It absolutely does if it is only one team in the organization. If the entire organization is using PHP and, let's say, you acqui-hire a team based on NodeJS, unless they are doing something absolutely fundamentally different they should learn PHP and push code in your existing infrastructure.
Substitute PHP with Java, and you've described the situation at my company exactly. The acquiring company had a legacy Java application and a lot of automation invested in making that platform work. The acquired company was a NodeJS shop that was using it long before this article or the comments in this thread would advise (this was pre-npm days). To give you an idea of the numbers, the acquired team was 4 engineers as compared to the 100 engineers of the acquiring company (a 50/50 split with an off-shore development team).

I won't say which side of that divide I was on or go into the full year of culture shock that we went through, but fast-forwarding these past 4+ years, the bulk of the company's main product has now been re-written in Node and developers are significantly more productive. Features that used to take months to push out in complex releases using a convoluted process of branching, meetings and tons of arguments are now delivered continually using the Github flow with little-to-no drama and far fewer production bugs/downtime. Our customers have never been happier with us and developers have never been happier to work here.

All of this came from the fact that the CMO who advocated for the acquisition supported the small team of 4 in every effort to spread that team's technologies and practices across the larger organization. Having been in organizations that performed at a much higher level, he recognized just how much opportunity there was for improvement and recognized that the team of 4 had the vision to create the necessary blueprint for the rest of the organization to follow. It wasn't easy, and most of the developers who were here at the beginning of the shift are no longer part of the company. But it worked... and while a sample size of one is hardly conclusive, I have a hard time agreeing with your point having seen it play out so well in the real world.
> Features that used to take months to push out in complex releases using a convoluted process of branching, meetings and tons of arguments are now delivered continually using the Github flow with little-to-no drama and far fewer production bugs/downtime.
I have a really hard time believing that Java was the culprit and Node the savior rather than the organizational stuff you mention...
It was most certainly the organizational stuff that was the main problem. But if, as the poster I replied to suggested, the small team had simply started to submit Java code instead of their Node code, that organizational stuff wouldn't have changed.
Much like a change of location can help break someone's self-destructive habits, the change of platform helped break a lot of the toxic organizational habits that had built up over the years. The shift could have been to many other platforms. And if the platform had been something other than Java, a shift to Java could have improved the situation as well. The important part was the new mindset and practices around more frequent/frictionless development and delivery.
I do think that it's easier to have that mindset when you use Node rather than Java, but Java has gotten better in this regard over the past few years.
You're talking about an organization that has an existing infrastructure that is bad. This thread is about an organization that has an existing infrastructure that is good but not 100% optimal for NewProjectX, and whether or not it makes sense to use a better-fitting technology for NewProjectX.
I've been on both sides of this very important scenario, and it truly does matter for this industry (the failure rate of acquisitions is hardly as well known as start-up failures, even though the dollars wasted - oftentimes publicly traded ones - are probably larger on these sunk costs). The VAST MAJORITY of acquisitions result in the larger company overriding the smaller one, with existing customers sticking around out of little choice and almost everyone else disappearing (Palm and HP, anyone?). Do you think a company of 4 being acquired would be able to affect a company of 200 engineers as dramatically? How about 2000? What if the engineers aren't even in charge of the platforms they're required to use? (In a hosted software-shipping company, for example, that's literally defined by what your customers want.)
I am not doubting that your scenario happens, but changing a team of 100 (likely pretty jaded) engineers, while certainly difficult, is not necessarily what people think of when we're thinking acqui-hire. In fact, I'm barely entering my second decade as an engineer and I'm starting to think that surviving an acquisition intact, with career advancement somehow, probably takes far more luck than hitting a start-up lottery jackpot in the first place.
There's gotta be a sort of trend of engineers that have gotten acquired so many times that their specialty now is to be able to scale / re-focus technology stacks and integrate and operationalize them better for other companies. Start-up companies typically want to see engineers that have a history of building stuff fast, growing rapidly, and the usual stuff that people get glory for as engineers. Established companies really aren't as picky. There's so many companies getting acquired you'd think that there's a niche for transitioning software over by now at least as contracting gigs.
Depends on your increment of isolation. This is, in theory, why microservices with APIs mean that it really doesn't matter. As long as there are sufficient hire-able engineers who know that technology, it can be used.
> This is, in theory, why microservices with APIs mean that it really doesn't matter.
No it still really does matter, because if your company needs to deploy those microservices in different ways then you need more people to support the deployment infrastructure. If you need to test those microservices in more ways you need more people to support the testing infrastructure. And for engineers, it adds friction when they try to move around and work on new and different problems in your company, because they're one of five people who know how the hell the Java infrastructure works.
If you have an existing testing and deployment infrastructure etc. your team gets those for free and doesn't need to reinvent (and support) those wheels.
> As long as there are sufficient hire-able engineers who know that technology, it can be used.
Yikes. Hiring and firing engineers is pretty hugely expensive and something you want to help avoid having to do.
> No it still really does matter, because if your company needs to deploy those microservices in different ways then you need more people to support the deployment infrastructure.
This is what PaaSes make a non-problem.
I should know, I worked on Cloud Foundry Buildpacks.
Here's how to deploy the PHP app:
cf push your-php-app
And the Nodejs app:
cf push your-nodejs-app
And hell, why not a Ruby app too:
cf push your-ruby-app
And let's not forget that Python microservice:
cf push python-code-works-the-same-way
We also kept up with the cool kids:
cf push your-go-code-with-a-godeps-file
And for the "boring" crowd:
cf push your-java-app-too
In general, these are all intended to Just Work™.
You know how making these surprisingly unalike systems deploy identically is really hard?
So does Heroku, from whom a large body of Cloud Foundry buildpacks code is derived. So did we, when we found issues specific to making code that assumes a connected environment work in a disconnected environment.
The point is, if you're doing this all by hand, you're doing it wrong. You should rent or install a PaaS and move along to the part where you create value instead of inventing a cool-sounding wheel.
> The point is, if you're doing this all by hand, you're doing it wrong. You should rent or install a PaaS and move along to the part where you create value instead of inventing a cool-sounding wheel.
That makes a lot of sense for small organizations, but I'm sorry, PaaSes absolutely do not scale to the needs of many organizations.
> In general, these are all intended to Just Work™.
My emphasis added. When they don't Just Work it's really nice to own that infrastructure and be able to fix it yourself. It's also nice to be able to tailor things more specifically to your needs. Again, I agree that owning that infrastructure is not the right solution for organizations of all sizes, but neither are PaaSes.
> That makes a lot of sense for small organizations, but I'm sorry, PaaSes absolutely do not scale to the needs of many organizations.
Outside of the giants who rolled their own because there was nothing around in the early 2000s, who?
> When they don't Just Work it's really nice to own that infrastructure and be able to fix it yourself.
Cloud Foundry is specifically designed to run either in the public cloud, the private cloud, or both. You can get it hosted from Pivotal or IBM, amongst others.
The work of my peers and I made that possible.
> It's also nice to be able to tailor things more specifically to your needs.
Cloud Foundry is open source and the IP belongs to an independent foundation. I am personally aware of at least two companies who have private forks of buildpacks because that suited their extremely precise requirements. It took them about two developer days, tops.
And their modified buildpacks also Just Work™, because they're based on a robust design that Just Works™.
That's really cool, I did not know about that. You were talking about PaaS and Heroku, which I don't think is the same as forking an open-source project and owning it yourself - and owning it yourself is fine for what I was talking about. I don't think it's difficult to name companies for whom Heroku is not appropriate. Regardless, it's all about where you draw the line; gradations, not black and white.
I still stick to my main point: your organization gets a massive benefit by all using the same toolset. If you are using Cloud Foundry, I'd still suggest the whole company stick with one language, one deployment infrastructure etc.
To be clear, if you're Google, I'm not suggesting the entire company be forced to use one language or something. In that case your company is likely working on products that are different enough that it makes sense to do away with some global optimization. Some judgment is obviously required. But if you're in the sub-500 range (which the vast majority are), it makes a lot of sense to really optimize globally with your toolset, even if deployment infrastructure is relatively easy to set up.
PS I love that you are using the phrase Just Works - the company I work for is called Justworks :)
I mentioned Heroku for two reasons. First, they are the pioneers of public PaaSes. Second, several Cloud Foundry buildpacks are extensions of Heroku's buildpacks.
I think that the nice thing about something like CF is that a whole range of problems just goes away. On the other hand, as Weinberg observed, when you solve the worst problem, the second worst problem gets a promotion :)
Cloud Foundry doesn't get much buzz on HN. But I'm a one-eyed bigoted fan, so I mention it whenever I can. I'm actually a Pivotal Labs employee, my main work is agile consulting. But I've seen enough gigantoglobomegacorps who are choking on their own impossibly heavyweight deployment/ops mechanisms that I am a bit of a bore about talking up Cloud Foundry.
It definitely sounds awesome. We've got a pretty simple deployment system at the moment and so it's a solved problem for us, but when it starts to break down we'll definitely take a look at CF.
I haven't worked in a specifically microservice environment, but I am currently working in a company that has quite diverse technology choices. One of the problems we have is that we're pretty small, and sometimes there is a lot more work that needs to be done on one component than on another; in those cases you can't really have the people who are relying on the work pitch in and help.
E.g. we use Erlang/Cowboy for our web server, and when there are bottlenecks, changes pretty much fall on the two people who know Erlang well enough to work on it.
It seems like it would have been better if the web server was written in something a larger chunk of the engineering org could modify so that when people needed changes to it for their project, they could make the changes, get them code reviewed by the maintainer, and get it shipped.
The other concern I have with microservices is that doing any wide-ranging changes is hard. Maybe it's always hard, but microservices seem like they would exacerbate the problem. I feel like I already run into this issue at my current job, where we mostly have small (~10k-line) codebases and people don't really want to make changes that will require touching more than two of them at once.
I do the same thing and usually end up trying to solve a problem using WordPress which is probably cringe-worthy to a lot of people. Most of the things I come up with are basically content-publishing so it works really well for hacking around and making something fun.
My latest creation is an Instagram-style feed of the beers a few friends of mine have been enjoying recently.
It works just fine and none of my non-technical friends have cared that they add beers in their mobile browser rather than via an app or something.
Usefulness to yourself is sometimes the only thing that matters. I'm not trying to be snarky here: your own self-development is not in anyone's interest except yours.
If you're a PHP developer wanting to start some Python, your boss would love you to continue working with legacy codebase using PHP instead of migrating parts of it to Python. At the expense of your future career prospects, of course.
I'm talking more about delivering value and less about self-improvement. Both have a time and a place, but your boss is unlikely to look kindly on you saying "I'm interested in learning Python, so I've decided to rewrite our 1mm user web app in Python instead of fixing the bugs you asked me to, because I think I could make more money with that than PHP."
Actually, as a boss I frequently do look kindly on that. I'd much prefer to have that built into assumptions and explicit upfront.
Sometimes I may say "can we do that on our 1K-user side project instead?" but I think that if you are managing development resources and you are not allowing them to learn new things, you are not following a strategy that is likely to work out long term.
You adjust the schedule as required and don't do it for projects with tight deadlines.
The truth is, if you don't ensure your developers' career development is baked into your schedules then you get a combination of high turnover, unmotivated engineers, and engineers who learn on the clock "in secret" by making decisions that are best for them rather than best for the company.
All of these cause far more problems and are harder to account for than being upfront in the first place.
> and engineers who learn on the clock "in secret" by making decisions that are best for them rather than best for the company
Yes, and possibly not even consciously. Without that constant reminder of the pitfalls and learning curve of new technology, it's easy to convince yourself it's all upsides, or at the least undervalue the downsides.
Easy, you spec out doing it "conventionally" and pad with a reasonable first guess to do something unconventional.
Oh, we need that in 2 weeks? And it's about one week's work the old-fashioned way? Well, try something else fun for a bit less than a week. Sometimes you win, sometimes you lose.
Sometimes you can do both in parallel. In the real world there's always wall clock time delays imposed by whatever. So when you're stuck waiting for whatever, learn as much as you can about XYZ till the main line is unblocked.
I run an in house database with a Django front end.
Started off with the Django Admin application, which is great for getting things up and running quickly. It's definitely been worth learning more and more of Django, as forcing everything into the Admin gets pretty hacky and basically generates a lot of technical debt.
Spending a bit of time learning class-based views, for example (over the function-based views), has paid off, as it leads to far more concise code, and basically less code to manage over the long term.
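As a rough illustration (hypothetical model; the generic view supplies the boilerplate that the function version spells out by hand):

    from django.shortcuts import render
    from django.views.generic import ListView
    from myapp.models import Sample  # hypothetical model

    # Function-based view: boilerplate you write and maintain yourself.
    def sample_list(request):
        samples = Sample.objects.all()
        return render(request, "myapp/sample_list.html", {"samples": samples})

    # Class-based equivalent: the generic ListView renders
    # myapp/sample_list.html by default, and pagination becomes
    # configuration rather than code.
    class SampleList(ListView):
        model = Sample
        paginate_by = 50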
Today, for example, I know enough about Django and how the Admin application is built from its components that I am adding features I wouldn't have assumed were possible a couple of years ago. (Stack Overflow hasn't got a decent solution to the question of redirecting to a confirmation page on save - I'll write one up assuming I get it working.)
How much can you learn if you're not trying to solve a real problem, though? My personal experience is that I can dink around with a new technology all I want, and not learn half as much as when I'm trying to apply it to do something real.
Sorry, I was more vague on that initially than I should've been. I've answered this elsewhere, but: what I was referring to was delivering value on behalf of someone else vs. using a given problem as an opportunity to learn something new. Delivering value (e.g. adding a new feature to your product, fixing a bug in a shipping product, identifying and attacking a market opportunity) shouldn't be seen as an excuse to learn a new technology unless you have absolutely no pressure to ship that work on any sort of schedule at any presumed level of acceptable quality.
I'm the same way: I need a real problem to work on, whether I'm applying skills I know or skills I want to learn. I just don't ever promise anyone (except myself) anything when I'm applying technologies I don't yet know to the problem.
It's not really a "last century" thing; last century most apps didn't need multiple front-ends like today's do. Unless you are 100% sure you won't need multiple front-ends, doing a JSON API from the start simplifies things a lot.
> Unless you are 100% sure you won't need multiple front-ends, doing a JSON API from the start simplifies things a lot.
This also presumes that I already know Backbone, Ember, React, Knockout, Batman, Angular, or whichever new JS single page app frameworks have appeared in the last 30 seconds.
Would you agree that it's easier[1] to write a web app that emits HTML than it is to write a web app that emits JSON which is consumed by a single page web app written in Javascript?
[1] Where I define "easier" in this context to mean requiring less wall clock time to get something showing up on-screen and doing something useful.
You don't have to use a JS MVC framework just because you built your core logic behind a JSON API. You can just as easily have a server-side rendering app talk to the JSON API instead of directly to the datastores. You get to stick with 'what you know' while still maintaining flexibility for later.
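E.g. a minimal sketch, assuming Flask and a made-up internal endpoint (any server-side framework works the same way):

    import requests
    from flask import Flask, render_template

    app = Flask(__name__)
    API = "http://api.internal.example.com"  # hypothetical JSON API

    @app.route("/orders")
    def orders():
        # The server-rendered app is just another API client; it
        # never touches the datastores directly.
        data = requests.get(f"{API}/orders", timeout=5).json()
        return render_template("orders.html", orders=data["orders"])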
Last century most apps absolutely did need multiple front-ends. Don't you remember the days when there was a Windows version, a Mac version, a Java version, if you were lucky a Linux version, and maybe a web version for the bleeding-edge developers?
Relatively fewer apps were network-enabled, but even then, there were a bunch of technologies like CORBA, DCOM, and RMI to marshal communications. There were also a bunch of custom binary protocols; back then, it was common for software developers to work on all levels of the stack, and the culture of library re-use was not as entrenched (at least in the enterprise) as it is today.
Great advice. Also extends to teams, 'What do we all know well enough to execute?' should trump 'What would be fun?' every time. I've worked in situations where someone saying 'Oh I decided to write this in Clojure, even though I'm the only one here who knows it and we're running out of cash' cost significant time and resources to fix (the fix was rewriting the project in plain ol' Python myself). It just isn't a sensible risk to take.
The innovation tokens concept seems to be a stand-in for both good engineering judgement and iterative exploration of the design/implementation space before committing to a path. I've been in several (successful) startups that leveraged both of these principles to great effect.
Both "innovative" and "boring" can shoot you in the foot. TFA focsues on "innovative" as a risk, but that's just daft. This industry is constantly rolling its lessons learned back into its shipped and shared technology. Ever gone back to a pre-Rails era web/backend codebase and screamed in horror? Ever gone to a "new" shop that never assimilated those lessons, used "boring" technology (thus dodging their shared/encapsulated forms), and recreated the old horror? (personally: check and check)
Trite policies are not a replacement for spending dedicated up-front (and occasional ongoing) time cycling between 1) evaluating/understanding your problem, 2) researching the current state of the art {processes, technology, etc.} related to your problem, and 3) using good engineering judgement to choose the best path then-and-there.
I have seen some incredibly good 'legacy' codebases written with very old tech. There is a huge advantage when someone works with a technology for 10+ years, knows all the rough edges to avoid, and then bakes that into their design.
Java may be the worst example of a 'Blub' language I can think of. However, I recently spoke with a team which had an awesome response to all the things I hated about the language. The closest analogy I can think of is mechanics working on popular cars: they get to the point where they can diagnose problems in seconds because they know the kinds of things that break. Cars come with plenty of sensors to help diagnose problems, but in this case familiarity often beats better tools.
We are still a freepascal shop and it works flawlessly. Every now and then someone wants to do a rewrite in something fancy, but we really have nothing to gain. The current codebase is actually quite pretty, compiles fast, and is easy to maintain.
New hires don't need more than a week or two to get the gist of everything. The web frontend has been migrated from perl though. It got the job done, but it wasn't pretty and nobody dared touch it.
Being an expert at something is not equivalent to saying that thing is boring. One insidious danger of "boring" tools is how they lull developers into complacency.
"You can’t pay people enough to carefully debug boring boilerplate code. I’ve tried."
-Yaron Minsky
> Both "innovative" and "boring" can shoot you in the foot.
This is true, but it misses one of the points of TFA, which is that with boring tech you already know the ways it can shoot you in the foot, because lots of people have had their feet shot by it before you came along. You can learn what not to do just by looking around and seeing which sets of feet have the smoking holes in them. With exciting tech, you don't know; you get to be one of the people who discovers those things.
I guess I do know the ways in which CGI-based web applications shoot me in the foot, because I wrote them in Python and Perl. Said knowledge is why I no longer use them.
Not the OP, but using Tcl in the first .com wave taught me that any language without JIT or AOT support in its toolchain is a bad fit for anything that needs to scale.
This is specific to Tcl, but it is byte-compiled [0], and work is ongoing right now to target LLVM [1]. To say nothing of punting and writing performance-critical code in C and orchestrating it all via Tcl.
Well, the LLVM announcement certainly is new. You'll have to forgive me though, I thought you may have been propagating the old "in Tcl, everything literally is a string" trope.
What did you press it into service for in ye old .com boom?
An application server built on top of Apache and IIS modules, targeting Windows and multiple UNIX flavours.
Similar to AOLserver, and already using quite a few patterns that people seemed to only discover years later with Ruby on Rails, but since Portugal isn't SV no one heard of them.
All critical path routines were actually done in C.
However, eventually the platform was migrated to .NET with a focus on Windows.
The founders of this company went on to found OutSystems, with the lessons taken from this attempt.
I do most of my work in Ruby and am a fan of it (less of some decisions its community makes, but I'm still okay with it)... and I shy away from Rubinius because I don't think it has enough to offer to justify the non-boring risk/tax. I don't want to be discovering that some gem I use (perhaps only a couple versions after I start using it) isn't compatible with Rubinius. Even if it's a small risk (and I honestly am not sure how much of a risk it is), for what? A slight performance edge? I don't need it. Multi-core parallelism? I usually don't need it, and if I do, I'm going to choose JRuby (reluctantly).
This seems to be written from the "engineers are monkeys" perspective. As if they spend their time flinging poo and you really need "solid" boring technology that's already well designed so the poo doesn't mess it up.
You shouldn't avoid node.js or MongoDB because they are "innovative" -- you should avoid them because they are poorly engineered. (Erlang did what Node does, but much better, and MongoDB is a poorly engineered global-write-lock mess that is probably better now but whose hype way exceeded its quality for many years.)
The "engineers are monkeys" idea is that engineers can't tell the difference -- and it seems to be supported by the popularity of those two technologies.
But if you know what you're doing, you choose good technologies -- Elixir is less than a year old, but it's built on the boring 20 years of work that has been done in Erlang. Couchbase is very innovative, but it's built on nearly a decade of CouchDB and memcached work.
You choose the right technologies and they become silver bullets that really make your project much more productive.
Boring technologies often have a performance (in time to market terms) cost to them.
Really you can't apply rules of thumb like this and the "innovation tokens" idea is silly.
I say this having done a product in 6 months with 4 people that should have taken 12 people 12 months to do, using Elixir (not even close to 1.0 of elixir even) and couchbase and trying out some of my "wacky" ideas for how a web platform should be built-- yes, I was using cutting edge new ideas in this thing that we took to production very quickly.
The difference?
Those four engineers were all good. Not all experienced-- one had been programming less than a year-- but all good.
Seems everyone talks about finding good talent and how important that is but they don't seem to be able to do it. I don't know.
What I do know is: don't use "engineers are monkeys" rules of thumb -- just hire human engineers.
Having come from Etsy and witnessed the success of this type of thinking first hand, I think you missed the point of the article and I think you are using a tiny engineering organization (4 people) in your thinking, instead of a medium to large one (120+ engineers).
The problem isn't "we are starting a new codebase with 4 engineers, are we qualified to choose the right technology?" It's "we are solving a new problem, within a massive org/codebase, that could probably be solved more directly with a different set of technologies than the existing ones the rest of the company is using. Is that worth the overhead?" And the answer is almost always no. I.e.: is local optimization worth the overhead?
Local optimization is extremely tempting no matter who you are or where you are. It's always easy to reach a point of frustration and come to the line of reasoning of "I don't get why we are wasting so much time to ship this product using the 'old' stuff when we could just use 'newstuff' and get it out the door in the next week." This happens to engineers of all levels, especially in a continuous deployment, "Just Ship" culture. The point of the article is that local optimization gives you this tiny boost in the beginning for a long-term cost that eventually moves the organization in a direction of shipping less. It's not that innovative technologies are bad.
> But if you know what you're doing, you choose good technologies
No, if you know what you are doing you make good organizational decisions. It matters less what technology you use than that the entire organization uses the same technology. Etsy has a great engineering team and yet the entire site is written in PHP. I don't think there is a single engineer working at Etsy who thinks PHP is the best language out there, but the decision to be made at the time was "there is a site using PHP, some Python, some Ruby etc.; how do we make this easier to work on?" Of those three, Python and Ruby are almost universally thought of as better languages than PHP, but in this case the correct decision was picking a worse technology, because more of the site was written in it and the existing infrastructure supported it more completely, so as an organization and a business we could get back to shipping products more quickly by all agreeing to use PHP. Etsy certainly does not think of its engineers as monkeys; quite the opposite.
This local optimization may well come at a global cost when it comes to hiring in the future. Quote from the post:
[...] what it is about the current stack that makes solving the problem prohibitively expensive and difficult
Etsy, as a very successful PHP shop, surely understands that the PHP codebase itself presents an expense in the form of smart engineers who are never hired because they pass on the company, not wanting to work with this language.
Plus, there are examples where the local optimization (i.e. staying with whatever legacy stack because it's proven) may lead to a global failure because of the unmaintainable "spaghetti blob" codebase with duct tape everywhere.
To me, that explains why Etsy has trouble with new tools and languages - they can't hire the developers it'd take to successfully run a project outside of the tools they are used to.
Analogously, if you promote by external hiring, you hemorrhage the kind of employee you'd want to promote internally. If you always stay in a particular sandbox, you lose the kind of employee that can work outside it.
If PHP is a deal breaker then that person isn't the type of person you want to hire, since they obviously care more about incidental issues like language choice than solving real problems. That's not to say people can't groan about it (like any other workplace annoyance) but I'll still go work somewhere doing amazing things even if the cafeteria food kind of sucks.
Anecdotally, it usually turns out that good engineering practices can be brought into any medium. PHP comes with a higher than average number of foot guns, and there is a lot of terrible PHP code out there that is unfortunate to find when you Google something, but it's self evident that a good engineering organization can build solid systems in PHP. (See also: the diligent use of a specific set of C++ features in shops building cutting edge graphics/game technology.)
It's not incidental -- choice of language has a massive impact on your developers' day-to-day experience as a human being, and many of the best programmers will find themselves incredibly frustrated by using a language like PHP (not because of ego but because PHP has properties which makes it frustrating to work with).
Of course, from a certain mindset I suppose anyone unwilling to sacrifice their happiness on the altar of your corporation's profit might be dubious... the question then becomes whether this affects recruitment and retention, and if so, whether you can still accomplish the things you want with mediocre talent and high turnover...
You see C++ used in graphics/games because its strength is the low-level memory/CPU control needed for cutting-edge work. PHP is optimized for time-to-market, not the cutting edge, so chances are the "real problems" you want to hire for are not that interesting. Heck, a big advantage of PHP is that most of the problems you encounter are already solved for you.
Most of the time, "use of PHP" correlates to "not really interesting problems", so is a good proxy to use when deciding where to work :)
The truth is that even at great companies there is plenty of "crud work" to do. It is important work and you are solving real problems but there is little personal/professional growth or learning.
One way to keep that kind of work interesting and to provide growth and learning is to use new/different tools and technology to do it.
I totally agree, that's why I said medium to large. My main point is that you are solving radically different problems with even 20+ engineers than you are with only 5 or so.
Both Facebook and Google (!) aren't really large-scale by the standards of tech companies gone by. Facebook has about 9000 employees. Google has 50,000. By contrast, Microsoft has 128,000, HP has 300,000, IBM has almost 400,000, and DEC had about 300,000 at its peak.
In the startup world, we (rightly) focus on growth, but it's worth remembering that there are giant companies out there using really, really boring technology. In some segments IBM mainframes, DB2, and COBOL are still the technologies of choice.
To add to what you're saying there are also government departments and giant companies out there that do your tax, pay for the roads, handle your insurance and handle your banking where somebody 20 years ago chose a technology that wasn't boring.
These entities are now having huge problems trying to get off 1980s or 1990s non-boring non-standard technologies that are no longer supported.
There are places that have had to buy the company that built their non-standard database or framework because it was going insolvent....
"Nobody ever got fired for buying IBM" had good reason behind it.
A lot are consultants, at least at HP and IBM. (Microsoft and DEC are much more engineering-heavy; I've heard that Microsoft has the structure of 1 engineer, 1 PM, and 1 tester per team.) Remember that out of Google's headcount, only about 20,000 are engineers. When I was on Search, Bing had more engineers working on it than Google Search.
> By your measure, Facebook is still not a large-scale engineering organization
"9,199 employees as of December 31, 2014" -- that's probably close enough to his metric to call it a large-scale engineering organization.
Of course, there's the real question of why Facebook needs to be a 10k+ engineer organization. For a minute it looked like they'd grow past their MySpace 2.0 roots. That becomes less convincing every day.
Ah nice catch, I misread the quote as 9k engineers. That said, after seeing their new offices, I'm not sure I can retract the general sentiment of my previous post.
My take tends to be not that 'innovation' is bad, but that there are a couple of risks:
- The weaknesses of new tech may not be fully understood. A lot of new tech solves existing problems, while re-surfacing problems that the old tech solved. Everyone thinks it's great until they've used it for a bit longer, and run into those issues.
- New tech runs a higher risk of disappearing/becoming unsupported. If you plan to support your product for a long time, that's a valid risk factor.
For myself, I'm wary of having very new tech as a fundamental underpinning of any piece of work I need to stick around. I'll likely adopt frameworks or database systems cautiously, unless their superiority is overwhelmingly obvious. On the other hand, I'd be a lot more willing to take risks on a simple library.
With a smaller, simpler piece of tech, it's easier to replace if something goes awry, and it's easier to evaluate in its totality prior to taking the risk.
It's not only that, it's also that if you have two languages in your codebase you now need two ways to deploy, two types of application servers, two types of testing frameworks/QA setups etc. If having the two languages means you can create a product only marginally faster/better then it is not worth all of that overhead. As mentioned in the article, there are places where the cost becomes worth it, for instance faceted searching is done in Java via Solr at Etsy. But for the most part fitting your problem into the existing infrastructure is a lot better for the organization than bringing in the perfect technology.
I agree with this. That being said, the author of the post seems to knock 'the right tool for the job', but I recently built two scrapers: one that scrapes an API (one time thing) and one that scrapes some websites (will probably be used once a month). The API one runs on PHP and auto-refreshes with a <meta> tag -- boring, but it works.
The one that scrapes websites I did with Node, since some sites are multi-step, and the latency of a single scrape plus the database latency could've turned this into a multi-week run with PHP.
Individual humans are smart. Groups of humans are dumb. When you're hiring people that you will personally work with, you can filter for smart. When you have to work with another group of humans, it's safer to assume that they are stupid.
> Individual humans are smart. Groups of humans are dumb.
Actually, you have that exactly wrong[1].
"Behavioural economists and sociologists have gone beyond the anecdotic and systematically studied the issues, and have come up with surprising answers.
Capturing the ‘collective’ wisdom best solves cognitive problems. Four conditions apply. There must be: (a) true diversity of opinions; (b) independence of opinion (so there is no correlation between them); (c) decentralisation of experience; (d) suitable mechanisms of aggregation."
"Crowds" != "Groups". In a crowd, the individuals behave independently; each person makes their own judgment as to the best course of action and pursues it. In a group, the individuals are constrained to come to a collective decision and implement it.
That difference is crucial. Markets function based on the wisdom of crowds; they work because if one person has the right information but everybody else is dumb, the one iconoclast stands to make a lot of money and force out all the dumb people. Statistics function according to the wisdom of crowds; it works because errors contribute little to the mean, while most people, arriving independently at their conclusion, tend to be closer.
Groups all have to agree on the same conclusion. When this condition occurs, the only conclusion that they can agree on is one that can be communicated to all members of the group, which is necessarily limited by the ability of the weakest group member to understand it.
Both markets and groups are much better at quantifying power differentials than in assessing information objectively and making useful predictions about the future.
This is why groups tend to be dumb. So much energy goes on hierarchical posturing and social signalling that there's relatively little left over for practical intelligence.
Orgs that can break through this can do astounding things. But the successes tend to be more rooted in the values of science and engineering as processes than in market processes.
Historically, every so often you get an org that works as intelligence amplifier and is more than the sum of individual talents.
But this configuration seems to be unstable, and so far as I know no org has ever made it stick as a permanent feature of a business culture.
Of course I was oversimplifying :) In any case, that study removes many of the reasons that groups of humans make bad decisions - which is unfortunately impossible to do in most real-world contexts.
If you want to be more precise, we often make assumptions of people belonging to a group that is not our own. The safest assumption to make is that all other groups are dumb. Ironically, this likely reinforces the problem: Why is this other group assuming our application doesn't have feature XYZ? Of course it does, because we're good at what we do. But obviously they must not be very bright to make such an assumption...
Groups are a low-pass filter on the abilities of the individuals that compose them. To teach something to a group, you have to communicate it to every member; this communication is naturally bound by the ability to understand of the person who is least familiar (or least enthusiastic) about the particular tech.
You can often cut the time needed for a complex project in half simply by cutting the team in half and telling each group to work on it independently. The problem is that now you have two problems - or rather, two solutions. If you try to integrate them together, you end up reintroducing all the communication hassles and more. If you throw one out, you'll alienate and probably lose all the developers who worked on it. If you bring both to market, you confuse your customers and lose brand equity.
I'll throw another one down your way. An organization I worked with had about 5 million lines of COBOL in one system (they had several more, and this one system was only about 15% of their total transactional workload). It used a proprietary pre-relational database that allowed users to both do queries (of a sort) and do things like read the value at the query result + 1500 bytes.
They tried re-writing pieces in Java at a cost of tens of millions of dollars. Java was the new hotness. In addition, they built out a Java hosting environment using expensive, proprietary Unix hardware to reach the same production volume as the mainframe. However, it was grossly under-utilized, because the Java code couldn't do much more than ask the COBOL code what the answer was to a question by using message queues. More millions of dollars went to keep up licenses and support contracts on essentially idle hardware.
They tried moving it to Windows, using .NET and Micro Focus COBOL. But the problem was they would still be tied to COBOL, even though they (conceptually) had a path to introduce .NET components or to wrap the green-screen pieces in more updated UIs. But that in itself was a problem, because all their people knew the green-screen UI so well it was all muscle memory. Several workers complained because the new GUI actually made them slower at their jobs.
They were stuck because they had no way to reverse engineer the requirements from the COBOL code, some of it going back 25+ years. Of course it wasn't documented, or if it was, the documentation was long gone. For the most part they were tied to that COBOL code because no one understood everything that it did and there were only a handful of COBOL programmers left in their shop (I think 6) and they were busy making emergency fixes on that + several other millions of lines of code in other systems.
They were, however, looking for an argument to retire COBOL and retire the mainframes. The cheapest solution would have been to stick with COBOL. Hire programmers. Teach them COBOL (because it was painfully difficult to find any new COBOL people and for various reasons they could not off-shore the project). Continue to develop and fix in COBOL (especially before the last remaining COBOL programmers died or retired). If you cleaned up or fixed a module, maybe move it to Java when possible.
The long story short is that the decision to introduce a new technology, even in the face of an ancient, largely proprietary (since it's really about IBM COBOL on mainframes), and over-priced solution, can actually lead to a worse outcome. Had they stayed with boring technology, and had they in-sourced more of their COBOL workforce, they might not have felt happy, but they would have been in a much stronger, better position. Instead they were paying for a mainframe, and a proprietary Unix server farm, and software licenses on both Unix and z/OS.
When I last was there they were buying a new solution from Oracle which was supposed to arrive racked up and ready to go. Several weeks in they essentially said it would take months before the first of the new Oracle servers would be ready for an internal cloud deployment on which to try to re-host some software. I'm not even sure what they think they would be re-hosting but they talked about automatic translation of COBOL to Java.
> They were stuck because they had no way to reverse engineer the requirements from the COBOL code, some of it going back 25+ years. Of course it wasn't documented, or if it was, the documentation was long gone.
Can you explain, for people who have never been close to such an environment, how this can happen, and why they still care about upholding requirements they don't know about?
Let's say you have a business process, like if a shipping manifest goes through any one of the following 3 cities, then you need to file form XYZ, unless the shipper is one of the following official government agencies and they've filed forms ABC and DEF. That was the original requirement in 1980. It was documented, put in a series of binders, and placed on a shelf.
1982, Another port is added to the list of special port cities, but only for shipping goods of type JKL or MNO. That change was documented in an inter-office memo and filed away. Except the only place you have the type-of-goods information is a different module - so even though it pertains to the original business process, the change lives in the module that prints the ship's manifest to (physically) mail it to the insurer.
1989, the original requirements binders are moved to a storage facility.
1992, The memo is also sent to an archive facility. Original manuals have been destroyed because the records retention policy is 10 years.
1994, There's a change in the law and an emergency fix was put in, and the comments were put into the source code.
1995, The source code with the comments is lost, so an older version of the source code is recovered with just the code change.
And so on and so on
Until 2015. You have 5,000 to 10,000 lines of code that deal with the original requirement. They're split into multiple modules. They reside in a source code base of 5,000,000 lines of code. The people that use your software have a combination of the software + a whole bunch of unwritten rules like: "If it's this country, and this port, and this port of origin - PF10 to get to the override screen and approve the shipment. Add 'As per J. Randal' in the comments."
Yes, this "boring = good!" trope is frequently weaponized to shut down people's voices. Happened to me.
One thing I realized is these blogposts are consumerist. They talk about "Python" and "MongoDB". Very little about underlying ideas like "algorithms", "computational paradigms" or "expressive power".
And they have hypersimplified plans about "three innovation tokens". Instead of "risk analysis" or "evaluate tradeoffs".
One company shut me down with such blogposts... while it let devs run amok with an architecture which did n^2 (more?) network calls... where each call transferred one RDBMS row at a time. It dragged down the intelligence of everyone who really knew better; they spent "sprints" trying to find micro-optimizations, knowing exactly that the system was fundamentally ridiculous.
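For flavor, the general shape of it (names invented, details changed):

    import requests

    API = "http://rdbms-gateway.internal"  # hypothetical row-at-a-time service

    def build_report(user_ids):
        rows = []
        # One network call per user...
        for uid in user_ids:
            user = requests.get(f"{API}/users/{uid}").json()
            # ...then one more call per order per user: n * m round
            # trips, each dragging a single RDBMS row across the wire,
            # where one set-based query would return the whole report.
            for order_id in user["order_ids"]:
                rows.append(requests.get(f"{API}/orders/{order_id}").json())
        return rows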
So I spent a weekend reimplementing it in the Scary Fun Language. Because it was my weekend dammit, and Embracing Boredom damaged my brain too much. Scary Fun was the only way to start mending it. And it succeeded.
So of course the first order of business was to rewrite it in the Embrace Boredom language.
I think the point the parent was making was that using terminology like "scary" or "boring" for technological choices is fundamentally an abdication of responsibility. It's cargo cult programming. Good programming, intentional engineering, is done by people who are cognizant of the risks and other tradeoffs of different languages, frameworks, designs, etc.
It shouldn't be surprising that an organization that opted out of a discussion on technology stacks would also ultimately opt out of discussing algorithmic complexity as well, it speaks to a lack of sophistication and maturity at the institutional level.
I think it's more nuanced; certain technologies afford certain designs. There really are differences in languages, and those differences expose users to different misbehaviors -- or the dual, to different optimizations.
You're actually demonstrating the point of the blog post, which is that the Boring RDBMS had a well-understood failure mode that "everyone who really knew better" would've recognised.
I recently went back to SQL from noSQL after I realized that a lot of noSQL was just reinventing wheels. I realize there might be cases where noSQL databases shine, but in most use cases SQL is better. It's slightly more work up front (only slightly) but it pays off later in keeping your data organized and making it easy to query. It's a great example of a very old technology with excellent longevity. That's in part because it's built on math and logic (set theory, etc.). There are universal mathematical/logical truths encoded elegantly into the structure of the SQL language, and they describe things you are going to need.
Your tools shouldn't be the exciting thing. The thing you are building with them should be the exciting thing.
By linking Aphyr's Redis article ("Call me maybe: Redis") as an example of possible troubles with new technologies, the author of this article shows that he does not actually understand very well the failure modes of MySQL itself, which are identical to those of Redis failover (and of every other master-slave system with asynchronous replication, more or less). In theory this contradicts the whole article, but actually I think the idea happens to be reasonable, just not formulated very well. The point is not what is new and what is old; it is that switching to new technologies without a good reason is a useless risk. If you analyze the failure modes, and the strengths, of what you used in the past, and there is something new that performs much better, IF you are a good programmer you can analyze something new - test it for a few days, read the docs, check some code - and understand whether it is a better fit. This is why it's always the set of the best programmers that adopt new technologies that later turn into the next "obvious" stack. They are brave not because they are crazy, but because they can analyze something regardless of whether it is new or old.
I love the way Maciej Cegłowski describes his setup at Pinboard:
"Pinboard is written in PHP and Perl. The site uses MySQL for data storage, Sphinx for search, Beanstalk as a message queue, and a combination of storage appliances and Amazon S3 to store backups. There is absolutely nothing interesting about the Pinboard architecture or implementation; I consider that a feature!"
I also immediately thought of Maciej and Pinboard. He expands a bit in this interview [1]:
> Can you explain why you think that's a feature?
> I believe that relying on very basic and well-understood technologies at the architectural level forces you to save all your cleverness and new ideas for the actual app, where it can make a difference to users.
> I think many developers (myself included) are easily seduced by new technology and are willing to burn a lot of time rigging it together just for the joy of tinkering. So nowadays we see a lot of fairly uninteresting web apps with very technically sweet implementations. In designing Pinboard, I tried to steer clear of this temptation by picking very familiar, vanilla tools wherever possible so I would have no excuse for architectural wank.
"I think many developers (myself included) are easily seduced by new technology and are willing to burn a lot of time rigging it together just for the joy of tinkering."
Part of the problem is the jobs market.
I have been developing database applications for 12 years. Now many of the jobs that I would seem suitable for want "experience in MongoDB" or "NoSQL skills" (whatever the hell that means - inability to design a schema?). I haven't used those because I have read up on them and decided they were not suitable for any use cases I have. I have used MySQL and Postgres successfully with billions of rows. I know when these new technologies might be useful.
If I need schema-less / JSON stores, I'll use Postgres. If I need things that won't fit onto one server easily then I'll look again at the NoSQL tech that is available and evaluate from there.
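To make the first point concrete, Postgres's jsonb type covers the schema-less case without leaving the relational world. A minimal sketch, assuming a reachable Postgres and the psycopg2 driver (the table and field names are invented):

    import psycopg2
    from psycopg2.extras import Json

    conn = psycopg2.connect("dbname=app")
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS events (id serial PRIMARY KEY, body jsonb)")
    # Store a schema-less document...
    cur.execute("INSERT INTO events (body) VALUES (%s)",
                [Json({"type": "signup", "plan": "free"})])
    # ...and query into it with the ->> operator, alongside ordinary SQL.
    cur.execute("SELECT body->>'plan' FROM events WHERE body->>'type' = %s", ("signup",))
    print(cur.fetchall())   # [('free',)]
    conn.commit()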
You can debate which axes matter - you can debate the weighting and scaling of them - but you can't get away from the conclusion that "pushing all your risk boundaries at the same time equals failure". As a matter of fact, this is structurally identical to the famous "fast, good, cheap - pick two" triangle.
n.b., this analysis really starts hitting home in multi-team environments, say, over 50 engineers.
I understand how someone might believe in "innovation tokens," but it's really just a confused way to look at ROI. Not all innovation carries an inherent cost, though. If our programmers already know an "innovative" programming method, there's no cost in doing things that way.
The author seems to be conflating the cost of innovation with the cost of doing something you're less familiar with, which are not necessarily the same things. The risk of chasing shiny new objects is real, but sometimes those shiny objects can actually reduce costs and time spent to accomplish a goal (like a MVP or new version).
Sometimes it's worth innovating if you already have experience in the area. Sometimes it's worth innovating even if you have to learn and try new things. Sometimes the time/monetary cost of innovation is 0, and sometimes it's so high that you shouldn't innovate even if it improves your product.
This idea of limited innovation resulting in cumulative costs is overly simplistic. The smart founder will recognize the difference between innovations that will yield net returns and those that won't.
The problem is that "you" is not a person, "you" is every person who will ever work on the code in the future. And "innovation" isn't "the code you are writing now", "innovation" is "the code you are writing now, and next year, and the third-party library you want to integrate in 6 months, and the unit tests you don't have time for now but will become critical in 2-3 years as you become unable to ship working software, and the bug you'll spend a month working on because nobody has ever encountered it before."
Yes, you can look at this in terms of ROI. The author's point is that engineers - particularly ones who have never scaled & maintained a system over years and millions of users - consistently underweight the problems that they've never encountered before. With boring tech, other people have encountered them, and solved them, and you can Google for the answer or pull in a library. With bleeding-edge stuff, when you run into one of these, you have to drop everything you're doing and fix it, because nobody else will.
> "This idea of limited innovation resulting in cumulative costs is overly simplistic. The smart founder will recognize the difference between innovations that will yield net returns and those that won't."
I agree with you completely. Furthermore, I'd add that the view on maintenance is too simplistic as well. Effective maintenance requires more than just a tech stack whose limitations are known; you'll also want something testable and refactorable. Ballooning code bases are a real problem, and sometimes the smart move is to clean up the cruft. If you're smart about integrating new tech into your stack, there's no reason you can't end up with a solution that is both more robust and more efficient.
Furthermore, there's the whole scaling issue. Perhaps the mode du jour is to assume increased server costs (regardless of where they're hosted) are just a necessary part of scaling a website to more users, but rethinking your tech stack can help keep those costs under control. Perhaps this is a decision that can wait until you have a decent userbase, but it's still a good reason to be open-minded about what benefits a new solution could bring.
This article is a good starting point to talk about technology choices. But there are many issues with applying the advice in the real world.
First, he is intertwining two separate issues: limiting tech choices in an organisation, and incorporating new technologies. Keep in mind that you can have separate strategies for both.
Secondly, there is no notion as to how big a change a token is worth. Obviously switching languages is a much bigger change than switching caching libraries.
Thirdly, there is no mention of project size. Should a 3-month project get the same tokens as a two-year project? This year we have created ~300 microservices. If each were allowed 3 tokens, we would have 900 new tech changes this year alone. That's unmanageable.
Fourthly, what is your organisational strategy and culture? If an engineer prototypes in a new language is that a problem because it is seen as wasteful? Perhaps it is something that will make the other devs jealous? Or is it considered an investment in the company and a risk mitigation strategy? Do you have the kind of engineers and tech leads who will do a lot of this prototyping and experimentation on their own time?
Unfortunately I think the answer to all of them is, 'it depends'. How much inertia does change get in your organisation? That will help place value on the tokens.
For your third point specifically, I think taking a pragmatic view is best. You mentioned you created ~300 new microservices this year. I imagine they're all based on the same pattern, so perhaps your tokens should apply to that pattern rather than to each individual project (e.g. you get to change the stack for future microservices). On the other hand, at a rate of at least one new service per day, the current pattern is obviously pretty efficient for you, so consider why you'd change it unless necessary.
The "innovation tokens" concept expands even past technology.
Want to innovate in the way your board is structured or remove standard protections from the term sheet? Or even set up your Twitter account in this never-before-seen way? Want to remove the idea of management, or rethink the way offices work? You lose an innovation token.
The site is currently down for me (503). While we're talking about boring technology, please consider hosting your blog on a static file host + CDN. It will be faster, easier to maintain, and virtually impossible to take down.
I see people recommend static sites in general, but I've recently done some research and couldn't find a static site generator that can give me a WYSIWYG editor in my browser. What I need is a blog that lets me edit posts from a PC, a tablet, or a phone, including picture uploads and one-click publishing. Everything I found either left the editing portion up to the user or said "just use WinSCP to upload your HTML/markdown".
I just went with Wordpress. My personal blog is not my job; I just want to write down my thoughts. What specific technology would you recommend to generate a static blog with a WYSIWYG editor and picture uploads, on my own server (not S3 or some proprietary paid hosting)?
You could try to use Wordpress for generating a static blog [0]. ;)
More seriously, I work in vim+git all the time, so managing my blog with it feels natural to me. Editing in my favorite editor is more important than drag&drop image upload for me.
Just like the author, I don't have time to learn the ins and outs of new "local optima" technologies all the time. I want a static site generator that "just works". So yes, give me a Wordpress-style GUI for creating a blog, then "compile" it to a static site, then deploy.
Seems like all the static site generators have lots of directory structure conventions and hoops to jump through for simple things like pagination and dates.
And at this point we are really talking about what fits naturally in our own hands. I use a documentation tool (mkdocs) for my blog, but that's because, like GP, I prefer working in git+vim.
I don't think git plus a text editor counts as a local optimum…
Well, git might on a time scale of decades. But I sincerely doubt we're going to see anything better in the text editor space than emacs and vi for the remainder of my professional career.
Ironically, Rails is now in the category of "boring technology" but each major version introduces enough breaking API changes that many apps never get updated. So all the pain of spending a token and little of the pleasure.
With smaller, more loosely-coupled modules, one can spend a fraction of a token here or there and still revert back to the boring way when necessary.
Sticking with the same set of technologies is a premature death for your career as a programmer.
The whole article builds on the point that people tend to fail more when they are using new tools. That point is false. In reality, when you use wrong but 'accustomed' tooling in an inappropriate situation, you end up writing code that you would never write if you had chosen the right tools. You are effectively reinventing the wheel.
You also have an idea about 'innovation tokens' that builds on a static notion of the weight of a new technology in a project. That is ridiculous.
There is no definition of 'boring' in this article. I don't understand why you call PHP, Postgres and Cron 'boring'. What is 'interesting' then?
It seems like you have framed the problem wrongly. The actual problem is clear: people fuck up projects by using modern, hyped technologies that are inappropriate for the project's domain. They are just as wrong as you are.
On the one hand I agree with you, but having looked at some CVs recently, I see people who list every language and web framework under the sun. If you have learned 10 new frameworks in the last year, then you can't have any in-depth knowledge of them.
3 innovation tokens? The supply is fixed for a long while? People on Hacker News, of all places, buying that?
It's plain wrong. Innovation is good for any kind of organization, if done properly. What the author should focus on is the lack of agility that prevents companies from experimenting and failing quickly. It's not the innovative technology that gets you in the end; it's your inability to evaluate/adopt/discard fast. Granted, that ability is hard to find in large-ish organizations, but willingly limiting your innovation sounds like a recipe for a slow death. It's like a gentleman boxer from the 19th century limiting himself to the jab, cross, and hook while entering a modern MMA fight.
So, call them "agility tokens". You've still got a limited supply when it comes to trying out new languages, new databases, new whatever. If you've got ten years Python and MySQL experience, and 95% of your codebase is in Python with data stored in MySQL, what do you gain and what do you lose by introducing Node.js and MongoDB into the mix? Sometimes it's worth the trade...other times it's not. But, Node.js and MongoDB is probably not going to provide enough of a productivity boost to make up for the costs of maintaining two codebases, two build/test/deploy environments, two databases, etc. You're making a trade; sometimes it's beneficial (usually long term), and often it's not (usually short term).
In short, yes, I'm buying this. I think it's a perfectly sensible analogy; a somewhat leaky abstraction, if you will, since none of us actually have any "tokens" that we are trading in for a new database. But, the meaning is clear to me, and I can't find fault in it.
> Sometimes it's worth the trade...other times it's not.
That's exactly what the article disputes - "It's not worth the trade if you had 3 of those already".
Sure, you could drown in new technologies that are meaningless for your project, and this is a risk you should be aware of. If that's the meaning of the article you are referring to, I agree it is sane (and also widely accepted). But that's not what the article actually says. What it says is that you get an arbitrary, fixed number of shots at new technologies, and that that number is limited by time/growth.
https://consul.io/ is mentioned as an "exciting" technology. What is a "boring" alternative for this... that is, a multi-datacenter, service discovery/health-check/config-distribution software that'll "just work"?
You can try using DNS with dynamic zones as a simple service discovery mechanism (sharing one master with all your environments), but you'll soon find out that:
- healthchecking really is a good idea in service discovery
- clients are awful about refreshing state from DNS
- single-master systems are a bad idea in a large environment
- DNS replication is finicky; DNS caching is slow
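For concreteness, this is the kind of SRV lookup such a DNS-based scheme leans on. A minimal sketch, assuming the dnspython package; the zone and service names are invented:

    import dns.resolver

    # Ask for SRV records advertising a hypothetical internal API service.
    answers = dns.resolver.resolve("_api._tcp.example.internal", "SRV")
    for rec in sorted(answers, key=lambda r: (r.priority, -r.weight)):
        print(rec.target, rec.port, rec.priority, rec.weight)

Every caveat above bites here: nothing checks whether the returned hosts are actually healthy, and the answer is only as fresh as whatever cache sat between you and the master.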
Puppet with puppetdb can sorta fill this gap, too, as long as you don't need fast convergence (or fast puppet runs, if your puppetdb is more than a few milliseconds away from any of your nodes).
Consul may be new, but it's built on really solid ideas and technologies. You can read papers[1][2] about the underlying technologies to get a sense for how Consul will fail. I'd like to think that counteracts some of the problems you get with newness.
This also makes me think of the problem of legacy code. Once you've written an app, it's feature complete, and it's profitable, expending effort to rewrite it in a new tech is not only a questionable value proposition, it can be actively dangerous. Replacing "ugly but works" with "beautiful but fails" is not good engineering.
Plus, if you're going with some trendy new framework that emerged in the last month, chances are that when you do need maintenance, no-one'll be around who wants to work on it.
Why not let the data make the technology choices for you?
The way I go about making technology choices is by examining the data that I'll be working with, in conjunction with the data access patterns inherent in the features that I'll need to support.
I look at things like projected read/write throughput, latency characteristics, total data volume, concurrency, and whether or not the problem domain actually requires highly relational queries.
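That examination can start as back-of-envelope arithmetic, before any technology is named. A sketch with invented numbers:

    # Back-of-envelope sizing (all numbers invented for illustration).
    events_per_day = 5_000_000
    bytes_per_event = 600

    avg_writes_per_sec = events_per_day / 86_400      # ~58 writes/s average
    peak_writes_per_sec = avg_writes_per_sec * 10     # assume a 10x daily peak
    storage_gb_per_year = events_per_day * bytes_per_event * 365 / 1e9

    print(f"{avg_writes_per_sec:.0f} w/s avg, "
          f"{peak_writes_per_sec:.0f} w/s peak, "
          f"{storage_gb_per_year:.0f} GB/year")       # 58 w/s, 579 w/s, 1095 GB

Numbers like these, not fashion, should decide whether you need anything beyond a boring single-node database.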
I think a lot of shops don't put enough thought into figuring out what kind of data access patterns they'll need to support throughout the life-cycle of the business. This is no big deal if the product doesn't experience growth. But for rich web applications that do begin to experience growth, the team inevitably ends up with a massive scaling problem unless the system architecture was designed to support those access patterns from the ground up.
It seems that this "growing pains" scaling nightmare has become almost a rite of passage for successful tech startups. Founders are generally led to believe that it's a good thing for them to need to sell equity to outside investors in order to "scale out" a much larger team to build the infrastructure required to perform in-flight rocket surgery on the application before it either explodes or becomes increasingly cost-inefficient.
While this whole process greatly benefits VCs, the high-end tech engineering job market, and recruiters, it's absolutely terrible for the founding team, because it means they inevitably get massively diluted as a consequence of experiencing success. I'm not saying it's a conspiracy, but I am saying there is a massive financial incentive to keep this kind of knowledge about technical best practices an open secret within the highly paid IT consultancy world.
TLDR: It's my supposition that small teams can build scalable, composable systems by thinking about web-scale data access patterns from the beginning.
Agreed. Best expression I've heard to sum up this concept is "This is not an after-school club". Playing with shiny new technology is not the point. The point is to make money for the company and you use the best tools for the job. Most of the time that means solid tools that everyone understands.
My favorite piece of 'boring' technology: Sphinx (the search software).
I've been using it for maybe six years non-stop. I've thrown large data sets at it and it always runs fast; it's trivial to set up and always has more than enough options for my search purposes. It has also become a much better product over the time I've used it, with an active development group behind it. Sphinx works so well as is, I've never had a reason to look elsewhere at the latest hot search tech, it would be a waste of my time to do so.
As someone who has never done web development and knows nothing about it, how do I learn what to do? Every time I try to determine what people are using and what is the path of least resistance to make a website, I am overwhelmed by choices. How do I determine what is good? If I build a website, I don't want to spend my time learning an interface that has been superseded by something better that all pros now use. What backend do people use? Rails? Django? PHP? Perl? Some Javascript? What javascript libraries do people use? etc.
Really old boring tech (c. 2002): Perl or PHP, MySQL, very little JS (and usually plain vanilla JS if used).
Old boring tech (c. 2007): Rails or Django (the two are largely interchangeable; it's mostly personal preference), jQuery, Postgres.
Old tech (c. 2012): Node.js, Angular, Express.js, MongoDB. Not boring, because you will still face lots of problems deploying this stack at scale.
Boring tech (c. 2012): Native iPhone/Android apps, JSON-RPC, often Java on the server. Usually Guice or Dagger is used for dependency injection with Java. Not really old (except for the Java part); there's still a lot of innovation going on in this space.
Bleeding edge stuff (today): React, Polymer, Go, Rust, Erlang/Elixir (Erlang is interesting in that the runtime and standard libraries are rock solid, but because it's so different from most mainstream languages, you can face a lot of integration pain when looking for third-party libraries), Haskell (old, but very different from anything mainstream). Basically everything you read about on Hacker News.
Erlang isn't bleeding edge at all. In fact, it's a battle-hardened and conservatively evolving platform dating back to 1986, which is one of its selling points amongst all the technical benefits.
Depends on your problem domain. It absolutely is battle-hardened and conservatively evolving, but it grew up in the telecom industry, and most of its "mainstream" uses (Facebook chat, Whatsapp) are in messaging.
Erlang strings, for example, are lists of bytes, which will blow up your memory requirements and algorithmic complexity if you do any serious string parsing. You're shut out of common libraries like protobufs. There are libraries available for things like HTML parsing, MySQL, Postgres, and even Apple Push Notifications/Google Cloud Messaging, but many of them are some guy's personal project on GitHub rather than something that's gotten widespread use & testing and has plenty of StackOverflow posts for help.
Lists of Unicode code points (integers), specifically. That said, most real-world string manipulation is done by passing them around as binaries. It really isn't much of an issue, nowhere near the hell that is NULL-terminated character arrays in C, which, mind you, power most of our software nonetheless.
Erlang does have a good Protocol Buffers library, by the way: https://github.com/basho/erlang_protobuffs. Even if it didn't, you'd use a more native serialization format like BERT.
As for abandoned projects and library sprawl, that is true. However, I'd say that this is far more bearable in Erlang than in other languages. For one, the module system makes deducing how to use a program's API from source code much easier even if there is no explicit documentation - every Erlang program basically gets a user interface for free just by virtue of being a module. In addition, if the library in question is a properly structured OTP application or if it uses vanilla process primitives efficiently, I can have relative confidence that it is less likely to blow up in my face than, e.g. a random Java library.
Even so, there are all sorts of libraries despite the small community. And if you're doing web development and some RESTful API has no Erlang bindings, those are relatively trivial to roll yourself.
Elixir, on the other hand, is still a bit bloody on the edges (and the MVC-ish frameworks for it are very much bleeding edge), but because it's built on the same foundations as Erlang, it ends up getting a lot of its benefits (and a lot of the existing ecosystem), which puts it uniquely in-between "boring" and "shiny".
I don't know what your end goals are, but I'd say the safest thing to do is probably to pick tech that is neither outdated (e.g. PHP) nor too shiny or arcane (React and Haskell, respectively; while both are great, they're probably not the best place to start).
If your goal is to seek employment 6 months from now, I'd say Django / Angular is a pretty safe bet. I guess if you want to focus on one language, you could use Node instead of Django and target frontend development.
> What backend do people use? Rails? Django? PHP? Perl? Some Javascript?
All of the above, and then some.
If you're just starting out with web development, it's much more useful that you learn MVC and REST; those are architectural skills that will result in you being able to adapt to most frameworks rather quickly. I'd personally vouch for Rails (or perhaps Padrino) and Ruby as an introduction to those concepts, seeing as that's where my own personal experience is, and seeing as Ruby is generally regarded to be an easy and programmer-friendly language, but feel free to make a choice based on your own personal language preference (Django if you like Python, Catalyst or Mojolicious if you like Perl, Chicago Boss or Phoenix or Sugar if you like Erlang or Elixir (respectively), etc.). You can't go wrong, so long as you learn those underlying concepts.
Lots of folks around here express distaste for "polyglot" programmers like (probably, at this rate) myself, but I've found that focusing on a thorough understanding of core concepts, rather than the ins and outs of specific implementations, is really helpful in this context.
Having played with most of "Rails? Django? PHP? Perl? Some Javascript? What javascript libraries...", I'd suggest starting with Meteor.js. Reasons: quick to learn, easy, one language throughout, one framework throughout, and it works with modern tech like push notifications and phone apps. Check out the tutorial at meteor.com, or http://meteortips.com/book/ . You can be up and running quickly. Once you've got the hang of it you can always try something more complicated. You'll need to learn a bit of javascript for it.
One of the reasons to choose interesting technology is to lure interesting engineers, especially when your domain is boring. But maybe boring engineers work well enough on boring business problems.
My rule of thumb is that if the project has a deadline, then I use components I already know. And to test something new I use it for something that is internal only.
This does not really apply to me because I tend to bind boringness/excitement to problems and not technology.
For example, if my task is to create a typical website, that is relatively boring by default. Speaking of databases, I choose PostgreSQL not because it is boring but because it is the most convenient, elegant, and general-purpose solution I know. I don't get excited about Node.js or MongoDB by default. What I do get excited about is encountering a hard problem: maybe a very hard scalability issue that is impossible to solve with PostgreSQL. Searching for the solution to that problem is interesting, and then I might get excited about some NoSQL solution.
Also, I don't really get excited about new programming languages, as almost all programming languages try to solve the same problem. On the other hand, I did get excited about recent advances in deep machine learning (and frameworks like Caffe and cxxnet), because with these new advances and tools it is possible to solve problems we previously had absolutely no chance of solving.
Also it is pretty standard for me that the stuff I do at the workplace is relatively boring, and the stuff I do at night by myself is exciting.
The rule of thumb I used to have (which is a little dated now but imagine 5-10 years ago):
" Any server should have only one thing that is not installed via the default OS packages "
So you could have one weird bleeding edge version of your language or some unusual daemon that nobody else used but that was it. The idea was the rest of the server was stock and it had only one weird thing.
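A rough way to audit a box against that rule today (a heuristic sketch for Debian/Ubuntu; apt-mark is real, but its output only approximates "the weird things", since it also lists deliberately installed stock packages):

    import subprocess

    # List packages installed explicitly rather than pulled in as dependencies.
    result = subprocess.run(["apt-mark", "showmanual"],
                            capture_output=True, text=True, check=True)
    explicit = result.stdout.split()
    print(f"{len(explicit)} explicitly installed packages to account for")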
We would fail your rule in so many cases. We run Ubuntu, which might be our mistake, but off the top of my head, I think our installations of nginx[1], Python[2], pip[3], rust[4], mongo, consul[4], openldap[1], gcc[3] and several other things fail your criterion. Not all are on the same server, I suppose, but there's definitely overlap. Most of these are simply because the version in Ubuntu is unacceptably out of date, some are because there are bugs in the provided version, some just flat aren't available, and some are re-compiled with additions. (Like non-default USE flags, if you're a Gentoo user; Ubuntu lacks the concept.)
I think the issue I have with rules like yours, and the one proposed in the article, is that they're fine when you're working with no information; but when an engineer lays out a need, shows how the available "boring" package does not fit that need, and then proceeds to choose an "interesting" package that meets the requirements of the problem, the last thing he wants is nebulous objections over how the choice is "interesting" tech. For example, the article calls out consul (we also considered etcd and Zookeeper…) as an "interesting" choice, but we need a multi-node distributed database with a good consensus algorithm for things such as service discovery and locking; what other tech that isn't "interesting" fits the bill? Consul does. Its HTTP and DNS interfaces interest me because they play well with our existing boring tools, like http (or curl) and dig… (a quick sketch of the HTTP side follows the footnotes below)
IIRC,
[1]: default package lacks features
[2]: unacceptable issues
[3]: default package too far out of date
[4]: default package is non-existent
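The sketch promised above: /v1/catalog/services is Consul's real catalog endpoint, but the agent address and the example service names are invented:

    import json
    import urllib.request

    # Ask a (hypothetical) local Consul agent which services it knows about.
    with urllib.request.urlopen("http://127.0.0.1:8500/v1/catalog/services") as resp:
        services = json.load(resp)
    print(services)   # e.g. {"consul": [], "web": ["primary"]}

Nothing here that curl and the shell couldn't do; that's exactly the "plays well with boring tools" point.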
Off-topic, but might I ask what you're using Rust for? We love getting feedback on the language, especially so for people crazy enough to use it in a production setting. :)
(I'm also curious exactly how old your version of Ubuntu is, as testing the language on older versions of Linux is currently a bit of an annoyance, and it's good to know that someone out there is benefiting from that effort.)
Sorry about the slow reply. We're using a mix of Precise/Trusty[1]; we're not (presently!) using Rust in a production setting. I'm only using it to experiment currently, but it's still something outside the package manager that would apply _if_ I wanted[5] (and it's tempting…), so I decided to mention it. I'm using it to parse an internal file format we have, mostly as a project (one of two) to teach me Rust. (I'm running Rust on Trusty & Gentoo, both using the binary downloads from the website. I tried brew-installing it on OS X, but that doesn't give me cargo; haven't tried again in a few weeks, and haven't filed an issue…)
While it isn't presently in a production setting, I would be completely willing to put it into production. There are some spots where we need more performance than Python can offer, and the memory safety and good static typing are very appealing (I'm very wary of memory-unsafe languages anywhere near the path of user input…). (I personally use C++ for that here, but the gcc included in Precise/Trusty IIRC lacks some of the more modern C++11/14 stuff, so heavy use of Boost is needed. Also, some third-party libs — mongo, to name one — have less than elegant C++ interfaces…) The Rust standard library has made great strides in the few months since I started using it (I started picking it up in December?), and since my normal work is in Python, the static typing is a very welcome change. I'm still wrapping my head around lifetimes[2][3], and regex! tripped me up a bit[4]. I found some iterator-based functions — such as zip — odd: it's a member function on the Iterator trait, and only allows two args. I find Python's free-standing zip function, which takes any number of iterables, much more natural. Compare Python:
zip(a, b, c, d)
to my understanding of Rust:
a.zip(b).zip(c).zip(d)
I also wonder (since I've not tried it) what effect this has on unpacking. Instead of (a_item, b_i, c_i, d_i) being the items of the iterable, I feel you'd end up with (((a_item, b_i), c_i), d_i); I wonder if a destructuring let would be hard on that? (I've not actually gotten that to work…) Also, I wish enumerate had Python's start argument; I use enumerate a lot for doing numbered lists for humans, which start at 1.
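For what it's worth, the left-nesting in question can be previewed in Python itself by chaining pairwise zips (a toy comparison, not a claim about Rust's exact types):

    a, b, c, d = [1, 2], [3, 4], [5, 6], [7, 8]

    # Chained pairwise zips nest one level per call, on the left:
    print(list(zip(zip(zip(a, b), c), d)))   # [(((1, 3), 5), 7), (((2, 4), 6), 8)]

    # Python's variadic zip keeps the tuples flat:
    print(list(zip(a, b, c, d)))             # [(1, 3, 5, 7), (2, 4, 6, 8)]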
Overall, it's shaping up to be quite a language. I sincerely hope it uproots C. (I'm a big proponent of modern C++, and so I'm still attached there. :D)
[1] I really wish upgrading happened faster; I tried to cull items that weren't relevant to Trusty because I do consider Precise out of date. We're limited to LTS simply because people aren't comfortable with non-LTS (and I don't know that we could upgrade quickly enough to stay on a supported OS…)
[4]: https://github.com/rust-lang/rust/issues/23326 — (I had to follow up in the IRC room here; it's not quite as simple as the answer in the issue. Not only do you need to depend on the crate, you need the "#![feature(plugin)]" and "#![plugin(regex_macros)]" as _crate_ attributes, and this can be confusing if your use of regex is in a module-in-a-module-in-a-module; the use of regex in that module (foo/bar/baz.rs) causes you to need to edit a different file (lib.rs).)
[5]: one of the things I miss from Gentoo's ebuilds when on Ubuntu is that it's so darn easy to throw additional packages into the purview of the package manager. Building .deb files is vastly more complex than an ebuild.
Loving the feedback, thanks! That's a good point about destructuring the returned type of a chained zip, I'll need to take a look at that. And I'm happy that you've found the IRC channel, don't be a stranger if you need help or have any more feedback (especially wrt stdlib APIs that you'd like to see).
For a 'wannabe entrepreneur learning WordPress', anything with more than 50 lines of code is insane. They may be able to build an e-commerce site, a forum, and so on, but usually no real value is seen.
Still, even if one finds an exception to this rule (e.g. Groupon), it can't be compared to real technical innovation.
Aha. So now, the toothpaste in the tube (if you will) shifts to the issue of which technologies are boring, and which use up innovation tokens. Surprise: the technologies you prefer are boring (note the direction of causality there), and the ones you don't like require innovation tokens!
The general point is quite good. The only thing is that PHP should be considered a particularly bad outlier, and people should still avoid it at all costs. It's just way too old. It's not "boring", it's antiquated. With Rails/Django having been around for almost a decade, they're much more sensible choices than PHP. Sure, if your whole massive codebase started out in PHP that's one thing, but if you are spinning up a new project I see no reason at all for using PHP. Even if it means you'll have to spend a little extra time getting up to speed in the beginning, you'll benefit a whole lot in the end. The same goes for MySQL vs. Postgres, to a lesser extent of course.
For me, Python is boringly productive. When I need to get something done, I'd probably have more fun figuring out how to do it in Haskell or Clojure, but chances are there is already a Python library that solves my problem with a few lines of method calls.
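In that spirit, the kind of chore where boringly productive Python shines (a made-up example; the file name is invented):

    from collections import Counter
    from pathlib import Path

    # The ten most common words in a log file, in four lines of stdlib.
    words = Path("app.log").read_text().split()
    for word, count in Counter(words).most_common(10):
        print(count, word)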
The article was very interesting - I find the whole idea of "innovation tokens" very compelling.
However, I didn't understand how Rumsfeld came into play here. Did somebody understand this point? I feel like I'm missing something here.
He was ridiculed for talking about "unknown unknowns" because of the ridiculous phrasing. See http://www.youtube.com/watch?v=GiPe1OiKQuk
Interestingly, this ridicule became so common that people now know more about known unknowns and unknown unknowns, which is perfectly sound logic, but one now weirdly associated with Rumsfeld.
I'm no Rumsfeld fan, but I never understood why that statement would be ridiculed. The quote below makes perfect sense to me.
"Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones."
Yeah that statement also made sense to me despite not liking the guy. I like to think of "known unknowns" as knowing the question but not the answer, and "unknown unknowns" as not even knowing the question.
Of course knowing the answer but not the question is best left to Douglas Adams.
Yeah, I always found that ridicule odd. I'm no fan of the guy but it was a perfectly good point (and, in fact, was disastrously proven true pretty quickly).
The reality is, you need senior engineers, people who have built systems before, and are over the stage of their career where they just want to build stuff for fun, and are actually focused on building systems that provide value for the companies they work for.
It's not about new tech, or old tech, or boring tech, or exciting tech. It's about looking at the specific problem at hand and making an assessment about which technologies make sense to use.
As much as our industry doesn't want to admit, there are advantages to having real work experience.
Postgres is definitely not boring as there is a lot of interesting stuff happening with this database. I think the author here should really be making the distinction between sexy & unsexy.
Sorry, let me try again, without humor this time. :)
"I dislike the use of the word 'sexy' to describe things which are considered attractive for non-sexual reasons, partly because it dilutes the original word and partly because it doesn't convey any reason for the attraction to the actual thing."
A few of my friends had shared this on Linkedin and since it was also referenced in some of the mailing lists I am a member of, I felt compelled to write up a response to this rather shallow article and line of reasoning.
http://esfand-r.github.io/2015/04/don't-choose-boring-techno...
Was forced to use Xamarin Forms in a project recently. It is less than a year old, and not suited to what we need. But somehow the idea was that it would save time.
My rule of thumb is: you can do something right, or you can do something new.
You need to find a balance, but if you're doing something new in business you should use reliable tech, if you're doing new tech you need to isolate that.
Corollary: you can hire for standard tech, but you have to ramp people up on in-house tech.
Using multiple tokens does slow things down, but it is sometimes worth it if you get a bit lucky and make good choices.
These days I generally optimize for fewer lines of code (boilerplate or not), as few dependencies as possible, and a general respect for the CPU cycles needed to run it.
I've always had a soft spot for the choose boring argument, but for some problems boring tech is a poor match. Rather, I try to look at each problem objectively - decide what I want out of a solution, and select accordingly.
We started building with that approach. We're looking for people to tell us what's cool and what's not!
We at GeoG are building a safe, easy and sustainable platform for IoT. We can totally do everything if we put time and effort into it - which we are ready for!
What we want you to do, is start hacking and building things with us. No strings attached!
We recently set up a Community page http://community.geog.co where we want to start listening to you, your needs and suggestions.
Did we mention we have an API? We'd like you to start beaming data and create cool things with us. http://api.geog.co
Also, if you have reasons to hate us, bring it out! We're listening.
Author here--yeah the irony is killing me. It's PHP though. I'm fighting quota issues with my dumb host. I assure you migrating to something better is not my day job. Here's a pdf of it:
I got it from politics and Rumsfeld, long before I read Taleb. That said, Taleb certainly influenced, or entrenched, my approach to technology decisions. People often look at the known improvements a new technology makes over the known problems of an established solution, being ignorant of the unknown problems that could arise and be of a rather serious nature.
Great article; however, the Jerry Seinfeld gif was a bit annoying. Although I must admit it is funny and probably appropriate ;-), it was hard to read anything around it. I could only concentrate on the content when the gif was outside the screen.
I get the point: if the end goal is what matters, don't get caught up in hype. But I won't lie, my job is not an end goal in itself; as an employer, never believe that it is.
Plus there are complicating issues, like the fact that a bored worker is a bad worker.