Pretty interesting that according to him, Airbnb didn't really have a functioning testing infrastructure only a year ago. So you really can hit a billion dollars in valuation without testing :)
Google didn't really have a culture of testing until around 2005-2006. At that point they were public and had a $100B+ market cap. I'm sure there are other public companies out there who don't do automated testing at all.
The technical infrastructure and code quality of a company are secondary factors in its success. The biggest influence they have is in the ability to attract and retain brilliant engineers, because most brilliant engineers don't want to work in a place where they're just treading water with bugfixes (which is the situation you eventually get into without automated testing). But the mechanism is that brilliant engineers write features that your competitors can't match, not that testing itself will make you successful.
What does a brilliant engineer have to do with success? You don't need brilliant engineers to have a successful company. Some companies do lean heavily on their engineering department, while others succeed through better direction and leadership in execution.
Google's success was largely built on an engineering solution to a problem - giving people search results that are more relevant than anything before. But other companies like Uber aren't solving engineering problems; they're solving multi-dimensional problems without relying solely on brilliant engineers for success.
Uber tries to reduce the friction of locating, requesting, and tracking transportation, while reducing cost and improving the quality of the service - none of which are engineering-bound problems. Sure, showing the car's location on a map is a significant feature, but overlaying GPS coordinates on a map does not require engineering genius. What does take considerable effort is the logistics of managing the drivers, the payment systems, the regulations, the customer service, etc. - that is the heart of the business.
Their apps and mobile site (at least while I've been using them) have been unreliable or limited in exactly the capabilities that would be most useful to users. It's not a stretch to say the technology behind Uber has been weak compared to the great lengths they've gone to in managing real-world resources to get their product to market. I'll take 5 regular engineers and 1 really good leader over 5 brilliant engineers any day.
I think the advantage of having really good engineers is that you can then go after adjacent markets that take some technical prowess to serve, and you can defend your core market quickly against upstarts who take what you've learned but don't have the curse of legacy code and users.
Google, Facebook, and DropBox all took the strategy of "hire the very top, and build a really strong engineering organization even though it doesn't look like you need it." For Google, it was obvious that search was a hard problem (or maybe not - why didn't Infoseek and Lycos do this?). But for Facebook and DropBox, the product initially looked like a simple utility; because they hired really good engineers, they turned it into something challenging and impressive. And empirically, their trajectories seem to have worked out better than, say, MySpace and YouSendIt, two competitors that had first-mover advantage on them but didn't make engineering hiring a priority.
Obviously this doesn't apply to all markets, but I do wonder if sharing-economy companies like Uber and Lyft will eventually be displaced by a company that does hire for technical ability.
Having both good leadership and good engineers is more likely to provide success than just good leadership. That said, having only good engineers without good leadership will lead to failure. But the companies you mention require different amounts of net engineering effort to achieve their respective goals.
Hiring the best engineers doesn't seem to be as big a factor for companies like Facebook, for example. Facebook found ways to connect people that kept them engaged with the site, which was more of a social programming [psychology] problem. While still an interesting engineering project (from a web-design standpoint), it's more a question of designing features people want to use.
Google wanted to provide more relevant search results, and so engineering was a key component in solving a complex problem. DropBox was similar in that they were providing a way to reduce friction in a basic user experience: synchronizing file copies. But DropBox's solution is nearly all engineering, because the problem they're solving is almost entirely based on managing logical resources (which is something engineering is great at).
However, if you leverage technology poorly, your company will suffer. MySpace ended up developing faster with fewer engineers using ColdFusion than Friendster did with a greater number of engineers using JSP. Years later, Facebook found great success using PHP - but was that because they used PHP, or because the goals they focused on were different, or because their execution was more deliberate?
It's a major reason I still use something similar, although I've been using HHVM instead. Amazing for prototyping, and years of experience make it easy for me personally to work with :)
You did. Thanks for answering! I'm definitely the one leading the (small) group of devs who test here. Your article was great motivation for our cause.
You really don't have a test culture unless you're 'allowed' to take the time to debug errors that happen inside the test suite and not the product itself. That's the difference between merely writing tests and really having a TDD culture.
I mean those issues where the test suite requires maintenance even though the actual code base or product is "working".
Everyone has had them. I guess that's what the article means by 'great pain'.
Recently I've heard a few non-engineers use "continuous integration" as a way of charging clients more, per the usual buzzword rules.
Interesting that you guys went with Solano. We used them back when they were TDDium and found the experience to be very bad: notable downtime, a poor interface, and a crappy configuration experience (getting environment variables into it was very annoying). We've been happily using Codeship ever since.
What reasons did you have for choosing Solano? How has your experience been?
As someone running a company with around 50 people and a quickly growing codebase, I find introducing testing as "a bar so low you can trip over it" an amazing way to articulate exactly how I feel about this.
At first we did this by introducing tests on our most complex and most commonly used code - tests that could be run locally. Moving on to pull requests and a more robust CI setup, to enable more regular deploys, is the task at hand now.
Like testing, PRs are one of those things that seem like they will slow you down, but once you learn how to use them they can actually increase velocity (among many other benefits). It's been awesome to watch how good people have gotten at collaborating and communicating via PRs at Airbnb.
Does anybody have recommendations for where/how to start learning best practices for TDD?
As (nominally) top nerd at a tiny startup (2 engineers), I feel like I should set a precedent sooner rather than later for testing. This is currently not possible since I don't know anything about it, so any resources would be appreciated :)
Edit: Primarily looking for resources involving Node.js and client-side testing of a jQuery-based website.
One aha moment for me that's talked about here is to treat your tests more as a form of documentation and specification of how to use the system. They talk about how you should even do some basic tests to confirm enumerations and constants in the system as a way to be clear about their use.
You don't always have to be as thorough, but the mental shift from test to specification was helpful for me.
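For a concrete (made-up) example of what they mean, here's roughly what a "specification-style" test of an enumeration might look like in a jasmine/mocha-style runner - the OrderStatus module, its path, and its values are all hypothetical:

```javascript
// Hypothetical module under test: an "enum" of order statuses.
// The test doubles as documentation: it spells out exactly which values
// exist, so any change to them is a deliberate, reviewed change to the
// system's contract rather than a silent one.
var OrderStatus = require('../lib/order-status'); // assumed path

describe('OrderStatus', function () {
  it('defines exactly the statuses the rest of the system relies on', function () {
    expect(OrderStatus.PENDING).toBe('pending');
    expect(OrderStatus.SHIPPED).toBe('shipped');
    expect(OrderStatus.CANCELLED).toBe('cancelled');
    expect(Object.keys(OrderStatus).length).toBe(3); // no surprise additions
  });
});
```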
Edit: Unfortunately, if you download this podcast it's a bit out of order, so you want to read the blog and use it as a guide for which order to listen to things. They're in the process of writing a book about TDD and this blog is part of their process.
I've been setting up client-side testing with grunt + browserify + karma + jasmine lately. It's pretty freaking powerful. But none of the usual suspects for mocking http.get requests work well with it (sinon fails to execute in a browserify test environment; nock expects to be running in a node environment with the ClientRequest object available).
What I've tried: installing sinon via npm; shimming sinon in my karma browserify plugin config (`'sinon' : 'global:sinon'`) since it doesn't follow CommonJS; and using browserify (plus the karma browserify plugin) to require('sinon'). Even so, sinon is undefined after `var sinon = require('sinon')`. Might be something up with my config, but none of the other non-CommonJS modules I shim (jquery, swfobject) have the same issue.
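For what it's worth, once sinon actually loads (e.g. pulled in as a plain script via karma's files list rather than through the browserify bundle), its fake XHR server is what I'd reach for to stub the $.get-style calls. A rough sketch - the endpoint and payload are made up:

```javascript
// Rough sketch of stubbing HTTP in a jasmine spec with sinon's fake server.
// Assumes sinon and jQuery are available as globals (e.g. loaded as separate
// karma files), since requiring sinon through the browserify bundle is what's
// failing above.
describe('user list', function () {
  var server;

  beforeEach(function () {
    server = sinon.fakeServer.create(); // intercepts XMLHttpRequest
    server.respondWith('GET', '/users', [
      200,
      { 'Content-Type': 'application/json' },
      JSON.stringify([{ id: 1, name: 'Ada' }])
    ]);
  });

  afterEach(function () {
    server.restore(); // put the real XHR object back
  });

  it('hands back whatever /users returns', function () {
    var result;
    $.get('/users', function (data) { result = data; });
    server.respond(); // flush the queued fake response synchronously
    expect(result[0].name).toBe('Ada');
  });
});
```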
Yes, we're using Express.js. QUnit looks interesting, I think I'll start with it. Is it pretty standard to just have an HTML file that runs all your client-side tests?
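(Something roughly like this, I'm guessing? File names and paths are just placeholders:)

```html
<!-- A minimal stand-alone QUnit runner page; open it in a browser
     (or point a headless runner at it) to execute the tests. -->
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Client-side tests</title>
  <link rel="stylesheet" href="qunit/qunit.css">
</head>
<body>
  <div id="qunit"></div>           <!-- QUnit renders results here -->
  <div id="qunit-fixture"></div>   <!-- DOM sandbox, reset before each test -->
  <script src="lib/jquery.js"></script>
  <script src="qunit/qunit.js"></script>
  <script src="../src/app.js"></script>      <!-- code under test (placeholder) -->
  <script src="tests/app-tests.js"></script> <!-- the QUnit.test(...) calls -->
</body>
</html>
```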
I wonder if the guys are doing code reviews for each PR along with making sure the build is green. On our team, we've been doing code reviews for about three years now and can't imagine our workflow without them.
Yes, we're absolutely doing code reviews for each PR. Should have mentioned it in the post. Our general policy is to have engineers merge their own PRs, but only after at least 1-2 people have reviewed them (and obviously more for sensitive changes). The dialog that takes place in PRs helps enforce (and sometimes define) our style standards, teach engineers the idioms of a language they may be new to, and ensure that we're always moving our codebase in the right direction. (They're also a great place to teach people how to write cleaner and less brittle tests!)
Ironically, AngelList posted a slideshow the other day about how they don't use tests because they increase development time and make it hard to be agile. They instead iterate quickly, pushing out new versions and fixing rapidly as things come up.
If you can test everything manually, with confidence that every incremental change breaks nothing you've ever built up to now, I suggest you work on more complicated things. Otherwise, you're mistaken.
The irony is that AngelList allegedly generates funding for real engineering teams.
Well, AirBnB is a good example of a startup where the real value - and challenge - was in its novel business model, NOT in the engineering challenge of building their website. The site is very nice, but for all I know they might have succeeded with a Craigslist-like page, as long as they solved the problem of creating a marketplace.
So I agree with the general idea behind what AngelList is saying. OTOH, if the startup's value proposition is based on solving a "hard" engineering problem, it's not such a good idea... but it seems that most problems startups tackle are NOT engineering problems.
Wow. I'm surprised at how far they were able to scale while still pushing most commits directly to master and having a test suite that took an hour to run.
Still not disabled! Although at this point we're so habituated to PRs as a team that in practice it never happens. We did finally disable force pushes to master, though. Don't miss those one bit.
It's very easy to allow others only to pull from your repository. It then functions much the same way as an open source project, where pull requests are required to get code into the main repo.