While I cannot judge how those practices work at Google, I have seen similar practices fail miserably when applied in smaller companies.
Opinionated best practices stop being useful when the emphasis is on opinionated and not on best. If the cabal with the most political influence is not also the most technically capable, standardization just increases how much technical debt can be built up in a short period of time.
I have seen money spent on tools teams that went nowhere, for two reasons. The first is that tools teams are attractive to people who do not want to deal with customers, so they can attract the opinionated and antisocial. The other problem is that the generic tools really have to solve the problem very well: Lack the engineering quality in said tools team, and you are spending a bunch of money on infrastructure that will both fail to evolve at the speed of OSS, and get entrenched everywhere in your codebase.
Automated testing will only get you far if it is pragmatic. I have seen millions of dollars spent on test automation that was extremely fragile and never paid for itself, because it had an extremely short shelf life. Picking the wrong tools for it makes it even worse.
Code reviews can be great, or they can be terrible. A code review that searches for real problems in the code, or that really considers alternative solutions, will be very valuable. But code reviews can quickly become pissing contests used by people to impose their preferences on others: A passive aggressive way of asserting dominance on a team. You can see a lot of that in proponents of the 5 minute code review: No actual analysis of the code is done, but 5 minutes is plenty to use it as a tool for abuse.
So while the techniques described in the article can be very helpful, the wrong implementation of them will just help you standardize on a monoculture of bad engineering. So before implementing such things think about how good your engineering really is, and whether you really are better off making sure people bend to your standards, or you are better off learning from the experience of new people. Chances are you are not Google.
You have to be careful about who you hire. If you do that well, then opinions, code reviews, one shared repository, and so on, work great. If you hire a bunch of assholes, then they won't work.
I do about 20 code reviews a week; there is never bullying and they are only rarely rubber stamped.
(My favorite code review of my own code this week was where I updated a counter to use atomic.AddInt64 instead of sync.Mutex. But where I was reading the counter, I just read the integer value directly instead of using atomic.LoadInt64. 30 seconds of the reviewers time, and we avoided a potentially difficult-to-track-down bug. No pissing match, no bullying. Just better programs.)
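For anyone who hasn't hit this class of bug before, here is a minimal Go sketch of the mismatch that review caught; the counter name and program structure are mine, not from the actual change:

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
        "time"
    )

    var counter int64

    // increment updates the counter atomically; this is the half the change got right.
    func increment() {
        atomic.AddInt64(&counter, 1)
    }

    // read returns the current value. The buggy version read the variable
    // directly (`return counter`), which is a data race when other goroutines
    // are calling AddInt64 at the same time; pairing the atomic write with an
    // atomic read fixes it.
    func read() int64 {
        return atomic.LoadInt64(&counter)
    }

    func main() {
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                increment()
            }()
        }

        // Read concurrently with the writers, the situation the review was about.
        go func() {
            for i := 0; i < 10; i++ {
                fmt.Println("in flight:", read())
                time.Sleep(time.Millisecond)
            }
        }()

        wg.Wait()
        fmt.Println("final:", read()) // always 100
    }

With the direct read, "go run -race" will usually flag it; with LoadInt64 it's clean. Thirty seconds of review, one data race that never shipped.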
I looked in more detail and if I average it out over 2.5 years and only include actual code I care about (not importing OSS packages, which I also review), it's more like one a day.
I was too lazy to calculate a histogram, but let's go with ~100 lines a day.
Have to put out a shameless plug here for Hacker School. I looked forward to code reviews there because I learned so much from them. So much so that it actually surprised me to read your comment about how code reviews have been unpleasant for you. It would be great if more workplaces could adopt that kind of mindset and set of values as well.
In short, code reviews don't have to be bad, from personal experience.
1. I never said I've had unpleasant experiences with code reviews; I said they're too often wielded as a bullying tactic. This is something that can be recognized without actually being a part of it.
Furthermore, your comment attempts to subvert my statement by implying my observation is more about bad personal experiences and less about thoughtfulness.
2. The context in which code reviews have been used is not throwing some code up on a random website and asking random strangers to look it over for educational purposes.
The context is a professional environment where code is typically required to be reviewed by another professional before being allowed into the general codebase, in an attempt to educate about the current codebase, catch possible inefficiencies, and maintain some form of quality control with respect to the mechanical aspects of the code.
These are completely different contexts. On a site like Hacker School the worst you're going to get is people mouthing off about shit. In a professional environment, the worst you'll get is fired. Very different environments.
Hacker School is not a random website where you throw up code to have random strangers look at it. It is a three-month-long "writer's retreat for programmers," where you are sitting and working with other developers every single day, just like in a workplace, except that you're working on what you like and won't be fired.
Code reviews are in person and could easily become unprofessional or turn into personal attacks, but they don't, because there are very strong social rules about how to conduct them effectively and with improvement in mind rather than criticism.
I respect the point about thoughtfulness vs personal experience, but you should at least make the effort to look up what Hacker School is before making comments about it.
I have seen similar practices fail miserably when applied in smaller companies.
Me too.
Opinionated best practices stop being useful when the emphasis is on opinionated and not on best.
Sure. Also, if changing the way things are done causes friction for the team, sometimes it's not worth it. There's always tradeoffs.
Lack the engineering quality in said tools team, and you are spending a bunch of money on infrastructure that will both fail to evolve at the speed of OSS, and get entrenched everywhere in your codebase.
This is a great argument for investing in OSS tools, and for open sourcing tools invented in-house.
Automated testing will only get you far if it is pragmatic. I have seen millions of dollars spent on test automation that was extremely fragile and never paid for itself, because it had an extremely short shelf life. Picking the wrong tools for it makes it even worse.
When discussing the relative pragmatism of tests, it can be important to understand where they live within the perceived development workflow. Generally, it makes sense to attach fast, automated tests to precommit hooks in your version control system. Slow, automated tests can be postcommit and run on shared infrastructure. Very slow, exhaustive regression tests involving repeated, complex, time-consuming virtual infrastructure setup can be run when committing new tags/releases.
It makes little sense for regular developers to focus on the latter two. However, the former is a great target for extra tests, and it's generally approachable by and visible to everyone.
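To make the fast/slow split concrete, here's a minimal Go sketch using the standard -short flag; the package and test names are illustrative, not from any particular project:

    // payments_test.go: one way to separate fast precommit tests from slow
    // postcommit tests using Go's built-in -short flag.
    package payments

    import (
        "math"
        "testing"
        "time"
    )

    // roundCents is a stand-in for whatever small, pure logic the fast tests cover.
    func roundCents(x float64) float64 {
        return math.Round(x*100) / 100
    }

    // Fast unit test: cheap enough to run from a precommit hook
    // via "go test -short ./...".
    func TestRoundCents(t *testing.T) {
        if got := roundCents(1.25); got != 1.25 {
            t.Fatalf("roundCents(1.25) = %v, want 1.25", got)
        }
    }

    // Slow test: skipped under -short, so it only runs on shared postcommit
    // infrastructure (or when a developer deliberately runs it without -short).
    func TestEndToEndSettlement(t *testing.T) {
        if testing.Short() {
            t.Skip("skipping slow settlement test in -short mode")
        }
        time.Sleep(2 * time.Second) // stand-in for expensive setup and teardown
        // ... exercise the slower, more exhaustive path here ...
    }

The precommit hook then just runs "go test -short ./...", while the shared postcommit infrastructure runs the full suite without the flag.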
But code reviews can quickly become pissing contests used by people to impose their preferences on others: A passive aggressive way of asserting dominance on a team. You can see a lot of that in proponents of the 5 minute code review: No actual analysis of the code is done, but 5 minutes is plenty to use it as a tool for abuse.
To avoid these issues it can be better to do these written/offline than in person. Maybe then discuss the results in person.
So while the techniques described in the article can be very helpful, the wrong implementation of them will just help you standardize on a monoculture of bad engineering. So before implementing such things think about how good your engineering really is, and whether you really are better off making sure people bend to your standards, or you are better off learning from the experience of new people. Chances are you are not Google.
> Opinionated best practices stop being useful when the emphasis is on opinionated and not on best. If the cabal with the most political influence is not also the most technically capable, standardization just increases how much technical debt can be built up in a short period of time.
I agree so, so, so very much. We've had issues where one team wants everyone to standardise on a given technology for ideological rather than practical reasons.
Speaking from my experience as an intern at Microsoft, I really wish there was more focus on training. I've heard a lot of praise about the quality of codelabs from friends at Google and never received anything like that when I was working. I was basically just given a project and expected to pick things up by asking people or reading some half-outdated SharePoint pages.
From speaking with other interns it seems their experiences varied entirely based on their teams. The impression I got was that it was really fragmented across teams and that each team felt like it had its own way of doing things.
I'm not exactly sure how it is at Google but I've heard from a friend that the entire company works on a single shared codebase. The project I was working on had 3 different forks of the framework I was developing (that I knew of), each maintained by different teams. Basically, the impression I got was that everything just felt really fragmented and that I was only working with my team rather than with an entire company.
I should mention that this was my 2nd software development job ever, so maybe this is normal for lots of companies and Google is one of the few doing it right, but the experience didn't really leave me with much confidence in the engineering environment/culture of the company.
I've heard tons of good things about the engineering culture at Google so I'm considering if I should try applying there.
Ex-Google engineer here. The way Google does things is indeed very rare, your experience at Microsoft is more typical.
Note that the path Google chose isn't easy! As the codebase got larger and larger they had to basically design and build their own build system, a distributed unit testing engine, and custom code refactoring and hyperlinking tools, make major modifications to Eclipse just to keep projects loadable, and eventually even build their own version control system because no other VCS scaled to the sizes and speeds they needed. They invested a TON of time and gold into allowing a single codebase to scale like that; it's practical if you're sitting on a hosepipe of money and can hire brilliant engineers and then assign them to build tools, but it's probably not practical for most companies, even though the resulting environment was quite pleasant.
That said, although the codebase was remarkably consistent, there were still variations. The most notorious was the split between the C++ and Java codebases. Some frameworks were written in C++ and bound into Java using things like SWIG. Others were bound using cross-process RPCs. A lot of smaller libraries were simply written and maintained twice. The culture was different too. Some Java codebases went way over the top with abstraction and dependency injection, making them an absolute nightmare for newcomers to understand or debug. Others were simpler and more understandable (typically the older ones). The C++ side felt much more lightweight, but on the other hand, it was largely stuck in the 90s, with memory management being entirely manual even in cases where conservative GC could probably have helped avoid mistakes. When I left, C++11 was in the process of being whitelisted one feature at a time.
The biggest problem I had with the culture at the time I left (and one reason I decided to pack my bags after 7.5 years there) was that basic tasks were becoming more and more bureaucratic. Often this was due to poorly designed processes put in place in a panic after some PR disaster around privacy or data handling, like Street View or Buzz. Sometimes it was just because some engineer needed to redesign a widely used system in order to prove they were being "impactful" and get promoted, even though it wasn't really broken. There was a running joke there that there were two versions of every tool: the deprecated one and the one that didn't work yet. It started out as a lighthearted take on the company's rapid progress and by the end it unfortunately just reflected a sad reality.
As an illustrative example, around the time I started to step back, one member of my team had spent on the order of 6-8 weeks attempting to simply download a file via HTTP in production, from our own servers (was a self test). This task involved filling out forms, arguing with the owners of the HTTP download service (the old one didn't require this process but was deprecated), discovering that the new version was hopelessly buggy and was breaking random products for end users without anyone on those teams noticing, etc. This is a task that could be accomplished in two minutes with a random Linux VPS and "wget" but turned into an epic struggle against a Kafkaesque corporate disaster zone. The problem was not that one team was poorly performing though: the problem was the company had lost the ability to pre-emptively catch this or even recognise that there was a problem. Most team members were happy to just collect a paycheque each day; if they got paid for filling out forms justifying why they needed the "fast reliable" HTTP download service instead of the "slow unreliable" service, why worry? Best not to rock the boat.
> One reason I was able to quickly become productive within Google was because the company had invested so many resources into training documents called codelabs.
Counter perspective here. The on-boarding training at Google for SWEs is indeed very extensive, spanning several weeks with tons of material. For example, the first code lab alone covers how all of Google's source is in a single repository, how to check it out, how it gets built on a server, how to do code reviews, what the coding style is, etc.
I threw it all out the window after my first two weeks. I was joining an Android team, and as it turns out, Android works completely differently. Different source repo, different SCM tooling, different build system, different development workflow, different deployment mechanisms, even different workstations.
And the company culture was different as well. For example, in the Life of an Engineer class, they impressed upon us Google's culture of openness by poking around the source tree for search, and showing how we already have access to its code! But the Android source tree is tightly controlled, and my access request required two weeks and VP approval. (It's still the case that the majority of Google SWEs do not have access to Android source, or the Android café menu.)
I eventually came up to speed the old fashioned way: poking around outdated documentation, trial and error, and lots of bugging my neighbors. So in my case, the Google on-boarding process was mostly useless and partially misleading. This illustrates one way that "reusable training materials" can go wrong, especially with a larger organization.
From your experience, it just seems like the Android team was lacking those "training materials". I think your case shows it would have been a good idea for them to have some as well, doesn't it?
More like: when company A acquires company B, company B does not instantly replace its culture with company A's. Especially if company B is quite successful.
> One reason I was able to quickly become productive within Google was because the company had invested so many resources into training documents called codelabs. Codelabs covered the core abstractions at the company, explained why they were designed, highlighted relevant snippets of the codebase, and then validated understanding through a few implementation exercises. Without them, it would’ve taken me much longer to learn about the multitude of technologies that I needed to know to be effective
Any Googler care to share what a "codelab" looks like? Always interested in seeing innovations in documentation/onboarding (and I'm guessing it has nothing to do with the now retired "Code Labs"? https://code.google.com/labs/)
Basically, a tutorial. The internal codelabs have some logic to substitute your username throughout the text (good for things like "create a test directory"), but are otherwise very similar to the public ones.
It is very difficult for me to translate lessons learned at Google to anything except Google. I have read a lot of blogs from Googlers and former Googlers who write from a perspective of, I guess, a luxury I have literally never seen, even at MS during its golden age. For example, the idea of code reviews and whitespace. I have never worked in an environment where we had that kind of money to spend.
" code reviews and whitespace. I have never worked in an environment where we had that kind of money to spend."
Also my experience with Google coding practices. The code review process can be excruciatingly long and painful (especially when you're starting out). Lots of back and forth over things like whitespace between operator and operand, and whether to put the stream operator at the end of the continuing line or at the beginning of the continued line, etc.
Someone else said that Google's practices are optimized for code maintainability rather than developer efficiency. Absolutely true. Reading code at Google (at least C++, which I have experience with) is a joy, while writing (and submitting) code is at best a chore. At its worst, it's bad enough that I've seen a new hire (a senior engineer, known for delivering at a high level at his previous company) go back to his old company after a quarter or so of frustration with the process.
"Invest in reusable training materials to onboard new engineers."
So important! Doing this really helps keep people on the same page and limits the amount of time spent arguing about semantics or why we came to a certain decision.