I was there when "Walkabout" started being bandied about. It irked me to no end that I could jump through all of the hoops and finally get behind the NDA curtain by being hired and then be told "nope, you can't hear about that". That was a severe jolt to the internal culture of the whole company.
There had been other projects which had been kept under wraps successfully. Chrome and V8 were demoed in Seville long before anyone on the outside heard about them. They said keep it quiet, and we did. Google TV had been known internally for ages before it went anywhere externally. There was no reason to think someone would leak Wave.
Instead, they started playing the "we're special and you are not" card, and that started a sense of resentment growing. Not even the infrastructure teams which were going to provide services to them were let in on what was really going to happen in there.
Then after far too long, demo day of Wave arrived. I only stayed long enough to see them hit backspace and have it echo out to everyone else who was connected. I remember my exact comment at the time: "packets". As in, lots and lots and lots of packets flying around to generate RPCs for all of those deltas. Then those turn into XML or whatever going out to web browser clients, and ... yeah. SO many packets. That right there worried me greatly.
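To put rough numbers on that worry, here is the back-of-the-envelope fan-out for per-keystroke deltas. The figures are purely illustrative, not Wave's real traffic:

```python
# Back-of-the-envelope fan-out for per-keystroke deltas.
# The numbers below are illustrative, not Wave's actual figures.

def delta_messages_per_sec(typists, clients, keystrokes_per_sec=5):
    """Each keystroke becomes one delta, broadcast to every other client."""
    return typists * keystrokes_per_sec * (clients - 1)

# One person typing to 9 listeners: 45 messages/sec.
small = delta_messages_per_sec(typists=1, clients=10)

# Five people typing in a 50-person wave: 1,225 messages/sec for one wave.
busy = delta_messages_per_sec(typists=5, clients=50)

print(small, busy)
```

Multiply that by thousands of active waves and the "packets" reaction makes sense: the per-keystroke model trades bandwidth and server work for real-time feel.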
So then I see this thing about it not scaling properly and choking JVMs and suddenly it all makes sense.
Oh well. All of the secret code depots and restricted access areas must have been practice for what is now happening with Plus. Entire floors of buildings you can't open as a full-time employee? Yep. More code depots being locked down? Yep.
Just like I used to tell candidates when they asked if we used Solaris, or Windows, or whatever you can think of: if you can think of a technology, the place is big enough that there's probably an install of it somewhere.
Along those lines, I doubt there's one of anything in there any more. It's just too big.
"Crazy" is probably the wrong word for me to use (I have been known for my hyperbole in speech). What I meant was that, from my point of view, having multiple depots is almost the same as having multiple directories at the root of a single depot. The Perforce Support article you linked to says the same thing.
Hence, I feel that having multiple depots on one Perforce server simply eats up client names and causes confusion among those who haven't learned the distinction between "depot" and "server".
It's an interesting post and raises several cogent points but it misses the biggest problem with Wave: it was a solution in search of a problem.
I say this simply as someone who was, at the time, outside looking in. I know little more than that, but it had all the hallmarks of what happens when engineers are running the asylum. Here's this communication medium in which basically all other communication media can be implemented (email, IM, forum posts, Twitter, etc). It's the kind of general solution that engineers come up with.
I read a post from someone else (can't find it now but I think it was on Quora) who was familiar with the matter and they were saying the risk-reward thing (which this poster mentions in passing) was all messed up. Basically the incentive structure rewarded mediocrity.
I can't speak with any knowledge of those matters but I can believe it. After all, in a startup what happens if the startup fails? You find a new job. There is a strong incentive to make your runway last and get to your next funding round (or, Heaven forbid, profitability). Inside somewhere as cashed up as Google, those incentives (IMHO) disappear.
If the "startup" fails, what happens? You just move to another part of Google. What do you think the odds were that with Wave going away, any extra Wave incentives became worthless (as would happen in a startup)? Basically zero (IMHO).
Every time I tell an engineer that I worked on the wave team, I'm greeted with the same response: "Oh, Wave! You know, the real problem with wave was X". And then I stop listening, and tell them this:
I have heard that for all the values of X. Here are the most common:
- It was too hard to use / I didn't know what to do with it
- It didn't integrate with email / didn't have email notifications
- It didn't support IE
- I started using it, but none of my friends were using it, so I stopped
- It was too slow
Too late, the team spent a bunch of time talking to actual users and finding out how they were using the product and what their real pain points were. People use wave to communicate in small teams. IIRC the biggest pain point our users cited was that wave didn't support printing. The people who did get over the adoption hurdle ended up using wave a lot. (I still wonder if maybe wave might have survived if we had charged for it.)
Wave died because we were too slow to make it good. The code base consists of around 1M lines of java. With 1M lines of code, you need a giant team to get anything done, and a giant team working hard on features tends to add even more code. The wave team burned too much money too fast. And we were still working too slowly to get explosive user growth. It made lots of people sad, but Google was right to kill wave.
If I could go back in time and give advice to the team, I would say:
- Don't use Java
- Don't scale past a few engineers. You will have a much longer runway that way, and you'll need it if you want to replace email.
- The network effect will kill you unless you integrate with email from day 1.
- You don't know what you're building, so talk to users early. Like, today.
- Don't optimise for scalability until you know what the product is supposed to be
- Don't try and reinvent the scrollbar, you idiots. (So much stabbing.)
The fundamental problem with wave wasn't that it used one technology instead of another or had one feature instead of another or anything so mundane. The problem with wave was that it solved all of the easy problems with communication. Granted, it seems to have done a good job of that, and that alone created something different that maybe was worthwhile in an abstract sense but wasn't worth switching to and wasn't worth adopting in addition to everything else.
The problems we face with communication today are not ones of distribution, speed, or longevity. Currently there are some very popular forms of communication which have fundamental issues with one or many of those aspects, but ultimately those are problems which are amenable to very direct and relatively inexpensive solutions.
The real problems we face are ones of organization, discovery, workflow, meaningful semantics, and overwhelmingly managing information overload. I don't believe that Wave significantly addressed any of those issues. But only by addressing those issues can you create the sort of value for users that will drive significant adoption of your tools.
Since this is the first time I've managed to address someone who worked on Wave, I'd like to say "thank you". I thought it was tremendous, if flawed, and found it useful while it lasted.
In a team of about 6 remote workers we used it to create collaborative documents - starting with something that looked like a chat and then going back and editing the chat into a finished document.
You may well be right that Google was correct to kill Wave, but thanks nonetheless.
I want to add my "thank you" too. I used wave as a personal wiki of sorts. I added my own development journals and ideas, kept track of projects. I used it mostly as a one-man team, but the fact that I could share and collaborate with partners far away was the big plus. I was really sad that it was killed, and now I am running wave in a box on my computer; not the same thing, but it's still good.
It would take more than a couple weeks, but 10-50k LOC sounds about right. Since leaving google, I've rewritten wave's concurrent editing algorithm in coffeescript in just 3k LOC.
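For the curious, the core of a concurrent-editing (OT) algorithm really can be tiny. Here is a minimal insert-only sketch in Python - the function names are mine, and real OT like Wave's also handles deletes, attributes, and tie-breaking, so this is only the shape of the idea:

```python
# Minimal operational-transformation sketch for insert-only operations,
# in the spirit of (but far simpler than) Wave's concurrent editing.

def apply_op(doc, op):
    """op = (position, text): insert text at position."""
    pos, text = op
    return doc[:pos] + text + doc[pos:]

def transform(op_a, op_b):
    """Shift op_a so it applies correctly after op_b has been applied.
    Equal positions would need a site-id tie-break, omitted here."""
    pos_a, text_a = op_a
    pos_b, text_b = op_b
    if pos_b <= pos_a:
        pos_a += len(text_b)
    return (pos_a, text_a)

# Two clients edit "hello" concurrently:
a = (0, "say ")   # client A prepends
b = (5, "!")      # client B appends
doc = "hello"

# Apply A first, then B transformed against A:
left = apply_op(apply_op(doc, a), transform(b, a))
# Or B first, then A transformed against B:
right = apply_op(apply_op(doc, b), transform(a, b))
assert left == right == "say hello!"
```

Both orders converge to the same document, which is the whole point of OT. The hard parts in a production system are the ones elided here: deletes, rich-text attributes, transforming against long histories, and doing it all efficiently.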
Was there any particular reason why it ended up so huge then? Trying to handle each and every possible use/edge-case completely? Why not Java (from your previous answer)?
I loved Wave despite its warts and managed to run several extremely successful small team document writing collaborations through it, completely replacing IM, email, groups and lots of early document drafting in Docs/Word. There was nothing that anybody on the team had experience with that came close to that. We went from hundreds of emails a day and hours of IM'ing to absolutely zero of each within a week of moving to Wave.
My only real complaints were that the web client was dog slow and going from a Wave of draft-edits to a final document was kind of painful.
Never did find a use for lots of the other bells and whistles (embedded widgets things, channel bots, etc.), perhaps that's where a lot of the extra LOC went to...
The entire Wave story would make an amazing case study if Google would ever let the entire story out.
I don't think there's any one single reason why the code base is huge. It's written in proper idiomatic java, so a lot of the code is bureaucracy (interfaces, factories, managers, assertions, etc). The code that was opensourced is reasonably representative of the rest, if you're curious:
I think we should have spent more time keeping the code clean; but that's always a tough call when you have features to ship. It's easy to point to lots of things and say they were mistakes. But I worry that, had the project been successful, we might point to the very same things and talk about how clever we were.
A huge effort was put into making the client more responsive in the few months just before wave got cancelled. I'm not sure if it's still the case, but at one point wave was loading faster than gmail. We were a month or so away from shipping server-side rendering of waves with progressive enhancement for editing. That made waves load in a snap. (The guys finished that piece of work and put it in the opensource wave in a box project.)
You weren't alone in getting a lot of value out of wave. We used it internally for everything. When we had meetings, we shared the agenda in a wave. People edited the wave with meeting items, and during the meeting people collaboratively annotated the agenda with what was decided, turning into the minutes. It was a thing of beauty. I still think there's a need for something like wave - it solves real problems.
When wave was cancelled, there was a series of honest retrospectives internally on different aspects of the project. I hope they eventually get published too. A lot of people were quite career-sensitive after wave. Most blog posts you see by wave developers are written by people who have left google anyway.
Agreed. Today's technology - CoffeeScript + Backbone.js + node.js-ish backend would make it a lot easier to develop something like Wave in a couple of months by one or two developers (plus maybe a designer).
Looking at the app I'm building (alone) with these tools, I could say that it's on par with Wave on the realtime UI and complexity, but the development experience is a really pleasurable one. These tools are crafted for exactly what I'm doing and it seems like they know what I want to do before I want it.
But then again, back then you didn't have these tools so I guess you had no choice but to go the 'academic' Java way.
But anyway, congratulations on this incredible achievement and also for the many lessons learned and shared with the world.
Those technologies probably don't scale well in terms of hiring.
Also, those technologies probably don't scale at all to 1000s of servers, something a Google project has to do from day 1.
(I'm not sure they'd even be fast enough on a single server. As a rule of thumb, the more technologies you put in the mix, the more abstractions there are, the harder it is to minimize the work done.)
Yes. We did work to make wave scale before we knew what features would be important to users. That was definitely a problem, but it's hard to avoid scaling when you're google and the whole world is watching.
The earlier you launch, the sooner you need to do that scaling work. But the later you launch the more development you do in a vacuum, without real customer validation.
I guess you're talking about node.js.
That, of course is a hot topic today, but I'm confident that this problem is solvable.
That's why I called it 'node.js-ish', meaning a backend that can do similar things as well as or better than node.js.
CoffeeScript and Backbone.js have nothing to do with scale, as they are used on the front end.
The author talked about how horrible it was to have the UI written in Java (having to wait 3 minutes for a recompile after a CSS change).
These two tools (plus SASS and alternatives) on the front end make Java->Javascript totally redundant, imho.
My guess is that the Web interface only exposed a small part of the functionality. There's all that syncing and federated server goodness under the hood to take into consideration too.
Every line of code adds spec inertia. Java makes you write a lot of code, which makes it super hard to redesign your project when you pivot on features.
Java works fine in enterprise software written against a strong spec. But wave never had a strong spec - It was an experiment. Inevitably, we spent a lot of time making the wrong thing.
We needed to be able to change wildly based on user feedback. We needed a lighter, nimbler language which would let us pivot easily and throw away code. ... And you optimise for a changing spec with a small team and a terse codebase.
Today I would recommend Coffeescript, Rails or Scala. Maybe even Go. A few years ago, Ruby, python or C++. Erlang, clojure or haskell would probably work too, depending on the team.
I'm just really surprised about one suggestion: C++. Even if I used it extensively and with great satisfaction back in the day, and even if I personally don't like Java, I'd like to know what would have made C++ a better choice than Java for terseness etc.
I would have thought a service-oriented architecture (SOA) would be used as much as possible to modularize a project like this and make it easier to change functionality. Was a modularized approach taken, or was this just one big codebase that was essentially all lumped into one big "application"? I work on a legacy Java project that has something like 1M lines of code and it's a nightmare. The team I work with is finding that modularizing pieces of it into essentially individual projects (which is seamless to the end user) helps keep the complexity down. Just curious about this, as I think the Google employee who did that rant online the other day mentioned something along these lines.
I don't think the use of Java would necessarily lead to a project's downfall - I mean, how are you going to scale something like this in Python if it's already slow in Java?
Interesting, these are the exact reasons I have trouble using anything but Java. I don't work off specs anymore, and I can pivot/refactor in Java like nothing else I've yet used or seen. Scala might contend with this eventually but the toolset is still not quite there yet. Can you mention a few specific areas/choices where you feel it held you back?
Wave is probably extremely connection and packet-heavy, so likely something evented, continuation-based or otherwise able to handle a large number of connections on little memory. Erlang, Twisted, Go, stackless, Racket WS, Seaside, Tir, ... potentially behind some sort of load balancer for the evented ones (and the non-multithreaded continuation-based ones) or they're going to block other requests.
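To make the "evented" option concrete, here is a minimal sketch of a single-process broadcast hub using Python's asyncio - queues stand in for sockets, and the class and method names are invented for the example:

```python
import asyncio

# Sketch of an evented broadcast hub: one process multiplexes many
# "connections" (asyncio queues standing in for sockets) instead of
# dedicating a thread per connection.

class Hub:
    def __init__(self):
        self.clients = set()

    def join(self):
        """Register a new client; returns its inbound message queue."""
        q = asyncio.Queue()
        self.clients.add(q)
        return q

    async def broadcast(self, msg):
        """Deliver one delta to every connected client."""
        for q in self.clients:
            await q.put(msg)

async def demo():
    hub = Hub()
    a, b = hub.join(), hub.join()
    await hub.broadcast("delta:1")
    # Both clients see the same delta; no threads were spawned.
    return await a.get(), await b.get()

print(asyncio.run(demo()))
```

The same shape scales to thousands of idle connections per process because each client costs a queue and a coroutine, not a thread stack - which is the property Erlang, Twisted, Go, and the rest of that list all provide in their own ways.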
I would probably implement business logic in Lua and do routing/messaging/connection handling with Erlang.
Stackless might work; it would be great for a prototype. Too bad there isn't a Python on Lua or a Python-to-Lua translator. The Lua runtime is so much better and less resource-intensive.
I would say it should really be "don't use GWT". Blaming Java as a whole for a framework that was originally developed at Google doesn't seem right to me (particularly when a Google employee does it).
I didn't notice a mention of GWT (not that it wasn't in there), and I would agree if that's the case. Java is very strong for server-side development, but GWT definitely can be bloated, and it isn't really Java anyway: you write it in Java but it runs as JavaScript, so I'm sure it loses most of the advantages it would have if it really were running as Java.
"Part of the deal initially was that Wave would be compensated much like a startup, base salaries were supposed to be low but with heavy performance linked bonuses which would have made the Wave team rich upon Wave's success.
During the course of negotiations and building the Wave product, the "heavily reward for success" part of the equation remained but the "punish upon failure" part got gradually watered down into irrelevance, making the entire project a win-win proposition."
I totally agree. Running a 'startup' in a company like Google compared to a REAL startup is like having your own room in your parents home versus sleeping in a tent under the Brooklyn bridge. One has serious ramifications, the other is a walk in the park: Putting your life on the line or not.
I call BS on all of the anti-wave comments that come out. "Solution in search of a problem" is complete nonsense. It was the solution to a very real problem I have, and I only have 5 friends. We still use wave daily, exclusively really, and find it to be FAR AND AWAY the best platform for group communications. It has a few bugs still, and that's too bad... but it's full of greatness.
I disagree. Wave's biggest problem was that it wasn't user-friendly. If you wanted to search for open topics, you needed to use under-documented search modifiers. It worked just fine for most early adopters, but never gained traction because it just wasn't accessible to a wider market.
The problem wasn't that it didn't serve a purpose. The real-time collaboration has moved to google documents, and fills a real need. That said, your analysis of the failure is pretty accurate. It was a focus on engineering over user experience that sealed Wave's fate as a case study on how not to build a product.
> And this is the essential broader point--as a programmer you must have a series of wins, every single day. It is the Deus Ex Machina of hacker success. It is what makes you eager for the next feature, and the next after that.
I liked the whole article, but this paragraph really stood out to me. It's something I've never thought to put into words, but is so true. Going for days and weeks and not seeing any success in what you're doing is horribly demoralizing. It's happened to me only once, and that was enough to reconsider my appreciation of programming entirely.
It's because the real appeal of programming is, like he said, the incremental gains, the constant moving forward, the iterative process. It's why small side projects are so much fun, because there's nothing getting in the way of your next small accomplishment. It's what draws people (like myself) to programming in the first place. And in its lack, it's the slow killer of large projects.
While true, I suspect the reason is that it creates that dopamine release we all find so pleasing and addictive. It's why people also do drugs, have sex, play MMO's, etc. Our built-in carrot.
>And in its lack, it's the slow killer of large projects.
It's interesting that one reason for large project failure may be failure to stimulate dopamine release in its programmers.
It stood out for me also. Right now I'm going through a period devoid of 'wins'. Amazing what the lack of (small, visible) progress can do to one's morale and confidence in one's programming ability.
"Now, I don't mean to imply that Wave did not have some very smart engineers working on the UI, we certainly did. But talent is different from experience. The latter is a guard against 3.5MB of compressed, minified, inlined Javascript. Against 6 minute compiles to see CSS changes in browser. Against giving up on IE support (at the time, over 60% of browser market share) because it was simply too difficult. Against Safari running out of memory as soon as Wave was opened on an iPad."
I wonder how much of the failure was this sort of thing vs just having a product that people didn't understand. (I definitely agree that if the UI had been simple and snappy it would have been better)
On the bright side, I feel pretty confident that some real startup will take the open sourced Wave technology and do something good...
> On the bright side, I feel pretty confident that some real startup will take the open sourced Wave technology and do something good...
I don't share your confidence. Making even trivial changes to the 350k lines of opensourced java takes a lot of time and a lot of skill. There are only ~5 part time developers working on it. I doubt that it will be useful anytime soon.
I agree, although if I had the chance I would change the protocol spec too.
The client-server protocol uses protobufs encoded over JSON, and the federation protocol uses protobufs encoded over XML, via an XMPP extension. As others have said, there weren't accepted standards for this stuff just a few years ago. Today, you could whip something together reasonably easily using JSON and socket.io or something.
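As a rough illustration of what the JSON approach looks like - the field names and the wave id format here are invented for the example, not taken from the real spec:

```python
import json

# Hypothetical JSON framing for a single wave delta, roughly the kind
# of message the comment suggests you'd push over socket.io today.
# All field names are invented for illustration.

delta = {
    "wave_id": "example.com!w+abc123",
    "version": 7,
    "author": "alice@example.com",
    "ops": [{"type": "insert", "pos": 0, "text": "hi"}],
}

# Compact encoding, since one of these goes out per burst of keystrokes.
frame = json.dumps(delta, separators=(",", ":"))
decoded = json.loads(frame)
assert decoded["ops"][0]["text"] == "hi"
print(len(frame), "bytes for a keystroke-sized delta")
```

Compared with protobufs-over-XML-over-XMPP, a frame like this is trivially debuggable in any browser console, which is a big part of why the modern stack is easier to whip together.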
Fantastically generous post. I wish more engineers were as publicly honest about their own, and their team's failures so that other engineers and engineering teams can learn from the experience.
You need the same mix of experienced talent working in the UI as you do with traditional "serious" stuff. This is where Apple is simply ahead of everyone else.
Not only does Apple not shunt n00bs onto the UI, it actively hires extremely brilliant people to do UI invention/R&D.
E.g., consider the CV of the (obviously brilliant, IMO) "Up and Down the Ladders of Abstraction" guy, Bret Victor (1).
That's about two lightyears removed from "UI is boring and easy; make the junior programmers work on it while we do the algorithmically hard stuff on the backend."
(As an aside, Holy Shit is BV impressive and refreshing -- incredible tech chops combined with awesome aesthetic/design sensibilities, all wrapped up in a humanistic focus on usability...just, damn.)
UI is hard in the way that art is hard. It's difficult to understand what people really want or will value.
Backend is hard in the way that building a skyscraper is hard. You need a solid design and framework, but you also have to make sure every wire runs in the right place. Managing complexity is really the problem there.
Which is harder? I don't think they're very comparable, in the way that I don't know if creating beautiful art or building a skyscraper is harder.
Yes, that's the mistake most people make: back-end is engineering, front-end is art. No wonder we have such shitty front-ends.
Sure, there are a whole slew of "applications" that will suffice with a basic form-driven UI accessing a rails or servlet back-end. And indeed there are applications like twitter where the entire complexity is in the complexity of the back-end. But then there are all the apps that simply aren't web-apps yet, because nobody has figured out how to do complex, rich, and above all Interactive, applications using fucking javascript. We're getting there, but people are writing the code to do it as I type (jquery, backbone, etc).
So this is why UI is hard: it's art and it's engineering. And nobody thinks about the engineering part. And when they do, they have to find an engineer who can talk to the designer without scaring him/her (very often "her").
> Backend is hard in the way that building a skyscraper is hard ... Managing complexity is really the problem there.
UI is hard exactly for this same reason: managing complexity.
There are a lot of great tools and abstractions for back-end work, but UI has too long suffered with the same old OO approaches that GUI frameworks dictate. They are quite bad for managing the complexity.
I think art is probably the wrong analogy, as UI has a lot of science and research behind it, but I see what you mean. I think UI is hard because it's hard to "prove correctness." With software engineering, correctness is enforced at many levels - is the syntax correct, does it compile, do the unit tests pass, do the integration tests pass, does it deploy correctly? All these things can be answered via a yes or no question. UI, on the other hand, is only proved "correct" through repeated use of the program, discussions with users, distillation of that feedback to new UI changes and then more repeated use, discussions, etc. And even then, you're not really sure if you got it right. You just have to use the damn thing and see if it "feels" right. And even then, what "feels" right to one person may not actually be right. Or what feels right today may actually feel wrong after weeks or months of using the software. It's a hard problem.
Exceptional UI skills (both the design and the implementation of it) require taste, which isn't something you can just learn: you either have it, or you've acquired and cultivated it. I've seen this time and time again: great developers producing awfully horrible UIs. They sense that something doesn't 'look right' but can't fix it.
It's usually a combination of things, starting with basic layout issues: spacing/margins off, wrong logical grouping/clustering of information on screen, off color schemes, the entire gamut. That, and a low tolerance for precision work. They would obsess over shaving another fraction of a millisecond off a backend server transaction (nothing wrong with that, btw), but if something is a few pixels off, "who cares". Well, everyone who has to stare at your UI...
It's also noteworthy that UX designers who rely on other developers to actually implement their designs get much more frustrated with such developers...
Long story short, trying to shoehorn 'back-end' developers into UI jobs is a recipe for a crappy user experience.
Doing CSS and HTML is completely different from programming, and it's frustrating to go back and forth between the two. For example, say I'm tasked with fixing some bug or implementing a feature where I have to write Java code, then some JavaScript (such as an AJAX app), then also do the HTML and CSS. By the time I get the functionality working with the Java / JavaScript code, it can be hard to focus on the CSS / HTML: doubts start to creep in about the code, things you want to double-check, small optimizations you're tempted to make. Basically, I think most engineers simply don't want to think about the HTML / CSS layout stuff because that's not what they're paid to do, for the most part. And if they make a mistake in the server-side logic - say, something that doesn't adequately support concurrency, so data loss or corruption occurs - that's going to get them into a lot more trouble (damaged reputation, etc.) than if they messed up the spacing on the HTML page. Bottom line: if you want a UI done well, hire someone who spends all their time thinking about that sort of thing.
Speaking of UI being hard - and I mean this in the most constructive way possible - this article was exceedingly hard for me to read with the transparent/light-grey "Mythical Man Month" ever-present behind the left side of the text.
It's like listening to Japanese singers do English songs at a karaoke bar: sure, they get the words right, but the understanding of the music and song isn't there to let them carry that into their performance.
Why do you think many companies/organizations are giving UI work to interns or fresh grads?
There are many difficulties with UI, but the most relevant one here is that UI is a bike shed. Everyone's a critic. If you work in a big company you'll learn to dread the UI projects because you're going to have to sit in a room with fifteen people, at least fourteen of whom will dislike something about your approach and all of whom will offer suggestions. There will be 23 revisions and lots of compromises and turf battles and possibly blood, and usually the result will be the legendary horse that was designed by committee.
This is why the author's primary complaint is the team was too big, not we didn't have the best UI people. Without mindful management the big team will squash your UI people no matter how good they are.
Esoteric technical problems don't suffer from as much bikeshedding. Just as in the original parable of the bike shed, nobody wants to suggest changes to the nuclear reactor shielding. If you're the expert on reactor shielding you can basically run your part of the show.
So the author's argument is: build a giant team that is prone to endless meetings, and you will select for traits that help people avoid endless meetings. People will retire into shells. They become narrow technical experts whom nobody wants to gainsay. They will focus on problems that your company already understands well, to take advantage of the cultural consensus which precludes endless argument. And only the young and naive will step forward to do something like UI, where every move is weighted down by bureaucracy and politics.
Yeah, but if the user loses a crapload of data because someone did something stupid on the server side, that can get the company in a lot of trouble (even if they have backups, as it may take a day or two to get them in place). With UI bugs, it usually becomes obvious quickly that something's not working, and the user just has to wait for a fix - but usually existing data isn't in jeopardy.
At some point, people are going to have to admit the web is not as interactive without javascript, and will need to stop complaining when they choose to disable it. Those people are welcome to revive gopher if that is what they are after.
One of the deeper messages of the Mythical Man Month is that large teams in software don't work. The author of the post mentions that he should have known the project was doomed when the hiring manager didn't think 26 employees on a "startup" project was too many.
Yeah. According to the Mythical Man Month (or was it Peopleware?), when someone gives you 100 software engineers to do a project, what are you supposed to do? Set up 10 teams of 10 people, don't tell them about each other and make them all do the same project.
The argument is that you are going to be better off doing that and picking the best one at the end, than you are running the project with 100 engineers in the first place.
Maybe it's in reference to the fact this "startup in a startup" scaled people to a point of dysfunction, which is kind of what the Mythical Man Month is getting at as well.
But you are right, it isn't really the primary point he is making.
The 20% time policy that Google allows for one's own projects has probably served as a nice way to experience those "small wins" he is referring to (though the win may not be in the main line of work). I wish I had a similar policy at the place I work - I feel I could have avoided a burnout phase.
I'd be curious to know if Google revokes that policy for focus teams like the ones that worked on Wave and Google+.
Great post. Reminded me of the following for some curious reason :
" If the land mechanism as a whole is good, then every part is good, whether we understand it or not. If the biota, in the course of aeons, has built something we like but do not understand, then who but a fool would discard seemingly useless parts? To keep every cog and wheel is the first precaution of intelligent tinkering. " - Aldo Leopold
I really appreciate this post. I'm going through a very similar experience myself. I was at a startup which was recently acquired by a larger corporation. Since I've previously worked at both small (and fast-growing) and large companies, I thought I might have the experience to make the new, larger team I would be working with more "agile". But taking months to do what I normally accomplish in a week or two definitely is hurting personal morale. I'll try to achieve some daily small wins though=)
I don't think that's fair. Gmail and AdWords are production applications and do just fine with Java.
I think it's more the opposite: Do not use Java when you haven't achieved product/market fit yet. Because it will cost you, dearly, in iteration velocity. Go write your initial version in Python or Ruby or Lisp or something that lets you quickly try out new ideas, and once it starts creaking under load, then you rewrite it in Java or C++.
You're implying that Wave engineers don't know how to use Java. But Wave engineers are Google engineers. Google engineers are hired somewhere in a high 90th percentile of programming ability and experience.
You've proven the GP's point -- don't use Java because the only people who know how to "use it" are 1% of programming experts that you probably don't have working for you.
> You're implying that Wave engineers don't know how to use Java. But Wave engineers are Google engineers. Google engineers are hired somewhere in a high 90th percentile of programming ability and experience.
According to a comment in this same thread by a member of the Wave team, the problem is that they used Java the way it is supposed to be used, and that is why they ended up with over a million lines of idiomatic Java code when the same project would have been under a hundred thousand lines if written in another language.
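As a toy illustration of the line-count gap being claimed here (not Wave's actual code, and the `Delta` name is just a hypothetical placeholder): a small immutable record in Python is a handful of lines, while the idiomatic Java of that era needed fields, a constructor, getters, equals(), hashCode(), and toString() for the same behavior.

```python
from dataclasses import dataclass

# A toy immutable "delta" record. The equivalent idiomatic Java class
# circa Wave (pre-records) would hand-write the constructor, three
# getters, equals(), hashCode(), and toString() -- easily 40+ lines
# for exactly this behavior.
@dataclass(frozen=True)
class Delta:
    author: str
    position: int
    text: str

d = Delta("alice", 42, "hello")
print(d.author)                           # field access
print(d == Delta("alice", 42, "hello"))   # structural equality for free
```

Multiply that kind of ceremony across every value type, builder, and interface in a large codebase and a 10x difference in raw line count stops sounding implausible, whatever one thinks of the maintainability trade-off.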
Like most acquisitions, a great concept got stuck in neutral; I'm not sure why. Wave is fundamentally a fantastic idea, but a complex problem to solve. The fact that no one else has solved what Wave attempted is itself a testament to that. Having said that, I don't think this was a case of programmers inventing a problem and then trying to solve it: virtual communication as we know it is still pretty bad. And this post gives you good insight into how complex problems cannot be solved by adding more people to the team; that just adds more complexity to an already complex problem.
Early versions of Wave ran on IE7. The problem wasn't HTML feature support; it was a lack of engineering time. In the lead-up to the I/O announcement, the client team dropped IE support in order to get Wave ready for the demo. Dropping IE support was supposed to be temporary, but everybody was too busy to fix it. "Just tell people to install Chrome Frame" was the regular excuse.
It was being worked on when Wave was finally cancelled.