Having been involved in a number of these reports, this is flat-out false. Basic "fix your CSS to not rely on this Chrome bug; here's the exact diff you want to make it work both in browsers that follow the CSS spec and in Chrome" things regularly take 2-3 months to apply on the Google side. Basic "your browser user-agent sniffing is broken _again_ in the same way as three months ago" things take a month or two to fix.
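To give a concrete (and purely hypothetical) sketch of the second kind of bug, since I obviously can't post the actual code: the recurring pattern is version-specific user-agent sniffing instead of feature detection, so every Firefox release that changes the version string regresses the same way.

    // Hypothetical illustration only, not actual Google code.
    // Fragile pattern: gate the "full" UI on a hard-coded UA allowlist.
    // Any Firefox release outside the listed versions silently falls back
    // to the degraded path, so the same regression reappears every release.
    function isSupportedBrowser(ua: string): boolean {
      return /Chrome\/\d+/.test(ua) || /Firefox\/3\.[56]\b/.test(ua);
    }

    // The fix those bug reports usually asked for: detect the capability
    // you actually need, rather than sniffing browser names and versions.
    function canUseFullUi(): boolean {
      return typeof document.querySelector === "function" &&
             typeof window.postMessage === "function";
    }

The diff is usually tiny; the latency is all in getting it applied.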
Disclosure: Mozilla employee, have been cc'd on far too many of these bug report threads over the years.
The article where the Mozilla guy complained said two weeks, but, yes, months is possible. Maybe we have a different definition of "quickly".
Disclosure: former Google employee, I did many pushes and fixed browser compatibility bugs whilst I was there.
In the timeframe we're talking about, most products were on a biweekly push cycle. Even in the ideal case, where there was no delay at all between you reporting a bug and it reaching the right person inside the company who could fix it, if that day happened to fall on a branch cut day, that's a minimum two-week delay before the fix reached production, unless the bug was so severe that the entire product was hosed for all Firefox users. Minor glitches in rarely used screens wouldn't qualify for a rollback or an emergency push, for instance.
But sometimes pushes fail: there's a severe bug, attempts to fix it are too slow, the push window closes, and the servers are rolled back. Now the latency is a month.
And that's only the software side of it. Now add the time it takes for the bug report to be triaged, misdirected, pinged, and rerouted to the right person. Now add the delays incurred if that person goes on holiday, gets sick, or has other higher-priority tasks; bug handoff works about as well there as anywhere else. That can easily add more time.
Finally, regressions are to be expected in an environment where the extent of browser testing is a function of engineer interest rather than a central mandate. If they weren't testing Firefox well enough before, they weren't testing it well enough afterwards either. If Google had a central rule about which browsers had to be supported, I didn't know about it; there was just an assumption that you'd support the browsers people actually used as best you could.
I'm not saying it was awesome or right, just that this sort of thing was not Firefox-specific; there were plenty of IE and Safari compatibility bugs too.
I think we might in fact have a different definition of "quickly", yes. I think we may also have different definitions of "entire product is hosed". If Gmail is significantly worse to use (not "doesn't load", just "loads with obvious visual artefacts"), that's obviously enough to get people to change browsers, all else being equal.
And just to be clear, there are certainly cases where things got fixed quickly. But "always" is really stretching it; quick fixes were the exception, not the norm.
> regressions are to be expected in an environment where the extent of browser testing is a function of engineer interest rather than centrally mandated
Sure. The problem is the environment and corporate policy, not individual engineers. They're just responding to incentives as best they can, and in my experience are generally quite helpful within the constraints of the system.
> just that this sort of thing was not Firefox specific
Indeed, I don't think it was. It was not-Chrome specific.
Here's a thought experiment. Say someone at Google who did _not_ test in Chrome committed a change that degraded the visual experience of Gmail in Chrome, and it got shipped. How would fixing that be prioritized vs. a similar visual degradation in Firefox or some other non-Chrome browser? Assuming there is no emergency push involved, if the fix was not ready by the next push cycle, would it just slide, or would the original commit get rolled back?
In the early days there were quite a lot of bugs that affected Chrome and not Firefox, from what I recall. The problem was that Chrome didn't run on Linux or Mac in the first versions, but Google engineers (by policy) didn't run Windows; they almost all ran Linux. So there was a huge testing gap. On the other hand, it helped that Chrome was basically Safari (WebKit) at that time, so Mac users within the firm tended to notice bugs during dogfood/canary periods.
I can't quite recall the timelines, but I remember it felt like years before the Chrome team shipped a native Mac/Linux version. They also had to fix a lot of bugs that broke features of the various apps; the lack of printing support, for example, was a big one for a while.
I imagine you're talking about later, when Chrome got really big. But it's hard for me to say what would have happened, because when the features are themselves being developed in Chrome, there are hardly ever cases where something works in every browser except Chrome and that somehow goes unnoticed during the canary period.
That Google web apps are released frequently and that there is high latency between a defect being identified and it being fully fixed in production are not conflicting facts.
Disclosure: xoogler with my name carved in stone outside Mozilla HQ.