The truth about Goobuntu: Google's in-house desktop Ubuntu Linux (zdnet.com)
169 points by CrankyBear on Aug 29, 2012 | 91 comments



"Google is a paying customer for Canonical's Ubuntu Advantage support program."

Good news for desktop Ubuntu users.

'Chris Kenyon, who is Canonical's VP of Sales and Business Development, and was present for Bushnell's talk confirmed this and added that “Google is not our largest business desktop customer.”'

Better yet.


""We chose Debian because packages and apt [Debian's basic software package programs] are light-years of RPM (Red Had and SUSE's default package management system.]”"

Would someone mind giving a brief overview of why apt is better than rpm (or why someone might think it is)?


I'm a Debian fanboy, but in all honesty, the RPM-based distros have long since caught up with apt in terms of sophistication.

I did find yum (the apt equivalent used by Red Hat and friends) a bit slow (it's written in Python) and slightly rough on the usability front in some cases, but perfectly serviceable.

Mandriva and its derivatives (Mageia is worth looking at) use urpmi to provide the equivalent of apt. I really, really like urpmi, but none of the distros that use it have satisfied me (for unrelated reasons).

SuSE and friends have advanced package management baked into YaST, which handles other configuration and setup tasks as well. I haven't used SuSE in ages, but its package management seems pretty robust.


It's a bit sad that many people attribute yum's slowness to Python. Most of that is because it builds an XML db every single time it runs.


Absolutely. Package managers are mostly IO-bound so there is no reason why a Python-based PM should be substantially slower than a C implementation.


Except when they are calculating dependencies.

But I agree with the sentiment about yum; it looks bad compared to other solutions that already existed: urpmi, apt-rpm, etc.


Yeah, I've always found that quite odd. Do you seriously need to refresh the package db every. single. time? I might agree that cron-based/user-initiated solutions are suboptimal (some users will just not bother, which is a security and support problem), but surely there is a compromise, like limiting the refresh to once a week.

(Cue some RedHat fanboi saying "but it's easy, you just do xyz" -- no, it's not easy unless it's default behaviour. Otherwise we might as well just run OpenBSD, because opening network ports every time you have to sneeze is "so easy".)


Yum is getting replaced in Fedora as well.

https://fedoraproject.org/wiki/Features/DNF


Now that yum and apt have feature-parity, shouldn't picking a single package format (rpm/deb) be an easier choice now?


Here's an example of why dpkg (and its associated tools) is more sophisticated than rpm (and its associated tools):

While building a Debian package, there are tools which will scan all of the binaries, determine which symbols they use from a shared library, compare that with a list showing when each symbol was first introduced to that library, and use that to calculate the minimum prerequisite version of that shared library. This is important, because otherwise you could install a new version of a binary, and it might not work because you don't have a new enough version of its dependencies.

I recently had a user complain about this with e2fsprogs when it started using a new interface; they demanded that I bump the major version of the library (thus declaring it backwards incompatible), because that was the only way rpm and yum would automatically figure out the version dependency. I refused, asked them to manually update the version dependency in their package instead, and pointed out that Debian can automatically figure out major and minor version dependencies without any manual work.

For an example of this text file which maps symbol versions to minimum package versions, please see:

http://git.kernel.org/?p=fs/ext2/e2fsprogs.git;a=blob;f=debi...
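If you don't feel like clicking through, the format looks roughly like this (library and package names are real, the version numbers here are made up purely for illustration): one header line per shared library, then one line per exported symbol with the upstream version that first shipped it.

    libext2fs.so.2 libext2fs2 #MINVER#
     ext2fs_open@Base 1.37
     ext2fs_open2@Base 1.40

At build time, dpkg-shlibdeps looks up every symbol the binaries actually use and takes the highest of those versions as the minimum dependency on the library package.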

P.S. Because RPM doesn't do this sort of thing automatically, sometimes the only safe thing you can do to make sure there aren't any overlooked version dependencies is to download the latest versions of all of the packages installed on your system, and run the command rpm -Fvh *....


I'm just a casual user, and I didn't really notice any difference between deb and rpm. But I did notice a significant difference between apt and yum. Yum was much slower and less clever at figuring out dependencies. I've had yum uninstall unrelated programs that shared libs with the one being uninstalled. It seems like apt(itude) has more advanced resolution algorithms built in. Plus more software is available as deb than rpm, so there's no need to manually download packages from websites.


I'll second this. It is also my experience, having used both Fedora and CentOS (2002-2008) and Ubuntu (2004 to now).


For me apt is yum, so I've never bothered to check whether yum is apt.

I'll get my coat and leave this comment thread now.


It was never about the packaging format (dpkg, rpm) nor about the management system (apt, yum)... it was ALWAYS about the number of packages and how well integrated they were and still are. Debian-based distros still have the upper hand here, as everything belongs to the official distro and is packaged as part of the whole system. RPM-based distros all seem to go for the idea of a smaller core set of packages, with other repositories filling out other parts of the ecosystem. It just doesn't work as well.


I haven't noticed much of a difference between apt and rpm, but the one thing that keeps me on Debian and Ubuntu (versus other distributions or OS X) is the huge and properly maintained Debian software repository that has no competition.


I haven't used rpm for a while, having left Red Hat for Ubuntu years back, so this may be out of date: but certainly at the time, rpm was famous for "dependency hell" and deservedly so - almost everything I tried to install was a circular nightmare of incompatible dependencies, to the extent that I'd have to give up or resort to compiling from source. OTOH, apt was an absolute dream. It just worked. It'd fish anything required out of a remote repository. There were never any version incompatibilities. I've never once had a failed install. On moving to apt I never looked back and would never use rpm again: indeed, for me apt was Ubuntu's "killer feature".


> I haven't used rpm for a while, having left Red Hat for Ubuntu years back, so this may be out of date: but certainly at the time, rpm was famous for "dependency hell" and deservedly so ...

True, but since then yum has taken over from rpm, and yum automatically resolves dependency issues. Just saying.

All major distributions have tools that avoid dependency hell, in one way or another. The bad old days of rpm trying to sort things out on its own -- and failing -- are gone, and good riddance.

> On moving to apt I never looked back and would never use rpm again.

Yes, but it's important to point out that Red Hat / Fedora users don't use rpm any more either.


apt is not the equivalent of rpm; dpkg is the equivalent of rpm. And dpkg is equally dumb (both of them "by design" - it's not what they're meant to do).

But Yum and Apt as well as other tools have been available to do automatic dependency resolution on RPM based distros for many years - I believe at least apt-rpm predates Ubuntu.


They aren't comparable and just show that the speaker hasn't used the alternative in over a decade. RPM is a packaging format like dpkg. Apt does dependency resolution like YUM/urpmi/zypper.

I do a lot of packaging for both Fedora and Debian and here are my thoughts:

* Building Debian packages is a horrible experience. Writing an RPM spec file is much nicer and easier (rough sketch of a spec file below).

* Apt is much much faster than yum.

* Using a true (NP-complete!) SAT solver (i.e. zypper) is light-years ahead of anything Debian or Fedora is currently doing, although there's a project underway in Fedora to fork yum to use a SAT solver.
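To give a flavour of what I mean by a spec file: a minimal, hypothetical "hello" package looks roughly like this (a real spec also wants a changelog, checksummed sources, and so on):

    Name:     hello
    Version:  1.0
    Release:  1%{?dist}
    Summary:  Toy example package
    License:  MIT
    Source0:  hello-1.0.tar.gz

    %description
    A toy package used only to illustrate the spec format.

    %prep
    %setup -q

    %build
    make

    %install
    make install DESTDIR=%{buildroot}

    %files
    /usr/bin/hello

Everything lives in one file, and rpmbuild takes care of the rest.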


What are the practical benefits of using a SAT-solver? Is it faster? I haven't had any problems with the results of apt's algorithms, but maybe others have. When installing/upgrading stuff, the update step seems to be much slower than the other steps.


Package upgrades can be encoded as an instance of SAT (the Boolean satisfiability problem)[1]. Apt and yum don't exploit this result and instead use a bunch of ad hoc rules. These sometimes fail where a complete solver could succeed. SAT will always find an answer, eventually, if there is one.

Now the issue is that SAT is NP-complete, so it could run for a very long time before it finds a solution. But there are very good solvers which "most of the time" (meaning, basically always) find a solution in a short time. They are also open source projects[2].

There's been an existence proof (zypper) that using a standard SAT solver is possible, works, and is fast.
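To make it concrete, here's a toy encoding of my own (not zypper's actual internal representation): "install A, where A depends on B or C, and B conflicts with C" becomes the clauses

    A
    (not A) or B or C
    (not B) or (not C)

and any true/false assignment that satisfies all three (e.g. A=true, B=true, C=false) is a consistent set of packages to install.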

[1] http://arxiv.org/pdf/1007.1021 [2] http://www.cs.sunysb.edu/~algorith/implement/minisat/impleme...


I understand what using a SAT-solver implies, but I haven't noticed apt failing often in practical terms. The heuristics used seem to work fine for me.


I'm actually surprised that this is the case. I don't know if yum's slowness (as a by-product of its implementation) is one of its shortcomings, but I always presumed that it was more sophisticated as a package manager, especially since it implements a SAT solver (libzypp) that is pretty nifty.

I have heard that it manages package dependencies (cyclic, broken packages, etc.) much better than aptitude.

I'm an Ubuntu user, so the above is pure speculation - but could someone answer whether the SAT solver causes the slowdown?


The SAT solver work is great, for anyone interested check out http://www.mancoosi.org/, there's even an annual competition for solving Linux package upgrade problems.


Do rpm-based managers still work out package dependencies incorrectly, take a long time to download the headers, and generally break constantly?

Honest question; I haven't used an RPM-based distro in a few years. Last was openSUSE 11, then Fedora 3.

rpm is a pretty great format, but the frontends are not very good. apt, on the other hand, rocks, and aptitude rocks even more.


Note that apt-get and aptitude are both part of the APT (Advanced Package Tool) ecosystem, the suite of tools that handles .deb packages on top of dpkg.

While I'm at it, neither apt-get nor aptitude is simply a front end to dpkg - both have fairly complex (and different) policies for how to handle dependency conflicts, etc., while dpkg makes you do everything by yourself.

See http://superuser.com/questions/93437/aptitude-vs-apt-get-whi...
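You can see the split when installing a local package by hand (foo.deb is a made-up name):

    dpkg -i foo.deb      # plain dpkg simply errors out if dependencies are missing
    apt-get -f install   # asks APT to fetch and configure whatever dpkg left unresolved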


Nowadays, that actually isn't very true for end-users - rpm is catching up really fast in all the areas that dpkg was formerly better in.

A good StackExchange thread: http://unix.stackexchange.com/questions/634/what-are-the-pro...


Does RPM have any advantages?


Better integrity checks, mostly. Built-in checksums of files, stricter limits on what you can do as a package (no controlling terminal and so on), whereas in deb it's "optional" and "flexible" (a.k.a. not built in, and maybe people will get together and decide something).

rpm has some build-time analysis tools to track library use (.so files, plus python/perl/php/ruby imports) and annotate those as dependencies.

That last feature is what causes the mess when you don't have the full dependency graph for a package, because rpm by itself does no dependency resolution.
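For example (from memory, so the exact output will vary by distro and package):

    rpm -V e2fsprogs              # verify checksums, permissions and sizes of installed files
    rpm -q --requires e2fsprogs   # list the auto-generated dependencies, e.g. libc.so.6(GLIBC_2.x)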


As a developer, I'm actually able to easily package stuff with rpm. Creating .debs is stupidly frustrating.


Whenever I've wanted to package .debs I've just blatantly ignored the official way and opted for a much simplified setup (write the control file and the pre/post-inst scripts, and use a Makefile to build the required filesystem hierarchy and run the appropriate tools).

I agree, though, as much as the RPM spec files are horrible, it's still overall simpler.
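For the curious, the "much simplified setup" I mentioned boils down to roughly this (names made up; the control file needs at least Package, Version, Architecture, Maintainer and Description fields):

    mypkg/DEBIAN/control        # the metadata
    mypkg/DEBIAN/postinst       # optional maintainer scripts
    mypkg/usr/bin/mytool        # files laid out exactly as they should land on disk
    $ dpkg-deb --build mypkg mypkg_1.0_amd64.deb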


Here is a talk from May 2012 by Thomas Bushnell (the same guy) about Linux on the desktop and Google. Link: http://www.youtube.com/watch?v=fu3pT_9nb8o


See 3:25: "a reboot costs a million dollars" (x thousand engineers and workstations idle for 15 minutes quickly adds up).


As always, another link bait zdnet article with a flashy title and zero content. Normally I don't bite but I guess I have been taken in and got exactly what I expected. How come zdnet articles still get posted? I have yet to see one that actually brings content to the table that is new or informative.


"That said, desktop problems , even on Linux, will happen"

Sounds like something written by someone who's spent almost no time using any of the Linux desktop environments (let alone trying to connect to a projector). And sure, Mac lovers might move to Unity rather than Gnome, Xfce or whatever, but presumably only if someone forced them to use Linux in the first place.

Also hilarious is the suggestion that Google's graphic designers are running Ubuntu, considering Creative Suite is Windows and Mac only. I've received seen plenty of creative out of Google, and none of it was done with the GIMP.


I've been using Ubuntu for years on my laptop, both for work and for home entertainment.

I've had no problem connecting to a projector, whenever I needed to do that. Of course, I was careful when buying my hardware. Also, problems do happen, that's why I'm staying on LTS and personally I consider the intermediate versions as being beta-quality. And the Unity stuff is not stable and is too rigid, so I switched to Xubuntu (Xfce), which is more stable and stays out of my way.

Ubuntu may not be ready for normal users who want it at home, but a company like Google has people ensuring that these workstations are compatible and that upgrades work properly. They can also afford to pay for support from Canonical. So the employees choosing Ubuntu get its advantages with fewer downsides.

     considering Creative Suite is Windows and Mac only
If I were a designer I would definitely pick OS X over Ubuntu, but on the other hand you can run Creative Suite in a virtual machine like VMware. And you can even install an X Window server on your Windows VM and make that Photoshop window appear on your desktop as if it were a native app, only styled differently.

Also, designing stuff in Photoshop is not the only thing a good designer does. A good designer also writes HTML and CSS, and preferably does a little programming for proofs of concept, among other things. Those tasks may involve running a complex app on your localhost. And it's definitely easier to have access to the same tools that the developers use. And it's preferable to use the same operating system Google uses in production for its servers (at least in a virtual machine).

     none of it was done with the GIMP
True, designers don't use GIMP, but GIMP is a perfectly capable app for doing design work, along with Inkscape. The only thing truly keeping designers on Adobe's Creative Suite is that Photoshop is a de facto standard, so it's easier to get training and support for it, and it's also easier to send the raw files (PSDs) to other people.


I've had tons of Mac users go googly-eyed over how well GNOME 3 handles multiple monitors (automatically creating profiles for different monitor sets that are plugged in, and remembering exactly what it's supposed to do under various circumstances, with zero setup).

Caveat: it definitely does the magic best with an Intel graphics chipset, because proprietary drivers, blah blah blah.

Point is, if you're still referencing projector problems, you're talking about a long-gone era now...


Photoshop has run under Wine for years, with hardware acceleration, thanks to Disney.


Author is posting it here and gets it voted up by cronies.


I don't doubt that Google is a very public and very popular target for those looking to breach insecure systems. Having said that, I've heard this "we are the ones that need this the most" sort of argument, used to justify in-house network authentication, from many other companies in many different scenarios. Most of the time, though, the problem they're having applies to just about everyone else in their field - which is why open source development has been so successful: many parties with the same problem help implement a common solution, instead of a custom one for each of them. So I wonder: what are the deficiencies of the existing 'network authentication' solutions out there? Why couldn't they help improve those? Why not contribute whatever improvements they've made in this area?


Have you heard about Google Authenticator? http://code.google.com/p/google-authenticator/ It's a wonderful, standards-based, open-source project that brings two-factor authentication to Android, iOS, BlackBerry, and PAM.
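The PAM side really is about one line, if I remember right (plus turning on challenge-response authentication in sshd_config), e.g. in /etc/pam.d/sshd:

    auth required pam_google_authenticator.so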

I doubt Google would want to talk about every single way that we try to secure our networks from hackers, but Google is pretty good about open-sourcing things (or publishing papers) to contribute to the industry's security practices.


I did not know about this project.

I agree, Google does open up a lot of projects and publishes a lot of papers on their work as well. I guess my comment was a bit of a gut reaction to wording that implied "our problems are unique", which is so often used to justify brand-new proprietary projects.

I imagine what's used is based on a combination of open source projects (either theirs or others') with their own secret sauce - which is pretty standard practice, I guess.

Alright, maybe I commented too early. The wording threw me off.


Because they don't wanna fall prey to a 0-day in the wild?

BTW, a company like Google enforcing software choices for employees... Meh.


I don't think they do. As far as I understand it, employees can choose Mac, Goobuntu or Windows (though Windows has to be justified because of the higher overhead of support). Sounds reasonable to me.

From talking with Thomas before - if I have this right - Google doesn't allow offsite development. So it seems most Googlers have a desktop for development and a laptop for other things, with NO code on it. I may have that wrong, but I'm sure some Googlers here can verify that or not.


It's not "offsite development" that's prohibited so much as not allowing source code to be stored locally on laptop HDD's (even with full-disk encryption). You can develop remotely over SSH, NX, NFS, SSHfs, etc., you just can't have the source (or compilation artifacts) on an easily-stolen device.


So thick client development with tools that can't or aren't setup for remote use — eg. coding an iOS app — is usually done on site?


Java developers that use Eclipse or IntelliJ seem to work remotely pretty well using NX or VNC to get a remote Linux desktop, and Googlers who work on open-source projects obviously have different rules for that code.

I'm actually not sure what the iOS devs do. They might have different rules since their projects are more standalone and not tied into the rest of the main Google source tree, but it might also be that they just develop on-site. You could probably be fairly successful with Xcode using something like SSHfs if you're on a fast enough connection, but I don't know if anyone actually does that.


So uh yeah, Google doesn't do this at all. And they have open-sourced many of their authentication tools. What's your bias?


Here's a video from Puppet Conf 2011 on how Google uses Puppet to manage Linux workstations:

http://www.youtube.com/watch?v=A8mbMjlr_b0


Puppet is also used to manage the large fleet of Mac laptops (and desktops): http://redmonk.com/cote/2008/06/11/puppet-at-google-redmonk-...


The link to the mp3 is broken, anyone have a working link?


Oops, I didn't notice the broken link. The article has a transcript at the bottom, though.


Interesting to see this and the "Death of the Linux Desktop" posts both on the front page. Are there heated opinions on the issue here?


Only that both the success and death of Linux on the desktop is vastly overstated.


It has simultaneously been the Year of the Linux Desktop (tm) and the Year of the Death of Linux on the Desktop (tm) for several years now.

The only thing that is new is these stories are hitting HN, instead of just Slashdot like usual.


That's what I was thinking. I've seen this headline on a few sites in the last few days and the commenters all seem to be talking about Linux in the past tense. Apparently I didn't get the memo because my laptop with Ubuntu on it is still working fine and the projects that I make use of haven't missed a beat. That's some kind of "dead".


Indeed. I've also finally become a convert to Unity on a single screen. I still have a hard time with multiple monitors, but whatever.


Ubuntu works for me and my wife and has for years. I pine for the days of Gnome 2 but that's not coming back. If Unity could be made into Gnome 2 but keep Unity people happy then that would be cool.


> I pine for the days of Gnome 2 but that's not coming back.

Check out Xubuntu, it's not too different.


Linux Mint also has a desktop, Cinnamon, that resembles Gnome 2: http://cinnamon.linuxmint.com/. I haven't tried Cinnamon, but I've used Linux Mint before and will most likely be switching to Mint next time I reinstall Linux.


Irritatingly, Xfce still feels like a step back from Gnome2.

Personally I've come round to Unity and quite like it now.


You might want to look into MATE[1].

[1] http://en.wikipedia.org/wiki/MATE_%28desktop_environment%29


Linux Mint 13 with MATE is what you're looking for.


http://askubuntu.com/questions/58172/how-to-revert-to-gnome-...

sudo apt-get install gnome-session-fallback

This is Gnome3 configured to look very much like Gnome2. The primary differences are that "System Settings" (and some other things like printer configuration) moved to the "gear" at the far right of the top bar and you need to use <ALT>-right-mouse to add shortcuts to the top bar.

The <ALT>-right-mouse may only work with Gnome Classic (selected when you log in).


Interesting comments about using Windows.

When it's difficult to use Windows at one of the world's biggest tech giants, you have to wonder how long Windows can stay relevant in the tech/dev space.


Actually, having worked at Google, used Goobuntu (which was great), and then tried to use Ubuntu LTS outside the constant vigilance of a large support team, I'd say it isn't a threat to Windows (or OS X).

The challenge is that in an enterprise that doesn't have a stock of penguinistas to keep it on the straight and narrow, it goes pear-shaped at odd times for unpredictable amounts of time. That, and the fact that there is no "real" third-party software for it, makes a lot of folks sad. (See the other conversation on HN on why that is.)


True, I've had my share of bad updates, but Windows goes "pear-shaped at odd times" at least as often as Linux, IME. Care to elaborate further?

(I do agree that it is harder to get support people for Linux.)


Dude!! 2013 is the year of the Linux desktop.


It's always been a big challenge to manage thousands of MS Windows desktops, and it certainly has not gotten substantially better with BYOC and the mobility of users.

The real challenge for MS will come with establishing Windows 8 in the corporate space - I don't believe that will happen without MS making substantial changes and removing limitations.

It reminds me a bit of the first versions of Windows activation/update a few years ago, when MS had to backpedal almost the whole way before commercial customers would upgrade to the new Windows version.


I have come to love Arch Linux. Pacman and the AUR are much simpler and more efficient than APT and dpkg.


How are they simpler and more efficient?


The Arch Build System is utterly wonderful. All you do is define a PKGBUILD and it will handle downloading sources, compiling them, and creating a package. It's so simple that once you've done it a couple times it's basically fuck-all effort to create them for whatever you want.
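A PKGBUILD is just a small shell script with some metadata variables and a couple of functions; a minimal one looks roughly like this ("myapp" and the URL are made up, and I've left out checksums and a few fields makepkg will warn about):

    pkgname=myapp
    pkgver=1.0
    pkgrel=1
    arch=('i686' 'x86_64')
    source=("http://example.com/myapp-$pkgver.tar.gz")

    build() {
      cd "$srcdir/myapp-$pkgver"
      make
    }

    package() {
      cd "$srcdir/myapp-$pkgver"
      make DESTDIR="$pkgdir" install
    }

Run makepkg next to that file and out pops an installable package.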

So say I wanted to have dmenu on my systems with a custom colour scheme. I make the colour scheme changes as a patch and then create a package on the AUR like so: https://aur.archlinux.org/packages/dm/dmenu-dogs/PKGBUILD

Then all I need to do is run pacaur -y dmenu-dogs (pacaur is an AUR helper, which automates the process of downloading the PKGBUILD + any patches/local sources, running makepkg, and installing it) on my other boxes and it sorts everything out for me.

Pacman itself is also very lightweight and fast, and has a very simple and clearly separated API. Want to know which package owns a file? pacman -Qo /usr/lib/blah. Want to check a package has all its files? pacman -Qk mypackage. Anything removing is -R, anything querying is -Q, anything installing (syncing) is -S.

The last thing I love about Arch is the wiki. It is fucking AMAZING. There are well explained and thought out posts detailing how to install and configure a vast, vast range of software. I even refer to it when I'm not using Arch.


And installing yaourt[1] makes using PKGBUILDs from the AUR and ABS, and binary packages from the repos, amazingly transparent and fast. I've never heard of such a level of flexibility in package management.

For instance, you can install the precompiled chromium from the repos with yaourt -S chromium; if you need to recompile it from ABS, throw a yaourt -Sb chromium and you are done. To install the binary build from AUR, yaourt -S chromium-browser-bin will do it.

[1] https://wiki.archlinux.org/index.php/Yaourt


Thanks for writing that. The Arch Wiki is amazing. Ubuntu has also started writing and updating their wiki. The AUR still makes your life easier: for example, android-sdk, android-sdk-platform-tools, android-udev, and android-eclipse will install a complete Android development environment, including the ADT plugin, etc.


Completely agree; a lot more transparent. They don't patch unnecessarily.


Not related to the content of the post (which, it seems, is mostly a rehash of already-available information), but I must say, the comments over at ZDNet must really be going downhill - there are some truly horrific examples at the end of the article.


It even seems the author mangled a direct quote. If you're not sure you can type a quote correctly, copy and paste. And call your editor.

“Google is a target Everyone wants to hack us.”


A lot of people I know at Google use a Mac.


The typical engineering setup is Mac laptop + Goobuntu workstation.


A typical engineering setup. There are quite a few of us using Goobuntu ThinkPads or Chromebooks. (Since you can't have code checked out on your laptop, a Chromebook is not as disadvantageous as one would imagine.)


You have a policy that forbids you to check out code on company laptops?


This is discussed more up-thread: http://news.ycombinator.com/item?id=4452490


Does the Chrome Secure Shell extension support keyfiles now?


In the dev release it does. Info here [1]

[1]https://groups.google.com/a/chromium.org/forum/?fromgroups#!...


Yesterday's update, 0.8.2, added support for them: https://groups.google.com/a/chromium.org/forum/?fromgroups=#...


There was an entire talk about this at an Ubuntu conference a few months back.

Basically the same information; this article is just a rehash:

http://www.youtube.com/watch?v=fu3pT_9nb8o&feature=playe...


> Bushnell explained that “Goobuntu is simply a light skin over standard Ubuntu.”

Given the headline, I was hoping for a bit more.


"But Linux on desktop is dead guys!!!" -- An OSX User


This is probably spam. Poster is likely the author.


well, that's an odd definition of spam.

also, while writing this I happened to recognize you from reddit..

http://reddit.com/user/joshu/submitted/


Shrug. And people promote their own projects here, too. I am also not using a voting ring to get crap to the front page.

But this is generic blog spam. Abuser of the commons etc etc.

Not sure I get your point re reddit?



