A Thank-You Note to the Hacker News Community from Ubuntu (dustinkirkland.com)
786 points by dustinkirkland on April 6, 2017 | 227 comments



I have a strong dislike for systemd, so while I'm really sorry that upstart "lost" the fight, Ubuntu gained a lot of respect in my eyes with the decision to go with the rest and avoid unnecessary fragmentation. This could have easily ended up as another community rift, slowing down everybody along the way.

Now they do it again with Wayland/Mir! It actually takes a significant amount of both balls and goodwill to give up on the product that you invested so much into for the sake of aligning better with your open source community. Bravo!

FWIW, I too would like to keep the DE experience of Unity, and especially the Dash panel and shortcuts. If that expose-text-search could scan non-focused browser tabs that would be a killer feature, but that's for the other thread.

The idea of "Let's simply ask HN users what they think" is a gem that I suspect will now make it into many PMs' playbooks ;)


"It actually takes a significant amount of both balls and goodwill to give up on the product that you invested so much into for the sake of aligning better with your open source community."

They didn't do it to align with the open source community. They did it because Canonical wants outside investors, but potential investors don't think Unity can make money.

https://news.ycombinator.com/edit?id=14056369


Why not both? It doesn't have to be an either/or type of thing.


True, but companies don't often come out and say that straight up. Usually they let things die on the vine. I think the OP was trying to say they could have taken the easy way out, but didn't.


I don't think anyone has ever dropped a distro because it wasn't using systemd, just like I don't think anyone ever moved to Ubuntu from a sysvinit system just because Ubuntu had upstart. I don't think there was ever a "fight" - they simply wanted to use systemd, and so they did.

If Slackware ever moves to systemd, it will be a sad day, but I'll keep using Slack because it's the whole distro that I want, not just how the system initializes services. If I could deal with Windows's bullshit, I can deal with systemd. Just like ConsoleKit, and PolicyKit, and NetworkManager, and UPower, and PulseAudio, and HAL, and UDev, and DBus, and all the other annoying crap that's been shoved down our throats over the years. Now I know how old people feel when they talk about modern cars...


> "I don't think anyone has ever dropped a distro because it wasn't using systemd"

That's an odd assumption. I have. I'm a big fan of standardisation.

I understand some people not liking systemd, but there are a lot of folks out there who really prefer it. Once I started adopting it, I no longer needed/wanted to support cross-distro start scripts, since systemd does that. So non-systemd distros were dropped.


Most distros have different package managers, different flavors of compiled and packaged software, different libraries, different system management tools, and different kernel versions. Yet you switched distributions because of two whole different methods of start-up scripts. The one thing that only has to be written once, and then forgotten about forever.


Isn't this the point though?

We're moving more and more to having unified low-level services across Unices, so it stops being about stupid things like "Oh sorry, Arch can't run this because it uses its own audio stack even if we're just using audio to make a notification sound", and starts being about the things that are fundamentally different about the systems.

Think about how nice it is that we don't have to recompile most things for most kernel versions. Why do we have to set up multiple startup script mechanisms?


> Think about how nice it is that we don't have to recompile most things for most kernel versions.

Yes, nobody has ever had to recompile their application to work on a new kernel. If you mean that's nice that there's binary backwards compatibility, I agree. I don't see what this has to do with supporting multiple systems.

> Why do we have to set up multiple startup script mechanisms?

Why do we have to support multiple <insert any kind of software> ?

> We're moving more and more to having unified low-level services across Unices

AFAIK, the only thing moving toward middleware unification specifically is Linux desktop software. Choosing to use systemd to the exclusion of everything else is "unification" in the same way that nationalism/xenophobia/homophobia are. (I'm aware of how mean that sounds, but it's the closest comparison I can think of)

It is certainly nice that there is now middleware that abstracts the underlying software that controls the hardware, for example, or that authentication and authorization are more decoupled. But this is really almost the opposite of what systemd does.


Trivial example: Debian vs Ubuntu, when Ubuntu used upstart. Now I don't need to worry about process management anymore and can provide better Ubuntu support.

Although I'm speaking with my sysadmin hat (I have a few hats). I have a lot of software deployed/managed through Ansible (from node/ruby/python apps to proprietary apps). They might not provide a unit file, so I have to write my own (and I might send it upstream).

The main pain point I encountered regularly was making sure that the process starts at the right time, has proper stderr logging and monitoring. I rarely bump into kernel or library problems (but that's just me, I'm not denying the problem). I understand what you mean though, systemd is just one of the base components. It doesn't fix everything, but it helps.

On "write once": I feel bad for people who have to read the shell scripts I've written. I think they're good and I'm a fan of keeping it simple, but my scripts don't come with a manual. Any junior dev can understand my unit files (tangential: that's also why I like Ansible/Puppet/etc).


This exactly. I was quite happy when I learned about systemd and learned that it was coming in Ubuntu. I want a distro-agnostic easy way to keep programs running. init files were way too complicated, and upstart scripts were not distro-agnostic.

I didn't write upstart units for that reason. I now write systemd units and it's wonderful.


Messing with sysvinit scripts is just that painful.


I still don't understand how sysv scripts can be so long and hard-to-read for such a simple task.


systemd isn't a standard, it's a monoculture. There's a difference.


I'd be interested to hear what the difference is.

By monoculture, are you saying that adopting systemd leaves you vulnerable to cross-distro vulnerabilities hitting a bunch of systems all at once? (Most of my knowledge on the downsides of monoculture comes from Irish history) Or are you pointing to something else?


A standard can have multiple implementations and changes happen by committee. A monoculture like systemd changes by the whim of their developers. If some implementation is too buggy, which by many accounts systemd is, being able to switch to/develop an alternative implementation without causing fragmentation is an important distinction.


It's a standard in the sense that Service Control Manager is a standard. But it's a monoculture in the sense that writing your application for Service Control Manager means it will only ever work on Windows NT-derived operating systems.

That's not a good standard for an operating system supposedly about flexibility, compatibility, and choice. And it's even more annoying for software which isn't intended to be run in a monoculture, like most of the open source/free software world.


To be fair, how hard can it be to port your app from one init system to the other?

And for that matter, did anyone make a systemd-to-sysv converter? Because that should surely be possible, systemd service files being declarative.

If such a converter did exist, then suddenly, the systemd "service files" could become an actual standard, since they'd be (somewhat) compatible across sysv and systemd. And maybe we could work from there and support other init systems...


Making a service file is only a small part of the systemd monoculture. People don't care about the init system as much as how it hijacks the entire OS and shuts out alternative Linux systems.

Your suggestion of turning systemd's APIs and formats into a standard is not out of the question, but it would never happen. Poettering and his team showed they have zero interest in making a compatible system, or in making any concessions at all. Standards have to do both. And other platforms have to buy into the standard, otherwise it just sits there like a third leg.

The way it would probably end up is a project like Alien would adopt some of systemd's quirks. But so much of systemd is tied directly into the application via APIs that it would be a huge PITA to support both systemd and anything else - hence why some devs simply drop support for anything that isn't systemd. It's the same sort of thing that causes devs to only develop for Windows, or iOS/OSX, because it's more popular and porting to Linux is too expensive.


Do you have an example of an app that is effectively locked into systemd? I just don't see what systemd does that would lock apps in...


For one example, socket activation. To get the benefit of systemd (socket activation is supposedly the whole reason they created systemd) you have to patch your application to use the sd_* family of calls. Old apps can be patched to support this, but new apps may not have any incentive to be built to use traditional sockets. So newer socket-communicating apps may not work at all on non-systemd systems.
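
For readers unfamiliar with it, a sketch of the systemd side of socket activation (unit names are hypothetical); the application-side patching mentioned above is that the service receives the already-open socket via sd_listen_fds() from libsystemd instead of binding it itself:

    # /etc/systemd/system/myapp.socket  (hypothetical sketch)
    [Socket]
    ListenStream=8080

    [Install]
    WantedBy=sockets.target

    # /etc/systemd/system/myapp.service
    [Service]
    # the app is handed the already-bound socket (fd 3 and up, via
    # sd_listen_fds()) rather than calling socket()/bind()/listen() itself
    ExecStart=/srv/myapp/server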

Another is daemonization. In traditional systems, long-lived processes daemonized themselves and took care of themselves. The big benefit was independence and flexibility. You don't have to do a lot of work to port it, or maintain it, or administrate it. A shell script and some simple conventions allow it to run in almost any environment, and all of the services basically worked the same way. Often this was out of the necessity of a complex program's needs, where the way it manages its tasks, or shares memory, or communicates with other processes, or handles signals, or is checkpointed, or debugged, etc may have had complex requirements.

But now, systemd recommends you not daemonize your process. Let systemd manage it for you! This works great for very simple services, but not so well for complex ones. Now people are writing more services that can't manage themselves and thus need something like systemd to take care of it, or it simply won't work as a daemon on a non-systemd system.

The systemd people would say, any system can adopt these calls and these methods! (Which is equivalent to saying, any OS can adopt Linux's completely nonportable and specific syscalls, even though they already have a working alternative.) But that's not the end of it. Every piece of systemd that you adopt then depends on another piece of systemd, so you can't just pick up one piece of the pie. You have to eat the entire thing.


I don't believe people would have dropped Ubuntu (in general) for not using systemd (but I'd not bet on the word anyone). My point was that Ubuntu went with the fold even if people would not have dropped them for sticking to their own system.

The benefits of that unification are very, very tangible for package authors/maintainers - they now have to write just one flavor of a service file. And that is a great thing(tm) for the Linux community as a whole - well worth my own gripes with systemd itself.


Systemd does more than init - it keeps popping up its head all over the place, such as for login or timekeeping. After all, it's 'systemd', not 'initd'...


Instead of "Linux", you say "GNU/systemd/Linux" from now on.


> FWIW, I too would like to keep the DE experience of Unity, and especially the Dash panel and shortcuts. If that expose-text-search could scan non-focused browser tabs that would be a killer feature, but that's for the other thread.

Maybe switch to KDE’s KRunner? It’s the most functional and extensive search framework yet.


Just curious as to how many of these replies have used systemd


I would drop it in a heartbeat if I could. Dealing with servers it's not solving a problem I have, and just introducing the strangest behaviour along the way.


> Dealing with servers it's not solving a problem I have,

For me it's solving the problem of simply and reliably wiring up persistent and resilient services (not shipped with the OS).

I don't know about you, but that's a need I especially have on servers.

When systemd works, it works remarkably well.


>For me it's solving the problem of simply and reliably wiring up persistent and resilient services (not shipped with the OS).

That aspect was already largely solved, and has been for a long time. runit, upstart, daemontools etc.

Daemontools, like a number of djb's tools, really manages to blend an amazing level of "just works" and "ZOMG, I configure this how?!"


I always hated the prevalence of so many tools -- I get sick of having to learn 30 variants of the same stupid thing because none of them seem to excel in every area. One server from recent memory had crap running in sysv, inittab, and with daemontools because not everybody liked the same system and kinda just did what they knew. I'm rather happy that systemd throws all that out.


That's just lousy technical culture. If you can't ensure you've got clean practices on your servers to that degree, you've got bigger problems on your plate.


As if one is always able to influence a lousy technical culture, especially if they just walked through the door...


Indeed, I've deployed it in both embedded linux and server environments and it works quite nicely in both roles.


Agreed, the config files for upstart were so easy to template out and always worked reliably.


I use it on my desktop and laptop today (Archlinux)

Have been using it since it was introduced in Fedora 16 (as I was a fedora guy) and I've never been convinced of its use on a server.

I understand it solves some problems but does so while introducing new and harder to remedy ones. (like tight ABI integration).

I don't mind systemd (ok, I'd criticise it but not nearly hate it as much), but I don't like that it's now the default target which forces me to use it in future.

I'm a Systems Engineer, so I have different needs from servers than Developers I guess, I want things to be easy to debug and diagnose.

I will say that I'm not only accustomed to systemd or sysvinit, I've also used runit at scale, along with SMF on Solaris and openrc.

SMF beats the pants off systemd, even if it uses XML internally.


Thank you!


I have a thread on when Poettering was hired at Red Hat:

https://news.ycombinator.com/item?id=14044287

I think there was a lot of hate for PulseAudio too.

I wonder what's going on with Kay Sievers now, who was even hated by Linus Torvalds at one point.


The 'haters' are the vocal minority. Lots of people I know, (myself included), actually prefer systemd as it greatly simplifies system management and writing of service files and is a lot more dynamic than upstart was. Maybe it's just my bubble, but systemd tends to be accepted as the right move now.

I am sure the people who hate it have good reasons, but all I've heard is 'doesn't follow UNIX philosophy', (guess what, the whole Linux ecosystem doesn't really, even Stallman isn't a fan), 'centralized', (somewhat valid, but there's still Gentoo and Void Linux), binary log files, (installing a 'text logger' over journalctl takes 5s and the filtering capabilities of journalctl are awesome).
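
For reference, the kind of journalctl filtering being referred to (all of these are standard flags):

    # follow one service's logs
    journalctl -u nginx.service -f
    # everything from the current boot, errors and worse only
    journalctl -b -p err
    # logs for a specific time window
    journalctl --since "2017-04-06 09:00" --until "2017-04-06 10:00"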

Also, the hate for PulseAudio largely died years ago. I for one am glad for people like Poettering, who are willing to think longer term and produce great* software.

Yes, maybe more buggy at the start, (but not really the case with systemd), but great eventually. Also, let's be honest, the reason you see more bugs in e.g. PulseAudio is because it is being developed in the open from its inception.


I don't like systemd for somewhat more practical reasons.

For example, how do you like the fact that init now comes with "hidden" timers? After you've scoured every place that a "traditional" Linux might put scheduled tasks, you've come to the realization "Ooooh, now my init has its own cron!". Because obviously an init system needs its own cron!

Or the subtle way it breaks existing SysV compatibility. An RPM/Deb package drops its SysV init file at /etc/init.d, and the next thing your ConfigManagement system does is try to start the service - quite standard. Bang - "Unit not found"! Why? Now on systemd-enabled flavors, you need to reload the mofo after dropping in your files. Of course Systemd could hook into dbus and run crons, but it can't be bothered to inotify or even jit-check-upon-request /etc/init.d. Because subtle breakages are cool!
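
Concretely, what a config-management run has to add after dropping the file in (the generated unit only appears after a reload; the service name is a placeholder):

    systemctl daemon-reload    # re-run the generators so the init.d script gets a unit
    systemctl start myservice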

How about the systemd-hostnamed service? Why on earth would we need a service to change the hostname? And why should it care about the "chassis type" of the machine?

These are just some of my own WTFs encountered during my admittedly short but language-colorful interaction with systemd. I have nothing against the Systemd unit files, but the functionality/bug scope of the whole thing is way bigger than I'd feel comfortable with!

P.S. Poettering is reportedly a townie, so I'd love to buy him a beer someday and berate him over it.


> For example, how do you like the fact that init now comes with "hidden" timers? After you've scoured every place that a "traditional" Linux might put scheduled tasks, you've come to the realization "Ooooh, now my init has its own cron!"

They're not "hidden", just managed with systemctl like other services, you can still use regular cron if you'd like.

As to the rationale for including timers, it is for stuff like mounting and unmounting remote disks etc. at boot, which I personally would include under the responsibilities of a modern init system and 'system manager' in general, same with taking care of TRIM every so often etc., but I appreciate that it isn't for everyone.
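
For the curious, a minimal sketch of such a timer (names are made up; the matching backup.service supplies the actual ExecStart):

    # /etc/systemd/system/backup.timer  (hypothetical example)
    [Timer]
    OnCalendar=daily
    Persistent=true

    [Install]
    WantedBy=timers.target

    # to see what timers are active (the "hidden cron" mentioned above):
    systemctl list-timers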

> Or the subtle way it breaks existing SysV compatibility.

The SysV init scripts would regularly break in subtle ways by themselves to be honest; I find systemd a much saner option.

> How about the systemd-hostnamed service? Why on earth would we need a service to change the hostname? And why should it care about the "chassis type" of the machine?

The systemd network stack is entirely optional and intended for scenarios where you can't afford/don't need the 'fatness' of NetworkManager; just because it's there doesn't mean you have to use it.

> P.S. berate him over it.

Plenty of people already did so, over and over, but have fun, considering you'd be buying him a beer...


> They're not "hidden", just managed with systemctl like other services.

So far in the history of *nix, services have never equaled periodic tasks, to my knowledge. In that sense, it's a hidden surprise that would sooner or later bite everybody who has not been initiated in systemd. It makes things harder to debug.

> As to the rationale of including timers, it is for stuff like mounting and unmounting remote disks etc, at boot

How do you go from that need to "and that requires an internal crond implementation in your init system"? Why not atd, or regular cron, or sleeping processes? I'm genuinely curious, so if you're aware of public discussion of it, please point me in that direction.

> The SysV init scripts would regularly break in subtle ways by themselves to be honest; I find systemd a much saner option.

I appreciate that the chaining and dependencies of SysV init were horrible. That doesn't make it OK for systemd to introduce more subtle breakages in even a very basic* use case.

> The systemd network stack is entirely optional and intended for scenario where you can't afford/don't need the 'fatness' of NetworkManager

In the scenario where you do not need NetworkManager/GUI, hostnamed would be quite worthless as well. /etc/hostname and the `hostname` command are more than enough to handle that case, thank you.

I guess my main issue with systemd is that it introduces (mostly?) unnecessary complexity, which makes me waste more time debugging problems. Of course it is better than SysV init, and I'm very happy with the syntax of system files. Yet, upstart showed that you can have those niceties without an excess of complexity.


As to the rationale of including timers, it is for stuff like mounting and unmounting remote disks etc

The implementation is retarded. Last week my providers' iSCSI fabric suffered a glitch, which hosed a whole bunch of servers for hours because systemd refused to boot when it found the volumes not present. These were in no way critical to the operation of the stack. However, some moron somewhere decided that locking the system for 5 minutes on boot, and then simply refusing to boot properly, when everything required to boot is actually in place and OK is the correct course of action. I have enough of that kind of deranged thinking to deal with coming from Windows, I don't need it from my Linux machines.

Systemd sucks for servers. It might do a lot of nice fancy tech stuff, but it is extremely poorly thought out for use on the server.


> systemd refused to boot when it found the volumes not present

This is not normal systemd behaviour: it waits for the resource, but only 1min 30s by default, and then continues booting (unless the service is considered critical for reaching a certain target), logging a failed service. Someone must therefore have explicitly configured a custom behaviour in your situation.

> The implementation is retarded.

May be, or may be whoever configured the server this way is, who knows?

> I have enough of that kind of deranged thinking to deal with coming from Windows

As I said above, this is not standard systemd behaviour.

I am not saying that systemd is perfect, but your case seems like misconfiguration, rather than "deranged thinking" from the systemd devs.

I'd encourage you to read more on its configuration; it's actually fairly flexible, and this[1] is a solid starting point.

1 - https://wiki.archlinux.org/index.php/systemd


Bailing out and dropping to the rescue shell when a mount point in /etc/fstab fails is DEFINITELY the normal systemd behavior. One has to mark it as nofail, otherwise systemd will assume it is required for booting the system.

(This was originally meant as a reply to the comment above but was mistakenly posted to the grandparent.)
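
For anyone who hits the same hang: the standard way to opt a mount out of that behaviour is in /etc/fstab (the device and mount point here are placeholders):

    UUID=xxxx-xxxx  /data  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2
    # nofail: don't fail the boot if the device is absent
    # x-systemd.device-timeout: don't wait the default 90s for it to appear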


Standard Ubuntu 16.04. From my reading, it waits 90 seconds, then some more, and then even more.


Not the case on Arch, but you can customize the timeout anyway[1], using TimeoutSec, TimeoutStartSec and TimeoutStopSec, or even the global setting[2].

1 - https://www.freedesktop.org/software/systemd/man/systemd.ser...

2 - http://stackoverflow.com/questions/33776937/how-to-change-th...
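
For reference, the settings mentioned above look like this (values are arbitrary examples):

    # per service unit, in the [Service] section:
    TimeoutStartSec=30s
    TimeoutStopSec=10s
    # mount units use TimeoutSec=, or x-systemd.device-timeout= in /etc/fstab

    # or globally, in /etc/systemd/system.conf under [Manager]:
    DefaultTimeoutStartSec=30s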


Bailing out and dropping to the rescue shell when a mount point in /etc/fstab fails is FOR SURE the standard systemd behavior. One has to mark it as nofail, otherwise systemd will assume it is required for booting the system.


> The systemd network stack is entirely optional and intended for scenario where you can't afford/don't need the 'fatness' of NetworkManager, just because it's there, doesn't mean you have to use it.

Currently.


> How about the systemd-hostnamed service? Why on earth would we need a service to change the hostname?

If you want the hostname to be preserved across reboots, you need to save it on disk - canonically, in the /etc/hostname config file.

Since /etc/hostname is an important per-machine config file, you probably want it owned by root.

Since you may want to edit this config file from a GUI control panel of some sort that you launch from a desktop environment running under a non-root user, you need a mechanism for the control panel process to be able to change the config file.

There are several ways to accomplish this:

* run the control panel process under root (for example, using sudo). This is not a good idea, especially under X11, since GUI toolkit libraries are not security hardened, have a large attack surface thanks to various GUI IPC mechanisms, and your control panel process could be subverted by hostile processes that have been waiting in the background for just such an occasion to arise.

* put your non-root user in a group which has write access to /etc/hostname. This is the traditional Unix solution, but it's not very flexible. If your user was not in the hostname-writer group, you will have to log out and back in for the change to take effect. And you can't create policies like requiring entering a password before performing administrative-type actions (unless the password is for sudo - and that is dangerous, see above).

* run a daemon under root that allows making edits to /etc/hostname. Provide an IPC interface to request a change to /etc/hostname; have the daemon check, via a call to a flexible, configurable authorization service, that the caller process has the right to perform a "change /etc/hostname" action (the authorization service might reply yes if, for example, the caller's user belongs to a specific group and verified his password within the last N minutes); and only then make the change on behalf of the unprivileged caller.

The latter seems like a better solution. Maybe over-engineered for managing a one-line config file, but definitely the solution to go with for more complex situations; so we might as well use the general solution in this case too.

The daemon is hostnamed; the flexible authorization service is polkit. See https://www.freedesktop.org/wiki/Software/systemd/hostnamed/ for more details.
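
In practice the client side of this looks like the following (hostnamectl talks to hostnamed over D-Bus, and polkit decides whether to prompt for authorization):

    # query hostnamed (shows hostname, chassis type, machine id, ...)
    hostnamectl status
    # ask hostnamed to change the hostname; polkit may prompt for auth
    hostnamectl set-hostname newname
    hostnamectl set-chassis laptop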


Honestly, all these points make me feel sick from an architecture point of view. Why cannot groups be applied to the user instantly? Why doesn't the system have regular ways to change 'registry' values (sorry for the analogy, but /etc is a standard-defined settings store; I'm a unix guy, ftr)? If the kernel team has good arguments against that, why is hostnamed not named privilege-d instead, using some system bus to change anything beyond the hostname? Is hostname the only thing with such a problem, the only GUI-configurable system-wide setting?

While being a formally valid solution, it all seems like a waste of simplicity and a totally uncontrolled design process.


> Why cannot groups be applied to the user instantly?

Good question! I've wondered about this before too, ought to research it.

> Is hostname the only thing with such problem, the only gui-configurable system-wide setting?

Of course not :) Systemd provides a half-dozen similar daemons for configuring things; hostnamed is probably the simplest one of them, so I was trying to justify why so much engineering went into something that looks so trivial: it's because the approach is generic.

The idea is that a non-systemd system could reimplement some of these daemons (or rather, their dbus interface; you could implement them in one executable if you want) - and the parts of a standard control panel application relevant to your system would just work.


Wait, since when is sudo considered dangerous? Forget the GUI for editing the hostname file, we have much more important things to worry about if that's the case.


He's not saying that sudo is dangerous but that a GUI program running in X11 as root is dangerous. This is because most GUI toolkits aren't hardened against attacks (buffer overflows, badly formed events, etc.) that can all be sent by a non-privileged user/program under X11 to one that's running as root. This means that if someone had an exploit in your browser that let them see X11 events or send them to another window, then they could potentially use that to gain root access the moment you went into ANYTHING that ran as root under your X session.


Yes, exactly. I should have been more clear.

"sudo vim /etc/whatever.conf" is perfectly fine.

"sudo gedit /etc/whatever.conf" might give an attacker already on your system a way to gain root.


Then why use a GUI to edit the hostname in the first place? Either use sudo or fix the security issues with Xorg.


> Either use sudo

Yes, just keep doing that. I run several Linux boxes with systemd, all of them without hostnamed.

Forcing all your users to use the One True Way on a complicated system usually just means you're ignoring many legitimate use cases. Microsoft still does that on Windows (APIs and ugly registry entries over everything) - and while I cringe every time I see it, I still recognize that the approach has value for some situations.

Many of the daemons for systemd on the other hand are optional, which I personally find to be great. I can use the ones I need and leave the ones I don't.

As for the GUI: Why use a text file? That might be a good use case for you and me, but a terrible one for someone not used to administering *NIX systems. Why not allow both? As long as the file-based approach is not neglected, I'm perfectly fine with that.


If you tell users to edit the file, they will do something stupid like using LibreOffice. Ubuntu wants to be usable without using a terminal.


Device drivers have timers.


> Yes, maybe more buggy at the start, (but not really the case with systemd), but great eventually.

This kind of excuse-making makes me rabid. "We will use our position of power to drag everybody through debugging our poorly made software because we are amazing and one day it will be great" is not a position I will ever have much respect for.

If somebody's enough of a genius to see the future of software, they're enough of a genius to make their shit work for the audiences they are currently shipping to.


Philosophically I dislike the idea of systemd, primarily because it seems to be growing more and more. (Gaining the ability to launch machines, run DNS, etc)

But practically? The service-files are a pleasure to write, the documentation is excellent, and I've not personally had a failure I could attribute to systemd. (Though I did come close, learning that systems with an old version of "snoopy" installed wouldn't boot under systemd.)


If it takes "five seconds" to fix the presentation (but not the implementation) of SystemD's log files then there's no reason why people who do not understand SystemD's deficiencies shouldn't be able to spend the same amount of time Googling for one of the many clearly written accounts of them.

And I know a lot of musicians who still find PulseAudio not fit for purpose. There's no hate, just frustration at the adoption of a bad solution crowding out ones that work better for their given use case.


Why should I spend 5 minutes googling for negative accounts of something I like? I honestly find systemd quite easy to work with. Untangling a maze of init scripts when a package maintainer wasn't being careful or someone made a mistake was miserable. Now startup is much more standardized, in my opinion. I don't like it as well as upstart, but I see no reason to go digging for problems. When I encounter one, I'll do it then.


> I see no reason to go digging for problems. When I encounter one, I'll do it then.

It was the same with init, but instead of digging into greppable script files, you need to dig into Google and systemd manpages first, then dig into the actual issues second.


PulseAudio may not serve every use case (and what does?), but it's been an absolute godsend for anyone who remembers the bad old days of Linux sound. Musicians, a small minority of users, can still install JACK to accommodate their needs.


Yes, PulseAudio explicitly does not aim to offer low-latency for real-time audio. Need to use JACK for that.

PulseAudio actually implemented a D-Bus API to release its lock on the ALSA device. jack2 can use this to automatically take over when it starts. There also exists two-way sound transport for sending sound from Pulse to JACK. If that were automatically set up as well, then we would have a plug & play solution for pro/real-time audio on Linux.


> there's no reason why people who do not understand SystemD's deficiencies shouldn't be able to spend the same amount of time Googling for one of the many clearly written accounts of them.

I listed the ones I could find and stated why I disagree with them; you could've made this a useful post by listing some additional ones, but instead you chose to just spell, (miscase?), systemd wrong.


Personally, I wish the Linux community had gone with ZFS. IMO it's still better than systemd, as systemd still doesn't have a solid disk backup system...

https://wiki.archlinux.org/index.php/ZFS


How is ZFS related to system initialization?


It's a shot at how systemd supposedly includes everything, I believe...


It can't, due to licensing incompatibility.

ZFS is not compatible with the GPL. Canonical is treading a risky path with what they're doing.

I would love ZFS to be native on Linux; Btrfs has had too many scary data loss bugs to be able to trust it.


I believe that the major pitfall here is that the feedback you've received is mostly about the changes that people want to see. However if we consider the number of people who want Gnome vs the number of those who want Unity 8 vs the number of conservative users who like Unity 7 as it is now - the results might be different.

I personally am very happy with the current Unity. I find it intuitive and more aesthetically pleasant/polished than Gnome Shell (I've only used that as it comes with Ubuntu Gnome).

So please, don't drop current Unity. Or if you have to switch to Gnome Shell - please keep the user experience as close as possible to the current Unity to help users migrate.


In other words: people who like the current status quo are less likely to make themselves heard than those who want something changed. So the trends you tried to extrapolate from the HN discussion may be biased.


As an anecdote, that was exactly my experience. I'm one of those light-weight desktop Linux users. I've only experienced Windows, OS X and Ubuntu. I can say that I really enjoy the current Ubuntu UI, but due to lack of knowledge of Wayland and whatever, I'm disinclined to speak up.


I'm pretty sure their decision to drop Unity had already been made (internally) even before Dustin started the suggestions thread. So this conclusion isn't necessarily accurate.


To get more unbiased input one might try something like http://www.allourideas.org/ . This method of prioritisation and feedback gathering is much better than "more votes = higher priority". The downside is the lack of comments/discussion.


I also wonder how representative HN users really are, but maybe that's less of a concern with a product like desktop Linux which hardly has any nontechnical users in the first place.


I think the biggest thing that people want (or at least I do) is for Canonical to drop Mir, and focus on Wayland. Apparently Unity and Mir are so tightly coupled that it wasn't even worth mentioning Mir in Shuttleworth's post.


Noob question: Why? Do you think Wayland is better? Or is it more efficient to have everyone working on and things being written for one protocol?


The second one. As far as I've been told, Mir was conceived because Wayland had some deficiencies, but AFAIK they've been sorted out. So it's really a matter of saving work now.


Those particular deficiencies were essentially fictional in the first place.


I see. What were they?


Is Wayland better than Mir? I have no idea.

Is Wayland better than Xorg? Yes, or at least it will be.

Wayland and Mir are distinct projects made to solve the exact same problem in the exact same problem space.

Usually this is a fine idea, but Wayland could really use the support that Canonical has available, especially since it depends so heavily on graphics driver support, which Canonical can help push for using its association with NVIDIA (the only graphics manufacturer with a completely proprietary driver now).

Notice that Xorg doesn't have any real forks anymore. That is because it is a much better idea to focus driver support on one library (graphics drivers are hard enough as it is). Unfortunately, Xorg has some inherent problems that Wayland is designed to fix; so until Wayland is complete and stable enough to replace Xorg, we need all our graphics drivers to target both stacks (Wayland and Xorg). That's already difficult enough without Mir demanding its own attention.


One of the things that has always confused me is that Gnome Shell is really extendible, from a programmer's point of view. Even if you don't like Gnome Shell, you can actually start with mutter and build a window manager really easily (or, at least this was the case about 5 years ago when I last looked into this). Building Unity on top of Gnome Shell should be very straightforward. I'm still surprised nobody has tried to do it.

Again, I haven't looked at it lately, but the last time I did, mutter was just fantastic. I was tempted to write my own WM on top of it, but since I have grown disenchanted with the rest of the Gnome ecosystem (and it's hard to pull that out of mutter), I gave up. This shouldn't be a problem with Unity, though, as it is already entrenched in Gnome.


One potential problem is that if Gnome makes breaking changes, it's hard for Canonical (or anyone) to use it as a base for their own UI. I've not experienced this first-hand, since I haven't used Gnome or written GTK+ apps since the 2.x days, but judging by what I've seen online they like to break APIs at each release, drop working features if/when their opinions change, and be semi-hostile to any use-case other than their own default settings ("brand"), e.g.

https://igurublog.wordpress.com/2012/11/05/gnome-et-al-rotti...

https://davmac.wordpress.com/2016/07/05/why-do-we-keep-build...

I suppose Canonical are big enough to have their needs taken into account, unlike other app authors.


This is actually the reason I gave up on my idea of working with Gnome Shell. But as far as I can tell, most of the badness happens before you get to Shell. Basically, Canonical has had to deal with it in Unity anyway.


> Building Unity on top of Gnome Shell should be very straightforward. I'm still surprised nobody has tried to do it.

I think this is what ElementaryOS[1] did. It was funny/interesting to me that the "flashiest" features demoed on the website were pretty much just stock Gnome 3 features.

[1] https://elementary.io/


"more aesthetically pleasant/polished than Gnome Shell" I really disagree with that. Are you speaking about Gnome 3? It's a subjective thing, but Gnome 3 is incredibly aesthetically pleasing and polished imho


FWIW, one more liker for Unity here. Too late I guess...


Awesome to see our responses followed with such attention, not to mention dealt with through feedback containing concrete promises about what (and what not) to expect.

Good job, Canonical! Happy to be a user!

(And good job finally ditching Mir. You could have kept Unity for all I care. Linux can handle a few dozen DEs. But having more than one display server, now that was just nuts.)

Edit: while the feedback post here may have been the most discussed post on HN ever, the announcement of dropping Mir clearly made rumbles too, with a record 10,000+ upvotes in a "niche" subreddit like /r/linux. When a player like Ubuntu does the right thing, people clearly care.


> Official hardware that just-works, Nexus-of-Ubuntu (130 weight)

I really like this one. I'd made comments about wanting pre-installed Linux, but the Nexus concept is a much better idea. It's following a model that seems to have worked well for Google.

It's a fantastic way to break the chicken and egg situation of getting pre-installed Linux available.


I hope down the road, if the program is successful, they can increasingly push for open hardware in the machine.


I also like the Dell XPS 13 model.

Offer a pre-installed Linux version of a machine at higher cost, but do your best to upstream patches and homogenize hardware with the Windows pre-installed version.

Seems to split the difference between "a Linux laptop is economically unfeasible" and "with a little care in component selection and upstreaming drivers / fixes, compatibility can be assured."


Offer a pre-installed Linux version of a machine at higher cost, but do your best to upstream patches and homogenize hardware with the Windows pre-installed version.

I actually find the fact that the Linux version is more expensive to be insulting. If anything, it should be cheaper, given that you're paying the "Windows Tax" on the W10 version of the laptop.

Now if they cost the same price, with the caveat that the $80 or so that would normally go to the W10 license is instead going to Dell's Linux team to ensure hardware compatibility and/or submit upstream changes to Canonical / Red Hat, then I'd be all for it.


I actually find the fact that the Linux version is more expensive to be insulting. If anything, it should be cheaper, given that you're paying the "Windows Tax" on the W10 version of the laptop.

I just created a product comparison¹ on Dell's site and it looks like the Ubuntu version of the XPS13 is $100 cheaper than the Windows version.

① - http://www.dell.com/us/business/p/configuration-compare.aspx...


McAfee LiveSafe could explain that difference. It costs around $90 for a 12 month subscription that you get with the Windows version.


In terms of man-hour costs to Dell to verify that everything is working, I'd guess that the Linux laptop costs more for Dell to produce than the Windows one.

This is the same reason why other sellers will happily give you a computer without an OS and take the price off, but they won't give you any even basic support for installing anything else.

If Linux users were willing to pay more, then there'd be an incentive for sellers to put in the effort to pre-install Linux.


Also, that final OS support cost per laptop is total_dev_hour_to_support / number_of_machines_sold. So if the denominator is small, that can still be a big enough number to surpass $OEM_Windows_cost.

We (sadly) don't get the luxury of pretending the markets for Linux on laptop and Windows on laptop are identical except for the OS.


> I actually find the fact that the Linux version is more expensive to be insulting. If anything, it should be cheaper, given that you're paying the "Windows Tax" on the W10 version of the laptop.

Unless things have changed since the Windows 7 days, you're paying the Windows Tax on both units. Dell (and other major manufacturers) pay Microsoft for every unit sold no matter what ends up on the hard drive. That's why it's half-jokingly called a "tax" in the first place, and it's likely why the Linux version costs more; the base price is the same no matter which OS is installed, and the little bit more you pay for the Linux model goes towards recouping that extra work integrating Linux.

> Now if they cost the same price, with the caveat that the $80 or so that would normally go to the W10 license is instead going to Dell's Linux team to ensure hardware compatibility and/or submit upstream changes to Canonical / Red Hat, then I'd be all for it.

I love that idea too, and maybe the smaller OEMs can get away with that if they have a per-install license fee from Microsoft instead of a per-unit-sold fee.


The HP Z2 Mini desktop supports Linux and is $199 cheaper without optional Windows OS.


Does anyone have a service tag for a Dell XPS 13 2016 (preferably 9350) with Ubuntu preinstalled? I would like to download the ISO for a reinstall... I bought mine with Win 10 V__V...


My understanding is that because all the required patches and drivers have been pushed upstream, you can use a vanilla iso directly from Ubuntu and it will "just work".


You'd be better off just using the latest release. Add some contact info to your HN profile.


is there any standard for contact info in the HN profile?


The email field doesn't display to anyone but you and mods. I think it's used by mods if they want to talk to you.

If you want other people to see your contact info put it in the about textbox.


I just bought my second Dell Precision pre-installed Ubuntu machine yesterday. I loved my first one (which belonged to my previous employer). And I'm considering getting a System76 desktop later this year.

So to me, the chicken and egg situation is already solved.


uNexus laptop, uNexus desktop/workstation, uNexus vr gamer.


Hey Dustin, thanks for the follow-up!

In your original story, I posted a request for Canonical to come up with some viable strategy to get Adobe CS (and related color-calibration HW/SW) usable on Ubuntu.

I expected a lot of traction for that suggestion, because AFAICT a lot of creative professionals are looking for a way to escape the Windows/Mac duopoly.

However, it looks like my suggestion didn't make the cut for the list you just posted.

Can you share any thoughts on why getting Adobe CS usable on Ubuntu is / isn't a strategic priority for Canonical?


Hey there! Thanks for the suggestion.

We're engaging with dozens of major vendors of traditional/proprietary software about delivering their software onto Ubuntu via Snaps -- which is a new packaging format that solves many of the traditional problems associated with proprietary software working well on Linux.

I'm going to ask Evan Dandrea and Michael Hall from Canonical to engage with Adobe around CS and anything else in their suite that might make sense to Snap.

Cheers! @dustinkirkland
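
For readers unfamiliar with the format, a rough sketch of what a snapcraft.yaml for a prebuilt proprietary app might look like (the name, paths and plug list are illustrative only, not an actual Adobe package):

    name: exampleapp
    version: '1.0'
    summary: Example proprietary app packaged as a snap
    description: |
      Bundles the vendor's existing binaries and their dependencies
      into a single confined package.
    confinement: strict        # or 'classic' if full system access is needed
    parts:
      exampleapp:
        plugin: dump           # just copy prebuilt files into the snap
        source: ./prebuilt/
    apps:
      exampleapp:
        command: bin/exampleapp
        plugs: [home, x11, network]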


I think for many web front-end developers, the only thing keeping them on Mac is not being able to use Adobe tools on Linux.


If we were in the same room right now, I'd give you, Evan, and Michael all a big hug. We'd all feel awkward afterwards. It would be worth it.

But seriously, thanks for giving it some consideration. Fingers crossed.


I'm sure there are so many web devs, myself included, who would use Ubuntu professionally if only tools like Adobe and Sketch worked on it.

I know it's not directly Ubuntu's problem, but solving this problem will go a looooong way towards developer adoption.


Adobe Photoshop CS6 works almost perfectly in wine, but Photoshop CC isn't working. Probably, it can be fixed easily.


You can create a snap with Wine and a windows application together, which is one route this might go. However, because Photoshop isn't open source, we wouldn't have a legal ability to distribute it.


I'll have to disagree with you there. There are still many bugs and it's slower than on Windows.

I would much prefer to have it without installing Wine.


Hey there, Dustin asked me to chime in. We would of course love to have Adobe's software available to Ubuntu users, and would do whatever we could to help make that happen.

However, as far as I know there is no Linux port of these products, which really limits our ability to even start on it. Likewise the software license limits our ability to share it without Adobe's approval even if we could package it. The best thing that Canonical can do, and indeed what we are doing now, is to provide a stable platform for Adobe to target and the tools to make it as easy as possible for them to do that.

Ultimately the only thing that will sway Adobe is their own customers showing a demand for those products on Ubuntu. The more they hear from Ubuntu users who want to buy their software, or people who have bought their software who want to use it on Ubuntu, the more willing they will be to make it happen. And when they're ready, Canonical will make sure that Ubuntu is an inviting and worthwhile platform for them to distribute their software on.


What I want is not free Photoshop on every Ubuntu CD. Instead, work on WINE! Devote engineers to fixing bugs in WINE that block the latest versions of Creative Cloud from running. If WINE was rock solid stable for creative applications -- and promoted as so by Canonical -- I think you'd see many more people switching.

If you can sell Windows applications (with WINE included) in your store, that's all the better.


a lot of purists explicitly hate that idea, and I can understand why. I'm not for or against it, myself.


Thanks for chiming in!

> The best thing that Canonical can do, and indeed what we are doing now, is to provide stable platform for Adobe to target and the tools to make it as easy as possible for them to do that.

There's one more thing which you guys can perhaps do that the rest of us cannot:

Canonical is well-enough known that you may be able to get a meeting with someone at Adobe, to find out what they think would convince them to pilot a Ubuntu-compatible release of CS (or at least Photoshop and Lightroom).

I assume that it's some combination of projected sales revenue, and ease porting / maintenance / support.

If someone (perhaps Canonical) can get Adobe to throw out a number regarding how much sales revenue they'd need for a port, we could use something like Kickstarter to line up the first round of purchasers as well as demonstrate the amount of (pent up) demand.


> There's nothing in their license

... that's not a suggestion for you, but we should probably set up a pirate PPA with already-cracked versions of Adobe software, ported to Linux. It's evil, but it proves that the demand exists and would allow developers to work on Linux.


I think it is important for us as a community to make it clear to Adobe that there is a market for this.

I am a creative professional and dev, my father is a professional photographer and we have traditionally been a mac shop with a huge spend on their hardware.

Their latest offering is driving a lot of us to machines such as the Dell XPS 13 (which I purchased instead of a MBP).

I dearly want the same experience I get with OS X - but the one thing you are missing is a good experience with Adobe.

We've got professionals escaping the Apple ecosystem successfully to Ubuntu, but this is the one sticking point. I bet a lot of people would jump ship to your platform and give it a massive boost if you could get Adobe to put money into it. It would have more effect than any other effort in driving people towards Ubuntu.


Adobe CS is proprietary software owned by Adobe. What do you propose Canonical do? What makes you think there exists any viable strategy that Canonical could follow?


As I have said before - the second there is even a rumor that Adobe would allow (license-wise) for their products to run outside Win/Mac, the correct business decision for Apple would be to buy Adobe and ensure that hole is closed very quickly.


Would be great to have Lightroom / Photoshop CC work on Ubuntu. Not as interested in Premiere, as DaVinci Resolve comes in a Linux flavor.

Also any chance we get solid support for Wacom on there as well?


I fear there'd have to be some sort of deal-with-the-devil thing to make that happen. Is getting subscription based desktop software into the Linux ecosystem a good precedent to set? Maybe there are already examples of it but certainly not at the Adobe CS scale. I think Ubuntu could better serve the community by paying talented developers to work on an alternative.


Reinventing the wheel is almost never a better option, especially not when the users are actually happy with the current option.


There is no way to replace Adobe tools easily.

It's definitely worth it to try and get Adobe tools working on Linux.


I'll donate $100 to Canonical if they can convince them to offer a one-time-fee option and I don't even use Adobe products.


How much of that work would fall on the shoulders of Adobe, rather than Canonical?


A sincere thank you to Ubuntu from someone who didn't study anything remotely close to IT but is now a software engineer anyway.

I've always been interested in software and computers in general and, besides the Raspberry Pi, I think Ubuntu has been the biggest influence on my interest in software and decision to learn programming.

A few months back I looked up my first ever forum post. https://gathering.tweakers.net/forum/list_message/28824598#2...

Asking why my PC wouldn't boot after 14 year old me stuck some components, including the HDD, in another pc. Turned out you needed something called 'drivers' to run a motherboard.

Shortly after this I ordered my first red Ubuntu live CD that you guys shipped and that was my first experience with Linux.

Anyway, open source projects that allow you to tinker with software and even break it played an important role in my life, and Ubuntu was my doorway to a decade of learning, playing and wonder about software and technology.

Running 16.04 LTS now. Sad that you guys are dropping Unity for GNOME but still happy with Ubuntu. I'm sure it'll work out.

Thanks, and good luck.


They're not dropping Gnome. They're dropping Unity and Mir for Gnome and Wayland.

And that's just awesome. A few years late, but still awesome. Linux needs more Wayland love than just Red Hat/Fedora.


Hit refresh. I noticed :)


> Add night mode, redshift, f.lux (42 weight) This request is one of the real gems of this whole exercise! This seems like a nice, little, bite-sized feature, that we may be able to include with minimal additional effort. Great find.

This one should come for free with the switch to Gnome, since it's now present in 3.24. https://www.gnome.org/news/2017/03/gnome-3-24-released/attac...


I'm using 3.24. There are still a few improvements to be made. People want the change to be a little bit less sudden (I never noticed the transition, but apparently it's like an on/off thing). And personally I'd like the thing to turn off while you're watching a movie.

You can temporarily turn it off, which is pretty cool. Once you do the real colours seem super bright.


Ad "LDAP/AD integration":

> This is actually a regular request of Canonical's corporate Ubuntu Desktop customers. We're generally able to meet the needs of our enterprise customers around LDAP and ActiveDirectory authentication. We'll look at what else we can do natively in the distro to improve this.

OK I get the need that some may have to integrate an UI but please don't ship a full-blown Samba/winbindd plus config generator as default.

Here's the why:

Everyone has different LDAP setups. Some use a homegrown LDAP, some use MS AD in varying versions, some use Samba as AD in varying versions - and then everyone uses a different LDAP/AD scheme (e.g. is the username attribute lowercase-able, which attribute is it mapped to, are all PCs/users in a single OU, do you want to restrict logins to specific groups, does the organization need "full" AD setup or will a plain ldap_bind be sufficient ...) and you almost always need to hand-tune the configuration for your specific setup. A GUI configurator will most likely only work OOTB for people sticking with a standard MS AD, and make problems with non-standard setups, multi-domain memberships or similar.

And: Non-enterprise users will most likely not need AD/LDAP support. Those who do should have competent admins anyhow, but what I can certainly say is that the documentation could be updated (e.g. https://wiki.ubuntuusers.de/Samba_Winbind/ only works for 12.04/14.04). I'd rather the documentation were improved than get yet another shoddy Samba config generator that falls out of sync with Samba sooner rather than later...

(source: lost more than a few hairs wrestling with AD and LDAP)
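
For context, this is roughly the kind of thing such a GUI would have to generate; a minimal sssd.conf for a plain AD-backed login (every value here is site-specific, which is exactly the point above; a real join also involves Kerberos/realm setup):

    # /etc/sssd/sssd.conf  (illustrative sketch, domain name made up)
    [sssd]
    services = nss, pam
    domains = example.com

    [domain/example.com]
    id_provider = ad
    access_provider = ad
    # the knobs that differ per site (and break one-size-fits-all generators):
    # ldap_id_mapping = ..., override_homedir = ..., ad_gpo_access_control = ...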


FreeIPA (https://www.freeipa.org/page/Main_Page) is an LDAP+Kerberos+other stuff implementation which is available in the Ubuntu repositories and is probably a good target for this. If it was easier to set up, I think it would become apparent to more people that it is useful outside the enterprise.

For example, if you have a laptop and a desktop, you can set up SSO and an NFS automounted home directory, so you don't need to maintain two (or N) home directories/sets of dotfiles/etc at the same time.


> For example, if you have a laptop and a desktop, you can set up SSO and an NFS automounted home directory, so you don't need to maintain two (or N) home directories/sets of dotfiles/etc at the same time.

Good luck with unstable network connections. NFS is already enough of a PITA in a fixed office network, much less over e.g. a spotty Wifi.

Also you will need a VPN because you don't want to expose raw NFS to the Internet.


I've wondered if maybe zfs send/receive might be press-ganged into working as a substitute for coda/"magically cached nfs"...

[ed: In fact I wonder if we (the community) shouldn't be able to bundle up a usable ssh server with fewer features, protocol 2 only, a great easy-to-use CA, maybe some Bonjour/DNS SRV convention, and some simple db (perhaps LDAP is easiest, sigh) and pair that up with ZFS for a minimal, but great, identity-of-user, identity-of-server/daemon cached/networked/encrypted data...]


Yup, all that's true--but just because I mentioned a laptop doesn't mean I'm necessarily talking about an in-network, out-of-network situation.


Even in-network things can get nasty very quickly when you introduce wifi into the mix or have an overloaded switch between the client and the server.


> And: Non-enterprise users will most likely not need AD/LDAP support.

I strongly disagree with this. Everyone with more than 2 users in their organization can benefit from AD/LDAP support, if it is easy to set up and administer.

Because... what else is viable for managing multiple user accounts across several machines? Twenty years ago, I would have said NIS (from Sun, originally called YP for 'yellowpages'). But that was horribly insecure. NIS+ was supposed to fix that, but support was never there in Linux land.

Kerberos? That seems too difficult for most small networks.

Don't get me wrong, I don't like LDAP, but there isn't anything better that I'm aware of. LDAP has some support in other applications (for example, we use it for Redmine user accounts); I don't know of anything else besides LDAP that has widespread support.

But the initial configuration was a bit of a mess, where I was going back and forth among the official docs, the Ubuntu docs, and other guides. I should write my own guide so that I can add to the confusion.


>> And: Non-enterprise users will most likely not need AD/LDAP support.

> I strongly disagree with this. Everyone with more than 2 users in their organization can benefit from AD/LDAP support, if it is easy to set up and administer.

Actually, anyone with either more than one user or more than one device/computer, IMNHO.

We really (still) need a sane, easy-to-set-up authz/auth solution. Things have gotten a lot better - but still, NFSv4 is sort of not there, Samba/CIFS isn't quite secure, Coda is sort of abandoned, davfs isn't quite fully-featured... And so on and on.

SSH keys are (almost) easy to use, but really hard to manage securely (everyone just ignores this and pretends private keys won't leak) - managing an ssh CA is essentially as tricky as managing any other CA.
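For reference, the mechanics of an ssh CA are only a couple of ssh-keygen calls (the file and user names below are placeholders); the hard part is the custody, rotation and revocation around them:

    # one-off: generate the CA keypair (ideally kept offline)
    ssh-keygen -t ed25519 -f ssh_ca -C "example user CA"

    # sign a user's public key; the resulting *-cert.pub is what gets distributed
    ssh-keygen -s ssh_ca -I alice@example -n alice -V +52w alice_key.pub

    # servers then only need to trust the CA in sshd_config:
    #   TrustedUserCAKeys /etc/ssh/ssh_ca.pub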

MS realised this, and boxing up Kerberos+LDAP+DNS (and some other bits) was their solution: Active Directory.

I'm not convinced we can't do better with a turn-key, Linux-first, modern solution (we have public-key cryptography now; that should(?) simplify a Kerberos work-alike). Make it a standard, and BSD support should be easy, and a port to Windows/GINA feasible.

I strongly believe the particulars of the solution don't really matter much - just make it open, with a good test suite and a decent reference implementation.

Red Hat is, AFAIK, doing great work here, as was/is Debian Edu (née "Skolelinux").


> I strongly disagree with this. Everyone with more than 2 users in their organization can benefit from AD/LDAP support, if it is easy to set up and administer.

You will always need a server. And aside from QNAP NASes (which aren't cheap), there are no "set it up and it runs" options that are free and easy to maintain.

Cheapest option, hardware-wise, would be an RPi, but it will melt when you try to use it as a filer. Next option is a PC, which adds at the minimum 200W of 24/7 power requirement, not exactly cheap given today's electricity prices.

Software-wise you have the option of MS Small Business Server, which clocks in at 200€ but definitely requires a PC plus someone who can set it up, or a Linux variant with Samba, which is free but definitely requires someone skilled.

Then there's the maintenance - with Windows there shouldn't be a problem with regular updates, but with Linux... not so much.

The maintenance and the energy consumption of a server is what keeps small businesses off AD.


FreeIPA and an RPi should be a pretty good combo for the prosumer/SB market. The RPi has definitely enough horsepower to run a small domain, and FreeIPA bundles all those admittedly gnarly pieces into one neat package.


> RPi has definitely enough horsepower to run a small domain

Maybe enough for the DC part (i.e. Kerberos server, LDAP server, ntpd), but not enough for a fileserver, given the Pi is still 100 Mbit and limited by its USB ethernet connection, and doesn't have eSATA, mSSD or any other high-performance storage option. Plus SMB/CIFS implementations are known for performance issues.

Also, a Pi is nowhere near reliable enough for running something as mission-critical as an AD server. Good luck when your micro-SD card gets corrupted, e.g. due to power fluctuations. And you WILL get corruption, especially if you have high write throughput.


> Next option is a PC, which adds at the minimum 200W of 24/7 power requirement, not exactly cheap given today's electricity prices.

I've got just the hardware for you:

http://www.tedunangst.com/flak/post/new-home-router

https://www.newegg.com/Product/Product.aspx?Item=N82E1685620...

There are various other NUC type devices based on laptop chipsets (or you could just use an old laptop with hardware Ethernet and a large-enough hard drive... built-in battery backup!).

The software maintenance is definitely an issue, but I don't see Windows Small Business Server being any better than Linux; there are still a lot of mysterious things that can happen.


Non-enterprise orgs might just be BYOD plus something vendor-specific like IAM?


Maybe I'm old-fashioned, but I prefer to have a locally-hosted solution.


It's a reasonable preference, but it won't be shared by everyone. Many of us don't even have onsite phone equipment anymore...


We can have both: MS has Azure AD. A new/refined open standard that provided sane caching/offline use could allow self-hosting on-premise, in the cloud, as well as SaaS/IaaS. Add in a new Coda-like fs (also optionally as a service) and I'm pretty sure people would throw some money at it.

Imagine having a box running a LAN-local cache/node for a few TB of cloud-backup disk - network-mounted as /home/$user and/or / - with a local machine cache and regular mirroring to the cloud. With the added bonus of being open - making pure self-hosting an option, as well as moving to other vendors.


Yes, I'd get rid of our phone equipment if I could. It has been a hassle. I just wanted to get everyone Skype accounts with dial-in numbers years ago.

Well, even if you wanted to cloud-host authentication, what easy solution is there for Linux? One where I can create directories on a local file server and assign groups for restricted access?


Well, you can host a Docker image of an openldap container, secured with SSL certificates (in my experience, Let's Encrypt works for this; just take care not to expose the ldap port, only ldaps) and protected from unauthorized access by denying anonymous binds.
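Very rough sketch of the Docker side (the image name is a placeholder - whichever openldap image you pick will have its own way of wiring in the Let's Encrypt certs):

    # publish only ldaps (636) to the outside, never plain ldap (389)
    docker run -d --name ldap \
      -p 636:636 \
      -v /etc/letsencrypt/live/ldap.example.com:/certs:ro \
      -v ldap-data:/var/lib/ldap \
      your-openldap-image

    # deny anonymous binds once slapd is up (standard cn=config attribute)
    printf 'dn: cn=config\nchangetype: modify\nadd: olcDisallows\nolcDisallows: bind_anon\n' \
      | docker exec -i ldap ldapmodify -Y EXTERNAL -H ldapi:///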

Using service accounts you can then have other cloud services like Atlassian, Slack or Gitlab authenticate against the LDAP server.

Re phone equipment: Asterisk in Docker combined with a VoIP provider (and exposing a SIP server) can work, but I have not tried this in practice. It should support standard Android and iOS SIP clients, but beware that this will drain your battery due to permanent connections and keepalives - I don't know how easy (and how well supported) push notifications for calls are. Also, going from the VoIP provider through a questionable (in terms of QoS) Docker hosting provider to your phone will introduce measurable latency, and the transcoding that may happen in Asterisk can also negatively affect audio quality.


> Well, you can host a Docker image of an openldap container, secured with SSL certificates (in my experience, Let's Encrypt works for this; just take care not to expose the ldap port, only ldaps) and protected from unauthorized access by denying anonymous binds.

When I set all this up, Docker wasn't even a thing, so it's nice to have that as an option now.

For VoIP, the hardware itself is a pain, though the open-standards software side of things is a pain too.

To deal with QoS issues, we have previously had POTS lines from AT&T plugged into a phone card on the server. So we've got that wonderful digital -> analog -> digital conversion in there.

We've recently switched to Comcast, which has a box with... analog phone ports coming out of it. So we've still got the digital -> analog -> digital conversion, plus any QoS issues on Comcast's last-mile network. Though that hasn't seemed to be a problem, so maybe they've got that figured out. And no, they didn't offer a SIP solution, at least not to us.

As you allude to, I don't see a SIP based solution for our mobile devices as viable, because of the battery drain and roaming. I really just wanted to use Skype or something similar. Who calls me on my desk phone anymore? I'll tell you who, sales people. I don't give out my desk phone number, I'd really rather you just send me an email. If you are important enough, I'll give you my mobile phone number, but that's rare.


Agreed, understood, thanks!


There's something very HN about the fact that the author of one of the most discussed posts in the site's history can issue a follow up post, dealing with that discussion, to such little notice.


It was very interesting to see the summary view of the comments, but frankly, the real follow-up was the announcement that Ubuntu is moving back to GNOME in 18.04: https://news.ycombinator.com/item?id=14043631

That action was, IMHO, one of the biggest thank yous they could have given us.


I agree. I did stop using Ubuntu when Unity was introduced. Maybe it's time to go back.


Same here. Was a huge fan before. Disliked Unity so much.

Seems some people honestly like the look of it, how it works etc.

I however strongly dislike DEs that:

1. mess with alt-tab

2. mess with menus (I liked the idea but found the implementation to be painful)


This dislike is probably mostly psychological. There are people who were primed to dislike Unity, and when they attempted to use it they struggled - old dog, new tricks.

We saw this with the intro of Gnome Shell, which fractured the community between the old Gnome UI and new UI.

If we continue to split up when something new is introduced, we are doing it wrong.


I am not against new. I'm almost embarrassingly easy to get enthusiastic about new tech.

The things I mention however are things I have found to be actual problems in my workflow.

They obviously work well for others and I respect that. I hope others can respect my observation that with the Mac/Unity alt-tab, more keystrokes or waiting might be necessary.

It is also an objective observation that with a Mac-style shared menu at the top of the screen in a dual-monitor setup, you will sometimes have to move the pointer across two screens to reach the menu, then back again to continue working.


To this day I still get frustrated on Windows because they broke Alt+Tab.

Alt+Tab used to simply cycle through windows in most-recently-used order. Now they've got some functionality where shit can inject itself into that list, so alt+tabbing back and forth to bounce between two programs doesn't work consistently anymore.

I'm sure I just don't understand the use case for the new behavior, but it drives me bonkers.

I also hate, and I mean HAAAAAAATE when text editors autocomplete matching brackets, single/double quotes, etc. It completely fucks with my flow.

so maybe I'm just an old control freak.


Obviously this wasn't the case when you commented, but this is now on the front page (ranked sixth). I guess people noticed after all!


I also really like the idea of 'Official hardware that just-works'. Not just for users without much technical knowledge but also for the rest of us.

I mean, nowadays we manage to get most hardware working 'somehow'. Sometimes it takes a few years before your sound/bluetooth/wifi chip actually does what it is supposed to do, but most of the time we find ways to make use of our hardware.

But when you are going to buy new hardware you are a bit lost. You can try to find out if there are any major problems with some hardware, or search the Ubuntu hardware database, but especially for new, rare or expensive hardware there is often not much to find. For example, a few years ago I bought a 22" touchscreen for my desktop, and for almost a year it somehow worked but didn't do the things it was supposed to do.

Officially supported hardware by vendors would be a great step in the right direction.


Perhaps you're not aware that Canonical already do this and publish results at https://certification.ubuntu.com/ ?


Well, I was not aware of that ;-) Thanks :-)


I feel guilty: I didn't respond to the initial post because I doubted the really outrageous/far-out ideas like "dump everything you've been working on for the past five years" would actually be fruitful, but man, Shuttleworth shut me right up haha. Congrats to the Ubuntu team for the courage to make bold changes when necessary and to actually (finally?) listen to pointed community feedback and constructive criticism.

I haven't been this excited for Ubuntu since 2010.


Not related to Ubuntu, but maybe someone here knows a good way to kill the swipe left/right to navigate in blogs. I accidentally left the page about 5 times, which is rather annoying.

Using ff mobile if that helps.

Edit: hallelujah! Set dom.w3c_touch_events.enabled to 0 in about:config


Yeah this is a real pain in the ass of the Blogger® platform. Not sure there is anything Dustin can do about this, other than move the whole blog over...

Nice find of the about:config setting though!


Googlers around here: any idea when you'll start fixing/resetting blogger to how it was?


Whoa, good to know, I've never hit that before!


Why is dropping Unity lumped with dropping MIR?

Unity is great and I love it, I don't want to lose it.


If I remember correctly, Unity was a fork of GNOME with lots of patches applied, to the point that having both GNOME and Unity on the same system took effort and was risky w.r.t. stability.

My guess is that they've invested the entire Unity vNext in Mir, and if they're dropping Mir, they might as well drop Unity 8. That leaves them with Unity 7, and the job of unfucking a decade's worth of changes they've made to keep it working against mainline GNOME sources.

Basically, lots of work to get a 6-year-old product back to square one before they can, once again, attempt to develop it further.

I'm guessing they've decided that's not worth the effort and that their time is better spent helping improve something already mature (Gnome 3).

Touché, but they could have listened to the community back when they announced Mir and Unity 8 too ;)


They reduced the number of patches quite a bit. For some of the things they do, they depend on components that are not commonly used within GNOME. Meaning it's not a patch, but they'll have to maintain those components themselves if they're the only "user". Something written on top of Tracker, IIRC; not started by Canonical, but put to good use by them. That's always a bit difficult to explain; sometimes Canonical expects maintenance to happen magically.


Unity 7 is a Compiz plugin.


I hope they are dropping Unity 8 only in the sense of dropping Unity 8 on Mir, and are going to develop Unity on Wayland instead. Because seriously, Unity is nowadays the most usable of the Linux desktops.


>Because seriously, Unity is nowadays the most usable of the Linux desktops.

Oh yes. I tried all the DEs and it's by far the prettiest. I use Numix themes and it's so great. I love my pastel-colored, flat icons with brutalist window edges.


It pretty much went the other way: Ubuntu is dropping Mir because they dropped Unity 8.

edit: Unity 8 seems to be tightly coupled to Mir, although there seems to be a fork in the works to port Unity 8 to Wayland.


Sorry for the loss. Honestly.

I for one however had a really strong dislike for the alt-tab behaviour as well as the shared menu.


This is possibly configurable; I don't know and didn't care. In GNOME, this is configurable behaviour. I liked this default, and it kept me on Unity because of that.

This comes down to personal workflows. I got very frustrated dealing with alt-tab behaviour that only shows windows, when my focus when I hit alt-tab is on which application I need to use. This is also the one holdover from using OS X daily, the one feature I felt was right (really, for me and probably a lot of people).


Link to the original post: https://news.ycombinator.com/item?id=14002821 "Ask HN: What do you want to see in Ubuntu 17.10?"


I currently use Ubuntu on one of my laptops precisely because it's not GNOME. I can't stand GNOME 3's hamburger-menu UIs, giant title bars, lack of real menus, inability to make changes without digging into their version of the dreaded "registry", and that pointless menu bar at the top. If I'm being forced to move to GNOME, I'll be switching to Fedora instead, since they have a solid track record of GNOME and Wayland support.

My desktop will always be Arch, however - with Cinnamon and, occasionally, i3 for development.


Even if Canonical only keeps a fraction of its desktop staff, they could easily improve on GNOME's default settings and theme, something that Fedora won't do.


It should be much easier as of GTK+ "4" (not too many changes in GTK+ 3.x), combined with finally having a theme API, though the intention is to have some bits of GNOME run these development GTK+ versions at some point in the future.

They're redoing how widgets work in GTK+ "4". That's basically like changing HTML and having slightly different elements. Though there's a theme API, I wouldn't be surprised if some theme work is needed as a result.

See the last bits at https://blogs.gnome.org/mclasen/2017/03/31/gtk-happenings-3/ for why I think above things.


Congrats to Dustin not just for some amazing crowdsourcing with the original post, but then for an extremely concise, thoughtful and humble followup for those of us who couldn't wade through all of those suggestions ourselves.


You're welcome Dan! It was a lot of work, but so, so, so worth it. I'd do it again in a heartbeat. Or rather, maybe in 6 months :-)


I'm impressed with your thoroughness in processing the responses. But I'm just curious: Why did you lump usability for children and accessibility for users with disabilities into one suggestion? Anyway, returning to GNOME should help with the latter.


The accessibility of GNOME under X11 should be really good. Not so sure about Wayland though - e.g. not sure how Dasher would work, nor Orca. Secondly, some of the things (IIRC pointer highlighting) are actually done using an X11 application. That needs to be moved (e.g. into Mutter).


While it's true that Ubuntu still has a few inconveniences, from my perspective it's still much better than the alternatives.

The mere thought of having to use Windows or iOS for development makes me want to curl up in the corner with my blanket.

Daily I use Ubuntu for my desktop, laptop, TV streamer and cloud servers. Overall my satisfaction level across all devices is at least 9/10.

Great job team Ubuntu!


No one mentioned better Steam support/collaboration with Valve? CPUs and operating systems are made for gaming; it's gaming that attracts users and investment from big companies and developers. Ubuntu would only benefit if they invested in Mesa and faster AMD GPU integration, and cooperated with Valve on the Steam client and gamedev research for Linux.


The one thing I'm incredibly worried about is the packaging war starting all over again. There are zero reasons for having multiple packaging formats across Linux distros.

I was hoping that snap or flatpak would become universally adopted, but it seems that Red Hat and Canonical are again split along political fault lines here.


Flatpak deals purely with GUI/desktop applications: sandboxed, with a GUI way to break out of the sandbox securely (via user interaction). Snaps also allow non-GUI things, and Canonical wants to use them for the cloud as well (or maybe already does). Because Snap has a different focus, you cannot just drop it.


As Canonical has moved its focus away from end users, we might end up with a more intuitive split: snaps for the cloud, flatpak for desktop apps?


Not really. Check this: https://news.ycombinator.com/item?id=14053627

> We're engaging with dozens of major vendors of traditional/proprietary software about delivering their software onto Ubuntu via Snaps -- which is a new packaging format that solves many of the traditional problems associated with proprietary software working well on Linux.


It is sad to see the modern and reliable Unity 7 put out to pasture. GTK was not used for Unity 7. Canonical chose Qt for Unity 8. KDE can easily replicate the Unity 7 look/feel. Qt no longer has an objectionable license. It is not clear why GNOME is the choice over basing it on KDE.


" Fix /boot space, clean up old kernels (92 weight)

...committed to getting this problem solved once and for all in Ubuntu 17.10 -- and ideally getting those fixes backported to older releases of Ubuntu."

Great news! Thanks for taking this feedback onboard.
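In the meantime, the usual stopgap on 16.04 and later is below - apt marks superseded kernels as auto-removable and keeps the running one, though it's worth reading the list it prints before confirming:

    # remove old, auto-removable kernels (and other orphaned packages) to free /boot
    sudo apt autoremove --purge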


Just an irrelevant side note:

When you do requests.get(url), you can use the .json() method on the response instead of all the json.loads([...].text). I did it the long way for years and only discovered .json() recently; I love it.
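For anyone following along, the two forms side by side (the url here is just a placeholder for whatever the script was already fetching):

    import json
    import requests

    url = "https://api.example.com/data"  # placeholder

    # the long way
    data = json.loads(requests.get(url).text)

    # the short way - .json() parses the response body for you
    data = requests.get(url).json()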


:-) Awesome. Thanks for the hint!


Just a note about Reproducible Builds: "We've been working with Debian upstream on this over the last few years, and will continue to do so" — well, apart from the regular sync/merge flow from Debian to Ubuntu, AFAIK Canonical never reached out to us Reproducible Builds folks from Debian. That said, we/I plan to reach out to Ubuntu/Canonical soon :)


Canonical employs dozens of Debian developers, whose work goes to both Debian and Ubuntu. ;-)


I know, I collaborate with several, and I am both an Ubuntu and a Debian developer :) It's just that I know of no Canonical employee working with us on Reproducible Builds. (I only remember a very spotty contact, like 2 years ago on IRC, from somebody saying Canonical was interested.) If they are working on it, they are doing it behind a curtain, which I'd find interesting ;)


It would be nice to get this list sorted by most surprising as well. A lot of these feedback points seem to fall into the category of "faster horses", which usually tends to dominate when you solicit ideas from the general public.


Just want to chime in with others and say thank you for making this followup post. Regardless of the outcome, distilling the results the way you did means a lot to me and I'm sure others as well. Thanks!


You're welcome. Thanks for following along.


I would have really liked to see a GNOME 2-esque DE - bare bones, with a focus on a great Windows-like task bar / window management. I really hated the Unity dock thing, and I especially hated the color scheme.


I think the MATE desktop aims to deliver just that.


Indeed, but it's just missing a lot of the polish and modern features.


Will this mean that the integrated Amazon search in the desktop is also gone forever? I'd like to see Canonical making money with support instead of that Amazon thing or selling "apps" to the user. That is the number one reason why I do not recommend Ubuntu. Other than that, I'm amazed how much the community was heard! Also, thanks for the in-depth analysis blog post :)


The Amazon thing was already gone in 2016. Having an option for integrated payments in the software store is a nice thing, isn't it? You get the donations feature in the store instead of going elsewhere.


Nice read, and I am wishing I had been in on the original topic.

If I had, I would have commented that the version upgrade process sucks for anyone who starts from a minimal base (like I do). If I go through the upgrade, I will end up with all the default stuff.

If I don't have thunderbird installed, don't install it for me.

In short: upgrades could be smarter, faster, better, stronger.


I think the best news out of this is hopefully that it seems you (as Canonical) have shifted away from "This is not a democracy. [...] we are not voting on design decisions." and the (paraphrased) "This is good because we [Canonical] made it!"


Which project is "voting" on design decisions?


I am slightly saddened by the fact that hardly anyone mentioned Ubuntu's battery life, but people got excited about the "Nexus-of-Ubuntu" thing. I wish Ubuntu the best.


Power/battery management definitely made the list. See the blog post ;-)


I was really happy when that Ask HN thread was first posted and even happier now knowing that all of the comments were read and thoroughly considered. That said, the general feeling I get from this blog post and from Ubuntu as a whole is that:

1. There is too much of a focus on how far Ubuntu has come and not enough focus on how far things still need to progress. I too remember the days when sleep/hibernate were a crap shoot, but that was when I viewed Ubuntu as an open source alternative without much expectation. Nowadays, I see Ubuntu as a mature desktop and as such, I judge it much more harshly. Anything that doesn't work or isn't 99.99% stable is a red flag for me. So I do hope that the Ubuntu team puts more focus on making things up to date, rock solid, and super stable rather than chasing after new features. Just like a building, you need a stable foundation before building upwards.

2. There isn't a clear target audience for Ubuntu Desktop. What does Ubuntu Desktop want to be? Before it was convergence and while a very neat idea, I was never clear on who was supposed to use it. The requirements screamed high income, tech savvy end users. However, development focused on Unity, Mir, etc. with work going into features that didn't fit the target audience. For example, like the post says, HiDPI & 4K were a surprise. Why was this surprising? The group that would most likely be your early adopters and trend setters are the same exact group that would have this type of hardware. Same with trackpad, gestures, customizability, flux, root on ZFS, security, etc. All of those are used heavily by the demographic most likely to follow the news on Unity and convergence. It baffles my mind that Ubuntu's Product Management couldn't make this connection and understand what core features to build out first in Unity/Mir. Yes, these are on the sidelines now, but I really hope the Product Management team takes the time to figure out some direction.

3. At the moment, Ubuntu is at a major crossroads. Even after Mark Shuttleworth's post, this post, and all of my usual Linux news following, I don't really know where Ubuntu Desktop is going to go. Tell us what we can expect as users. Tell us when we can expect it to come. Tell us how you intend to get there. And most importantly, tell us how we can help! Either through regular posts to various communities like the Ask HN one, or ways we can contribute actual work. Not everyone is a dev, but as an example, I used to do professional QA and yet I found it extremely difficult to find out how I can help QA things and submit useful bug reports (this isn't just Ubuntu but most open source projects). The usual "check the docs/wiki" or "submit something on the issue tracker" are not helpful. In all of my years using Linux, that Ask HN thread + this blog post was the first time I ever felt like I was heard and managed to contribute to Ubuntu. Even something as simple as periodically getting feedback from the community and telling us what you heard from us makes me feel more optimistic about Ubuntu's future.

I apologize for the rant-like nature of my comment, but hopefully this gets read and something positive comes out of it. Thanks for reading.


> Anything that doesn't work or isn't 99.99% stable is a red flag for me. So I do hope that the Ubuntu team puts more focus on making things up to date, rock solid, and super stable rather than go chase after new features. Just like a building, you need a stable foundation before building upwards.

I agree. I was surprised that the "More QA, testing, stability, general polish" wasn't higher on the list.


Re: Make WINE and Windows apps work better

How about booting ReactOS to run Windows apps? Sounds like a Frankenstein-ish solution, but...


Thanks for deciding to focus on Wayland from now on. This will benefit everyone.


Here's a perspective on FOSS and Ubuntu. One point of free and open source software was that we didn't need to be stuck with features we didn't like, decided by some majority or leadership, because we could fork our own and customize as we like. So the enthusiasm for a central body's responsiveness to feature requests is enthusiasm for something not normally associated with FOSS, which demonstrates one of three things: the Ubuntu ecosystem says it is FOSS but actually operates like something else; the FOSS fork-your-own theory doesn't apply in practice to large projects with lots of people; or the perspective just related here is missing something.


The UI is the type of big software that needs a design, and needs to follow that design; otherwise you end up with a complex, messy piece of software. You see this with all types of big software that need to keep a clean design to avoid future pain.


Back in the day there were backroom deals to get Adobe onto the Apple platform. Steve would meet with Adobe executives and trade them cash and offshore stock (untraceable, of course) so that they would port to Apple. All the development for Apple was fully funded for Adobe. In case it isn't obvious, Adobe could choose to support only Windows and do just fine - eliminating development and support costs for the Apple platform. Thus, bribing the rich and greedy became the effective technique.


You've posted many such claims and stories to HN in the past. Unless you can substantiate them, would you please not continue to post like this? The genre of the internet forum comment doesn't allow us to disambiguate true first-hand stories from people making shit up for weird reasons, and unfortunately there are many more of the latter than the former.

We detached this subthread from https://news.ycombinator.com/item?id=14053357 and marked it off-topic.


Source? This seems incredibly unsubstantiated. There were public efforts Steve made to keep Adobe on the platform and I know there were developer meetings to ensure the transition to Carbon and Cocoa went fine.

I have seen no evidence of illicit stock grants or Apple giving direct cash to executives at Adobe.

I hate to be that guy but I would like more information on this, otherwise it feels rather unsubstantiated and just paints both Apple and Adobe in a bad light for no reason.


I worked on projects for Steve and reported to his direct reports. One would have to be very naive to think that this isn't just typical business practice. Or do people really fall for that "free market" stuff that we are being sold?


Well, I'm not naive. I'm sure there are a lot of things the public would find unsavory that happen all the time at the upper echelons of companies like this. I'm willing to believe that.

I just have never seen any substantiation of Adobe and Apple doing this sort of thing, even when Steve was alive. This is the first time someone has actually confirmed it rather than merely alleged it happened. I know both companies have done things in the past that were less than savory (see that whole lawsuit about suppressing wages. Very disgraceful!)

Never this. It's been alleged for years, but never once was there evidence.

I'm sure you could make a mint here if you wanted to, if you could substantiate any of it.


Steve talked about a lot of things - and even more to his direct reports. But the only people who would have proof are also people who would never talk about it. I also worked for a direct report of Bill Gates and a direct report of Larry Ellison. They did the same types of things - often illegal. But everybody who was involved is very motivated to never talk about it. I may be one of the few people who turned down executive positions when offered - I didn't like the "business values".


Instead of adopting GNOME 3, they should adopt KDE. It's far more customizable (so "Easily customize, relocate the Unity launcher (53 weight)" would already be done) and much better architected than GNOME. And for people lamenting the loss of Unity, it wouldn't be that hard to make a custom theme for KDE that largely replicates the look and feel of Unity. GNOME simply is not set up to allow any kind of customization, and the devs actively discourage it. The opposite is true for KDE, and a distro that wants to stand out with its UI would be better served by a DE that allows them the freedom of customization.


I stopped using Ubuntu after it came with Amazon shit installed.


ok



