Awesome. I'd just like to thank the whole Debian team myself - I'm not affiliated, so it's not a pat on my own back :). I've been running Debian for years now on my servers and on my own personal computers, and it's a great system, so thanks!
On a side note, just last night I was watching a video of Linus at DebConf where he talked a bit about solving the Linux desktop problem (https://www.youtube.com/watch?v=1Mg5_gxNXTo). It's great that SteamOS is building on top of Debian, and I'm excited to see what effect this will have on making distribution of cross-distro Linux apps easier for developers. I think the fact that Valve is on Linux is going to have a big impact on providing distros that non-technical users can easily enjoy.
None of them are filed against systemd. I've been watching this page daily for the last month, and I only remember one systemd-related bug being filed in that time (there was one longstanding one from March affecting a rather obscure use case that sat there for a while as well, but was fixed).
But there are bugs in systemd integration if you look closely (the hdparm resume issue and the KDE battery-low issue are the ones at the top of my mind).
Edit: Also, you're looking at the wrong list :) Jessie is being released with these [0] RC bugs, and those [1] are the ones that aren't fixed in jessie and sid.
At least long enough to decide which Linux distro to use in the future.
Unfortunately many distros have already jumped on the systemd wagon, but I think there will soon be a growing collection of non-systemd alternatives that continue to follow the KISS principle which made Unix and Linux so great. There already _are_ alternatives to systemd and SysV init -- launchd and Upstart (used by ChromeOS), for instance.
I hope we will soon have an init system that takes the best of all the current ones while still following the KISS principle. Linux must keep this principle alive, or it will probably fall into the trap of Windows' monolithic bug hell. Ironically, the Linux 3.11 kernel was already nicknamed "Linux for Workgroups" :-)
I just installed Void on my laptop (X230) and it's quite nice. Very minimal, and it uses runit for managing services. Not many packages, but it's had everything I've needed so far.
It also uses LibreSSL, and the packages build against alternative libcs, so that's pretty cool. It's also interesting in that it's a new distro rather than a fork.
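For anyone curious what runit services look like, here's a rough sketch of defining and enabling one (the paths are Void's defaults as far as I remember, and "mydaemon" is just a made-up example):

    # A runit service is just a directory with an executable "run" script:
    mkdir -p /etc/sv/mydaemon
    cat > /etc/sv/mydaemon/run <<'EOF'
    #!/bin/sh
    exec mydaemon --foreground
    EOF
    chmod +x /etc/sv/mydaemon/run

    # Enable it by symlinking it into the supervised directory,
    # then control it with sv:
    ln -s /etc/sv/mydaemon /var/service/
    sv status mydaemon
    sv restart mydaemon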
> many distros have already jumped on the systemd wagon
Odd. If systemd is so Objectively Terrible (the general tenor of these posts: systemd is bad, it's obviously bad, with no redeeming features whatsoever), why is that happening? There can't possibly be a financial incentive.
"Objectively terrible" certainly isn't the case with systemd, but there's plenty of reasons for adopting software in general that has nothing to do with technical merit. Much of the Linux desktop daemons (particularly ones associated with GNOME and Freedesktop.org) have begun using systemd's interfaces, sometimes as hard dependencies. Thus, for the major distributions that want to tailor to the most popular use cases, the cost of adopting systemd is probably lower than patching against the ever-expanding upstream that requires it.
Lots of programmers aren't particularly good at weighing surface convenience against future technical debt. Software is just as frequently adopted purely because it's convenient, well marketed, or, in a self-serving feedback cycle, because it's already popular.
It's also worth noting that ChromeOS still uses Upstart.
The primary reason systemd got adopted is that it solved real problems both for end users (system administrators) and for higher layers of the stack.
It has replaced various kinds of NIH'ed and pointlessly differently colored bikesheds in different distros with stable public interfaces that obviate #ifdef hell in higher layers of the stack.
Your technical debt argument is very apt. It's how we ended up with piles of brittle and unmaintainable shell scripts that don't do error handling worth a damn.
As long as people have a choice of which init system (and software in general) to use, everything is fine. However, it is not OK to force people to use something they question, or actually don't want. By the way, systemd is not the only way to go; there are also runit and other systems.
I always follow the KISS principle, because the more complicated a system gets, the more difficult it is to fix. I've been a Linux user since 1990, and I'm concerned that current Linux distros are heading in a direction that will make maintainability much more difficult by abandoning the KISS principle.
There's no set schedule, but the previous release generally continues to receive security updates for about a year after the release of a new version. See https://wiki.debian.org/DebianOldStable.
(This assumes that Debian doesn't decide to declare Wheezy as a new LTS release once support for Squeeze ends in February 2016... AFAIK it's not clear what their plans are there.)
Don't forget that "stable" for Debian doesn't mean what it normally does; they are super zealous. Debian's testing and sid would count as "stable" for most other distros. And remember that all those bugs in the Debian release are present in other distros too, and sometimes they don't even acknowledge them.
Sure, but that's exactly why I was asking - I'm looking for some background on this decision and hoping that someone with more insight can share a link or something.
As speculation from an outsider, I'll list these reasons: the release team was very motivated this cycle (auto-removal of packages, not letting any new package in during the freeze, very strict exceptions even for packages that fix bugs, etc.); most of the current RC bugs have been lurking there for months without any progress (and since Debian is all volunteer work, they can't force anybody to fix them); and they also couldn't remove those packages, because they'd already removed every package they could. Another thing is that some of those RC bugs are security issues, which are handled by the Security Team for stable and oldstable, so there is no reason for security issues to delay the release. When all this is considered, they may not have deemed further delaying the release worthwhile.
I would also like to hear the real reasons from a team member, though.
It was very sad to see Russ Allbery leave the Debian Technical Committee; his deeply insightful and remarkably well written discussions will be missed.
As a long time Debian user in a professional environment I was really excited to see that Debian 8 finally added proper SELinux policies - however my colleagues and I were very disappointed to see that these (although working without noticeable issue) were dropped at the last minute with no explanation.
Several people in the community have asked what happened to the packages but have had no reply on the Debian mailing list. I've asked several times on Twitter, but I too have had no response from the official Debian Twitter account.
It's unfortunate that you didn't get an answer to your posts on the mailing list, but the package has been dropped because of those "grave/serious" bugs:
The main problem seems to be that there is not enough manpower to keep those policies up to date. Once there are "grave/serious" bugs, a non-essential package is usually dropped from testing (and hence from the next stable). If people care enough, this is usually a hint to fix those bugs.
That's interesting indeed - I just don't see how a modern security conscious distribution can be considered releasable without SELinux working.
We were using them just fine in a pre-prod (waiting for Jessie to be released) environment. We weren't using GPG but experienced no other issues.
Right now, to get around the problem, we have ported Fedora's policies across. I'm unsure whether these two bugs exist when using Fedora's policies, but I'd guess they do.
SELinux is not a release goal. It's possible to advocate for new release goals (for example, it is likely that reproducible builds will be a release goal for the next release), but this means that some people have to volunteer to do the work.
I don't know enough SELinux to comment on the technical details.
MAC is a joke. The feds only wrote SELinux because they're largely required to use MAC. There are far better things to worry about than MAC unless you're getting paid triple digits per hour. Windows has had MAC forever; look how nobody uses it. I could go on.
Have been running Debian on all of my computers for a while now and couldn't be happier with it. Fantastic distro made by a great team, thanks for another great release!
How about a mail[0] on the mirror announce mailing list from half an hour ago?
“As you are probably already aware of, Debian 8.0 "jessie", is just a few hours away from becoming the next stable release. This is a heads up in case you were not aware of, as there is probably going to be a higher load on the mirrors.”
You linked to a release page that has a series of announcements. The most recent announcement is from March, and also a mailing list. That's all they meant.
Check the submitted page (https://release.debian.org/) and you'll see that even now that Debian 8 is released, they don't mention Debian 8 being released, nor have they added any new item about that release.
So no, that page didn't (and still doesn't) announce the Debian 8 release.
I wonder why, following their new "openness" motto, Microsoft is still not:
1) porting Visual Studio to Linux.
2) porting Office to Linux, or
3) contributing to Wine, with the goal of solving long-standing issues with their (above) software - but hey they could contribute in other areas too, since Wine is far from complete (a USB driver/stack is much needed IMHO).
I won't believe in Microsoft "openness" until I see one of the three above.
Edit: the fact that they're using this event mainly to promote Azure services, rather than talking about differences/upgrade issues from Debian 7 to 8, speaks for itself.
MS is only open insofar as it aligns with their business plans. They have a long-term plan for Azure and need more .NET developers for it, hence making .NET more attractive by opening it up. That does not mean they will open up other things.
This seems to be their strategy but it could backfire. Once Windows developers and enterprises start using Linux and other clouds instead of Windows Server and Azure, we might see the old MS again.
Azure is twice the price of Google for compute (VMs). Azure was really a PaaS and it still shows. They really hope to bring you in on using their software, not just the VM aspect. I'm not overly convinced, but it is probably a decent strategy. Some folks will say "hey, I need a message queue" and use Azure's instead of hosting one in a VM. Lock-in. Azure is even doing this for stuff like Redis, I suppose to get more people on the idea of using hosted software vs. machines. In fact their infrastructure offerings are pretty weak (things like SSD or networking).
Indeed. And why would somebody who makes their own operating system and has great tools for it be worried about tooling on other systems? The whole point of them having their own OS is to make money from it, or to have the ecosystem they want. This helps with neither of those.
It would probably be easier to make a stripped-down, open-source version of Windows. Then they probably wouldn't have to port anything.
Probably because MS (like many others) are excited about and use Linux on the server side but don't take it seriously / want it to be taken seriously as an end-user desktop OS.
I have some legacy linux servers that need an update to a new OS. Is Debian 8 a good choice? All I care about is that stuff just works for as many years as possible, gets security updates and does not break.
Debian is really well made; we've used it at my college to host a yearly programming competition since 2012. It's stable, quick, and rather simple to maintain, especially if you're not doing anything too complex - hosting a couple of PHP files on top of a web server is the full scope of what we've done. I'm not a security expert, so I can't speak to that, but we haven't had any breaches yet, despite all the major things that came out in the last year or so (Heartbleed and Shellshock). I think the best option for you is to try it out on one machine, maybe via LiveCD, and see how you like or don't like it.
Because FreeBSD has really good community support and great documentation, while Debian kFreeBSD is a tiny community. And I really do not see the point - the GNU userspace is mostly available in ports if you want it on FreeBSD, but the FreeBSD userspace is generally as good.
It's an interesting port, but things don't just work. There's a lot of fiddling around when parts of the system expect GNU utilities that aren't available on FreeBSD, and the BSD utilities work differently. ifconfig, for example, does the same job but in different ways, so scripts won't work, etc.
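To illustrate the ifconfig point with a rough example (interface names and addresses are made up, syntax from memory):

    # Linux (net-tools) style:
    ifconfig eth0 192.168.1.10 netmask 255.255.255.0 up

    # FreeBSD style: same job, different syntax and interface naming:
    ifconfig em0 inet 192.168.1.10/24 up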
"Finally, the Debian ports to the FreeBSD kernel, kfreebsd-amd64 and kfreebsd-i386, included as technology previews in Debian 6.0 and Debian 7, are not part of this release."[0]
This is incorrect. Stable releases are supported until one year after the next stable release, which happens to be ~3 years, so it's 2018 for jessie. If the LTS project also decides to support it, that extends to 2020.
Please be aware that with Ubuntu LTS, only the small number of packages in "main" gets support for the full LTS time-frame, while the majority of packages in "universe" gets no official support after the 18 months (?) of a regular release time-frame.
So in practice you don't get a larger number of supported packages than with RHEL/CentOS.
Wow, really? That's nothing! No security updates after that?
> Ubuntu 1404 LTS has support until 2019
That sounds more reasonable. But still a bit short.
> RHEL 7 has support until 2024 (and Centos).
Interesting. Now I understand why I hear "CentOS" so often lately. So far, I only know Debian-based distros. I wonder how much work it would be to switch to one of these two.
There was an article complaining about how RHEL supports a very old version of Python that doesn't work with many popular Python libraries. (Update: RHEL 5 and below is apparently stuck with Python 2.4.)
Long-term support is great and all, but Red Hat can only support so much; the libraries and the rest of the ecosystem around that software or programming language will be dropped by the groups responsible for them. That was my takeaway from the post/user comments.
In general Debian is pretty rock solid, and as a good sysadmin you should stay a version or two behind and you should be pretty set, imo. Waiting until 2024 is crazy in terms of updates and such; I'd rather go the OpenBSD route if you want to go that long.
It's hard to blame RH for that. Last time I checked, Python 2.5 was released just a few months before the first release of RHEL 5. Now they have software collections (https://www.softwarecollections.org) as a workaround, providing optional newer components.
I agree Red Hat is hardly the guilty one. RHEL-based distros have a lot of plumbing and utilities written in Python, so upgrading the system Python is out of the question. Note that it is the system Python: the main reason for Python's existence in RHEL is the system utilities written in it; providing the Python language to customers is secondary. So it's no wonder nobody wants to touch it for reasons other than patching security issues, and users who try to run all the shiny Python libraries and frameworks on it should be banned from using Python ever again :) Just use a newer version, for god's sake (and it's not as if there aren't a dozen different ways of getting one).
(That being said, if the obsolete version in RHEL fits the user's purposes, that's great, and there's no reason to get a new version. But it's wrong for those people to pressure developers into supporting old versions, and it's immoral for FOSS developers to keep supporting 10-year-old releases at the expense of holding back progress. There was a post regarding that point recently; I'll try to find the link.)
Yes, the problem is mostly with the users. The last time I was using CentOS 5 there were alternate repositories (e.g. IUS) providing non-conflicting newer versions of stuff. Alternatively, there's always pkgsrc.
I'm not sure if you're implying that 6 months is too long or just providing more info. I don't think it is too long given that most distributions have stabilization periods where no new versions will be accepted e.g. Debian 8 was frozen on November 5 last year.
> Wow, really? That's nothing! No security updates after that?
First of all, it's 2018, not 2017. On the latter point, there is now an LTS project, which provides +2 years after official support ends, but they only support squeeze for now (since wheezy is under official support and jessie wasn't released yet), and AFAIK it's not decided yet whether jessie will be supported by LTS or not. So it may extend to 2020.
If you want certainty in this regard, Ubuntu LTS is also a pretty good choice.
On CentOS: I don't use it, so I can't comment on it in depth, but beware that the number of officially supported packages is much smaller compared to Debian, so make sure the packages you want to use are supported. (There are semi-official/unofficial repositories, but they may not be maintained as well as the official packages.) (Actually, the same point applies to Ubuntu: only the main and restricted archives are supported by Canonical, and universe/multiverse is where the bulk of the packages reside.)
What are your servers running now? Is there an upgrade path from whatever Linux it is? Do you have a lot of custom configurations?
Debian is widely respected and typically provides two to three years of update support for a given code name (e.g. Wheezy or Jessie). There is a proposal to provide Long Term Support (LTS) for Wheezy, as has been done for Squeeze.
I really don't have any custom needs. Any old linux distro will do.
But only 2 to 3 years? That is very short. What would be a good alternative with longer support? There must be market for it I guess. Don't companies simply want to run their stuff as long as possible? Many companies still use Windows XP. And that's 14 years old.
Any major distros that commit to 10 years of support or something?
Yes, RHEL has 10 year support periods. Red Hat Enterprise Linux is commercial - you pay an annual subscription for updates.
CentOS/Scientific Linux/Springdale Linux are free clones with free updates. Oracle Linux is a free download, but not sure how updates work. Current is version 7 with support until 2024.
PS: if you are asking this kind of question here, you might want to take your local Linux sysadmin for coffee and explain your use cases, applications, hardware spec and likely traffic in detail.
SSLv3 is so 1996... Please, please, please, with sugar on top, use TLS only. Also, there have been lots of changes in the software world over the last 20 years that the SSL codebase (mainly meaning OpenSSL) hasn't kept up with.
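For example, if nginx happens to be the thing terminating TLS, dropping SSLv3 is a one-line change (a sketch; Apache has a similar SSLProtocol directive):

    # inside the server {} block of your nginx TLS vhost:
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;   # no SSLv3 at all
    ssl_prefer_server_ciphers on;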
Upgrade from wheezy. Continue using sysvinit because "continue using what's installed already" trumps "systemd is default for fresh installs". Problem doesn't even exist.
(I actually like systemd, just adding to the list of ways that Debian makes it easy to avoid it, in the hope of the whiners whining less :P)
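For anyone who wants to be explicit about it during the upgrade, something like this should work (a rough sketch from memory; the jessie release notes describe the canonical pinning approach, so check those first):

    # optionally pin systemd-sysv before the dist-upgrade so apt won't
    # switch PID 1 out from under you:
    cat > /etc/apt/preferences.d/no-systemd-sysv <<'EOF'
    Package: systemd-sysv
    Pin: release o=Debian
    Pin-Priority: -1
    EOF

    # then the usual upgrade dance:
    sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
    apt-get update && apt-get upgrade && apt-get dist-upgrade

    # or, if systemd ends up as init anyway, switch back afterwards:
    apt-get install sysvinit-core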
As an AWS user, I switched from Amazon Linux to Ubuntu 14.04 because it's easier to replicate the development environment. No performance/stability issues so far, but I'm curious whether Debian 8 has an edge over Ubuntu 14.04 for a medium-sized website on AWS. My only gripe with Ubuntu is that apt-get doesn't have the latest stable packages (like Amazon Linux does). I'm guessing it would be the same with Debian.
> My only gripe with Ubuntu is that apt-get doesn't have the latest stable packages (like Amazon Linux does). I'm guessing it would be the same with Debian.
Depends on your definition of "stable". Since distros vary widely on what they consider to be "stable", comparing various distros' stable releases is like comparing processors by raw GHz values or dSLRs by megapixels alone.
Debian Stable is the last place to go if you want very recent builds of packages, but the first place to go if you want absolute, rock-solid stability.
Debian Sid (unstable) has the latest versions of each of the packages, though Debian's guidelines are strict enough that testing and sid are oftentimes more stable than the "stable" releases of other distros.
My advice (and this is what I do): Run Debian stable (Jessie, as of today) as a base image, and use an appropriate container for applications that require more up-to-date applications. Best of both worlds.
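As a simplified sketch of what I mean, assuming Docker as the container runtime and the official debian:testing image (nginx here is just an example of "something newer than stable ships"):

    # host stays on Debian stable; the app runs against a newer userland
    # inside a container built from debian:testing
    cat > Dockerfile <<'EOF'
    FROM debian:testing
    RUN apt-get update && \
        apt-get install -y --no-install-recommends nginx && \
        rm -rf /var/lib/apt/lists/*
    CMD ["nginx", "-g", "daemon off;"]
    EOF
    docker build -t newer-nginx .
    docker run -d -p 80:80 newer-nginx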
Actually they call it stable because it doesn't change. It achieves a reputation of stability because of the development process and not shipping until bugs are eliminated.
The definition of "stable" for Debian means that the version number of every piece of software is frozen. Only security fixes (through security uploads) and critical fixes (through point releases) can get into the distribution if they are backported to work with the version currently in Debian. You don't get the latest software but you are ensured that an upgrade won't break anything.
Since the freeze is around 6 months, this means you get 6 months old software when Debian is released. There are some exceptions, like browsers that are too difficult to maintain at the same version.
We believe most people like this definition. This can be frustrating when you need the latest version of nginx but you are happy that upgrading some basic stuff won't break anything on your system: no deprecated configuration option in X, no command-line flag that doesn't exist anymore in Y. All should work exactly as before, with fewer security holes and bugs at each upgrade.
However, if you really want to have the latest version of a selected set of software, have a look at the official Debian backports. This is a great strength of Debian over Ubuntu (where backports are almost nonexistent, with the notable exception of the kernels): there are many backported packages. For example, if you need a more recent version of nginx and you are running Debian Wheezy, you'll get nginx 1.2.1. If you need something more recent (because you want SPDY), you can get nginx 1.6.2 through backports. See here: https://tracker.debian.org/pkg/nginx.
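Using backports is just a one-line addition to your APT sources plus an explicit -t flag, roughly (sketch for Wheezy; adjust the mirror to taste):

    # /etc/apt/sources.list.d/backports.list
    deb http://http.debian.net/debian wheezy-backports main

    # backports are never installed unless you ask for them explicitly:
    apt-get update
    apt-get -t wheezy-backports install nginx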
Backports are packaged from the versions that will be in the next Debian release, so they should keep the same quality as the packages currently in Debian. This is a great strength over random PPAs: some of those are maintained by skillful people, some others are not. If you trust Debian for its packages, the backports are made by Debian Developers too.
For nginx, there is no 1.8 because backports are taken from the next release. As this next release is currently frozen, the version proposed in backports is still 1.6.2.
Using a Debian Stable with backports should allow you to get what you want: stability for most packages but latest releases (and latest bugs/changes) for a selection of packages.
It probably depends what packages you're specifically looking for newer versions of. For "common" web server setups (i.e. LAMP or similar), using Debian stable plus the Percona and Dotdeb repos will give you a rock solid OS with more recent versions of things like PHP, Redis, and MySQL/Percona Server.
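As a rough sketch of adding Dotdeb on Wheezy (the repo line and key URL are from memory, so check dotdeb.org for the current instructions before trusting this; the package names at the end are just examples):

    # add the Dotdeb repo and its signing key, then install newer builds:
    cat > /etc/apt/sources.list.d/dotdeb.list <<'EOF'
    deb http://packages.dotdeb.org wheezy all
    deb-src http://packages.dotdeb.org wheezy all
    EOF
    wget -O - https://www.dotdeb.org/dotdeb.gpg | apt-key add -
    apt-get update && apt-get install php5-fpm redis-server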
If you can identify the specific packages (i.e. php? nodejs? mysql? redis? etc) that you found outdated in Ubuntu it will be much easier to make a suggestion about how appropriate Debian stable will be for you.
On the other hand, sid/unstable tends to have the latest releases of upstream software (golang, docker, redis? ... I can't think of any great examples), but I've found that Ubuntu's packaged versions of upstream software often lag behind by quite a distance, even in the latest branch (e.g. the next, unreleased tree, say 15.10).
I can't say why that is, but I've seen it on multiple occasions. Usually you can resolve this in Ubuntu with a PPA.
> I can't say why that is, but I've seen it on multiple occasions.
Because Ubuntu releases are, well, releases. Early in the release cycle they sync from Debian, they package their own things, then they freeze the versions and release. They don't change the versions of most packages in already-released releases.
Debian unstable on the other hand, has no concept of a release, so maintainers upload new versions of packages into unstable pretty much all the time (except freeze time).
So what you've seen is actually the norm, not the exception.
What you're saying makes sense, except that I would expect 15.10 to always be more current than it is before 2015-10 arrives. In reality, it's often not more current than the released version. If there is a new version of a package in sid, why isn't it in the "bleeding edge" ubuntu next-release? (I'm sure it's a good reason)
In Debian, actually, I'm pretty sure there's never really a freeze in unstable as you mention. They make a new stable release from the testing branch some time (a good long while) after the freeze is called; then for a brief period you have only stable and unstable (and oldstable), and later on a new testing branch is created (not yet frozen) with whatever packages from unstable meet the criteria to go into testing.
When testing is not frozen, the criterion is something like "the package has been in unstable for 2 weeks without any reported bugs". When they're getting ready for a new release, they freeze testing, and then the criteria to get your package in for the next release get more stringent (is it bugfixes only? security issues only? I'm not sure, but it's probably even stricter than I think).
> What you're saying makes sense, except that I would expect 15.10 to always be more current than it is before 2015-10 arrives. In reality, it's often not more current than the released version. If there is a new version of a package in sid, why isn't it in the "bleeding edge" ubuntu next-release? (I'm sure it's a good reason)
I'm not closely following Ubuntu development, but they have 6-month release cycles and they do the "sync from sid" early in the cycle, AFAIK in the first 3 months or so, so you can expect 15.10 to have what was in sid around August. This may be further skewed when:
1. Debian is in freeze. For example, Debian was frozen starting from November '14, which means sid was practically frozen too. The Debian import freeze date for Ubuntu 15.04 was February [0], so while normally you would expect 15.04 to have the latest versions from February, in reality they were the latest versions from October (there are, of course, exceptions). When you consider that the import freeze for 14.10 was August '14 [1], you can see why 14.10 and 15.04 have very similar versions.
2. Ubuntu imports from testing for LTS releases, instead of from sid. This shouldn't matter in an ideal world, where the difference between testing and unstable is 5-10 days, but sometimes packages get stuck in sid so badly that it can cause a difference in what lands in an Ubuntu release.
3. I'm not so sure about this, but if I understand correctly, they do a complete import of Debian at the beginning of the release cycle, and then maintainers can do ad-hoc imports for individual packages until the import freeze date, so the packages that land in the release may also be older than what was in sid near the import freeze date.
I guess you'll see much newer packages on 15.10, though, since sid will be full-speed during the 15.10 cycle, so nothing to worry about for now :)
I don't know Amazon Linux packages but Debian 8 packages are newer than Ubuntu 14.04 ones, so he might find what he's looking for unless he always wants newest versions.
I'd wait a month or two for edge cases to be worked out and then go for it.
If you have a fairly vanilla 7.8 system (i.e. no source compiled stuff on there, mainly just big name packages etc.) then you should be fine doing it now or in a few days.
We updated some simple 7.8 installs to the rc jessie release a month or so ago and everything went fine apart from an obscure bug with monit and inherited umasks. Got around that by manually installing the sid deb of monit.
Edit: the main thing you'll want to do is go read up on systemd ahead of upgrading as it's quite a change and there are still some wrinkles ('systemctl daemon-reload' is a new command we've had to use quite a bit).
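For anyone making the same jump, these were the rough sysvinit-to-systemd translations we reached for most (the service name is just an example):

    # restart a service (was: service nginx restart / /etc/init.d/nginx restart)
    systemctl restart nginx
    # enable or disable a service at boot (was: update-rc.d nginx enable)
    systemctl enable nginx
    # follow a service's logs (was: tail -f /var/log/syslog or the daemon's own log)
    journalctl -u nginx -f
    # after editing or adding a unit file, tell systemd to re-read them:
    systemctl daemon-reload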
Well, if you only have one 7.8 box lying around, then fire up a VM, replicate your setup (best is to stop the machine, tar-gz the whole thing, unpack it in the VM, boot from a rescue disc, and install grub2; if you don't want to go that bulletproof, then just copy sources.list [+ .d], install the same packages, copy /etc, and reboot a few times) and try the upgrade.
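Roughly, the tar route looks like this (a sketch; device names and mount points are examples, and you'll want to adapt it to your partitioning):

    # on the real box (write the archive somewhere that isn't the
    # filesystem you're archiving, e.g. a USB disk or NFS mount):
    tar --one-file-system --numeric-owner -czpf /media/usb/rootfs.tgz /

    # in the VM, booted from a rescue disc, with the target disk
    # partitioned, formatted and mounted at /mnt:
    tar --numeric-owner -xzpf rootfs.tgz -C /mnt
    for d in dev proc sys; do mount --bind /$d /mnt/$d; done
    chroot /mnt grub-install /dev/sda
    chroot /mnt update-grub
    # adjust /mnt/etc/fstab and the network config for the VM,
    # then reboot it and rehearse the wheezy -> jessie upgrade there.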
We have a lot of 7.x instances (bare metal and VMs both) and a few that have been running jessie for about 2 months. The upgrade was flawless. systemd had quirks, but those faded too.
All in all, go ahead, it's yet again a nice little step forward.
Counterpoint: Upgrading from 6 to 7, every single machine I use, from a home server to various professionally managed machines at work, had at least one serious problem. RAID and bootloaders raised a few issues for example. We got everything working eventually, but the amount of wasted time even with very experienced sysadmins looking after some of those machines was silly.
Debian is generally very good with stability for things like security updates and we certainly plan to continue using it. However, our plans for updating this time are more along the lines of "set up completely new machine with Debian 8 from the start, install our own choice of packages and applications, and then systematically migrate data/connectivity from the old systems to the new ones". We expect the time and money costs of having the transition period to be less than the potential downtime if direct upgrades take as much effort as they did from 6 to 7.
Your mileage may vary, Linux has infinite possibilities and ours may just have been unlucky, the plural of anecdote is not data, etc.
Best strategy is having a backup image and testing the upgrade procedure on a test machine beforehand, if possible. This release may be especially problematic because of the systemd change (it is possible to boot with sysvinit from the GRUB menu, but remote upgraders should beware).
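If I remember right, the sysvinit fallback works via a kernel parameter, something like this (assuming the sysvinit package is still installed; treat the exact path as something to verify on your own system):

    # one-off: at the GRUB menu, press 'e' and append this to the "linux" line:
    #   init=/lib/sysvinit/init
    # to make it stick (note: this overrides any existing GRUB_CMDLINE_LINUX,
    # so edit /etc/default/grub by hand if you already set one):
    echo 'GRUB_CMDLINE_LINUX="init=/lib/sysvinit/init"' >> /etc/default/grub
    update-grub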
Did your last upgrade issues stem from the upgrade procedure or were they because of new versions?
I can't remember all of the different problems now, but one I do remember is that if you had a typical set-up with mirrored (RAID1) drives but the boot-related partitions cloned rather than mirrored, one of the bootloaders got upgraded but not the other. That is, the drives were left out of sync and booting from one of the drives wouldn't work properly if the other failed. The thing that really concerned us wasn't so much the specific details here but that this was essentially a silent failure in the upgrade process, combined with a potentially catastrophic failure in a basic system function as a result.
I think there is quite a big difference between the theory of updating a couple of sources files and running a couple of upgrade commands and the practice of manually checking things like basic RAID configuration and reinstalling missing bootloader updates. This time around, the fact that Jessie uses systemd made the discussion for whether to even try a dist-upgrade a very short one, because literally everyone in the room agreed that the probability of failures was too high for that strategy to be worth considering. The substantial discussions were more about migration to fresh machines relatively soon vs. sticking with 7 at least until we know the LTS situation.
Am I understanding this correctly: instead of having the boot partitions configured as a (MD?) RAID set, you had somehow manually cloned them between two disks? A mirrored boot partition works just fine if you're legacy booting... With EFI I guess you have to do manual cloning (which is fragile) or rely on hardware RAID.
Did you use some tool to do that? How do you expect the upgrade process to even be able to take that kind of thing into account?
Without knowing any details it's hard to say if it was an actual bug or just plain old human error, but it sounds like the latter.
I've long forgotten exactly why these systems were first set up that way. Presumably it was because at the time someone was leaving their options open about the RAID set-up for the main drives/partitions and bootloaders of that generation didn't support MD well so keeping boot as a non-RAID set-up was not uncommon. Whatever the history, the fact is that before the automated part of the 6-to-7 upgrade there was a fully working system, and after it there wasn't.
> How do you expect the upgrade process to even be able to take that kind of thing into account?
I don't think it's rocket science to suggest that if you're migrating to a new bootloader, and you've got a system with multiple drives in it (RAIDed or otherwise), and you're installing an OS that is widely used in server or multiple-OS environments, just assuming that you should upgrade the bootloader on one specific drive and ignore anything else is not a great idea. What if the sysadmin installing the update wasn't the person who installed the original and simply hadn't realised how the /boot was set up?
> Without knowing any details it's hard to say if it was an actual bug or just plain old human error, but it sounds like the latter.
There was no "error". The situation before the upgrade was what it was, and after the upgrade the problem was quickly detected and fixed. But it took time and effort to do that, instead of having a smooth, fully automated upgrade process. Again, the fact is that before the automated part of the 6-to-7 upgrade there was a fully working system, and after it there wasn't.
Will the 7-to-8 update now expect everyone performing it to be intimately familiar with the implications of things like systemd? Because I'm betting plenty of people will encounter it for the first time as part of this upgrade cycle.
What about package compatibility? Some packages have been entirely removed in Jessie; see the political debates about FFmpeg vs. Libav for a relatively high-profile example. That is inevitably going to break some people's install scripts/tool recipes/etc.
My point here is that there are significant changes as part of the upgrade, upgrades always carry a degree of risk, and my personal experience (based on several different projects) of the 6-to-7 upgrade process was that the risk was real and the fully automated part of the process was not able to do everything necessary itself. Consequently, I would not recommend that anyone assume a 7-to-8 upgrade will necessarily go completely smoothly and be fully automated either.
[Edit: To be clear, I'm not saying you shouldn't do it or something awful will happen. Nor am I criticising Debian for not anticipating every possible scenario and handling everything completely automatically. I'm just saying my experience last time around was different to kasabali's experience, and as one data point, projects I work on where the experience was not as smooth last time but the desire is to move to 8 quite quickly are generally favouring a clean install and application migration strategy rather than an in-place upgrade. The expectation of those teams is that this will incur less risk and might be faster anyway once you take all implementation and testing effort into account.]
Hmm, this sounds like a difference in expectations. I don't think anyone said that Debian upgrades are fully automated; the package manager does what it can (and it usually does a good job) but it's always the sysadmin's job to verify that the configuration at reboot is sane, especially if there's even a hint of something special in the configuration.
Upgrade scripts certainly could try to predict every crazy thing people do with their computers, but past a certain point, it's not very productive. People are creative.
In the end, the admin must make the decision whether reinstalling and reconfiguring a server has a lower general cost than verifying and potentially fixing an upgraded installation.
Of course in the end it's the sysadmin's job to administer the system, but that's also a convenient way to shift responsibility for problems away from the tools. As I mentioned, the problems were quickly detected and subsequently fixed in the cases I'm aware of. But that still required time and effort, and since realistically no sysadmin is going to be an expert on every part of their system that might be affected by an OS upgrade on this scale, I still think it's fair to highlight the risk.
How could any update to the bootloader have been installed properly in that setup?
If you did something unorthodox, such as building a boot process dependent on a manual step to clone the drive, you surely must be prepared to deal with this in any number of situations that can arise?
All non-standard solutions carry a debt where all future admins must understand what you built and how this affects operation.
Given that Debian's standard installers have always been pretty bad at configuring any non-trivial disk set-up without manual intervention, I feel some people here are a little too quick to criticise. As I said in another post, I don't know why the systems where that issue came up were originally set up as they were, but there have certainly been times, particularly before the current generation of bootloaders, when that sort of set-up wasn't unusual.
The point remains that this doesn't matter. Before the upgrade, there was a fully working system. After the automated part of the upgrade, there wasn't. The original question was how safe the upgrade from 7 to 8 is, and this is a demonstration of the fact that such upgrades can carry risk. I'm not saying don't do them, I'm not expecting Debian maintainers to be omniscient, and I'm not telling you your child isn't beautiful. I'm just saying if you're thinking about moving from 7 to 8, be aware of the potential that there will be things the automated tools can't or won't do for you that may break your system, and plan your upgrade or other migration strategy accordingly.
No one argues the packaging system can handle every possible situation. It's just that this case seems, on the face of it and without knowing any of the details, to have been one where the system was manually placed in a state where the updater was broken.
I'm not a DD and I have no vested interest in it, but that particular data point is an outlier no matter how you look at it.
There are more obvious situations where updates will break your system. Most common probably when you've installed third party packages with dependencies on system software. But that's not generally what's referred to when asked if the update process is stable. Such things will break no matter how stable the process in itself is.
Depends what you're using it for. If you're putting Debian on a server, go with netinst, deselect everything in tasksel during the install and then apt-get only what you need post installation.
That way you start with a very lean < ~700MB base install with no unnecessary garbage on your system to worry about.
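To keep the install lean afterwards, you can also tell APT not to drag in recommended packages by default (a small sketch; the packages at the end are just examples):

    # don't let apt pull in Recommends/Suggests automatically:
    cat > /etc/apt/apt.conf.d/99no-recommends <<'EOF'
    APT::Install-Recommends "false";
    APT::Install-Suggests "false";
    EOF

    # then install only what the box actually needs, e.g.:
    apt-get update
    apt-get install openssh-server nginx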
If you're not sure you can get the wifi card working in the installer, then get the DVD. Otherwise (if you can get the wifi card recognized, or you're installing via ethernet, etc.), there's no reason to.
Anyways, thanks again :)