
So I understand the whole "SLOC doesn't need to be positive to make a useful contribution to a codebase" idea; I delete code fairly frequently myself. But is the argument for removing support ("known working good code"), i.e. functionality, actually a good one? Do people now need to draw a line in the sand and say: if I want to put Linux on my m32r sound mixer, I can't use a kernel newer than 4.16?

I'm not really sure whether I'm arguing for or against dropping support; I'm more just curious about others' thoughts.




The problem is that often it is not "known working good code", because nobody has been compiling, running, or stress testing it for a long time. Given the constant churn in the kernel, the refactoring of APIs, etc., it's quite likely that subtle (or not so subtle) breakages creep into that code over time. (Even if the people doing the refactoring try to update these archs too; or perhaps I should say "especially if", because usually you can't test them, after all.)

Leaving it in gives the wrong impression that it is "known working good code", which is one reason to remove it. Another is that removing these archs makes it easier to stay agile and refactor things, because you don't have to worry about them anymore.


> because usually you can't test them

We should be able to test that code. We could, if inclusion of an architecture or piece of hardware required a software-based emulation tool that could run a full set of tests against a kernel build.

It's a lot of work to produce such an emulator and to write such a test suite, but it can grow incrementally. We have very good (and reasonably precise) emulators for a lot of the PC and mobile space, and having those running would give us a lot of confidence in what we are doing.
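
To sketch the kind of harness I have in mind (every path, machine flag, and marker string below is an assumption invented for illustration, not something the kernel tree ships): boot each freshly built kernel under QEMU with a trivial initramfs whose /init prints a known marker, and fail the build if the marker never appears on the console.

    # Hypothetical smoke-test harness: boot each built kernel under QEMU
    # and fail if the console never shows a marker printed by a trivial
    # /init in the test initramfs. Paths, flags, and the marker are all
    # illustrative assumptions.
    import subprocess

    BOOT_MARKER = "SMOKE-TEST-OK"  # echoed by /init, which then powers off

    # arch -> (QEMU binary, machine flags, kernel image, initramfs, console=)
    TARGETS = {
        "x86_64":  ("qemu-system-x86_64", ["-m", "512"],
                    "build/x86_64/bzImage",
                    "build/x86_64/initramfs.cpio.gz", "ttyS0"),
        "aarch64": ("qemu-system-aarch64",
                    ["-M", "virt", "-cpu", "cortex-a57", "-m", "512"],
                    "build/aarch64/Image",
                    "build/aarch64/initramfs.cpio.gz", "ttyAMA0"),
        # ...entries for the minority archs QEMU models would go here
    }

    def boots(arch: str, timeout: int = 180) -> bool:
        qemu, flags, kernel, initrd, console = TARGETS[arch]
        # panic=-1 reboots immediately on panic; -no-reboot turns that
        # into a QEMU exit, so a crashed boot terminates the process.
        cmd = [qemu, *flags, "-kernel", kernel, "-initrd", initrd,
               "-nographic", "-no-reboot",
               "-append", f"console={console} panic=-1"]
        try:
            out = subprocess.run(cmd, capture_output=True, text=True,
                                 timeout=timeout).stdout
        except subprocess.TimeoutExpired as e:
            out = e.stdout or ""  # keep whatever console output we got
            if isinstance(out, bytes):
                out = out.decode(errors="replace")
        return BOOT_MARKER in out

    if __name__ == "__main__":
        for arch in TARGETS:
            print(arch, "OK" if boots(arch) else "FAILED")

Even something this crude, run on every merge, would at least distinguish "still boots under emulation" from "quietly bit-rotted years ago" for any arch QEMU can model.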


So you want to write a new emulator each time Intel releases a new chip? I think you're vastly underestimating the scale of this task.

Video game emulators are possible because there is only one (or a very small number of) hardware configuration(s) for each console. Emulators for mobile development are possible because they simulate only the APIs, not the actual physical hardware, down to the level of interrupts and registers.


I don't think x86 would be a particularly attractive target for emulation in this case - x86 hardware is readily available, and testing on it is much easier than on, say, SuperH or H8.

Intel probably has internal tools that (somewhat) precisely emulate their chips, and it'd probably be very hard to persuade them to share, but they seem committed to making sure Linux runs well on their gear, so it's probably not a huge problem.

I think of this as a way to keep the supported architectures as well supported as possible even when actual hardware is not readily (or not yet) available for testing. One day, when x86 is not as easy to come by as it is today, it could prove useful.

It's good to keep the software running on more than one platform, as it exposes bugs that can easily elude us. Also, emulators offer the best possible observability for debugging; if they are cycle-accurate, then it's a dream come true.


We are able to create tests against hardware emulated from its specification, but writing emulation that takes into account quirks, edge cases, and speculative behaviour would be a great amount of work, even for the simplest devices.

I'd recommend reading the Dolphin emulator release notes for a reference on how much work is required to properly emulate hardware so that actual software runs without glitches and errors, even for (AFAIK) only three hardware configurations.


> writing emulation taking into account quirks

I believe quirks would be added as they are uncovered. It'd also become a form of documentation. Emulation doesn't need to be precise to be valuable - if we can test against an emulator on a commodity machine before testing on metal on a somewhat busy machine, it's still a win - we don't need to waste time on a real multimillion-dollar zSeries or Cray if the kernel already crashes in the emulator.
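
As a toy illustration of what I mean by quirks-as-documentation (the device, the erratum, and every name here are invented for the example): the emulated device model carries an annotated table of the quirks it reproduces, so the model doubles as an errata list.

    # Invented example: a toy emulated UART whose known hardware quirks
    # live in one annotated table, added as they are uncovered on metal.
    from dataclasses import dataclass, field

    # Each entry documents where the behaviour was observed, so the
    # emulator doubles as errata documentation for the real part.
    UART_QUIRKS = {
        "tx_ready_glitch": "erratum #12 (invented): TX-ready reads as set "
                           "for one extra poll after the FIFO fills",
    }

    @dataclass
    class ToyUART:
        fifo: list = field(default_factory=list)
        _glitch_polls: int = 0  # state for "tx_ready_glitch"

        def write_byte(self, b: int) -> None:
            if len(self.fifo) < 16:
                self.fifo.append(b)
            if len(self.fifo) == 16:
                self._glitch_polls = 1  # arm the documented glitch

        def tx_ready(self) -> bool:
            if self._glitch_polls:  # reproduce "tx_ready_glitch"
                self._glitch_polls -= 1
                return True
            return len(self.fifo) < 16

A driver that trips over the glitch fails the same way in the emulator as it would on hardware, and the table tells you exactly why.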


That way we would be able to reach 100% integration test coverage while still panicking on real hardware, until all the messy behaviour is implemented. It's like writing unit-test-style integration tests which then fail when provisioned :)


Passing tests and then failing when provisioned is still better than just failing when provisioned. At least you know that it should work (and maybe does, on different variations of the hardware).


> We have very good (and reasonably precise) emulators for a lot of the PC and mobile space

If that were true, services such as AWS Device Farm wouldn't exist: https://aws.amazon.com/device-farm/


I don't think covering all available devices is necessary before this approach brings some benefit to kernel developers. Also, that service would not be necessary if all phones ran the same version of Android and had displays with the same number of pixels, neither of which is particularly relevant to kernel developers.


Code that isn't executed and doesn't have anyone to care for it will often 'rot' inside a larger codebase, in my experience. When that happens, it adds mental overhead whenever anything related is refactored or changed, and can sometimes do nothing but create barriers to further improvements.

In this case, it looks like it was a number of unused architectures that were being put out to pasture - anyone who is interested can look through the commit history and pull back what they need if they're invested enough.


> m32r sound mixer

Wrong Google hit. It's a minor 32-bit architecture from Renesas that seems to be targeted at ECUs. The chips are still on sale, but I doubt there's much consumer demand to run Linux on them; they have fairly limited flash.

http://www.linux-m32r.org/


Heh woops, thanks for the correction.


This is a case where a difference in degree makes a difference in kind.

Some of these haven't been sold for over 10 years, and no one knows who still has one or where to get a compiler for them. Some of them are only a few years old, but they have never run an upstream Linux kernel (they always ran their original hacked-up/ported kernel), and again you can't find a C compiler from the last 5 years that supports them.

Linux does not drop support for architectures lightly; it hung on to most of these for years when they were clearly unused, untested zombies. And, FWIW, Sound Blaster sound cards from the 90s are still supported ;)


I'm unfamiliar with kernel development cycles, but there might be some amount of maintenance needed with each patch to ensure changes work on the various supported architectures, in which case leaving them in without updating them would result in an insecure, increasingly buggy mess.


It _potentially_ paves the way for a net increase in functionality. If you can make changes without worrying about whether they break some obscure architecture that you know no one is using, that _could_ make the process smoother and thus lead to easier inclusion of functionality that will actually be used.



