Do you know what else 4.17 does? It optimizes idle loops so your laptop can run cooler for longer.
Phoronix's initial testing seems to suggest 10-20% lower power usage while idle, which is fantastic news for those who own a Linux laptop.
Ref: https://www.phoronix.com/scan.php?page=article&item=linux-41...
Since 4.0 or so, Linux has been rock solid for me, so I no longer look forward to new kernel releases, but this is one of those times I'll run the cutting-edge kernel because it is so cool. If you run a relatively recent version of Ubuntu, you can also test it by googling "Kernel PPA". It's three clicks away at most.
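If you'd rather skip the clicking, the command-line route is roughly this (a sketch only; the exact .deb names vary per mainline build, so treat the wildcards below as placeholders for files downloaded from kernel.ubuntu.com/~kernel-ppa/mainline/):

    # install the mainline kernel packages you downloaded, then reboot into them
    sudo dpkg -i linux-headers-4.17*.deb linux-image-*4.17*.deb linux-modules-4.17*.deb
    sudo reboot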
There are lots of reasons that might happen. Many businesses are tied to peaks at specific times of day, or to seasonality, and aren't yet sophisticated enough to build in elasticity.
Even AWS has no reasonable automated way to be elastic vertically, e.g. automatically changing instance size. And some apps can't scale well horizontally.
My database servers, for example, are built for "Easter Sunday Attendance" and underutilized the rest of the time. We do better with things like app and web servers, but there are inefficiencies.
The linked article mentions (on page 3) that, on servers, the gain was seen when not idling: "On this Tyan server, the idle power usage ended up being the same across these three most recent kernel branches. However, the power usage under load was found to have some nice improvements."
Also, if you're designing for high availability, you're going to overprovision by definition, otherwise the loss of a server or datacenter is going to cause a cascading failure.
Seems so. From Phoronix: "I began the weekend work with trying out a Lenovo ThinkPad X1 Carbon with an Intel Core i7 5600U processor... It's a mature Broadwell platform that has been working well under Linux for years. But not exactly a system I would expect years later to have a significant power boost from."
I wonder if the time has come to add user-configurable resource-consumption throttles to browsers, e.g. settings for max CPU, max FPS (foreground and background) for developer-triggered redraws, etc. On my laptops I’d have it ratcheted all the way down, because not needing to plug in and my lap not being cooked are more important than whatever frills the websites I visit have. And if some site can’t run properly while throttled, well, there’s probably a more lightweight alternative I should be using instead.
You can do that with ulimit. But it affects the whole browser, so it is not very granular.
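A rough sketch of what I mean (the numbers are arbitrary, and the second variant uses cgroups rather than ulimit, but it gives a percentage-style cap):

    # cap total CPU time (in seconds) for everything launched from this shell
    ulimit -t 3600
    firefox &

    # or a percentage-style CPU cap via cgroups on systemd machines
    systemd-run --user --scope -p CPUQuota=50% firefox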
Tab-specific and/or domain-specific resource management would be more useful.
This sounds amazing and I'm looking forward to testing it. I'm thinking of finally pulling the trigger and getting my first non-Mac laptop in over half a decade. I've been rather impressed with how well Linux behaves on laptops nowadays. This is the cherry on top.
My best year, I removed 10,000 lines of code more than I added, without removing any functionality from the project (and in fact adding some new features). It isn't always a good thing to do this, but when it is, you know it as soon as you start reading the code.
I often wish I could be paid just to refactor and rewrite existing code bases to make them more maintainable.
> I often wish I could be paid just to refactor and rewrite existing code bases to make them more maintainable.
Yeah, me too--I'm very good at it and it's quite satisfying. Unfortunately it's very hard to communicate the business value of it (although the business value is huge in some cases).
Give me a ring if you all ever find a place that lets us use our powers to their fullest. I feel like realizing that removing more code than you add is usually a good thing is a tipping point in a coder's career.
> Every new line of code you willingly bring into the world is code that has to be debugged, code that has to be read and understood, code that has to be supported. Every time you write new code, you should do so reluctantly, under duress, because you completely exhausted all your other options.
It's worth noting: code you import counts toward your total line count. Don't think that because someone else who doesn't work with you wrote the code, it doesn't count. In some ways, that's worse. I've spent all day today debugging a no-longer-maintained library which is used in a legacy codebase I'm maintaining.
This is commoditization. People "tried and reinvented" the wheel many times back in the day for lack of time, knowledge, legal/licensing reasons, missing essential features, trying to be smart, or many other reasons. And now there are more libraries that solve the same thing far better, which automatically makes all the old ones legacy. It's like inflation: why take money away from people when you can print more?
Today, while I love the simplicity of Go, I shudder to think how many of the copy-pasted lines of Go code I've written will be commoditized in the next year or two, automatically becoming legacy. And there will be nobody to give a ring to except my past self.
I understand that SPDX can easily be used in new projects, for example.
But when an earlier project has a license that explicitly states something like "The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.", can you replace it by a single SPDX link?
I think the technology allows you to easily link to something on the internet instead of providing an offline text file, but is it lawful? And is it justifiable, given that having _n_ text files saying the same thing is easily compressible?
It got me thinking: does removing lines of not-code really make things smaller, or only a little?
For my own code I will use spdx references in the source code files but still have a LICENSE file in the repo root like I already do with the full text of the license.
IANAL so I can’t say what you legally can and cannot do with other people’s code, but I would think that placing a copy of each of the licenses used by others in your repo, naming them LICENSE_THIRDPARTY_MIT, LICENSE_THIRDPARTY_GPL, etc., and mentioning in your README that these correspond to the SPDX references found in the source files would mean that you were still in compliance.
As for the question "does removing lines of not-code make the file smaller or just a little": it does not reduce the SLOC count, of course. Personally, I just find the idea of SPDX appealing because it means that when I open files to edit them I don’t have to scroll past a lot of license text first. Additionally, scanning source files for the license in use becomes simpler.
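For what it's worth, the tag itself is just a one-line comment at the top of each file, something like this (MIT is only an example here):

    #!/bin/sh
    # SPDX-License-Identifier: MIT
    # Full license text lives in LICENSE at the repository root.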
Not a complete answer to your question, but at least if you hold the copyright to the source code (including all modifications), you're of course free to re-license it.
"The architectures that are gone are blackfin, cris, frv, m32r, metag,mn10300, score, and tile. And the new architecture is the nds32 (Andes Technology 32-0bit RISC architecture)."
So I understand the whole "SLOC doesn't always need to be positive to have made useful contributions to a codebase" thing, I do it fairly frequently. However, is the argument for removing support ("known working good code"), aka functionality, a good thing? Do people now need to draw a line in the sand and say: If I want to put linux on my m32r sound mixer, I can't use a kernel newer than 4.16?
I'm not really sure I'm arguing for or against the dropping of support, more just curious about others thoughts.
The problem is that often enough it is not "known working good code", because nobody has been compiling and/or running and/or stress-testing this code for a long time. Given the constant churn in the kernel, refactoring of APIs, etc., it's quite likely that subtle (or not so subtle) breakages creep into that code over time. (Even if the people who perform this refactoring try to update these archs too; or perhaps I should say "especially if", because usually you can't test them, after all.)
Leaving it in gives the wrong impression that this is "known working good code", which is one reason to remove it. Another is that removing them makes it easier to be agile and refactor things, because you don't have to worry about these archs anymore.
We should be able to test that code. We could if, in order for an architecture or piece of hardware to be considered for inclusion, there were a software-based emulation tool that could be used to run a full set of tests against a kernel build.
It's a lot of work to produce that and to write such a test suite, but it can grow incrementally. We have very good (and reasonably precise) emulators for a lot of the PC and mobile space and having those running would give us a lot of confidence in what we are doing.
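As a rough illustration of what already exists: QEMU can boot-test a cross-built kernel with no real hardware at all. A minimal sketch for arm64 (the file names are placeholders, and the "virt" machine is just one common target):

    # smoke-test an arm64 kernel plus a minimal initramfs under QEMU
    qemu-system-aarch64 -M virt -cpu cortex-a57 -nographic \
        -kernel Image -initrd rootfs.cpio.gz \
        -append "console=ttyAMA0 panic=1"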
So you want to write a new emulator each time Intel releases a new chip? I think you're vastly underestimating the scale of this task.
Video game emulators are possible because there is only one (or a very small number of) hardware configuration(s) for each console. Emulators for mobile development are possible because they simulate only the APIs, not the actual physical hardware, down to the level of interrupts and registers.
I don't think x86 would be a particularly attractive target for emulation in this case - x86 hardware is readily available and testing is much easier than, say, SuperH or H8.
Intel probably has internal tools that (somewhat) precisely emulate their chips, and it'd probably be very hard to persuade them to share, but they seem committed to making sure Linux runs well on their gear, so it's probably not a huge problem.
I think of this as a way to keep the supported architectures as supported as possible even when actual hardware is not readily/easily (or yet) available for testing. One day, when x86 is not as easy to come by as today, it could prove useful.
It's good to keep the software running on more than one platform, as it exposes bugs that can easily elude us. Also, emulators offer the best possible observability for debugging. If they are cycle-accurate, then it's a dream come true.
We are able to create tests with emulated hardware from a specification, but writing emulation that takes into account quirks, edge cases, and speculative behaviour would be a great amount of work, even for the simplest devices.
I'd recommend reading the Dolphin emulator release notes for a reference on how much work is required to properly emulate hardware such that actual software runs without glitches and errors, even for (AFAIK) only three hardware sets.
I believe quirks would be added as they are uncovered. It'd also become a form of documentation. Emulation doesn't need to be precise to be valuable - if we can test it against an emulator on a commodity machine before testing it on metal on a somewhat busy machine, it's still a win - we don't need to waste time on a real multimillion zSeries or Cray if it crashes the emulator.
That way we would be able to create integration test coverage of 100% while still being able to panic on real hardware, until all messy behaviour is implemented. It's like writing unit-test kind of integration tests which then fail when provisioned :)
Passing tests and then failing when provisioned is still better than just failing when provisioned. At least you know that it should work (and maybe does, on different variations of the hardware).
I don't think covering all devices available is necessary before this approach brings some benefit to kernel developers. Also, this service would not be necessary if all phones ran the same version of Android and had displays with the same number of pixels, none of which is that much relevant for kernel developers.
Code that isn't executed and doesn't have anyone to care for it will often 'rot' inside of a larger codebase in my experience. When that happens, it adds mental overhead whenever anything related is refactored or changed, and can sometimes do nothing but create barriers for further improvements.
In this case, it looks like it was a number of unused architectures that were being put to pasture - anyone who is interested can look through commit history and pull back what they need if they're invested enough.
Wrong google hit. It's a minor 32-bit architecture from Renesas that seems to be targeted at ECUs. They're still on sale but I doubt there's much consumer demand to run Linux on them. They have fairly limited Flash.
This is a case where a difference in degree makes a difference in kind.
Some of these haven't been sold for over 10 years and no one knows who still has one or where to get a compiler for them. Some of them are only a few years old, but have never run an upstream linux kernel (they always ran their original hacked-up/ported kernel), and again you can't find a C compiler from the last 5 years that supports them.
Linux does not drop support for architectures lightly; it was hanging on to most of these for years when they were clearly unused, untested zombies. And, FWIW, Sound Blaster sound cards from the 90s are still supported ;)
I'm unfamiliar with kernel development cycles, but there might be some amount of maintenance needed with each patch to ensure changes work across the various supported architectures, in which case leaving them in without updating them would result in an insecure, increasingly buggy mess.
It _potentially_ paves the way for a net increase in functionality. If you can make changes without worrying about if it breaks some obscure architecture that you know no one is using, that _could_ make the process smoother and thus lead to the easier inclusion of functionality that will actually be used.
sure, Linus announces more lines of code were removed than added and the crowd goes wild, but when I do it people are all "who broke the build" and "how is the repository totally empty again."
I am mildly surprised that Blackfin support has been dropped. A few years ago, many digital cameras had a Blackfin for both GUI and DSP duties. I remember working with SHARC (a Harvard-architecture CPU from Analog Devices) and seeing the main advantage of Blackfin as being that it could run Linux and, supposedly, was better equipped to run a complete GUI with several applications. I expected AD to take advantage of this and support Linux for some time, but instead it rotted. Good thing I did not go down this route.
Also, I guess the architecture pruning was one reason he decided not to go with the 5.0 version. It would give even more meaning to the version number.
Interesting. I've been a bit removed from DSPs for some time; I wonder which multi-core processors AD is talking about. Are these just common ARM cores and the like, or do they have some new architecture in the pipeline I haven't heard about?
I wish it said that kind of thing in the changelogs more often (in general). I would be much more eager to download updates if I knew the app was getting slimmer, and not fatter.
On a side note, I'm annoyed Skype now forcibly updates itself on app launch. I have no idea how big that app is getting but I just assume it's slowly getting bigger and bigger.
To be fair to Linux, it does support an ever-growing list of hardware. It’s not exactly lean, but continually increasing in size is hard to avoid in its case...
That could have been avoided if they hadn't been so insistent on all drivers being part of the kernel.
Then again, for as much trouble as that policy causes it is difficult to argue with its success, since Linux supports more hardware than any other open source OS and most proprietary ones too (or all, depending on how you count ancient hardware).
Linux is kind of unusual as far as large software projects go (IMO.) It has very few build dependencies and the quality of the code is usually higher than average.
Same here, I only ever used menuconfig since both config and xconfig were impractical. I'd be curious to know how many people still compile their kernel today for normal use (i.e. not for testing or kernel development). Back in the day I did, because a new card often meant the driver (if any) was available as source only, while today nearly everything has already been built as modules and can be installed as such.
I compile my own ... for fun mostly. There are performance gains in selecting the correct CPU architecture for your machine. I have some old machines and it really makes a difference vs. distro stock kernels.
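For reference, on x86 that selection lives under "Processor type and features" -> "Processor family". A sketch of how I set it (the config symbol names below are x86-specific and can differ between kernel versions):

    # interactive:
    make menuconfig
    # or non-interactively, e.g. for a Core 2-era machine:
    ./scripts/config --disable GENERIC_CPU --enable MCORE2
    make olddefconfig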
The modern procedure (for upgrading) is pretty straightforward:
make oldconfig; make; make modules_install; make install; reboot
Unfortunately, feature-cram and bloat seem to be the mostly inescapable reality of software, even in many open source projects where you don’t have someone from sales or marketing breathing down your neck for moar moar moar. It is nice when project maintainers can spend time on bug fixes, performance improvements, memory optimization, etc. Even better when a project can agree that they are “done” with features and all additional changes are maintenance.
I think it is one of open source's biggest failings that it succumbs to feature bloat at least as easily as proprietary software does, just for different reasons. The problem is that open source people generally work on what interests them, and simple solutions are boring.
To quote de Saint Exupéry "...perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away..."
I have no idea how you read that tone from the post. It was as matter of fact as you can get save for a bit of excitement at the end about the total LOC count going down.