Improved default settings for Linux machines (tobert.github.io)
140 points by dctrwatson on June 24, 2014 | 95 comments


Sorry but this is really bad 'default' advice. Shocking, but defaults often are default for a reason. Cranking everything up to 11 is a sign of ignorance in which case you need to step back and understand what you are doing first.

The mmap, file-max, and SHM advice is application-dependent. Understand what your system is doing, and only increase these if necessary. E.g. PostgreSQL < 9.3 is the only large user of SHM I can think of offhand.

The limits.conf advice is also bad. You should have a safety net here and increase these as needed and per user in /etc/security/limits.d
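For instance, a minimal sketch of a per-user override (the username and numbers are made up, not a recommendation):

  # /etc/security/limits.d/90-appuser.conf -- hypothetical service account
  appuser  soft  nofile  16384
  appuser  hard  nofile  65536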

A less harmful guide would be something like "these are the knobs you may need to turn for certain apps, and here is the documentation on what they affect" - this looks a bit better https://wiki.archlinux.org/index.php/sysctl


Ya, this reminds me of car modders who suggest stupid things like drilling holes in your air intake. Any variable whose change (1) is cheap or free, (2) doesn't violate some emissions standard or whatnot, and (3) doesn't cut into profit of a pricier model, is practically guaranteed to be set by the manufacturer at its optimal value for the vehicle's intended use. They're not morons and they have an interest in maximizing the car's utility.

Sysctl variables in Linux meet all these same criteria. Linux is already tuned for general use (or is damn close to it). Any knob you tweak is likely to make things worse in a way you don't understand. Just leave things unless you have a specific use case that requires different tuning.


> is practically guaranteed to be set by the manufacturer at its optimal value

The optimal value for what? Speed? Performance? Comfort? Economics? I don't drive a car, but wouldn't trade-offs apply to car tuning just like they do to everything else?


You clipped my sentence. "For its intended use", I said. Sporty cars are tuned for performance. Cushy cars are tuned for comfort. Econoboxes are tuned for economy. Of course if you want to make your Yaris do 0-60 in 6.0 s there are changes you can make. But drilling random holes on your STi ain't gonna do shit for its time on the track.


Sporty cars are tuned for performance, but the really high-end ones are often speed-limited, too; I guess some of these settings are the equivalent of removing / turning off the speed limiter, if you know what you're doing / can drive on a proverbial track where you have need for those speeds.


That's exactly his point. Those speed limiters aren't there to arbitrarily limit your fun, they're there because the stock wheels and tires can't handle greater speeds. If you don't understand why the limit is there in the first place, you're going to have a nasty surprise when you exceed the tires' limits while going well north of 150 MPH.


In fact any RDBMS of a similar design to Ingres requires a larger shmmax value, including Postgres, Sybase ASE, and Informix.


I put together a simple page of some notes here:

http://tweaked.io/guide/kernel/

In practice the file-handles tweak is the one that many people seem to forget, and can make a big difference for heavy webservers.
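A quick way to sanity-check whether a box is anywhere near the limit before touching anything (the 500000 ceiling below is purely illustrative):

  # first field: allocated handles; third field: system-wide maximum
  cat /proc/sys/fs/file-nr
  # raise the ceiling only if the allocated count is actually approaching it
  sysctl -w fs.file-max=500000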


Instead of blindly suggesting noatime, investigate nodiratime and relatime, which may or may not be what one needs.

Side note: noatime is possible on OSX too [0]

[0]: https://github.com/tlvince/noatime-osx/blob/master/com.tlvin...
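For reference, a minimal fstab sketch of the options in question (devices and mount points are placeholders):

  # relatime updates atime only when it is older than mtime/ctime;
  # noatime/nodiratime skip atime updates entirely
  /dev/sda1  /      ext4  defaults,relatime            0 1
  /dev/sdb1  /data  ext4  defaults,noatime,nodiratime  0 2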


mplayer and VLC also use SHM and are completely broken with these settings.


Red Hat and CentOS have the command "tuned-adm", which has various machine profiles with settings like this. It is an official thing supported by the vendor.

https://access.redhat.com/site/documentation/en-US/Red_Hat_E...

E.g. when I run it on one of our KVM hosts:

  $ tuned-adm list
  Available profiles:
  - throughput-performance
  - laptop-ac-powersave
  - virtual-guest
  - latency-performance
  - enterprise-storage
  - default
  - spindown-disk
  - desktop-powersave
  - virtual-host
  - laptop-battery-powersave
  - server-powersave
  Current active profile: virtual-host
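Switching profiles is a one-liner, something like (the profile name is just an example):

  $ tuned-adm profile latency-performance
  $ tuned-adm list | grep Current
  Current active profile: latency-performance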


The git repo is here: https://git.fedorahosted.org/cgit/tuned.git/tree/profiles

Easy enough to implement your own scripts using the profiles as a basis.


This is a very clean approach that prevents people from reinventing the wheel. I just found this (although not recently updated), which is tuned ported to Ubuntu: https://github.com/edwardbadboy/tuned-ubuntu


This is a great feature. You also have the ability to create and use your own profiles.

I wonder if they made any new improvements in RHEL 7?


"this disables swap entirely, which I think is virtuous" -- sigh.

Swap isn't some artifact from the days of 640k, used only because memory is expensive. Shit is always stored on disk; swap just allows that shit to be unused pages of active programs rather than actively used pages of files on disk.

Without swap, you force the kernel to prioritize cold code paths of rarely used daemons over, say, your web browser's cache. That's just dumb.


For a desktop with a sufficient amount of memory, swap is just bad in my experience. When I was using swap on my desktop system, which I usually have running 24/7, I'd often come back in the morning to find that Linux had decided to swap out my X session or something stupid like that. How much of my 16GB of memory will a "rarely used daemon" use? Insignificant compared to how annoying it is to have to wait for your interactive applications to be swapped back in when that happens.


It sounds like you had a process running over night that needed enough memory to force X (or whatever) to swap. Without the swap, that memory-hungry process would presumably have run less optimally (if it got memory allocation failures and adjusted its behavior to utilize less memory) or it would have simply died. It's not so clear that having processes die because they can't get memory is better than having something else run slowly, in general.

Edit: Possible caveat: Maybe whatever was running overnight did a lot of disk I/O and the OS decided to cache it at the expense of moving idle processes to swap (not sure if Linux does that or not).


"Maybe whatever was running overnight did a lot of disk I/O and the OS decided to cache it at the expense of moving idle processes to swap (not sure if Linux does that or not)."

In my experience (Ubuntu 9.10, kernel 2.6.31), with the default settings it does not do this during intensive disk I/O unless there is very little free memory to start with.

In other words, if the system is not low on memory to start with, I can start a bunch of processes that read and/or write multiple terabytes of data and when I come back the next morning little or no additional swap will have been consumed.


How much swap space should you allocate for e.g. 16GB RAM?

Every reference I've read suggests there's no need for swap space once you have more than ~2GB of RAM, but I find that extremely hard to believe.


I find this an interesting, common, conceptual misunderstanding.

When somebody asks this question, he always thinks this way:

- I have a system "A" with x GB of ram, with y GB of swap.

- I have a system "B" with x+y GB of ram, and no swap, because it has all the virtual memory A is using.

Well, the problem is that one should not compare system B with system A; one should compare system B with system C:

- I have a system "C", with x+y GB of ram, and z of swap.

System C will potentially perform better than B.

The generic explanation is that the kernel may decide that it's better to swap out some data and use the space for caching purposes. This is a concept that, within limits, is not related to the amount of RAM.


As long as 'z' is not proportional to x or y that might be fine. If you consider 'z' as a percentage of your RAM then you'll notice that your expensive server with lots of RAM is way slower than a cheap one with small RAM and smaller swap, just because it takes less time for the kernel to fill the swap and finally kill the offending process.


I'd allocate swap on the order of how much RAM your applications use. So if they need 2 GB of RAM to be comfortable, 1-4 GB of swap will give them room to be paged out when necessary. In most situations disk space is cheap, so erring on the side of too much won't hurt.
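If you didn't set aside a swap partition, a swap file is easy to add later; a rough sketch (the 2G size is arbitrary):

  # fallocate may not work on every filesystem; dd from /dev/zero is the fallback
  fallocate -l 2G /swapfile
  chmod 600 /swapfile
  mkswap /swapfile
  swapon /swapfile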


If you plan to hibernate, I believe you need at least 16GB of swap (i.e. the amount of memory you have).


You don't need swap as big as the amount of memory to hibernate. https://wiki.archlinux.org/index.php/Suspend_and_Hibernate#H...


I turned off swap on my desktop. Even if you have a lot of RAM, swap just becomes an annoyance during development. Sooner or later you have some runaway process eating up all the memory. If the system starts swapping it will take a long time to recover (unless you're lucky enough to hit Ctrl-C within a few seconds and the app responds to it). Even with swap off there are long pauses when the system is low on memory (maybe it just waits too long before the OOM killer kicks in), but it's more usable than with swap on.


Unless you care about latency. Just because an indexer ran through my hard drive doesn't mean I should have to wait 60+ seconds next time I log in for everything to page back into memory.


I haven't had this experience in 10 years of using a desktop system that updates the locate DB every night.


Since we are sharing Anecdata, my experience is quite the opposite of yours. SSDs might be the middle path here.


Nah, I cultivate Chrome tabs. After sleeping my computer for a month, playing games, running VM's, doing clojure programming, etc; I found switching to an old Chrome tab or maximizing that Clojure book I hadn't looked at in 3 weeks would cause quite the pause. This is with SSD's all the way down.


locate indexer cronjob should never be enabled on a production box. Problem solved.


If you think swap introduces latency, you need to learn a lot about memory management.


So, if your working set (files, application memory, etc) fits in memory, how does swap not introduce latency?


Some pages of memory in your working set may be rarely used. If so, then allocating physical pages of memory may be a waste. The kernel may be able to get better performance by using those pages for the disk cache - decreasing latency by hitting the disk less.


I'm very much against changing kernel settings on production servers without really understanding the implications. Take for example "swappiness = 0": most likely what you think it does is not what it actually does.


Looking through his list though, most of the settings are less controversial than swappiness.

I do wish people were more rigorous about changing swappiness. Maybe if swappiness=0 was documented as "pageout every byte of /usr/sbin/sshd and /lib/libc.so before you pageout a single byte of java's bloated heap" then people would be less eager to apply it.


Are there people running servers that actually swap without it being a disaster? For regular 'cattle not pets' servers I just decline to create a swap partition and vm.panic_on_oom = 1.

Java's bloated heap, for example, has easy to use startup options that cause the Java process to use a deterministic amount of RAM, as does any other sane program that uses nontrivial amounts of memory.

OOMing (or swapping) a server almost always indicates an error that needs human intervention, so cleanly killing/rebooting the faulting system and raising the alarm in your monitoring system is the right thing to do.
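A sketch of that setup as a sysctl drop-in (the file name and the 10-second reboot delay are arbitrary choices, not gospel):

  # /etc/sysctl.d/90-cattle.conf
  # panic on OOM instead of letting the OOM killer pick a victim,
  # then reboot after 10 seconds so monitoring sees a clean restart
  vm.panic_on_oom = 1
  kernel.panic = 10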


"Swapping" does not imply that the system is only swapping. Yes, if the system is only swapping - or, more accurately, if kswapd is what's pegging your CPUs at 100% - then you have problems. But the kernel will try to swap out unused pages to make room for the disk cache. If you turn this off, it won't do that, and you could end up hitting the disk more.

Modern systems are usually teetering near the edge of using all physical RAM. This is by design; unused RAM is wasted. You want to use as much RAM as possible to avoid going to disk. What you call "OOMing" is when the applications on the system require more physical RAM than what exists. This is independent from the swappiness ratio.


Yes I believe swap makes some workloads faster.

The goal of virtual memory is to apportion physical memory to the things that need it most -- keep frequently used data in RAM, and pageout things which aren't frequently used. This makes things go faster. When you disable swap, you constrain VM's ability to do this -- now it must keep ALL anonymous memory in RAM, and it will pageout file-backed memory instead. Even if that file-backed memory is much hotter.

I came to this conclusion by following the kernel community, starting with the "why swap at all" flamewar on LKML. See this response from Nick Piggin http://marc.info/?t=108555368800003&r=1&w=2 who is a fairly prominent kernel developer. Nothing I've read from the horse's mouth has refuted this since then. This is true even on systems with gobs of memory.

You're worried about systems which grind to a halt under memory pressure, which is unquestionably a concern. The thing is, disabling swap doesn't fix this. As soon as you're paging out important file-backed pages (like libc), your system is going to grind to a halt anyway. And disabling swap can't prevent this -- (same point from a VM developer here http://marc.info/?l=linux-kernel&m=108557438107853&w=2). To really give your prod systems a safety net, you need to (a) lock important memory (like, say, SSH and libc) in memory or (b) constrain processes which hog memory with (e.g.) memory cgroups. IMHO cgroups/containers are a better solution.
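For example, a rough cgroup-v1 sketch of option (b), which is the interface available today (the group name, the 4 GB cap, and $PID are placeholders):

  mkdir /sys/fs/cgroup/memory/bloated-app
  echo $((4 * 1024 * 1024 * 1024)) > /sys/fs/cgroup/memory/bloated-app/memory.limit_in_bytes
  echo $PID > /sys/fs/cgroup/memory/bloated-app/cgroup.procs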


I understand the idea, but I don't have a lot of faith in the kernel to make the right decision under memory pressure--and neither do server application developers, hence the proliferation of massive userland disk caches in processes. The amount of allocated memory that the kernel can find to discard on a no-swap system is fairly small, so I'm somewhat relying on the rogue program overrunning past the point of blowing out the caches and going all the way to panicking the system.

Ideal tuning would probably also reserve some decent amount of space for file caches and slab but I'm not aware of any setting that does that.

It's an entirely different story on laptops and development servers, where the workload varies widely, may contain large idle heap allocations worth swapping, and manually configuring memory usage isn't practical.


>I understand the idea, but I don't have a lot of faith in the kernel to make the right decision under memory pressure--and neither do server application developers, hence the proliferation of massive userland disk caches in processes.

There is a school of thought which thinks especially server devs should just trust the operating system in this regard. One notable person of that school is phk of FreeBSD and Varnish fame: https://www.varnish-cache.org/trac/wiki/ArchitectNotes


I don't disagree with swap being helpful in some (maybe most?) cases.

But there are cases where it clearly does more harm than good. I had a PostgreSQL database server with a lot of load on it. The server had loads of RAM, more than what PostgreSQL had been configured to use plus the actual database size on disk. Even so, Linux one day decided to swap out parts of the database's memory, I assume because it was very rarely used and it was decided that something else would be more useful to have in memory. When the time came for queries that used that part of the database, they had huge latency compared to what was expected.

Maybe I'm misremembering, and maybe there was some way of preventing that from happening while still having swap enabled on the server.


vm.swappiness [1] could be what you're looking for. I find the default value of 60 leaves my desktop more prone to swapping out than I would like (but then firefox consuming >15gb of ram leaves it little choice).

[1] http://en.wikipedia.org/wiki/Swappiness
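For anyone who wants to experiment, the knob is easy to poke at runtime before committing to it (the value 10 is just an example):

  # check the current value, then lower it for this boot only
  sysctl vm.swappiness
  sysctl -w vm.swappiness=10
  # persist it across reboots
  echo 'vm.swappiness = 10' >> /etc/sysctl.conf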


You reboot a machine every time it starts to use swap?? What the hell kind of applications are you running?

Even with java apps (which I grant you are difficult to estimate memory limits on without knowing the application's design) swapping can be a useful way to page out unneeded parts of memory - and more importantly, keep needed parts of memory intact. This can also mean keeping the Java processes in memory while paging out apps which are less crucial, which leaves more room for Java, etc.

Swap is, for lack of a better comparison, the canvas sheet you use to catch someone jumping off a building, or a New York City sewer. In the first example you can use it to save your applications/servers so you don't need to reboot them (the higher your availability requirements, the less you can stand random reboots). In the second example it's the place you send those inhabitants you don't deem worthy of RSS.

The other thing you have to consider is the idea of memory overcommit in application and kernel design. The system is built with a promise that it has way more memory than it actually physically does. Applications will always reserve a fake huge chunk of memory and don't care that the system is lying to them about how much is really available. Without swap, when these apps attempt to use the nonexistent available memory, they crash. With swap, they survive.

Then there's apps designed to rely on swap like the Varnish malloc() and file storage methods, or database servers. Even if you disable swap there are still performance and stability problems related to the VM, and understanding this helps your apps run more efficiently. (http://blog.jcole.us/2012/04/16/a-brief-update-on-numa-and-m...)


That's another way of saying "I always overprovision my servers for their worst case scenario instead of their expected state". Not that there's anything inherently wrong with that, as long as the person signing the checks understands and agrees with the decision you've made for them.


Under what scenario will a server start swapping due to load, only to continue performing as it should? It's hard for me to come up with a real world scenario where performance doesn't drop from peak, leaving a backlog of requests rapidly growing, which then compounds the problem.

Either way you handle it, the server is toast. So the real solution is to start dropping requests or downgrading service in some way to make sure the server never reaches OOM.

It definitely is important that the person signing the cheques knows how much capacity they're paying for, but that goes two ways. If you underprovision, they must understand that if they ever get mentioned in the NYT, their site will likely go down. It's up to them to balance the risks.


Take something as simple as SSHD, NTPD, CROND or any other of the many background daemons that get used once or twice a day. They only need a few bytes of themselves in active memory to sit waiting on a socket connection or on a timer. The rest can be paged out for the actual application you're using the server for. It doesn't even matter if these services take a few milliseconds to get paged back in when needed since things like SSHD are working over the network (many orders of magnitude slower than hard drive access).

There's basically no reason not to have at least a small amount of swap. There's never been a benchmark showing no-swap as faster.

It should also be mentioned that things like mmap()+MAP_PRIVATE on a file will flat out fail if the file is larger than free_memory+swap. It's a common, easy and fast way to work with large files where the portion of the file you're working on gets paged in and out as required. Turn off swap and you break this functionality. You're basically limiting what you can do on your system with no performance gains.


Amen.


Since kernel 3.5-rc1, vm.swappiness=0 will disable swapping entirely. https://news.ycombinator.com/item?id=7660385


Well, I can attest that having a user's file limits set lower than 4096 will cause some web servers to not handle a multi-user client performance test of any decent size.


Indeed, I wish this post had his rationale behind the changes and descriptions of what each change does.


My recommendation for any server with enough RAM is to disable swap. This works on just about every system whether swap is configured or not.

That said, on my Linode I have swappiness=1 and a few GB of swap on the SSD since it is memory-constrained.

In any case, I've been running vm.swappiness=0 or 1 on nearly every machine I touch for many years now and have yet to see any problems. Swap is almost always a bad idea on modern machines.

Thanks for reading!


I strongly disagree, and prefer the swappiness at the default level with a modestly sized swap file (1-2GB). There are lots of poorly behaved applications that will allocate RAM that they never touch again. If you turn down swappiness you end up wasting that RAM which could be better used for disk cache.

I currently have 1.2GB of swap in use on a machine that is not doing any active swapping at all; that's 1.2GB more space for caching.

[edit for spelling]


Agreed. I'm not sure if the author of the original article really understands how modern operating systems deal with memory. The point of swapping pages to disk is to avoid hitting the disk in the long run.

See: http://archive.today/FKlQ


Except that's not what causes problems in practice. You get problems with swap when your working set size exceeds your RAM. You'll always be hitting the disk as long as the offending application keeps using too much memory, and it slows down the entire system, not just that one application. When you try to SSH in/log in, that requires reading files from disk (or swapping things back in), which can take quite a while due to the backlog caused by swapping.

If you ran out of memory and malloc returned NULL, that would be better; however, a lot of applications rely on overcommit, so there is no good answer:

* you can run with RAM + as much swap as needed to make all applications happy and overcommit off, and accept the long delays caused by swapping

* run with just RAM, overcommit on and no swap, and accept that your applications may be killed by the OOM killer anytime


As has been stated several times in this thread, the "swappiness" factor is not about over-committing RAM. It's about giving the kernel the freedom to say, "Hey, these pages in RAM have not been used in a long, long time. I'll swap them to disk so I can use that RAM for things that will speed things up more, like disk caching."

This is independent of applications whose working set size exceed that of physical memory.


Yeah my reply was to the "disable swap" part of the discussion, not the swappiness one.


Disabling swap means setting swappiness to 0; I'm not sure how you can talk about "disabling swap" without talking about swappiness.


Because that depends on the kernel version, I took the more reliable approach of using 'swapoff -a' (or in fact not defining any swap in my /etc/fstab). https://news.ycombinator.com/item?id=7940387 https://news.ycombinator.com/item?id=7940136


Which then means that you're not allowing the kernel to make such decisions as what I described. (Rarely used pages paged out to allow for more disk cache.)


It should be noted that memory is still in your swap even if it's read back into RAM; it only gets dropped from swap when the memory is rewritten. So that figure isn't telling you how much data is exclusively in swap.
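You can see that double-counted amount directly, something like:

  # pages that currently exist both in RAM and in swap
  grep SwapCached /proc/meminfo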


> If you turn down swappiness you end up wasting that RAM which could be better used for disk cache.

Except the downside of turning swap off is that you (possibly) don't cache your disk as aggressively, but the downside for having too much swap is that ill-behaved programs can grind your entire system to a halt for minutes/hours at a time by having too big of an active dataset. We're talking multiple seconds of latency, where you can't get anything done.

For me that's way too big of a downside for gaining a few extra MB of disk cache. I'd rather have it OOM right away.


ulimit


> There are lots of poorly behaved applications that will allocate RAM that they never touch again.

If you only run applications you wrote yourself, you can prevent this behavior.


I do a lot of dev work on a machine that has 12 cores and 96 GB of RAM and I currently have swappiness set to 85.

96 GB would be plenty of memory for what I need to do--mostly I benefit from lots of filesystem caching. BUT--there's another user of this system and he does most of his work using Matlab. For the stuff he's doing, Matlab routinely sucks down 10s of gigabytes of memory. And it may stay that way for days at a time.

Without swapping, Matlab will happily sit on all that memory indefinitely, whether it's doing anything or not. Meanwhile, everything I do takes ages because of the paltry amount of memory left for FS caching.

With swapping, I can get some of that memory back for FS cache when Matlab isn't being used, and it makes a huge difference.


I'd agree with you only if you'd agree with me that:

There can never be enough RAM. ;)


Isn't "swappiness = 0" recommended for SSD-based swap partitions to reduce SSD wear?!?


Relatively modern SSDs will write at least 500TB before dying:

http://techreport.com/review/26523/the-ssd-endurance-experim...

I've an SSD that's been running in my development Linux laptop for very, very close to three years. The drive houses a couple of encrypted swap partitions along with the rest of the system. According to the SMART attributes, I've written 18TB to it in that time.

Don't worry about SSD wear. Really, don't. Either you'll get a drive that succumbs to crib death or super-shitty v1.0 firmware, or you'll get a drive that will last until long after you outgrow it.
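If you want to check your own drive's lifetime writes, SMART exposes it (attribute names vary by vendor, so this grep is only a rough sketch):

  sudo smartctl -A /dev/sda | grep -i -e total_lbas_written -e wear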


It's like a new pair of shoes; give it a few weeks and you won't mind the wear so much.


I've seen swappiness = 1 as a recommendation rather than 0.


If you want to reduce SSD wear, just don't put your swap partition on an SSD (or don't have a swap partition at all if you only have SSDs).


Yeah, my swappiness is set to the default (60) and I still never use my swap space :/


vm.swappiness is not magic: http://en.wikipedia.org/wiki/Swappiness

People should of course learn about these settings as they change them, but many of the suggested values really are improvements for lots of use cases.


swappiness=0 in some Linux versions disables swapping entirely; also, even with swappiness=0 and plenty of RAM, the kernel will sometimes swap. People put too much faith in these values instead of doing proper capacity planning/monitoring, imho.


Swappiness is not just about swapping. http://www.linuxjournal.com/article/10678 <- This is a great article on Linux swap and how it works. It will change your life.


Thanks so much, random guy on the internet.

I can't wait to see these settings cargo-culted onto systems of customers who then complain Linux doesn't behave the way they expect it to.

Next time, keep your sysctls to yourself.


Note, vm.overcommit_memory is not even mentioned.

How disappointing.


Doing one of these things may introduce a security vulnerability, depending on the rest of your environment.

Some programs that use select(2) are known to assume that FD_SETSIZE is at least the maximum number of file descriptors available (instead of checking FD_SETSIZE). This lack of bounds checking may lead to a stack or heap overflow and a security vulnerability.

More recently, if you build with fortified glibc options, then you'll get automatic bounds checking, but do you know that your own daemons are built this way?

This is an example of why it's not a good idea to arbitrarily change a list of default settings system-wide without understanding the implications. The defaults have not been changed for a reason; otherwise distributions would already ship with these changes.

References: https://lists.ubuntu.com/archives/ubuntu-devel/2010-Septembe... http://www.outflux.net/blog/archives/2014/06/13/5-year-old-g...


Increasing the number of file descriptors on machines running HTTP servers is one of the first recommendations I make to my consulting clients.

It's much easier to overlook than you'd probably imagine. I have seen apps serving hundreds of thousands of API requests per day that had the default settings. It's one of those quick changes that can have a big impact.
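The first thing worth checking is what limit the server process actually ended up with, since daemons often don't inherit your interactive shell's limits (nginx here is just an example):

  # the limit the running process actually has, not what your shell reports
  cat /proc/$(pidof nginx | awk '{print $1}')/limits | grep 'open files'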


>> Edit: I've run across a few comments complaining about these large max values. The reason I set them high is that the machines I work on are not multi-user in any way.

Then why is this posted at all, this isn't improved default Linux settings, it's settings some guy likes for some customized environment.


> # allow up to 999999 processes with corresponding pids

This is now the default in DragonFlyBSD: http://freshbsd.org/commit/dfbsd/3a877e444fff816b8a340d35fe3...


An even better idea for developers is to reduce limits (memory, PIDs, file handles) and start triggering those rarely-used (or non-existent) error handling code paths.

Also, I think I would prefer process sbrk failure to OOM killer activation. So setting vm/overcommit_memory=2, the overcommit ratio to 80%, a decent swap size, and code actually handling errors. I.e. consistency versus randomness.
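In sysctl terms that's roughly (the 80% ratio is the figure mentioned above, not a universal recommendation):

  # strict accounting: commit limit = swap + 80% of RAM
  vm.overcommit_memory = 2
  vm.overcommit_ratio = 80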

Not that randomness is bad for testing, cf Chaos Monkey: https://github.com/Netflix/SimianArmy/wiki/Chaos-Monkey


Agreed. The last thing you want is to have to debug those weird "it worked on my desktop" issues that come from having a really weirdly configured system.


Wow, wrong on almost all the settings. Seriously: maximize bufferbloat, cause huge intrusive IO pauses, keep useless pages in memory, etc.? The only good ones there are kernel.panic = 300 and kernel.sysrq = 1.


Don't change max values unless it's really needed. Not every production machine needs billions of IPC handles.

My philosophy is to keep it at the default unless you have an issue. Guess what? It works just fine.


Increasing the max setting does not consume additional resources. It merely makes it possible for applications to get the resources they ask for.

For example, just this morning Chromium started failing because I hadn't disabled limits on one of my machines. I pulled down my standard settings, applied them, and the problem went away. It won't be coming back either.


> Increasing the max setting does not consume additional resources. It merely makes it possible for applications to get the resources they ask for.

From unswappable kernel memory, yeah.

> kernel.pid_max = 999999

> * - nproc unlimited

1,000,000 pids is 1 million task_structs.

On my quite stripped-down kernel, 14 task_structs fit into an order-3 slab -- 14 objects per 32KB of kernel memory.

1,000,000 / 14 * 32 * 1024 bytes ≈ 2.18 GB of kernel memory

and that's not even counting other kernel structures!
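You can check the per-slab packing on your own kernel, by the way:

  # objperslab and pagesperslab for task_struct on the running kernel
  sudo grep task_struct /proc/slabinfo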


It would be good if you were to explain your reasoning for your changes.

As alluded to before, defaults are default for a reason. Having someone explain why they change them is a good exercise for both reader and author.

For example, fiddling with swappiness means that you'll end up with less RAM for important things, like the file cache.


I've added some notes explaining my reasoning. I hope that helps. I'll dig in and explain some of the settings more thoroughly in the future.


Interesting, but I tried them out on my Ubuntu 13.10 desktop and applying the whole lot completely killed Chrome. Strangely, certain tabs would consistently load fine, while others would end up with white screens, seemingly consistently per URI over several browser launches and a reboot. Everything else seemed to be working fine. Oddly, the settings screen was one URI that did not work. I took his file handle limits and scrapped the rest, and it went back to normal. My best guess is that something in there did not agree with my video card settings.


I have the same issue.

If I use the --disable-gpu flag, I will hit the file handle limits. I have increased the limits and it works fine now.

I have an AMD GPU and chrome/chromium just does not work. It will constantly flicker.


Well, here are my 'improved' settings for sysctl.conf: http://sprunge.us/dhgM. Most of the TCP stuff is to guard against server resource exhaustion by SYN floods, etc.; the vm settings are optimized for hot cache (vs cold cache but programs) and spinning media (page cluster).


To apply the sysctl changes right away:

  sudo sysctl -p /etc/sysctl.conf

My oldish kernel doesn't recognize the PID settings, which is unfortunate.


Wow, how old is that kernel? I've been using that since at least the 2.6.32 era.


I think I was on 2.6.38. Seems like it ought to work, but didn't. Dunno.


OMG! I can improve the audio 200% by simply setting: pactl set-sink-volume alsa_output.pci-0000_00_1b.0.analog-stereo 200%

Teh sound is so much more sound-ier! Way much more cranked up than the lame defaults! http://goo.gl/TJLTMF



