
I wonder how well it works. If you take an x86_64 system and make a memory control group with only 32MB of memory (and kmem tracking is enabled, which is true by default) the container will OOM right away. I've been operating under the belief that Linux now requires 64MB at a minimum, but perhaps a 32-bit uniprocessor build can get by with less.



> the container will OOM right away

that doesn't really make sense in linux parlance. you should try and see which process is actually triggering the OOM.

if you're able to do that then you can try and swap that with another binary.

you could try building a hello world that does while(1) sleep(30); build it with static linking and boot the container with that -- it will probably work.

also, i should look for that page, but i remember reading about trimming away everything possible from a hello world binary and ending up with something as small as ~60 bytes.


here it is!

https://www.muppetlabs.com/~breadbox/software/tiny/teensy.ht...

i was off by a bit. the binary is 45 bytes and prints the number "42" (plus a newline).


It does not print anything. It returns with exit code 42.
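the distinction is easy to check from a shell, since `$?` holds the last exit status:

```shell
# exit status vs. printed output: nothing is written to stdout here,
# the 42 only shows up in $?
sh -c 'exit 42'
echo $?    # prints 42
```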


I see that you haven't tried it.

Note: read for comprehension.

A container may OOM from kernel memory alone.


What process or processes were you running in the container?

Edit: nvm, saw you answered it in another thread: `find / > /dev/null`. Interesting!


Yes, find and its parent, the shell, but that's not why it OOMs. It OOMs because Linux uses a lot of memory for inodes, dentries, and whatever else it puts in slabs, and when memory pressure occurs -- when Linux must allocate in the kernel but cannot, due to limits -- it loses its mind and kills instead of reclaiming.
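The slab growth is visible even without a cgroup. A sketch using /proc/meminfo (world-readable on Linux), where SReclaimable is mostly the dentry and inode caches:

```shell
# SReclaimable in /proc/meminfo is dominated by dentry/inode slab caches;
# walking a filesystem tree with find inflates it
grep -E '^S(Reclaimable|Unreclaim)' /proc/meminfo
find /usr > /dev/null 2>&1    # populate dentry/inode caches
grep -E '^S(Reclaimable|Unreclaim)' /proc/meminfo
```

Inside a memory cgroup, that same slab memory is charged against the group's limit, which is what the transcript below demonstrates.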


What are you putting in your container? A trivial container with non-bloated userspace should not take nearly that much memory.


Thanks for your contribution. The reason I value HN so much is the willingness of inexperienced people to inform me of what Linux "should" do.

  # cd /sys/fs/cgroup/memory/
  # mkdir lol-hackernewses
  # cd lol-hackernewses
  # echo 32000000 > memory.limit_in_bytes 
  # echo $$ > tasks
  # find / > /dev/null
  Killed
  Memory cgroup stats for /lol-hackernewses:
               anon 135168
               file 11567104
               kernel_stack 0
               slab 19771392
               sock 0
               shmem 0
               file_mapped 0
               file_dirty 0
               file_writeback 0
               anon_thp 0
               inactive_anon 0
               active_anon 94208
               inactive_file 5517312
               active_file 5881856
               unevictable 0
               slab_reclaimable 19075072
               slab_unreclaimable 696320
               pgfault 14817
               pgmajfault 2376
               workingset_refault 25773
               workingset_activate 3465
               workingset_nodereclaim 0
               pgrefill 969886
               pgscan 971861
               pgsteal 50508
               pgactivate 913407
               pgdeactivate 969886
               pglazyfree 0
               pglazyfreed 0
               thp_fault_alloc 0
               thp_collapse_alloc 0

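On a cgroup v2 system the same experiment looks roughly like this (requires root; the group name and limit are copied from the transcript above, and memory.max / cgroup.procs are the v2 equivalents of limit_in_bytes / tasks):

```shell
# rough cgroup v2 equivalent of the v1 transcript above
# (assumes the unified hierarchy is mounted at /sys/fs/cgroup; run as root)
mkdir /sys/fs/cgroup/lol-hackernewses
echo 32000000 > /sys/fs/cgroup/lol-hackernewses/memory.max
echo $$ > /sys/fs/cgroup/lol-hackernewses/cgroup.procs
find / > /dev/null    # kernel memory counts against memory.max in v2 as well
```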

"should"... Unfortunately, almost everything is bloated these days. My first Linux box, a 386SX/20, had only 4 megabytes of memory. I ran SLS Linux which installed off of roughly 20 floppies. I could compile C programs, and barely run X!


Tiny-X and fvwm with rxvt would run far better than any other WM + an XTerm.


You're probably being killed (in part) by the default stack sizes, which are much larger on x86_64.


That doesn't seem likely, since stacks are and have always been demand-paged on Linux. If you don't use more than 4K, your stacks will stay at 4K forever.
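One way to see this (a sketch; the ps field names are standard procps): compare a process's virtual size, which includes the whole stack mapping, against its resident set:

```shell
# RLIMIT_STACK caps the stack's *virtual* size; pages only become
# resident when touched, so RSS stays far below VSZ for an idle process
ulimit -s                # stack size limit in KB (commonly 8192)
ps -o vsz=,rss= -p $$    # VSZ counts the untouched stack; RSS does not
```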



