Tinycore Linux (tinycorelinux.net)
185 points by fctorial on Nov 21, 2020 | 81 comments



I use a similar distro, Puppy, because it has the ability to copy its squashed file system (SFS) files to RAM and then continue running with the persistent storage medium detached.

It has certain peculiarities like defaulting to logging in and running everything other than web browsers as root, so I’d like to switch to Tinycore if it’s different in that respect. My main concern is package availability — Puppy is compatible with the repos of major distros like Ubuntu.


Puppy is very interesting in that at first glance it comes off as sort of a toy distro. But when you look closer at the file system and the security ramifications related to it, it's a very serious distro.


In what way? Web browsers running under root do not inspire confidence.


"everything other than web browsers"


> In what way?

The file system.


Ubuntu (and quite a few other distros) can be made to run entirely from RAM (that's how the LiveCDs actually work). It's probably not as easy to configure as Puppy or Tinycore, but it's possible.


It's as simple as adding a single boot parameter to grub

https://askubuntu.com/questions/829917/can-i-boot-a-live-usb...


That's different from what the GP said.

I'm pretty sure you can't boot into an Ubuntu live disk, detach the USB, and keep using the system without any errors or loss of functionality.


That was what I thought too; thanks to abdullahkhalids for linking to the askubuntu thread that shows that, with the right boot flag, you actually can: https://news.ycombinator.com/item?id=25172520


Yes, in fact it's pretty easy. You just add the right arguments to GRUB, though I don't remember them off-hand.

I used to have a flash drive with GRUB installed, and several "Live CD" ISOs. It would have two options per ISO: a normal boot, and a second that would copy the squashfs to a ramdisk before booting.

Tinycore is especially great when you don't have USB3 available, because you don't have to wait minutes for >1GB of data to be copied to RAM.


toram is the parameter to add to the GRUB kernel directive.
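
For anyone who wants a concrete starting point, a live-ISO GRUB entry might look something like this (a sketch only: the loopback approach, the casper paths, and iso-scan are Ubuntu-specific assumptions, and exact paths vary by release; toram is the part that copies the squashfs into RAM):

    menuentry "Ubuntu Live (run from RAM)" {
        set isofile=/isos/ubuntu.iso
        loopback loop $isofile
        # 'toram' tells casper to copy the squashfs to RAM before pivoting to it
        linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile toram quiet --
        initrd (loop)/casper/initrd
    }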


Back when I was in high school (~2009), the computers in our "computer lab" would take a solid 10min to boot Windows XP.

I ended up bringing a flash drive with Tinycore and a floppy with Das U-Boot (the BIOS didn't have a USB boot option). I was able to get a snappy Linux system with Chromium running, all from a ramdisk, while everyone else spent most of their time waiting around for Windows XP on slow platter drives. It was great.


Back when netbooks were all the rage (ironically, ~2009), I used to run Tinycore on a shitty, cheap Asus EEE. It was a decent linux setup, it had wifi, a browser, standard tools... even a C compiler; much snappier than the XP setup it came with.

I couldn't tell you how complete/solid it was, I was in middle school back then and I doubt I ever did any "work" beyond toy programs on vim, but it was my first contact with linux and a major factor that pushed me towards getting into CS.

Glad to see the project is still going strong after all these years.


One of the best things I had in High School was a teacher who taught us basic system administration. He had us all go through the process of installing Windows XP from scratch, including service packs 1 & 2. Then he had us install PC-BSD, just to see how much better life is outside the Windows ecosystem.

I was already a Linux geek by then, but there are plenty of others who found that class to be an eye opening experience.


Hah, I got detention for a week for doing something fairly similar (booting Linux off external media). School's position was that if they couldn't spy on your computer usage you must be doing something bad.


Yeah, I got in trouble in my high school's computer class for booting Ubuntu 8.04 off a live CD. Probably would've gotten away with it had the machine's built-in (and therefore impossible-to-mute) speakers not blasted the startup sound: https://www.youtube.com/watch?v=CQaEXZ-df6Y

Thankfully I got off with a stern warning and a "whatever you did, I can't see your screen any more, so you better fix it".


Hah.

Their only really effective way to "spy on" us was to either log DNS requests or literally look over our shoulders.


They netbooted XP on all the machines yet the images were full of spyware and AV software. The teachers would get a remote view of all the students screens in some tiled layout. Never really got the point of the AV software since none of them had local storage or connected to any shared drives (we were required to email stuff to ourselves).

The screen recorder was the primary burner of CPU time.

Plus side was that after this episode I was allowed to bring my own laptop. I had a guest account on the wifi which I knew logged all my DNS traffic.


were you?


Nah. These were new Core 2 Duo machines running so much enterprise spyware / anti-virus software that it rendered them nearly unusable. MS Office was extremely slow, and IE 7 took 30 to 40 seconds to load pages.

OpenOffice and Firefox on a Ubuntu 10.04 LiveUSB ran laps around the stock software.

The positive outcome was that I was allowed to bring my own laptop to school after that. They even gave me an account on the guest wifi.


I wanted a Linux distro with a small memory footprint (1GB) for a VM. Since Tinycore Linux obviously fits the requirement I went for it, only to find out that package installation involves a bit of a learning curve and that not all common Linux software is available.

I couldn't put time into learning Tinycore's quirks or building packages from scratch due to low VM specifications and so I went for Lubuntu. Lubuntu now comes with LXQt and is surprisingly robust.

Default applications on Lubuntu are all Qt apps, fast and memory efficient. The Falkon web browser is great unless a website uses some weird JS, and since there are several websites that do weird JS stuff I had to install Firefox too, and with it increase memory by another 1GB to prevent hang-ups (which still happen with Firefox occasionally).


I quite like using this distro for simple VMs, as they have a small footprint and are very fast. An example might be having some people in a workshop compile and run some code, or perform some networking tasks. As some here will know, having 10+ Ubuntu VMs really starts to eat some disk space!


Very interesting concept: the system reassembles itself on every boot, as if from Lego blocks. Each installed application package is mounted and connected to the core image. This way the system is refreshed to the defined state on each boot.
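
A rough sketch of what that mounting looks like under the hood, assuming a hypothetical nano.tcz extension (a .tcz is just a squashfs image; the paths follow Tinycore's usual conventions as I understand them):

    # loop-mount the extension read-only under its own directory
    mkdir -p /tmp/tcloop/nano
    mount -t squashfs -o loop,ro /mnt/sda1/tce/optional/nano.tcz /tmp/tcloop/nano
    # then link its contents into the running root filesystem
    cp -rs /tmp/tcloop/nano/. /

Since the links live in the RAM-backed root, a reboot simply discards them and the system reassembles from scratch.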

I wanted to set up a minimalistic pocket environment on a USB flash stick, but eventually stumbled on the need to re-package some additional applications.

For example, I needed to add Maxima, but it was not in their prepackaged repo. Trying to recompile it manually threw some dependency-related errors; well, familiar story...

But the idea is still great, it works well on older boxes. Will probably revisit it at some point.


Try dCore. It's based on Tinycore and Debian.


I used this distro to boot up an old 486 machine that I had lying around. It worked. Just did it to try it out.


That page looks like it hasn’t been updated since 2008 and the wiki is down. :/


Looks like the images were updated on April 1st of this year: http://tinycorelinux.net/11.x/x86/release/

https://en.wikipedia.org/wiki/Tiny_Core_Linux#Release_histor...

The 2008 date is likely when the page was initially authored rather than last updated.


The forums are live: http://forum.tinycorelinux.net/


RepoBrowser too. Edit: but the forum has recent posts (some from today): http://forum.tinycorelinux.net/


I saw that as well. Maybe the admin just hasn't updated the home page in ages.


This brings me back to a job I had 10-12 years ago, where I would make custom PXE- or flash-booted Linux "distros" based on the debian-live/casper project, for companies that needed kiosks with custom branding, that would wipe on each boot, etc. It was pretty fun and taught me loads about Linux and how it works.


Slax has similar goals with a little bit bigger base...

Fun facts:
- Slax was Slackware-based, but it switched to a Debian base (apt)
- The first 'linux-live-kit' was from Slax's author
- It used a layered filesystem (overlayfs) long before Docker & containers were a thing
- It had 'persistence' while running off the USB, via a 'file' or partition (then Ubuntu did a similar thing with casper-rw <- the term for squashfs+overlay persistence in Ubuntu)

Link: http://slax.org
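
The layering trick can be reproduced today with the in-kernel overlayfs; a minimal sketch (all file and mount-point names made up):

    mkdir -p /mnt/lower /mnt/rw /mnt/merged
    # read-only squashfs base layer
    mount -t squashfs -o loop,ro base.squashfs /mnt/lower
    # writable upper layer in RAM (a 'persistence' file or partition would go here instead)
    mount -t tmpfs tmpfs /mnt/rw
    mkdir -p /mnt/rw/upper /mnt/rw/work
    # merged view that the live system actually runs from
    mount -t overlay overlay -o lowerdir=/mnt/lower,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work /mnt/merged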


IIRC Knoppix preceded Slax by a couple of years; 2005 or so.


cough Yggdrasil cough


It's sad that the minimum to boot Linux is now 11MB. I recall booting a full QNX desktop demo from a 1.44MB floppy :-(

Why is the kernel so fat?


I guess you're talking about storage size and not RAM, so you should check out OpenWrt. There are still a lot of micro routers with only 4MB ROM / 64MB RAM. I compiled and ran the 19.06 release (with a 4.19 kernel) on one of these targets, a WR702N, last week actually :)

The kernel itself is about 1MB.


The QNX 4 demo was astonishing, but after QNX 4 they skipped 5, and 6 was already a lot bigger. One important developer also passed away back in those days (~2001).

There was also BeOS, and Amithlon [1], AmigaOS running on x86-32 (IIRC it booted up with some patched Linux kernel). Archive.org supposedly preserved this. [2]

20 years ago, one could run a Linux firewall on a floppy. One good thing about that is that you could physically switch a floppy to read-only mode (SD cards also have this feature).

[1] http://www.hd-zone.com/amithlon/

[2] https://archive.org/details/amithlon


I've recently learned that the 'write protection' of SD cards is an honor system.


A full Oberon system image is about 1MB, including GUI, utilities, and the compiler, though a 100KB image can boot and run a text editor.

https://schierlm.github.io/OberonEmulator/

The original Oberon system was something like 12,000 lines of code, something you could imagine reading in its entirety.

IIRC most of the Linux kernel source code is device drivers; the Linux kernel also has a huge amount of functionality, much more than you need just to boot a PC into a GUI. Much of Linux userspace code is bloated libraries with a good deal of duplicated functionality.

Many GUI-based systems from the 1980s and 1990s were very compact by modern standards. (Even a full Smalltalk-80 system image is less than 2MB including the source code.) You can still run RISC OS on the Raspberry Pi to try out a 1980s/90s-style PC OS. Classic text-based UNIX OSes were obviously also dramatically smaller than modern Linux distributions.

Regarding device drivers, I wonder if the BIOS approach is better: assume the hardware vendor supplies the device drivers, and let the OS use those drivers rather than supplying its own. Someone will probably argue "oh, legacy BIOS is 16-bit, doesn't support [caching/acceleration/scatter-gather/async/polling/feature x]", or that hardware vendors don't know how to write drivers, or that it's closed source, or blah blah blah, but that would be missing the point: the idea of separating the driver subsystem from the OS might not be such a bad one.


You should be able to reduce the size by compiling the kernel yourself and disabling features you don't need. Not sure how far you can shrink it down, though; probably not enough to fit on a floppy anymore.


Thanks, it's good to know the fat can be opted out of at compile time. I am very curious what a modern kernel contains that an old Linux kernel did not that would make it more than 30 times bigger.


Three major areas: device drivers, virtualization, and network filtering.

It's not so much fat as it is feature support and what's enabled by default. Most people want the "default" configuration to work out of the box on whatever computer they have. E.g., if you take a look at "x86_64_defconfig", nearly every Ethernet card that Linux supports is enabled. Odds are you only have one of those in your (desktop) system. However, unless you shopped around for specific NICs, odds are you don't own two systems that use the same Ethernet driver.

For those of us who want to only compile the support for exactly what we use, it is still possible to compile the kernel small enough to boot off of a 1.44 MiB floppy with a minimal initial ramdisk, assuming the system you want to use it on is old. Modern systems require so much more code to use the hardware in them, and as Linux has become the single most used operating system kernel on the planet, the amount of code for device drivers has grown astronomically.

The kernel contains a minimal configuration called "tinyconfig" which turns off literally anything that can be turned off while retaining a functioning kernel. A tinyconfig kernel + TTY support compiles to about 500 KiB. Enabling all of the drivers to support hardware via the BIOS (AT-style disk control, floppy, display out, etc.) and ELF support will yield a kernel of ~790 KiB, a little more than half of your 1.44 MiB floppy. Assuming you can fit useful userspace tools in the remaining space, you could still load from a floppy.
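
If you want to try this yourself, a minimal sketch (assumes an x86 kernel tree new enough to have tinyconfig, i.e. 3.17+):

    make tinyconfig                      # smallest possible starting configuration
    # re-enable just enough for a usable console (assumption: you want a TTY and printk)
    ./scripts/config --enable TTY --enable PRINTK
    make olddefconfig                    # resolve any options the above implies
    make -j"$(nproc)" bzImage
    ls -lh arch/x86/boot/bzImage         # the resulting kernel image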


> I am very curious what a modern kernel contain that an old linux kernel did not ...

Um, perhaps support for all of the hardware, filesystems, network protocols, etc., that did not exist?


The only measure by which I believe it would be acceptable to judge Linux "fat" would be to compare the size of kernels generated using "tinyconfig". Basically, everything that can be disabled without rendering the kernel useless is turned off. You're only left with the core of the kernel. At this point in time, you get a kernel image that's ~450 KiB.

If you actually want to boot that on real hardware, you can enable the drivers that are stubs into the BIOS calls for classic devices (disks, serial ports, SVGA graphics, etc.) for another ~100 KiB. It's extremely limited though - no specific PCIe support, ACPI, networking, etc.


The Raspberry Pi 4.14 kernel on my Pi-hole is around 4-5MB, the Linux 4.4 generic kernel in Ubuntu 16.04 on one of my older boxes is around 6-7MB, and the Linux 5.4 generic kernel in Ubuntu 20.04 on my VPS and desktop is around 12MB. Tuning down the size of the compiled Linux kernel seems to be a sport of some sort, with many guides like this one [0] available online.

[0] https://elinux.org/Kernel_Size_Tuning_Guide


Drivers for all the hardware that was created in the meantime, for example :)


Those are usually compiled as modules.


Because of a lack of modularity. Drivers have become really big over time, and they usually live in the kernel. It would be great to see a more detailed breakdown of the actual space usage of recent kernels.


You can run Linux off of a single floppy (though formatted as a bit more than 1.44MB):

http://www.toms.net/rb/

Text-only, however, unlike the QNX demo.


Tomsrtbt is based on a 2.2 kernel, which may have problems with modern CPUs and devices. It remains amazing for what it does.


Some organizations using BigFix endpoint manager have TinyCore Linux VMware images that are pre-configured to act as a "relay" for their deployment. https://help.hcltechsw.com/bigfix/9.5/platform/Platform/Inst...


BigFix definitely seems malware-like.


Can it support 64-bit ARM? Perhaps it will run faster than Ubuntu 20.10 on my RPi 400


I think that would be this one:

http://tinycorelinux.net/12.x/aarch64/releases/RPi/

Note that there are ports listed on this page, but it doesn't seem to be kept up to date. The Raspberry Pi links lead to the ARMv6 ("piCore") ports and not the AArch64 ("piCore64") ports:

http://tinycorelinux.net/ports.html

I found the first link from their Raspberry Pi forum:

http://forum.tinycorelinux.net/index.php/topic,24384.0.html


Have you tried Alpine? It has an RPi version.



Does anything like this exist?

- A program to build a bootable Linux image from a config file, like NixOS

- The image is tiny, like Alpine Linux

- Support for x86, Raspberry Pi, DigitalOcean, and EC2

There is a project to build bootable NixOS images for Raspberry Pi [1], but it produces 2GB+ images. The iteration time to build + copy + boot is about 10 minutes, and the images are too large to easily distribute.

Packer [2] has some of these things, but it has to run on an actual EC2/DigitalOcean VM to produce images for them. That's a lot of extra config, potential for cost overruns from failed VM shutdowns, more secrets to manage, and more ACLs to maintain.

[1] https://github.com/Robertof/nixos-docker-sd-image-builder

[2] https://www.packer.io


You should try out Buildroot [1]. It's used for building complete Linux images for embedded systems, so the result is quite small and efficient. There are several guides online on how to build for Raspberry Pi; it might support cloud targets as well.

[1] https://buildroot.org/


Thanks. Buildroot looks useful. The docs say to use a graphical program to configure it. The program edits a .config file which presumably could be code reviewed and checked into source control. Do you know of any open source projects that check in their buildroot .config file?

Reviewing a generated config file has downsides. The file does not show the context associated with changes. Comments are either unsupported or easily destroyed by the editor. The file format may be confusing or may destroy blame info by putting many options on one line. The file format and how to use the graphical tool are extra chunks of knowledge that engineers must load and maintain.

Buildroot is designed for folks making custom kernels. I don't need that. I want everything related to the hardware to just work. This includes automatically mounting attached network block devices on DigitalOcean & EC2 and configuring wifi on Raspberry Pi.
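
Edit: it looks like Buildroot can also emit a minimal "defconfig" that records only the options differing from the defaults, which seems far friendlier to review than the full generated .config. A sketch, with a hypothetical board name:

    make menuconfig                                    # the graphical configuration step
    make savedefconfig BR2_DEFCONFIG=configs/myboard_defconfig
    git add configs/myboard_defconfig                  # small and diffable, reviewable
    make myboard_defconfig                             # regenerate the full .config later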


It's a nifty Linux that I used for my "fake NAS" running on an old thin client. The docs are fairly simple, and their package manager has mostly everything you need.

It's easy to set up a weekly cron job and samba server on it for some NTFS drives.
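
The cron half really is a one-liner; a sketch, with a hypothetical script path:

    # min hour day-of-month month day-of-week: run every Sunday at 03:00
    0 3 * * 0  /opt/scripts/sync-nas.sh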


Tinycore was used at my last job to get around some frustrations of updating the embedded systems on the machines we made (which ran a full-blown Linux system).

We mailed customers USB drives that, when plugged in, would mount and start Tinycore Linux, then write the new system to a parallel partition, reboot the machine, and run a few health checks.

Some might say it's hacky (it is/was) but it worked pretty well and allowed for recovery of totally borked systems.


> We mailed customers USB drives...

This would be a non-starter for any security-conscious client.


Agreed that it's not a good practice security-wise. This is what I argued against when the initial request was made (we also had over-the-air updates), but interestingly it was the _most_ "security conscious" customers that had requested this: they specifically wanted us to mail them either replacement microSD cards (the disks on the machines) or USB sticks. The machines themselves verified the contents, and we also offered a way for customers to download the images directly and verify them, but most customers just opted for snail mail. /shrug

Several of these companies also had strict email attachment filters that we were instructed to get around by just appending `.allow` to the filenames. There was tons of this type of stuff that we encountered, which really highlighted that corporate security at many companies is just theatre.


Shouldn't it be quite okay if they're using Secure Boot and the embedded system has locked-down keys?

I have no idea if they did that, but mailing USB keys should be fine with the right encryption or validation.


It's not okay at all. You should not plug a USB drive someone sent you in the mail straight into a machine on your corporate network. There is no chain of custody; an attacker could do many things to make the USB seem as legit as possible, until it isn't.

Just a bad idea all around. There should not even be USB ports IMO.


Who said anything about a corporate network? You seem to be thinking that I said it's okay to take random USBs and plug them into your laptop, which I definitely did not.

I read it as Integrity and Confidentiality being the only factors here, so USBKill-style stuff (Availability) would just mean the machine gets replaced; Integrity should be handled by the boot process and Confidentiality by the embedded OS itself.


If all you're looking to do is verify the USB storage, sure. But USB doesn't have to only be a storage device. You can include all sorts of other evils in a USB payload.


Depends. If the image is signed/encrypted it would be fine.


Until it switches to emulating a USB keyboard after you boot :)


I had a great time running Tiny Core on a ThinkPad R31 about 10 years ago. I eventually put Arch on it because Arch is so easy to use, but I miss how snappy Tiny Core was and I think I may try it again.


In this day and age I always verify the authenticity of downloaded ISO images, but this doesn't seem possible with Tinycore Linux, as no signatures are available, which makes me somewhat uncomfortable.
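
For comparison, the usual flow on a distro that publishes a signed checksum file looks something like this (file names follow Ubuntu's convention and will differ elsewhere):

    gpg --verify SHA256SUMS.gpg SHA256SUMS       # is the checksum file signed by the release key?
    sha256sum -c SHA256SUMS --ignore-missing     # does the downloaded ISO match a listed checksum?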


I reduced the kernel size (stripped out unnecessary drivers) and stuffed a stripped-down version of Tinycore with a full GUI into my BIOS / ROM chip running coreboot. Pretty neat.


I like to use Tiny Core as the rescue environment on my machines, since it fits perfectly in boot partitions. The small size also makes it great for thin clients.


There's a small typo on the main page:

>> CorePlus ofers a simple way...


Dropping the 'f' is actually a way of reducing the filesize of the website, which is very important for the Tinycore project.


Exactly. It's too compact to have a complete offering.


Maybe report it to the TinyCore Linux team then?

There's absolutely nothing HN can do about it.


It's interesting, though, that when a website hits the front page of HN it often gets the attention of the owners.



I just tried it for fun (5.9.10/x86, built in a VM on a laptop):

423kB bzImage, 85s build time



