Building the worst Linux PC ever: 6 hours to boot Ubuntu (hackaday.com)
348 points by voodoochilo on March 28, 2012 | hide | past | favorite | 59 comments



These crazy projects are the most fun. As Richard Feynman famously said: "What I cannot create, I do not understand".

shameless plug below:

Last year, inspired by Bellard's jslinux, I too wrote an emulator that can run Linux in the browser. Only I was lazy and emulated the vastly easier LatticeMico32 processor.

Anyways, the result was very intellectually satisfying.

After writing the interpreter, I went ahead and wrote a version that generates Javascript code on the fly (and captures up to 3 backwards jumps to the same block), for massive speed ups.

Anyways, it doesn't serve any purpose, but boy was it fun...

The code is at: https://github.com/ubercomp/jslm32/

And there's a demo running on: http://www.ubercomp.com/jslm32/src/

BEWARE: It only works well on Chrome (takes download time + 10s to boot on my machine).

If anyone is interested in this stuff, just ask and I'll write a post describing what I did to take boot time from 2.5 minutes to 10 seconds.


> If anyone is interested in this stuff, just ask and I'll write a post describing what I did to take boot time from 2.5 minutes to 10 seconds.

Yes please. Optimisation war stories are always interesting.


It would be great if you could write that post! Wonderful project!


Very cool! FYI, it also works fine in Firefox.


Wow. Contrary to my expectations, this picked up some interest. A lot of people actually visited my demo.

As I don't want to hijack this thread, which is about an awesome feat of hackery by dmitrygr, I submitted one about jslm32: http://news.ycombinator.com/item?id=3769498

I'll check the jslm32-specific thread sporadically to answer any questions.


I love stuff like this. If only there were an easy way to make 'GUI' programs, Flash/Java might have competition.


Working on it...

The LatticeMico32 toolchain (gcc, gdb, ar, ld) is very bad, seriously... I'm thinking of doing a MIPS or ARM emulator, just because the toolchains are so much better to work with.

Anyways, I do have a framebuffer demo that runs at a very decent frame rate on my machine (at this moment it is Chrome only and I unfortunately don't have the binaries for the demo on github). But this is not on Linux, it's on a barebones newlib environment (no Operating System).

Writing a mouse interface is trivial; the only reason I haven't done it yet is that I'm still deciding whether it's worth continuing to invest in the LatticeMico32 architecture, as the toolchain makes me want to pull my hair out.

Besides running Linux, the emulated system also runs RTEMS (actually it runs anything, as long as you can get the toolchain to produce working binaries), and it might be easier to get an RTEMS system running with a simple graphical environment, but then there wouldn't be many libraries to choose from.

So that's the status. I believe we are on the verge of having a viable option for making "GUI" programs on a canvas screen. If I were to work full time on it, I could pull a prototype off in about a month or two (literally), maybe a little less as this thing is so addictive I would easily work 16h days on it if I could.

IMHO, the ideal thing would be having an easy to use environment with a mature toolkit on it (I was thinking Qt) and letting the user choose which language to develop in. Possible choices would be C, Python, Ruby and Lua, which are fairly easy to have on the web.

Technically, it can be done, I just don't know if there would be enough interest, and how to have a sustainable business model around this idea. What do you think?


I think it's a great idea! I also think you are on the right path with using a barebones environment; I use Linux, but it's not right for the purpose.

The biggest market I see for something like this is games or displaying some sort of video. It seems like it is getting harder and harder to find something that Java/Flash can do that HTML5/Javascript can't but they still have their uses. For Flash it's RTMPS for encrypted video. For Java it's ease of development.

I'm so intrigued because it seems like an awesome way to do some browser fuzzing :D


Well, given the recent post about getting Access running on the web, I reckon you could take it to the next level: get Access running on Wine under Linux emulated in JavaScript on a web browser. Or get DOSBOX to run any old school line of business application in a web browser under any operating system.


Hey, I've made a start on an ARM emulator. (Hardly anything done, unreleased, nothing runs yet.) I'll email you.


>BEWARE: It only works well on Chrome

It boots and I can login using Firefox Nightly.


The linked blog post misses the interesting bits. Yes, it boots Ubuntu (slowly) via an ARM emulator written for AVR. That's just software.

What struck me is that he wrote a controller for an 8 bit FPM DRAM bus instead of just using a big SRAM. That's surprising, and not at all trivial to do over a bunch of GPIO pins.


Would love a more extended explanation of what this means and why it's surprising, for this software geek who dabbles in hardware.


You have to periodically refresh DRAM. That's what the dynamic means in DRAM. If you don't, the memory will "forget" what it's holding. This is because the memory cell is essentially a capacitor.

SRAM would be easier because it's set and forget. It needs a more complex internal circuit, and is therefore more expensive.


It can't be that hard; I once bitbanged a 30 pin SIMM on a PIC with an interrupt to trigger the refresh.

The 30 pin SIMM is 16MB and easy to wire (0.1" pitch). 16 MB of SRAM is probably going to be TSOP or worse, and probably in multiple packages. The multiplexed DRAM bus would also help with the pin count, but it's not clear to me that was a concern.


The point was more that you can wire the SRAM to the address bus of the CPU with maybe a little GPIO logic for bank switching to expand the address space. The DRAM has to be done solely over GPIO (my count is 22 data signals), with attention to timings and refresh.

And sure, I can see how it's done and announce it's "easy". But I'd be hesitant to try it. So if you've actually done it, bravo.

(edit: Among other things, how did you handle the case of your refresh ISR firing in the middle of a read cycle? Certainly can be done correctly, but it's a non-trivial (albeit software-side) problem to solve.)


I handled it thusly: while reading or writing RAM, interrupts are off. Between bytes read they are re-enabled. This means that the longest refresh delay is the length of a ram read/write. This is why my refreshes happen every 62ms and not every 64ms as the DRAM datasheet specifies - to allow me this leeway to be a bit late with the refresh.


Hm... sounds wrong to me. What if the software ends up spinning on the DRAM? You'll run with interrupts disabled pretty much all the time and miss your next refresh. You need to guarantee the timer but not clobber a transaction in progress. You need to check a flag on the way out of the DRAM access routine, or something that tells you a refresh was missed, and do it synchronously, I guess.

The point being: it's non-trivial.


Please note what I said. I release interrupts after every byte READ/WRITTEN, so that the maximal delay to a refresh is the length of a single byte read/write. In case it wasn't clear, the only interrupt in use is the RAM refresh interrupt. If it is masked and triggers, it will execute the handler when it is unmasked. This means that even a loop on RAM read/write will not starve RAM of refreshes.

[also, why do I keep being told I am submitting too fast, please slow down. I just posted 2 comments here, that is all. Had to make a new username :(]


> [also, why do I keep being told I am submitting too fast, please slow down. I just posted 2 comments here, that is all. Had to make a new username :(]

I think there is extra throttling on brand-new accounts, to help keep spam under control. Once your account is a little older it will be less restricted.

Also: welcome, and thanks for joining the discussion!


It's a microcontroller with no external bus, so you're bitbanging it either way.

I'm very impressed with his project!


Yeah, that is actually a pretty awesome bit of engineering. I would never have thought of this, and it is exceptionally clever in some exciting hacker way.


From the actual web site[1], describing the video of the boot process[2]:

"The raw video is in a few segments, since I had to change camera batteries a few times while filming."

[1]: http://dmitry.co/index.php?p=./04.Thoughts/07.%20Linux%20on%...

[2]: http://www.youtube.com/watch?v=nm0POwEtiqE


Impressive. And now I'm wondering if a Minecraft inside Minecraft would actually be realizable. As 8-bit CPUs have already been made (although with unknown architecture), it is a bit less crazy to imagine running Linux with a Java stack on it. It will be hellishly slow and you'd probably need some sort of "hardware" graphics acceleration. And a lot more memory than the current versions. But hey, running Minecraft in Minecraft would be pure brainmelting awesomeness!

Maybe some sort of Hardware Description Language to Minecraft compiler would be useful.


How about Conway's Life in Conway's Life? (Yo, dawg: http://www.youtube.com/watch?feature=player_embedded&v=Q... )


That pretty much completely blew my mind.


Normally, Minecraft logic requires some time in-game to execute, but last year (IIRC) someone was able to invent "instant" logic gates and transmission wires, by exploiting some unusual interaction between pistons and the block-update rules. They're quite bulky, but they do get the job done of achieving FTL communication and supertask computation ;).

With an optimizing compiler to take advantage of these, it might not be unrealistic to implement Minecraft in Minecraft. The real-time speed would only be limited by the real hardware, not by the Minecraft hardware.


This link from the comments on that site just has to be shared here:

"Hackers Successfully Install Linux on a Potato" http://www.bbspot.com/news/2008/12/linux-on-a-potato.html


I see your potato and raise you a dead badger!

http://www.strangehorizons.com/2004/20040405/badger.shtml


In the '90s I wanted to show a friend the Apple ][ game Robot Odyssey. The only Apple ][ emulators I could find were for Windows. There was a Windows machine in a lab, but it was really awkward to access. However, I had a SPARC on my desk and found a Windows emulator. Running the Apple ][ emulator inside the Windows emulator on a SPARC 1 -- it actually ran at about the original speed of the Apple ][.

And this was 15 years before Inception... ;)

Anybody have a SPARC 1 emulator?


qemu will do a SparcStation 5 or 10. http://www.aurel32.net/info/debian_sparc_qemu.php


Sorry, only a 68020 based Sun 3/50.

4 Megs RAM and a b/w console.


The article claims that an MMU and a minimum bit width of 32 bits are required for Linux. Wasn't µClinux rolled back into the official kernel, meaning that with the right configuration the Linux kernel will run on CPUs without an MMU and with bit widths less than 32 bits? E.g. the H8 [1].

Perhaps the author meant to write "Ubuntu", rather than "Linux"?

[1] http://kernel.org/doc/readme/arch-h8300-README


> Linux is generally considered the go-to OS for under powered computers

"Under-powered" is a moving target. It seems to apply best to computers that don't quite meet the specs of the current Windows, or perform very poorly with it. Anything without a prayer of running WinXP is much worse than under-powered, and modern Linux is a poor choice. An old Linux 2.4 distro would be a good option, or my personal favorite for truly pathetic hardware, OpenBSD.

> [Dmitry] threw an antique 30-pin RAM SIMM at the problem. It’s wired up directly to the microcontroller...

Whoa. You can do that? I mean, it makes complete sense, but it never occurred to me... suddenly I might have a use for all those old free RAM sticks.


Depends on what you define as a distro. LFS and Gentoo run pretty fast on anything that will run OpenBSD. OpenBSD runs on underpowered stuff because it runs nothing by default. NetBSD is probably more portable though.


I hope Sophie Wilson is reading this, as that microcontroller probably isn't far off the BBC Micros that were originally used to prototype the ARM instruction set.

I bet nobody then imagined anyone would emulate un*x on a chip being emulated at such slow speeds.


The best part about this is the battery pack in the picture.

I wonder if there's a niche for the equivalent of the Model 100 in today's world?

http://www.trs-80.com/wordpress/trs-80-computer-line/model-1...

A fully mobile general purpose computer with instant boot, impressive battery life, running off 4 AA's.

The Wikireader comes close as the modern equivalent. The input method would have to be something like Siri, with an optional keyboard. With a form factor and batteries from an iPad 3, but with a slower processor and a fast refresh eInk screen, you could have truly phenomenal battery life.


Siri (and most other voice recognition on phones) relies on a network connection, that'll put a bit of a dent in the battery life.


I have a working model 102 on the Shelf Of Dreams here at work.


This is an amazing hack. Well done to this guy, and his totally useless project.


As I said earlier, he needs to run jslinux on this stack. It will be like `computing slowmo @ 1Mfps`


I'm sure he could coax at least a few more hours out of it by somehow finding a working 1x CD-ROM drive to boot a LiveCD from.


I've got one in my basement but getting a working 8 bit scsi interface might be harder.


Sorry, my Mitsumi 1x went in the dumpster many years ago.


A single cool C hack is a million times more cool than all html/css/js hacks combined.. awe inspiring.. jaw dropping stuff.


He hasn't left anything for the rest of us... the next coolest thing will be to run Linux in the air.


I disbelieve. How could this possibly boot to X without a framebuffer emulator too?


There is no X, he is accessing the system through minicom.


Theoretically you could boot X without a hardware framebuffer emulator. X supports rendering to normal RAM as well (for example, for VNC connections).


Actually, if you go read my source code, you'll see that the emulator DOES in fact emulate a framebuffer. In fact I even have code in place to output the image. I just didn't connect a graphical LCD to this particular build. It is, however, supported.


Seriously, amazing ridiculous work. Kudos.


Wow, neat.

What's your day job?


software engineer @ google


Figures. MTV? (xoogler here)


And someone told me recently that Linux won't run on a 286.


What's really impressive is that it still works somehow...


This article had me laughing to tears.

Only to make it even funnier by the glazed look in my girlfriend's eyes when I was trying to explain what's so funny about a guy bootstrapping a 32-bit OS on an 8-bit microcontroller.

I mean, the idea of emulating a 32-bit OS on an 8-bit machine is not something too ridiculous on its own. But the setup he used. Oh my god. Must stop trying to comprehend this madness.

Kudos to Dmitry.


That's how the first version of the ARM was made, by software simulation on a 6502 in BASIC:

http://en.wikipedia.org/wiki/ARM_architecture#History


UUENCODE?



