My Watch Runs GNU/Linux (sam.today)
437 points by BuuQu9hu on Jan 6, 2017 | 234 comments



Being an old Slashdot user, my first incredibly dumb thought was "Imagine a Beowulf cluster of these!" With a pic of Natalie Portman for the watch face, of course.

But one fun thing I could imagine doing is using it as an incredibly portable PirateBox, or any other kind of file server hiding in plain sight.


Man, I remember when those jokes were old, 15 years ago.

Speaking of Beowulf, has there ever been an evolution of the concept? The closest I've seen since is QNX's QNet, which allows transparent management of, and communication between, processes on the nodes of a cluster. I suppose Hadoop or even Kubernetes can be seen as continuations of the concept?


The idea of a Beowulf cluster was using networked commodity PC hardware and Linux to do HPC / scientific computing.

In some ways this idea came to dominate. If you look at the top500 list:

https://www.top500.org/list/2016/11/

All of these machines are big clusters running Linux. Mostly on Intel CPUs.

But on the other hand, the idea of using commodity hardware is kind of a thing of the past. It's mostly Xeon CPUs, not desktop processors. And it's specialized network hardware. And more and more you see dedicated compute hardware like Intel Phi and Nvidia Tesla cards.


Yeah, it's pretty intense to see these clusters in person. In our data centers, we have 40G optical interlinks per rack overhead, 100G spidering across the racks to different rooms and the main network room.

And thinking of the main network room: with the number of Brocades in there, it's probably more expensive than the main enterprise pod in sheer super-expensive network gear alone.

We're also behind the times in a lot of our management. 80% of our servers are bare metal, with limited automation. But we also do "NOC in a box"... many of our use cases wouldn't work cleanly with tech like Docker and Kubernetes.


That's a narrow definition of "commodity" -- the special networks cost less than the same speed of Ethernet, and Intel server chips (non-phi) aren't that different from desktop CPUs.

If you look through the archives of the beowulf mailing list, occasionally someone makes the argument you're making, and few people agree with it.


There is no 'same speed of Ethernet' for infiniband or omnipath or aries, etc. There is more to these networks than throughput, and the switches approach a million dollars apiece.

The rest of the non-phi/non-tesla hardware is pretty much off the shelf, but the interconnect is one of the two distinguishing features of a supercomputing-class cluster; the other is high-performance shared storage (which of course requires the interconnect to function).


You're explaining high speed networking to the system architect of infinipath :)


It's a shame I feel like I need to. There's no world where high-speed interconnects are as cheap as Ethernet, nor is there a world where it is appropriate to replace them with Ethernet. Congratulations on your successes, but they're not really relevant to the accuracy of your post.


and me


I guess the current "beowulf" pocket cluster is 4 GPU cards to crack passwords

https://twitter.com/hashcat/status/817367152927866880

and now to do machine learning...


> Speaking of Beowulf, has there ever been an evolution of the concept?

I don't know if you'd call it an "evolution of the concept", but there are people who've made "low cost" clusters of Raspberry Pi boards (anywhere from four, to several hundred), not so much for practical purposes, but more for learning how to set up, use, and maintain such a system, without needing the space or power requirements a real system would need.
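For a taste of what people actually run on these learning clusters, the canonical first exercise is an MPI hello-world. Here's a minimal sketch in Python, assuming the mpi4py package and an MPI runtime such as Open MPI on every node, plus a hypothetical hostfile named "pis" listing the boards:

    # hello.py -- run with e.g.: mpirun -n 4 --hostfile pis python3 hello.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()        # this process's id within the job
    total = comm.allreduce(rank)  # toy computation summed across all nodes
    print("node %d of %d, sum of ranks = %d" % (rank, comm.Get_size(), total))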


In Soviet Russia, Natalie Portman clusters you!


This is awesome. I got in on the smartwatch craze super early (like the Sony SmartWatch 1), and the one thing I wished for was some F/OSS to run on my watch (believe it or not, the Sony SmartWatch 1 actually had a dev kit), and to finally get rid of the oddly intrusive smartphone apps that came with most smartwatches. Hyped to hear someone actually did it with Asteroid OS (even if it's alpha).

I also thought FirefoxOS would evolve to maybe get in this space, but I was mega wrong about that and lots of other things so there's that. I'm excited that Asteroid won't meet the same fate, but maybe I'm biased.

Also stumbled upon http://www.openembedded.org/wiki/Main_Page while looking at the repo for Asteroid. Excited to see what comes of this project, and maybe even to contribute in the future.


OE is very much a thing in embedded Linux. You might also want to look up the Yocto Project.

I know that places like Formlabs use it (source: interned there), and 100% agree with the sibling comment: there's a huge, painful learning curve to get started.

It's a combination of a lot of problems: the question of what expertise level to write tutorials/walkthroughs for; decent documentation (that you think you understand, but then realize: oh shit, no, I don't); knowing the ecosystems (man, the sheer F/OSS drama you can discover while searching for something...). All of these were problems I noticed just trying to extend our build system.


Part of the problem is due to the small community of developers involved. There are a lot of ways to package and build your source tree, especially if you draw from a lot of different projects. See the chromium or android projects to see what I mean. If you compare the smaller and older embedded projects to the Arduino community, it's like day and night. The Arduino project has a much larger community of users and contributors which has given it much better libraries, tools, and support.


OE has been around a while, and while it has matured well it is still somewhat of a bridge to building bespoke distributions. It's a great start if you're looking to maintain a custom distribution for a custom product/form factor but it's a bit rough for end users. Worse still, there is little-to-no motivation for vendors to open up their products to their customers.


I've always thought that one of the best things about the Raspberry Pi/ODROID/Zero/Arduino/other mini computers/microcontrollers is that they made it obvious and easy (easier at least) for people to bring hardware to consumers, at the sacrifice of some speed and efficiency.

When the Raspberry Pi came out and only cost $25, it made me think I could write some relatively resilient/robust software, put it on an SD card, put it in a Pi, add a case, and sell useful hardware for $50. OE seems like a good step toward recovering some of the speed/efficiency losses that running even the most lightweight Linux distros would force you into.

I think the end-user problem can be solved with extremely robust client-side installers and amazing instructions. If IKEA can get people to build furniture (even if badly), why can't we get a user who has booted an operating system on a running computer to flash a device, when most cases are the default case (as in, you usually don't have to change a ton of ADB/system settings to connect to most Android devices)?


Because there's an extremely large difference in complexity and reliability.


meanwhile, everyone's phone here runs gnu/linux but completely out of reach.

everyone bought a computer from an advertising (google) or fashion (apple) company that only runs in kiosk mode. how does your 90s self feel about that?


> meanwhile, everyone's phone here runs gnu/linux but completely out of reach. everyone bought a computer from an advertising (google) or fashion (apple) company that only runs in kiosk mode.

Pretty sure there is no GNU component in iOS, which is based on BSD. Also, Android uses only the Linux kernel; it's not even close to a GNU/Linux system.


I feel like probably half the people who insist on calling it "GNU/Linux" don't even really understand why.


Same as the people who insist on calling it Linux when talking about the desktop.


If you wouldn't mind referring to the desktop by its more appropriate name, K/Mozilla/Apache/GNU/Linux, that would be great.


You forgot C in there :-)

Anyway, this misses the point of the FSF in insisting on calling it GNU/Linux.

The point of the GNU is not to name every application that runs on your system, but to say that you're running the Linux kernel AND the GNU userland to have a functional system, i.e. that GNU is the second half of a complete system, GNU/Linux.

You don't need Apache to have a functional system. You do need libc etc. to have a functional system and when these are provided by GNU, I think it's fair to call it GNU/Linux.


Also there's now a thing called "GNU/NT", or as Microsoft calls it, "Bash on Ubuntu on Windows".


"GNU\NT", FTFY


Win32 actually accepts both. :)


For a very specific definition of functional that's true. But you can (and many people do) have a functional Linux system without GNU userland, and you could hand a GNU/Linux to most people and it would be practically non-functional for them. Pretty arbitrary line.


Name a system that is used commercially by non-tech people without them being aware of it.


Android, last I checked. Also, the vast majority of consumer network hardware (that ain't running VxWorks, at least). Both of these categories of Linux distribution typically use Busybox instead of the GNU userland, and more often than not use an embedded-friendly libc instead of glibc. The only significant GNU component remaining - GCC - is usually counted as part of the distribution's build environment rather than as part of the distribution itself, so it'd be a significant stretch to count it.


> Anyway, this misses the point of the FSF in insisting on calling it GNU/Linux.

> The point of the GNU is not to name every application that runs on your system, but to say that you're running the Linux kernel AND the GNU userland to have a functional system, i.e. that GNU is the second half of a complete system, GNU/Linux.

I thought the point was to bring attention to the idea of Free Software as a philosophy, whereas Open Source is more of a marketing tool.


Totally: by association with the GNU Project, you'll certainly come across the FSF and its work to promote free software, which is the ultimate goal.


Totally true. What is the purpose of open source anyway? Somehow most corporations believe that open source is good for them.


I wonder if something running busybox + musl on top of a Linux kernel is still any "GNU" in this sense.


Of course not, and hence RMS would not call it GNU/Linux.


Absolutely, that's indeed the reason behind GNU/Linux as opposed to GNU Linux.

Linux is not a GNU package, and hence not "GNU Linux"; however, when a system includes the GNU userland, it runs many GNU packages, hence GNU(userland)/Linux(kernel).

RMS is not taking credit for Linux and does not want anyone to call the kernel GNU/Linux, but rather the whole system IF it is using GNU userland.


But my original point is that the overwhelming majority of Linux systems include a lot of non-GNU software to achieve basic functionality. Why not include all that when referring to the operating system?


Because GNU/Linux is the very core you need to have even a remotely functional system; you need both GNU and Linux to have an OS of any sort. The other parts make the OS even more useful, but they're not the parts without which your system will fail to boot into a reasonable enough state to get those other pieces working in the first place.

In Windows there is lots of software not written by Microsoft; then there's the NT kernel (which by itself does not make an OS) and the userland (which together with the kernel makes a basic OS that you can install all the other nice pieces onto for a great experience). But the core is NT + userland.


You still need GCC (Clang still can't compile the kernel).


That's usually not considered part of the distribution itself, though, and thus not really a contributor to the discussion of "GNU/Linux" v. "Linux". That'd be like calling pre-Clang versions of Mac OS X "GNU/OSX" just because Apple used GCC to build their operating system, or calling OpenBSD "GNU/OpenBSD" just because OpenBSD still uses GCC.


yeah, okay.

---

No, Richard, it's 'Linux', not 'GNU/Linux'. The most important contributions that the FSF made to Linux were the creation of the GPL and the GCC compiler. Those are fine and inspired products. GCC is a monumental achievement and has earned you, RMS, and the Free Software Foundation countless kudos and much appreciation.

Following are some reasons for you to mull over, including some already answered in your FAQ.

One guy, Linus Torvalds, used GCC to make his operating system (yes, Linux is an OS -- more on this later). He named it 'Linux' with a little help from his friends. Why doesn't he call it GNU/Linux? Because he wrote it, with more help from his friends, not you. You named your stuff, I named my stuff -- including the software I wrote using GCC -- and Linus named his stuff. The proper name is Linux because Linus Torvalds says so. Linus has spoken. Accept his authority. To do otherwise is to become a nag. You don't want to be known as a nag, do you?

(An operating system) != (a distribution). Linux is an operating system. By my definition, an operating system is that software which provides and limits access to hardware resources on a computer. That definition applies wherever you see Linux in use. However, Linux is usually distributed with a collection of utilities and applications to make it easily configurable as a desktop system, a server, a development box, or a graphics workstation, or whatever the user needs. In such a configuration, we have a Linux (based) distribution. Therein lies your strongest argument for the unwieldy title 'GNU/Linux' (when said bundled software is largely from the FSF). Go bug the distribution makers on that one. Take your beef to Red Hat, Mandrake, and Slackware. At least there you have an argument. Linux alone is an operating system that can be used in various applications without any GNU software whatsoever. Embedded applications come to mind as an obvious example.

Next, even if we limit the GNU/Linux title to the GNU-based Linux distributions, we run into another obvious problem. XFree86 may well be more important to a particular Linux installation than the sum of all the GNU contributions. More properly, shouldn't the distribution be called XFree86/Linux? Or, at a minimum, XFree86/GNU/Linux? Of course, it would be rather arbitrary to draw the line there when many other fine contributions go unlisted. Yes, I know you've heard this one before. Get used to it. You'll keep hearing it until you can cleanly counter it.

You seem to like the lines-of-code metric. There are many lines of GNU code in a typical Linux distribution. You seem to suggest that (more LOC) == (more important). However, I submit to you that raw LOC numbers do not directly correlate with importance. I would suggest that clock cycles spent on code is a better metric. For example, if my system spends 90% of its time executing XFree86 code, XFree86 is probably the single most important collection of code on my system. Even if I loaded ten times as many lines of useless bloatware on my system and I never executed that bloatware, it certainly isn't more important code than XFree86. Obviously, this metric isn't perfect either, but LOC really, really sucks. Please refrain from using it ever again in supporting any argument.

Last, I'd like to point out that we Linux and GNU users shouldn't be fighting among ourselves over naming other people's software. But what the heck, I'm in a bad mood now. I think I'm feeling sufficiently obnoxious to make the point that GCC is so very famous and, yes, so very useful only because Linux was developed. In a show of proper respect and gratitude, shouldn't you and everyone refer to GCC as 'the Linux compiler'? Or at least, 'Linux GCC'? Seriously, where would your masterpiece be without Linux? Languishing with the HURD?

If there is a moral buried in this rant, maybe it is this:

Be grateful for your abilities and your incredible success and your considerable fame. Continue to use that success and fame for good, not evil. Also, be especially grateful for Linux' huge contribution to that success. You, RMS, the Free Software Foundation, and GNU software have reached their current high profiles largely on the back of Linux. You have changed the world. Now, go forth and don't be a nag.

Thanks for listening.


Linux is not an operative system, and calling it that was just a temporary quirk, based on the notion that an operative system has only a single available kernel.

In Debian, the Linux kernel is just one of many optional packages. Replace it with BSD and you still have Debian the operative system, running on a BSD kernel.

Some people call it a Linux distribution, but that's incorrect. It's a software distribution, similar to how Apple distributes software through the App Store and Microsoft distributes software through the Windows Store. Linking the kernel to the distribution makes sense if the distribution supports only a single kernel, but that's not true any more. Debian is no more a Linux operative system than it is a BSD operative system or a Hurd operative system. Debian is, however, an operative system.

If there is one thing I wish people would do, it is to stop confusing the role of a kernel with the role of an operative system. I don't go to kernel.org and expect to get a full-blown operative system to install on my laptop. I don't tell people to go there when suggesting an alternative to Windows and Mac. Nothing that people use to distinguish which operative system they currently have involves a kernel, and one does not talk about kernel code when recommending people switch from one operative system to another.


It's "operating system".


I know, and it bugs me that I didn't see it until after the edit period (it's a Swedish-to-English mistranslation).

To say a few more words about Debian: the operating system has targets for multiple architectures, multiple kernels, and multiple platforms/hardware. Some treat them as four different operating systems, i.e. "Debian GNU/Linux", "Debian GNU/Hurd", "Debian GNU/kFreeBSD", and "Debian GNU/NetBSD". It looks silly, and it's the same software in all of them unless you do things very close to the hardware.


> One guy, Linus Torvalds, used GCC to make his operating system (yes, Linux is an OS -- more on this later). He named it 'Linux' with a little help from his friends. Why doesn't he call it GNU/Linux? Because he wrote it, with more help from his friends, not you. You named your stuff, I named my stuff -- including the software I wrote using GCC -- and Linus named his stuff. The proper name is Linux because Linus Torvalds says so. Linus has spoken. Accept his authority. To do otherwise is to become a nag.

The proper name of "Linux" (the kernel) is indeed Linux, because Linus said so and everyone, including RMS, agrees. No one insists on calling Linux (the kernel) GNU/Linux, as there's no GNU in there, and it would be pretty silly to insist on that.

Also, GCC is hardly the only critical GNU component that modern GNU/Linux systems rely on. But even so, it's not that anyone wants to name your program GNU/something just because it was compiled with GCC; rather, Linux is the kernel and GNU is the userland. To make a functional system, you need a kernel (Linux) and a userland (i.e. GNU), so if you're using both components, and one is called GNU and the other Linux, it's fair to call the result GNU/Linux.


userspace is really easy to swap out. linux is the actual important part. what is gnu without linux? nothing. i can use suckless coreutils, llvm, and musl and still have a functional linux system. i don't need gnu for anything.


Not totally true: you still very much need GCC to compile the kernel. But sure, you can also do the reverse and run the GNU userland on a different kernel; then it will not be called GNU/Linux (see Debian & Android). However, WHEN you're using the GNU userland with Linux, it would be nice to call it properly, GNU/Linux; that's all. It's still the most popular free set of packages (there's a reason even MS used the GNU userland in WSL), and adopting the GNU/Linux naming convention (only when GNU is indeed used, of course) to give it some credit for starting and substantially contributing to the free software movement is the least we could do.

The reason RMS wants people to do this is not to take more credit for himself than is due, but to bring more attention to "free software", (which GNU promotes), as opposed to just "open-source" (which Linus promotes).


i understand what he is trying to do, but i think he should focus his attention on, i don't know, actually shipping his GNU operating system. or perhaps building compelling products that are free software. free software enabled open source to eat the world, but they, themselves, are not doing the eating.

i'm saying, shut the fuck up with this pedantic shit and make something that people want. our competitors are multi-billion dollar companies. we can't just promote ideas, we actually have to fight head to head. most people aren't ideologically driven, they just buy whatever seems best/most convenient.

focus on actual measurable things like marketshare. how many people are getting the four freedoms? that's the goal, right? so measure it. free software has benefits, but they aren't being marketed aggressively enough to actually reach consumers. we have all of the pieces, but no vision or marketing strategy.

strong copyleft provides the same IP protection as proprietary licensing, especially when you consider the AGPL. charge for shit. make it sexy. whatever you have to do to make money and spread free software. sue over license infringement. fight, damnit!


What is the history of this copypasta? There are some valid points, and it is a nice rant about the GNU/Linux naming controversy (which seems never-ending), but AFAIK nobody actually said it; is it just a hypothetical rant by Linus, engineered by the hive mind?


> To do otherwise is to become a nag. You don't want to be known as a nag, do you?

RMS a nag? No way, I can't possibly believe that.


No, because you appear to be missing some important facts.

GCC became relevant when UNIX vendors, initially Sun, decided to sell the developer tools instead of bundling them for free.

So the 80's hipsters that had largely ignored GCC, decided to contribute to its development instead of paying UNIX vendors for their tools.

Long before Linux was even an idea.


Most don't understand the history of the GNU Operating System:

https://www.gnu.org/gnu/thegnuproject.html

GNU contains a lot of non-GNU-written software because, to create a fully free OS, only the portions that did not already have free replacements needed to be written.


make a distro, call it gnu/linux, problem solved.


Why not make a GNU distribution of Linux (sic) and call that GNU/Linux?

https://www.gnu.org/gnu/gnu-linux-faq.html#gnudist

All the “Linux” distributions are actually versions of the GNU system with Linux as the kernel. The purpose of the term “GNU/Linux” is to communicate this point. To develop one new distribution and call that alone “GNU/Linux” would obscure the point we want to make.

As for developing a distribution of GNU/Linux, we already did this once, when we funded the early development of Debian GNU/Linux. To do it again now does not seem useful; it would be a lot of work, and unless the new distribution had substantial practical advantages over other distributions, it would serve no purpose.

Instead we help the developers of 100% free GNU/Linux distributions, such as gNewSense and Ututo.


Not the same. People who say GNU/Linux at least know that Linux is just the kernel.


I run arch linux and nixos. I've never heard of a distribution called gnu/linux, is it new?


Despite downvotes, this, admittedly sarcastic, question makes perfect sense. The name of a "distribution" is the name of the OS. It may not even be a "distribution", like Android. Nobody calls Android "Linux".


Linux is the operating system; the distribution is only a collection of apps running on top of it. Ubuntu does not handle stuff like memory allocation or scheduling.


An operating system is a different concept from a kernel. What you are describing is a kernel, not an operating system.



If you install Termux on android, you get a full Linux experience though, with bash, gcc, and even things like emacs, ruby, and python. And it's just a normal app -- not requiring root or anything.


> not requiring root

i would require root for most things i do on a full linux computer, such as running network diagnostics, using special kernel drivers, and properly debugging programs.

the problem is not that some app "does not require root"; the problem is that every time the user does need root, they are denied! because only the advertising company has root access to their pocket (and wrist!) computers.


Love Termux, but I wish I could have it the other way around mostly: boot to a terminal and "startx" if I need to. That would be a fancy terminal, granted, if it is to handle modern touchscreens, but still. It would be fun.


Never heard of Termux, thank you very much for mentioning it!


Never heard of Termux either. I use ConnectBot (a very good SSH client), which can also open local terminals.


But that Linux kernel is usually compiled using gcc, still?

I remember there was some effort to port it to Intel's compiler first, then Clang, but AFAIK the official stance of upstream Linux is "we use GCC, get over it"?


I confess I typed "gnu" mostly out of habit this time. But I still think gnu/linux makes sense, as Linux relies (or relied) heavily on GCC. You are right anyway; I should have said Linux only there.

And yes, iOS is BSD-based. You are right again.

But in the end, your reply is completely off-topic and misses the core issue of my comment :)


My 90s self had a Super Nintendo for which the SDK was out of reach. Now I can write apps for my iPhone, iPad and Apple TV.

My desktop machine is still a desktop machine.


Sorry to hear about your Nintendo limiting your childhood.

I did have a Master System, but still, I mostly played with my father's MSX and XT computers, which had better games anyway, and where I learned to edit hex values, cheat on my save games, and later on to write simple QBasic games.


Unfortunately there is no GNU on iOS or Android. Ever open a terminal on an Android phone and wonder why things just don't feel like the GNU/Linux systems you're used to? It's because it uses things like busybox instead of the GNU coreutils.


Not to mention what happens if you decide you would like to start writing a little program (as one does on a real Unix system) and realize that you are in a sandbox designed to prevent you from doing that unless you use all the official SDK stuff through Dalvik, which has a completely different flavor and set of capabilities.

Sure, I have a very awesome computer that cost hundreds of dollars but I can't really use it because... somebody said so?


https://play.google.com/store/apps/details?id=com.termux

Zsh, bash, make, clang, SSH, Python, vim...

Their package selection is small compared to Debian's, but still quite nice.


being able to install it doesn't make it part of android.


To be fair, that's all most users need, and it could work for desktop OSes too. You don't need admin rights to open Facebook in a browser...

But it hurts as a power user.


I spend a good part of my day in one variant of a gnu/linux environment, whether it be development of apps (for which I use GNU Emacs), or administering stuff on a server (which technically isn't gnu/linux), so having my fashion phone in kiosk mode feels pretty good, actually. I think I'd be pretty unhappy if the stuff I count as "work" in my mind bled over into my personal life, and I love not having to jump through the same types of hoops I do for work just to make a simple phone call or send a text message.


My 90s self never wanted a PC in his pocket, he wished his iPod and flip phone could be one device.


Exactly. I didn't buy a pocket computer, I bought a phone that can play music files and show a map. When I wanted a general purpose computer, I bought a general purpose computer.


My 90's self feels quite good knowing there are enough computers to hack among the big ones (and card sized, and the one on my TV (yes, with GNU/Linux), and the ones I rent...).

It would be great if my phone was completely under my control. But my priorities are getting my communication protocols back, and avoiding losing my desktops and servers. The phone can wait.


> how does your 90s self feel about that?

It's a knife to the heart when you put it that way.


:D


Feels great because my 90's self can look into the future of the early 2000's and see what Windows was like when malware first erupted on the scene and ordinary people were totally unprepared to handle it. I am happy to trade my freedom to tinker for that to not be a problem so I can do actual things I need to with my phone.


Kind of a weird argument. Has malware prevalence really fallen so much? It feels like someone's going to start selling AV software for browsers soon.


What phones run GNU/Linux? Most run Android (Linux kernel, but no GNU in sight) or iOS (based on BSD, maybe with GNU?).


My phones run Java/Linux and Windows Phone.

The Linux fork used on Android and the set of official NDK APIs make it so that Google can, at any Android release, swap the kernel for something else, and only OEMs or devs using forbidden APIs will notice.


True, but Termux[1] on android makes this less of a pain.

1 - https://f-droid.org/repository/browse/?fdfilter=termux&fdid=...


I wonder if it still would work in Android 7, with the new restrictions that kill any application that tries to link to non public NDK libraries.


And since Android 4.4, access to external SD cards has been restricted unless you use an annoying API or have a rooted device.


> everyone bought a computer from an advertising (google) or fashion (apple) company that only runs in kiosk mode.

This type of attitude just discourages people from ever wanting to leave the so-called kiosk mode.


Since I own the same watch...I'm intrigued. Battery life is a concern (as is integration with some stuff). But yeah who am I kidding I'll try this and even if all it will ever do is display the time there's something to be said about "well yeah I got my Linux box right here!" :D

Remembering the "runs on a toaster" shirts I am now curious if NetBSD (or any BSD) will run on it. The thought that I never even considered messing with the watch makes me a bit sad (I've turned into too much of a consumer, not enough tinkerer left :P)


One of the big draws of buying and wearing a mechanical watch is the emotional feeling of something busily working away on your wrist. In the same way, I would find great joy in wearing a flavour of Linux right on my wrist.


Absolutely - I'd love to see more along the lines of Tag Heuer's 'Connected' watches (which I think all have LCD faces), but stepping back to mechanical time.

At its most basic, just a notification light that mirrors the one on my phone/tablet.

Ideally, I think a ticker-tape-style circular display around the edge of the (real mechanical) watch face to give notification headings would be awesome.


Check out the Withings Steel HR. No ticker-tape display, but it does have a small inset LCD to give you connectivity, with the style/class of analog.

https://www.withings.com/eu/en/products/steel-hr


I do love my Apple Watch, but I would also be thrilled if they put some sort of really fancy mechanical gizmo in there, for the sole purpose of having an ultra-complex mechanical escapement instead of software.


Apple used to put speakers in their iPods just so there was a click when you used the scroll wheel, so I wouldn't put it past them to have some sort of haptic imitation of ticking that 99.999% of people will never notice or feel. :)


Apple put an electromagnet in their newer laptop trackpads so they click when you press them, even though they can't move and aren't buttons.


Yeah, and it's amazing how realistic it feels.


I find the clicking on my MBP 2012 really irritating - I much prefer tap to click. Do others really spend the entire day noisily clicking around to do stuff?

Weirdly the tap to click stops working after connecting to the MacBook using VNC, even though if you open System Preferences and look at the touchpad options, it believes it is enabled with tap-to-click. They've said it is fixed about 10 times now - not sure if it is. Must retest.


I've never understood why anyone doesn't use tap to click on Apple trackpads. Physical or not, the full click is such a flow interruption for me it drives me crazy when I have to do it (fortunately, effectively never).

There was a viral Flash(?) game a few years ago involving a frog sticking out its tongue to trap insects; the catch for the game was that it had no help, everything in the UI was discoverable, but barely.

Unless, of course, you used tap to click, which wasn't registered by the game. I spent 5 minutes trying to play before deciding the whole thing must be a hoax.


You might want to try NoMachine, it works on Linux, macOS and Windows. It's pretty fast, but the picture quality isn't that good because it uses video compression (VP8) instead of bitmaps.


I use it every day, and until reading this thread I didn't even realize it wasn't mechanical...


on iPhone 7 you get the haptic feedback when e.g. scrolling the picker items. I think 100% of users notice that.


Meh... Maybe I'm just jaded by having had Linux running in my pocket for many years. (Linux, then BSD, then Linux again...)


What GPU does it use? Is it a native GPU driver + Mesa, or do you use it with libhybris and Android blobs? Wayland is neat, but it's pretty annoying when there are no native drivers available. One of the problems with Android is that it became like the Windows of the past: hardware makers produce Android drivers with closed userspace blobs and leave it at that. Blobs built against Bionic make running a proper glibc Linux on such devices a pain, unless hacks like libhybris are deployed or you manage to replace them with proper open drivers.


From the front page:

"AsteroidOS is built upon a rock-solid base system. Qt 5.6 and QML are used for fast and easy app development. OpenEmbedded provides a full GNU/Linux distribution and libhybris allows easy porting to most Android and Android Wear watches."


Understood that it can use libhybris. I was asking what GPU is used in the poster's watch. If it has native drivers, libhybris isn't necessary.


Qualcomm, so in theory it can use Freedreno.

AFAIK, Hybris isn't just for the GPU and is used to port various binary Android drivers to ubuntu touch, sailfish, tizen, luneos etc as can be seen from the following chart:

https://wiki.merproject.org/wiki/Adaptations/libhybris


Yes, it can be used any time there is a need for a blob that depends on Bionic and there is no open replacement. Maybe for the touchscreen driver in this case? That's why it's such a mess with Android-only hardware.

No idea why Tizen needs it though. Samsung can afford writing normal drivers for all their hardware.


Love the hack, but I can't agree that they're a fad. Long before smartwatches, my cousin had a Bluetooth watch connected to his phone. And today I have a smartwatch connected to my phone to avoid having to stop and take it out when I'm on a bike. I bike everywhere, like thousands of others where I live.

So there's clearly a market for some sort of wrist-device that makes using your phone easier.

The thing that makes it feel like a stupid fad is when you have to charge it every day and therefore forget to put it on. It hasn't become habitual quite yet.

Which is why I love my smartwatch for having an e-ink display and not an amoled display. So even after more than a year of operation I still only charge it once a week.


That's why he said it was a fad for him. And it is, for the vast majority of people, just a fad. Bar some use cases like biking and so on, having a smartwatch brings nuisances for no tangible benefits.


One could argue that in the larger context of modern cities (outside of specific places like the Netherlands, or some Chinese cities), biking is mostly a fad too.


> Even more amazingly, running on that tiny package of hardware is some live multitasking

Yep, pretty amazing that a quad-core 1.2GHz machine with half a gig of RAM can run more than one thing at a time!


While I don't find that so impressive, I do find it amazing that a quad-core 1.2GHz machine with half a gig of RAM can fit in that tiny package of hardware.


I sense dark sarcasm hidden in this message, perhaps from behind a smug UNIX beard?


Or maybe someone that remembers the Amiga...


No need for that much memory. If you can remember 8 years ago that's enough.


Scans mental dictionary for interesting things from 2009

  Nothing found
What are you referring to?


It's hard for me to consider a smartwatch when my current watch is solar powered and I haven't had to change batteries for the last 8 years. Maybe when these can run on a charge for 2+ weeks at a time, I might consider it.


That's a completely different usecase though. You don't wear a smartwatch just to see the time. You wear it because it can do stuff for you outside of telling you the time.


The ability to tell time alone is enough. The varied selection of clock faces to match my mood or clothing makes me prefer my Huawei smartwatch over a mechanical one. I don't use any special features.


I'm confused. Isn't that the definition of a smartphone? :P


But why would someone want a wristwatch when they could have a pocket watch?


> Isn't that [do stuff for you other than telling the time] the definition of a Smartphone?

The definition?


In my case, I already went through the first smartwatch wave in the 80s, so I don't see any value in the current one.

I see it as a way for mobile OEMs to sell more electronics, now that everyone and their dog owns a mobile phone and a tablet and doesn't plan to buy new ones anytime soon.


RIP pebble.


My Seiko Kinetic is going on 15 years without a single charge and will probably do fine for many years to come. Also, I can just look at it to tell the time, no need to turn it on first!


I found an old Casio in a box a few years back, it must have been five or six years old but was still going strong and had the right time.


My Huawei smartwatch is always on. That, and being round instead of square, is the reason I bought it.


What would be the killer app for a smartwatch from a hacker/techie perspective? Because I'm still struggling to see any realistic use case for myself.


My two killer "apps" are a google authenticator like app (OATH OTP token generator) and notifications.

Being able to glance at my wrist and see if I need to get my phone out is pretty nice. I don't try to be inconspicuous in meetings or anything, but when I'm doing something, or walking down the street, etc., it's nice to be able to decide if I need to stop and handle it (like a call from my wife or the daycare), or can ignore it, like an email from a newsletter.

And with 2FA everywhere, it's nice to have a standalone token generator that I can wear. The one on my Pebble is strictly offline operation (it doesn't need the phone connected), which means it's even useful if I break/lose my phone or the battery's dead or something.
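For the curious: the OATH TOTP scheme such offline token generators implement is tiny. A minimal sketch in Python, using only the standard library (the base32 secret below is a made-up demo value, not a real one):

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, digits=6, period=30):
        # RFC 6238: HMAC-SHA1 over the current 30-second time counter
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // period)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical demo secret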

Music control is pretty cool as well. But mostly for things like when I'm doing the dishes, swimming in the pool/at the beach, or in the shower. My Pebble is fully waterproof, so I only take it off to charge every few days. So I can stream music from my phone to a bluetooth speaker, and control it without needing to handle my phone.


For me, killer application is basic usable text input. I'd love to covertly send text messages, write down some thoughts and TODO items, while still maintaining eye contact and following discussion.

For it to be usable, it should not require sight, and higher typing speeds should be achievable by training. This rules out virtual QWERTY keyboards and predictive suggestions. I think glyph recognition would be usable, but other gesture systems would work too, as long as they don't demand supervision by sight.


Does there exist any input method, other than physical keys, that doesn't require sight? If so, I'm not aware of it.


gesture input doesn't really need sight, especially if it is on a constrained surface (e.g. if the watch has a bezel that sticks out a bit, and a touch surface, you can easily "recalibrate" your absolute position on it every time you hit the borders). With adequate sensors it might be possible to do gestures in the air above it, or on the arm next to it.

Downside is that it requires training to learn the symbols, and of course without sight or other feedback you don't notice mistakes while doing it (same with keyboards though).


Not that it is perfect, but the Fleksy keyboard is pretty good about guessing what you meant to type, even if you miss several letters. I'm able to use it without looking, but I'd never trust it entirely.

edit: I forgot to mention Minuum. It's quite similar, but compresses all the keys into a single row. Consequently, it doesn't seem to be as accurate.


Maybe I'm weird but I don't look at my phone anymore when using a swype keyboard. I know where the keys are on the screen and can usually get about 90% word accuracy without looking. A quick glance at the final message when I'm done and off it goes.


I'd imagine the old Palm Graffiti (or similar) simplified glyph recognition might be usable with training, but I'd still prefer some kind of physical reference.

Maybe you could compensate a bit with some haptic feedback.


Something like https://en.wikipedia.org/wiki/Graffiti_(Palm_OS) could work on smartwatches?


Voice recognition obviously?


That's not covert though, actually it's so overt I generally feel like a douche using it...


Yeah, seriously. Voice input is the primary way I use my watch. I use it for sending FB and text messages, looking up the weather, setting reminders, etc.


Morse code?


It's been done. http://www.cultofmac.com/409175/nifty-app-uses-morse-code-to...

Although I think a better input method would be voice (like the GoPro cameras) for most use cases. "Watch, send text to Mom...." Morse Code is just too slow and most humans can no longer send or copy it. It should be the input method of last resort.


My first goal would be to set up a working jasper [https://jasperproject.github.io/] or "jarvis" type of voice-controlled system. Admittedly, there aren't that many use-cases, more of a "cool to implement" sort of thing. But it would be cool to speak voice commands into my smart watch.


In a movie, we'd be disabling CCTV cameras from it as we ran through corridors of the Pentagon, y'know, hacking stuff.

In reality, I think the author just did it for its own sake:

> I ended up with a free LG Watch Urbane ... I realized that smartwatches were just a fad (for me at least), and this was a device I could experiment with.


For me it's notifications. Calendar alerts, texts, app notifications, and alarms are much easier and less disruptive to look at on your wrist than to pull out your phone. It's a much better experience.


For me: email. This is the one thing that, if I could do it properly on my watch, I'd get one.


I run Asteroid OS on the old LG watch (which I also got for free; probably a prime market for this OS). It's still in alpha so it's fairly buggy. But it looks really nice, particularly for being FOSS. It can (in principle) do notifications, weather, and music control. I look forward to them smoothing it all over, but I wear it already.


This is amazing (I also have a moment of amazement at my smartphone every so often).

But an issue is power usage. E.g. Ubuntu runs on a smartphone, but with much shorter battery life than Android. (Though to be fair, I don't know the power efficiency of Asteroid OS.)

One side benefit of non-root Linux (e.g. Termux, Terminal IDE) is retaining battery life.

However, Asteroid OS is open source, which counts for a lot!


Quad core + 512MB of RAM. Now it just needs HDMI out and we've got a quite capable portable computer on our wrists.


My only concern with smart watches is battery life.


It's definitely one of the main restricting factors. That said, even my gen-one Moto 360, while not new and not known for being amazing in the battery department, hasn't been an issue for me. I just put it on the cradle on my nightstand when I go to bed and pick it up in the morning. It charges pretty quickly as well, so if I get home from work on a Friday and anticipate being out late, I can put it on the cradle for 30-60 minutes and top off the charge before heading out, and there's never an issue.

Still, the battery isn't something like the battery in a traditional watch so at some point I won't just be able to order a button cell on Amazon and swap it out like I could on any old Timex, etc. Definitely hurts the lifespan and it's a major reason I only bought this watch because I got it on sale for $100. I wouldn't feel as comfortable buying a $500 watch that would be forever battery-dead in 3-4 years.


throw it in a pot, add some broth, a potato... baby, you've got a stew going


This needs a microphone. It would be great to be able to drive activity via voice, or make reminders. If it also had some kind of Bluetooth/WiFi then you could send emails via dictation, but I guess the size/battery constraints rule that out.


It's got a microphone. Also both Bluetooth and WiFi, and you can (with the original Android Wear -- no idea if Asteroid has got to the point of implementing anything similar) drive activities by voice, set reminders and send emails by dictation.

The battery lasts for around two days.


Wow that's cool. Battery life is always a pain but two days is not so bad.


This would be extremely appealing if it had sufficient I/O to make it into a mobile, basically headless computer you could hook up to whatever display or input was handy. Looks like it only has a single MicroUSB port though.


Theoretically, it has the CPU, and the USB port is 2.0, so you could manage a USB 2.0->VGA converter.

I've heard such converters are the hardware equivalent of running the unaccelerated VESA driver due to the low bandwidth though. I don't expect it would do 60fps beyond 1152x864 and 30fps beyond 1280x1024.
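Back-of-the-envelope math (assuming uncompressed 24-bit frames and roughly 35 MB/s of practical USB 2.0 throughput out of the 480 Mbit/s raw rate) shows why these adapters have to compress aggressively and still struggle:

    # Raw framebuffer bandwidth vs. practical USB 2.0 throughput
    USB2_BYTES_PER_SEC = 35e6  # ~35 MB/s usable out of 480 Mbit/s raw

    for w, h, fps in [(1152, 864, 60), (1280, 1024, 30)]:
        rate = w * h * 3 * fps  # 3 bytes per pixel, uncompressed
        print("%dx%d@%d: %.0f MB/s (%.1fx USB 2.0)"
              % (w, h, fps, rate / 1e6, rate / USB2_BYTES_PER_SEC))

Even the modest modes quoted above come out at several times what the bus can carry raw, so everything hinges on compression.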


Speaking from experience: these things are only useful for running powerpoint presentations. They break down even on scrolling a browser page, much less playing video.


Ah. Yikes.

Thanks for the info, I'd always wondered.

Kinda sad none of them spent the extra effort on building a differential update protocol of some kind - but then the processor inside would probably need to be 250MHz+...


You could do microUSB -> MHL -> HDMI, and maybe use the watch bluetooth for mouse/keyboard input.

Though I don't think MHL is open source, and I'm probably completely wrong in thinking the BT hardware on that could be used with the bluetooth host stack


MHL is not based on USB; it only runs over the same cabling, so that wouldn't work (unless, for some totally crazy reason, the watch has the necessary hardware sitting totally unused).


Four processors in a watch? That's impressive, but if you actually use them, what's the battery life?


> Four processors in a watch?

Four cores, which makes a difference.

> what's the battery life?

It might not be as bad as feared, at least regarding the CPU at idle. A lot of modern CPUs support turning cores individually on and off (or at least putting them into very low-power sleep states) as needed, and if the OS scheduler is bright enough, taking advantage of this can be a lot more power efficient than fiddling with variable clock rates. There might be a performance hit for single-threaded tasks of course, as per-core performance might be low, but at the times when you care (while in active use, interacting with an app) there will be at least three distinct tasks going on: core function management, display management, and at least one user task. While the watch is idle there will be just one active task most of the time, so only one core needs to be powered up (or none most of the time, with device-management tasks and user apps that respond to events/notifications only waking on interrupt).

Having said that, having to charge it at least once most days is my only major complaint with the MS Band that I wear most of the time. It would be interesting to see how well this manages in that regard.
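As a rough illustration of the per-core on/off mechanism described above: Linux exposes CPU hotplug state through sysfs (the paths below assume the standard layout), so you can check which cores are powered at any moment:

    from pathlib import Path

    # Show each core's hotplug state; cpu0 often has no 'online' file
    # because on many systems the boot CPU cannot be unplugged.
    for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
        online = cpu / "online"
        state = online.read_text().strip() if online.exists() else "1"
        print(cpu.name, "online" if state == "1" else "offline")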


I notice that nobody answers the "what is the battery life" questions...


The article has both the hard fact "410 mAh" as well as the more colloquial "a good days worth!"

Sure, the author doesn't confirm or expand on this, but I'd say his comment is sufficient.


410 mAh is a measure of battery capacity, not battery life. And the phrase "a good days worth" presumably describes the watch as shipped with Android Wear.

I think what the original question was getting at was more like "what is the battery life of the watch when running AsteroidOS?" That seems like a pretty fair question which was not addressed.


Looks cool. Wouldn't trust it to tell the time though


Does it just run date on the CLI and pipe that out? That'd be useful. Or show the date in your BASH prompt.


    watch date


Probably don't want the default refresh of 2 seconds. And the title/header is probably redundant, as well. Maybe this, instead?

    $ watch --interval 0.1 --no-title date

;)


Is it 2003 again? The content container here uses a white PNG as a background instead of background-color: white.


It's not a perfect white: it's a 476-pixel square with some random (very small) splotches of light grey. As the image is called 'texturetastic-white.png', I assume the intention is to give a more natural texture feel to the background.

I'll admit that it's not the design choice I would have made, but then again, I'm not a designer!


Does it run Doom? :-)


Obviously it does.


Is this really something to be happy about?

Even ignoring technical considerations (the dizzying amount of code and cruft required to run a watch), it goes against one time-honored watch tradition: simple, elegant mechanics.


>it goes against one time-honored watch tradition: simple, elegant mechanics.

So, just like every digital watch ever created?


Congratulations. You've won today's "Find something to moan about" prize!

Seriously though - stop getting hung up on the word 'watch' - it's not the real point here. Nobody is arguing that a watch needs a full-blown OS just to tell the time.

> simple, elegant mechanics.

Have you ever looked inside a watch!


  > simple, elegant mechanics.
  Have you ever looked inside a watch!
Well, I'd grant the "elegant mechanics", getting all those teensy cogs and wheels and levers and bits to work together like, er, clockwork…

But "simple"? It's telling that the term for each feature on a watchface is "complication" :-D


Oh neat! I've always wanted this.

When I last tried hacking my Moto 360, it was possible to get Debian running in a chroot reasonably easily.

The trouble came mostly with video access. The userland graphics libs are all compiled against Bionic rather than glibc, and at the time they were only available in compiled form. That meant it wasn't really possible to have a clean glibc system.

I guess either something has changed, or they're using a hack and incorporating Bionic, which is what many people have done on other mobile platforms.

Very neat though, I'm going to have to try this out!


Apparently libhybris [1] solves the glibc -> bionic problem, but I've never been able to work out how to use it. I'd like to see a "Hello World" done with a small rootfs+libhybris on something like the Nexus 4.

1: https://en.wikipedia.org/wiki/Hybris_(software)


What's the battery life like running Asteroid vs Android Wear?


Something only a nerd can be excited about.


Great project. I had no idea the LG Urbane used an ARM Cortex-A processor. I have a few 1st-gen Android watches, and those used ST ARM Cortex-M series micros. I have an LG Urbane as well, but wouldn't want to dump it, since I find Android Wear to be useful.


Oops, my bad. I was mistaken about the 1st-gen Android Wear watches. They also used Cortex-A processors and can be updated with AsteroidOS. I was thinking of other smartwatches.


The site has a responsive-design problem; cells overflow on a Nexus 4.


Love the Nexus 4, mate! Watch out for breaking power buttons though!

Thanks for the bug report btw. I'll fix it when I'm thinking straight in the morning!


Oh yeah. It's those pesky little things that push the width beyond 100%; at least here it was easy to find. Sometimes you have to use the console's inspect tool, which highlights parts of the page, to find what is causing the width overflow.

Also, yeah, the Nexus 4 isn't bad; too bad they don't update it anymore :(


I tried Asteroid on the Sony SmartWatch 3 this month -- graphics are iffy. Docs suggest the experience is better on other hardware.


Newbie question - isn't Android Wear's kernel open source? Android has switched to Wayland as well, right?


Android Wear's kernel is Linux, and some of the low-level userland is open source, but everything above that is locked down and proprietary. It's definitely not using Wayland. In general, Google has been moving away from Android's open-source legacy as much as they can.



Tizen uses Wayland


Very high CPU usage while browsing this site; Firefox's Reader View saves the day once again.


I had the same problem here: while scrolling, Firefox couldn't catch up and showed just white for a short time. Then I saw the huge background gradient. The background is an image, a radial gradient, and also background-blend-mode: hue;

edit: there are even more huge divs that have the same background, so there are multiple layers of gradients with hue blend mode.


Spot on. I found this out the hard way today and wrote up a post with some benchmarks and the tiny patch that fixed the bug: https://learntemail.sam.today/blog/1-css-property-that-will-...


If I were you I would also consider getting rid of your textured background (https://learntemail.sam.today/static/images/texturetastic-wh...). The little grey blobs just make me wonder whether my monitor is dirty.


is there anything like this for Pebbles?


I like how he is sarcastic about being very happy that systemd is installed on the watch.


If you're right, that's a big sigh of relief from me.

The guy mentions how "Lennart Poettering would love it!" as the h2, and also describes X11 as "legacy".

With these in mind I feared he was serious about the systemd bit.

I'm really sad X11 is legacy software myself, as an aside. It's a disaster, sure, but now we have one more layer of "uhhhh..." for all the UX types to get scared away by: it used to be "(WinAPI) vs ((Qt)/(GTK+)/(Xlib/XCB))", which was embarrassing enough; now it's "(WinAPI) vs (X11((Qt)/(GTK+)/(Xlib/XCB))/Wayland((Qt)/(GTK+)/(???)))", which is just plain annoying for low-level graphics hacker wannabes. I can make a WinAPI app in C that opens a window in a few KB, whereas to do that on Linux now I HAVE to support XCB and also write my own tiny UI for Wayland.

Practically speaking, it means that most developers will just pick a side^H^H^H^Htoolkit and go with that. It doesn't help that I've never been able to get past Qt's love of background processes vs. GTK's various quirks and spasms.

sighs...rant over, situation accepted a bit more.

systemd is still a disaster though. I saw a massive 3Wx5H 1080p video wall in a shop window the other day, displaying... systemd emergency mode.

At least I learned that some video stretchers are smart and will drop the panels they're controlling into standby if they display black for too long. (Only the two panels at the top-left displaying the error were on, the others visibly had their backlights off. Neat.)


It hasn't been all roses in the windows world. Aside from qt/gtk being just as desirable there, there was also winforms and WPF in .net that are now left out in the cold and no clear forward direction that would also work on windows 7. They seem to be taking an each way bet on whether Win32 is deprecated or not.

Actually, I think this is the situation that led to the growth in webapps, and it probably helped the decline (or failure to rise) of Windows Phone: no one had a clue where MS was going.


WPF is pretty much alive for desktop applications and its architecture (XAML + Blend tooling) is the foundation of UWP applications.

Windows Forms is officially dead as communicated at Build a few years ago. It is now playing chess with Carbon.

MFC is officially on life support. The way forward for C++ developers is UWP.

Everything from Win32 that isn't required for UWP support is deprecated, and Project Centennial is the official way to bring Win32 applications into the shiny new UWP world.


TIL. Thanks for this, been wanting to keep my finger on the pulse of Windows development, but am quite distant at the moment (no Windows hardware).


Huh. That's interesting, and kinda sad.

TIL about this aspect of the bigger picture. I'm a bit behind on where Windows is at in the grand scheme of things nowadays.


> I'm really sad X11 is legacy software myself, as an aside. It's a disaster, sure, but now we have one more layer of "uhhhh..." for all the UX-types to get scared away by: it used to be "(WinAPI) vs ((Qt)/(GTK+)/(Xlib/XCB))", which was embarrassing enough; now it's "(WinAPI) vs (X11((Qt)/(GTK+)/(Xlib/XCB))/Wayland((Qt)/(GTK+)/(???)))" which is just plain annoying for low-level graphics hacker wannabes - I can make a WinAPI app in C that opens a window in a few KB, whereas to do that in Linux now I HAVE to support XCB and also write my own tiny UI for Wayland.

It's worse than that: you will quite possibly need features that are not implemented by Wayland itself but by each different desktop environment, through different APIs, since Wayland ditched many X11 (+ standardised extensions) features.


Another commentator noted that X will just become a Wayland client when things even out. I suspect that things won't necessarily work out that cleanly/elegantly, and eventually X11 will be installed on fewer and fewer devices.

Whatever we're left with will create quite an interesting ecosystem; here's hoping it's not too much of a political disaster.

For me, that means hoping Qt keeps up at the end of the day; it's been far superior to GTK in every way IMHO for some time.


Bollocks; the equivalent of opening a window using Win32 in the Linux world is using X11's Xlib (or XCB) API. Equivalent, albeit more brain-damaged.

Wayland doesn't change this. Once Wayland is adopted, the X server will become a Wayland client and X clients will connect to the X server as usual. You don't have to write a native Wayland application if you don't want to.


That's true, yep.

But another commentator noted how Wayland doesn't provide X11-standard functionality (https://news.ycombinator.com/item?id=13346877).

I fear that X11 will eventually be installed (and possibly even available) in fewer and fewer environments, in the long term.

So 10 years from now it'll be interesting to see where things are at. Hopefully things haven't devolved too far.


One can only hope that 10 years from now X has in fact disappeared. My greater worry is that it's still around and Wayland (with the Weston implementation) hasn't yet gained enough traction to become the de facto server. To make matters worse, there's Ubuntu's Mir display server, which seems to have gone silent. This could lead to some nasty fragmentation between distros.


Curious why this was downvoted (it's currently at -1). Happy to hear any explanation or view.


I haven't followed this discussion, so I'm unfamiliar with the validity of the details of the comment and can't speak to the content. The downvotes may be in part due to the rantish nature of part of it, which you were aware of at the time of posting. Some readers may have thought that, since you knew its tone was overly heated, you could have taken the time to express it better. However, this is just speculation, based on behavior I've seen on HN.


Ah, I see. Defining one's argument concretely and concisely is the basis of debating effectively, whereas I've just resorted to anecdote and ranting here. Woops.

Thanks very much for that feedback, I'll keep it in mind.


probably because you called systemd a disaster without qualifying why you think so


Hrm. Okay, let me have a go.

If I had to use one word to describe systemd's integration and adoption into the Linux ecosystem it would have to be "hostile" - the label has unfortunately been applicable in both directions.

Most of the feathers flew around 2012 when the major Linux distributions adopted systemd as their default init system, irreversibly pulling in all of systemd's system management policies as well, many of which were poorly designed.

Several big names in the Linux community (Linus Torvalds and Greg Kroah-Hartman, to name two) have had heated discussions with Lennart Poettering and other people behind systemd about major bugs, design flaws and policy integration issues, with the systemd response consistently being "the way we're doing it is the right way, no patches will be accepted, go away" even when shown multiple times that something contravenes design best practices or tradition (aka principle of least surprise).

For this reason I dislike systemd's highly bureaucratic "manglement" style, and am very sad that all major distributions have adopted it so widely. systemd uses a very dictatorial approach which makes it very very hard to use any other init system without nontrivial and obscure system reconfiguration.

I understand Lennart also built PulseAudio and got it integrated into pretty much all Linux distributions. PA works well now, but if it's having a bad day and I really need sound working in a pinch, I can just kill it and use ALSA/OSS directly.

systemd categorically isn't like that, because it's (ostensibly) an init system. However, it comes with so many extra "side features" (which an increasing number of things are depending on) that temporarily shoving it out of the way became impossible very quickly, and before any real documentation was established. I think it's understandable that a large part of the Linux community has growled and snarled when presented with this set of circumstances.

Nowadays systemd is pretty much part of the woodwork, but the communication and social issues continue.

The first reply to a previous comment I made about systemd was extremely enlightening to read: https://news.ycombinator.com/item?id=12877934


I remember the Fedora 7 (?) days when PulseAudio was made default and nothing worked (silence is golden apparently). I routinely removed PulseAudio from my systems and dropped to ALSA.

I observe that systemd has a plethora of other subsystems, as you mention, including a DHCP server. Yes, a DHCP server.

I do not understand it.

Edit: Yes, I know Fedora 7 was ancient. Just my memories. I think the fact that PA was broken in it got fixed pretty swiftly, from memory. But I was plagued with glitchy audio in releases after this - could have been my incredibly lame hardware at the time (though it worked fine with ALSA).


Right. Wow, Fedora 7 was a little while back.

The problem with systemd's NTP and DHCP and whatnot is that they use their own systemd-specific APIs. Not using the APIs means that you don't talk to those components. And the thing is, if you're on a systemd-based system (which you can generally assume* to be the case now), you can 100% depend on those components absolutely definitely existing, regardless of whatever else is(n't) installed.

(* Unless your users are using Slackware (hi there :D), Devuan or something like that.)

So of course things are beginning to depend on those services' APIs.

Which are exposed via D-Bus. ("Desktop"-Bus. On servers. Facepalm, Inc.)
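To make it concrete, here's a minimal sd-bus sketch of what "depending on those APIs" looks like in C (hostname1 as the example service, since it's one of the small systemd daemons; link with -lsystemd):

    #include <stdio.h>
    #include <stdlib.h>
    #include <systemd/sd-bus.h>

    int main(void) {
        sd_bus *bus = NULL;
        sd_bus_error err = SD_BUS_ERROR_NULL;
        char *hostname = NULL;

        if (sd_bus_open_system(&bus) < 0)
            return EXIT_FAILURE;

        /* Everything is a D-Bus property read or method call,
           even on a headless server with no "desktop" in sight. */
        if (sd_bus_get_property_string(bus,
                "org.freedesktop.hostname1",   /* service   */
                "/org/freedesktop/hostname1",  /* object    */
                "org.freedesktop.hostname1",   /* interface */
                "Hostname", &err, &hostname) >= 0)
            printf("%s\n", hostname);

        free(hostname);
        sd_bus_error_free(&err);
        sd_bus_unref(bus);
        return 0;
    }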

Now, I do understand that when you use systemd-nspawn or LXC or Docker or whatever else you can generally assume that these components will interoperate and that's why they were implemented. That's the theory.

In practice, things... don't work out so well. This was on here a couple days ago: https://thehftguy.com/2016/11/01/docker-in-production-an-his...


Their DNS "client" implementation was a tour de force of NIH wrongs, including screamers like not implementing security functionality that had been commonplace in other implementations for a decade or more.

Damn it, they have a web server in there for the sole reason of displaying a QR code for the initial log signing key. A signing system that apparently Poettering's brother came up with as a doctoral thesis, with systemd-journald being the only implementation (that I know of).

BTW, these days you find dbus inside the initramfs, because systemd needs it to be present during bootstrap. After systemd-pid1 is up, it kills the initramfs version and fires up the one from the HDD instead.

There are times I wonder if the Fedora maintainers grit their teeth and play along with Poettering and crew because they have the same paymasters.


> Their DNS "client" implementation was a tour de force of NIH wrongs, including screamers like not implementing security functionality that had been commonplace in other implementations for a decade or more.

:(

> Damn it, they have a web server in there for the sole reason of displaying a QR code for the initial log signing key. A signing system that apparently Poettering's brother came up with as a doctoral thesis, with systemd-journald being the only implementation (that I know of).

Okay, that I didn't know.

Actually let me read that backwards...

> log signing key

What on earth? Is the log encrypted?

> QR code

How are QR codes relevant to encryption?

> web server

Why do I need a WEB SERVER to display a QR code?! Uh... I can get displaying a QR code on the screen, sure. But... I get the impression you mean the QR code is served over a web server?

Oh. For headless boxes. But... why display a QR code, again? Why not just serve the log signing key itself? QR codes aren't encryption (just a good week's worth of reading on error-correction).

> BTW, these days you find dbus inside the initramfs, because systemd needs it to be present during bootstrap. After systemd-pid1 is up, it kills the initramfs version and fires up the one from the HDD instead.

Mmm. Because all of its APIs are delivered as D-Bus (desktop-bus) services. I totally get that, but... aghhh. Why not even ZeroMQ :(

> There are times I wonder if the Fedora maintainers grit their teeth and play along with Poettering and crew because they have the same paymasters.

Unless things have changed, Linus Torvalds uses Fedora. He's had a lot to say about things.

I would be very very surprised if there wasn't a noteworthy bunch of mental-pitchfork-wielders.


My limited understanding of the whole thing is that journald uses a chain of signatures to verify journal integrity.

Meaning that the first key is used to sign a new key that signs the journal entry and the next key, which signs yet another key and entry, etc. By having the initial key handy, one can at any time walk through the journal to verify that it has not been tampered with.

The whole QR thing is there to allow a would-be admin to quickly transfer the initial key to their smartphone or similar by scanning the code.
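(A toy sketch of the chaining idea, not the real journald algorithm -- hash() and mac() here are hypothetical stand-ins:)

    #include <stddef.h>
    #include <string.h>

    typedef unsigned char fss_key[32];

    void hash(fss_key out, const void *in, size_t len);        /* hypothetical one-way hash */
    void mac(fss_key tag, const fss_key k, const char *entry); /* hypothetical keyed MAC */

    /* Tag an entry with the current epoch key, derive the next key,
       then erase the old one, so an intruder who breaks in later
       can't forge or re-sign earlier entries. */
    void seal_entry(fss_key k, const char *entry, fss_key tag_out) {
        fss_key next;
        mac(tag_out, k, entry);
        hash(next, k, sizeof(fss_key));
        memcpy(k, next, sizeof(fss_key));
        memset(next, 0, sizeof(fss_key));
    }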

As for Torvalds being a Fedora user, my impression is that his usage needs are fairly modest these days. He spends most days reading emails via Gmail and approving commits to the kernel code housed on the kernel.org servers.


I see, interesting. For what it's worth that's pretty cool. I never even thought of the idea of a verifiable system boot log...

It's almost sad systemd has some good points. Heh.

I vaguely recall a video that noted where Torvalds was at nowadays; he seems to mostly be in administration/management now, as opposed to low-level hacking. Must be an interesting position to be in.


Do you understand why the Linux kernel has a network stack and encryption algorithms?

Historically, it's the difference between monolithic vs. micro design. The Linux kernel is not just a layer between the application and the hardware; it also supports a bunch of extra things which the project wants to have built in rather than as optional libraries. There is no TCP/IP or AES library, but there are unsupported alternatives to those that are libraries.

If you wonder why systemd has a DHCP implementation, ask why the TCP/IP stack doesn't.


That is a very good point, and I'd like to hear the arguments for why such capabilities wouldn't be added into the kernel.

They'd probably go along the lines of saying that there are already millions of lines of code in there and adding these types of features would add to the codebase size and permanent maintenance requirements.

But it would be really cool if all of these kinds of high-level features were available, yeah...


You don't want them in the kernel; that makes them much harder to replace/upgrade/tinker with. If anything you want to go the other way and move the TCP stack out of the kernel and into a normal userspace library.


Something that is in the process of happening, iirc.

Linux is turning into something of a hybrid kernel, and perhaps will emerge as a micro-kernel given time.


I read (unfortunately I'm unsure where) that microkernels do have one fundamental issue: having servers do All The Things and then just making a kernel to dispatch calls to those servers falls down horribly if the messaging/dispatch implementation is single-threaded.

And it inevitably always is, since if you're generalizing all system operations onto a single bus, that bus would either need to support some generic form of contextualization hinting or have some kind of theorem-solver-inspired system to determine what requests have no dependencies. I don't suspect Minix incorporates either approach...

The problem I see is the need to put "these are audio frames" in a different queue than "here are filesystem request packets". (Ideally the filesystem queue would itself allow further sharding, since most filesystems are multithreaded now.)

Writing such a generalized queue sounds like a rather fun exercise to me.

That said, if any such implementations are out there or there are any counter-arguments to make to this, I'd love to hear them. I mean, AFAIK Mach is a microkernel, so it's clearly solved some of this.


I think multithreading the messaging/dispatching implementation would add more overhead than it saved. I remember the Hurd's core message-passing routine is 26 assembly instructions - there's simply not a lot of computation involved, and in general not enough data for the message-passing to be the bottleneck - when you're transferring bulk data you'd use shared memory or at least DMA or the like (in a sensible microkernel you just do it; in a super-purist microkernel you'd have a server that owned bulk data buffers and your regular processes would pass handles around rather than actually owning the data, and that's fine too).

If you need a queue with particular properties you write one, as its own userspace process (or system of cooperating processes). The kernel dispatcher isn't assumed to be a fully general messaging system.


Hmm, interesting.

I am wondering about one thing though.

> there's simply not a lot of computation involved

Wow, 26 instructions.

Here's my worst-case scenario: you have 8 concurrent threads (a current reality on POWER8), and let's say all of them are engaged in fetching large amounts of data from different servers - let's say disk and TCP I/O are both servers.

I'm genuinely curious how well a 26-instruction-but-singlethreaded message passing system would hold up. (I honestly don't know.)

Worst case scenario, the cache and branch predictor would perpetually resemble tic-tac-toe after an earthquake.

---

I think it would be genuinely interesting to throw some real-world workloads at Minix, Hurd, etc, and see how they hold up.

Now I'm wondering about ways to preprocess gcc's asm output to add runtime high-resolution function timing information that (eg) just writes elapsed clock ticks to a preallocated memory location (within the kernel)... and then a userspace process to periodically read+flush that area...
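(Roughly this per instrumented function, as a crude first cut -- x86-only, and the allocation of the counter slots is hand-waved:)

    #include <stdint.h>
    #include <x86intrin.h>

    /* One preallocated counter per instrumented function; a userspace
       reader would periodically read and zero these. Not atomic --
       fine for a rough profile, wrong for exact counts. */
    static volatile uint64_t ticks_spent;

    void instrumented_function(void) {
        uint64_t t0 = __rdtsc();
        /* ... original function body ... */
        ticks_spent += __rdtsc() - t0;
    }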


> Here's my worst-case scenario: you have 8 concurrent threads (a current reality on POWER8), and let's say all of them are engaged in fetching large amounts of data from different servers - let's say disk and TCP I/O are both servers.

Speculating: if you were passing all the data in messages, terribly. But that's not how you'd handle it. You'd use messages as a control channel instead, similar to DMA or SIMD instructions. E.g. if you're downloading a file to disk, the browser asks to write a file, the filesystem server does its thing to arrange to have a file and gets a DMA channel from the disk driver server. The TCP layer likewise does its thing and gets a DMA channel from the network card driver, and either the browser or a dedicated bulk-transfer server connects them up. The bulk data should never even hit the processor, let alone the message-passing routines.
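A minimal sketch of how the control/bulk split tends to look: a single-producer/single-consumer ring in shared memory carries the payload, and the kernel message just says "new data is ready" (names and sizes invented here):

    #include <stdatomic.h>
    #include <stdint.h>
    #include <string.h>

    #define SLOTS     64
    #define SLOT_SIZE 4096

    struct ring {
        _Atomic uint32_t head;          /* bumped only by the producer */
        _Atomic uint32_t tail;          /* bumped only by the consumer */
        uint8_t data[SLOTS][SLOT_SIZE]; /* bulk payload; never copied by the kernel */
    };

    /* Producer: claim a slot, fill it, publish it. len must be <= SLOT_SIZE. */
    static int ring_put(struct ring *r, const uint8_t *buf, uint32_t len) {
        uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
        uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
        if (head - tail == SLOTS)
            return -1;  /* full: send a control message, retry later */
        memcpy(r->data[head % SLOTS], buf, len);
        atomic_store_explicit(&r->head, head + 1, memory_order_release);
        return 0;
    }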

> I think it would be genuinely interesting to throw some real-world workloads at Minix, Hurd, etc, and see how they hold up.

Do. Also look at QNX, which is the big commercially successful microkernel.

> Now I'm wondering about ways to preprocess gcc's asm output to add runtime high-resolution function timing information that (eg) just writes elapsed clock ticks to a preallocated memory location (within the kernel)... and then a userspace process to periodically read+flush that area...

I'd look at something along the lines of perf_events ( which I encountered via http://techblog.netflix.com/2015/07/java-in-flames.html ).


Using messages as a control channel sounds awesome, wow.

One of the targets I've been trying to figure out how to hit is making message-passing still work if you're using it in the dumbest way possible, e.g. using the message transport itself to push video frames. I'm slowly reaching the conclusion that while it'll work, it'll just be terrible, like you say.

I mention this because, at the end of the day, most web developers would just blink at you like "DM-what?" if you suggested this idea to them. These kinds of techniques are simply not in widespread use, sadly.

In my own case, I'm not actually sure myself how you use DMA as a streaming transport. I know that it's a way to write into memory locations, but I don't know how you actually take advantage of it at higher levels - do you use a certain bit as a read-and-flush clock bit? Do you split the DMA banks into chunks and round-robin write into each chunk so that the other side can operate as a "chaser"? I'm not experienced with how this kind of thing is done.

Well, workload-testing microkernel OSes is now on my todo list, buried along with "count to infinity twice" :) (I really will try and get to it one day though, it is genuinely interesting)

Regarding QNX, I actually mentioned that to the other person who replied in this thread (https://news.ycombinator.com/item?id=13346822), and I said a few other words about it a couple months ago - https://news.ycombinator.com/item?id=12777520

I really wish the QNX story had gone ever so slightly differently :'(

Regarding perf_events and the linked blog post, thanks for both - this is really interesting!


> In my own case, I'm not actually sure myself how you use DMA as a streaming transport. I know that it's a way to write into memory locations, but I don't know how you actually take advantage of it at higher levels - do you use a certain bit as a read-and-flush clock bit? Do you split the DMA banks into chunks and round-robin write into each chunk so that the other side can operate as a "chaser"? I'm not experienced with how this kind of thing is done.

I don't know enough to answer this stuff - last message was already second-hand info (or worse). All I can say is, best of luck.


I am of limited knowledge regarding micro-kernels.

As I have come to understand it, there is one successful such kernel out there: QNX. And while both the OSX/iOS and Windows NT kernels started out as a micro design, both Apple and Microsoft have been moving things in and out of the kernel proper as they try to balance performance and stability (most famously with Windows, the graphics subsystem).


QNX is such a mixed story of technical ingenuity and frustration.

The OS was cautiously courting a "shared source" model where you could agree to a fairly permissive (but not categorically pure-open-source) license and get access to quite a few components' source code.

It was anybody's guess what might develop from that; an intriguing and hopeful time.

And then BlackBerry came along and bought QNX and killed the shared source initiative. Really mad at BB for deciding to do that.

Nowadays QNX is no longer self-hosting - no more of that cool/characteristic Neutrino GUI anymore :(


Frankly, Poettering reminds me more and more of de Icaza.

In both instances, what the person produces only becomes "stable" after he passes the maintainership to someone else.


(NB - I wasn't dissing Fedora. I was just registering how far back these issues went.)


He's not. Far better to run a tightly configured systemd on an embedded device than thickets of shell scripts. And as he states, it makes modifying device behaviour that much simpler.
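For example, replacing an rc script on such a device can be as small as this (unit name and paths invented for illustration):

    # /etc/systemd/system/watchface.service
    [Unit]
    Description=Watch face UI
    After=graphical.target

    [Service]
    ExecStart=/usr/bin/watchface
    Restart=on-failure

    [Install]
    WantedBy=graphical.target

And tweaking behaviour is an override away (systemctl edit watchface) rather than a scattering of shell-script edits.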


I am honestly not sure...



