
Professional Linux Kernel Architecture has a good introduction to Linux/OS nomenclature.


People who have all this "talent" really just have a drive for something - the talent in x, y, z is a side effect. My drive is for a distributed, VM-based operating system and a language to go with it; as a result I'm versed in mathematics, computer science, etc.


Then, taking it all the way down to the hardware: every user of x86 should be required to agree to a license agreement to use an x86 processor, since it 'interprets' opcodes on the fly, thus everything that uses x86 is under an Intel license, QED. I don't know the full details of the thesis, but using an API doesn't constitute a violation of the GPL v2 - see TiVo and thousands of routers, wireless cards, etc.


There are multiple licenses for the x86 ISA. And companies that produce common chipsets specifically allow open development on top of them. They make more money that way.

This wasn't always the case, though.

It's easy to forget history.


My example may have been flawed (it was meant as exaggeration), but the point still remains that API use is fine under GPL v2.


I assume you have successfully litigated this position in a US court on multiple occasions?


TiVo has released sources for the GPL-covered portions of its software (http://www.tivo.com/linux/index.html).

At least some router manufacturers have also released source code as required by the GPL. For example, see D-Link http://tsd.dlink.com.tw/GPL.asp and Cisco http://homesupport.cisco.com/en-us/gplcodecenter .


TiVo releases what it sees as either not a competitive advantage or a burden to maintain. Just like Nvidia releases the shim but nothing relating to how the hardware works - it's a business; they aren't here to plant daisies in the FOSS fields, they're here to make a bottom line.


Operating system overhead, and more likely a massive number of packets per second, will easily peg a single core. I did some tests with nginx (comparable to node.js) and it easily pegged a dual-CPU quad-core Xeon with 8GB RAM (all 8 CPUs were at 90+%) at a paltry 8055.77 rps over 2 x 10Gbit Ethernet - but then this is more likely an OS / fine-tuning limitation.


Maybe it's just me, but I think this is incredibly stupid - he's abstracting away the issue that Unix isn't distributed, inside a single application. It's like putting $5000 rims and mufflers on a $100 van.


Configuration files need to be static key-value or a relational algebra - you cannot have mutation and branching based on logic within a configuration, or else you introduce issues you wouldn't even want to dream of.
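As a rough illustration (the file name and keys here are hypothetical, not taken from the comment), a config that is nothing but static key-value data can be loaded and inspected without executing any logic:

  require "yaml"

  # Hypothetical settings.yml containing only static key-value data:
  #   host: example.com
  #   port: 8080
  #   workers: 4
  #
  # Because the file has no conditionals, loops, or assignments, it can be
  # diffed, validated, and reasoned about as plain data.
  config = YAML.load_file("settings.yml")

  puts config["host"]     # => "example.com"
  puts config["workers"]  # => 4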


So generating those values in a full-fledged programming language is that much better?


The same can be said for *nixes: in order to create something in sh using programs, you need literally days of reading man pages for switches, etc., or you use perl, python, etc. - but even then many tools don't have bindings, so you're back to using sh.

All operating systems currently popular are crap, except for the most popular one in its usage domain.


This assumes you aren't online, where nearly any shell script example you can think of is likely only a google search away. I don't think that anyone really spends days studying man pages anymore. Though it's nice that they are there when you need to look up something system-specific.


That's like saying language y is fine because I can just search and copy and paste... no, you cannot. If you do anything beyond a simple [[ -f somefile ]] && do-something, sh turns into a nightmare. Case in point: autotools.
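For comparison (purely illustrative - the file name and the "do something" step are made up), here is the same check in a general-purpose scripting language like Ruby, with the kind of error handling that quickly becomes painful to get right in sh:

  # Roughly the equivalent of `[[ -f somefile ]] && do-something`,
  # plus error handling (quoting, exit codes, set -e interactions are
  # where the sh version starts to hurt).
  path = "somefile"

  if File.file?(path)
    begin
      contents = File.read(path)
      puts "Processing #{contents.bytesize} bytes from #{path}"
    rescue => e
      warn "Failed to process #{path}: #{e.message}"
      exit 1
    end
  else
    warn "#{path} not found, skipping"
  end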


Unix is bad, crusty, and getting less relevant as computers become more distributed and less like a single system. Linux may be the top of the herd right now, but sooner rather than later there is going to be a huge paradigm shift toward distributed computing. Linux will have no place in distributed computing without either layers upon layers on top of the POSIX API (aka backwards compatibility) or, well, a new operating system.


I'm wondering why you're saying Linux has no place in distributed computing. Most supercomputers and distributed systems run Linux. I'm not sure what paradigm shifts are going to have a significant impact on the OS; the only changes I've seen lately are in hardware and programming syntax.


Sure they do, but they use extremely specialized software that probably took years to adapt to the platform, let alone develop in the first place. Linux is the top in distributed computing because it filled a niche (free) and has momentum; if BSD, L4, etc. had been created then / hadn't had some sort of issue, we'd all be using kernel x instead of Linux.


What, specifically, about the Linux kernel and Unix structure makes them bad for distributed applications, again?


C, capabilities (or the lack thereof), self-healing, runtime optimizations, concurrency, locking, threading - and just generally the way it's structured, it's not meant to be a distributed operating system, and the applications developed for it show this. Have you seen how GlusterFS, Lustre, or any other distributed FS works? They bend over backwards trying to implement a POSIX API, usually with hideous hacks.


What sort of distribution of computing is going to happen beyond what's already happened?


Simple interfaces tying into a VM for complete distribution of software on any architecture. Concurrent, functional, and slightly object-oriented low-level languages allowing automatic, or at least easier, formal provability. This is all quite similar to Microsoft's Midori / Singularity, from what I can read about it anyway... but I'm sure Microsoft will screw it up somehow and cripple it.


Get a hardware book and read up on how computers actually work. This will help an immeasurable amount in understanding why C is the way it is.


I recently ditched Rails for Sinatra / Padrino (mostly Sinatra) because, if you want to do something off the beaten path, even in Rails 3 it's a complete crapshoot trying to tie different pieces together, since it seems people only work with the Rails-specific active_* parts. You spend more time wrestling libraries in Rails 2-3 than actually doing work.


Not to take anything away from those microframeworks (they are awesome), but are you sure you're giving Rails 3 a chance?

After all, they brought the Merb devs into Rails core and radically re-architected the whole framework to support modularity and customization down to a very low level. From the extraction of ActiveModel, to the inclusion of Thor for generators, the decoupling of Prototype and an ambitious attempt at unobtrusive JavaScript, and the declaration of a publicly supported API upon which even the framework itself is built, Rails 3 is the biggest overhaul Rails has ever had.

Have they succeeded in making it truly modular? I dunno, it's not released yet for one thing, but to say it's a crap shoot is a bit premature IMO.


Yes, I'm also using Sinatra - I love how clean and simple it is. I was trying to decide which gem to use for authentication, but couldn't find one that was simple enough (I'm fairly stupid), so I rolled my own (roughly along the lines of the sketch at the end of this comment). Sorry, this is more of a ramble; I just really like Sinatra - there's no Sinatra-language to learn, as there is with Rails.

I first used Rails very early on - late 2003, no... maybe it was early 2004. Anyway, I didn't like it then, but it did save my ass in getting my college project done.
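Roughly the kind of thing a hand-rolled version can look like - a minimal sketch only, with hypothetical routes and hard-coded credentials (not the actual code mentioned above, and not something to ship as-is):

  require "sinatra"

  enable :sessions

  # Hypothetical single-user credentials, for illustration only.
  USERNAME = "admin"
  PASSWORD = "secret"

  helpers do
    def logged_in?
      session[:user] == USERNAME
    end

    def require_login!
      redirect "/login" unless logged_in?
    end
  end

  get "/login" do
    '<form method="post" action="/login">
       <input name="username"> <input name="password" type="password">
       <input type="submit" value="Log in">
     </form>'
  end

  post "/login" do
    if params[:username] == USERNAME && params[:password] == PASSWORD
      session[:user] = params[:username]
      redirect "/"
    else
      redirect "/login"
    end
  end

  get "/" do
    require_login!
    "Hello, #{session[:user]}"
  end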


Thank you. I investigated Padrino because of your comment.

For those of us who prefer to avoid thick APIs yet appreciate Sinatra's DSL, Padrino looks very promising.

