The other thing is that UNIX wasn't built at a start-up, it was built at Bell Labs. Google could be argued to be a modern Bell Labs.
Most modern start-ups avoid, at all costs, any systems programming (or doing anything at all beyond using a scripting language to read text from a database and display it on a screen).
Note, however, I said most. There are start-ups solving serious scientific and engineering challenges. It's just that "start-up" does not equate to "lasting technological/scientific contribution".
Particularly given the ridiculous number of ex-Bell-Labs employees at Google. One of the SVPs of Engineering at Google was formerly the VP in charge of the organization that birthed C, C++, UNIX, etc.
I think the key here is to avoid doing any work at all that is not your value proposition. If scripting languages are good enough and will benefit you, use them.
We use Python, Twisted, Django, Matplotlib, jQuery, Movable Type, and PostgreSQL all for this reason. Even our firmware is built as much as possible on a third-party networking library and high-level hardware building blocks.
That's great for you (if it's money you're after), but it does make the start-up fairly uninteresting to someone who's interested in systems-level development (and not just gluing together a product based on work others have done).
Generally, top scientists/engineers are not motivated by money, which means given an offer from a typical ("glue a product together") start-up and Google they'd choose Google, unless they're looking for the "start-up experience".
If you want top technical talent (who will always be able to find fulfilling and challenging work at whatever the current Google-like company is, e.g. SGI in the early 90s, Netscape in the mid 90s, etc.), you have to give them top technical challenges (which is how these companies have themselves been able to compete for talent when they were start-ups).
Yes, that is my frustration also. I would rather be doing low level optimisation work, but I haven't yet found someone who wants to pay me for it.
In the next 6 to 18 months, I am going to have to start swapping out the big building blocks with parts meant to scale. At that point there should be plenty of room for the systems engineers to join me.
That's fine then. It's better to use high-level components to build a product that's correct (i.e. does what customers want it to do, well) and then make it fast than to build a fast but incorrect product.
Be careful, however, with treating scale as equivalent to performance. You have to design for scalability, not optimize for it: given the same algorithm, sorting a list of items in Python might take 50 times as long as doing it in C; yet with an n^2 algorithm instead of an n log(n) one, sorting a 1,000,000-item list would take roughly 50,000 times as long.
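To put the arithmetic behind those numbers in one place (a back-of-the-envelope illustration, not a benchmark; the 50x constant factor is just the figure assumed above):

    import math

    n = 1_000_000

    # Constant-factor penalty: the same O(n log n) sort, assumed ~50x slower per operation.
    c_ops      = n * math.log2(n)   # ~2.0e7 comparisons
    python_ops = 50 * c_ops         # same algorithm, 50x the cost

    # Asymptotic penalty: switching from O(n log n) to O(n^2).
    quadratic_ops = n ** 2          # 1.0e12 comparisons

    print(f"50x constant factor: {python_ops / c_ops:.0f}x more work")
    print(f"n^2 vs n log n:      {quadratic_ops / c_ops:,.0f}x more work")  # ~50,000x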
Please excuse the tone of this post. I have spent a lot of time thinking about potential performance issues, and rationalising my choices.
I'm an ACM ICPC (programming contest) finalist, and have competed in the Google Code Jam, so I know about big-oh.
Anyway, who implements a sorting algorithm in pure Python? I use the built-in sort() function, which is implemented in C (making callbacks into Python for comparisons for some types). It uses Timsort, an approach designed to minimise the number of comparisons.
http://svn.python.org/projects/python/trunk/Objects/listsort...
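E.g., something like this (toy data, just to illustrate the idiom; the comparison loop stays in C and Python is only called once per item for the key):

    # Built-in Timsort; supply a key instead of a comparison callback.
    records = [("carol", 42), ("alice", 17), ("bob", 99)]

    by_score = sorted(records, key=lambda r: r[1])  # new sorted list
    records.sort(key=lambda r: r[0])                # in-place, stable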
I thought you were going to point out that I should ensure that my system is modular (stateless, partitioned, etc) and I can spread the load across several machines - I can.
You are right that I could make a screaming fast bubble sort and still end up with major issues. Fortunately I already have had that epiphany while solving ACM practice problems.
I appreciate your advice, there are far too many who haven't yet understood it.
Sorry if I sounded condescending. I didn't know the specifics of your system (is it mostly on the server side, embedded, etc.?) so I couldn't give you any specific advice.
The sort example was merely a metaphor for design for scalability vs. optimization for performance. Perhaps a better example: dividing the server-side portion of your application into asynchronously invoked services is design for scalability; rewriting some of those services in C or OCaml, switching from JSON to Protocol Buffers, or switching from an HTTP server and a layer-7 load balancer to a custom non-blocking server and ZooKeeper (for cluster membership) are optimizations for performance/stability/cost.
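As a toy sketch of the first half (asyncio purely for brevity, with made-up service names; in a real deployment these would be separate processes behind a load balancer, so they can be scaled or rewritten independently):

    import asyncio

    # Hypothetical service stubs -- stand-ins for network calls to
    # independently deployable services.
    async def fetch_profile(user_id):
        await asyncio.sleep(0.1)
        return {"user": user_id}

    async def fetch_recommendations(user_id):
        await asyncio.sleep(0.1)
        return ["a", "b", "c"]

    async def render_page(user_id):
        # The services are independent, so they can be invoked concurrently
        # and later moved to other machines (or rewritten in C) without
        # changing the caller.
        profile, recs = await asyncio.gather(
            fetch_profile(user_id),
            fetch_recommendations(user_id),
        )
        return {"profile": profile, "recommendations": recs}

    print(asyncio.run(render_page(42)))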
Certainly agreed, it's amazing what they accomplished on such limited machines; it just shouldn't be held up as the pinnacle of engineering or computer science (else, why would the people who built it be working on Plan 9 and Go?).
The best we can do is usually not the goal. Good enough is the goal.
People look at great products and say that sentiment isn't true, but you always have to release. Take the first iPhone. When it came to feature checklists, it was embarrassed by every competitor out there. It nailed the interface though, and that was Apple's definition of "good enough". They shipped, took the industry by storm, and have since added most of those checklist features that everybody thought they needed in the first place.
Were they kids though? I thought they had already grown beards, and some might have had a tiny bit of grey hair here and there...
There may be beautiful hacks in UNIX, but the building principles (the UNIX philosophy) live on, and I don't think kids these days who go straight into start-ups even know these things.