On a related note, I would be interested to hear what parts of your company's product you have opened.
Funny you asked. I happened to work for a startup that got acquired by a big & evil corporation that specifically prohibited our engineers not only from opening our code, but even from contributing to existing OSS projects, which partly sparked my original post.
BTW, there is only one item on your list that qualifies as a valid answer to my question (for obvious reasons) and that is webmin.com, which is by far my favorite YC startup.
Yeah. Every time someone tries to make a case for memcached, they bring up Facebook and LiveJournal.
Well guess what: you aren't Facebook. You don't serve 21,000 requests per second. And you never will. Go and look at the size of your MySQL tables, then look at how much RAM your server has, what percentage of it is allocated to the filesystem cache, and how many real I/O reads/writes your hardware RAID does per second. Then come back and tell me again why "we need to switch to memcached".
That's because our system somehow rewards real estate speculators, mortgage brokers, middle management, lawyers and healthcare professionals way more than it rewards engineers. I don't really know why this is happening, but I am sure you all have met a few MBA types who have "the vision" and are looking for "just" a code monkey to build their fortune. This is also the reason why GE/Ford/Chrysler continued building shit for decades, giving out multi-million dollar bonuses to their top-tier management.
Someone suggested it's because we live in a "services economy", not a "product economy", and, therefore, the value of engineering is declining. There is some truth in that: Google isn't an engineering firm, they are in the entertainment business, very much like ESPN, Fox and those annoying guys in big hats at your local Tres Amigos.
That's funny, it's a routine complaint at Stanford that all the campus job fairs are filled with companies recruiting engineers and nobody else. :p
Our system also rewards engineers way more than teachers, graduates who go into public service, and-- at graduation time, at least-- anyone with a humanities degree. I'd worry about fixing compensation for teachers before rewarding engineers...
Google is an engineering firm, they just happen to make money from ads.
Your doubling of teacher salaries to account for tenure is questionable. The reason tenured professors don't quit is only partly tenure; it's also that they find being a professor more interesting than the other work available to them. Regular (grade-school to high-school) teachers don't exactly have tenure (though for those in most unions it comes close), and they probably don't have the same employment options outside the profession.
It's certainly questionable; I didn't mean to imply it was more than a rough guess. It's true that professor jobs at research universities have an "interestingness" premium, so part of the multiple comes from that.
On the other hand, lack of external employment options makes tenure MORE valuable, since it raises the risk premium the teacher would demand to quit the tenured job.
Since my financial math is rusty, I won't try to compute the actuarial values of these revenue streams, however.
Most teachers are not professors. Most teachers work for public schools in the K-12 arena. Many times tenure doesn't work out.
Teachers rarely get overtime pay, but overtime is necessary. The best teachers I know, and I know about 20, work 7-to-7. They work during the summer to set up curricula, tutor summer-school students, select books, and set up their classrooms. And they do all this without any pay beyond their salary. They also supplement poor school budgets with their own money to buy the supplies students need to learn.
They are generally pressured to pay into a teachers' union that supports the misfits in their profession and doesn't generally help them at all. These same awesome teachers are paid solely on the number of years they have been working -- not (at all) by performance, skill, or student/parent reviews. This translates into little reward for awesome teachers over the not-so-awesome 8:15-to-3pm teachers.
I am not sure why you are bringing enterprise software into the discussion. Despite being the most popular form of employment for "programmers", it has always been the absolute lowest form of life in the software ecosystem and, therefore, should be ignored and left out of any intelligent discussion about programming.
In the 90s it was Visual Basic, now it's Java and C#, but it has absolutely nothing to do with what most of us consider discussion-worthy.
So? Despite being useful, what's so interesting about CRUD programming for the enterprise? I've been there; I've had my share of coworkers who openly admit that they haven't touched a single book since graduating 10 years ago and see no reason why they ever would.
That's the kind of programmer this industry attracts, and that's the kind of software it builds. What's so interesting about it? Why even bother mentioning these numerous Java/C# jobs? My ex-wife, with zero programming experience, trained herself in less than a month to run simple SQL queries from Visual Basic and blast the results into a grid control on a form, as did thousands of ex-taxi drivers in the late 90s. So?
> My ex-wife, with zero programming experience, trained herself in less than a month to run simple SQL queries from Visual Basic and blast the results into a grid control on a form, as did thousands of ex-taxi drivers in the late 90s. So?
To be honest, I'm still not sure what point you're trying to make. At best it's somewhat of an ad hominem attack.
"People often write in higher level languages because they want lots of bad code fast."
Even if this is true, it's true only in an enterprise software environment, where code quality has never been terribly important. Therefore it can't really be an argument against higher-level languages in the context of a typical HN discussion.
I'm in no way trolling here, but if you think about it, most of the guys working on startups are also producing lots of bad code fast. The only difference is that most of them are aware of it, and will improve the code based on the requirements from the market.
I think how interesting it is depends on your interest.
If you are heavily into programming, as it seems you are, I can understand that SQL queries seem trivial and boring. If your interest lies elsewhere, programming is often just a tool used to create the interesting stuff, whether that's usability, design, getting people to interact in new ways, launching rockets, or automatically turning off your garden lights when your computer powers down.
Believe it or not there are interesting discussions to be had about other things than programming.
Erm, not really. Lots of the core bits of the OS are open-sourced as well; Darwin is open source (and loosely based on NeXT, which isn't that closely related to BSD). The kernel is open source, as are Bonjour, launchd, WebKit, and so on. Apple contributes a lot to open source. Oh, and the most recent addition: http://www.opensource.apple.com/darwinsource/10.5.5/autozone... - the ObjC GC.
The biggest part of the OS that isn't open source is Aqua. But, really, that's just the UI. And I'm always hearing about how good Compiz Fusion is supposed to be. So, shrug.
You said "not really" and then agreed with me. Moreover, you only confirmed my theory that when people say "open source" it usually means commercial companies milking OSS developers, or releasing their code "contributions" for products that have no commercial value to them (Obj-C, Chrome).
Where is the code for Aperture? Final Cut Pro? Numbers/Pages? Even iTunes? I also want to see the code for Google's PageRank, BigTable and Gmail.
I also want to see Scribd's code for converting MS Office documents to iPaper. 90% of their solution consists of OpenOffice code that they took freely without releasing theirs in return.
I was trying to say that a significant portion of the OS is open-source - not just GPL'd. It's hardly "milking the community" if a good portion is returned in a similar manner.
I wouldn't say that WebKit is worthless; it's the primary rendering engine behind several browsers, which are gaining in popularity due to the success of OSS in general.
If the web were so worthless, MS wouldn't have bothered with IE and Netscape wouldn't have open-sourced the code for Mozilla. Nor would a majority of the HN startups exist, for that matter.
I don't understand the disconnect between the licensing of the OSS projects in question and what you're saying, since I have yet to see Apple and others flagrantly ignore licensing terms. If this is such a big problem, change the license.
Besides, I can think of lots of projects where employees at corporations and "commercial companies" are being paid to work on OSS projects. I frankly doubt that OSS would have gone this far without such people.
Hm... I get the exact same stripes about twice a month, usually when it gets really hot, most commonly when watching a movie via Hulu or iTunes. But it always goes away if I turn it off and on. I never bothered going to an Apple store because I couldn't reproduce it reliably. Perhaps I should. Has anyone here tried to get a similarly unreliable problem fixed?
Makes perfect sense: they've clearly separated traditions and habits from religion.
The tradition of getting married in church is no different from a habit of screaming "jesus motherfucking christ!" when faced with a scary chance of seeing Texas Tech playing in a national championship game: no reason to call someone religious on both grounds.
OK, but you can't "identify yourself as Christian" and then not believe in the existence of God. That's mainly my confusion; maybe it's a poor choice of words by the writer.
Again, these things are the kinds of definitions that are fuzzy.
People identify themselves (& others) as Jews regardless of their beliefs. In areas where religion has become the current tribal banner such as N. Ireland, India, Iraq, etc., people do something similar.
Often they mean association with a group. Sometimes they mean ancestry/heritage. 'Identify as a Christian' tends to mean different things in different places.
I'd say that in Sweden "identifying" oneself as Christian should be interpreted as having a value-system based on the traditional Christian one.
What would be the alternative: identifying oneself as an atheist? Where non-belief is overwhelmingly common there is little need to spell out your non-belief (that's taken almost for granted), and the focus is instead put on which cultural aspects you feel in line with and which you don't. Although people don't believe in God, they still acknowledge that the Scandinavian cultures are heavily influenced by Christianity.
prospero, I have been looking for a C++ replacement for desktop programs non-stop for at least 5 years and found nothing. Rubies, lisps and pythons are great in the little dark corner of server-side development where everything is under your control, but for targeting thousands of varying desktops you just can't take that route, since many additional variables come into play: download size, performance, startup times, Windows compatibility, etc. Plus, all VM-based languages suffer from so-so integration with the native OS and/or excessive weight. You can write an IDE or an Excel replacement in them, sure, but most desktop software tends to be much smaller.
My last C++ project was a background daemon that would download your RSS feeds and do some fancy parsing. I wish I could show you how much memory the Java prototype ate vs. the C++ version processing the same OPML file.
No, C++ is not "essentially worthless" without Boost. In fact, I never use all of Boost: your build times become the #1 conversation topic in the office when you do. It's better to cherry-pick a few important and lightweight pieces (like the smart pointers, any, function, etc.). Don't forget the HUGE selection of plain C libraries, either.
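For illustration, here's the kind of cherry-picking I mean -- a minimal sketch (the Feed struct and names are mine, purely hypothetical) that pulls in only shared_ptr, function/bind and any, none of which hurt compile times much:

    #include <boost/shared_ptr.hpp>
    #include <boost/any.hpp>
    #include <boost/function.hpp>
    #include <boost/bind.hpp>
    #include <iostream>
    #include <string>
    #include <vector>

    struct Feed {
        std::string url;
        explicit Feed(const std::string& u) : url(u) {}
        void fetch() { std::cout << "fetching " << url << "\n"; }
    };

    int main() {
        // shared_ptr: scoped ownership without a garbage collector
        boost::shared_ptr<Feed> feed(new Feed("http://example.com/rss"));

        // function + bind: higher-order callbacks without hand-rolled functors
        boost::function<void()> task = boost::bind(&Feed::fetch, feed);
        task();

        // any: a type-safe grab bag for loosely typed settings
        std::vector<boost::any> settings;
        settings.push_back(std::string("refresh"));
        settings.push_back(15);
        std::cout << boost::any_cast<int>(settings[1]) << " minutes\n";
        return 0;
    }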
Whatever C++ has been used for is still being written and rewritten in C++ as we speak. Newer languages and paradigms wiped out the giant Visual Basic army of developers (yes, that is what people used to code the "Basecamps of the 90s" in), but I don't see these languages threatening C++/C/Obj-C domination of desktop software on Win/Linux/OSX.
In fact, how many native Python or Ruby GUI libraries are out there? I haven't heard of one. Only bindings to GTK/Qt.
With all that said, I will be the first to jump on the bandwagon. If I ever get so lucky.
If you have a small, well-scoped application, like your RSS daemon, C++ is going to be best of breed in most cases. Similarly, if performance is your absolute primary concern, there's nothing out there that can compete.
For most desktop software, though, neither of these things are true. The design and scope of the application are constantly changing, and mostly it just sits there. I write desktop software for a living, including a lot of CPU-intensive graphics stuff, and I'm infinitely grateful that I work in C# rather than C++. It makes me many times more productive, makes my work conceptually cleaner, and makes me a happier person all round. There's a memory cost, to be sure, but it's acceptable, and 90% of the clock cycles are spent in C libraries no matter what, so I don't see much of a downside.
My background's in 3D graphics and computational geometry. I've always enjoyed working in C++ on computationally intensive problems, and reducing a problem to its cleanest, fastest implementation. But I think that in the real world, 99% of the time, what really matters are the features, and that the widespread fascination with efficiency is mostly just techno-fetishism.
You're lucky: I also do desktop software, but we can't afford to ignore the growing number of Mac users. While they still represent only about 8% of all computer purchases, they account for nearly half of "customers who pay for software", and C# isn't an option for us, although we keep monitoring Mono's progress.
Also, allow me to disagree with you on something:
"If you have a small, well-scoped application, like your RSS daemon, C++ is going to be best of breed in most cases. Similarly, if performance is your absolute primary concern, there's nothing out there that can compete.
For most desktop software, though, neither of these things are true."
Perhaps it's the difference in our backgrounds, but most desktop software is exactly like that: small pieces that need small downloads, small memory footprints and instantaneous startup times. Just count the number of executable files on your hard drive and see what percentage of them eats more than 3MB of RAM (measured in 'private bytes'). Or look at the list of running processes: you'll see perhaps 1-3 behemoths like Firefox or Photoshop there, and one can only wonder what Firefox's memory consumption would look like under the JVM or the .NET VM.
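For what it's worth, here's roughly how I eyeball this on Windows -- a minimal sketch (my own, not from anyone's product) that walks the process list with the PSAPI calls EnumProcesses and GetProcessMemoryInfo and prints each process's private bytes:

    #include <windows.h>
    #include <psapi.h>
    #include <cstdio>
    #pragma comment(lib, "psapi.lib")  // MSVC shortcut; otherwise add psapi.lib to the linker

    int main() {
        DWORD pids[1024], returned;
        if (!EnumProcesses(pids, sizeof(pids), &returned)) return 1;

        for (DWORD i = 0; i < returned / sizeof(DWORD); ++i) {
            HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                                   FALSE, pids[i]);
            if (!h) continue;  // system processes we can't open: skip them

            char name[MAX_PATH] = "<unknown>";
            GetModuleBaseNameA(h, NULL, name, sizeof(name));

            PROCESS_MEMORY_COUNTERS_EX pmc;
            if (GetProcessMemoryInfo(h, (PROCESS_MEMORY_COUNTERS*)&pmc, sizeof(pmc)))
                printf("%-32s %8lu KB private\n",
                       name, (unsigned long)(pmc.PrivateUsage / 1024));
            CloseHandle(h);
        }
        return 0;
    }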
I guess it heavily depends on the market segment, but for "family computer software" it is true: lots of PC users play with free/evaluation versions, but when it comes to premium subscriptions and premium features, most of them evaporate, whereas Mac users like to stick around.
Look at the various PC/Mac software out there: most PC-only software belongs to the "free & shitty" category, but nearly everything for OSX you have to pay for, and the users are accustomed to it.
I am not sure if I can post results of the research my company has done (and paid for) here.
I write Windows applications. C# has Win32 bindings that are vastly superior to anything else out there. There are tools designed to let me leverage these capabilities that are much more polished than the comparable tools for Python, Ruby, or anything else.
C# isn't the best language out there, and certainly not the most elegant, but it's been carefully designed by some very smart people, and it shows. At the end of the day, for what I do, it's the best tool available.
My point was more philosophical, I guess. Obviously if you're doing Windows desktop apps you use whatever Visual Studio gives you. I'm just pro-dynamic language. I also like C++ and fail to see Java or C# as sufficiently improved over C++.
I like dynamic languages too, and often wish C# was looser in its typing (or at least less verbose in its typecasting). If it were practical to use Ruby/C to do my job, I probably would.
That being said, C# is a huge step above C++, at least with respect to application development. Garbage collection aside, it has first-class functions, well-implemented closures, an integrated query DSL, and decent (if a little awkward) reflection capabilities. It's not an exciting language, and I don't feel particularly cool saying I use it, but I think it gets a pretty bum rap.
I'm aware of the feature set; I just don't think it buys you that much over modern C++ (different story six years ago). If you're using Boost correctly, garbage collection isn't necessary, and the higher-order stuff is quite doable.
I have dealt with Java memory problems several times now and am convinced that garbage collection has no business near performance bottleneck code.
C++ makes programming slow in many ways. You have to specify (and maintain) every function signature twice. Compile times are measured in minutes for any substantial code base, unless you spend considerable time decoupling layers of headers, which slows programming down even more. Finding bugs is much more difficult without a stack trace, etc.
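Here's what I mean by "twice" -- a tiny sketch (FeedParser is a hypothetical name) showing the same signature maintained in the header and again in the .cpp file:

    // feed_parser.h
    #ifndef FEED_PARSER_H
    #define FEED_PARSER_H
    #include <string>
    #include <vector>

    class FeedParser {
    public:
        // declared here...
        std::vector<std::string> parse(const std::string& opml, bool strict);
    };
    #endif

    // feed_parser.cpp
    #include "feed_parser.h"

    // ...and spelled out again here; change one and forget the other,
    // and you get a linker error (or a silent overload) instead of help.
    std::vector<std::string> FeedParser::parse(const std::string& opml,
                                               bool strict) {
        std::vector<std::string> urls;
        // parsing elided
        (void)opml; (void)strict;
        return urls;
    }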
I'm not saying C++ shouldn't be used: if it really adds to the quality of the result, the additional effort may be worth it. It's a balance between the quality of the product and time to market.
There's one more issue that shouldn't play a role but does: I have rarely seen C++ code that isn't full of incredibly stupid errors, like copying a large collection three times in a single function call. Even smart, professional programmers very often write C++ code that is complete crap and way slower than the most naive C# or Java implementation.
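To make the "three copies" point concrete, a minimal sketch (my own illustration) of the classic blunder in pre-C++11 code, plus the fix:

    #include <cstddef>
    #include <vector>

    // BAD: pass by value copies 'data' in; returning it copies it again
    // (NRVO doesn't apply to parameters), and the initialization at the
    // call site may add a third copy, depending on the compiler's elision.
    std::vector<int> doubled(std::vector<int> data) {
        for (std::size_t i = 0; i < data.size(); ++i) data[i] *= 2;
        return data;
    }

    // BETTER: const reference in, caller-owned buffer out -- one deliberate copy
    void doubled(const std::vector<int>& data, std::vector<int>& out) {
        out = data;
        for (std::size_t i = 0; i < out.size(); ++i) out[i] *= 2;
    }

    int main() {
        std::vector<int> big(10 * 1000 * 1000, 1);  // ~40MB of ints
        std::vector<int> twice = doubled(big);      // up to three full copies
        std::vector<int> twice2;
        doubled(big, twice2);                       // exactly one copy
        return twice.size() == twice2.size() ? 0 : 1;
    }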
But as someone who writes very data-intensive apps, I have to say there's just no way around C/C++. C# and Java VMs have twice the memory consumption of C/C++. I don't know whether C code uses more memory than assembly, but I doubt that a factor of two will ever be negligible, particularly because it translates into a factor of 100 as soon as you run out of memory and have to hit the disk.
Unfortunately the combination of C/C++ and dynamic languages doesn't work for many of my own scenarios because of the GIL.
I mean, look at Rails (just as an example). A typical Rails app runs in 16 processes, each consuming 150MB of RAM (after running for a while) for basically nothing but the framework itself and a few controllers and views: that's about 2.4GB before your own code does anything. Nothing but little helpers doing CRUD.
There's just no memory left to do anything interesting with data in memory, because all that data would have to be duplicated in each process. That's why dynamic languages driving a C++ core don't really fly for interactive apps. It works for batch processes, like what Google does.