Hacker News | gryan's comments

Cool! What's their idea for the next step? Are they going to try to remove the front wall with the entire painting intact?


There's still tons of data to go through to further confirm the theory. After that... I'm not quite sure. Although it is highly unlikely at this point that they will remove the front wall which in itself is a masterpiece. Any damage would result in a priceless loss.


Beats being swayed by facts.


Arthur C. Clarke: Science fiction writer.

E.W. Dijkstra: Actual computer scientist.


Clarke was no technical slouch! http://lakdiva.org/clarke/1945ww/


So, you're an idiot?


Fortunately, Joel is not the first or last word on any subject.


"For example, you might start at the root resource http://foo.com/api/ . It returns, as content, a set of subresources that can be requested. The API documentation should describe what data the server sends for /api/ -- what format it's in (HTML, XML, JSON, etc), how to find its subresources (<a>, <form>, id'ed elements), and any relationship"

Good luck doing that for "FTP, SSH, whatever"


Don't be so sure that all of Linus' opinions are valid for all areas of programming, or even in all situations within the kernel.


They aren't, but it would be nice to see all his opinions laid out at once so I could absorb the good ones.


This is the perfect attitude with which to face the Internet.


I don't know what the OP's beef is. There's nothing semantically wrong or confusing about this.

However, for simple types like int and float, there's no performance advantage to passing the value as a reference. It generally takes as long to construct the temporary reference as it does to copy the value.

For more complex types like structures or objects, then yeah, you're better off passing by reference.


Yeah, I didn't mean "int" to be the offensive part.

Read my response. I'm not talking about performance; you shouldn't be copying around large structs or strings no matter what. I'm talking about code readability.

In general, your compiler will optimize the crap out of primitive type copies (and most struct copies too, these days) anyway, so it's less of a performance argument than it used to be.


It's unfortunate that Boost is an all-or-nothing set of libraries, and the meta-programming in there can be brutal to compile, so for that kind of platform you'd really need to weigh the pros and cons of what you're using out of Boost vs. compile times.

However, building the library set and examples and tests is a one-time cost for the non-header-only libraries. And you don't need to compile the examples or the tests. Or the various debug-release combinations.

Plus, why wouldn't you cross-compile, anyway? I do lots of embedded sensor work and I wouldn't think of compiling anything directly on the hardware itself.


This was cross-compiled. We had just implemented the full (as opposed to abridged) C++ libraries, so obviously we decided to make sure that our customers could use them fully. Given that this was new support, there was a distinct lack of test cases. Someone suggested that Boost exercised this functionality pretty well (some customers had previously complained that our only supporting the abridged libraries prevented them from using Boost, so this was actually a good idea), and someone decided that I should be in charge of it.

What it amounted to was debugging horrible compiler crashes and broken library behaviour, and spending a long time deciphering huge template names. This caused me to hate C++ more than you can ever imagine. I swear whoever decided to stitch templates together the way Boost does had never had to debug a development compiler; this broke almost every part of the toolchain.

Ugh.


You can compile only what you need. It's much faster that way, but if you do a complete build (debug, static, etc) it will take hours. It's not that bad though and certainly worth it in the end.


C++ isn't without its age spots, but articles like this do nothing to give a balanced view of its usefulness vs. its issues. Many languages have boasted that they would topple C++, and it hasn't happened, mostly because C++ does its job well enough and other languages aren't enough better to make migrating large code bases cost effective.


The article was not intended to give a balanced view of issues vs usefulness. It was attempting to explain why C++ compiles relatively slowly, which is a common question people ask me. Whether or not the speed issues are balanced by other considerations is up to the C++ user.


As far as this specific complaint goes, I've worked on large-scale C++ programs and I've rarely come across anything prohibitive as far as compile times are concerned. Granted, C compiles faster, as do some other languages, but in terms of overall project development time, compiling is dwarfed by requirements, design, coding, testing, collaboration, etc. It doesn't follow that improving compile times would have much overall impact on development.

