I managed to get to about 30 seconds with 65,000 files on a local ext3 file system. The file names were all the same length and shared an identical ~100-character prefix. I re-mounted the file system before running ls to eliminate caching.
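Roughly how the test set was laid out -- this is a sketch rather than my exact script, and the directory, prefix, and count below are placeholders:

    # Sketch: 65,000 files, identical length, shared ~100-character prefix.
    use strict;
    use warnings;

    my $dir    = "/mnt/ext3test";    # assumed mount point
    my $prefix = "x" x 100;          # ~100-character identical prefix
    for my $i (0 .. 64_999) {
        my $name = sprintf("%s%05d", $prefix, $i);   # same length every time
        open(my $fh, ">", "$dir/$name") or die "$dir/$name: $!";
        close($fh);
    }
    # Then remount to drop the caches and time the listing:
    #   umount /mnt/ext3test && mount /mnt/ext3test
    #   time ls /mnt/ext3test > /dev/null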
I noticed that they are using a CDN from day one. What's the rationale behind it? How much complexity does the CDN integration add to the code and procedures? Is the number of images they load an important factor here? I think they are loading only a few images of their own. (I'm not criticizing, just asking.)
Actually, I was just bored. I meant to take it out but forgot about it... I was trying to get a high score in YSlow ;-).
If you look at the CSS, it still references the non-CDN images, and Safari is actually somewhat stupid -- it loads both sets of images even though the CDN images override them. The homepage actually loads slower with the CDN images (but I get a higher score on YSlow :-P )
The only thing remotely html5-ey about it is the use of <!doctype html>, which sane people have been using for years. I don't think he's using any features less than a decade old!
Isn't that borderline phishing, seriously? I'm thinking of a naive visitor who doesn't have an account with Plurchase. If s/he now tries to log in to Zappos, Plurchase sees and handles her/his username & password. This is a simple man-in-the-middle attack.
It is somewhat man-in-the-middle, but not much different from browser-side plugins... some browser-side plugins load external JavaScript into your web pages, so it wouldn't be significantly different. Wesabe.com (a personal finance site like Mint) allows you to enter your financial data directly into their system, and also provides a Firefox plugin that will fetch that financial data for you, should you be worried about Wesabe's security. However, few people have used their Firefox plugin for this.
There is a huge difference. I'm not bothered one little bit about logged-in / opt-in users or those who chose to install a plug-in. My concern is specifically with a visitor who did not register and has no clue what's going on.
The lazy function technique can easily become an example of bad coding practice. Code needs to be easy to read, understand, and maintain. In most cases solution 2 is good enough, and in all cases it's much simpler.
Advanced techniques should only be used when there is a clear justification for sacrificing simplicity. It is disastrous when developers use so-called "advanced" techniques only to entertain themselves and show off.
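To be concrete about what I mean: the pattern looks roughly like this (sketched in Perl purely for illustration -- not the article's code, and build_expensive_table is just a stand-in), next to the plain cached version I'd usually prefer:

    use strict;
    use warnings;

    # Hypothetical one-time setup standing in for whatever is expensive.
    sub build_expensive_table { return (foo => 1, bar => 2) }

    # The lazy-function idea: the sub replaces itself after the first call,
    # so the setup runs exactly once and later calls skip the check entirely.
    my $lookup;
    $lookup = sub {
        my %table = build_expensive_table();
        $lookup = sub { $table{ $_[0] } };   # self-replacement
        return $lookup->(@_);
    };

    # The plainer alternative: cache the result and test it on each call.
    my %table_cache;
    sub lookup_simple {
        %table_cache = build_expensive_table() unless %table_cache;
        return $table_cache{ $_[0] };
    }

    print $lookup->("foo"), " ", lookup_simple("bar"), "\n";   # 1 2

Both do the same job; the second one is the kind of code the next maintainer can read without a double take.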
You are judged not by the niftiness of your code, but by the availability, maintainability, stability, and functionality of your system as a whole.
Code needs to produce predictable results without bugs. All other qualities are secondary. Good code without bugs is well encapsulated and should work as a black box. Elegance is secondary to reliability. I am not saying that it is not important, just not as important as reliability. As a general rule of thumb, fewer lines of code translate into higher reliability. I know that does not apply in every case, but it is a good starting point.
Coming up with the right name for a variable or a function can easily become the most difficult part of my day. I've always wanted to open a variable-and-function-naming consultancy.
If you get stuck, just do the Objective-C thing:
aInt, anArray, aXMLFile, etc. Still fairly generic, but useful and easy to read (vs. single-letter variables or random things). Worst comes to worst, you can also just make a sentence out of it.
I wonder how many files with a name containing \n are in the wild. And if you encounter such a file, does it really matter if you accidentally delete it?
Wasting brain cycles to save CPU cycles (in the wrong places) is shameful, insulting, and boring.
It would be amazing if there weren't tons of other, much more significant things that deserved attention in the relevant script, system, or the developer's life. (sigh)
The problem is the file you accidentally delete (or whatever) is probably not the one with the weird name. Most bogus filenames I've seen were created by shell scripts that had a set of commands that was accidentally quoted instead of executed, so they tend to contain the names of files which were important to the author.
I've seen some ideas thrown around about a (Linux) kernel patch to disallow filenames that contain unprintable characters. I approve of them.
With regard to why this matters, it's not uncommon for badly written scripts to interpolate filenames directly into commands. Consider this Perl fragment:
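    # (Illustrative sketch of the kind of fragment being described --
    #  the original isn't reproduced here.)
    my @files = glob("*");      # hypothetical list of filenames
    foreach my $file (@files) {
        system("rm $file");     # filename interpolated straight into a shell command
    }
    # A name containing a space, newline, or ";" turns into extra shell words
    # or even extra commands. Calling unlink($file), or system("rm", "--", $file),
    # never hands the name to a shell at all.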
*And if you encounter such a file, does it really matter if you accidentally delete it?*
It matters. Leaving a bug around is disgusting and wasteful, especially when you can fix it.
*It would be amazing if there weren't tons of other, much more significant things that deserved attention in the relevant script, system, or the developer's life. (sigh)*
Fixing bugs and writing programs that have as few bugs as possible is very important.
When you find a bug, by all means, go ahead and fix it. (Well, not always. You need to make sure your fix is not likely to introduce a more severe bug. Not all bugs are made equal, and some gut feeling is required.)
If your goal is to make your system as robust as engineering allows, you should most likely pay more attention in some directions and less in others. It's not only about quantity - it's better to have no critical bugs than only a very few critical ones.
Not sure. I'm OK with being downvoted, but I think this is the kind of honorifics that leads to the quality of the site going down. Homogeneous opinions and comments. Look how great and special we are! Aren't we all just like each other? :)
> It’s a mystery to me why more organizations don’t hide what their people are doing online (ask any 12 year old computer enthusiast how this is done if you don’t know), but for whatever reason, many of them don’t.