Whether this is a good interview question I don't know, but:

> When the hell would you ever need to know that?

I find this such a strange perspective. You're writing code for a computer. Sometimes you need to estimate how quickly it should run, or else estimate how quickly it could run with optimal code. Surely it's obvious that knowing how fast your computer does stuff, at least within 3 orders of magnitude, is a necessary ingredient to make that estimate.
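
For instance, here is a rough back-of-the-envelope sketch (plain Python; the clock rate and instructions-per-cycle below are assumed ballpark figures, not measurements):

    # Rough sketch: how much work can one core do per second?
    # All figures are assumed ballpark values, not measurements.
    clock_hz = 3e9               # assume a ~3 GHz core
    instructions_per_cycle = 2   # assume a modest superscalar IPC

    simple_ops_per_sec = clock_hz * instructions_per_cycle
    print(f"~{simple_ops_per_sec:.0e} simple ops/sec per core")   # ~6e+09

    # So summing a 100-million-element array should be on the order of:
    n = 100_000_000
    print(f"~{n / simple_ops_per_sec * 1000:.0f} ms")             # ~17 ms

If a loop like that takes seconds instead of tens of milliseconds, that's the signal to go looking for what's wrong.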

And yet in your mind that fundamental fact is "useless trivia." Very odd.




I see your point, but hardly anyone counts "operations" anymore. Besides, what even is an "operation" on a modern CPU? You could say an operation is an entire CPU instruction, but the truth is that the instructions you see in assembly files are not really that atomic. So do you count each micro-op as an "operation"? Even then the question is impossible to answer, because execution time varies significantly depending on the type of workload. For example, straight-up math on registers will always be faster than reading/writing memory, and reading memory in a cache-friendly way will be faster than jumping all over the place, etc.
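
To put rough numbers on "it depends on the workload" (Python; the clock rate and latency figures are assumed ballpark values in the usual latency-numbers-every-programmer-should-know range):

    # Rough sketch of how "operations per second" swings with the workload.
    # All latency/throughput figures are assumed ballpark values.
    CLOCK_HZ = 3e9        # assume a ~3 GHz core
    L1_HIT_NS = 1         # assume ~1 ns per L1 cache hit
    DRAM_MISS_NS = 100    # assume ~100 ns per random main-memory access

    rates = {
        "register math":           CLOCK_HZ * 2,         # a couple of ALU ops per cycle
        "cache-friendly reads":    1e9 / L1_HIT_NS,      # dependent L1 hits
        "pointer-chasing in DRAM": 1e9 / DRAM_MISS_NS,   # dependent cache misses
    }
    for name, per_sec in rates.items():
        print(f"{name:>24}: ~{per_sec:.0e} ops/sec")

That's roughly three orders of magnitude between the best and worst case, which is why "how many operations per second" has no single answer.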

Anyway, my answer to the question would be "a lot". You need to know the specifics of the workload and the CPU to be more precise.


I agree that it is "useless trivia."

The reason is that things today are "fast enough." These days most slowdowns aren't the result of the CPU not executing instructions fast enough. Other factors dominate, such as memory access patterns, network delays, and interfacing with other complex software such as databases.

Unless you are writing compute-heavy code, the CPU's raw speed isn't much of a factor in estimating how fast the program will run.


I'm not infrequently surprised by how often people fail to spot that something is orders of magnitude slower than it should be, or pick the wrong architecture, because they can't or won't do the simple mental math to work out very roughly how long it should take to move some data around in memory, on an SSD, or over the network, or to perform some simple computation on it.
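
The kind of mental math I mean, as a rough sketch (Python; the bandwidth figures are assumed ballpark numbers for commodity hardware):

    # Rough sketch: how long should moving 1 GB take?
    # Bandwidth figures are assumed ballpark values for commodity hardware.
    GB = 1e9
    bandwidth_bytes_per_sec = {
        "RAM (sequential copy)": 10e9,   # assume ~10 GB/s
        "NVMe SSD":               2e9,   # assume ~2 GB/s
        "1 Gbit/s network":     125e6,   # 1 Gbit/s is ~125 MB/s
    }
    for medium, bps in bandwidth_bytes_per_sec.items():
        print(f"{medium:>22}: ~{GB / bps:.1f} s per GB")

If a job that shuffles a few gigabytes takes hours, something in the middle is off by orders of magnitude.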

I'm having trouble believing that people who think a CPU can do thousands of instructions per second will do well at this (or at reasoning about the memory hierarchy).


I don't think there's much predictive power there. The people who answer "thousands" clearly just haven't thought about it before and are being put on the spot. Humans are just terrible with big numbers, and thousands already sounds like a lot.

You may as well ask any other sort of technical trivia question and figure that the people who happen to carry around more random facts about tech are more likely to understand the bigger things that do matter. It isn't necessarily wrong, but it's a pretty obtuse way to make a judgment. Why not just ask them about the memory hierarchy or network delays or whatever directly?


Unless you are writing assembler, the speed of your code is very far removed from individual instructions. You don't count instructions when estimating speed. Not in higher-level languages, not when working with a database, and definitely not in something like JavaScript running in the browser.


What is the clock speed of your iPhone or android phone?

I personally have no idea and it's the one computer I use most frequently.


My phone, and possibly yours too, is something like 1.2 GHz per core with 4 cores; most phones have a similar spec (and those numbers are going up, too). Basically, figure 1-2 GHz per core and 2-4 cores generally.

That said, when I started using computers, my first machine ran at 897 kHz on an 8-bit CPU, and had 16K of RAM. I was 11 years old, and this was a pretty standard machine for the time, unless you had real money (and even there, on the top end - not counting minicomputers or mainframes - you'd be lucky to break 8 MHz and a meg of RAM).

But I know I am of a different era. And honestly, I've stopped caring about CPU speeds too, because lately they don't change much (top end is about 4-5 GHz per core; servers can have around 32 cores - though that is increasing too).

What should be cared about is memory usage per process, and how the software you are using (or which multiple people are using) parcels that out. For instance, with PHP (not sure about PHP 7) you have one process per user, and how much memory those processes take ultimately puts an upper limit on the number of users that can be served at one time. In that case, knowing your memory usage and the constraints of the server could be very important (there are a ton of other factors to consider as well, I know).
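
As a rough sketch of that last point (Python; the server size and per-process footprint are assumed example figures, not anything measured):

    # Rough sketch: per-process memory caps concurrency in a process-per-request setup.
    # The figures below are assumed examples, not measurements.
    server_ram_mb = 8192       # assume an 8 GB box
    reserved_mb = 1024         # assume ~1 GB for the OS, caches, etc.
    per_process_mb = 64        # assume each PHP worker peaks around 64 MB

    max_workers = (server_ram_mb - reserved_mb) // per_process_mb
    print(f"~{max_workers} concurrent requests before the box starts swapping")  # ~112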



