Interesting question. Yes, as the answers state, there is almost no reason why the two programs should iterate at different speeds, especially when the compiler optimises the loop into an unconditional jump.
However, back in the day, with some CPU architectures and some fairly primitive compilers, it is conceivable for while(1) and while(something) to iterate at different rates. That is, if the compiler emits literal, unoptimised code and the something is a number large enough to need a longer instruction to load into a register than the number 1 does.
For example, the 68000 instruction set has a quick immediate addressing mode, used by the moveq instruction, which embeds a constant between -128 and 127 directly in the instruction word. It also has a slower immediate addressing mode that allows a full 32-bit number to be sourced from an extra 32 bits tacked onto the end of the instruction. Therefore, while(1) {} could conceivably be compiled to:
label:
moveq #1,D1   ; reload the constant 1 each iteration (quick two-byte form)
cmpi #0,D1    ; test the loop condition
bne label     ; branch back if not equal, i.e. while the condition is true
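whereas while(200) {} might plausibly come out as something like the following (a sketch: 200 is outside moveq's -128..127 range, so the longer immediate form described above would be needed; the exact instruction choice is illustrative):

label:
move.l #200,D1   ; needs the full 32-bit immediate form: a six-byte instruction instead of moveq's two
cmpi #0,D1       ; test the loop condition
bne label        ; branch back if not equal

and this second loop would run slower than the first.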
I can't recall which, but I have seen at least one compiler generate code like this (if I had to guess, I'll bet it was either Dynamic C for Rabbit 2000 or an old Bytecraft compiler for an ancient Cypress architecture). Though I've worked with a lot of old, primitive compilers & it's possible it only did this with optimizations off.
At Mindtribe, our coding style suggests "for(;;)" instead of "while(1)". However, the reason isn't performance. Instead, it's that we've seen more than one compiler produce a warning about the testing of a constant in the case of "while(1)". And we have a policy of compiling without warnings.
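For illustration, here are the two spellings side by side (generic C, nothing Mindtribe-specific; the function names are made up):

/* Two ways to spell an intentionally infinite loop in C.  Some compilers
   and linters warn about the constant condition in the first form, which
   is the usual reason style guides prefer the second; the generated code
   is the same. */
void spin_while(void) { while (1) { /* wait for an event */ } }
void spin_for(void)   { for (;;)  { /* wait for an event */ } }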
Depending on the CPU architecture and optimization level, there could be a host of different reasons. E.g. if you compile it to a comparison against a constant, on x86 you could go from a sign-extended one-byte immediate to a four-byte immediate (once the test value no longer fits in a signed byte), which could cause your loop to fall out of the loop buffer depending on what's inside the loop. If that's what the interviewer is trying to get at... this is a poor vehicle for it.
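To make that concrete, here is a hedged sketch (hypothetical names, and it only matters if the compiler really emits the comparison instead of folding it away):

/* On x86, the first comparison can be encoded with a sign-extended 8-bit
   immediate, while the second needs a full 32-bit immediate, i.e. three
   extra bytes per comparison; a fatter loop body is what could push a hot
   loop out of the CPU's loop buffer. */
volatile int flag = 1;

void spin_small(void) { while (flag == 1)    { /* ... */ } }
void spin_large(void) { while (flag == 1000) { /* ... */ } }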
The fact that there are more than 0 interviewers who think there is a difference is sad, because I think this makes a great question to separate those who understand C from those who don't - it's even easier than FizzBuzz for the first group, and much harder for the second.
Anecdote time: I once interviewed a new candidate for a Python job along with three other developers on my team. One of the people on our team who was not a part of the interview handed us all a list of Python interview questions about 30 seconds before we headed down to talk to the candidate. He said these were the "best practice" questions he had just found and thought they'd be good to ask. We figured "why not?" Well, it turned out the questions were pretty obscure Python stuff that, in fact, none of us could answer on the spot. We did figure them out later as a group, but it certainly made me feel very unprofessional.
Since then I've stopped asking questions like this. Trick questions and logic questions are, I think, a bad metric. A person can be great at sudoku but be a horrible developer in terms of writing maintainable code. Instead, I tend to just talk about the macro-level stuff: "What have you built? Can I see it? How long did you maintain it? What was the trickiest part? I imagine it had performance issues; did you spend time optimizing it?" I do ask some tech questions. For example, for web developers I ask "Describe how an XHR request happens in detail." But that is as deep as I typically go.
This isn't specific to comp sci. While I can't cite a specific story this many years out, I definitely recall my classmates in Physics going for their nuclear power interviews (I had already decided that wasn't for me) and coming back bragging/complaining that they had to explain the proper application of Schrodinger's equation to an interviewer.
Given the tight market for programmers, it's actually a nice way of learning about the company before you go work there, if you think about it that way.
Kind of like if you got an interviewer that responds with a blank stare if you talk about version control and says they just make copies of the directory, or something like that.
Some people on SO are suggesting this could be a "self-confidence testing" question, where the interviewer already knows what he's asking is bullshit and expects you to react accordingly. Sort of like an HR-ish kind of question.
I can't fathom how one would come to the conclusion that there is a difference between the two otherwise.
It's also potentially a good test for how you approach correcting people, especially when you're in a subordinate position. Some people can be too forceful ("you're wrong!"), while quite a lot of people will let it slide ("I'll implement this terrible idea because I can't say no").
I hate this thinking (I know it is not yours, it's a hiring meme).
If an interviewer makes some crazy assertion, then I pretty much flip into the mode of "how do I get out of this interview ASAP, as there is no way I'm working with this idiot". Whereas in the office, I'm the guy that sends the really difficult mass email that calls out his boss, if it is really needed. Engineering is not interviewing, and vice versa; why would you expect identical behaviors in both?
And yup, I am self-selecting out of the situation where the interviewer is not an idiot and just 'testing' me. I'm fine with that, because I'm also selecting out of a job where people aren't willing to treat me with respect.
Interviewing is largely voodoo; we shouldn't be inflicting cognitive pain on people based on our random selection process. The only measure that has ever been shown to be predictive is past performance. So let's talk about my past performance, and leave this amateur hour psychological testing in the trash bin where it belongs.
It's a psychology test, designed by non-psychologists.
Don't try something just because you sat down and thought "hey, I am going to create an artificial situation, and then guess at what a good candidate would do[1]", and then see if he does that.
Other nerve-wracking circumstances are not the same as interview circumstances. For me at least, test and interview stress is of a nature and effect that I have never experienced anywhere else in 12+ years as an engineer of various disciplines.
The type of place that needs people who can develop under that kind of pressure doesn't sound like a great place to work... That type of pressure sounds a lot like cowboy coding.
TBH if you're nervous about a technical interview, maybe you don't actually have the technical level required for said job.
Or maybe that's just me; nervousness in presentations or interviews or even just talking is, IMHO, an indication of not knowing what you're talking about. And knowledge is confidence.
I've interviewed a number of people who were incredibly confident in their words, voice, and body language who clearly had no idea what they were talking about.
Similarly, I've interviewed people who clearly knew way more about the subject matter of the job than I did, but who were incredibly nervous during the interview, either because they were intimidated by the idea of working outside of academia, because of real/perceived language limitations, or because they were just incredibly shy and awkward.
I've generally tried to look for people who fit the sweet spot in-between: confident and sociable enough to be reasonably comfortable during the interview and humble enough to not be brash. The more "customer/client" facing the position, the more I care about the confident and sociable part.
Totally depends on the person. I'm shy in interviews, always have been, but I'd happily challenge anyone with my knowledge. (I'm not saying I know everything or even a lot, but I am very confident of what I do know)
Is it normal for an interviewer not to clarify that "actually, I was wrong" on these sorts of self-confidence tests?
Personally, at the end of the interview (as the interviewer) I would refer back to any of those sorts of questions and say "actually, I was trying to catch you out; you did good there", or some such.
This not only stops them doubting themselves, but also clears the air if I was to come back to them and offer them the job. Who knows, maybe the interviewee ran off to SO and categorically proved me wrong.
Could've been a disgruntled developer who told the manager "oh yeah, here's a good question to ask", then spent the evening laughing and feeling good about scaring the interviewee away.
Devil's advocate - if for some reason you're using "while(<constant evaluating to true>)" to do something other than wait for some kind of event, and if whatever you're doing is fast but repeated a huge number of times, it's possible that you'd see some speedup in the overall program by using a "faster" conditional evaluation (assuming that it's actually evaluated at runtime and not compiled away because it's a constant value).
I am having a really, really hard time contriving a reason for this, though. It may be my lack of imagination, but I feel like the only reason this would come up is if you deliberately designed a system specifically to make this an issue.
Honestly, you could take this question a bunch of ways. Maybe he's just trying to ask whether the speed of a cast-to-bool depends on the value being cast; maybe he's trying to get you to see that the compiled code won't even evaluate the conditional at runtime; or maybe he's trying to get you to realize that you almost certainly wouldn't care, because even if the conditional were evaluated, a while(<true>) loop is almost certainly waiting for some event, so the only thing it could possibly affect is polling speed, and negligibly at that.
The question can be interpreted in different ways. For example: (1) "Which of these infinite loops takes the longest to run to completion?" Another interpretation could be: (2) "We want to write an infinite loop; which of these two ends up with the most efficient code?"
I don't think anyone is missing the point that an infinite loop takes infinite time to terminate, it's just not a very interesting point to make. Getting "hung up" on (1) means missing out on potentially interesting insights found when answering (2).
The author of the question stated that the interviewer said that while(1) was faster. Probably the interviewer really wanted to get into how the condition is evaluated. Questions like this, especially during job interviews, make me think that interviewers read some "fun facts" somewhere and decided to ask about them during the interview instead of actually evaluating the candidate's skill. Sorry for the rant.
While not directly pertaining to the question, there are precedents for languages to special-case some inputs to loops for various reasons. Specifically, I'm thinking of Java here. Note the following code:
int x;
while (true) {
    x = 8;
    break;
}
System.out.println(x);
This code compiles and runs. Java is smart enough to treat the constant `true` specially: its definite-assignment analysis knows that the code after the loop can only be reached via the break, by which point `x` has been assigned, so it guarantees that `x` will always be initialized.
Instead of `true`, you can also use things like `1==1` in the condition to the same effect. However, if you attempt the following:
int x;
boolean y = true;
while (y) {
    x = 8;
    break;
}
System.out.println(x);
...the compiler will spit an error at you:
While.java:9: error: variable x might not have been initialized
System.out.println(x);
^
From a language design perspective, it's interesting to note this sort of demand and support for infinite loops in the language specification. One alternative to special-casing `while (true)` as shown above is to have an entirely separate construct for "I want this loop to be infinite", such as the `loop` keyword in Rust, or the bare `for` in Go. Alternatively, see how style guides for C-like languages tend to prefer `for (;;)` over `while (1)` to denote deliberately-infinite loops, to make the intent as obvious as possible.
This question was asked to me at a major MNC by a senior manager, after 4 technical rounds.
He started off by asking how I manage conflict in a team, and went on to say "for example, if you write while (2) { // some code } and someone in code review suggests that you replace this with while (1), or vice versa, then how do you handle the conflict and how do you convince him?". I said both are the same speed. He interrupted and said "check your basics, while (1) is faster" and moved on to the next set of questions.
He was not testing my confidence. He actually believed that while (1) is faster.
So this is an interview question, apparently from a senior manager. The obvious reply should be: What do you mean by faster? Do you mean writing this code is faster? Reading this code is faster? Compiling it is faster? Or maybe you mean execution is faster? In case you meant execution, could you please mention the kind of computer on which the execution will happen? By the way, it seems that both pieces of code will run forever and ever; you can't figure out who won a race until the race ends, and here the race never does.
You are very close to making the same mistake as the interviewer did. The compiled program will not have any tests or casts or anything like that, because the condition is known at compile-time and the compiler just compiles it down to an unconditional jump.
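Roughly what that looks like (a sketch with illustrative function names; exact output varies by compiler and flags, but gcc and clang tend to produce the same thing for both forms):

/* Both of these typically compile down to a label and an unconditional jump
   back to it, something like "label: jmp label" on x86, because the constant
   condition is folded away at compile time.  There is no runtime test whose
   speed could differ. */
void loop_one(void) { while (1) { } }
void loop_two(void) { while (2) { } }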
I'm not making any mistakes. It wouldn't even be a jump at all, just static code, if the statement were compile-time optimized. I failed to mention it because it's trivial enough without the extra fluff of compile-time magic.
In some contexts, it is important to make a distinction between O(1) and constant-time. O(1) means you've got a constant upper bound on how long the operation could take, but constant time implies that the operation always takes the same amount of time.
Consider the case of converting an integer value to floating point. In order to normalize the floating point value, you'll have to do something equivalent to counting the number of leading zeros on the int, and many machines don't have an instruction for that, so you have to do a loop. Data-dependent timing like that needs to be studiously avoided in crypto code in order to prevent timing attacks. (Of course, crypto rarely uses floating point, but it's still a great example of how an O(1) operation can have variable execution time depending on the input.)
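A minimal sketch of that kind of data-dependent loop (a generic software count-leading-zeros, not any particular library's routine):

#include <stdint.h>

/* Software count-leading-zeros, the sort of loop an int-to-float conversion
   might need on a machine without a CLZ instruction.  It runs between 0 and
   31 iterations depending on the value, so it is O(1) -- bounded by the word
   size -- but not constant-time. */
int clz32(uint32_t x)
{
    int n = 0;
    if (x == 0)
        return 32;
    while ((x & 0x80000000u) == 0) {
        x <<= 1;
        n++;
    }
    return n;
}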
Time complexity is essentially the expression for computation time as a function of input size. Before you can discuss it you have to decide how you measure the size of the input. What you pick as the size of the input may vary from problem to problem and may be the list size, string length, number of penguins etc.
I wonder, how do you propose to measure the "size" of your input to bool cast?
That's why I mentioned machine precision. Bignums are going to be different, and most languages don't handle bignums or casts on them implicitly; it's mostly library-based.
What about something like casting a bignum to float? It's easy to imagine a simple implementation optimization where any number smaller than the machine's maximum integer size is stored as a native value, so e.g. float(100) would be constant time but float(2^64 + 1) might not.
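A toy version of that idea (a purely hypothetical layout, not how any real bignum library stores things):

#include <stddef.h>
#include <stdint.h>

/* Small values are stored inline; anything bigger is an array of 32-bit
   limbs, least significant first.  Converting the inline case to double is
   constant time; the limb case is linear in the number of limbs (rounding
   details ignored for brevity). */
typedef struct {
    int is_small;            /* nonzero if the value fits in a machine word */
    int64_t small;           /* valid when is_small is nonzero */
    const uint32_t *limbs;
    size_t nlimbs;
} toy_bignum;

double toy_bignum_to_double(const toy_bignum *n)
{
    if (n->is_small)
        return (double)n->small;                 /* O(1) */

    double d = 0.0;
    for (size_t i = n->nlimbs; i > 0; i--)       /* O(nlimbs) */
        d = d * 4294967296.0 + (double)n->limbs[i - 1];
    return d;
}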
What's it mean to cast in non-constant time? Instead of O(1) performance, it's O(N). But in that case, performance is linear with respect to what? With respect to how big a number is being cast?
I think that might be possible in dynamic languages like Lisp that can compile to assembly. Maybe Franz or Allegro Lisp compilers do something like that. For the JVM or .NET, I'm not sure that can happen. For C or C++, every typecast is constant time.
I can see non-constant conversion times, especially with floats. Floats have lots of weird edge cases, especially when you get to the very big and the very small scales. If your number's too big, you have to see if you need to just throw back infinity and/or negative infinity. If your number's too small, positive or negative zero. Finally, if you're hanging out with tiny numbers near zero, a good floating point library will start to have to worry about denormal numbers, where you start padding with zeroes, sacrificing precision to denote that the number isn't quite zero. So yeah, it would be O(N) with respect to the size.
A conversion from string to int is O(n) in the length of the string (O(lg n) in the "size" of the number). Such a "cast" would be syntactically indistinguishable from builtin casts eg. in C++ which allows user-defined conversions.
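For example, a bare-bones digits-only parser (a sketch; no sign, base, or overflow handling):

/* One pass over the digits, so O(n) in the length of the string. */
long parse_digits(const char *s)
{
    long v = 0;
    while (*s >= '0' && *s <= '9')
        v = v * 10 + (*s++ - '0');
    return v;
}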
I hope the answer the interviewer was looking for was:
"I'm unsure... I usually check the byte code when I am optimising my work";
instead of a flat "No" (the original poster claimed the interviewer said the run times were unequal).