For those wondering how you get numbers, strings and other primitives from a whole bunch of empty arrays and objects in JavaScript, here's what happens when you do arithmetic and other operations on arrays and objects:
> [] + []
"" (An empty string)
> {} + []
0 (The number zero)
> [] + {}
"[object Object]" (.toString() called on a plain object).
> ![]
false (!{} is the same)
> !![]
true (not false)
> +[]
0 (the number zero)
> -[]
-0 (negative zero)
> +{}
NaN (Not a number, same goes for -{})
> "" + []
"" (empty string)
> "" - []
0 (number zero, not empty string)
And the one that gets more people than the previous list:
> typeof [] === typeof {}
true
Incidentally, some of these things can be useful. For example, +(x) will always evaluate to a double (unless it is preceded by a string), while (x|0) will always evaluate to a 32bit integer. asm.js abuses (to an extent) this to have more control over types, and most JavaScript engines would now store the result of (x|0) as an integer internally instead of a double.
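A quick sketch of those two coercions:

```javascript
// Unary + coerces to a number (a double); |0 truncates to a signed 32-bit integer.
const x = "3.7";
console.log(+x);            // 3.7, a double
console.log(x | 0);         // 3, a 32-bit integer
console.log((2 ** 31) | 0); // -2147483648: |0 wraps past the signed 32-bit range
```

Engines that see `x | 0` on every use of `x` can keep it unboxed as an int32, which is exactly the hint asm.js relies on.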
Does knowing any of this make you a better programmer? I'd say no.
Should you be using any of this in production code? No
The next programmer, even if he's a really good one, might not know that
particular esoteric trick.
I'm not trying to vote down or anything, but what is the point? Most of it looks
like language design warts to me. Most of them should have thrown exceptions and
errored out.
Now I'm loving the idea of static typing a lot more.
Most of it is warts and is completely useless. Sometimes things like knowing that an array is an object are pretty important if you're writing production code.
The bit at the end (integer and float casting) is almost essential knowledge now for anyone writing JavaScript code that does anything mildly intensive.
The problem is the strong-typing/weak-typing distinction, not the static-typing/dynamic-typing distinction. For example, Python is about as strongly typed as C++ yet they're on opposite sides wrt. how dynamic their types are.
Security through obscurity is not security. This would be easy to translate back into the original code anyway. This is also not efficient from a file size perspective which is cancer on the web.
This is wrong; it is only true in the weird scenario where you type it into the JS console and don't use the result. It parses as an empty block ({}) followed by an unrelated unary +[]. var x = {} + []; gives you a string, the same as var x = [] + {}.
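The difference is purely where the parser sees the `{}`; parentheses force expression position:

```javascript
// At the start of a statement, {} is an empty block, so a REPL line
// reading `{} + []` actually evaluates the unary expression `+[]`, i.e. 0.
var x = {} + [];        // expression position: {} is an object literal here
console.log(x);         // "[object Object]"
console.log(typeof x);  // "string"
console.log(({} + [])); // "[object Object]" -- parens force an expression
```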
There is more logic than it seems here:
Unary plus:
> always gives a number (a double: NaN is a double)
> on array: coerced via toString (comma-join), then to number: +[] is 0, +[x] is +String(x) (recursively), two or more elements give NaN
> on object: NaN
> on string: parseNumber, or NaN if that fails
> on bool: 0 for false, 1 for true
> on null: 0
> on undefined: NaN
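The unary-plus rules above, checked directly:

```javascript
console.assert(+[] === 0);               // [] -> "" -> 0
console.assert(+[7] === 7);              // [7] -> "7" -> 7
console.assert(Number.isNaN(+[1, 2]));   // "1,2" is not a number
console.assert(Number.isNaN(+{}));       // "[object Object]" is not a number
console.assert(+"42" === 42);            // string parses as a number
console.assert(+true === 1 && +false === 0);
console.assert(+null === 0);
console.assert(Number.isNaN(+undefined));
```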
binary plus (addition):
> order doesn't matter except for the jsconsole pseudo-bug
> if both sides are number or boolean, add them
> else, toString() both sides and concat them
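Binary + per those rules:

```javascript
console.assert(1 + true === 2);                 // both numeric-ish: added
console.assert(1 + "2" === "12");               // a string side forces concat
console.assert([] + {} === "[object Object]");  // neither is a number: stringify both
console.assert("" + [] === "");                 // [] stringifies to ""
```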
boolean negate (not):
> false for 0, false, "", null, undefined, and NaN
> true otherwise
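And the negation rule (note NaN is also falsy):

```javascript
const falsy = [0, false, "", null, undefined, NaN];
console.assert(falsy.every(v => !v));  // every falsy value negates to true
console.assert(![] === false);         // arrays and objects are always truthy
console.assert(!"" === true);          // ...but the empty string is not
```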
binary - (subtraction)
> x-y coerces x and y to number and subtracts them.
> Most mainstream languages that use + for string concat don't use - for some sort of string unconcat, so it's not too crazy.
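Subtraction always goes through numbers:

```javascript
console.assert("" - [] === 0);          // Number("") - Number("") = 0 - 0
console.assert("5" - "2" === 3);        // no "unconcat": numeric subtraction
console.assert(Number.isNaN("a" - 1));  // non-numeric string -> NaN
```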
> typeof [] === typeof {}
This isn't so crazy, typeof any non-primitive gives you "object".
[] is pretty much just an object plus magic .length property,
var x = []; x.blah = 7; console.log(x.blah) works. It doesn't work on number or bool.
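The "object plus magic .length" description can be checked directly:

```javascript
var x = [];
x.blah = 7;                     // arrays take arbitrary properties, like any object
console.assert(x.blah === 7);
console.assert(x.length === 0); // .length ignores non-index keys
x[2] = "hi";
console.assert(x.length === 3); // the magic: length tracks the largest index + 1
console.assert(typeof [] === "object" && typeof {} === "object");
```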
One interesting question was about the "performance impact"? One reply was: "Vanilla I got 225k ops/sec. JSFuck, 4.5 ops/sec. So about 50000 times slower."
For shits and giggles I tried to run the minified version of my open source library through it. After about 30 minutes of Chrome being frozen I gave up, lol. I was curious how large it would balloon 17kb of JavaScript into.
I seem to remember CloudFlare using this approach (for obfuscation purposes) when users enable the 'under attack' mode on their sites. I was pretty surprised something like this was possible when I first saw the code.
Well ; produces 8307 chars on its own (without eval checked) -- which seems kind of inefficient for such a common character in idiomatic JavaScript. Looking at their encoder, ';' actually has a specific encoding (which itself has to be encoded), so it looks like there's some inefficient expansion taking place (e.g. '.' has a specified encoding that does not require recursive encoding). Encoding the string "link" in the expansion of ';' appears to be very expensive -- alert is cheap in comparison because you can obtain its letters from JavaScript return values (e.g. "l" is pulled out of "false", which is obtained by (![]+"")[2]).
I'd imagine that if you were serious about this, you'd implement, say, e=String.fromCharCode (12k chars) and use that to dig yourself out of a lot of this expensive stuff if you need more than one hard-to-encode character.
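To make the letter extraction concrete (the `fromCharCode` shortcut at the end is the hypothetical part suggested above):

```javascript
// Letters pulled out of coerced primitives, as described in the parent comment:
console.assert((![] + "")[2] === "l");   // "false"[2]
console.assert((!![] + "")[0] === "t");  // "true"[0]
console.assert(([] + {})[1] === "o");    // "[object Object]"[1]
// Once an encoder can spell "fromCharCode", any character costs one call:
var f = String.fromCharCode;
console.assert(f(59) === ";");
```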
I know this will most likely come as a surprise to most Javascript programmers but all code is represented using only two "characters" deep inside your little computers!
What do you mean two characters? What are they exactly? I'm guessing $ and ; because it's like jQuery but some operations seem impossible, like how does it even assign variables?