If these sorts of 'strength checkers' become ubiquitous enough, I wonder how much value there would be in reverse-engineering their strength tests (most of these run client-side in JS for UX latency reasons, right?) and feeding those models into your brute-forcing module as another parameter.
Then you can automatically skip any password you know is too simple, because the site wouldn't have allowed the user to set it in the first place. You could also de-weight any constructions your generator uses that the checker penalizes (keyboard locality, l33t, ...), rather than positively weighting them as is done now.
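A minimal sketch of that pruning idea, assuming you've ported the site's JS checker into a scoring function. `site_strength_score` and `MIN_ACCEPTED_SCORE` here are hypothetical stand-ins, not any real site's logic:

```python
# Sketch: filter a brute-force candidate stream with a reverse-engineered
# strength checker, so candidates the site would have rejected are skipped.
from typing import Iterable, Iterator

MIN_ACCEPTED_SCORE = 3  # assumed threshold below which the site rejects a password

def site_strength_score(candidate: str) -> int:
    """Hypothetical port of a site's client-side checker: one point per rule met."""
    score = 0
    if len(candidate) >= 8:
        score += 1
    if any(c.isdigit() for c in candidate):
        score += 1
    if any(c.isupper() for c in candidate):
        score += 1
    if any(not c.isalnum() for c in candidate):
        score += 1
    return score

def prune_candidates(candidates: Iterable[str]) -> Iterator[str]:
    """Yield only candidates the site would have allowed the user to set."""
    for pw in candidates:
        if site_strength_score(pw) >= MIN_ACCEPTED_SCORE:
            yield pw
```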
Intuitively, it seems like the more restrictions placed on a password (must have at least one of char type x, no more than 20 total chars, ...), the smaller the entire search space. But where is the inflection point at which these rules force stronger passwords more than they assist the attacker by shrinking that space?
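A rough back-of-the-envelope illustration of the shrinking effect, assuming 8-character passwords over the 95 printable ASCII characters and a single "must contain at least one digit" rule:

```python
# How much does a "must contain at least one digit" rule shrink the space
# an attacker has to cover? (8 chars over the 95 printable ASCII characters)
import math

LENGTH = 8
ALPHABET = 95          # printable ASCII characters
NON_DIGITS = 95 - 10   # characters that aren't digits

full_space = ALPHABET ** LENGTH
# Every candidate with no digit at all is excluded by the rule, so the
# attacker never has to try those.
constrained_space = full_space - NON_DIGITS ** LENGTH

print(f"no rules:        {math.log2(full_space):.2f} bits")         # ~52.56
print(f"digit required:  {math.log2(constrained_space):.2f} bits")  # ~51.80
```

Here a single rule shaves off well under a bit of raw search space; the question above is where stacking more of them tips the balance.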
Then again, if you're doing your hashing and storage right, brute force ain't gonna help.
Right. I think people make the mistake of thinking that, if you have 40 bits of entropy and you remove the 20 bits' worth of weak passwords, you only have 20 bits left. That's not how it works.
40 bits of entropy means 2^40 possibilities; 20 bits means 2^20. 2^40 - 2^20 is still very, very close to 2^40 (about 39.9999986 bits of entropy).
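A quick check of that arithmetic:

```python
# Removing all 2^20 "weak" candidates from a 2^40 space barely dents it.
import math

full = 2 ** 40
weak = 2 ** 20
print(f"{math.log2(full - weak):.7f} bits left")  # ~39.9999986
```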