Hacker News

I didn't mean to claim "it will work just great"; I claimed "it is supposed to be 6% more efficient than what we have now".

In theory, this particular comparison attack is based entirely on the number of operations, and by making it 6% longer we make it 6% easier to detect, am I right? My point is that 6% is quite a significant improvement, and instead of the regular technique I see in blog posts all over the Internet, we should mention the possibility of tree-like patterns. Furthermore, for a [01] alphabet the improvement is going to be about 90%.
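For context, the "regular technique" in those blog posts is the early-exit byte-by-byte compare, whose running time grows with the length of the matching prefix; the standard fix is a constant-time compare. A minimal Python sketch of both (function names here are mine, not from the thread):

```python
import hmac

def naive_equals(a: bytes, b: bytes) -> bool:
    # Early-exit comparison: returns at the first mismatching byte,
    # so the number of operations (and thus the timing) depends on
    # how long the matching prefix is -- the signal a timing attack uses.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equals(a: bytes, b: bytes) -> bool:
    # The usual mitigation: examine every byte regardless of where the
    # first mismatch occurs. hmac.compare_digest does this in C.
    return hmac.compare_digest(a, b)
```

The attack being discussed guesses a secret one position at a time by measuring which prefix makes `naive_equals` run slightly longer.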

I'm a hundred percent sure this technique was discovered by someone a long time ago, but I'm puzzled that I've never seen it in casual articles (I've read a bunch of them recently).




It's hard to find any articles about practical experiences with string comparison timing attacks over networks, probably because so many of the results of those experiments are going to be negative.

There definitely are circumstances where it's a concern, and, worse yet, some of those circumstances happen below the language layer, so it's not enough to say "Ruby string comparison isn't exploitable"; depending on the runtime, things could be as bad as they were in Java.

So it's still important to document those flaws --- when they're meaningful --- but a lot of the time we rate them "sev:low" or even "sev:info".



