Can anyone explain the “why does this matter” paragraph where the author seems to suggest that using branches in our program is a security risk?
I know that branch speculation can be used as an attack vector if our program is the aggressor - but does simply using branches in some way make us more likely to be the victim?
Say that your server has an admin login that simply compares the submitted password to "thisistherootpassword", one byte at a time. The attacker starts by trying "a","b","c", etc. and measuring the time it takes for the server to respond. When they get to "t", the server takes slightly longer, so they start trying "ta","tb","tc", etc. until the server takes even longer to respond. This allows the attacker to crack the password much, much faster than naively brute-forcing the space.
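To make that concrete, here's a minimal in-process sketch in Python (hypothetical names; a real attack over a network would need many more samples and statistical filtering to beat jitter):

    import string
    import time

    SECRET = "thisistherootpassword"   # hypothetical server-side secret

    def insecure_check(guess: str) -> bool:
        # Bails out at the first mismatching byte, so the running time
        # leaks how many leading characters of the guess were correct.
        if len(guess) != len(SECRET):
            return False
        for a, b in zip(guess, SECRET):
            if a != b:
                return False
        return True

    def time_guess(guess: str, trials: int = 2000) -> float:
        # Take the minimum over many trials to suppress noise.
        best = float("inf")
        for _ in range(trials):
            start = time.perf_counter()
            insecure_check(guess)
            best = min(best, time.perf_counter() - start)
        return best

    def crack(length: int) -> str:
        # Assumes the password length is already known.
        known = ""
        for _ in range(length):
            pad = "\x00" * (length - len(known) - 1)
            # The candidate that survives the most comparisons before
            # insecure_check returns early is the slowest one.
            known += max(string.ascii_lowercase,
                         key=lambda c: time_guess(known + c + pad))
        return known

    print(crack(len(SECRET)))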
Actually, I have an even better example: I performed exactly this attack in order to get 1st place in a programming competition. You can read my writeup here: https://lukechampine.com/progcomp.html
This is regarding a JWT[0], which is often used for authentication.
Server-side code that takes a different amount of time depending on which bits are set in the JWT (or any similar authentication token) can be probed by repeating the operation with different values. Think of lockpicking: if you can move a pin and hear a click or feel more or less resistance, you know you've poked something critical in the core.
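The standard mitigation is a comparison whose running time doesn't depend on where the first mismatch falls. A minimal Python sketch, using the stdlib's hmac.compare_digest plus a hand-rolled branch-free equivalent for illustration:

    import hmac

    def check_token(submitted: bytes, expected: bytes) -> bool:
        # compare_digest examines every byte regardless of where the
        # first mismatch occurs, so timing reveals nothing about how
        # much of the token was correct.
        return hmac.compare_digest(submitted, expected)

    def constant_time_eq(a: bytes, b: bytes) -> bool:
        # Hand-rolled equivalent: accumulate differences with XOR/OR
        # instead of branching and returning early. (The length check
        # still leaks the length, which is usually public anyway.)
        if len(a) != len(b):
            return False
        diff = 0
        for x, y in zip(a, b):
            diff |= x ^ y
        return diff == 0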
One classic example of this is a side-channel attack on RSA. In brief, the central computation is raising a number to the power of the secret key (modulo a large number). This is naively accomplished with a square-and-multiply algorithm, where an extra multiplication happens for every 1 bit in the binary representation of the secret key. By watching the power consumption of a device as it encrypts or decrypts a message, you can read off the secret key with relative ease.
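A sketch of that naive exponentiation, to show exactly where the data-dependent work happens (real RSA implementations use constant-time techniques such as the Montgomery ladder and blinding instead):

    def square_and_multiply(base: int, exponent: int, modulus: int) -> int:
        # Left-to-right binary exponentiation. The squaring happens on
        # every bit, but the extra multiply runs only for the 1 bits of
        # the (secret) exponent, so a power or timing trace reads the
        # key out bit by bit.
        result = 1
        for bit in bin(exponent)[2:]:            # most significant bit first
            result = (result * result) % modulus
            if bit == "1":
                result = (result * base) % modulus
        return result

    assert square_and_multiply(7, 13, 11) == pow(7, 13, 11)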