Light guns rely on the fact that a white frame is not displayed all at once but "drawn" by the CRT's electron beam - if you flash a single white frame, one location will flash white slightly later than another, and you can infer the gun's target location from the timing.
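A minimal sketch of that timing-to-position mapping, assuming an NTSC-style raster; the constants (line period, line count, visible width) are illustrative round numbers, not taken from any particular console:

```python
# Map the time at which the gun's photodiode sees the flash (measured
# from vertical sync) to the (x, y) position the beam was drawing.
LINES_PER_FRAME = 262     # total scanlines per NTSC field (assumed)
LINE_PERIOD_US = 63.5     # time the beam spends on one scanline (assumed)
ACTIVE_WIDTH_PX = 256     # visible pixels per line (assumed)

def beam_position(us_since_vsync: float) -> tuple[int, int]:
    """Infer (x, y) from elapsed time since vertical sync."""
    line = int(us_since_vsync // LINE_PERIOD_US)   # which scanline
    assert line < LINES_PER_FRAME
    t_in_line = us_since_vsync % LINE_PERIOD_US    # offset into that line
    x = int(t_in_line / LINE_PERIOD_US * ACTIVE_WIDTH_PX)
    return x, line

# If the sensor fires 3200 microseconds after vsync:
print(beam_position(3200.0))   # -> (100, 50): line 50, about x=100
```

Real hardware also has to account for horizontal/vertical blanking intervals, which this sketch ignores.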
LCDs, well, change the whole frame at once. The issue isn't the frame rate but what happens while the device is drawing a single frame change.
But following the same logic as the GP, if you have a higher refresh rate, can't you just split the frame into several sub-frames that simulate the refresh scan as an animation?
Not in practice. To completely simulate the CRT refresh, you'd need to flash each pixel separately along the electron beam's path. If your LCD has 1,000,000 pixels, you'd need a refresh rate 1,000,000 times higher to do that.
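Back-of-the-envelope arithmetic for that claim, assuming a 60 Hz frame rate:

```python
# Lighting pixels one at a time along the scan path means the panel's
# update rate must be (pixels per frame) x (frames per second).
pixels = 1_000_000        # roughly a 1152x864-class panel
frame_rate_hz = 60
pixel_rate_hz = pixels * frame_rate_hz
print(f"{pixel_rate_hz:,} pixel updates/s")   # 60,000,000 -> 60 MHz
```

That is 1,000,000 times the 60 Hz whole-frame refresh, which is exactly the factor the parent comment gives.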
Tearing is a property of the output from the GPU. The data takes time to send, and the GPU can be directed to send different data halfway through the process (or the memory it's in the process of sending can be modified, etc.). The display just shows what it's sent... either as it receives it (CRT) or buffered up and drawn later (LCD), but if there's a tear, you'll see it just the same.
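A toy simulation of that: scanout reads the framebuffer one line at a time, so if the front buffer is swapped mid-scan with no vsync wait, the tear is already baked into the signal going down the cable. Buffer contents and the swap point are made up for illustration:

```python
# Scanout sends lines top to bottom; an unsynchronized buffer swap
# mid-scan means later lines come from the newer frame.
HEIGHT = 8
frame_a = ["A" * 16] * HEIGHT    # frame being displaced
frame_b = ["B" * 16] * HEIGHT    # frame that replaces it

front = frame_a                  # buffer currently being scanned out
sent = []                        # what actually goes down the cable

for y in range(HEIGHT):
    if y == 3:                   # app swaps buffers without waiting for vsync
        front = frame_b
    sent.append(front[y])        # each line is read from whichever buffer
                                 # is "front" at that instant

print("\n".join(sent))           # top 3 lines are 'A', rest 'B': a torn frame
```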
Obviously not - there is no frame memory inside the glass, and the link bandwidth is limited, so LCD screens still scan the image in.
The difference is that an LCD keeps the picture until told otherwise - the whole picture is emitting light. A CRT has limited phosphor persistence, so only one point/pixel is brightly lit at any given time.
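A toy model of that hold-type vs. impulse-type distinction, sampling one pixel's light output across a 60 Hz frame; the exponential decay and its time constant are illustrative assumptions, not measured phosphor data:

```python
import math

FRAME_MS = 16.7          # one 60 Hz frame
DECAY_MS = 1.0           # assumed phosphor decay time constant

def lcd_brightness(t_ms: float) -> float:
    return 1.0           # hold-type: the pixel emits for the whole frame

def crt_brightness(t_ms: float, hit_ms: float) -> float:
    # Impulse-type: dark until the beam hits the pixel, then rapid decay.
    return 0.0 if t_ms < hit_ms else math.exp(-(t_ms - hit_ms) / DECAY_MS)

for t in (0.0, 5.0, 5.5, 8.0, 16.0):
    print(f"t={t:5.1f} ms  LCD={lcd_brightness(t):.2f}  "
          f"CRT={crt_brightness(t, hit_ms=5.0):.2f}")
```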