Things you should know about Windows Input, but would rather not (ph3at.github.io)
176 points by w4rh4wk5 86 days ago | 107 comments



I have to be very disciplined about switching my layout back to US English before launching games, because they seem insistent on using the current layout's mapping for WASD rather than the physical WASD keys. But, worse (IMHO): some games use the physical keys, so I can't predict whether a given game was coded correctly. I just gave up and made the switch part of my launch process.

I have a secret desire to convert one of my old MacBook Pros into a Steam-via-Proton setup to get out from under the tyranny of Windows, but gaming is supposed to be a break from "work"... which is why it's not already done.


Your computer doesn't actually know what the keys for WASD are. It just receives a number (commonly called scan code) for the pressed key from the keyboard, and has to use the mapping to determine which key that actually was.

There's some convention for which scan code to use for which physical position on a keyboard, but that's not correlated with what's actually printed on the key caps. E.g. on common QWERTY keyboards the "A" key will have scan code 4, but on AZERTY keyboards it will have scan code 20.

Games can probably get away with listening for the scan codes of the keys in the position commonly used by WASD, but it's a bit fragile, and they can't actually tell you what's printed on the keys they're listening to. The lack of consistency is certainly annoying, though...


> Games can probably get away with listening for the scan codes of the keys in the position commonly used by WASD, but it's a bit fragile, and they can't actually tell you what's printed on the keys they're listening to. The lack of consistency is certainly annoying, though...

The operating system knows how to map scan codes to characters based on the keyboard mapping the user has selected.

    Win32: MapVirtualKeyExA / MapVirtualKeyW
    macOS: CGEventKeyboardGetUnicodeString / UCKeyTranslate
    Linux: xkb_state_key_get_utf8
Hell, GLFW nicely wraps all of these up with glfwGetKeyName.

Stop reading character codes directly: use the scan code, and when displaying bindings, map it using the system API so people see the right freaking binding. It's not rocket science.


Scancode vs character code is what they are describing:

Games SHOULD use scancodes for positional mappings (like WASD) and rely on the system keymap to decide what letter to display them as. There is no "probably" here, the scancode is the scancode and the keymap is the keymap.

Games OFTEN use the character codes directly, regardless of whether it's for a positional mapping or not. This requires explicit support in the game for each possible system keymap; otherwise you end up with nonsense mappings.


We've invested some time to improve this situation for the games we port, at least to some extent. Keyboard input is read as a combination of scan code + win32 virtual key information to determine the physical key pressed on your keyboard. This way the keybindings are (almost) independent of the keyboard layout.

However, we also reflect your current key bindings with dynamic button prompts that tell you which key to press in game. For this we translate the determined physical key back through the selected keyboard layout to figure out what the corresponding label on your keyboard might be.

Most of this is just win32 API shenanigans, but the following post provides a bit more detail on reading inputs this way.

https://blog.molecular-matters.com/2011/09/05/properly-handl...


Not sure about the MacBook Pro, but I converted my Windows 11 gaming desktop to Linux, and Steam with Proton works just fine for all the games I care about. The only game that didn't play was Starfield, and that was fixed after a few weeks.


This is the kind of slippery problem I would never think about but would make me want to break my keyboard if I encountered in real life.


I am very lucky that Win+spacebar switches layouts, so it's low drama to execute, just some drama to remember. Insult to injury, and likely very related to the article's points: if I remember only after launching the game, Win+spacebar is 50/50 on whether the game notices, so I usually just pay the startup cost again if I forget rather than leaving things in limbo.


> I am very lucky that Win+spacebar switches, so it's low drama to execute, just some drama to remember.

Unless they've fixed this in the last few years, switching your input mode like that while World of Warcraft is running will cause some kind of crash that prevents you from using the in-game chat. This felt especially egregious because switching input modes is the kind of thing you want to do all the time if you're using the in-game chat.


There's a lot to be said for a consistent process that always fixes a problem, vs one that fixes the problem relatively permanently but inconsistently, and requires more memory.


That's easier than just changing the key bindings in game?


Option 1: remember to press Win+spacebar before launching game

Option 2: launch game, navigate to settings, write down on a sheet of paper what the current keybindings are (since once I press "." for "e" it's going to either whine or blank out whatever "." was mapped to when I started), repeat that exercise about 25 times per game, times the 15 games I have in rotation right now, and feel that was a great use of my "downtime"

Option 2 has the added bonus of making it 50/50 whether the game help text knows I remapped, and thus says "press . to open door" or whether it continues to say "press e to open door" and I have to guess whether it means "their e" or "my e"


The way games ought to do keymapping is to just allow conflicts. If you map two functions to the same key, that key should do both things, until the player sorts it out. The keymapper can put a warning sign up, "conflicts with foobar", but it shouldn't remove the key from foobar, and it shouldn't say "I can't allow you to do that", either.


If the author reads the comments: I just wanted to confirm that everything seems to be correct.

The fun part you might not know is e.g. that if you want to ship on Steam, you will need legacy messages for its overlay to function. Because it's a web-based thing ^_^

Practical solutions usually include either letting the user pick their preferred input type and assuming they know the limitations (like no windowed mode), or expecting users to reduce 8k polling to a reasonable number in their mouse software. Users already know how to read forums or Reddit, and the games that do not work fine with 4k/8k polling rates are well known and numerous.

There is an alternative that my colleagues are going to explore: make it Microsoft's problem. There is a Microsoft-provided library that can do input for you...


If the library you are alluding to is GameInput, then I tried that and it sadly did not solve the problem.


I never said it would. Any engine needs 'basic platform support' style input, and going into extra windows/threads/APIs is for 'competitive shooter titles'.

After working on one a bit, I would say the library route is not good enough :) but it is for sure a cost-effective way to support a platform, or even a few platforms shared by the same vendor.


I maintain a library to simulate input on Windows, macOS and Linux. There are a ton of quirks on all of these platforms. On Windows you can simulate text input, but the text cannot start with a newline character (it's perfectly fine anywhere else). macOS also allows you to simulate text input, but the text has to be shorter than 21 characters. Linux is especially difficult, with X11, multiple Wayland protocols and libei.


Look into contributing to LibTAS.

It’s very Linux-specific, but it can manipulate, frame by frame, the input that an application is getting; it's meant for speedruns of games.


Thank you for the suggestion. It's an interesting project, but I'm more comfortable writing Rust code. Also I still have a lot to do for the library I'm maintaining. The libei and Wayland support is still experimental and buggy and I need to get better integration tests working. So much to do, so little time :-)


If it's available, I'd love a link.



Does this use SendInput on Windows or injects directly into the app?



I would point people to the new GameInput API - https://learn.microsoft.com/en-us/gaming/gdk/_content/gc/inp...

GameInput does everything these older APIs do without having to mix them all together or use a bunch of them to get the information you want.


I tried GameInput, sadly it doesn't solve this issue.

You still need legacy input for Window interactions, and the message queue still gets flooded by fast-polling mice.


Just wanted to check: what are the messages that cause the problem, exactly? If it’s only WM_MOUSEMOVE and WM_NCMOUSEMOVE, then it’s important to note that they do not, in fact, become real messages[1] until you GetMessage or PeekMessage:

> When the hardware mouse reports an interrupt, indicating that the physical mouse has moved, Windows determines which thread should receive the mouse move message and sets a flag on that thread’s input queue that says, “The mouse moved, in case anybody cares.” (Other stuff happens, too, which we will ignore here for now. In particular, if a mouse button event arrives, a lot of bookkeeping happens to preserve the virtual input state.)

> When that thread calls a message retrieval function like GetMessage, and the “The mouse moved” flag is set, Windows inspects the mouse position and does the work that is commonly considered to be part of mouse movement: Determining the window that should receive the message, changing the cursor, and determining what type of message to generate (usually WM_MOUSEMOVE or perhaps WM_NCMOUSEMOVE).

The book version of Raymond Chen’s blog[2] elaborates on the subject of messages arriving in an empty forest:

> One of [the] special messages is the WM_MOUSEMOVE message. As you saw earlier, the message is not added as an input message when the mouse moves; instead, a “the mouse moved” flag is set. When the window manager goes looking for an input message and the “the mouse moved” flag is set, it clears the flag and generates a WM_MOUSEMOVE message on the fly, adding it to the list of input messages (or coalescing it with an existing WM_MOUSEMOVE message).

> Other special messages that fall into this “generated on the fly” category are the WM_PAINT, WM_TIMER and WM_QUIT messages. The first two messages are generated even later in the message search process, only after no applicable input message list was found, and only if the message filter indicates that that type of message is being requested. (The WM_QUIT message is even shier than the paint and timer messages, because it emerges only after the posted message list is empty. On the other hand, the WM_QUIT message is also bolder, in that it ignores the message filter.)

Keeping that in mind, using PeekMessage() and thus materializing the pending WM_MOUSEMOVE should be done very carefully (though with PM_REMOVE it shouldn’t be a problem). In fact, it seems like unless you care about WM_TIMER, WM_PAINT, or WM_QUIT, the right thing to do upon seeing a WM_MOUSEMOVE (but not a WM_NCMOUSEMOVE) would be to behave as though the message queue is currently empty and proceed to do whatever other processing you need to perform for the current frame.

Still, I don’t get why this should be a problem. If you receive a WM_MOUSEMOVE and your only action is to discard it, surely it doesn’t take 125 μs to do that? And if it doesn’t, you shouldn’t see more WM_MOUSEMOVEs that can only get generated at 8 kHz at most.

Suggested experiment because I don’t have a fancy mouse: if you insert

  if (msg.hwnd == <root window> && msg.message == WM_MOUSEMOVE) continue;
before the TranslateMessage call, will the stutter disappear? I know that above I proposed break instead of continue, but continue could be less disruptive to other special messages—if it works.

[1]: https://devblogs.microsoft.com/oldnewthing/20031001-00/?p=42...

[2] https://openlibrary.org/works/OL9256722W


I guess if mice are generating too many messages, asking Windows to coalesce them has less impact.


Unfortunately not everything. For example, no xbox 360 controller support: https://github.com/microsoft/GDK/issues/39


Another "fun" thing is old joysticks and steering wheels that require calibration. This used to be done in the Windows control panel, so that not every game needed to implement it... However, it seems the latest API in fashion, which e.g. Unity uses, just totally ignores that native calibration. (Could also just be a Unity-side problem.) So a bunch of old wheels are broken in Unity.


What's equally bad these days is that the calibration UI is hidden three or four levels deep in nested dialog windows in recent versions of the Control Panel. Even finding it is a hunting game.

Many, but not all, recent game controllers come with calibration accurate and stable enough that you don't need to do this very often, or at all. It's mostly older hardware that needs to go through the process. Most 10-year-old controllers (sticks/wheels etc.) can still be perfectly usable today after calibration!


It was pretty easy to find with the Settings app.

"Find a setting" > type "joy" suggests "Setup USB game controllers". Lists the controllers/joysticks. Click "Properties". Click "Calibrate..."


Shoutout to Elden Ring where the entire game freezes when your (wireless) mouse goes to sleep.


Sounds like a gameplay mechanic à la Braid.


Issue closed (Works as intended)


I use winit, which is a Rust crate for dealing with windows (lower case) and keyboard/mouse events. It supports Microsoft Windows, MacOS, Linux/X11, Linux/Wayland, Android, web browsers, and Redox OS. Trying to abstract over all of those works, almost.

Reading keyboard arrows or WASD has timing problems. Although the program gets key code up/down events, the key repeat software seems to interfere with the timing of the events and slows them down. Software repeats can be ignored, but still cause problems. Holding down a repeating key for long periods can cause lost events. X11 is especially bad, but there seem to be some problems with Windows, too.

Another thing nobody has fixed in the Rust game dev ecosystem yet because the developer base is tiny. Rust needs one good first person shooter project with enough push behind it to wring out the performance problems in interactive 3D.


I don't know about Linux, but in Windows all input messages come with a timestamp, so even if you happen to process it later you should know when the key was actually pressed. That might help when simulating physics, animation or other stuff.


A timestamp won't help if your goal is reducing game lag. You can tell you're lagging, but can't do anything about it.


Japanese IME on Windows is insane; it's always switching to A mode. You're always fighting it to go to あ.


I also hate this behavior. My understanding is that it doesn't get noticed by native users as it's not an issue when the JP input is the only input, so it's mainly an issue for language learners.

It's been on my mind to try the Google IME for some time, to see if it fixes this issue, but I haven't gotten around to trying it yet.


The Google IME is waaaay better. With both I have some bug though where the browser will just freeze for a few seconds while switching the IME - happens in both Chrome and Firefox. Also happens on macOS with the native IME so there seems to be some bug there...


I just had a thought: I really wonder how mouse-in-pointer[1] mode interacts with raw input. I had looked into mouse-in-pointer somewhat extensively to try to understand how it behaves[2], in part because I'd really like WM_POINTER in wine some day (but I fear I will never actually be able to contribute to that effort, since I'm just not in-the-weeds enough with Wine input, and I don't think anyone is interested in holding my hand to figure it out.) However, one thing I definitely never thought about is raw input. It's pretty clear that both are implemented somewhere in kernel mode, in front of the event processing queue, so I bet they have an interaction. WM_POINTER events also implement batching, so I wonder if enabling both and ensuring the windowproc doesn't let it fallback can solve some problems. Probably not, but worth a shot.

[1]: https://learn.microsoft.com/en-us/windows/win32/api/winuser/...

[2]: https://github.com/jchv/WmPointerDemo

> Disable legacy input, create your own legacy input events at low frequency. Another one in the category of “silly ideas that probably could work”, but there are a lot of legacy messages and this would be another support nightmare.

This was my first thought when reading the article. Frankly it would be a pain, but this seems like the sort of thing that someone only has to do one time.


I would think you only have to generate enough legacy mouse messages that your application works. I'm sure I'm probably missing some catch here but I know plenty of applications that simulate input.

(Calling them "legacy" messages seems weird -- this is the normal current way that input messages work for regular applications).


Even if you do generate legacy input events, it is worth noting that the kernel still knows that they're not real input events. It seems to keep track of the current event on a per-thread basis (maybe it's stuffed into the TIB or something, but it could also just not be visible, I didn't search too deeply.) The knock-on effect of this is that sometimes DefWindowProc won't do the expected thing if you feed it a synthetic event, which could lead to strange behavior. You can of course use CreateSyntheticPointerDevice[1], and try to emulate it like that, but obviously then it knows that the events are coming from a synthetic pointer device (which is also system-wide and not just per-application!) So it feels like any way you go, behavior will be slightly different than expected.

Will this matter? I'm not quite sure.

I think Microsoft's WebView2 component enables Mouse-in-Pointer mode using the undocumented syscall for doing it per-HWND, but apparently Raw Input is per-entire-process, so if you aren't generating "legacy" events, will a WebView2 component in your process function as expected? Maybe if you go the synthetic pointer route, but again, that's a pretty big can of worms in and of itself.

Windows input is indeed very tricky. Seems like there's only one way to figure out some of these answers. Perhaps I should add RawInput to my test application and find out what the interaction is.

[1]: https://learn.microsoft.com/en-us/windows/win32/api/winuser/...


> Use a separate process to provide raw input. This is a bit of a silly idea

I don't think this is silly at all. People are too timid about using multiple processes in general. It's OK, there's nothing wrong with it! We ought to have more libraries to help make multiprocess apps.


> This is not an empty phrase: we have observed performance losses of 10-15% in extremely CPU-limited cases just by calling XInputGetState for all controllers every frame while none are connected!

Is this why a game renders only a few frames per second and slows down in time when my wireless Xbox controller disconnects due to low battery? I always assumed it was some mechanism to make the gameplay “fair” if your controller had an intermittent disconnect.


I think the author has two issues in their approach.

1) You should be processing the whole message queue for all types of messages in one go. When using raw input, author is peeking for "not input" event types. To me, this seems suspicious for performance. With a raw input device registered, the message queue is going to be huge.

While I don't know the underlying data structure of the message queue, if we assume an array, then the author's code will do a while loop of O(N) lookups for "not input" messages and remove them.

The correct approach is to dispatch the whole queue and ignore messages of types that you don't care about in your message handler.

2) You SHOULD be using the legacy input events for games, not raw input. The OS input massaging (e.g. acceleration) is something that should remain consistent between applications. If there's acceleration, that's what users expect; you don't want the mouse to suddenly behave differently in your application. Your game's tuning values (sensitivity, accel) should apply relative to the OS values.


For shooter games, raw input is preferred. You want the same amount of mouse movement to move your camera the same amount every time.

In those type of games, you aren’t controlling a cursor, so there’s no “expected behavior”.


I'd argue that linear camera movement with mouse _is_ the expected behavior for FPS these days.


It's more complex than that. There's no right answer and therefore every answer is wrong :(

For more critical observers, you're right: raw input is preferred. However, most users (casuals) expect similar settings as their OS. It's probably best left up to the user to decide, but to default things to the OS legacy settings. A more advanced player will know to tune the settings while the less advanced just want to launch and go.


No, he's right. No FPS gamer expects the game camera speed to "match" mouse movement speed. Raw input -> mouse DPI * in-game sens = effective DPI. Everyone has their own preferred eDPI. This way you can change mouse or computer, or play different games, and always get consistent movement.


Everyone plays fps games. This includes people who wouldn't consider themselves "fps gamers". Users who care about mouse input will go into the settings to target the 'correct' settings for them.

For everyone else, there's the OS values.

We're not having the same argument, but I didn't clarify in my initial post properly: It's best to launch your app for the first time with the default OS settings. Let the "power users (fps gamers)" tune settings that enable raw input.


that's just going to result in the non-power-users having a crap experience and not knowing why. Mouse acceleration is not a good match for FPS at any skill level.


This sounds plausible in theory, but have studies actually been done? Could well be similar to golden wires for audiophiles.


I'm not aware of any studies on this. Nvidia did do a study on the effect of sensitivity on aiming [0], but I'm afraid it didn't cover acceleration. Imo it very well could matter, given how many top aimers and pro players have successfully used configurable mouse acceleration solutions like RawAccel, InterAccel, Povohat's, and Quake's built-in acceleration. I feel like aiming with acceleration has a steeper learning curve though, especially with steep acceleration curves without an offset, so perhaps I still wouldn't slap default Windows acceleration on newer players.

[0] https://research.nvidia.com/publication/2022-08_mouse-sensit...


Surely even in a shooter game you spend some amount of time in menus where you do have a cursor, right? Is this why I sometimes get completely 'wrong' cursor movement in games?


> You SHOULD be using the legacy input events for games, not raw input.

The author claims that this generates an overwhelming number of events for a high-performance mouse. Is this not the case?


Only when a raw input device is enabled. If I remember correctly, Windows will also send the raw input as legacy input messages. With no raw input device registered, the OG message queue is not bloated.


I'm gonna ask here because I've not found a solution anywhere else:

I loved the Middle-earth: Shadow of Mordor game but when I tried to play its sequel, Middle-earth: Shadow of War, I found the mouse input absolutely horrible.

Horrendous mouse lag, floaty camera control, and all the signs of a "mouse emulating gamepad" type situation.

Does anyone know if there's a way to fix this game? I'd love to play it on PC with a mouse but can't stomach it as-is.


Sometimes, enabling "raw mouse input" option in the game solves this completely. Ofc this depends on the game, and also not all games have this option. The ones that don't, you can try reducing your mouse DPI to 400 and polling rate to lowest; I think 125hz. Don't remember.

btw, I never played this particular game but lots of others have this issue.


Thank you; I will try.


The worst part of inputs is that my mouse wheel can only scroll in whole lines and the left-right wheel doesn’t work most places where the trackpad can scroll left and right - and the trackpad can scroll with much higher fidelity than whole lines.


It reads like he is assuming that missing messages from the mouse is not acceptable.

I would say think of it this way: what happens when the mouse moves between USB HID reports?

If the mouse moves between reports, do the later reports capture the delta between the last and current report, including the movement that happened in between?

Do the input messages, both those sent over USB by the mouse and those processed and presented by Windows, give relative or absolute positions?

The messages already represent a discrete sampling of a signal. You should be able to recreate the original information reasonably well without processing every single message.


A standard USB HID mouse reports position change relative to the previous report, so the reported position is the difference from the last polled one.


Can you disable and re-enable NOLEGACY on-the-fly? Maybe you can disable it when you calculate that the user has moved the mouse outside of the game window region.


apparently not:

> This would at least solve the problem for the vast majority of users, but it’s not possible, as far as I can tell. You seemingly cannot switch between legacy and raw input once you’ve enabled it. You might think RIDEV_REMOVE would help, but that completely removes all input that the device generates, including both legacy and raw input.


Unfortunately it seems that is not possible.


Does the solution look like rate limiting on the client side?

Then maybe you'll have some missed events between your game-logic tick and the frame rate.


One thing I'd like to see fixed in computer keyboard input in general is how Shift is handled for capitals.

I find it next to impossible to type "Now" really fast without having it come out as "NOw" much of the time. (Why I'm using this example is that it's the first word of the "Now is the time ..." test sentence).

The timing required is strict. The Shift key must be down before the letter key is down.

Keys activate strictly and immediately on the down event.

Note that virtual buttons on most contemporary GUIs, clicked by mouse, do not work this way. You must click and release the button for it to activate. If you click OK on some dialog, but change your mind while still holding down the mouse button, you can back out of it, by moving the cursor out of the button's area. You also cannot click down outside of the button, then move into the button and release. The button must receive both mouse down and mouse up, without the pointer leaving the area, in order to activate.

I'd like a mode in which the down events of non-modifier keys are deliberately delayed by some configurable number of milliseconds, so that when the user hits a modifier key at around the same time, and that key lands a little bit late, it is still taken into account.

It would also be nice to be able to experiment with a mode in which keystrokes are generated by their up events, not down, and the modifier is sampled at that time.


I can't find any viable alternative. The keyboard is much faster than those click-and-release interfaces. Keyboards also have key repeat: when you hold a character key down for a long time, you can press and release the Shift key and see the change in the line of characters being input. This is an extremely useful feature in games, graphic design software and other applications.

Generating keystrokes on the up event, besides being incompatible with repeating keys for long holds, would slow down typing significantly, as it requires tracking key press timing for a longer duration. I'm pretty sure this isn't only an effect of me being used to timing keypresses on the way down, but an unavoidable result of the duration of the action.

Waiting for the up event makes sense in a contemporary GUI when that UI is a sluggish, fit-to-nothing dirty touchscreen in a public kiosk. When you know an interface will yield more errors than intended input, it is only sensible to assume that any input is a mistake unless the user is making an effort to validate it.


Keyboard repeat is only useful in ANSI terminal games on Unix, and games on some old 8 bit home computers that didn't have up and down events (Apple II+, ...).

A game written for an IBM PC and everything after that can know exactly which keys are being held down and when they are released by intercepting the "scan codes" (or abstract keyboard events in a GUI event loop).

All that is missing is synthesizer-like velocity and pressure info. :)

> will slow down typing significantly

Only for people who have to look at the screen. :)

The hunt-and-peck beginners who look at their fingers are not affected, and neither are those who can look at something else or close their eyes.

A serious concern affecting even those people is that using the release event could reorder things, causing mistakes. Say someone types the sequence wh (involving two hands) such that they release the w later than the h.


> The timing required is strict.

Your keyboard has a serial interface to your computer. Events are generated serially. If you press one first it will register first.

> keystrokes are generated by their up events

We don't do this because then you can't have auto repeat.


> If you press one first it will register first.

While that is obviously so, there are two events: a press and a release event.

The thing I run into is the release timing: Shift being held over too long, into the start of the next character. Now comes out as NOw because the up event for Shift comes after the down event for o.

So I don't think my proposals are necessarily the right ones.

Perhaps this: when a shifted letter is being typed (the down event came in a shifted state), then the rule should apply that an up event should be received to generate the keystroke, and that up event should still be in the shifted state.

Refinement: when a shifted letter is being typed in a situation where the Shift was held over from a previous shifted letter, then apply the logic. Thus N is not delayed; it comes out on the downstroke. But if Shift is still held while o is typed, that is a held-over Shift, and so the rule applies; the state machine must see an up event for the o and then decides whether it is shifted. If Shift is released before the o, then o is issued on the down event as usual.

Another idea: when a shifted letter is being typed in a Shift-holdover state, then issue the keystroke for the letter either on the release of the Shift, or the release of the letter, whichever comes first. If Shift is released first, assume it was a holdover and don't capitalize the keystroke. If the letter is released first, issue the capitalized keystroke. In the situation where the intent is to type Now, this will hardly result in a noticeable delay for the o, since the Shift hold-over period is very brief.

> you can't have auto-repeat

Auto-repeat is an arbitrary behavior that kicks in due to a timer going off while a key is continuously held down.

It generates multiple fake keystrokes even though there aren't multiple down events.

So of course you can still have it generate multiple keystrokes even though there aren't multiple up events (including generating the first keystroke in spite of there not being any up event).


> prevent performance collapse by just not processing more than N message queue events per frame. N=5

Why is this an issue? Your mouse is sending raw input 80,000 times per second to a screen that can only display 120 pictures per second.

Kinda sounds like an arbitrary problem that people who believe 8k mouses are necessary would worry about.


> Your mouse is sending raw input 80,000 times per second to a screen that can only display 120 pictures per second.

8 thousand, not 80 thousand. Many screens can display much more than 120 frames per second: my monitor goes to 240 Hz, and there are 300 Hz displays available.

Rendering rate is very often not tied to game update rate: games can update their state at significantly higher rates than they display. A decent number of games have a "high-level" skill of things like "pre-shooting", where the winner of a fight is decided by figuring out who realistically did it first.

Assuming that you can poll at 125hz is wrong.

> Why is this an issue? ... Kinda sounds like an arbitrary problem that people who believe 8k mouses are necessary would worry about.

People have these devices and run our games. We (unfortunately) need to handle them, just like a website needs to handle a poorly behaved client. Also depending on _when_ you sample, (i.e. which 5 messages you handle), you can get incorrect information from the device.
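That's why, rather than handling an arbitrary N messages, the usual fix is to drain everything pending and coalesce the relative deltas into one motion for the frame. A toy sketch (a plain list stands in for the real Win32 message queue, which you'd drain with PeekMessage):

```python
def drain(queue):
    """Coalesce all pending relative mouse deltas into a single (dx, dy)
    for this frame. No information is lost, and the per-frame cost is
    bounded by queue length rather than a fixed message cap."""
    dx = dy = 0
    while queue:
        mdx, mdy = queue.pop(0)
        dx += mdx
        dy += mdy
    return dx, dy
```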


Also, they probably ended up where they were because, if I remember correctly, Windows will 'smash' mouse move events together for you in some cases. This will look like 'lost messages' (I recognized the peek loop, as I have done that same thing once). It does this for two reasons: one old and not needed as much any more, and one they found out the hard way. The first is memory: just scrubbing the mouse around would create a ton of memory objects that need to be cleaned up pretty much right away. The second is performance: that's a ton of little messages that need to be picked up, looked at, and acted on, which zorches your CPU.

The core of the issue is trying to wrap polling style of logic into an event based system (which windows is at its heart). There is no real pretty way to fix it.


I think it would probably be best to handle it like real-time audio; 8 kHz is not far off anyway. Just collect mouse sensor data at a fixed sampling rate into a ring buffer, and don't try to have a separate complex object for each sample.
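Something like this, say (toy Python with a fixed-capacity buffer of (dx, dy) pairs; in a real design this buffer would be shared with the kernel/driver side rather than living in the app):

```python
class SampleRing:
    """Fixed-size ring buffer for mouse samples collected at a fixed rate.
    Overwrites the oldest entry on overflow instead of allocating a
    per-event object, audio-style."""
    def __init__(self, capacity):
        self.buf = [(0, 0)] * capacity
        self.capacity = capacity
        self.head = 0                  # next write position
        self.count = 0
    def push(self, dx, dy):
        self.buf[self.head] = (dx, dy)
        self.head = (self.head + 1) % self.capacity
        self.count = min(self.count + 1, self.capacity)
    def drain(self):
        """Return buffered samples oldest-first and reset the buffer."""
        start = (self.head - self.count) % self.capacity
        out = [self.buf[(start + i) % self.capacity] for i in range(self.count)]
        self.count = 0
        return out
```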


That would take care of the memory issue. It would need a way to tell the kernel not to send WM_* messages about it, and a way to give the kernel a buffer to hold that data. However, you still have the overhead of processing the 'events', or of keeping up with the buffer.

It occurs to me that it may be possible to disable the WM_* messages. There is probably a callback hook location; set that to 0 and it would probably not send anything. But that is just a guess, and I would have to dig around in the docs to see if it is there. This indicates it is possible: https://stackoverflow.com/questions/57008742/how-to-discard-... Then it would be up to you to use one of the other raw APIs to find out what is going on: https://learn.microsoft.com/en-us/windows/win32/winmsg/mouse... https://learn.microsoft.com/en-us/windows/win32/winmsg/about...


> The core of the issue is trying to wrap polling style of logic into an event based system (which windows is at its heart).

Well said. The irony being that the event based system on windows is wrapping the polling of the USB device in the first place.


Yeah. I would bet the old Win 3.x/2.x code probably just hung out on the PS/2/serial bus interrupt waiting for junk to show up, then sent a message along when it happened. That behavior would have to stick around for MS to hit its backwards-compatibility goals. Either sampling or event spamming with a debounce buffer.


This is a distinction without a difference, IMHO. 8 thousand messages is 320KB. My magic estimator tells me this would take about 25 microseconds to process on a very-shitty-by-now i3 7100.


Simulation resolution needs to be higher than display resolution. Simulating physics at only 120Hz can lead to jank. This is similarly true in the spatial dimensions, where even old games would run the game simulation at more than 640x480 for fidelity.
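The standard way to decouple the two is a fixed-timestep loop: render at whatever rate the display manages, but consume simulation steps from an accumulator at a fixed higher rate. A sketch (the rates here are arbitrary examples):

```python
def step_frames(frame_times, sim_dt=0.25):
    """For each rendered frame (taking frame_times[i] seconds of wall
    time), run as many whole fixed-size simulation steps as the
    accumulator allows, carrying the remainder to the next frame."""
    acc = 0.0
    steps = []
    for dt in frame_times:
        acc += dt
        n = 0
        while acc >= sim_dt:
            acc -= sim_dt
            n += 1
        steps.append(n)                # sim updates run this frame
    return steps
```

A slow frame simply runs more simulation steps, so physics behaves identically at 60, 120, or 240 FPS.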


I've seen high mouse polling rates cause genuine performance problems in real games. It was a surprise to me too, but I ended up dropping my polling rate from 4000 to 500 to fix it.


I know this doesn’t address the technical questions asked or the concerns prompted, but if your mouse is causing your games to have performance issues…

It’s not a high-end mouse.

Yeah, yeah OK polling rates, etc. Whatever. But honestly, PC gamers have been mostly fine for over 20 years of dealing with mouse input. Some major engines didn’t even support raw input until after the mid-2000s.

You can put a V8 on a Corolla… but if the curb weight changes and the torque eats up your tires, well, what did you expect? It’s just idiot engineering.


Early USB keyboards and mice were inferior to their PS/2 counterparts and it took a good few years for those issues to be overcome. Keyboards with N-key rollover and mice with decent resolution have tangible benefits for many types of games over their bargain-bin counterparts. It doesn't take much money to get to good enough, but that doesn't mean everything out there is good enough.


You just don't know what you are talking about. You are essentially saying the horse carriage was great for transportation, so a car isn't an upgrade. PC gamers were fine with 60 Hz monitors; that doesn't mean 120 Hz isn't needed. High-end mice are essential for certain games, especially esports ones.


No they aren't.


Please, 120 Hz monitors and "high end" mice are the PC gaming equivalent of audiophile equipment. Provides no actual benefit other than the user believing in it.


The difference is not linear but I switch between 60 and 144 Hz monitors, as well as ordinary and high-resolution mice, on a regular basis, and the difference is definitely noticeable.

Neither of these things is "high-end" anymore, though. You can get decent high-res mice for $50 and 120 Hz monitors for $250.


https://youtu.be/OX31kZbAXsA?si=RHSlP3lhjuC8Katj

Now you may go back to bottom fragging.


>If you are already an expert on game input on Windows, please skip directly to our current solution and tell us how to do it better, because I hope fervently that it’s not the best!

This is a neat thing to see at the top of an article, very in line with the hacker spirit.


I don't write it explicitly, but this has been the goal pretty much every time I've put a bunch of work into writing up how I've done some complicated thing. It's half Cunningham's Law and half trying to convince the experts out there that putting in the effort of telling me the better way to do things won't be wasted.


meanwhile under wayland if you move your 8k mouse too much then the socket event buffer fills up and the compositor terminates the connection, killing the app


It's ok, it's only been in development for 16 years. Rome wasn't built in a day!


LMAO, Rome fell much faster ;)


<s>"1024x768 ought to be enough for anyone"</s>

Sorry, I thought you meant 8k like the resolution, not 8kHz mouse sampling rate. I thought about deleting my comment, but it made me laugh so maybe it will others, too


> A few days later after the build is pushed to beta testers, you get a bug report that the game window can no longer be moved around. And then you realize that you just disabled the system’s ability to move the window around, because that is done using legacy input.

I don't think I've seen any AAA games that allow me to run them in windowed mode with decent performance.

Valorant, for example, switches to windowed mode when I press Alt-Enter by mistake, but doesn't allow me to interact with the window much. It also locks up completely when Alt-Tabbing, so I don't know what's going on there.

Plus, I think the general expectation is to always run AAA games in full-screen mode. In such cases, disabling legacy messages is a viable approach.


That's odd, because in my experience almost all games run best in "Borderless windowed" mode.

You get full-screen performance with the ability to easily alt-tab without mode changes.


I think it's only been with new DX12 presentation modes [1] that games can use "Borderless windowed" without input lag. Before that or without updated support, you would always get a noticeable amount of input lag when not using exclusive fullscreen. I played CSGO competitively for a while, and it was definitely enough to be annoying.

[1] https://learn.microsoft.com/en-us/windows/win32/direct3ddxgi...


Flip model swap chains were already available for D3D11 since DXGI 1.2 (Windows 8) and also as an extension to D3D9 (but not 10/11) since Win 7.


This isn't my area of expertise, but isn't DX12 a decade old at this point?


Not needing to care about win7/8 is relatively recent (a few years now, but a lot less than ten), and fully supporting both dx11 and dx12 with no caveats was sufficiently difficult that most games didn't really take full advantage of dx12 until they could drop dx11.


Flip model swap chains were already available since DXGI 1.2 (Windows 8) and could be used with D3D11.

Strangely enough, Windows 7 also supported the flip model, but only as an extension to D3D9, not 10/11.


>That's odd, because in my experience almost all games run best in "Borderless windowed" mode.

I think the difference is between telling the game to run that way in its settings versus abruptly forcing it by Alt-Tabbing. Games whose default 'full screen' is really 'borderless windowed' don't have problems when you Alt-Tab, but a lot of them act weird when their 'full screen' actually is a true exclusive full-screen mode and you force them out of it.


I still don't get how people play Valorant with the kernel-level anti-cheat


I personally don't mind having it installed, just because it makes a huge difference.

I remember playing CS:GO (and later, CS2) and it was super common to see cheaters in almost every other match.

In contrast, I haven't seen a single cheater yet in a match of Valorant.

You can see the difference yourself by checking the UnknownCheats forums for CS2 vs Valorant: the CS2 subforum is full of new hacks every day (most of which seem to be slight modifications of existing ones) while the Valorant one is completely dead.


The sad reality is that people simply do not give a damn/are so weak willed that they will buy the game regardless. I wish it weren't the case, but unfortunately it is.


I'd rather play without cheaters than care about running a kernel-level anticheat.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: