Basically, VR as a platform requires a completely different input system if you want to make anything but the most basic game. Game engines are built around digital (buttons) and analog (sticks, triggers) inputs, but none of them handle positional and rotational (6DoF) inputs properly out of the box.
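A rough sketch of the difference (these types are hypothetical, not any engine's actual API): a flat game consumes a handful of scalars per frame, while VR has to consume continuously moving 6DoF poses for each hand plus the head.

```python
from dataclasses import dataclass

@dataclass
class ButtonInput:
    pressed: bool        # digital: on/off

@dataclass
class AxisInput:
    value: float         # analog: -1.0 .. 1.0

@dataclass
class PoseInput:
    position: tuple      # (x, y, z) in metres, updated every frame
    rotation: tuple      # orientation quaternion (x, y, z, w)

# One frame of input: the scalars are trivial to consume, but the
# poses are free-form positions the game logic must interpret itself.
frame_inputs = {
    "jump": ButtonInput(pressed=False),
    "move_x": AxisInput(value=0.3),
    "left_hand": PoseInput(position=(0.2, 1.1, -0.4),
                           rotation=(0.0, 0.0, 0.0, 1.0)),
}
```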

Yes, it's rather easy to get 3D hands tracked in-game, but VR demands far richer interaction affordances than a regular 2D game, and those are much trickier to handle.

A practical example: In the Half-Life games, a common trope/puzzle is finding a door (or some other blockage) that opens with one of those "submarine hatch" style hand cranks, but the crank has been misplaced.

In the 2D games, you just pick up the crank, walk to the door, and hold the interact button on the placed crank to spin it. But in the latest installment (Alyx, which is in VR), you have to actually grab it with your hand, carry it to the door, snap it into place, and spin it by hand.

So now you have to handle physics joints between the hand models and the crank, ensuring the hand visually stays attached, while also tracking distance so that if the player physically moves away while holding the crank, their virtual hand doesn't just stay there forever.
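A minimal sketch of that bookkeeping (not engine code; the break distance is a hypothetical tuning value): while the grab is active the hand model snaps to the crank, but if the tracked hand drifts too far, the grab breaks and the hand follows tracking again.

```python
import math

BREAK_DISTANCE = 0.35  # metres; hypothetical tuning value

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class CrankGrab:
    def __init__(self, crank_pos):
        self.crank_pos = crank_pos
        self.attached = False

    def update(self, tracked_hand_pos):
        """Return the position to render the hand model at this frame."""
        if not self.attached:
            return tracked_hand_pos
        if distance(tracked_hand_pos, self.crank_pos) > BREAK_DISTANCE:
            self.attached = False       # player walked away: release
            return tracked_hand_pos
        return self.crank_pos           # snap hand model onto the crank

grab = CrankGrab(crank_pos=(0.0, 1.2, 0.0))
grab.attached = True                    # player has grabbed the crank
```

A real engine would use a physics joint with a break force instead of a raw distance check, but the "don't leave the hand glued there forever" logic is the same.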

And since the player can still physically move their arms while the model is attached to the crank, you have to ensure other interactions that depend on tracked position do not engage, such as grabbing items from the backpack.
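Sketching that gating rule (names are illustrative): a proximity-triggered interaction like the over-the-shoulder backpack grab has to be suppressed whenever that hand is already occupied.

```python
class Hand:
    """Per-hand interaction state; suppresses proximity triggers while occupied."""
    def __init__(self):
        self.held_object = None     # e.g. the crank, once grabbed

    def try_backpack_grab(self, hand_near_shoulder: bool) -> bool:
        if self.held_object is not None:
            return False            # hand is occupied: ignore the trigger volume
        return hand_near_shoulder

free_hand = Hand()
busy_hand = Hand()
busy_hand.held_object = "crank"     # this hand is joined to the crank
```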

That is way more effort than just checking line of sight to a collision box while a key is down, and playing an animation. And writing such a system (even with the provided frameworks) is still a lot of effort.
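For contrast, the entire flat-game interaction described above fits in a few lines (sketch only; a real implementation would raycast against the collision box rather than just check range):

```python
import math

def can_use(player_pos, target_pos, key_down, max_range=2.0):
    """Flat-game 'use': key held + target in range -> play the animation."""
    dx = target_pos[0] - player_pos[0]
    dy = target_pos[1] - player_pos[1]
    dz = target_pos[2] - player_pos[2]
    in_range = math.sqrt(dx * dx + dy * dy + dz * dz) <= max_range
    return key_down and in_range
```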

That, and the level design has to change a lot between 2D and VR games, because players can do tricky things like crouching to peek under objects, and as a designer/developer you can never stop the camera from moving.



