
Would this teach me to develop for the Oculus Quest?

I have some locomotion ideas I want to try out, but I don't know where to start.




Yes - Unity supports the Quest (through Link mode). You can check out the XR Interaction Toolkit [1] for an easy way to get going with teleport locomotion.

[1] https://docs.unity3d.com/Packages/com.unity.xr.interaction.t...
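If you want to roll your own locomotion instead of using the toolkit's providers, a bare-bones smooth-move sketch against Unity's plain UnityEngine.XR polling API looks roughly like this (class and field names here are just placeholders; XRI's continuous move provider does the same thing with more polish):

    using UnityEngine;
    using UnityEngine.XR;

    public class SimpleMoveProvider : MonoBehaviour
    {
        public Transform rig;    // XR rig/origin root to move (assigned in the Inspector)
        public Transform head;   // the HMD camera, used for move direction
        public float speed = 2f; // metres per second

        void Update()
        {
            var left = InputDevices.GetDeviceAtXRNode(XRNode.LeftHand);
            if (left.TryGetFeatureValue(CommonUsages.primary2DAxis, out Vector2 axis))
            {
                // Move on the horizontal plane, relative to where the player looks.
                Vector3 forward = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;
                Vector3 right = Vector3.ProjectOnPlane(head.right, Vector3.up).normalized;
                rig.position += (forward * axis.y + right * axis.x) * speed * Time.deltaTime;
            }
        }
    }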


You don't need to use Oculus Link (PC); Unity can build and deploy straight to the Quest itself (Android). That said, it's a lot more convenient to iterate in Link mode when possible due to how slow the Android build process currently is.


That's correct, thanks for the clarification :)


Oculus and Unity released a course for VR development that will teach you the basics of developing for the Quest: https://learn.unity.com/course/oculus-vr


Most of the development you'd do for Oculus is 90% the same as what you'd do for any other game, so I'd say yes, but you'll need to supplement with tutorials specifically for Oculus/Unity.


What does the other 10% entail, when creating a VR game vs. a (first-person) 3D environment for PC? I'd never pondered this, and am curious exactly which elements are any different whatsoever. The only obvious thing that comes to mind is making the camera smooth and flexible in terms of variable degrees of leaning/bending/crouching.


You actually want the opposite: as little artificial camera motion as possible. The VR SDKs feed you the camera transform and settings against a reference point, so you don't need to do anything special except render a view from that. The organic platform (the player's own neck) comes with all the motion smoothing built in. If you watch VR footage you'll get a feeling for how wobbly people's heads actually are.
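To make that concrete, this is roughly all a tracked-pose driver does each frame (illustration only; Unity's TrackedPoseDriver component and the XR plugin handle this, plus the actual stereo rendering, for you):

    using UnityEngine;
    using UnityEngine.XR;

    // Attach to the camera object under the rig.
    public class ManualHeadPose : MonoBehaviour
    {
        void Update()
        {
            var hmd = InputDevices.GetDeviceAtXRNode(XRNode.Head);
            if (hmd.TryGetFeatureValue(CommonUsages.devicePosition, out Vector3 pos) &&
                hmd.TryGetFeatureValue(CommonUsages.deviceRotation, out Quaternion rot))
            {
                // Pose is relative to the rig's reference point; no smoothing,
                // no artificial motion, just render from wherever the head is.
                transform.localPosition = pos;
                transform.localRotation = rot;
            }
        }
    }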

In terms of level design, VR has a lot more fidelity of input, so the interaction design is richer and consequently there is more to set up. Game spaces tend to be less cluttered and have slightly distorted dimensions, since both clutter and scale are more noticeable in VR. Flat games tend to have more fixed sight lines: the player is standing, crouched, and maybe prone at most. In VR people will stick their heads everywhere.


Thanks for the insight.

> In VR people will stick their heads everywhere.

Particularly when players have any level of knowledge about what kinds of complexities or edge cases are likely involved (e.g. any software developer or QA person, even if not part of the gaming industry). Or... hell, maybe it's even worse with ignorant players who expect the VR environment to mimic real life so perfectly that they get frustrated and can't understand why certain actions aren't supported/working.

The first VR game I got to experience was one of the haunted house horror games, and you're damn right I bent down and tried to shove my head into an open cupboard, just to see if the collision detection stopped at the outer box of the model or whether my head would be allowed to enter the space. Then I repeatedly leaned/shoved my head against VR walls at various angles to see if I could get the camera to clip or bounce/reposition jarringly. Poor, poor developers who have to try and nail all that logic perfectly. It must be so rewarding to see the final result when everything works out well, though. :P


With this being the case, I'm curious why there don't seem to be more ports of existing games. It seems like most titles are either developed new or virtually rebuilt from the ground up. Is it because the gameplay mechanics are so different, or for other technical reasons (performance sucks if you just reuse your existing design, etc.)?


Basically, VR as a platform requires a completely different input system if you want to make anything but the most basic game. Game engines by themselves fit very well with digital (buttons) and analog (sticks, triggers) inputs, but none handle positional and rotational inputs properly.

Yes, it's rather easy to have the 3D hands tracked in-game, but VR interactions need far more affordances than a regular 2D game's, and that is way trickier to handle.
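A rough sketch of what that per-hand polling looks like with Unity's plain UnityEngine.XR API (names are placeholders; toolkits like XRI wrap this up for you):

    using UnityEngine;
    using UnityEngine.XR;

    public class HandGrabPoll : MonoBehaviour   // sits on a tracked hand object
    {
        public float grabRadius = 0.08f;        // rough palm radius in metres
        public LayerMask grabbableMask;         // layers holding grabbable props

        void Update()
        {
            var hand = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);
            if (hand.TryGetFeatureValue(CommonUsages.gripButton, out bool gripped) && gripped)
            {
                // Positional input: what is the hand physically overlapping right now?
                Collider[] hits = Physics.OverlapSphere(transform.position, grabRadius, grabbableMask);
                // ...pick the closest hit and attach it (see the joint sketch below)
            }
        }
    }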

A practical example: In the Half-Life games, a common trope/puzzle is finding a door (or some other blockage) with one of those "submarine hatch" style hand cranks to open it, except the crank has been misplaced.

In the 2D games, you just need to pick up the crank, go to the door, and hold the interact button on the placed crank to spin it. But in the latest installment (Alyx, which is in VR), you have to actually grab it with your hand, carry it to the door, snap it into place and spin it.

So now you technically have to handle physics joints between the hand models and the crank, ensuring it visually stays attached, but also track the position so that if the player moves away while holding the crank, their hand doesn't just stay there forever.

And since the player is still physically allowed to move their arms with the model attached to the crank, you have to ensure other interactions that depend on tracked position do not engage, such as grabbing items from the backpack.
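Even a stripped-down sketch of that attach/detach logic ends up looking something like this (Unity-flavoured; all names are placeholders):

    using UnityEngine;

    [RequireComponent(typeof(Rigidbody))]    // joints need a Rigidbody on the hand
    public class CrankGrab : MonoBehaviour   // sits on the hand object
    {
        public float breakDistance = 0.35f;  // hand-to-crank gap that forces a release

        FixedJoint joint;
        Rigidbody held;

        public void Grab(Rigidbody target)
        {
            held = target;
            joint = gameObject.AddComponent<FixedJoint>();
            joint.connectedBody = target;    // physically couples hand and crank
        }

        void Update()
        {
            if (joint == null) return;
            // If the tracked hand drifts too far from the held object (player
            // walked off while the crank stayed socketed), break the attachment
            // instead of leaving a hand floating at the door forever.
            if (Vector3.Distance(transform.position, held.position) > breakDistance)
                Release();
        }

        public void Release()
        {
            Destroy(joint);
            joint = null;
            held = null;
        }
    }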

That is way more effort than just checking line of sight to a collision box while a key is down and playing an animation. And writing such a system (even with the provided frameworks) is still a lot of work.

That, and the level design has to change a lot between 2D and VR games, due to how a player can do tricky things like crouching to see under objects, and you as a designer/developer can never stop the camera from moving.
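Since you can't stop the camera, the usual countermeasure for heads poked through geometry is to fade the view out rather than fight the tracking. A minimal sketch, assuming some hypothetical full-view fade overlay:

    using UnityEngine;

    public class HeadClipFade : MonoBehaviour    // sits on the camera
    {
        public LayerMask wallMask;               // geometry heads shouldn't enter
        public float headRadius = 0.12f;         // rough HMD radius in metres
        public CanvasGroup blackout;             // hypothetical full-view fade overlay

        void Update()
        {
            bool inWall = Physics.CheckSphere(transform.position, headRadius, wallMask);
            float target = inWall ? 1f : 0f;
            blackout.alpha = Mathf.MoveTowards(blackout.alpha, target, 5f * Time.deltaTime);
        }
    }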


I'm in the same boat - I think the Quest has amazing potential as a platform, and would love to know more about how to get started with development for it.


Email me (in profile) if you’re thinking about doing it.



