It's upon reading these words that I realized that every computer I've owned was missing the ability to flip the bird to application windows as a means of force quitting. I'm not ashamed to admit that I'm anticipating that feature.
I would love to see the internationalization work done for that! Even better, an i18n package for identifying gestures! I’ll flip off Apple Mail, but please let me give Chrome an “up yours”.
By far the most interesting admission from a brief flip-through:
Consider how environmental factors impact your app. The characteristics of your physical environment can affect system load and thermals of the device. Consider the effect that ambient room temperature, the presence of other people, and the number and type of real-world objects can have on your app's algorithms.
The fact that they’re so prominently focusing on this seems like a soft indication that this device (at least in V1?) is not suited for exercise or sunny outdoor use. Which makes sense, but adds some finality to a “will people use these outside/in public” discussion I was having on here last week.
Brings up memories of trying to run the HoloLens 1 on job sites in the northeast during the summer. Standing in the shade, you could get an hour out of it. Step into direct sunlight, power's off in minutes until it can sit in the A/C.
I’m not an Apple person, but I do think the eye tracking to focus / pinch the air to activate combo is very natural and significantly different from what Meta, HTC, etc. are doing. I think this might be the turning point for AR/VR.
>I’m not an Apple person, but I do think the eye tracking to focus / pinch the air to activate combo is very natural and significantly different from what Meta, HTC, etc. are doing. I think this might be the turning point for AR/VR.
From getting started in 2015 with the DK2, I've held from the very beginning that controllers are a dead end for VR. I've given countless demos where people are just completely confused because it's so unnatural and awkward, and it adds to the overall cognitive load to the point of destroying presence. Sure, competent gamers can figure it out in a few minutes, but that's not the point. You don't interact with the world using triggers and buttons. You interact with the world by interacting with it. The controllers need to go away.
You don't interact with the real world with gaze and pinching. Do you think this gesture is also a dead end?
For me, controllers are bad simply because your hands are full. You're giving up interacting with the real world to interact virtually. Having to figure out a remote control isn't all that insurmountable for the majority of the population but no one wants to hold a controller all day.
The missing haptics and discrete button feedback are an issue with hand tracking, though.
Reads a bit like when people would complain about having to use a mouse back in the late 80s / early 90s.
There’s a ton of stuff you’ll never be able to do in VR without some kind of controller with buttons and joysticks. Making the controllers smaller, lighter, more ergonomic, and fairly standardized is the natural path forward. People who grow up with VR and games won’t have a problem.
The problem is the best apps attach all sorts of weird things to weird buttons.
I've given plenty of novices a try of a headset and it's a train wreck, with them accidentally pushing buttons every which way that completely destroy whatever setup you've done. My favorite is that devs like to make the "grip" buttons (on the side where your middle finger rests) shift the frame or scale of the world (e.g. OpenBrush does this). It's super natural once you know what you are doing, but give that to a novice and it's incredibly hard for them to learn not to grip the controllers. They cling on to them like crazy and the whole world is spinning around, and they immediately tell you they are disoriented and nauseated and hate VR.
This could be solved by having well-designed interface guidelines and having those guidelines enforced by the company building the headset. If anyone is going to do that, it's Apple.
I'm happy the Vision Pro uses gestures to control the interface, but I also agree with the parent poster that we'll need hardware controls someday. No matter how good the gesture recognizers get, waving your hands in space is never going to be as good as having physical controls and buttons attached to your hands.
> This could be solved by having well-designed interface guidelines and having those guidelines enforced by the company building the headset. If anyone is going to do that, it's Apple.
I've no doubt there will be tons of third- and possibly even first-party peripherals for it, but treating gesture interaction as the baseline to design against was the right call. It's the number one thing that will help adoption right now if done right. I think the callout made in the keynote about each new Apple product being based around a particular interaction paradigm (e.g. mouse/keyboard, clickwheel, then multitouch, now gesture) makes it seem obvious that this is the natural evolution of HCI.
> No matter how good the gesture recognizers get, waving your hands in space is never going to be as good as having physical controls and buttons attached to your hands.
I thought this as well 5 years ago, but pose detection and inverse kinematics have come lightyears since then thanks to ML. I'm fully confident that what Apple is shipping with Vision Pro will be equal to or better than the 6DOF control fidelity of the gen 1 Oculus constellation-based tracking. The only problem that remains is occlusion when reaching outside the FOV, for which it's hard to imagine a solution without controllers or an outside-in tracking setup.
Head/eye tracking as user input isn't really new in VR. In the past it has not felt very good. The pinch gesture in the HoloLens was very unreliable. It's not natural that your gaze affects the world around you, other than making eye contact with someone, perhaps. I think reaching out and gripping things is far more _natural_.
That said, the sensors and software in this headset might finally be up to the task. And despite being unnatural it might be easy enough to pick up.
I remember reading a first impressions post where they mentioned that starting off, they were instinctively reaching out and pinching the UI elements. They said it worked fine, because you're also looking at whatever you're pinching, and it only took a few minutes to adjust to keeping their hands in their lap.
But I found it promising that the Vision Pro can see your hands in front of you as well, which should really help ease the learning curve, and prevent first-time users from feeling frustration with the UI.
> I think reaching out and gripping things is far more _natural_.
Like when you grab a virtual object and it feels like you're grabbing absolutely nothing at all? This isn't natural and it's actually quite jarring in my experience.
It seems to work great for 2D control where it works similar to a mouse or a touchscreen. But for 3D control outside floating screens it is a bit unclear whether it can compete with a proper 3D controller.
Note: I'm speculating here, not saying Apple Vision Pro can do any of this.
If you could track hand movement 1:1 well enough that interacting with a 3D space was as natural as if it were actually in front of you, the problem would definitely be solved at that point. If Apple manages to do this at any point in the lifecycle of this product line, it'll be a significant breakthrough for this type of computing / experience.
But what replaces the buttons? You need precise true/false states. Pinching again? That would be just one button. What about aiming/shooting? Just locking and pinching? What about character movement without a stick?
This all seems like playing Quake on a touchscreen.
Yeah. On a touchscreen you at least get the tactile resistance of touching something rather than the air, although you don't feel whether you've hit the virtual button or not, or when you've successfully pressed it. With hand tracking you don't feel anything at all. Controllers have a large advantage here.
Maybe Apple deliberately decided against including optional controllers, to make it clear that they are aiming at AR, not VR gaming. But even in AR the lack of tactile feedback could be jarring.
Apple has a 3D keyboard that you can reach out and type on. Tactile feedback is replaced with visual feedback - the letters glow more as your finger gets near and they "pop" when you make contact.
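I'd guess the core of that proximity feedback is just a distance-to-brightness ramp. A minimal sketch of the idea (the names and constants here are my own invention, not Apple's API):

    import simd

    // Toy proximity feedback for one virtual key. Glow ramps up linearly
    // as the fingertip approaches, then snaps to full brightness (the
    // "pop") once contact is registered. Distances are in meters.
    func keyGlow(fingertip: SIMD3<Float>, keyCenter: SIMD3<Float>,
                 contactRadius: Float = 0.005, falloff: Float = 0.05) -> Float {
        let d = simd_distance(fingertip, keyCenter)
        if d <= contactRadius { return 1.0 }              // contact: full "pop"
        return max(0, 1 - (d - contactRadius) / falloff)  // approach: linear ramp
    }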
I can’t seem to get the simulator to boot up on my machine. It hung in the “verifying package” part, so I force quit the Xcode Beta, and couldn’t get that screen to pop back up, even after “uninstalling” the beta.
TL;DR: create a new visionOS app from the menu; it'll prompt to download the support files correctly.
Had a similar issue after doing the Xcode 15 beta 2 download with both iOS 17 and visionOS 1 simulator checked.
When creating a new visionOS app, it would tell me that I'm missing the device/simulator support and prompt me to install it.
After confirming this, it would show a download for a build ending in xxxxxxxg instead of the xxxxxxxf that was part of the original download, so I assume that part was updated in the background and the old reference is no longer correct.
The virtual environment reminds me of something described by u/patches765 (a user active on reddit regarding DND sessions). Basically, he was active in a Star Trek MUD for a while decades ago, and one of the administrators asked him to perform a task: take a (virtual) dog and do some scripting with it, so that he could assess his programming ability.
Granted, this is definitely not a new idea, but I'd imagine that in the future, if the metaverse does spring up, maybe we will see the same thing: hey John Doe, here is a widget, which is a small window in metaverse room 211, please write a script and make it interesting, you can do anything.
That definitely sounds very interesting. Man, since I have a MacBook Pro at work, I might actually try it out, at least in the simulator.
Yes. One's a summary, one's detail about the exact same subject for discussion - developer information for a just announced product that nobody can use for a few months.
Developing against a beta API for an unreleased platform is going to be outrageously frustrating on a brand new MacBook; I can't imagine how infuriating it would be trying to do the same on a Hackintosh.
If your goal is to hack around and try to get anything at all to work, it might be fun; if your goal is to actually build a working visionOS app, I'd probably suck it up and spend the $1000 on a used MacBook.
The main problem I see with developing for visionOS is the lack of reach. I expect that after the initial novelty wears off, the user base will shrink further.
The closed development ecosystem of Apple is alluring because it offers access to its iOS devices.
But what makes it worth the effort to learn a new SDK, or even buy a macOS device to develop for it, for just a few thousand devices?
Apple customers are one of the few consumer segments that don't mind paying for software. Apple captures a huge chunk of app revenue, massively outsized compared to their user base and far more than Android. One of the big reasons I don't try Android anymore is the last time I did so many of the apps were adware with no option to buy an ad-free version - I guess they don't bother because Android customers statistically won't pay.
>But what makes it worth the effort to learn a new SDK, or even buy a macOS device to develop for it, for just a few thousand devices?
This is a new official platform from Apple, not some random Google experiment. They don't do something like that willy nilly. This will be supported for decades to come now, and be the basis for who knows how many products. It's about getting in at the ground floor, a la iOS circa 2007.
The iPhone entered a market that was already huge and relatively mature, to further expand it in value.
AR/VR is far from that. I hope Apple's entry boosts it a lot further, but comparing it to the iPhone feels like a setup for disappointment, even if it succeeds reasonably well, like the Watch has, for instance.
Apple TV was supposed to be a new thing, with people developing entertainment apps and games that fit with a TV, not for general computing.
The watch added to the iPhone and had unique apps to take advantage of the sensors.
Neither of these was tied heavily to a MacBook or other general-purpose device. Unless you mean computing as in anything digital that runs code, which I wouldn't really call a platform.
visionOS feels closer to an Apple TV: new hardware with a different paradigm that needs a unique API and new apps built for it.
That said, it does seem more like a supplemental device to your MacBook + iPhone. I doubt many are going to buy it with the intention of it being their main means of computing.
Maybe define what "main platform of computing" and "core product" mean, if you could?
I hope you're right. I've had a US top-10 grossing game (Bowling) on Apple TV for years, and the income isn't enough to pay one salary. We thought it could be a great gaming platform for people without a PS/Xbox, but unfortunately this did not happen. Apple TV is probably great for Netflix etc. though.
Is the Vision Pro actually that expensive at $3500?
A high-end LG OLED G3 77-inch TV @ $4,300
A Dell XPS laptop @ $4,000
A 2005 Honda Civic with 225,000 miles @ $3800
NBA season tickets mid-“club” level seats @ $4300 (does not include parking, dinner, transportation)
A Barcalounger power reclining loveseat @ $3120
An 11 x 18.5 ft outdoor storage shed made of plastic with a genuine steel frame @ $5,274
You might need that Honda if you are barely scraping by, but if you are like many HN readers, then you know damn well that most of what you really needed was taken care of a long time ago.
Disposable income is disposable how you see fit.
I see fit to buy a Vision Pro after a short test-drive. And I expect to be amazed.
About half of the items you posted are not really comparable for a lot of reasons:
A high-end LG OLED G3 77-inch TV @ $4,300: Buy one of them to watch shows with your entire family. Want to do that with the Vision? You're either taking turns or forking out $14k for 4 of them.
A Dell XPS laptop @ $4,000: Can be used for many more things than the vision pro seems to be capable of. This is a work device, if you bought the vision pro could you _get rid of_ your work PC or would you need to have it anyway?
A 2005 Honda Civic with 225,000 miles @ $3800: I challenge you to trade in your car for a VR headset. Go on, tell us how much better your life is afterwards.
NBA season tickets mid-“club” level seats @ $4300 (does not include parking, dinner, transportation)
A Barcalounger power reclining loveseat @ $3120
These two are actually comparable in that they're ultimately unnecessary luxury items.
An 11 x 18.5 ft outdoor storage shed made out of plastic and genuine steel frame @ $5274:
I'll just keep my lawnmower inside next to the shelf I keep my Vision Pro on.
These comparisons are ridiculous. I notice you didn't bother mentioning how cheap the Vision Pro is next to a house?
It won't be a casual device that you want to put on/off to check the news while saving battery, so it's not a contender for the smartphone. Two hours is not enough for daily work, so it won't replace the computer. Meaning this is a routine device that you use once or twice per day then forget about it.
Can’t you plug it in? My Windows laptop gives me about 45 mins tops. My Quest is never charged when I go to use it. It only gets charged while I use it.
> But what makes it worth the effort to learn a new SDK, or even buy a macOS device to develop for it, for just a few thousand devices?
It is the same type of bet as deciding to start a tire company or a car mechanic shop shortly after the first cars ever appeared on the roads. You don't bet on the existing base; you bet on that base expanding extremely wildly, and you want to secure your position early.
Also, I fully expect to be positively surprised by the sales of the Vision Pro. Keep in mind, we live in a world where one of the fairly popular flagship Android phones (Galaxy Fold) sells retail for close to $2k. And it is still fundamentally a smartphone; it just folds and has a couple of nice features because of that which others don't. But none of it is practically groundbreaking. Honestly, I would be even more surprised if Apple didn't manage to bring out a sub-$2k version of their AR device in the following few years.
>It is the same type of bet as deciding to start a tire company or a car mechanic shop shortly after the first cars ever appeared on the roads. You don't bet on the existing base; you bet on that base expanding extremely wildly, and you want to secure your position early.
A lot of people (and billions of VC) made that bet in 2016 with the first wave of headsets. It didn't pan out, and they got burned (myself included).
Here's hoping Apple is in this for the long haul, as they are basically the last hope for consumer VR reaching mainstream adoption.
Hardly the last hope. So far the only successful headset is the Oculus Quest, and it was made by Meta, not Apple. It had multiple new features: internal processor, inside-out tracking, camera passthrough, very low price. The Quest headsets have sold 20 million units. In terms of features, the Vision is basically an ultra-expensive premium Quest, combined with a heavier software focus on AR and finger tracking instead of controllers.
It is clear that Meta is trying to conquer the market from the low-price end, which seems to work, as there hasn't been an expensive headset (like the Valve Index) that sold anywhere near as many units as the Quest. Now Apple is doubling down on the high-price route. It will be interesting to see which price approach proves successful in the long run, or whether they both converge at some sort of compromise. I wouldn't say Apple's approach has an obvious advantage.
This is kind of my point though. Over a hundred billion dollars invested by Facebook to sell 20 million units over 5 years (all at or below hardware cost). That's still a tiny drop in the bucket compared to mainstream adoption in the sense of iPhones and iPads. You could add up every headset ever sold by any company since Oculus 1 released in 2016, and it would still be less than the number of iPhones sold last year alone, both in units and revenue.
We are still waiting to see whether it will happen for VR. Apple just announced the first true second gen headset, so hopefully that does the trick.
I hope I live to be old enough that things sold to millions can succeed with their own story, instead of being subjected to eternal comparison to the iPhone story
It launched my career, but it’s such a poor comparison point for success. The iPhone replaced existing phones, it wasn’t a de novo market, or even technology really.
I didn’t review your FB #s carefully, but my guess is they come from some sort of coarse calculation to guesstimate incremental employee cost due to VR.
>I hope I live to be old enough that things sold to millions can succeed with their own story, instead of being subjected to eternal comparison to the iPhone story
VR has absolutely "succeeded" in the sense that it's a neat toy to play games with that we didn't have before. But for it to be anything more than that, yes the iPhone analogy is apt. Otherwise it remains a niche PC gaming peripheral, and nothing more (i.e. where we've been stuck for the last 5 years).
Oculus Quest wasn't the first commercial HMD with inside-out tracking.
Windows Mixed Reality headsets came out in October 2017 with inside-out tracking. Oculus Quest came out in May 2019.
The Vive technically could do camera passthrough, but I never saw anything actually support it well, and it wasn't very high quality at all. The Vive Pro, which also released before the Quest, even had two cameras for stereoscopic passthrough.
For comparison to another gaming device, that's more than the Xbox Series X and Series S (which both came out around the same time) combined, and over half of the PS5 sales numbers. These are well-established products with loyal fanbases.
This is Apple though. It is very likely that this will be a successful product. How many windows mobile/Symbian devs were burned before the iPhone came out?
> Apple's marketing department telling us they'll be selling a lot is of little help, to be honest.
Since it requires manufacturing scale (these things aren't going to be hand-built by a blacksmith in Germany), I would bet it isn't merely marketing smoke & mirrors.
Lmao, and your estimate is gonna... hurt it? I think they sell at least 5 mil. 7 or less is a first-gen failure, but they will have a second gen that should fix the issues. AR is the future.
Apple TVs running tvOS don't present a meaningfully better product offering than either Amazon Fire Sticks or Google Chromecasts, both of which sell roughly an order of magnitude cheaper. Moreover, TV devices are neither positioned nor optimized for general computing.
The Vision Pro sells at an order of magnitude more than most AR/VR devices, and we don't know yet whether the product will be meaningfully better than what's on offer in 2024, when it's supposed to be released (a full year is a pretty long time in the computing field, IMHO).
VR devices are also not optimized for general computing (as a sign of it, the Vision Pro won't natively support Mac apps either).
This is a great question, and the answer will likely be subsidized by their development community. The smart move here is making the SDK available, hopefully hardware for development in the near future, and (most importantly) a marketplace available on day one for consumers.
We can never predict the future and can safely assume this will never have the reach of their phones, tablets, or computer market. But it is tempting to play the App Store lottery before the clones roll in!
And, in very limited geographic areas, you can use Apple's devices for testing:
> Next month, Apple will open developer labs in Cupertino, London, Munich, Shanghai, Singapore, and Tokyo to provide developers with hands-on experience to test their apps on Apple Vision Pro hardware and get support from Apple engineers. Development teams will also be able to apply for developer kits to help them quickly build, iterate, and test right on Apple Vision Pro.
Wonder what those dev kits cost and how available they'll be. Apple probably doesn't want anyone getting their hands on the early hardware/software and using it for reviews before it's released.
Presumably you return it after release and get the money rolled over toward purchasing a production unit.
I remember the company I was at having to shell out $20k for the PSP emulator back when it was a thing. It was a dedicated computer with a tethered cable connecting to a PSP where the optical drive would normally be located. After that experience, I just assumed that was SOP for companies with pre-release units like that.
There's a long vetting process which all studios must undergo in order to get approval to develop for all major consoles. Notice that I use the word "studios," and this is simply because if you want to self-publish a game you must form an LLC or you won't get anywhere in the application process. (This is where publishers come in really handy, as they already have contacts inside the big 3 and can speed up the entire process.)
If your studio and game project are approved, you will receive the game engine licenses of your choice (Unity, Unreal, Game Maker, and in some cases special builds which differ from the ones everyone can download online) and will have the opportunity to take a dev kit out on loan. The kits always have to go back at some point, and you have to sign a bunch of NDAs, basically agreeing that you will never post pics online or allow anyone outside of your studio access to them.
With Microsoft you can actually use any retail Xbox as a dev kit:
https://learn.microsoft.com/en-us/windows/uwp/xbox-apps/devk...
One of Sony's weird requirements is that you must have a static IP address in order to access their dev portal. And I believe that Nintendo actually charges money for their dev kits, but it's been a while since I checked so I don't know if that's still the case.
If the simulator is only as good as the iOS simulators, then you must test your app on a device or you will be flooded with bug reports on launch day. As noted elsewhere, Apple is setting up labs in some cities; everyone else needs to buy the device or cross their fingers and hope for the best.
BTW, Android is not as bad, because there are emulators, not just simulators (that is my understanding of the disparity).
A virtual CRT simulator, emulating the various attributes of CRT displays: shadow masks/aperture grilles, phosphor glow, bloom, geometric distortions. EmuVR does some of this already, but it'd be a fun novelty to have such a thing in an AR headset with the Vision Pro's resolution and other capabilities.
The lack of a 60/120Hz mode will cause inevitable stutter, but that's going to be the case with any 60Hz content (24p will be fine due to the 96Hz mode: 96 is an exact multiple of 24, so each film frame is held for four refreshes, while 60 doesn't divide evenly into either 90 or 96).
The resolution of the Vision is probably not high enough for that. The Micro-OLED screens themselves are something like 4k, but that's stretched over the entire field of view, while the CRT screen would cover just a small fraction of it. If those ~4k horizontal pixels span a roughly 100-degree view, that's only on the order of 40 pixels per degree, so a virtual CRT occupying a tenth of your view gets just a few hundred pixels across.
Depends on how close you are, the resolution of the CRT, etc. There's also the temporal accumulation from the micro-movements the head makes, so you're never sampling a fixed grid.
It's already quite effective in EmuVR even on lower end HMDs, get close enough to the virtual TVs and you can see the individual RGB phosphor dots. Moire is pretty bad at a distance though.
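The core of an aperture-grille effect is surprisingly small, too. A toy sketch (my own illustration, not EmuVR's code; a real CRT shader would also layer in scanlines, bloom, and beam falloff):

    import simd

    // Toy aperture-grille mask: attenuate each output pixel's color
    // depending on which vertical phosphor stripe it lands on.
    func grilleMask(x: Int) -> SIMD3<Float> {
        switch x % 3 {
        case 0:  return SIMD3(1.0, 0.2, 0.2) // red stripe
        case 1:  return SIMD3(0.2, 1.0, 0.2) // green stripe
        default: return SIMD3(0.2, 0.2, 1.0) // blue stripe
        }
    }

    // Componentwise multiply: the stripe color filters the source pixel.
    func shade(pixel: SIMD3<Float>, x: Int) -> SIMD3<Float> {
        pixel * grilleMask(x: x)
    }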
Nice. It’s like a lame version of The Matrix. Instead of going to a fantastic and amazing virtual world I get to ~virtually~ be in the same room I’m already in but with a crappy CRT monitor. For the sake of nostalgia, I guess?
Nostalgia, sure, but also preservation. CRTs have visual characteristics that flat panel monitors can't fully emulate, which matters because a lot of media was created for display on them and only looks correct when viewed that way, mostly old video games. Current AR headsets can't fully replicate it either; that'd require 1000Hz+ high-contrast screens so the beam scanout could be emulated. But it'd do a better job than a flat panel, particularly with the CRT's more physical attributes like the thick curved glass and the layers underneath it.
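To make the scanout point concrete: with enough subframes per emulated frame, you'd light only a narrow horizontal band in each one, mimicking the beam. A toy sketch of the band math (my own illustration):

    // With N subframes per emulated frame, subframe i lights only the rows
    // in one horizontal band; everything else stays black, mimicking the
    // dark phosphor behind the beam. (Toy version: ignores decay trails
    // and any remainder rows when N doesn't divide the height evenly.)
    func litRows(subframe: Int, subframesPerFrame: Int,
                 screenHeight: Int) -> Range<Int> {
        let bandHeight = screenHeight / subframesPerFrame
        let top = (subframe % subframesPerFrame) * bandHeight
        return top ..< min(top + bandHeight, screenHeight)
    }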
Or, I dunno, they could single-handedly port Half Life Alyx. But I feel like the virtual CRT thing is more manageable.
CRT monitors had exceptional "motion clarity" because they worked more like a fast strobe light than like current flat screens, which display a sequence of frozen frames for a fraction of a second each. The former is apparently better at tricking our eyes into perceiving fluid motion. OLED monitors could theoretically emulate that, to some degree, with black frame insertion. But manufacturers hate to implement it, perhaps because it causes wear. It also makes the screen dimmer (at a 25% duty cycle, for instance, the panel would have to drive four times the brightness to appear equally bright).
CRTs also had no native resolution. They would instead change resolution physically, which made fuzzy interpolation unnecessary.
For a long time, they also had much higher refresh rates and better contrast than LCDs, though this has been matched in recent times.
I had a 21" Sony Trinitron for a computer monitor through the 2000s. 100hz at 1600x1200, it bested everything I bought after it for years. Until something in it popped, anyway.
CRTs didn't have a native resolution, but colour ones did have either masks or grilles with individual RGB subpixels, just like a flat panel. There was still interpolation in a sense; it was just done physically instead of by transforming a framebuffer. If you made a CRT electron gun accurate enough that it could reliably address those subpixels, it would then have a "native" resolution.
e: Also, a heads up that VR headsets strobe by default, it's part of what makes them work at all. Even LCD models, where they strobe the backlight.
Interesting, I wonder why manufacturers don't offer LCD backlight strobing as an option in TVs or monitors. It wouldn't cause wear like in organic displays. Perhaps the backlight doesn't get bright enough for that? I assume it should at least be possible with HDR TVs, as they have quite powerful backlights.
I was thinking of doing something simple and silly, like a model of our solar system, with the idea that you can get close-ups of everything based on recent high-res images (ok, maybe that part is not that simple).
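A minimal starting point for that could be just a RealityView dropping a few spheres into the room. A sketch, assuming the new RealityView API from the WWDC sessions (sizes and distances are made up and wildly compressed):

    import SwiftUI
    import RealityKit

    struct SolarSystemView: View {
        // (name, sphere radius, x offset) in meters. Not remotely to scale;
        // real proportions would put the outer planets outside the building.
        let planets: [(String, Float, Float)] = [
            ("Mercury", 0.01,  0.2),
            ("Earth",   0.025, 0.5),
            ("Jupiter", 0.09,  1.0),
        ]

        var body: some View {
            RealityView { content in
                for (_, radius, offset) in planets {
                    let sphere = ModelEntity(
                        mesh: .generateSphere(radius: radius),
                        materials: [SimpleMaterial(color: .gray, isMetallic: false)])
                    // Float each sphere in front of the user.
                    sphere.position = [offset, 1.2, -1.5]
                    content.add(sphere)
                }
            }
        }
    }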
You need actual physical corrective lenses for the vision correction (which, of course, will be available in first- and third-party addon form), because your eyes are actually focused well past the physical location of the tiny screens in the headset. It's one of the non-intuitive things about the optics involved.
This, but with movement. Also do pose detection on others or a mirror and see the organs move in a scientifically accurate way.
---------------------
Bring real-world objects into VR but render them in the style of the scene which already exists. (99% sure not possible with this API). Basically, if I'm playing a flight simulator/etc, I want to be able to see my water bottle, keyboard, mouse, cell phone, etc so that I can easily grab them. But I don't want the immersion to be broken.
---------------------
Track conversations I have with other people and use an LLM to remind me about the relevant conversation history we've had.
Years back I did a "startup training" weekend. My team decided to use my idea of gamifying museums and historical sites. The team was fun and had some good puns!
I'd originally thought it'd be fun to play a version of the "floor is lava" game at the ruined temple where they partly filmed Temple of Doom. I still wish I could do that someday! A sort of historical scavenger game would be a really fun way to engage with the history of the place.
Silly little AR models, maybe? The thing I've been trying to cobble together at the moment is a flag-waving simulator like https://krikienoid.github.io/flagwaver
So: a custom image texture on a model with some animation, so you can check out how a flag design would look when flying.
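The classic cheap trick for the wave itself is a traveling sine displacement that's pinned at the hoist. A sketch of the idea (all constants are arbitrary):

    import Foundation
    import simd

    // Displace a flag-mesh vertex along z with a sine wave traveling in +x.
    // Amplitude scales with x so the edge attached to the pole stays fixed.
    func waved(_ v: SIMD3<Float>, time: Float,
               flagWidth: Float = 1.0) -> SIMD3<Float> {
        let phase = v.x * 8.0 - time * 4.0          // wave travels along +x
        let amplitude = 0.05 * (v.x / flagWidth)    // zero at the hoist
        return SIMD3(v.x, v.y, v.z + amplitude * sin(phase))
    }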
I would love a bit of AR object recognition with overlays in a target language for language learning. Via the mics you could overlay a translation of whatever was just said as well. A little pinch on an object could tell you what the object is.
Another similar idea with object recognition would be an AR scavenger hunt.
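For either idea, the recognition half might look something like this Vision framework sketch. Big caveats: it's unclear how much raw camera access visionOS will grant third-party apps, and the translation lookup is left as a placeholder here:

    import Vision

    // Classify the most likely object in a camera frame, then hand the
    // label to whatever translation source you use ("mug" -> "la taza").
    func identifyObject(in pixelBuffer: CVPixelBuffer,
                        completion: @escaping (String?) -> Void) {
        let request = VNClassifyImageRequest { request, _ in
            let best = (request.results as? [VNClassificationObservation])?
                .first { $0.confidence > 0.5 }
            completion(best?.identifier)
        }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
        try? handler.perform([request])
    }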
> Dwell Control will provide on-screen options for Tap, Scroll, Long Press, and Drag, allowing for users to interact with the interface without using their hands.
Default gestures are irrelevant. Apple always puts a huge effort into accessibility for their platforms. You are condemning the accessibility based on a tiny preview. No need to be concerned. They will have plenty of accessible options for using the device.
They are typically ahead of the curve when it comes to accessibility features on iOS. It's entirely possible they will have Accessibility options offering alternatives to those.