I don't understand why this entire field seems so focused on gaming, and not productivity. The single most interesting thing to me was the "displays on demand".
There are two things that would absolutely revolutionize how I work that I would pay big money for:
1) Any sort of glasses that would allow me to view virtual displays in high definition. I don't care if I have to turn my head to see more than one display, and I don't care if they are VR instead of AR. They should have high enough resolution to work with busy Excel spreadsheets and to see enough detail on a page to do web development.
2) Some sort of glove where I could move my fingers to type. It doesn't need to represent an actual keyboard. I could learn whatever new gestures are required for each character.
Those two innovations would mean freedom for me. You could effectively work anywhere in any position: lying in a hammock, on a crowded train, at night in bed when inspiration hits without waking your significant other. It would have to be AR to use while running or working out :)
If these can be made with enough quality to enable productivity equal to a laptop, the creators will have an addressable market of about 3 billion people.
There are two reasons why the field is focused on gaming and not productivity: fidelity and familiarity. It's hard to build high-enough fidelity into the hardware at a reasonable price point. And even if you did make VR goggles that were great for giant spreadsheets at $1000, it would be a weird enough idea that it would have to be massively better than $1000 worth of monitors to get people interested.
In gaming, on the other hand, the unfamiliarity of the tech is not a risk but an asset, making the experience more novel. The fidelity doesn't have to be sufficient to overtake an existing process, just to support a fun experience. That's a more scalable business with lower technological barriers to entry, so that's where businesses are focused.
Same reason we had Pong before PCs. You might also see major tech advances in products for specific targets, like military use, medical, or advanced manufacturing, that then trickle down to mainstream productivity applications. But until they get the tech good and cheap enough, expect progressively better games!
> It's hard to build high-enough fidelity into the hardware at a reasonable price point. And even if you did make VR goggles that were great for giant spreadsheets at $1000, it would be a weird enough idea that it would have to be massively better than $1000 worth of monitors to get people interested.
I doubt that. If you make a VR/AR/?R version of a Bloomberg terminal or Factset then finance firms will literally roll up wads of cash and throw them at you. It doesn't have to be massively better. It just can't be worse which is the real hurdle. Granted I'm not an expert on the subject but every VR/AR app I've seen that claims to be the "$APP Killer" simply sucks.
This is the same reason nobody has dethroned Excel in finance. There are plenty of things that solve specific sub-problems, but the generic "I have tabs of data and I want to slice and dice it" case always goes back to an analyst exporting data to an Excel file.
I've never worked at a financial firm but I have a hard time imagining the uptight social culture of finance being one that embraces employees being spotted wearing goofy headsets and waving their hands around. The boost in productivity from any such app would seemingly be negligible, especially when you consider that the popularity of the Bloomberg terminal lies in its chat function.
* Impromptu breath holding contests complete with prop bets and name chanting
* Chairs shoved, phones slammed, mice thrown, (full) drinks thrown, (Other people's) paper piles toppled, soooo much profanity.
* One time a trader yelled FKING FK very loudly as our EVP of Trading was giving a tour of the office to a business journalist. To which the EVP turned to the journalist and very calmly said, "With us, you always know where you stand."
I worked on two different (energy) trading floors. One was a fairly large venture (30 or so trading desks + mid/back office + dev/ops/DC teams + management). The other was one of the huge banks. Both were in Stamford CT.
I spent well over 5 years there and never once did I see those sorts of shenanigans.
There was some stuff: the guy that liked to throw a football across the floor, occasionally smashing a monitor; the guys that bet $1000+ each over who could lose the highest % of body weight in 2 months (or was it one month?); and the occasional swearing, but nothing near the "like a sailor" level I hear about.
Pretty much everyone I worked with was highly professional. They were profit motivated (for sure), but even there I personally witnessed people making fair deals where they could have squeezed someone dry and unwinding deals at a loss to keep a good counterparty relationship.
I'm so glad I never worked with the handset smashing, drug abusing, loudly swearing, king of the world, type-A assholes that I hear about.
Been there, done that. Was the big bank one the one with the keys in its logo? If so, you were using the desktop OS build I designed :)
> I'm so glad I never worked with the handset smashing, drug abusing, loudly swearing, king of the world, type-A assholes that I hear about.
I think these days that's largely confined to the hedge-fund traders. And those guys are EXACTLY the sort of people who would love AR/VR-based trading UIs - anything to make more money is always welcome in their world.
I have a VR system and I wear it knowing that it makes me look like a total doofus. You also have zero ability to discern what's going on around you, e.g. to easily socialize with someone walking by. The productivity advantage a VR application offers would have to be huge for it to be worth doing things in VR, and the fact is that the fidelity just isn't there yet. I would also argue that even with the fidelity, the use cases seem specious. But I also thought that the "Minority Report" scenes (which didn't involve a siloed-VR-helmet experience), while cool-looking, seemed like an extremely inefficient way to solve crimes.
Frat-like behavior on trading floors is for members of the rich kids club or those willing to go through the initiation rituals. The lower-level staffers who support the traders are not invited to participate in such japes and are expected to look and act professional, not to be goofy individualists who have just as much fun at work as the traders.
>>I've never worked at a financial firm but I have a hard time imagining the uptight social culture of finance being one that embraces employees being spotted wearing goofy headsets and waving their hands around.
Put some time in at a long/short or quant fund. This would be par for the course. I've been there.
One important thing about VR goggles is that they are literally for your eyes only. You can be shoulder to shoulder with other people, and be sure they can't eavesdrop.
This can be quite important in certain circumstances.
lol @ uptight social culture of finance. Most trading desks are just ex-frat boys with quant skills or a connected uncle. They're all in on tech gadgets; they all have the latest smartphone / wearables / TV / smart home.
The regulatory compliance value of Bloomberg lies in its chat function. Seriously, type "sold for 10 million" in there and it's a deal, reported to the SEC and sucked into end-of-day reporting.
Some hedge funds do not have any uptight culture and they will most likely readily embrace the goofy glasses and gloves. Way better than a cluster of 6 monitors.
You might be able to collect a few wads of cash from finance firms who want a cooler-looking Bloomberg terminal, but I doubt this strategy could actually displace Bloomberg unless there's a technological edge that (1) delivers real, ongoing value and (2) can't be matched by Bloomberg within 1-2 years.
Problem #1 is what you mentioned with VR/AR $APP Killers - they mostly suck. They create value via novelty, which wears off. Bloomberg delivers value by addressing a really consistent set of needs in a solution highly familiar to many users across a whole industry, to the point it's hard not to buy it if you're in finance. It's like the industry's craigslist - lousy in many ways but good enough to stay entrenched (though Bloomberg's entrenchment is driven more through familiarity and brand rather than network effect, which makes it somewhat easier to compete against).
Problem #2 (being matched by Bloomberg) comes from Bloomberg having such domination of its market, and such deep pockets, that unless you are doing things they can't do they will just learn from and then kill you should you attain sufficient success. Although it might be easier to buy you!
You're not going to displace Bloomberg - the amount, quality and timeliness of the information they provide is on a completely different level (and a completely different business).
You're going to get wads of cash from people eager to consume their Bloomberg data in a better way.
I can assure you, the majority of people who regularly use the Bloomberg terminal are not eager for any change whatsoever to how they consume the data.
Any change, no matter how much it improves the workflow or UI, is met with bitter resistance, because the users do everything from muscle memory, and small changes break that and slow them down.
I think this might be one place where that rule is broken. I started seriously using VR a while ago and I was blown away by some of the little things that make it 1000% better than a standard screen for some applications.
The most striking example I have comes from gaming, but I think it illustrates the point. In Elite: Dangerous, some of your menus are to the left and right of your avatar. Normally, you'd have to press buttons to focus the camera on these menus; in VR all I have to do is look at where that menu is and it pops right up for me to use.
It actually made the interface _way_ more intuitive, easier to use, and significantly faster because it played on my expectation of the result. If something similar could be designed on top of Bloomberg data, I think you'd see finance people trampling each other to get it.
This would be particularly true if you played into their existing behaviors and motions; they wouldn't have to learn anything, it would "Just Work".
I have no trouble believing you. I'm in the financial industry (as an IT guy, not a finance one), and I gave back that utter abomination of a Lenovo Carbon X1 second generation because of the atrocious keyboard (touch strip, relocated ESC, relocated tilde, "split" Del/Backspace key, etc.).
Same reason I won't buy the new MacBook Pro - they fucked with the ESC key, so it instantly stopped existing as a viable product for me.
At no point did I suggest messing with their keyboard - just give them better visualization possibilities.
This reminds me of somewhere around 1998. I was an on-site technician for a market-data software company whose product was complementary (live charting) to Bloomberg terminals. That year, we were updating our software from DOS to Windows NT. I spent a LOT more time than I ever imagined I would teaching men two to three times my age how to use a mouse. Many did not enjoy the change and most were quite adamant about saying so, colorfully.
Those guys could run circles around me in the DOS version with just a keyboard. Watching them use it was like watching someone with a decade-plus of experience in vim or emacs.
It's been a while since I've been in the AR/VR/3D space but last time I was there you just couldn't do high density text well at all.
Look at the effective resolution of your monitor and consider the percentage of the field of view that it occupies. You'd need to get something in the area of 6-8k resolution screens if not more to match the same effective resolution.
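As a rough sanity check on that claim, here's a sketch with assumed numbers (a 27" 4K monitor at 60 cm and a hypothetical headset with 1440 horizontal pixels per eye across a ~100 degree field of view - none of these figures come from the comment itself):

```python
import math

def pixels_per_degree(h_res_px, width_m, distance_m):
    """Angular pixel density of a flat display viewed head-on (at screen center)."""
    fov_deg = 2 * math.degrees(math.atan((width_m / 2) / distance_m))
    return h_res_px / fov_deg, fov_deg

# Assumed monitor: a 27" 16:9 4K panel is ~0.60 m wide, viewed at 0.60 m.
mon_ppd, mon_fov = pixels_per_degree(3840, 0.60, 0.60)

# Assumed headset: ~1440 horizontal px per eye spread over ~100 degrees.
hmd_ppd = 1440 / 100

print(f"Monitor: ~{mon_fov:.0f} deg wide, ~{mon_ppd:.0f} px/deg")
print(f"Headset: ~{hmd_ppd:.0f} px/deg")
print(f"Matching the monitor across 100 deg needs ~{mon_ppd * 100:.0f} horizontal px")
```

With those assumptions the monitor comes out around 70 px/degree versus roughly 14 px/degree for the headset, so matching it across the headset's field of view takes a panel on the order of 7K pixels wide per eye - consistent with the 6-8K figure above.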
Well, the Air Force had a laser scanner for their maintenance personnel, beaming PDF documents directly into the eye. Super sharp. Looked a bit like Google Glass.
Like in this picture: https://www.x.company/glass/
Yes it can. The resolution of AR/VR glasses is less than my iPhone or my laptop monitor.
Now you're telling me that strapping on a big, bulky headset, which demands an unparalleled level of immersion (both a pro and a con) yet shows less detail, is somehow going to deliver a not-worse experience than these other devices?
I think he was stating that "can't be worse" was a requirement, not a fact. As in, it _must not_ be worse than existing options ($1000 in monitors): the same resolution, as easy to do everything you already do, etc.
That's on purpose. Resolution is nothing in VR, reaction time is everything. You only need a very wide field, like 800x200px. This would be luxury VR glasses.
However, for other, less immersive applications like AR, resolution becomes more important, which leads to slower reaction times, which can lead to sickness and lawsuits.
Another reason is that productivity has very specific integration, security, and deployment requirements before it is used. You would never look at a financial terminal that wasn't secured. A video game on the other hand? Sure, why not?
I’m afraid it’s a fact, the fidelity of current VR/AR tech just isn’t high enough. Money really is being thrown at the problem, but there is no magic wand available at any price. These advances take time.
When I was at <fairly large financial research website company> they had a prototype VR research project that was shown to a few people. Never seemed to gain any traction, and I didn't hear why.
Oh wow, that’s interesting. I thought it was still super common. Do you have a link that I can read to learn more about every bank using custom software?
They aren't all using custom software. But all are using banking software provided by one of the big 4 providers in the industry: FIS, Fiserv, Jack Henry, and D+H
I believe they control 95%+ of the banking market.
Those are valid points, especially your point about fidelity: current tech is good enough for gaming and apparently not good enough for work (or we haven't figured out how to use it right), so that's what they can sell it for right now.
But this and other comments are vastly underestimating the resources that people would be willing to exchange to make their work better. For what I described, a "reasonable" price point could easily be around $10,000. Apple is selling its new iMac at a starting price of $5,000, and there is nothing particularly revolutionary about it; it's an upgrade of what is already available. When you are spending all day using it, and it is the tool you use to make your money, it is worth a lot.
What is the rate of spending in industry on displays vs. computers? My general feel is that people skimp on displays and overspend on compute, so it would be hard to fight that trend.
Secondly, with VR as a desktop replacement... this is simple math. A 4K (or higher) 30" monitor will always be easier and cheaper to build, or at worst the same. Except it sits 3' from your face. Shrink that monitor massively and stick it inches from your face: more expensive. Now do the trig on how many pixels, and how small, would be required to present that same resolution at 1" instead of 3'... a massive downscale... but great, we'll get there on display density.... Except, and this is the kicker, for a lot less you can just upgrade the 4K monitor.
Basically, unless something fundamentally changes in display tech, or we hit a max resolution that people care about, miniature displays will mathematically lag large displays. Yes, smartphones tipped the balance, but it all got rolled into desktop displays and TVs.
So you lose on economics and math.
A display tech with massive resolution, but something that inherently keeps it from scaling up in size, would be a big kicker. Or, as some people have said, hit 16K or so resolution per eye on VR screens and your dream is there.
Till then, gaming is a massive tech driver... not people using spreadsheets.
I don't need a virtual 4K monitor. I can turn my head to see more, just like I do in real life. How close do you have to be to see the pixels in a 4K monitor, and at that point, can you see the entire monitor without looking around?
I want the portability, and then the resolution just has to be good enough. If we could get a virtual 1080p display that would be amazing and good enough. But to your point, the VR display would probably have to be 4K just to be able to properly represent a 1080p virtual display.
You're not doing the math... this is not meant as a slight, but:
How big is the screen your virtual 1080p display is representing, and at what distance? Give me those numbers and I can tell you what display tech it needs, but it's going to be on the order of 10-20x the density. It's all about pixel radians.
Also, 1080p is quite small for reading; Macs have Retina displays, which get close to print quality when running anti-aliasing.
And don't think I'm attacking you, I want this too. It's just that I did the math.
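To make the pixel-radian point concrete, here's a minimal sketch with assumed numbers (a 24" 1080p virtual screen floated 75 cm away, and a hypothetical headset offering 1440 horizontal pixels over a ~100 degree field of view):

```python
import math

def angular_width_deg(width_m, distance_m):
    """Horizontal angle subtended by a flat screen viewed head-on."""
    return 2 * math.degrees(math.atan((width_m / 2) / distance_m))

# Assumed virtual screen: 24" 16:9 (~0.53 m wide), 1920 px across, at 0.75 m.
screen_fov = angular_width_deg(0.53, 0.75)
needed_ppd = 1920 / screen_fov      # density needed for a 1:1 virtual-pixel mapping

# Assumed headset: 1440 horizontal px spread over ~100 degrees.
headset_ppd = 1440 / 100

print(f"Virtual screen spans ~{screen_fov:.0f} deg")
print(f"Needs ~{needed_ppd:.0f} px/deg, headset offers ~{headset_ppd:.0f} px/deg")
print(f"Per-axis shortfall ~{needed_ppd / headset_ppd:.1f}x, "
      f"~{(needed_ppd / headset_ppd) ** 2:.0f}x in total pixel count")
```

Under those assumptions the virtual screen needs roughly 50 px/degree against the headset's ~14: a per-axis gap of about 3.5x, or on the order of 10x in total pixels, in the same ballpark as the 10-20x density figure above.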
I guess I'm ok with a virtual screen taking up most of the view. Of course it would be better if it didn't. But I'd still pay for one where I could pretty much only look at one screen at a time, with enough space around the edges to see if there was another screen to look at. And to avoid eye strain I'd need to not have to focus a few inches away.
So if I could focus like it was 6 feet away, but it was big enough that it took up most of the field of view from that distance, and it had a 1080p resolution, that would be good enough. And then I'd buy the upgrades when they came around :) The rough math I was doing in my head was that anything that wasn't 1:1 would probably need 4 pixels for every one virtual pixel, so 4K would about do it.
I pick 1080p because most laptops today still have that resolution, and several laptops that I still use have less resolution than that and are still usable.
A virtual 1080p display that looks like the real thing would be a big game changer. Most companies don't buy 4K displays for their employees because 1080p is generally a lot cheaper and also good enough, even for reading. I still have 2 Dell UltraSharp 1200p IPS displays at home that are 12 years old, but reading/writing text on them is totally fine at a normal viewing distance. I'd want 4K mostly for more real estate.
Just got a $399 50" Samsung TV on Black Friday. Based on the reviews, it supports 4:4:4 chroma, which is a must for a PC monitor. Hooked up my one-year-old cheap HP laptop to it via the HDMI port.
Looks great! 4K video streaming from YouTube works very well.
4 x 1080p tiled windows - perfect for productivity.
I don't do gaming. The latency is less of an issue for me.
There is also talk about how we only mentally process a small focal region in high resolution, so a renderer that could track our focus point (foveated rendering) could use a lot less GPU compute for a scene, if it were fast enough.
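A rough sketch of the potential saving (the buffer size, foveal fraction, and peripheral scale below are illustrative assumptions, not figures from any shipping renderer):

```python
# Assumed per-eye render target and foveation parameters (illustrative only).
FULL_W, FULL_H = 2160, 2160   # full-resolution eye buffer
FOVEA_FRACTION = 0.2          # central 20% of each axis kept at full resolution
PERIPHERY_SCALE = 0.25        # periphery shaded at quarter resolution per axis

full_px = FULL_W * FULL_H
fovea_px = int(FULL_W * FOVEA_FRACTION) * int(FULL_H * FOVEA_FRACTION)
periphery_px = (full_px - fovea_px) * PERIPHERY_SCALE ** 2

shaded_px = fovea_px + periphery_px
print(f"Naive shading:    {full_px:,} px per eye")
print(f"Foveated shading: {shaded_px:,.0f} px per eye (~{full_px / shaded_px:.0f}x fewer)")
```

Under those assumptions the renderer shades roughly 10x fewer pixels per eye, which is why fast, accurate eye tracking gets so much attention.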
>And even if you did make VR goggles that were great for giant spreadsheets at $1000, it would be a weird enough idea that it would have to be massively better than $1000 worth of monitors to get people interested.
I'll admit that I might not be the "average person" this is targeting, but I completely disagree.
I've often said that developers should have powerful machines and tools, pretty much regardless of price. Salaries are often over $100k/yr, and at that price if something makes you a few percent more productive but costs a few thousand dollars, it will (in theory) pay for itself in less than a year.
And for something like this, it would easily make me more than a few percent more productive. Just being able to take the equivalent of 3 monitors with me when I'm on a plane, or when I'm working somewhere other than my normal workstation, would be worth it on its own. Plus, the fact that I'm no longer limited by screen real estate means that when I'm actually "in the zone" and really getting shit done, I'm not going to be struggling with frustration at needing 3 console windows open, plus 2 editors, 2 browser windows, and a chat window. Just being able to throw a console or 3 at a spot where I can glance without having to take my hands off the keyboard, or alt+tab through several windows, would be a giant QoL improvement!
For me, this would truly be revolutionary. A headset that lives up to the promise of "a 1080p screen, but everywhere" would change so much of how I work. And I'd easily be willing to throw a few thousand at a product like this without even thinking about it (with the caveat that it would need to look like it's going to be supported for a while, and won't just ship and then never get another software release).
> It's hard to build high-enough fidelity into the hardware at a reasonable price point.
It's more than that. With the current state of the art, it's impossible to make a "virtual monitor" that can match the display density of even a plain 1920x1080 display. You can make "giant spreadsheets", but only because you have to blow up everything to multiples of normal screen size for text to be readable in the VR environment.
> And even if you did make VR goggles that were great for giant spreadsheets at $1000, it would be a weird enough idea that it would have to be massively better than $1000 worth of monitors to get people interested.
If it took up less space than multiple large monitors, it would already be better by one criterion.
>> You could effectively work anywhere in any position, laying in a hammock, on a crowded train, at night in bed when inspiration hits
> have to be massively better than $1000 worth of monitors
Just read these two quotations and you have the answer. Yes, high-resolution, low-hassle goggles would be massively better than 3-4 decent monitors for quite a lot of people. At $500 for 2K resolution, that would be an instant hit.
I think that it's a convenient text input device that is missing from the picture. A lot of power users can't touch-type yet.
Another problem is the hassles: either cables or poor battery life. Bandwidth is important outside of gaming, too, else you'll hear the complaints about janky page scrolling instantly.
The big problem there is resolution and the "Screen door" effect.
Text has to be fairly large in order to be readable. We will need an enormous resolution and incredibly low dot pitch to solve those issues and make productivity tools possible.
This is the tech space where I dream of working. I have been designing features for developer teams in AR but without the hardware being available, it's all theory.
Yup! I cringed when I read the above comment and imagined working in Excel on current AR displays. Resolution is still an issue, as you've stated, and there are still heaps of issues in the realm of focal depths/focal rivalry when it comes to AR. Current displays have a set focal depth and people don't realize how much that can affect things.
There is work[1] to address focal issues, and obviously resolution will continue to get better. I notice the focal issues in VR racing sims when I glance between my mirrors and back to the track: you expect a shift in focal length, and it hurts your brain when there is none.
I'm still excited to see where HMDs get to in the next 5-10 years, both AR and VR.
So I haven't tried the tech, but one of the main advantages of the 'light field' tech they've been developing is supposedly solving this focal length issue - i.e. near things appear at a different focal point than objects farther away. Can't say how well it works in practice though.
That, and the whole "how do you make stuff opaque" problem. All the demos show semi-transparent objects being added to the real world. Making text legible when you can't fully control the background on which it is displayed is very hard. My best guess is that if they don't advertise that use case, it's because they don't really feel ready to be judged on it.
Magic Leap claims to be able to occlude real-world objects with their AR, not just semi-transparent "hologram" overlays. This is something I'd want to see for myself, but here's what Brian Crecente at Rolling Stone wrote about it:
> Miller wanted to show me one other neat trick. He walked to the far end of the large room and asked me to launch Gimble. The robot obediently appeared in the distance, floating next to Miller. Miller then walked into the same space as the robot and promptly disappeared. Well, mostly disappeared, I could still see his legs jutting out from the bottom of the robot.
> My first reaction was, “Of course that’s what happens.” But then I realized I was seeing a fictional thing created by Magic Leap technology completely obscure a real-world human being. My eyes were seeing two things existing in the same place and had decided that the creation, not the engineer, was the real thing and simply ignored Miller, at least that’s how Abovitz later explained it to me.
If they really have that working, it's a huge advantage over systems like HoloLens.
Based on Magic Leap's explanation, I'm not so sure it would work for that.
> My eyes were seeing two things existing in the same place and had decided that the creation, not the engineer, was the real thing and simply ignored Miller, at least that’s how Abovitz later explained it to me.
That sounds to me like some kind of light-field trickery where it puts an object in front of the background using the light field, but doesn't physically block the light. Instead, your brain processes it out because your visual model of the space has something in front of it.
I imagine this works sort of like the effect where you overlay left- and right-eye images that don't match: your brain sort of fades between the two of them because it can't decide what's there. Except instead of having that disagreement, both of your eyes say "this thing is in front", and that's how you see it? It's hard to say from that kind of hand-wavy explanation.
If that's the case, I don't know that it would work for really bright glare. Either it might be a strong enough signal to overpower the light field, or the bright light scattering around your eyeball might still cause enough bloom to wash out your vision.
Furthermore, even if the technology DID somehow manage to trick your brain into not acknowledging the glare, the bright light would still be entering your eyes and causing damage to your retinas. It would be similar to the case of somebody who has nerve damage and doesn't feel pain, so they don't know that their hand is resting on a hot stove until they smell burning flesh.
If only we had a technology that, when subjected to an electrical field, switched glass between transparent and opaque, and that was produced in sufficient quantities for applications such as airplane and building windows as to make it relatively inexpensive.
The occluder wouldn't be in the same focal plane as the real world objects you are looking at. It'll darken a vague area around the object you want to mask.
Probably I'm missing something, but I don't see this as a hard problem to solve: LCDs work by controlling the transparency of a pixel to let through more or less background light, and OLEDs don't need the background light to emit color, so by combining both you could have controlled transparency behind the color-emitting pixels.
Don't you control that by doing text recognition and always placing a frame behind? Let the user customize the color schemes in question (bright white + black text, or off white + off black etc).
If the background is a blue wall in your living room, you place a white framing layer behind the text you're looking at. If the real wall is black, you do the same thing. Makes no difference what color the wall is then.
Wall -> Frame Layer -> Text
Text recognition should be among their easiest chores (which is to say it's still not easy, it's on the lower level of difficulty in what they're trying to do).
The issue is that if you have a transparent display for AR, light from the background goes through it. There's no way to just "put a white framing layer behind it" because the light the display puts out is being added on top of whatever light is coming through from behind it. This is what happens with Microsoft's HoloLens headsets.
Magic Leap is claiming to have occlusion of background objects working, but hasn't really explained the mechanism. It sounds like it's some sort of "light field" trickery where they let the light through, but your brain knows there's a virtual object in front of it and mentally processes it out. Cool if it works, but I'll need to see this to believe it.
Nice. I think the idea is that by tracking and moving with the eye, you don't need a high-resolution display. You just focus the resolution you have at the center of the user's field of vision, where most of our visual acuity is located.
That's a solvable problem though - it's more a matter of cost and the pixel density will go up and the cost down over time. I don't see this a long term limitation of this tech. Have you seen the Pimax 8k VR headset for instance? https://www.kickstarter.com/projects/pimax8kvr/pimax-the-wor...
In reality Pimax delivers 1440p per eye. Someone needs to calculate this, but I think even that would be orders of magnitude away from a few 2K+ displays projected in your FOV. I think the technology is really far away from that at any price. Don't trust the hype.
It's probably solvable eventually, but the tech isn't there yet.
Companies can focus on the gaming industry first because it's tolerant of slightly degraded looking graphics and inability to render high fidelity text, and they can start selling headsets now to that demographic. A person writing code all day or looking at spreadsheets will not tolerate reading small text through a screendoor.
I see this is a popular wish, and I'm afraid it's frustratingly unimaginative.
Finally we have a medium that can open up entirely new ways of doing things with computers, yet so many people just wish it could replicate the old ways.
Same thing happened when the first display computers were made to emulate paper-based terminals instead of exploring what's possible on a graphical display. We still haven't fully recovered from that.
>Finally we have a medium that can open up entirely new ways of doing things with computers, yet so many people just wish it could replicate the old ways.
Right. But to invent the new ways, I need access to A) a replication of the old ways, B) source/config/whatever so I can evolve the Next Big Thing. Without this use case in mind, we're effectively stuck on keyboards and monitors.
They are explicitly not. They want to abolish the screen. They want real, touchable objects to be imbued with dynamism.
From their FAQ:
"Is Dynamicland augmented reality (AR)?
It depends what you mean by augmented reality. Dynamicland is primarily about working with actual physical objects that everyone can see and touch. Glasses and phone-based AR is usually about 'holograms' floating in space that only the person with the device can see. It is a central tenet that all people who come in to Dynamicland share the same reality. This enables social cues like pointing, eye contact, and shared attention which are essential for people to be fully present with each other."
AR is not VR. I think that VR will certainly enable all these behaviors, eventually. Except in a virtual space, instead of meat space. The behaviors and ideas Dynamicland are advocating should work just as well in both.
And while their points ring true for the current iterations of AR-as-personal-assistant, I don't see why a networked AR where everyone shares the same annotations would work any differently. Isn't that what their projectors are doing, after all?
It's not just the social angle — projections in VR and AR aren't tangible, whereas physical objects imbued with dynamism are tangible. Here's a really good article on why this matters so much: http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesi...
Now, once we have a holodeck, where the virtual objects are made tangible, your points will hold. But until then, tangibility is a key difference (and perhaps not the only key difference, but this should suffice for now).
I don't get the pink plane reference, but this is a good point. I'm conflicted about it. On one hand you are right we should try to use new technology to get to entirely different levels of productivity, opening up "new worlds of ideas".
At the same time, we have built our organizations, relationships, even our own brains to work in a certain commonly accepted way. Everyone can visualize what I mean when I describe a virtual display, which themselves have basically represented virtual pieces of paper as you mentioned. We would be instantly better off. But yes, it would be a tragedy if the better, more revolutionary way of working was discarded for an incrementally better version of what we are used to. I'd hope somehow we could do both.
I believe the pink plane comment is a reference to Alan Kay’s idea that there is an entire “blue plane” of ideas that lie orthogonal to the “pink plane” that people live/think in, that only becomes unlocked with a change in perspective (“looking up”). He talks about it in his How To Invent the Future talk.
> But yes, it would be a tragedy if the better, more revolutionary way of working was discarded for an incrementally better version of what we are used to.
This tragedy is a common occurrence with incremental ideas.
Compared to the revolutionary, incremental ideas have an unfair advantage. They are way easier to talk about, way easier to imagine, way easier to implement, and way easier to sell. They capture the market, and before long so many people's lives depend on them that no one would dare even propose doing things differently.
This has happened time and time again. QWERTY is a nice example:
> The top row of alphabetic keys of the standard typewriter reads QWERTY. For me this symbolizes the way in which technology can all too often serve not as a force for progress but for keeping things stuck. The QWERTY arrangement has no rational explanation, only a historical one. It was introduced in response to a problem in the early days of the typewriter: The keys used to jam. The idea was to minimize the collision problem by separating those keys that followed one another frequently. Just a few years later, general improvements in the technology removed the jamming problem, but QWERTY stuck. Once adopted, it resulted in many millions of typewriters and a method (indeed a full-blown curriculum) for learning typing. The social cost of change (for example, putting the most used keys together on the keyboard) mounted with the vested interest created by the fact that so many fingers now knew how to follow the QWERTY keyboard. QWERTY has stayed on despite the existence of other, more "rational" systems. On the other hand, if you talk to people about the QWERTY arrangement they will justify it by "objective" criteria. They will tell you that it "optimizes this" or it "minimizes that." Although these justifications have no rational foundation, they illustrate a process, a social process, of myth construction that allows us to build a justification for primitivity into any system. And I think that we are well on the road to doing exactly the same thing with the computer. We are in the process of digging ourselves into an anachronism by preserving practices that have no rational basis beyond their historical roots in an earlier period of technological and theoretical development. – Seymour Papert, Mindstorms
That's why I believe we should be more evangelical in promoting revolutionary ideas (from the blue plane), and more quick to point out incremental ideas (the pink plane).
QWERTY seems like a curious example; I'd hardly call a different keyboard layout a revolutionary idea - neither in its conception nor in its results.
In fact, I'd say it's an example of the opposite: the alternatives didn't surpass QWERTY because they were mere incremental improvements. You need a big leap to beat the inertia of the existing systems.
> I'd hardly call a different keyboard layout a revolutionary idea
Yes. It wasn't my intention to imply that an alternative keyboard layout would be a revolutionary idea. The example was meant to illustrate how a non-revolutionary idea (virtual 2D screens in VR), after establishing itself, can make it difficult/impossible for other ideas to take hold.
> You need a big leap to beat the inertia of the existing systems.
It's good to point out that once an idea becomes mainstream, even research into revolutionary alternatives dries up compared to research focused on incrementally improving the mainstream, making it even more unlikely that the leap will happen.
Instead of a screen, output used to just be typed on paper like a typewriter. And we have basically emulated that as we worked our way up to modern computers. It's a good point.
> I don't understand why this entire field seems so focused on gaming, and not productivity.
Is it? If we're talking about VR, yes, but there are a number of companies actually delivering AR products whose focus is on productivity. Some examples (there are more) are:
Microsoft with Hololens [1]
Epson with Moverio devices [2]
Sony [3]
Vuzix [4]
Their oldest models are simple smart glasses; it seems to me that they are moving towards full AR.
As a plus, they don't look like something I would only wear inside the office of an SV startup.
I don't know about the other devices, but the hololens is incredibly underwhelming when used. The marketing CGI for it is the best thing about it. I suspect that is probably true also of the magic leap one.
Have you actually tried it? I know some of the developers and creatives that have worked with it, and they say it's more impressive when used in person than it looks in the 2d promo videos, which is the opposite of most devices.
I have tried it in person, and it was overall disappointing. This was some time last year, so maybe they've improved in the meantime, but the AR overlays weren't particularly compelling and the FOV was terrible.
I also felt the tracking was mildly worse than the Vive's, although that is more anecdotal.
I did the Mars demo at a conference a while back. For the first couple of minutes I was underwhelmed, but at some point my brain "bought in" - I noticed this when I crouched down to look under a virtual rock outcropping without a second thought.
Vuzix is a pretty funny example there, because they used to be (pre-Oculus Rift) focused on VR gaming. The only problem: their product was garbage. I'm astounded the company still exists at all.
I had the chance to try only one of those, so I cannot make an informed comparison: one really has to try the device because I think that there is almost no relation between marketing videos and the actual experience.
My interest is mainly the healthcare sector, and while I have a few applications in mind, I think that for now they are not able to offer anything that a smartphone in a doctor's pocket, with a properly designed app, cannot do in the same amount of time.
If I had to make a wild bet on what will be the first successful consumer application of AR glasses, I would say glasses for cyclists: the user can really benefit from not having to look away from the road, and we are already used to seeing them riding with funny glasses anyway :-).
(in fact it seems there are already a bunch of companies focusing on that).
Hololens is kinda neat but gesture recognition isn't there yet and the limited FOV really, really sucks. That's the biggest problem with any of these.
The Moverio is setting out to be pretty modest--think "Google Glass screen projected in field of view, but not shit"--but the execution there is solid. It also plays very nicely with glasses.
> I don't understand why this entire field seems so focused on gaming
Because gamers are a community well known to easily spend several hundred dollars on any kind of upgrade to their existing hardware. At least the core PC gamers, and that's quite a large segment nowadays.
Myself and millions of others would spend thousands for this, and thousands more for upgrades. When you consider this is the tool that you use to make your salary, anything that makes me more productive with a better quality of life starts to be worth more than the price of a new car.
I think the issue is that you and millions of others more than likely want a lot more assurance that it will work well and contribute to your workflow. Gamers are much more likely to make a speculative purchase, or at least have a smaller set of requirements for considering it a useful purchase.
People's workflow is often fairly static, and this would need to fit the existing requirements. Games are very different from each other and transient. This may not work well for most of a gamer's library, but new games might come out that they will buy and utilize. If this doesn't work well (or even just as well as what you currently use) for a browser, terminal, spreadsheet and text editor, are you going to switch to some new software that you don't know that might work better (but still possibly not as well as what you used originally) just to make use of it? I wouldn't.
Any current workplace that doesn't currently spend $1000 per employee on good, large screens has already voted with their wallet (the most honest way of expressing a true opinion) that they definitely won't consider anything like this to be worth it.
I think your comment illustrates why they should stick to games, but I'm not quite sure how to say it. It has something to do with less pressure, or reducing the requirements or feature creep.
Why do you think your productivity is going to spike? There seems to be enough info being pumped into people's heads as it is, without them actually doing anything useful with it. Sure there is a great appearance of productivity but talk to an economist for the real story.
I think it's because of the culture of the company itself: it's all about creating worlds and leaping towards the next big contract with the user. Those things won't happen for an Excel user who just wants Excel in 3D. Haptic feedback, virtual characters, etc. are more gamer-friendly.
A lot of money is being poured into professional VR and AR applications right now. It does not happen very publicly. I am aware of a few in-house efforts of big manufacturing companies with rather big team sizes (dozens of developers at least). They are mostly centered around stuff where 3D visualization is beneficial and head tracking allows for that extra bit of interaction and exploration. Think "classic" AR ideas such as information overlays (disassembly instructions for mechanics, etc.). Except that this time around they can make the tracking so good that the overlays are really "there".
> I don't understand why this entire field seems so focused on gaming, and not productivity.
Let's look at a similar product that lives on as "productivity", one that did NOT launch with gaming features: Google Glass.
Most of us, and most companies, aren't going to invest hundreds or thousands of dollars in something unproven. We might do that for "entertainment" purposes, but not productivity ones, until the tech is proven.
How do you prove out tech? How do you get those consumer-grade productivity tools into the hands of end users quickly and efficiently? Lower cost or raise value. There are plenty of folks who gladly spend $1000 on gaming; look at the mobile space and the insane decisions consumers make there.
That was Google Glass's problem, though: they didn't focus enough on productivity. It was a gimmick. Instead of handing it out to techies to play with, they should have made one single useful application, say for doctors or architects, and rolled it out only to them. Instead of "glassholes" you'd have respected professionals who would come to rely on it for work. More and more applications would be added, and eventually people would use it for gaming and entertainment and daily life. Like, for example, cell phones. And the internet. They both started for productivity purposes and became much more.
And anything that increases productivity will always have plenty of money thrown at it. Think about standing desks, for example. My employer is happy to drop $3,000 on a high end laptop so I can be that much more productive.
Wow, you must work in a magical office. I work in tech support for a large Fortune 1000 company, and they don't spend that much on tech. The engineers don't get that much either. They give us hand-me-down laptops in tech support, and buy mid-level machines for the software engineers. They will spend more if there is a need, but that would be on the server, not the user's laptop.
Gamers will shell out thousands, though, for tech that is a limited run, so you can then get items to scale for the majority of businesses/end users to purchase.
Also, Google Glass is being used for productivity; the current-gen program is aimed only at productivity applications. Google Glass was just too new to the market, and with the camera people were worried about being recorded. It needed an LED or a physical closure for the lens to make people comfortable.
Google Glass is absolutely pointless and cannot be used for mixed reality.
It has a small, fixed display that forces you to look at it.
Magic Leap will allow you to move your eyes freely inside the active field of view.
Beyond these, the main reason is that gaming is a much better exploration and experimentation field than productivity because each app can have far less capital investment, lifecycles are short, and there's no need for a reliable, successful output.
Existing VR displays have your eyes' focus fixed either several meters out or at the horizon, so the kind of eye strain you're talking about is not an issue.
MagicLeap supposedly has variable distance focus working.
3D is most compelling when it's used to visualize physical objects or environments. Abstract information spaces are hard, being non-physical and having many more dimensions. The added interface complexity usually doesn't justify the gain from throwing away only 23 dimensions instead of 24.
There might be useful ways to represent clusters of information related to a non-physical task effectively; there hasn't been a clearly successful one yet.
And no, if you are trying to jam a bunch of 2D screens into a 3D environment, you are not "representing physical objects or environments" as identified as the core strength above.
Have you tried HoloLens, Oculus, etc.? The technology is simply not there yet. Field of view is very limiting. Resolution is very low. The hardware is highly optimized for graphics, not high-performance computation. None of the devices I have tried could be worn for more than 30-45 minutes before you start getting a headache (because your brain is seeing depth but your eyes are focused on a different plane). VR/AR tech probably needs 5 more years of advancement (if not more) before your scenario becomes reality - and even then there are significant physics barriers to cross.
They're focusing on gaming because they're pursuing the consumer market. The things you describe sound like a much better fit for the business market (and there are people, likely including Magic Leap, pursuing it).
But what you describe isn't quite there technologically; they just don't have the resolution or refresh rate and input methods are still evolving. Another issue with AR is that it's only additive. The lenses make everything darker and they can only add light.
Some of the best use cases I've seen for AR are assistive technologies that walk you through taking something apart, labeling each part. Right now that requires an incredible amount of manual resources to generate (or would require a huge investment and significant breakthroughs to make it dynamically generated or procedural). That tech is orthogonal to the display and input, but both are necessary for a product.
Check out [SimulaVR](https://github.com/SimulaVR/Simula) which is trying to do EXACTLY THIS on the Linux platform. We're in need of developers and test users if anyone is interested in joining the fight.
I am not sure it's fair to say "this entire field". VR is very focused on gaming, but AR has already placed a huge focus on productivity. HoloLens and also the Google Glass team (yeah, that's still being developed) have placed a lot of focus on business applications, and I believe right now that's the sole focus of Glass given its failure to launch in the consumer space. I think Magic Leap is positioning itself as the next-gen iPhone - it needs to get that cool factor to win over the early adopters, and then later broaden into the productivity space. But from what little information they have released, productivity and non-gaming applications are also a large focus of theirs, and I don't think this page indicates otherwise.
If you want to reach HD resolution displays at a "virtual" 50 cm distance (for example) you'd need a VR display resolution at like 10K or something, very high in any case. And even then, would it be more ergonomic than a regular screen? It would only pay off if it was something you could use for hours a day, like you can with regular screens atm.
Same with the gloves, if it doesn't have good feedback then I'm going to nope out. Good for tech demos and sci-fi movies, but in real life you need the feedback and physical object.
(you know, like how in real life I need a keyboard and mouse instead of just an awesome mechanical keyboard and vim / emacs mastery).
Yea, but if the display was a "virtual" 50cm away, it could be a much lower resolution, or a much bigger size. The pixel density doesn't need to depend on the resolution of the virtual screen, it just needs to be good enough so the user can't distinguish the pixels of the VR itself. Then any resolution can be accurately simulated in the VR.
There are lots of companies working on AR/VR for productivity. Microsoft Hololens, All of the Windows Mixed Reality headsets, Meta, DAQRI (not so much for your description of productivity. eg word processing), and ODG. If you want a headset that has the best resolution, you should check out the Pimax (https://www.kickstarter.com/projects/pimax8kvr/pimax-the-wor...). There are mixed reviews on the headset itself, but they have very high resolution (Pimax 8K VR) that's 4K per eye.
4K per eye isn't even close to enough for the grandparent's wishes... you'd need at least 10K per eye, and would still have a lower resolution than a 24-inch full HD screen at 50 cm.
The company I work for is working on a VR productivity app that is focused entirely on using your existing desktop apps in a VR space at speeds similar to what you can do with a keyboard and mouse. We have essentially taken the tech for the orbiTouch[0] and mapped it to the controllers that come with the current headsets.
We are always looking for more alpha testers. Should you have a VR setup feel free to contact me at davidn@blueorb.com to try it.
I fully agree. But remember that gamers are passionate folk who are willing to throw their hard-earned money away at this hobby. That revenue will be used, we hope, to develop the next generation of displays which will have sufficient foveal resolution to be used as a monitor replacement. I would love to leave my dual 27" monitor desk behind and travel the world while doing my work. That day will come. But I'm afraid that it'll be too late for me - I'll have already retired.
VR is mostly focused on gaming because it sells better now and is simpler and cheaper. AR is mostly focusing on productivity; Google Labs and Microsoft HoloLens are known examples. But this is harder to develop, so progress is simply slower. And I assume it's easier to develop for a highly specific task than to deliver a universal solution, so there is not much for normal people to hear about.
Have you tried the Samsung Gear VR with a Galaxy S8? The S8 has the latest and greatest mobile display, with one of the highest pixel densities on the market. However, when I use it in a VR headset, I can still see the pixels. It may be good enough for some gaming, but it's not usable for working with spreadsheets in VR.
I think the focus is on gaming because gaming purchases are made with disposable income on disposable items. Selling goggles to a gamer just requires convincing the gamer that goggles are cool.
On the other hand, selling into business means developing business solutions and making business cases for purchase of the solution.
On the third hand, your use case is almost certainly smaller than the gamer market and has many of the same business-case features as the business market... which means it pretty much is a business case, in that "viewing virtual displays in high definition" is a business solution involving Excel and web development and hammocks and trains. Unlike in the game market, the developers of the tools you want to use are not champing at the bit to work with Magic Leap.
I see some potential for AR in assisted training of employees. Imagine being able to train someone on a series of tasks that are displayed right in front of the user's eyes.
We learn much faster by copying someone, and having this kind of tool would free up resources while you on-board someone.
Why? Because AR needs to be bright enough not to be drowned out (even with Magic focus cues) by background light. Right now, that means the devices project a small field of view. It appears that both Magic Leap and HoloLens are ~35 deg FOV. A typical max resolution would be 30-60 px/deg (neither achieves this!). This means that you can get at most a single FHD-resolution monitor in view at a time.
If you want something like a seamless experience, you need something closer to a 90 deg field of view and several thousand linear pixels (e.g. 5-10 Mpix per eye) at near-retina resolution. That is VERY hard even for the biggest GPUs (which don't fit on your belt), even at 60 Hz. And if you want to avoid dizziness, you want >90 Hz and <10 ms latency.
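For a quick sanity check on those figures, here's the render-budget arithmetic (the 90 degree FOV, 30 px/deg, and 90 Hz numbers come from the comment above; the square-FOV approximation and everything else are assumptions):

```python
# Rough render-budget arithmetic for the figures quoted above.
fov_deg = 90       # target field of view per eye
ppd = 30           # low end of the "30-60 px/deg" near-retina range
refresh_hz = 90    # minimum refresh suggested above to avoid dizziness
eyes = 2

pixels_per_eye = (fov_deg * ppd) ** 2           # square-FOV approximation
pixels_per_second = pixels_per_eye * eyes * refresh_hz

print(f"{pixels_per_eye / 1e6:.1f} Mpix per eye")
print(f"{pixels_per_second / 1e9:.2f} Gpix/s of shading throughput at {refresh_hz} Hz")
```

That works out to roughly 7 Mpix per eye, squarely in the 5-10 Mpix range quoted, and over a gigapixel per second of shading before any supersampling, which is why the comment calls it VERY hard for belt-worn hardware.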
>I don't understand why this entire field seems so focused on gaming, and not productivity.
Probably for the same reasons that Tesla built expensive sports cars first, before focusing on delivering a (comparatively) more boring commuter sedan.
That's a really good point. I've had friends in the past show me their VR demos with multiple screens, and it's been awesome. Of course, resolution is the barrier to it being usable.
There are other interesting areas for productivity improvements too, like better interfaces for human-computer interaction.
Your post gave me a fun thought: someday, we won't buy desktop and laptop computers anymore. We'll buy hi-res VR goggles and keep our desktop environments in the cloud, using virtual keyboards and hand sensors to interact with them.
The latency issue makes this essentially impossible since anything that's not trivially close to the local network will have upwards of 40ms of latency, making VR mostly useless unless the local compute does significant processing.
Because more people like to play games than work with abstract information, so that's where the money is. I think your notion that the sale of 3bn laptops means 3bn want to work at all hours is a little naive.
I bought a Pimax 8K-X on Kickstarter for this. I use my Vive and Rift, but the screen-door effect and visible pixels make it hard to write much code unless I'm in the right mood.
Those ideals of productivity sound amazing but in practice I can see problems. A floating, non corporeal keyboard with no feedback would be tough to type on for starters.
They definitely are trying to build what I want. There is even an Excel spreadsheet in the marketing materials! I didn't think the Oculus Rift had good enough resolution though. Has anyone tried this?
>I don't understand why this entire field seems so focused on gaming, and not productivity
Because they think that gamers will pay. Unfortunately, I see that they got a wrong idea. It has a vibe of a semibotched Kickstarter project, except backers here are not private individuals, but gigacompanies.
A type of a gamer who spends 15k usd on a gaming rig to crush opponents in Quake 3 in ultracompetitive environment, will not care a bit about this toy.
The founder of the company comes from a socioeconomic strata whose people have that characteristic. A Boston "old money family (R)" born man may see that selling gaming stuff to quite a lot of relatively rich people dumping 15k on a gaming rig is a good business idea, proceeds to build a company built around that idea with all audicious bold claims being received with accolades from other people like him, but never actually bothers to figure out what things matter in a gaming gear.
If you read his personal blog from the noughties, before he deleted it, you'll see that his ways can fairly be described as well beyond "nebulous". He wrote things like "solving global problems" in the tone you usually see from people who flood the internet with something very insubstantial, like "saving African children with Agile, innovation, and a seven-sigma framework..."
OK, back to the botched-Kickstarter line. As often happens with such projects, the original performance claims get scaled down, the company barely manages to deliver a down-rated product after missing the delivery deadlines multiple times, the product works so-so, and in the end it winds up in your drawer for good. A year down the line the company simply shuts down the cloud service for the widget and you're left with an expensive paperweight. I expect Magic Leap to follow this route.
Since the 1st iteration of the design is apparently ready, why can't they just show a small demo?
It's supposed to be shipping in 2018. They could easily have hired a team of top-notch creators to showcase some of the capabilities of the device live - even if the device is still not 100% ready - instead of this silly Manhattan Project secrecy. I hope they deliver as promised, but something smells fishy.
It's a pity, because unlike VR I think AR has huge potential both for consumer and industrial applications.
I share the concern that there's still nothing too publicly accessible, but the device itself isn't completely under wraps. The journalist in the Rolling Stone article[0] viewed a few demos built by the ML team, including one with Sigur Ros that has a very small clip on YouTube[1]. I suspect building a demo of this kind of tech that's remotely as impressive viewed on a website or YouTube video is challenging. If I'd built the device as marketed, I'd be concerned about giving half-baked first impressions that disrupt the hype machine, even if the device itself isn't half-baked.
The device is supposed to be in the hands of actual paying customers within a year and they don't even have pictures or video of it _doing something_. Sure it might not be perfect, but there's exactly zero evidence that this product isn't just a hollow chassis. If a demo at this point comes off as half baked, isn't that a gigantic red flag that this product isn't able to cash the checks that Magic Leap is writing?
Nintendo announced the Switch about six months before it shipped. Before that they announced titles, showed game demos, and talked about titles in development. Magic Leap has shown us what could be a 3D printed mockup for all we know, and has announced (to my knowledge) exactly one thing for the console (mixed reality comics).
Magic Leap should be marketing the hell out of this. It's a multi-billion dollar product, and yet they have exactly zero actual footage of the actual hardware even working. Their sizzle reels have been nothing but concept art. Something is very wrong with this.
I agree that it is concerning, but I don't see how an entire multi page article of someone's hands-on experience with the device qualifies as "exactly zero evidence this product isn't just a hollow chassis."
The article, remarkably, has precious few details on the author's experience with the product. This quote sums up my skepticism:
>instead they were constructed to give visitors who pass through the facility under non-disclosure agreement, a chance to see the magic in action.
It's a controlled environment with purpose-built demos for folks under NDA.
There are no videos, no renderings from actual hardware, and no substantive critiques of the fidelity of the device's output. The only negative criticism is that it has a rectangular viewport which doesn't fill your field of view. I can't believe that's the only negative thing that could be said about this. Not a single comment on FPS, glitches, or any other problems.
Call me a conspiracy theorist, but I could totally see this demo as being fudged. The computations could be happening off-device with video streamed over wifi. We've heard before that Magic Leap has struggled to miniaturize their hardware, with the last version looking like a proton pack...what better way to demo it than to fake the demo?
I want real evidence that the cute hockey puck has a real computer inside, not just anecdotes from an NDAed journalist in a lab environment.
> It's a controlled environment with purpose-built demos for folks under NDA
Magic Leap pulled a similar stunt with The Information around the same time last year [1]. Seems like they found a more pliant journalist in The Rolling Stone.
"In March of last year, it released a video online titled “Just Another Day in the Office at
Magic Leap.” Shot from the perspective of one of its employees working at his desk, all appears normal until robots start falling from the ceiling and converging on the worker, who picks up a toy gun and starts blasting his enemies into tangled lumps of virtual metal. The video, viewed 3.4 million times on YouTube, was meant to demonstrate a game people were playing with Magic Leap’s headset. It had been used for more than a year to recruit employees to South Florida. 'This is a game we’re playing around the office right now,' Magic Leap wrote in the description of the video.
But no such game existed at the time, according to two former employees with direct knowledge. The video was not actually filmed using any Magic Leap technology. It was made by New Zealand-based special effects company Weta Workshop, which has worked on movies like 'Mad Max: Fury Road' and 'The Hobbit,' the employees said. One of them called it an 'aspirational conceptual' video. The employees said some at the company felt the video misled the public.
...
In addition to the bulky demo connected to a computer, Mr. Abovitz showed The Information a prototype of the compact device it intends to build. It looked as if somebody fastened electronics to every inch of a pair of wire-framed glasses. It had a multi-layered, flat lens. He would not turn the device on, but assured a reporter that it worked just as well as the larger, helmet-like device. Mr. Abovitz would not discuss details of the technology, repeatedly responding to probing questions with the phrase 'Squirrels and Sea Monkeys.'"
I think Magic Leap is another Theranos. A second, independently-developed HoloLens makes for a respectable incremental business. But that nugget of truth has been leveraged to a $6 billion hallucination. Maintaining that hallucination could have forced management to lie to investors, to the public and to their employees.
So are you explicitly saying that the Rolling Stone journalist is lying?
And the only supposed "proof" you bring is an article over a year old, written when the miniaturised prototype didn't even exist?
"This is a game we’re playing around the office right now,' Magic Leap wrote in the description of the video."
"In addition to the bulky demo connected to a computer, Mr. Abovitz showed The Information a prototype of the compact device it intends to build. It looked as if somebody fastened electronics to every inch of a pair of wire-framed glasses. It had a multi-layered, flat lens. He would not turn the device on, but assured a reporter that it worked just as well as the larger, helmet-like device."
Your post agrees that one year ago, Magic Leap was lying about the technology they had. JumpCrisscross only asserts that given all publicly available information, Magic Leap is probably still lying.
The article begins with the author describing several demos, only after which he is guided to a different room where he has, in his own words, "My first close look at the full Magic Leap hardware."
Rony Abovitz calls the chips supposedly powering his tech "Sea Monkeys."
"I noticed that when I moved or looked around, her eyes tracked mine. The cameras inside the Lightwear was feeding her data so she could maintain eye contact."
Yes. The machines used to render that demo were, in the author's own words, not the full Magic Leap hardware.
edit: Also, even if that demo was the advertised Magic Leap hardware, it still only responded to camera movement, and Miller said the demo had capabilities that he refused to actually display.
"The level of detail was impressive. I wouldn't mistake her for a real person"
"I noticed that when I moved or looked around, her eyes tracked mine. The cameras inside the Lightwear were feeding her data so she could maintain eye contact." Yes it is possible that the demo changes behavior based on eye movement alone but that's not what the author said.
Lightwear is the headset component. It is not functional without another, separate computer. The author says he only clearly saw the full, multi-piece ensemble of the advertised prototype Magic Leap hardware later, in a different room from the demos. My information comes from the literal words in and structure of the article. To make the point you are trying to make, one must add words and meaning that are not in the article. Continue insulting my literacy.
edit: You are right about one thing; my assertion that the demo in question was reliant on -camera motion- may not be correct. Eye-tracking on commodity hardware using a single camera has been a solved problem for years.
And if you're choosing who to trust between The Information and The Rolling Stone... well that's like choosing between a Bugatti Veyron and a Ford Pinto.
I pick you: you people are insane. They have raised based off of demos. VCs come in, get the demo, sign a check (because it's that fucking good). I know at least 4 people personally who have gotten demos, and it's very real.
What is known to be empirically true: it is possible for one group of people to scam other people and groups out of many millions of dollars each. Given what is known about the mechanics of high-end confidence tricks, what is different about the operation of Magic Leap that indicates it is not a confidence trick? Remember, the most detailed article written by a journalist who experienced the demo describes a literal scam.
I think they're probably fudging their demos, but my hope is that they've just decided, for marketing reasons, not to show people the images on a flat screen. With VR headsets I think the best "selling point" is the experience - VR graphics aren't great - but once you put it on, you get it (if it's good).
This is the reason why I hate conspiracy theorists.
You can't argue with them because they think they know everything despite evidence to the contrary.
You're on a bit of a roll across this thread [1] without anything more than the Rolling Stone article. People are expressing healthy skepticism towards a company that has been publicly caught in a material lie, has raised a lot of VC with little to show for it, and made an announcement with lots of CGI and a vague "2018" release date. Given whom this announcement is targeting, I think it's fair for people to have a balanced view before they commit their time or energy to this company.
Because I hate people who lie and spread misinformation.
You are not expressing healthy skepticism; you are outright asserting that the Rolling Stone journalist is lying, and you are misinforming everyone with your false statements.
I'm not asserting that I know anything to be true. I'm pointing out that until Magic Leap shows actual reasonable proof that their product works outside of a lab, I have no reason to believe that it works. I sincerely hope it does, because this is cool as heck. But for a company to be rapidly approaching their target ship date and have released nothing of substance showing the damn thing working, that's incredibly suspicious.
In their defense, it looks like some sort of dev kit is supposed to be in the hands of paying developers ("creators"). That might be a pretty raw product.
I guess you didn't follow it at all.
Magic Leap showed the demo to a lot of people, and the Rolling Stone journalist extensively tested it.
How can you suggest that it is just a plastic mockup?
As for the content there are hundreds of people working on it both in house and in selected partners like Weta.
It would be trivial to fake the demo. You can buy the parts to stream HD video wirelessly for a few hundred bucks. If they're using a small server farm to render the output, isn't that cheating?
If they can show it to a journalist in a lab, why not make a marketing demo in a park? Like I said, I'll believe it when I see it.
Interesting that this came out in Rolling Stone and not a tech site. I obviously can't say for sure, but I don't see why it's implausible that they found a publication happy to roll over on a few details in exchange for an exclusive.
> “We went on this really crazy sprint from basically October 2014 to December 2017,”
This is the part that startles me the most. 3 years of a "crazy sprint" with no revenues is extremely hard for companies to pull off. It leads to all sorts of problems internally and hiring decisions are usually rushed and lack direction. It's surprisingly much easier to do when you have paying customers, because you make decisions based on the market and not just your gut (Slack being a good example of this).
This is very impressive if what it says is true. They've basically reinvented the concept of a digital computer display by projecting a sort of artificial light field into the eye. The amount of capital they've raised makes sense now, this sounds like a ridiculously big undertaking. What remains to be seen is whether all this work will be worth it as opposed to just better displays, but color me intrigued.
It's just talking about a VRD, which is used by HoloLens as well. I imagine "light field" is just jargon to impress investors (it's not an impressive concept in itself - projecting a 'light field' on a 2D plane (retina) amounts to simply projecting light).
Building a demo for this would be challenging, but come on. They have been at this for what... five years now, and had $2 billion to work with?
Besides, hype can be very dangerous. Look at what happened with No Man's Sky. They rode the hype train to the Moon and came crashing down hard because the product utterly failed to meet expectations. That's why it's important to temper hype with regularly administered doses of realism.
I feel like this is also an answer to the top comment in this thread - gamers are a target market for stuff like this because they are more likely to take risks based on hype without waiting for a technology to prove itself.
> The journalist in the Rolling Stone article[0] viewed a few demos built by the ML team, including one with Sigur Ros that has a very small clip on YouTube[1]. I suspect building a demo of this kind of tech that's remotely as impressive viewed on a website or YouTube video is challenging. If I'd built the device as marketed, I'd be concerned about giving half-baked first impressions that disrupt the hype machine, even if the device itself isn't half-baked.
I hadn't read the article in Rolling Stone, to be honest, but after reading it I get the impression that it would have been even easier to prepare a more public demonstration (in a controlled environment) instead of this NDA-shackled article. It wouldn't be that hard, for example, to show a video of the journalist in the room wearing the device, with a side-by-side cloned view of what the journalist is visually experiencing. Something like what Steve Jobs did at the first iPhone unveiling, but not live - an edited version would be more than fine, conveying both the visual experience and the journalist's reactions.
This is a groundbreaking technology with so many non-gimmicky applications that it's mind-boggling. I was thinking just this morning about how it could be used for driving assistance: wear the headset in your car and get an overlaid "Google Maps" experience without ever having to take your eyes off the road; pair it with the car's sensors and get collision-distance information, coloured lanes on dark roads, etc.
How about (paired with sensors) measuring dimensions, inspecting components, counting objects, etc. in professional/industrial environments?
I don't know, maybe I'm wrong, but I feel this obsession with secrecy is hurting both their product and the technology.
If the concern is interrupting the hype machine, why not use reaction videos? That worked for Enchroma glasses, which is in a very similar position: both are about doing something with your vision by wearing glasses, making it hard (impossible in Enchroma's case) to capture on video.
This is what I was thinking. That conversation probably went something like this:
“We need more money.”
“We gave you $2 billion. You’re going to have to ship something before we give you more.”
“Hmmm...we aren’t ready to ship and may not be for some time...what if instead of shipping, we can show you that 50K people are interested in buying it? Will you give us some more money then?”
“Sure, you show us that 50k people want to buy it and we will write you another check”.
So they created an email opt-in page, photoshopped some 3D renders of a product that doesn’t exist yet onto a person’s head, and did a press release calling this page an “unveiling”. That’s really all this is - a “coming soon” page for something that may or may not be vaporware. Investors are getting antsy and want some sense of what the actual market is for this thing.
To be fair, if all I have to do is give them my email address for them to get a bunch more money to make something that could either completely fail or revolutionise human computer interaction, sure, please have my email address.
They seem to be trying to bring a new thing into the world, I'm inclined to support it.
Have you seen the page? It’s literally nothing more than an email opt-in with some clearly photoshopped images of a device that doesn’t exist yet, even in mock-up form (otherwise they would have used the mock-up in the photos).
Creating good demos for wearable technology is hard. You can certainly make fancy simulations like Microsoft did for their HoloLens, but the devil is in the details. It's mainly about how it feels, and you need to experience the technology to figure that out.
I don't see much to be gained from demoing prototypes to the general public. If it's mind-blowing - great, but it doesn't really make a difference if you can't buy it. If it's less than mind-blowing, you'll just ruin everything. Even if you later improve the product to a mind-blowing level, most people will have already made up their minds.
Invest in their known publicly traded investors (Google; others?). Figure out who manufactures the more expensive pieces of silicon (or optics?) in the device and invest in them.
Considering the scope of what they are developing (silicon that requires new fab technology), secrecy isn't the priority it would be if this were a problem others could steal a march on by throwing money and bodies at it.
If they do ship, I predict that the product might very well meet all their technical claims, but will be a miss on their claims for utility and usability.
I know a couple people that got hired specifically to work on content for it. I don't know anything about what they're working on because of the secrecy of everything around Magic Leap.
I imagine the reason they haven't shown anything is because they can only showcase the tech on the hardware itself, which isn't far enough along or in enough quantities to demo. I always thought the ads trying to show off some new 4k thing playing on my non-4k tv was silly.
I find it even worse when there is a demo for VR or AR.
If I look at the first demo of Microsoft's HoloLens, it is all CGI.
In itself that is fair; you just can't make a good demo from a first-person capture of what the headset displays.
However, the demo showed a room littered with augmented objects and Minecraft blocks, with people naturally interacting with them.
The reality of HoloLens's very tiny cone of vision makes it feel extremely misleading.
In the end, it is hard to show anything other than the vision of what you want to achieve, and in Microsoft's case at least, there was a big gap between that and the limitations of the actual product.
I agree. I figured the SD/HD analogy would be easy to explain and relatable.
For awhile I was demoing VR projects to non-techies (VPs, CEOs, and others). The most interesting response was when they took the headset off and they had forgotten what room they were in. Everyone's favorite demo was Tilt Brush because it showed off interactivity the most--not the ones with fancy graphics. It's hard to express those ideas without demoing it. As an aside; the demoing experience for VR is miserable. There's a very long line, people are required to set up and explain the demo and clean the equipment, and there's a learning curve and short time allotted for each demo. You can't just set up 5 unattended kiosks and let people figure it out.
Read the story again. He experienced the demos with some sort of surrogate hardware, not the goggles pictured. You'll note that he goes through all the demos, and then later gets his first close look at the goggles hardware. The story provides no indication that the goggle hardware is yet functional.
I'd wager they have something which is just enough to get people with deep pockets to give them a pile of cash on the premise of "imagine what it could be with only another billion dollars!" but is utterly underwhelming for a mass-market consumer product.
There is hard evidence that they have faked videos that do not represent the reality of their product. They have a video showing 100 people watching the Magic Leap effects without wearing any goggles. With today's announcement, that video is very clearly fake, with no attempt made to resemble a final or even in-development product.
Their videos are fake and do not show what they are building. The product might be neat, I don't know. We do know that they lie in their videos to get attention.
I have been an avid Magic Leap supporter for years. I told everyone I know about them and showed all my friends their videos. I've been gushing about them for years. All based on a lie. All my love and attention for them has been because they told us they were building a system for you to have AR without goggles. LIES. ALL LIES.
It's difficult to do marketing for things that aren't yet in productized form without them being aspirational to some extent or another.
Case in point, the public unveilings for the Kinect and original Mac were combinations of stuff that worked at the time of the demo and stuff they hoped to finish by the time they sold. The former didn't deliver, the latter did.
It's clearly bad to overhype a product, but you have to try and reach for what you think you can do even if you're not done yet. And for new product categories you're even more dependent on marketing than for established ones.
You mean "confirm". This website does not do anything to make the product seem not vapourware, and within the context of their entire activity, it can be taken as an evidence for the product being vapourware - after all, if it actually existed, they could've shown someone something.
Nope, not 'confirm' — my 'stunt' in that context was referring to the parent comment's insider source saying they're actively working on a proper demo... Obviously, the webpage in and of itself could currently be vaporware, for all we know...
I have heard people say "AR has huge potential" many times, yet VR seems to have actually converted that potential into something interesting -- gaming, viewing movies, virtual presence, etc. AR, by definition, means that the majority of your experience is reality and that AR is simply some overlay of information onto your world: less content and less immersion, but always with you. That thing already exists and is called a smartphone, so it's unclear how AR gets past the "just becoming an app on your phone" problem.
What we've seen so far is more toy like. But imagine an AR tutorial app that guides you through replacing the brake pads on your car.
It could point to the exact spot for the next step, tell you which tool and size to use, and all sorts of things, assuming it can recognize the objects well enough and a specific tutorial was built for your car. You could even build tutorials by having it observe what you do.
That sort of application is probably pretty far off but having it as a HUD instead of having to pull out your phone every 30 seconds would be super useful.
In the industrial setting some workers are already using Google Glass type devices to have some information readable without pulling out a phone but those don't have any real AR features yet.
I've got an HTC Vive and my expectations were really high because of all those great-looking demos. In reality, though, there is still very little engaging content today, which leads to a lot of disappointment. VR overpromised, and maybe Magic Leap, while being bold in their initial vision, learned from this and is careful about building hype before having actual good content. I mean, this doesn't seem to be aimed at end users yet.
> In reality, though, there is still very little engaging content today, which leads to a lot of disappointment.
I keep hearing this from some sections of the VR community, but my personal feelings are entirely the opposite. I simply don't have enough time for all the good VR content I've got access to. My backlog is growing.
Do you have incredible amounts of free time or do you have very specific tastes? Tilt Brush and Google Earth alone should be enough to keep a normal person entertained for most of their natural life!
Probably a mix of specific taste and the abundance of alternative things to do. Tilt Brush and Google Earth are super fun, but it wears off. And when it comes to games, there are only so many games I'm interested in. Most Vive games feel more like browser/mobile games to me than the epic PC games I love.
That being said, I just realized that Fallout VR and LA Noire just got released. This is the kind of content I was waiting for and in the case of Magic Leap, it will also take time until somebody invests into building good content for it.
I see we differ then. I'm not interested in VR as another medium for AAA titles. I want to see software that could only ever be realised in VR.
And I'm quite happy for that software to be shorter and quirkier and less polished for the foreseeable future. Let a thousand flowers bloom. VR should also be bigger than gaming. Gaming would be just one application of VR - not its raison d'être.
Anything is better than it becoming a vehicle for annual releases of Call of Duty and FIFA.
My personal experience with the Vive is that there is only a content problem. The tech is really good, even though the screen could be better. It just feels like nobody managed to figure out yet what to do with it.
I keep telling myself that I won't take them seriously as long as there's no public demo, but big, serious companies who have seen the behind-closed-doors demos keep giving them huge chunks of money... They've gotten $2B in funding... With so many companies out there trying to do what Magic Leap is doing, I don't see any reason why VCs would give them so much money over the others. The only plausible explanation is that their secret demo must've been really good.
There is an over-abundance of capital in the economy right now (a big part of the reason the stock market has gone up so much), so VCs are basically begging to invest in startups that look even remotely promising.
Magic Leap has a track record of outright lying [1]. Everything in this "unveiling" looks like CGI. There is no store and nothing is shipping until "2018".
Common concerns, like FOV (not very big), are addressed in this article:
> The viewing space is about the size of a VHS tape held in front of you with your arms half extended. It’s much larger than the HoloLens, but it’s still there.
The thing is, the FOV on the original HoloLens prototype was massive, but it had to be constrained for the actual product because the wide FOV needed a full backpack computer (instead of just the headset).
So it's all well and good that the demo units have a slightly bigger FOV than HoloLens, but I'm not holding my breath for the released product to ship with a higher FOV than HoloLens.
Our family had one of the early consumer VCR/camcorders around 1980. I still had to go grab one out of an old box to see how good my memory of its dimension is. Answer: meh, kinda close, but not an accurate unit of measurement even for a guy that used them from the beginning. And that’s because even old people haven’t touched one in a good ten years.
But, hey, Stranger Things, amirite? We’re all 80s-ophiles or something.
Hololens is about the size of a smartphone at that half-arm-length distance. It's pretty significant. The question remains, what are the resolution and tracking like? The Meta 2 similarly has a wider FOV, but the visuals are fuzzy and the tracking is the worst of any device I've ever used.
A company I work with did a product demo presentation (for a completely unrelated product to Magic Leap) and showed a ton of extremely ambitious features in the demo and zero development work had even been started on that project. In fact, they haven't even hired the development team yet. The end of their demo showed a "Launching in 2018" message.
Marketing's job is to make promises. Whether the company keeps them is another matter.
The naysayers here are funny. There's a fundamental basis for the technology that's rooted in physics, and a mathematical model describing how to attain perfect Bragg compliance in light field displays has been well documented. But I didn't come here to argue about whether such a thing could exist in the world or not.
Let's turn instead to more practical matters. The reason they needed a lot of money is because no one on earth manufactures displays with the specs required to manipulate light in such a way. This isn't taking some already existing LCOS micro-displays and shoving them into an enclosure with fish-eye lenses. Oculus doesn't make their own displays and didn't spend the last 10 years perfecting them; that tech already existed because it came from the projector industry as a direct result of trying to make smaller projectors. What these guys are trying to do is basically as hard or harder than setting up a state of the art semiconductor company, that just so happens to be making a chip with special optical properties at the same time. Manufacturing stuff takes boat loads of money, let alone stuff that is that small. How long and how much money did it take to go from CRT->LCD is the correct frame of reference. They basically have to engineer and design the factory in tandem with developing the product/prototype. Not an easy feat, or cheap.
They could partner with one of the existing fab companies but its not clear that would be any more efficient because most of those companies are setup in such a way that is highly customized based on whatever fab and process tech the company specializes in or adopted, and most that deal with precision optical stuff (displays) are based on ancient and rudimentary liquid crystal and polarizer tech that is optimized for smartphones or TVs -- nowhere near what's required here. It's useful to know that even Apple doesn't make or design their own displays (although they do heavily influence the technology), but they do design custom processors and such.
I know it's hard for the average reader here to visualize someone being given a billion dollars to make something, without ever having made such a thing before and without yet releasing a product. And it probably makes people here mad when they are struggling to raise a modest amount of money for their own startup. However, generally, people who have billions to invest in high-risk, super-high-tech ideas aren't as stupid as people here would have you believe.
Very true about Theranos, but in that case they hadn't even revealed how the technology worked, and the claims they made implied not only a novel core technology but some kind of discovery in physics/science as well, by someone without a formal science background who couldn't explain the science behind her technology. How do you measure something that occurs once per million unless you collect at least a million samples? Most people who understand basic science knew that company had a high chance of being fake. And if I recall properly, most of their investors were from the conventional startup industry, not biotech, and they thought they could play house in biotech without understanding what they were investing in.
I would be inclined to believe the same of MagicLeap but in their case the science is already well understood and has been done in an analog form for a long time, using techniques similar to conventional off-axis holography.
I'm less skeptical about their display technology and more skeptical about the ML/computer vision and 3D rendering (for both eyes, as light fields) that they claim to cram into a tiny hockey puck along with a battery.
"And this is where another comparison between Magic Leap and Theranos falls apart. Most of the former's investors are sophisticated tech venture capitalists, who Abovitz says were “sending their brilliant professors from all the top schools to try and shoot us down.” Theranos, on the other hand, largely raised money from rich individuals whose life sciences experience began and ended with high school biology. Magic Leap's board includes Google CEO Sundar Pichai and Alibaba co-founder Joe Tsai ― whereas Theranos directors used to include former Secretaries of State Henry Kissinger and George Shultz."
An example involving willful negligence and manipulation of investors is not a good counterpoint to an example of bold investment in technology.
If anything use "Segway", with Steve Jobs et al hyping them up before it bombed...
Negligence like that is far far rarer than the technology not being feasible, at least in terms of startups failing.
I'm pretty sure the main objection isn't how much money they took, it's that they've shown very little evidence of actually being able to back up their claims. How much money they were able to raise just makes it look more egregious.
Again, this is your own bias rooted in the fact that most SV/tech people in bay area think you have to launch a product as soon as possible and that's the only way tech has ever been developed. It's not the only way, and the agile model that works well for web stuff and consumer/B2B applications often isn't suitable for other kinds of technology development, where there is a custom hardware component.
If you want to see a lightfield display you can just visit Display Week (http://www.displayweek.org/). Once you see one in person your doubts will be put to rest.
I've noticed that the agile model doesn't work well for a lot of technologies too. Things that have been well studied (for example, creating a webapp with X frontend and Y backend, or creating a graphics engine with DirectX) have known costs and estimates. There are answers you can look up on StackOverflow. Not all realms of technology have such luxury.
Is there an alternative to agile that's available out there?
I am not sure that agile is the right term for what you are both describing.
It doesn't mean releasing things before they are ready. Agile is about potentially shippable increments and customer collaboration. Again this doesn't mean testing in production.
While the manifesto was written for software, with minor changes it can be applied to most industries. Even with hardware, agile principles can be used to iterate on design, not necessarily physical production.
Sure, but they don't need to show you any evidence, only investors. In fact, it's a competitive advantage to keep such evidence behind an NDA for as long as possible when you're creating fundamentally new tech.
Hmm, my understanding was that it was formed as a research arm of existing companies. It also has a very long history and didn't become famous until much later in its lifetime.
I'm pretty optimistic about Magic Leap, and lightfield technology in general.
Where I see it really shining is as a successor to staged plays, and cinema.
The thought of allowing viewers to really deeply immerse, comfortably for long periods of time, to control the point of focus, to move around.
Kurosawa would shoot movies by building whole sets with all walls intact that worked from every angle, and having the actors perform the scene over and over like a contiguous play, and then this would eventually be shot from 3 fixed points in a single take, offering a consistent and seamless edited final version with perfect continuity... imagine this with an array of lightfield cameras capturing the set and actors, and using CGI to gap fill angles where necessary... and then allowing people to experience cinema from within the set.
Or, imagine animated cinema where the animations are real-time inserted into a set that you move around and interact with. The magical real being tangibly real.
Beyond games, beyond the AR/VR as we've experienced, I think that there's a really rich content vein that could be tapped within pure entertainment consumption, and that of the technologies lightfield tech may give the best control and experience to consumers.
That would be amazing. I once attended a real-life version of this called "Sleep No More"[1] — a looping, living play taking place in a darkened warehouse where you could freely move around, examine every room (down to the drawer), and even interact with the actors to a limited degree. The audience was made to wear masks, and the physicality combined with the anonymity and general darkness gave you the ghostly sense of losing yourself. It was a sublime and markedly different experience from any other medium I'd ever encountered.
(I wouldn't go again because the eroticism and occult themes make me quite uncomfortable, but the idea and execution were incredible from an artistic point of view.)
In a few decades, this could literally be the next "movies".
That would require an order of magnitude more money and effort for movie production. For example, sets would have to look real from multiple angles instead of being just slapped together to work for a single shot. And there's no evidence that audiences actually want this level of interactivity in their "lean back" entertainment.
This resolves major complaints about 3D movies: forced focus, forced POV, and required normal eye-pair function. It's not so much "interactivity" as resolving chronic subtle annoyances: allow natural eye focus around the scene, get subtle shifts in perspective from moving one's head, and realize that not everyone's eyeballs are functioning & coordinated as a "double 2D" view requires.
After going out to see a movie for the first time in a few years this past weekend, I can't really picture many of the overweight families I saw fully reclined shoveling hamburgers into their faces wanting to get up and "interact" with a movie.
But could you picture a set in the round with all of the audience sitting in a circle in a tiered auditorium, and a set being rendered over the audience and actors performing in the middle?
Shakespeare in the round would become a fully immersive experience, and it wouldn't require any person to move at all... it simply means that from the perspective of each individual, the set is fully complete, hides the audience as much as possible, and gives each observer a unique angle and focus point.
That an actor could interact with a modern object (foam stick) that is rendered as an epoch suitable object (a longsword) would be possible.
The possibilities are really quite something, for the scenario of "each viewer has a lightfield viewer and the scene, set and objects can be rendered in real-time for them".
Even if you could solve the tremendously difficult technical problems, that's just not an experience that mass market entertainment audiences would want to pay for on a frequent basis.
I think you're correct about stage, but great cinema to me relies on a director's unique point of view. The meaning derived from cinematic storytelling lies entirely in where the camera is placed at that specific moment of the narrative playing out.
Perhaps I'm too narrow-minded, but I fail to see how this tech would create a compelling cinematic experience. Stage, on the other hand, makes complete sense.
Ever since Lytro made a light field camera I have been waiting for a light field display. It seems odd that it is being developed commercially in a compact AR device rather than a TV sized display.
I don't know enough about the physics, but it could be similar to the difficulty in making glasses-free 3D TVs. Things like the Nintendo 3DS only succeeded because they naturally dictate the viewer's position. I imagine that this kind of thing would be easier to develop if you could control the focal point (i.e.: strapping someone's eyeballs into place).
The current generation of VR headsets serve as pretty solid light field displays. Current VR can’t show differences in focus yet, but having a large field of view and vivid colors makes for a pretty awesome experience.
They're not light field displays. Current VR is fixed 2D monitors with focal-length adjusters (so a 5cm display 3cm from your eye looks like it's a 5m display 3m away). Light field presents photons in a way your eye can dynamically adjust focus & position within.
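For what it's worth, the numbers in that example are consistent with a simple thin-lens sketch (idealizing the headset optic as a single ideal lens right at the eye, which is a simplification):

    \frac{1}{f} = \frac{1}{d_o} - \frac{1}{d_i}, \qquad |m| = \frac{d_i}{d_o}
    d_o = 3\,\text{cm},\ d_i = 300\,\text{cm} \;\Rightarrow\; f \approx 3.03\,\text{cm},\ |m| = 100

(here d_o is the distance from the lens to the panel and d_i is the distance to the virtual image, both taken as positive). A 5 cm panel magnified 100x shows up as a 5 m image at 3 m - but that image still sits at one fixed focal distance, which is exactly the constraint a light field display would relax.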
In my opinion, the new Magic Leap seems to do exactly what Microsoft Hololens could do two years ago. The only advantage they could have is in the UI, which MS 100% failed at, and in content availability which was really poor for MS as well.
This thing could be huge, if the first people who try it on find an abundance of interactive experiences, so much that they rave about it, and make everyone else want one. Best yet, if this all happens in public spaces.
Again, personal experience with MS lens is the basis for this, but having played with it, I am not overly optimistic here. This would be interesting.
If it’s a true light field with full focus from close to infinity, then the difference from regular AR/VR to this is as great as the difference from a flat display to AR/VR.
But the number of people that have actually tested it seems small...
The first headset to give me virtual monitors that I can use day-to-day in lieu of my physical monitors will have my money in a heartbeat. I'd love to be able to carry a headset to/from work and have essentially the same multi-display experience at home/coffee shop/work/etc. I don't care if it's VR/AR.
This comes up a lot in VR/AR discussions, but IMO it's the least likely application in the near to medium term. There's just not enough pixels in the headset to accurately simulate multiple high-resolution monitors.
For instance, the Hololens is rumored to have 1280x720 resolution per-eye - so a screen that consumes the entire visual field is only 720p, and "simulated displays" that were farther away would be worse.
People keep saying this over and over but I don’t understand. Why does a HMD need the same resolution as a monitor in order to do the same job? Wouldn’t the HMD be equivalent to having like sixteen 1600x1200 displays all around you, and isn’t that at least as good as a single 4K monitor?
You could have one virtual display with close to your face for high resolution information, and then dozens of peripheral displays further back for ambient information. Displays can move forward and back with subtle head movements. Why is the hardware resolution the limit? Isn’t it more of a UI problem?
It’s like people are assuming the VR desktop is limited to being an exact replica of their physical monitor, but why would you do that?
A good chunk of it is that text is unreadable in current headsets unless it's far larger than what most people are used to. So that 720p hololens gives you at best a 720p monitor's worth of text at a time[1]. Sure, you can have any number of virtual displays around you at any desirable resolution - they're just unusable until you get close enough that you only see a tiny piece at a time.
To your specific example of "You could have one virtual display with close to your face for high resolution information, and then dozens of peripheral displays further back for ambient information.", yes, absolutely. That makes sense. But the "close to your face" one would show you less than a paragraph of text, if you could actually see the peripheral ones all the time. Useful at times, to be sure, but not equivalent in all (I'd argue "most") situations.
[1]: plus some fudge-factor, because you can read a bit better than with a comparative screen - the change in how the text lands on pixels as you move gives you a slightly higher "effective" resolution... though text at this size is still plenty difficult to read, so you still don't want to rely on it.
Have you looked at text through a VR headset? It's difficult to fit a lot of legible text on screen at once right now. They need to have incredibly high pixel density in displays that are an inch or so from your eye.
> You could have one virtual display with close to your face for high resolution information, and then dozens of peripheral displays further back for ambient information. Displays can move forward and back with subtle head movements. Why is the hardware resolution the limit? Isn’t it more of a UI problem?
No matter how much you play with bringing virtual monitors closer or pushing them further away depending on focus, you're always inherently limited by the internal display resolution. Even then, in VR headsets the lens distortion means text isn't really readable outside a small FOV directly ahead of you. Eventually we'll get cheaper, better screens in these headsets, but that will require a lot more rendering power, and it still doesn't get past the fact that you're losing a lot of pixels to everything that isn't the display. The headset screens have a long way to go before they can look anywhere near as good as the normal displays we use.
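To put a ballpark number on it (the headset figures below are rough, roughly Rift/Vive-class assumptions, not the exact specs of any device):

    import math

    # How many of the headset's pixels does a virtual 24" monitor actually get?
    headset_px_h = 1080        # assumed horizontal pixels per eye
    headset_fov_deg = 100      # assumed horizontal field of view
    px_per_deg = headset_px_h / headset_fov_deg          # ~11 px/deg

    # A 24" 16:9 monitor is ~53 cm wide; place the virtual copy 60 cm away.
    monitor_width_cm, distance_cm = 53.0, 60.0
    angular_width = 2 * math.degrees(math.atan((monitor_width_cm / 2) / distance_cm))

    effective_px = angular_width * px_per_deg
    print(f"virtual monitor spans ~{angular_width:.0f} degrees")  # ~48 deg
    print(f"effective width: ~{effective_px:.0f} px")             # ~515 px

So a 1920-pixel-wide desktop drawn at a comfortable distance gets squeezed into roughly 500 physical pixels; you only get the full 1080 back by letting the virtual monitor fill the entire field of view.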
Wow. Clay Bavor is one of the most articulate speakers that I've ever heard. He delivered a talk with an extremely high information density so cleanly. I really just want to hear this guy talk about things more.
Could it do a single monitor at a decent resolution? That would be handy enough for me when I'm standing on the train and can't open a laptop, or just to avoid carrying a laptop around everywhere. If I could open a decently sized terminal at any time I'd be happy.
The latest oculus software release lets you have a "virtual desktop", pin windows to your VR games etc... It's pretty brilliant. I've been playing some Elite Dangerous with a video pinned in my cockpit during the long trips, it's very immersive. Obviously the resolution is still not good enough to use for serious work though but who knows, maybe in a couple of iterations...
Neither VR nor AR will provide effective alternatives to physical monitors for 8+ hours a day of software development or other knowledge work. VR blocks your view so you have to keep removing the headset to talk to a colleague or find your coffee cup. AR just overlays on the real world background making it really unpleasant to read or edit code or documents.
I expect most of us will still be using 2-D physical monitors for most work in 20 years. VR and AR use will increase but they'll be limited to particular use cases.
AR can also selectively block out the background to give you a virtual monitor. The advantage is that it can go transparent where required so that you can find that coffeecup or talk to your colleague without taking off the headset.
Projectors can't block out the background entirely either, and they're in very wide use. We get around that by projecting onto flat, solid-color areas. As long as you understand that to use your "monitor", you need to be staring at a semi-smooth solid color area, it should work.
If I moved my monitor out of the way, I'd be staring at a white wall. That doesn't need to be completely blacked out before displaying something else unless I was working on photo/video editing or design work where colors actually mattered.
Well sure, you could stare at a blank wall with a bulky headset on, but why not just have a monitor? Also the parent comment didn't seem to be talking about staring at a blank wall. I just don't see how this could be better than an nice wide monitor.
Well sure, you could stare at a tiny monitor, but why not just have a headset that turns all of your available wall space into one massive screen?
Let's take one use case I've seen quite often: someone has three cubicle walls around them, and only uses one 21" wide section of it. Maybe two 21" wide sections, or one 27" wide section if they're lucky. The person has a chair that swivels 360 degrees.
Nothing says "a nice wide monitor" more than a display area that's 20 feet wide by three feet tall.
To provide good high resolution in that area, though, is going to require much better screens in our VR/AR headsets than we currently have, with a corresponding bump in rendering power. With VR, the best resolution you can conceivably get is to fill the whole FOV with a single monitor, which gives you the resolution of the internal screens. That ignores problems like screen door and lens distortion, which reduce the usable area, and worst of all: whose monitor takes up the whole ~100 degrees of FOV a current headset gives? There's normally loads of stuff around your monitor, so matching what you're used to seeing will probably require 4-8k+ per eye.
Think of ergonomics though. Can you comfortably read something 5 or 6 feet away that's in 16pt font? Can you find text on a surface that large easily? I'd be willing to guess people can't do that. Otherwise projectors would be more common displays.
You make a great point, no software in the history of ever has had any ability to change the size of the font or to scale the display size. I completely forgot that humans need to hold their heads perfectly still at a perfectly measured distance from their screen in order to function.
If you need giant fonts and have to move around a lot, then what's the point? Do you really want to move around a ton while trying to type? I still think this is worse than a monitor in almost every way.
Most office workers don't have a large, flat, solid, dark colored wall to look at. The reality of most office environments is that we're looking at co-workers, furniture, plants, windows, artwork, small cubicle walls, etc. If you have to rearrange offices to make AR usable then what's the point? Just give everyone a physical monitor. They're cheap.
I doubt that white walls will work well for AR backgrounds because it's additive light. There won't be enough contrast to make reading comfortable.
I'll admit that it wouldn't work for offices with the open floorplan nonsense where across the table is another human face staring back at you, but in my experience as a consultant visiting dozens of different IT offices around the US every year, most office workers I meet are in cubicles where it's trivial to hang a black piece of paper up with some thumbtacks.
Might not work for everyone, but it'd work for some people, which is quite often the case with basically every product that exists anywhere.
Maybe, just maybe, it would work for people with ample wall space, in an environment with stable lighting and good viewing distances. But how is it better than a monitor? How do you show your screen to another person? Do the economics make sense at all? Could the resolution and refresh rate be high enough?
Again, what does it give you that a really large monitor doesn't do better for less money?
Man HN always has a hard-on for "I can come up with one use case where it doesn't work, so therefore it will never work and everyone who wants it is stupid". It's ridiculous. Stop it. You don't want it. Fine. Some people do. That's also fine. Why can people on this site never understand that? And how often do people here say "no one will ever want product X" and then the very next headline is "Product X is top selling gadget of the year"?
Sometimes I want to lean back in my chair and look upward but still continue working. I can't do that with a monitor. I could with a head-mounted display. How the fuck does that hurt you in any way?
I'm not saying that it wouldn't work for anyone. I'm just saying that as a general display tool for office work, it's clearly inferior for most common use cases. I mean shit, if it becomes a thing, I'd try it. I'm happy to be proven wrong, but this sounds like the promise of serious productivity on tablets: great for a narrow slice of use cases and crap for everything else.
Instead of tacking up a black piece of paper on the wall wouldn't it be better to just hang a monitor there? Flat panel monitors get larger, thinner, sharper, and cheaper every year. It's a lot easier to collaborate with colleagues when you can both look at the same physical monitor.
Coffee shop? I can't imagine there are very many people willing to wear that in public. That may say something bad about those people, but I believe it to be true.
are you my clone? I've been wanting this exact thing for 4+ years now... I want to be able to add/subtract monitors at will and put them wherever in my view space...
Please tell me where I can send my money. I've heard of this, but at really low resolutions which I don't think would work well. Is there any existing VR solution that would be able to show an HD display?
You may forget what 2015 was like. The Oculus Kickstarter was in 2012. By 2015 there were a lot of people questioning whether or not the CV1 was actually going to happen. Yes, we had the DK1 and DK2 at that point, but we're not actually talking about a very long period of time here. Magic Leap first formed in 2010. It's said that the iPhone took 7 years to develop, basically in secret, for an established company with consumer electronics experience, with other products to live off of.
I've been pretty critical of Magic Leap and their teasing in the past, and while we haven't seen any real evidence that the product is real, we also haven't seen any real evidence that it isn't.
I remember, and I don't recall anyone seriously doubting whether or not CV1 would happen. Plus, as you say they had publicly available prototypes. The original Kickstarter had real videos of prototypes and John Carmack saying it was great. Totally different situation.
That was my first reaction as well, but there is one article online by someone who has used the device. It sounds more mature than I assumed it would be from viewing this landing page [0]
"I noticed that when I moved or looked around, her eyes tracked mine. The cameras inside the Lightwear was feeding her data so she could maintain eye contact."
IKEA's products as listed in the catalogue can be ordered (and, if required, built). Magic Leap's product cannot be built and has yet to be delivered.
Even if Magic Leap does ship, their secrecy is unwarranted because they don't have any credibility as a company. Compare Apple's secrecy: they do have credibility.
Looks like good design choices to me. Separating the power and compute from the goggles reduces weight on the headset. It's ideally suited to being integrated into a special-ops soldier's helmet for use on the battlefield, or a surgical visor for a telemedicine operating theatre.
It's very much Day One for this. HoloLens has been a stealth hit for Microsoft this year. It's the kit ($3K) I'm most excited to try out. It's quite possible all design prototyping and additive manufacturing software interfaces will have a head-mounted 3D input component soon.
And that's just the enterprise market. For retail consumers, check out Fragments to see the possibilities of turning home or public spaces into immersive gaming environments:
Hard to fathom why everyone is optimistic about this. Hasn't recent history overwhelmingly demonstrated that adding AR tech to our current culture will result in exactly the situation from the video?
"Augmented virtuality (AV), is a subcategory of mixed reality which refers to the merging of real world objects into virtual worlds.[15]
As an intermediate case in the virtuality continuum, it refers to predominantly virtual spaces, where physical elements, e.g. physical objects or people, are dynamically integrated into, and can interact with, the virtual world in real time. This integration is achieved with the use of various techniques. Often streaming video from physical spaces (e.g., via webcam)[16] or using 3-dimensional digitalisation of physical objects.[17]"
When I heard the word "mixed reality" for the first time two years ago, the idea was that AR was a 2d layer on top of reality (like google glasses) and that MR was actual 3d objects merged in the reality.
But then, mobile apps started doing the latter and called it AR. Also, Hololens did what we were calling MR and called it AR as well. So at this point, I guess you can use whatever word you prefer.
I'm happy to see the website stressing something I've thought VR would do well: Act as a display for my computer desktop. I would love to just take my laptop and these glasses with me and be able to have N screens all around me anywhere I want. Imagine working at a cafe with various large displays all around you showing all the windows you have open.
If this is cheaper than my three displays (and the resolution isn't too terrible), it's already saving me money.
I'm a skeptic. For a preview of this, try using a retina MacBook at full resolution with a pair of $10 jewelers glasses.
Personally I found it super slow and frustrating to have to turn my head instead of my eyes to see different parts of the screen - it's nothing like using multiple physical monitors. And jewelers glasses are better than virtual screens will be - they have zero lag and don't suffer from the double-aliasing of projecting one pixel grid onto another pixel grid.
I can imagine this working, but it would take way more resolution and field of view to simulate multiple monitors than anything we know how to build today.
That's my intent as well: use MR to replace monitors, then progressively switch to monitor-less apps.
There may be a limitation, though: resolution. If we have 9 virtual monitors, each displaying content at 2500x1200 (random pick), performance may not follow.
But then I guess we only need to focus on one at a time. Maybe those additional monitors could work if we lower the resolution of the ones our eyes aren't directly looking at, and just blur them so they still render nicely.
To say nothing of whether magic leap is real - when vr/ar technology catches up there's a trick to get around this. It should be possible to track gaze precisely and only render the high resolution 'fovea' at high detail. The rest can be very low res and nobody would know. This will probably be important in graphics and gaming first though, where they can focus all of the gigaflops on sampling illumination raycasts in the most sensitive area of vision. It's actually pretty wasteful to render an entire UHD monitor at full resolution when the eye can't even discern a word at one end of a sentence when the fovea is focused on the other.
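To put rough numbers on the last two comments (every figure below is a hypothetical pick of mine, not anything Magic Leap or anyone else has published), here's a quick sketch of the pixel budget for nine virtual monitors, naive versus foveated:

```python
# Back-of-the-envelope pixel budget for the "wall of virtual monitors" idea.
# All numbers are hypothetical picks, not published specs.

def pixels_per_second(monitors, width, height, refresh_hz):
    """Raw pixels the renderer must produce per second."""
    return monitors * width * height * refresh_hz

# Naive: nine 2560x1440 monitors, all rendered at full detail, 90 Hz.
naive = pixels_per_second(9, 2560, 1440, 90)

# Foveated (assumed scheme): the one monitor you're gazing at stays at full
# resolution, the other eight drop to quarter resolution (half width/height).
foveated = (pixels_per_second(1, 2560, 1440, 90)
            + pixels_per_second(8, 1280, 720, 90))

print(f"naive:    {naive / 1e9:.2f} Gpix/s")
print(f"foveated: {foveated / 1e9:.2f} Gpix/s ({foveated / naive:.0%} of naive)")
```

Even with made-up numbers, foveation cuts the raw throughput to roughly a third, and the savings grow if the peripheral monitors drop further in resolution or update rate.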
This would be great. Though, I don't think I'd use it in a café - I might be looking at a huge dump of data, but the person sitting behind where my "display" is might think I'm staring at them.
Eh. People got over others using laptops in public places, which used to be a big annoyance. People got over others using smartphones in public, which used to be a big annoyance (man, the backlash against Blackberry was huge). People will get over AR/VR once it becomes widespread.
This unveiling doesn’t commit to the shape of the product. It doesn’t describe technical details, like the processing power, number or spectrum of cameras, radios, anything. It doesn’t commit to a price range, like “under $1200”.
They need more money. Magicleap looks like a typical dot com era borderline scam of the nineties: something is always right around the corner, if only there was more money...
So, we can sign up now, but they still haven't shown the tech to anyone without an extremely heavy NDA? I'll wait for Anandtech or similar outfit to put out at least a tweet about the viability of the technology.
Good lord they really needed a decent industrial design team on hand, though I guess no one's wearing this out in public, so good looks probably aren't as necessary. I'm just afraid of the image of people using this causing it to go the way of the Segway -- we all remember how Dean Kamen and team envisioned it revolutionizing intraurban transport, yeah?
Anyway, I'm glad to see something came of Magic Leap.
An insightful comment from Reddit: “Looks like they've limited your real world fov with the frames so it maybe matches the virtual fov you get. I guess shrinking your real world fov to match might be more immersive than a virtual viewport in the middle of it like the HoloLens.”
I agree they could do it in a much less ugly manner though.
I think the worse they look, the better, that way they can be developed to be useful in private and professional spaces, without anyone being tempted to take them out in public.
The thing will need some serious tech in it, there's really no way to make it not look ugly. I think aesthetics are somewhat overrated: If a device is useful, people (including the "cool kids") will use it regardless.
You're just wrong. Aesthetics are crazy important on consumer devices. I'd argue it is THE most important aspect of something you WEAR. People put an awful lot of thought into their clothes and accessories. Some people don't care, but they're rare. If aesthetics didn't matter there wouldn't be 100 different clothing stores in the mall. When we choose what to wear, we're announcing to the world who we are. When you wear a suit, you're sending a message to anyone who looks at you. When you wear a flannel shirt and dirty boots, you're sending another message.
What is the message being projected if someone looks at you wearing these glasses? Probably not one a "normal" person would want to project.
Function matters, but don't overrate fashion. People deeply care about it, and will skip the function (no matter how great it is) if it makes them look like a dork.
Yeah this is why the SCUBA diving industry failed. Sure, everybody wants to breathe underwater, but if you have to look like a dork while doing it, then it isn't marketable.
No, it's you who has said something wrong. You're talking as if this device has no "aesthetics". As if skilled industrial designers didn't produce a design. The way you're talking raises doubts about your qualifications. But I'm going to put that aside for now.
Plenty of people thought the Walkman was futuristic because they were blown away by what it could do in that package. Maybe some of the more insecure people didn't start using them until it became sufficiently safe to be seen with one. But we're very lucky that in mankind there will be those one or two people who don't care what you think because you obviously can't see what they can see.
Imagine a hundred years ago if you came up with a fashion that involved having strings dangling out of your ears - it looks completely stupid and ridiculous.
Yet everybody does this every day in 2017 because headphones are super useful - now we even think they're fashionable!
I'm still amazed at some people's reaction to AirPods. They are small and sexy, but at the same time very weird to a large part of the population that isn't used to new things.
The old demo videos from a year or two ago showed that the Magic Leap could not do opaque black. Black = transparent on their clear screens.
If that's still the case, then every single example image on their site is a misrepresentation. They all show black and indicate the Magic Leap can do opacity.
I call bullshit until I see that they've solved that problem.
> Miller wanted to show me one other neat trick. He walked to the far end of the large room and asked me to launch Gimbal. The robot obediently appeared in the distance, floating next to Miller. Miller then walked into the same space as the robot and promptly disappeared. Well, mostly disappeared, I could still see his legs jutting out from the bottom of the robot.
> My first reaction was, “Of course that’s what happens.” But then I realized I was seeing a fictional thing created by Magic Leap technology completely obscure a real-world human being. My eyes were seeing two things existing in the same place and had decided that the creation, not the engineer, was the real thing and simply ignored Miller, at least that’s how Abovitz later explained it to me.
Read the Rolling Stone article where it explains how the light field tech works. It's nothing at all like a conventional display, but rather treats the eye as a filter and delivers to the eye the photons necessary to cause your _visual cortex_ to render the desired image. That's what makes this so different from any other VR/AR/mixed platform out there.
Edit: Let me quote the article instead of explaining it poorly:
"What that would mean is that the brain grabs more information and renders more detail when it needs to. And that completely changed the way Abovitz and his team were thinking about the light field problem. Suddenly, if the theory was right, technology didn’t need to capture the entirety of the light field and recreate it, it just needed to grab the right bits of that light field and feed it to the visual cortex through the eye...He was sure if they could create a chip that would deliver the right parts of a light field to the brain, he could trick it into thinking it was seeing real things that weren’t there. The realization meant that they were trying to get rid of the display and just use what humans already have."
Anyone who thinks it works like an Oculus is totally and completely wrong, as one is AR vs VR. Whether it works similarly to a Hololens is completely unknown. The technical details just aren't available. A ton of handwaving and promises of features but very little information on actual technology.
I thought they had solved this. "Such may be used to cancel light from the planar waveguides with respect to light from the background or real world, in some respects similar to noise canceling headphones." from https://gpuofthebrain.com/blog/2016/7/22/how-magic-leap-will...
I think you have to take those demo videos with a pinch of salt. It's impossible to demo tech like this without wearing the device. Presumably the field of view is inaccurate as well. What's interesting is that it's a 'light field' and not just a screen, hopefully opening up areas of innovation. Let's see how it pans out once more details are revealed.
"Rendering black" is an unsolved problem technically.
There are a few approaches. One is selectively blocking incoming light at the lens. However, due to the nature of light, and because the distance between the lens and the eye that would allow the right per-pixel angular specificity is relatively large, you get bleeding from the other incident light and the "black" would look fuzzy at best.
The other way to do it is to create a "standing wave" so to speak on the retina, and again that requires an almost photon control level of the display.
Neither of which I am confident ML has demonstrated effectively.
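To make the "fuzzy black" point concrete, here's a rough geometric-optics sketch; the pupil size and lens-to-eye distance are my own illustrative assumptions:

```python
import math

# Why a per-pixel "black" mask at the lens looks fuzzy: the pupil has a finite
# aperture, so the mask's shadow edge is a penumbra rather than a sharp line.
# Both distances below are illustrative assumptions, not measured values.
pupil_diameter_mm = 4.0   # typical indoor pupil size (assumed)
mask_to_pupil_mm = 20.0   # assumed distance from the combiner/lens to the pupil

# Angular range over which the mask edge goes from fully blocking a distant
# point to not blocking it at all (the penumbra width).
penumbra_deg = math.degrees(math.atan(pupil_diameter_mm / mask_to_pupil_mm))

foveal_acuity_deg = 1 / 60  # the fovea resolves roughly one arcminute

print(f"shadow-edge blur: ~{penumbra_deg:.1f} degrees")
print(f"that is ~{penumbra_deg / foveal_acuity_deg:.0f}x coarser than foveal acuity")
```

A mask sitting that close to the eye can dim a broad region, but it can't carve out a sharp black silhouette.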
On the other hand, for many applications where people say they want to "render black," darkening the entire field of view and rendering on top of that is sufficient and not particularly hard to do.
Well that's a completely different display system in that case. That would be more like pass-through rather than see-through. Currently the latency on pass-through is not anywhere close to ready from a processing standpoint.
That said, I am actually bullish on a really good pass-through system, nobody is working on it seriously though.
Nope, sorry. The backing LCD will be out of focus. The whole AR thing is a lot of technology that sounds simple, but ends up being really really hard in practice.
My south Floridian side really wants them to be successful, but my techie side despises their secrecy and "demos" aka photoshopped graphics and cgi videos.
I live here too. We unfortunately have built a reputation of our value only being skin deep and superficial – from our real estate to our transit to our restaurants and nightlife to our startup community. This reputation is not entirely without merit; we need to do better.
Knowing all that – I want this to be amazing, but I am keeping my expectations in check. To me, this all seems too familiar. Lots of flash and shine, but I don't see anything concrete.
Most people there have always been from somewhere else; being born in Florida, I didn't always fit in as well.
from above:
>>I don't understand why this entire field seems so focused on gaming, and not productivity
>Because they think that gamers will pay. Unfortunately, I see that they got a wrong idea. It has a vibe of a semibotched Kickstarter project, except backers here are not private individuals, but gigacompanies.
>A type of a gamer who spends 15k usd on a gaming rig to crush opponents in Quake 3 in ultracompetitive environment, will not care a bit about this toy.
>The founder of the company comes from a socioeconomic strata whose people have that characteristic. A Boston "old money family (R)" born man may see that selling gaming stuff to quite a lot of relatively rich people dumping 15k on a gaming rig is a good business idea, and proceeds to build a company around that idea, with all the audacious bold claims being received with accolades from other people like him, but he never actually bothers to figure out what things matter in gaming gear.
>If you have read his personal blog from the noughties before he deleted it, you will get that his ways can be said to be well beyond "nebulous". He wrote stuff like "solving global problems" while maintaining that tone you usually see from people who flood the internet with something very insubstantial like "saving African children with Agile, innovation, and seven sigma framework..."
>Ok, back to the botched Kickstarter storyline. As happens often with such projects, the original performance claims get scaled down, the company barely manages to deliver a downrated product after missing the delivery deadlines multiple times, the product works so-so, and in the end it ends up in your drawer for good. A year down the line the company simply shuts down the cloud service for the widget and you are left with an expensive paperweight. I expect Magic Leap to follow this route.
then there's this:
>Given what is known about the mechanisms of high-end confidence tricks, what is different about the operation of Magic Leap that indicates that it is not a confidence trick?
Similar to the way that SV is decades ahead in software engineering, Hollywood (Calif.) almost a full century ahead in moving picture entertainment, and Houston with its petro/chemicals, Ft. Lauderdale leads the pack in confidence leverage, selling to investors their very own dreams in the most "creative" ways like no place else. Lots of locations are desirable for different reasons and give rise to extreme leadership in regional specialties like these, where most outsiders are completely out of their element. Magic Leap is already successful on its own terms without needing actual paying customers yet or even a shippable product, so what's the hurry to put icing on the cake, even if it becomes possible? I wouldn't expect them to be as competitive at selling to customers compared to the pitches they have already delivered and won.
Vaporware has existed much longer and more traditionally in hardware than in software to begin with.
The most well-honed So. Fla. ventures always have a very realistic possibility of truly making money, the persuasive confidence being focused on distorting the probability rather than on complete fraud. After all, fraud would be illegal, even in So. Fla., where you traditionally did not ask people what they do for a living; that would be rude, since there's so little opportunity to earn a legitimate income compared to parts north. You're supposed to have money before you go there.
When I was a youngster Ft. Lauderdale was a much smaller yachting community, but more so than ever it looks like "hook, line, and sinker" will always be some of the most prominent pastimes enjoyed by those who specialize in this type of activity. All the yachtsmen I knew were only looking for the biggest fish, not interested in the small fry. That was for commercially viable fishermen who didn't even own a pleasure craft.
People probably don't have much memory from the last time, of course Port St. Lucie isn't exactly South Florida proper. Not as big a venture but could be considered a POC in an area not as thoroughly overfished as Broward:
With Facebook and all, everybody knows SV is where the biggest fish are these days. You go where the money is, or even better bring them to you.
Anyway, I am completely "confident" I could get a better return for the investors in Theranos than for those lured in to Magic Leap based on what each of these groups has to work with at the present time, if given the opportunity to steward each of these companies' present assets from this point forward. Surely I have seen what looks like some of the huge cash put into Magic Leap already trickle down into photonic advances that will make money for somebody someday, and from the looks of Theranos there have got to be some outstanding people in there somewhere with amazing breakthroughs that I would have an unfair advantage exploiting.
Only problem is, not so sure it would be a positive return for either one, the better bet may just be starting from scratch or getting in on the ground floor of a much smaller outfit in either case.
This is going to sound super dumb and I haven't been following AR / VR at all...but I thought Magic Leap were creating AR technology that worked _without Goggles_? As in projected visible light that I could see with my naked eye? Am I mad?
More than 'seem', that is what their videos are. Those children screaming about that whale flipping around in their auditorium? They don't have any goggles on.
Their promo videos specifically promise that you could have a crowd of 100 people without the goggles on watching the Magic Leap effects.
I soon realized you'd have to be looking through something for this. You wouldn't be looking at something a la VR (pair of 2D displays with shortened focal length), it would add something to existing light but do so close to your eye. Early demo videos (actual product, not just faked-up CGI) tried to downplay the "through something" but still indicated it was.
We're not going to have tables projecting images into the air above & being seen from the side, any time soon. (Though I do have a crazy idea about making that work...)
My impression was that it was a device for entertainment in auditoriums/stadiums, maybe placed behind the chair's back, launching light rays to the person seated behind.
They've been very secretive in the past in showing actual results, only releasing simulations and videos that look real but aren't (fooling people into thinking they actually had something), and that makes me doubt anything they say they'll release until they show actual specs and actual videos of people using it, opinions on their experience, etc.
At least now they have release dates and supposedly that means they actually have something, so I'm looking forward to seeing what they have.
I was always wondering why HoloLens is insisting on packing the batteries/processor on the headset. I'm very pleased to see Magic Leap going the other way. There is just more room for volume and weight in a pocket than on a user's head.
Yes, the wire would be somewhat annoying, but I'm optimistic wireless tech will get there soon enough.
Star Control II was on other platforms, but was great on 3DO, http://sc2.sourceforge.net/ is free now and a deep game. Like 2D Mass Effect I thought. (Star Control II: The Ur-Quan Masters)
I also enjoyed Return Fire and PoliceNauts which are both available on other platforms also. Wing Commander III maybe?
Gesture recognition is glossed over in the Rolling Stone article, and Magic Leap is also using a 6dof hand controller. Hololens and Meta also only support basic gestures, since as Leap Motion showed significant power draw is required to run high speed depth cameras.
The eye tracking feature will be a game-changer though, if they really have it working.
Disappointed to see that the advertised game with a team of 55 is robots shooting at you through space portals. Hololens has already shipped two (RoboRaid, Holo Raid).
I believe in their light field tech. I think there are a bunch of easier ways that good-enough light fields can be achieved at a price point here and now, but they are big dreamers and the leader with all that cash.
All that considered though, WHY IS THAT HEADSET MOCKUP SO UGLY? Forgive the caps but the industrial design on this thing; I'm not talking about the form factor or the fixed sizing, I'm talking about the bevels, profile, and texture. It looks like a knockoff ergo keyboard from 2001. Go Scandinavian minimal cyberprep. Like Frank Lloyd Wright designed a living room for Daft Punk. Failing that, go full cyberpunk and make this thing true to the prototype it is. Let me feel like it's my DNI to my Ono Sendai.
I know it's just a rendering, but they should find with some of them dollars an industrial designer with some teeth and an opinion. This is the most exciting thing ever if it works as advertised. If they are limiting because of constraints, them making it cooler to wear is even more important. They've been a hype machine; that TED talk for their launch was ridiculous. Design by committee Steampunk goggles that look like a couple 2002 MS laser mice want to breed on my face is boring and stupid.
At least it’s something. But a few renders of dodgy looking goggles and some fake Weta demos don’t instill much confidence until we see hardware in our hands.
I still wonder how responsive mixed reality UIs and/or content presentation may be implemented in a general way.
E.g., your app requires a user selection of one out of 5 tools, which are to be displayed, ready to pick, in a line-up on a flat surface at approximately table height. But there is no such surface, or the surface is already cluttered with other (voluminous) objects. – How do you proceed? Maybe, you're going for a fallback solution hovering in plain air. But then, there's the same issue with the "play field" or free of obstacles floor space or table real estate, obscure floor plans, etc.
I'm finding this problem with nearly all of the content advertised, in demo videos, etc. How is this to be solved in real life?
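One way to frame it is as a chain of placement fallbacks. The sketch below is purely hypothetical (the types and thresholds are invented, not from any real MR SDK), just to make the failure modes explicit:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical placement-fallback logic for the "line up 5 tools at table
# height" scenario. The types and thresholds are invented for illustration;
# they don't correspond to any real MR SDK.

@dataclass
class Surface:
    height_m: float       # height above the floor
    clear_area_m2: float  # area not occupied by detected real-world objects
    horizontal: bool

def choose_placement(surfaces: List[Surface], needed_area_m2: float) -> str:
    # 1. Prefer a roughly table-height horizontal surface with enough free space.
    if any(s.horizontal and 0.6 <= s.height_m <= 1.1
           and s.clear_area_m2 >= needed_area_m2 for s in surfaces):
        return "anchor the tool line-up to the table surface"
    # 2. Otherwise try clear floor space and raise the content to waist height.
    if any(s.horizontal and s.height_m < 0.2
           and s.clear_area_m2 >= needed_area_m2 for s in surfaces):
        return "anchor to the floor, raised to waist height"
    # 3. Last resort: float the tools in mid-air in front of the user.
    return "float the line-up in free space in front of the user"

# Example: a cluttered room where neither the table nor the floor has room.
room = [Surface(0.75, 0.05, True), Surface(0.0, 0.1, True)]
print(choose_placement(room, needed_area_m2=0.3))
```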
Whoa! Hopefully they deliver after almost five (?) years of promises. I wonder if this has anything to do with the leak of the initial prototype (a huge backpack you have to wear). Maybe they were forced to release something substantial.
It appears online music publication Pitchfork has been given a preview/demo as well.
Somewhat coincidentally (or not) at the same day Icelandic music-greats Sigur Ros were there to check out a new iteration of the AR-app they have been developing together with Magic Leap for the past 4 years.
We do not have a culture that is anywhere close to prepared for this kind of technology. It should be patently clear from the past 20 years that this is the case.
For example, here's a realistic vision of how it plays out:
Btw, with my mobile Chrome I couldn't see much more than a large gradient on the first screen (before scrolling down), and the mobile view in my normal Chrome looks much more impressive (similar to the desktop version) than the page I saw on my phone (via wifi). To me it looks like my phone triggered some bug while loading the page.
Maybe you're not very familiar with the Hololens. It looks very different to me. The "goggles" look seems to indicate the rendering will cover the full field of view. The computer pack seems to indicate they are either going for greater visual fidelity or longer battery life (or both). And the controller is more akin to the desktop VR systems than Hololens' janky hand gestures (though I would honestly prefer someone just solve hand gestures already).
This is a lot more different from the Hololens than the Vive and Rift are from each other.
"The viewing space is about the size of a VHS tape held in front of you with your arms half extended. It’s much larger than the HoloLens, but it’s still there.
“I can say that our future gen hardware tech significantly expands the field of view,” Miller says. “So the field of view that you are looking at on these devices is the field of view this will ship with.
It seems to have a larger field of view. Hololens (I have one) is about the size of a credit card held out with your arm half extended. This one is the size of a VHS tape. Also, the unit appears to be lighter as processing is done by a pocket computer that comes with it. Hololens is completely self-contained and thus heavier and less comfortable. I wonder if they'll use the same Unity/C# dev tools.
VHS tape with arm half extended is actually the exact size I calculate for hololens, given the field of view of the hololens is 30°x17°, and an outstretched arm's distance is 35cm:
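Reproducing that arithmetic (the VHS face dimensions of roughly 18.7 x 10.2 cm are my own reference figure, not from the comment above):

```python
import math

# Size of a 30 x 17 degree FOV projected onto a plane 35 cm away, versus the
# face of a VHS tape (~18.7 x 10.2 cm, my own reference figure).
distance_cm = 35.0
fov_h_deg, fov_v_deg = 30.0, 17.0

width_cm = 2 * distance_cm * math.tan(math.radians(fov_h_deg / 2))
height_cm = 2 * distance_cm * math.tan(math.radians(fov_v_deg / 2))

print(f"HoloLens FOV at 35 cm: {width_cm:.1f} x {height_cm:.1f} cm")
print("VHS tape face:         18.7 x 10.2 cm")
```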
TL;DR - it's cool. Lightfields are potentially a transformative tech (and this makes MagicLeap really different than other AR companies). We'll still have to wait until we get actual hardware to see how well it works.
I follow this space fairly closely, so here is some context for this.
The first thing to keep in mind about AR is that the "target" for all of these devices is really 5 to 10 years out when the assumption is that an AR goggles/device will replace smartphones.
This is why Google + everyone else are cool with dumping a billion+ dollars into MagicLeap, why FB bought Oculus (Yes, Oculus is currently a VR company, but Michael Abrash and others have publicly stated that is only because it's a more tractable problem than AR), etc. without an immediate payback / shipping device. How many trillions is owning the tech that replaces smartphones worth?
And there's a decent chance that MagicLeap will do this as they're taking a very different approach to the display technology than anyone else. Competitors like HoloLens, Meta, and about a dozen different smaller companies all have some version of a LCD display reflecting off an angled visor.
Meta - https://www.metavision.com/ - the design is like you took a baseball cap, glued a smartphone screen under the brim pointing down and then attached a plexiglass visor to reflect the images up at you.
This reflective LCD approach is well understood and in relative terms "cheap" (aka you can buy good LCD screens at volume b/c of the existing smartphone market), but the result is necessarily a very washed out image, it doesn't look real or solid.
What MagicLeap is doing display wise is something very different. Rony's first company (sold for $1.2B) used fiber optics for seeing what was happening inside of a person while they were getting surgery. Imagine a light + fiber in your heart and the fiber is "scanning" back and forth to get a picture of what the valves of your heart look like.
Now _flip_ that - so instead you have fiber optics projecting light into your eye and the resulting image is indistinguishable from reality b/c it's just more light.
The presumption and rumors all along have been that this lightfield tech is real and amazing enough to open the wallets of some of the smartest people on the planet - but initially required something like the arm+lens setup that you use at the optometrist and a massive gaming PC to make it work.
What MagicLeap has been doing with all this time and money has been trying to shrink this system down. There have been leaked reports of "backpack" setups that were more or less a battery and a PC motherboard zip tied to a person. This announcement at least draws a line in the sand with regards to shipping a portable lightfield product and it may turn out to really be something special.
Every single lightfield device hides the specs till the launch date. And then it turns out that even if the resolution of the underlying display/camera is 10-20 megapixels, you have to divide it by 9 or 16 to get the actual resolution, which inevitably ends up being very blocky.
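To illustrate with made-up numbers (the panel sizes, view counts, and 16:9 aspect assumption are all mine):

```python
# Made-up illustration of the "divide by 9 or 16" effect: a light field shares
# one underlying panel or sensor across an N-view grid, so per-view spatial
# resolution drops accordingly.

def per_view_resolution(total_megapixels, views, aspect=16 / 9):
    pixels_per_view = total_megapixels * 1e6 / views
    width = int((pixels_per_view * aspect) ** 0.5)
    height = int(width / aspect)
    return width, height

for total_mp in (10, 20):
    for views in (9, 16):
        w, h = per_view_resolution(total_mp, views)
        print(f"{total_mp} MP panel / {views} views -> ~{w} x {h} per view")
```

Roughly 720p-class imagery per view out of a 10-20 MP panel, which is consistent with the "blocky" complaint.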
I scrolled through the site and am still not quite sure I'm understanding what this is about or how it feasibly works... I guess that's what I get with modern hip web design marketing BS based on "feelings".
>Digital Lightfield
Our lightfield photonics generate digital light at different depths and blend seamlessly with natural light to produce lifelike digital objects that coexist in the real world. This advanced technology allows our brain to naturally process digital objects the same way we do real-world objects, making it comfortable to use for long periods of time.
Can someone explain in detail how this works and is better / different than a display? Also if this will kill the entire OLED business.
That's right: Apple had the product ready for sale, and they still didn't have the product as the first thing you saw. You just get a glimpse at the end.
Two interesting things I noticed. One is that the demos he experienced were not using the "goggles" pictured. You'll note that in the chronology, he sees the demos, and then later:
"My first close look at the full Magic Leap hardware comes in a secluded space upstairs that resembles a fashion showroom."
So the demos and any tests of field of view were done with surrogate hardware. It could have even been a modified hololens.
Second, they seem to be backing away from lightfield. I'm guessing they are switching to sending a single plane with pre-rendered depth of field, based on Abovitz' super long, super incoherent rambling monologue.
"Suddenly, if the theory was right, technology didn’t need to capture the entirety of the light field and recreate it, it just needed to grab the right bits of that light field and feed it to the visual cortex through the eye."
And of course later:
In theory, a light field should allow you to look past a created image to the reality behind it and have that closer image lose some focus. The demonstrations I went through didn't really present an opportunity to see if the goggles could do that effectively. So I asked if the technology supported multiple focal planes. "Magic Leap's Lightwear utilizes our proprietary Digital Lightfield technology, which is our digital version of an analog lightfield signal,
I just find it hard to see how this is different / competes against the Hololens. They already have a couple years of market presence and after all the marketing/campaigns by Magic Leap, the product is the same (with the same qualms like FOV).
I will say that the creator studio piece looks great, given the Weta and other studio partnerships.
Say it with me: vaporware! I’d love to be proven wrong, but there is absolutely no way they’ll ship a day before December 31, 2018. And I’ll be shocked if even that happens.
They’ve never shown a public demo because the private hardware demo is very likely built with insanely more expensive kit than they could ever sell to a normal person.
Looks like they're not. From the Rolling Stone article: "the company will also take prescription details to build into the lenses for those who typically wear glasses."
That's not great at all if it means you can't share your headset.
It's probably because of the need to go to market quickly (a simple design helps) but I wonder if there is at least one founder of those companies that needs prescription glasses. Apparently all of them have perfect eyesight or wear contact lenses.
What would the ownership situation look like in a company like this? It's raised nearly a couple billion, but has not delivered a single unit.
What is realistically the ownership of the founding team at this point, and how would they go about raising all this capital and still maintaining some semblance of ownership and control?
I am really looking forward to AR. I thought microsoft mixed reality was both AR/VR, then found out MR is just a marketing term. The MR devices do not have AR capability.
So Magic Leap would be the first commercial AR product? That's kinda neat. 8K VR will be out in 2018, but also the first AR.
Whoa magic leap finally hatched. Buddhist philosophy and an obsession with the cgi computers from iron man - I've been mentally imagining how ar labels and menus looked over my real life for years! This is a dream come true
It looks like they're lagging far behind the industry with the input device. Vive and Oculus are very focused on finger tracking / improving the motion controller experience. I'm curious how far behind the rest of the device is.
They could literally just slap a LeapMotion on it and be one-up on Oculus and Vive for the types of interactions that would be useful in AR.
I've used the LeapMotion in VR and it's amazing. The VR headset controllers work well for their purpose, gaming - low latency, high reliability, high freedom of movement (can put it behind you etc.), and having physical buttons. But the LeapMotion tracks your fingers and feels really natural - without holding anything or placing trackers.
Is MagicLeap's Lightfield tech the same as that of Red's $1200 holographic smartphone display (announced a few months back)?
CEO Jim Jannard has revealed that RED is creating the screen in partnership with a company called Leia Inc. A spin-off from Hewlett-Packard labs, it calls itself "the leading provider of light field holographic display solutions for mobile," and the key words "light field" gives us a pretty good idea as to how it works.
The medium brings new opportunities sure, but also its own challenges. For example, capturing convincing 3D representations of live people is really difficult. Even just motion capture in specially designed studios isn’t easy.
Besides 2D overlays, I suspect that most AR applications will use avatars instead of trying to digitize real people. An unrealistic or cartoonish avatar would also avoid any uncanny valleyness, and quite likely would even be part of the appeal.
Microsoft has 2 Mixed Reality studios now that do 3D capture, one in SF and the other in Seattle, which they rent out for VR game creation and other purposes.
NVidia also has a 3D video capture tech called Virtual Eye. Rumor has it modern compression brings the video within 20% of 2D video size.
While it's not yet feasible for an AR headset to do such dense capture, the tech is not far off.
Whilst I am excited about the prospect of doing this myself, in the hands of the advertising web this is effectively "advertise inside your competitors' store(s)"
I am sorry, but the scrolling on the website is a bit annoying. For one, it wouldn't move until the images loaded, and then the mouse is constantly confused between zooming and scrolling up and down.
I like the separation of glasses from compute power. The design itself seems cool, a little bit inspired by Ghost in the Shell. But imagine putting this on average dude, the coolness is gone ;)
I am really excited about this announcement! Hope to get my hands on one of these and experience it. With future updates, I hope the glasses will look less bulky.
Makes sense. One of the ways well-known authors make money is by selling bogus endorsements. Basically just giving someone the right to quote them that they like their product. I doubt Stephenson's "employment" with Magic Leap goes much deeper than that.
This is a nonsense name, some useless design effects and the first sentence I read is "We're adding another dimension to computing". Which is an insult to everyone working in computing to this day because it claims everything we've done is a null set.
Can someone please tldr this without marketing bullshit?
It was likely a blind copy-paste by the submitter, or submitted using the HN bookmarklet. You can contact the mods via the Contact link in the footer and ask them to update the link.
So why does AR even exist? Is it too expensive to just have an outward facing camera (or 2 or 5) and re-render everything on the eye screens in real-time?
I'm so shocked at this question that I don't know how to answer it other than: yes, it is very expensive to render at high resolution, high FOV, and low latency.
So AR is essentially a transitional technology until we have better computers?
e: so I realize AR might be around for a while in things other than Goggles (like phones where resources are limited), but for fully goggled in people I assume VR > AR in nearly every way.
Because you can't render true black and rendering the world gives the developer more control over what the user sees.
I have no doubt that AR will power many "low end" gadgets like google-glass or snapchat's glasses, but for going full gargoyle mode, I imagine the ideal is a fully computer rendered landscape, even if it is mostly true to what you would see in the real world anyway (since it offers so much more power).
not sure if this page is aimed at regular people, but by reading it I don't know what this device is. I would guess some kind of AR. what I can read is that it's for creators and it will change the world.
Opened the link on a mobile phone. Got a full screen image with the following text: "welcome to day one". OK, let's scroll a bit. This is the text that follows: "We're adding another dimension to computing. Where digital respects the physical."
What's this site about? What's the product?
I understand that designing a website is tricky, but - to me - this site seems an example of something aesthetically pleasing, but very cryptic from a content point of view.
This kind of thing drives me crazy! But, for a small project that’s not obvious to anyone - this makes sense.
Look at Apple’s full iPhone landing page. It’d be silly to explain at the top “iPhone is a wireless digital communication device.” It’s obvious that dude is wearing glasses that look like AR/VR headset. You ain’t gotta spell it out for people.
It would be silly because everyone knows what an iPhone is. Not everyone knows what Magic Leap is. When I first saw it I couldn’t figure out what it was. Bad website.
I agree wholeheartedly. You're doing a very bad job with your copy if I don't know what you're selling within the first 3 seconds of seeing it.
Just as bad on my iPad. The in-app browser hung and froze for a good twenty seconds. After it unfroze all the images were in a fuzzy low resolution and it took me an embarrassingly long amount of time (a couple of minutes, at least) to figure out what the heck it was all about.
Interesting tech. The first reviews we have of the Magic Leap One say it has the same drawbacks as Microsoft's Hololens: both are limited in real-world field of view and constrained by the design. We covered the Hololens use at the LA Auto Show, which is now in place at the Petersen Automotive Museum in Los Angeles. https://latechnews.org/mixed-reality-hololens-experience/
So...how's this different than the Microsoft HoloLens? Seems to be a huge let down considering the hype machine.
Also, the form factor makes me want to vomit. Look there are 2 possible market segments you could go for that might buy this thing - the staid corporate type that is potentially looking for a toy that they can convince their SO is actually for work. In that case it should look like a pair of chunky raybans or horn rim glasses. The other segment is the hip crowd that wants to have something flashy and fashion forward. Just google "cute <insert preferred gender here> wearing ski goggles". People love those things.
Instead what they made is something you put on your face that makes you look like an asshole. No one likes looking like an asshole.
The cage that houses the cameras and the viewing screen is made of plastic. Any idiot with a 3d printer can make this in their garage. The important bits are all cabled to a hip harddrive anyway (also, really no wireless?). You have to ask yourself, if they can't do the easy things, why would we expect them to do the hard things (ie make this thing actually work)?
A true light field projection would be nice. If you look at the background in a 3D movie it's blurry; with a light field it would come into focus.
This is probably also what makes the projection bulky and why there are optical fibers running up (if it was anything else you could have used a single one). If this delivers what it promises, I wouldn't even mind wearing a light backpack for it.
After years of nothing to show and an astounding $1.8bn + in funding your expectations should have been very low. I mean, when has that ever worked out? If Magic Leap is still around in 5 years I'd be extremely surprised.
They’re doing hardware research and development. They’re not building a Web 2.0 bootstrap site that is going to get “just 1% of college students” to pay $10 per month.
You are aware that the power or intensity of photons does not diminish over range, correct? Things that are closer appear brighter because they cover a greater field of view. You could keep something at the same luminosity but make it larger, and it's the same effect as bringing it closer. Also, you see precisely because photons are beamed into your eyes.
This one's a sure piece of junk :) which will not appeal to 90% of the population. Who will wear that device and roam around carrying a light pack and a controller? There is a long way to go, and Apple will refine this piece!
They haven't unveiled jack shit. Just another bunch of 3D renders. Unveiling your product should mean a little more than some sketches of what you'd like to build.
This is actually really exciting. Still not a lot of details on specs or pricing (which is what will ultimately determine whether this succeeds or not), but from what we've seen so far I think this is the first consumer-focused AR product which has the potential to gain a real substantial level of adoption. The Oculus Rift DK1 of AR, as it were.
Though with recent advances in AR and VR tech, the Magic Leap doesn't seem as magical as it did when they first started teasing it several years back. Looking at the trends over the last year or so, I think it was only a matter of time before _someone_ came out with a product like this. (In fact, what they announced here today almost seems like a cross between a Microsoft Hololens and an Oculus Santa Cruz headset.) The real innovation Magic Leap seems to be bringing to the table is their "lightfield" display; though it's hard for me to judge how big of a deal that will be without trying it for myself.
> Abovitz' view that this first release of hardware is workable and good, could explain why they’re calling it the Magic Leap One: Creator Edition. To Magic Leap, creators are developers, brands, agencies, but also early adopter consumers. “The consumers who bought the first Mac, or the first PCs,” he says. “Everyone who would have bought the first iPod. It’s that kind of group. But it’s definitely not just a development kit. If you’re a consumer-creator you are going to be happy.”