The smallest large display would be projected straight onto your retina (hackaday.com)
163 points by rolph on April 20, 2020 | hide | past | favorite | 91 comments



I'm not sure if there is a good short primer on Retinal Displays or not, but I don't think this is it.

As the article itself outlines, it's been decades of research and we haven't really been able to make any practical progress yet. There are a lot of reasons for this that are as much social as they are technical.

At my previous company a few years ago we took a crack at a concept [1], but never got it off the ground because prototype costs were in the millions and there are only a few places that we could source the kind of hardware you need to build these things.

Last I checked in 2017, Oculus/Facebook was making the most progress on this but even then were having trouble miniaturizing things. I'm really curious where they are with it.

[1] https://pdfaiw.uspto.gov/.aiw?docid=20180131926&SectionNum=1...


What exactly cost so much? You can build a retinal display using a $200 off-the-shelf laser projector if you turn down the power and add less than $100 in lenses. I've done it, and I've even managed to get it up to 8K resolution with some more time and money thrown at it. I've since formed a company to try to commercialize it: http://www.alphalux.io


Not sure what you are attempting to commercialize, but your website doesn't explain at all what you have made. Unless I'm supposed to guess it from your background picture of 3d rendered glasses with some sensors in them...

From most studies I've read on the subject, there are a couple of big issues. Firstly, physics is hard. It's very difficult to get a useful amount of digital information presented that closely to your eye, where it is in focus and comfortable to look at. Scaling up resolution beyond a low-res screen is even more difficult. Secondly, the tech just isn't there yet. The closest thing I've seen in the market is Google Glass, and that was a huge flop. It also looked dorky as hell, while providing very little real value. Sure, you could read a text, if you squinted and focused your attention up and to the right, but at that point, pulling out your phone is just as easy. It also makes you look like a strange android while taking your attention away from the real world.

For a product like this to work in the market it has to meet a lot of requirements. Resolution, invisibility (as in, it doesn't feel like you're wearing a clunky awkward device on your face), battery life, safety (you're shining light into your eyes), and provide real, useful, functionality.

Useful functionality, to make it worth caring about a device like this, is good augmented reality integration. And I mean very good. If you put the device on your face and the mapping of the real world stutters for a second, you're going to hate it and never use it - and no one will buy it.

Also, no, being able to project a little cartoon monster on the surface of a table is not "useful" AR functionality...


> Also, no, being able to project a little cartoon monster on the surface of a table is not "useful" AR functionality...

I would think being able to do that means you've solved most of the hard problems you mentioned, so if it can be done well it means we've achieved a certain level of usefulness.

Sort of like how bouncing a white square between two movable white rectangles was a "useful" bit of functionality for consumer AV electronics. Pong isn't exactly blowing anyone's mind now, but it did mean they had to solve a lot of problems to make a machine that could interface with current televisions, deal with user input, do it and update the display within an acceptable time period so it was responsive, and hit a cost and form factor so the general public could make use of it.


> I would think being able to do that means you've solved most of the hard problems

I guess the other hard problem, besides just creating the tech, is real-world application.

Reminds me of the Leap Motion device. The creators made some really cool tech, and it works pretty well for what it is, but most people struggle to find a really useful application for it.


You think Google Glass is the closest thing to ever be made or the closest you've personally experienced? The Hololens and the Magic Leap execute much better on the idea of a HUD.


Hololens and Magic Leap are worse than Google Glass in the "looking dorky" and "looking like a strange android" sense.


Even Steve Mann couldn't get past the dork factor, but having waypoint visual reminders is a wonderful option.


I don't know if you have a working project, but if you do, let me throw an idea at you. Instead of trying for an 8k projector, give me something that can handle a couple of lines of text -- say 6 lines by 25 characters, and supports a simple serial API.

Augmented reality is a great dream, but an unobtrusive heads-up-display for simple information would be revolutionary in itself. Baseline applications like a clock/calendar/compass, maybe reminders, or a no-look note-taking tool. Next-gen involving real-life closed captioning, or, when supplemented with a camera and an offline database, a basic "who is this person I am talking to" / protocol officer. Further than that a very rough "am I facing the right way" waypoint finder, etc.

If the hardware can be made cheaply for this sort of application, use cases will emerge faster than you can shake a stick at. Overlaying reality, etc., are way less interesting without huge amounts of compute.
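To make the "simple serial API" idea concrete, here's a minimal sketch of a framing layer for such a 6-line-by-25-character device. Everything here -- the command format, the function names -- is invented for illustration; a real device would define its own protocol:

```python
# Sketch of a framing layer for a hypothetical 6-line x 25-char HUD.
# The wire format is invented: "P<row>,<col>:" + text + newline, ASCII.

ROWS, COLS = 6, 25

def frame(row: int, col: int, text: str) -> bytes:
    """Build one positioning+text command for the hypothetical device."""
    if not (0 <= row < ROWS and 0 <= col < COLS):
        raise ValueError("cursor out of range")
    text = text[: COLS - col]          # clip to what fits on the line
    return f"P{row},{col}:{text}\n".encode("ascii")

def render_clock(hh: int, mm: int) -> bytes:
    """Example 'baseline application': a clock on the top-right corner."""
    return frame(0, COLS - 5, f"{hh:02d}:{mm:02d}")
```

A baseline clock app is then just calling `render_clock()` on a timer and writing the result to whatever serial port the device exposes.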


Unfortunately the hardware to do this is harder than an LCD or even DLP. Otherwise we'd have it already.

Galvo projectors, as used in HoloLens etc., are vector scanners, but tiny projectors aimed at the retina are extremely hard to pull off; even ones aimed at glasses are very hard and lousy. Aiming them at a wall is OK, though.


I'm not suggesting a vector display. The VT100 had 240 scan lines and 768 dots; half of this would be fine.


Without knowing more about your solution I can't comment on that specifically.

I do know that the mems, fiber coupling to the mems and the waveguides necessary for wide enough FOV and variable focal length are very much bespoke right now and thus costly.


Where do you get an 8k laser pico projector? I don't think those exist on the market.


You can get 720p pico projectors off the shelf, but to do 8K, you have to make a custom design with faster laser modulation.


Unfortunately laser modulation is pretty much the only solved problem.

At higher scanning rates the achievable angle of deflection with commercially available MEMS scanners (the type of scanner used in the 720p pico projectors) is too small to be useful.

The other class of laser scanner potentially relevant to practical high resolution projection are the solid state laser scanning systems, which suffer from limited angle of deflection AND a limited number of resolvable angles.

60fps 8k is achievable with a mechanical system using a turbine-driven polygonal mirror. Unfortunately, the precision machining, several tens of kilowatts input power and requisite hearing protection present some obstacles to commercialization.

As a general rule, if a raster laser projector claims to achieve much over 720p they are either lying, mistaken, or out of your price range. It requires an order of magnitude increase in scanning rate to scale from 720p to 8k. The next incremental milestone for this sector is "actually achieving 1080p instead of just lying about it".
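To put rough numbers on that scaling claim (back-of-the-envelope only; this ignores blanking intervals and overscan, so real requirements are higher still):

```python
# Back-of-the-envelope scan rates for a raster laser projector at 60 fps.
# Ignores blanking and overscan, so real requirements are higher.

def line_rate_hz(vertical_lines, fps=60):
    """Mirror fast-axis line rate: every line traced every frame."""
    return vertical_lines * fps

def pixel_clock_hz(h_pixels, v_lines, fps=60):
    """Laser modulation rate: every pixel of every line, every frame."""
    return h_pixels * v_lines * fps

r720 = line_rate_hz(720)           # 43,200 lines/s
r8k = line_rate_hz(4320)           # 259,200 lines/s: 6x the mirror speed
m720 = pixel_clock_hz(1280, 720)   # ~55 MHz modulation
m8k = pixel_clock_hz(7680, 4320)   # ~2 GHz modulation: 36x
```

So the mirror has to sweep 6x faster while the laser modulates 36x faster -- which is why modulation is the solved part and the scanner is the wall.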


I thought high resolution laser projectors were just using the laser as a light source and forming the image using a DLP or LCOS imaging element? Scanning 4k+ seems like it'd need some bonkers specs on the equipment.


And compare that to an 8K LED-lit LCD projector with waveguides for depth -- the same technology as used in VR goggles, but with much more optics, and also smaller.


Neat -- what's the average time to complete a full 8K scan? I imagine it needs to be on the order of <1ms for it to look like a complete frame and not show rolling-shutter-type effects.


I'm fairly confused - that would be an absolutely massive breakthrough that several companies worth billions are looking for. What's the downside?


This is cool. I would like to subscribe to your newsletter (seriously).


Send me your e-mail using the Contact button on my site, and I'll keep you updated.


Which site? Which contact button? Your profile here is empty. I'd also be very interested!


If the image doesn't sit superimposed in the center of your vision no matter where you look, that's not a retinal display.


Seems like a nitpicky distinction to make. If I could buy a small cheapish device that would replace a large 4k/8k monitor that costs thousands I'd buy it in a heartbeat even if it didn't track which direction I was looking in.


I wouldn't be surprised if Valve have at least thought about going down this route.

They seem to be really enthusiastic (perhaps terrifyingly so) about Brain-Computer Interfaces, and being able to project directly onto the eye in a VR headset might be a good stepping stone for them.


In an interview, Jeri Ellsworth stated that when she was at Valve around 2012, they were exploring all types of near-eye displays for use in their eventual VR headset, and one of the displays they tested was a laser retinal projection display. It was specifically mentioned because she pointed out the safety concern they had with projecting a laser into the eye.


Last GDC there was a cool BCI talk from Valve[0], if I had money I would bet that the next VR headset from valve would contain some brain activity sensors.

And in this interview with IGN, Gabe Newell talks a bit about where they stand with BCI [1]. Gabe: "reading and writing to someone's motor-cortex is much more of a tractable problem than making someone feel cold". The idea that different parts of the brain have different levels of interface-usability was really mind-blowing to me. I guess it's easy to fall into the trap of seeing the brain as one thing, while it's doing a lot of different things which you can interact with in different ways.

[0]: https://www.youtube.com/watch?v=Qhj3C1H5JWo [1]: https://youtu.be/I0zXkwLs_lo?t=647


> Gabe: "reading and writing to someone's motor-cortex is much more of a tractable problem than making someone feel cold"

This is probably very true, but that doesn't mean that accessing the motor cortex is easy. In prosthetics, the two workable approaches currently are an implanted BCI, or using bits of muscle as 'amplifiers' to turn the motor nerve signals into something that can be read via an external electrode. We're a LONG way from being able to parse an EEG into physical sensations in any practical sense.


Hi Andrew, I saw your VB FB AR article from 2017 and checked out Pair3D. Can you comment on Bosch's approach vs Intel Vaunt (now defunct)?


I liked both and they seemed like good first starts from what I saw. Never got to personally try them so my knowledge is pretty limited.


Sounds like we wait for some technologies to reduce the prototype cost by two orders of magnitude. What makes it so costly?


> What makes it so costly?

I've seen this question a few times, so I'll try to address...

Yes, you can take a bi-axial raster-scanning MEMS mirror (similar to what Microvision uses in their Pico projector, or those supplied by Mirrorcle), shine it into your pupil (at the appropriate distance) and BAM!, you have a point-source VRD. However, I would heed the warning on Microvision's Pico projector, as those ARE NOT eye-safe lasers. It is not difficult, however, to get your hands on eye-safe laser sources suitable for VRD.

However, with this simple device your FOV will be limited by the angular range of your MEMS. Your resolution will be limited by the pulse width of your synchronized laser. Most important, though, is that the exit pupil (eyebox) is very small, such that when you move your eyeball a few degrees, the image will start to disappear. So, you need redundant point sources to expand the FOV and also expand the eyebox to permit eye rotation. These point sources must be spaced in a matrix of less than 2mm x 2mm (the typical minimum dilation of the pupil in bright ambient settings).

That's a lot of point sources, so if you try to just use a bunch of MEMS mirrors, the driving electronics will generally occlude your natural view of the environment. That's not a problem for video see-through displays, but for optical see-through displays that's a deal breaker. So, you're back to trying to disassociate the light beam point sources from the scanning-driver electronics. The typical strategy is to use some sort of waveguide - to transport the light beam (for a given portion of the image) through a transparent material (e.g. glass), then get that beam to outcouple from the waveguide at the precise location, then have that light beam continue to raster scan. Now you need to do that for a few hundred light beams (per eye!) to fully fill the natural human FOV with a composite image greater than 35 pixels per degree (for text readability).
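As a back-of-the-envelope check on the "few hundred beams" figure, using the 2mm pitch and 35 pixels-per-degree constraints above -- note that the eyebox size and per-source FOV below are assumptions picked for illustration, not measured values:

```python
import math

# Back-of-the-envelope beam count, using the constraints stated above
# (redundant sources on a <= 2 mm pitch, >= 35 px/deg for readable text).
# The eyebox size and per-source FOV here are illustrative assumptions.

def sources_needed(eyebox_mm=(10, 10), pitch_mm=2.0,
                   total_fov_deg=(100, 60), per_source_fov_deg=(30, 30)):
    # Sources per view direction, so the image survives pupil movement:
    per_direction = (math.ceil(eyebox_mm[0] / pitch_mm)
                     * math.ceil(eyebox_mm[1] / pitch_mm))
    # Tiles of per-source FOV needed to cover the full field of view:
    tiles = (math.ceil(total_fov_deg[0] / per_source_fov_deg[0])
             * math.ceil(total_fov_deg[1] / per_source_fov_deg[1]))
    return per_direction * tiles

def pixels_per_degree_target(per_source_fov_deg=30, px_per_deg=35):
    """Linear pixels each beam must raster across its own tile."""
    return per_source_fov_deg * px_per_deg
```

With these assumed numbers: 5x5 = 25 eyebox positions times 4x2 = 8 FOV tiles, i.e. 200 beams per eye -- squarely in the "few hundred" range described.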

All of the above items are achievable, but it requires precision optics and alignment to get everything working in harmony. Add the component of affordability, and you've got a pretty big engineering challenge on your hands.

I'm sure someone can do it. This is just one viable path to truly transformative AR displays, but I think it's the most promising.


Saying this, I feel a bit like somebody in the 1800s saying they don't want to board a train because they don't want to suffocate, but here it is:

I don't want any lasers shone directly into my eyeballs because I don't want to go blind.


Well, it is definitely one of those technologies where it doesn't pay to be an early (first decade) adopter. For all those people who wouldn't board a train, there were also people who didn't take their cocaine prescriptions, lick their radium paintbrushes, or take their thalidomide.


Virtual Retinal Displays (VRDs) have been around for a while (1), and have more recently been entering the consumer market. I didn't see the article mention the Avegant Glyph (2).

1. https://en.wikipedia.org/wiki/Virtual_retinal_display

2. https://avegant.com/video-headset

Edit: formatting


Avegant just has you looking at a DLP "screen". There's nothing special or magic about it. Each mirror/pixel you look at is either reflecting laser light or not reflecting laser light.


For (2), it looks cool but there’s no way to order it and the website doesn’t look updated since it won that award at CES 2017. I’ve seen so many of these headsets over the years that exhibit at CES and then are never heard from again. I’ll believe it when I see it in real life.


A DLP projector for your retinas. Cool concept. I bet the parts are cheaper than the equivalent laser setup.


Looks like they're dead, I wonder if the MEMS latency issues killed it.


I guess the ultimate version of this is figuring out how to get your brain to generate the right image via direct stimulation of the visual cortex, without painting anything on your retina at all.


I imagine this would result in things like in Spider-Man: Far From Home, where you can be given an environment that isn't reality. Could be very malicious in the wrong hands.


See: Black Mirror. All technology is bad in the wrong hands. Technology is always just a tool. Smartphones and webcams don't spy on people. People spy on people. People kill each other. People steal from the poor. Et cetera. The shape of the tool can be made to be malicious. For instance, a gun is only good as a ballistics toy that you have to be very creative to use constructively, or it can be used to kill. Nuclear bombs could be used for the most efficient nuclear power generation available (haw haw but it was seriously considered [0]).

So, some questions we must always ask about a technology are:

1. How malicious is its shape?

2. Is it too powerful for use?

3. Does society have enough control over itself to actually prevent it from being made?

This is one of the lines of thinking you see around AI, as the emergence of a powerful AI is seen as a fast approaching inevitability.

0. https://en.wikipedia.org/wiki/Project_Gnome


I sort of take the contrary view. I think 24 hour news coverage is inherently bad and cannot ever be good, because it doesn't give anchors enough time to react and analyse events, or viewers to digest and critically examine what they have watched. There are many examples like this, where the form itself is actually the problem even without any specifically malicious use.


You would say the shape of fast-response news is malicious. I disagree. I think speed versus accuracy is a tradeoff. There is a benefit to having fast-response news. I don't have the patience to learn all of the psychological models necessary to substantiate this claim, but I do know people are more motivated to take action by events that happen closer in time. Some details in breaking news will be wrong, but you wouldn't want to wait a week to learn about a school shooting or the stopping of all international travel.


Fast response news and 24 hours news are different things. "Fast response" also covers things like Twitter, which, while rife with its own problems, is at least theoretically capable of giving you an immediate, succinct news bulletin and then shutting up.

24 hour news stations can't do this. They have to blather and overanalyze rumors and twist words and sometimes lie because they need people to watch all the time, even when nothing is happening. Plus, since they are so mass-market focused (they have to be because of the revenue demands of the medium), any in-depth analysis is off-limits. Thus, viewers come away with surface-level, emotionally-charged and largely inaccurate ideas--flaws which are inherent to the system.


> Smartphones and webcams don't spy on people. People spy on people.

Well yes, but those smartphones were designed by people, and many of them are designed to spy on people. While it’s true that there’s (almost) nothing about smartphones that makes them inherently intrusive, that’s cold comfort when surveillance is the industry standard.

(But yeah, I’m keeping a close eye on Linux smartphone development for that reason.)


It also seems more practical to plug the signal "into the cables" than to try to pass information through the complexity of the eye. In which case I believe it is the thalamus[1] that we should aim to plug into. I can't imagine a whole lot of research can be carried out without running into issues with ethics though.

[1] https://i.imgur.com/GsRkEg3.jpg


If we are going by analogy, that would be more like plugging the USB camera directly into your CPU bus instead of USB. Your eye is a fine interface. Messing with the literally wet neural interface sounds very complicated.


Counter-analogy: it's the difference between taking screenshots and taking pictures of the screen with your camera.


Yes, analogies only go so far, especially in domains which are new or evolving. Which is why I added the sentence about literally wet neurons. That said, it's not without precedent; my geeky brain found these things very interesting:

http://www.motionfx.gr/film-overview.html

https://www.blunham.com/Radar/TransmissionLines/PDU/PDU.html


Is that just a "that would be cool" or do you know of research into this (even so much as a feasibility study)?


There’s lots of stuff on BCI. People have been pursuing it for decades, and the technology is maturing to handle the quantity of data. The limiting factor is the (apparently) obvious: skulls are thick. You get a terribly weak signal through a skull. Who’s going to get a sub-skull implant so they can play VRChat 2?


I would keep an eye on Valve.

I don't think they're anywhere near doing it yet, but Gabe Newell heavily implied that he wants to steer Valve in this direction in his recent interview about Half-Life: Alyx on IGN.


I wonder if the logos with the red handle in the eye and then on the back of the neck were foreshadowing.


That would be cool, but I doubt that was the original intent, given how old that logo is and that the "red handle" is just a valve, simply referencing the company name.


This is what neuralink is doing.


They are specifically focusing on the visual cortex?


What kind of health issues would this type of technology cause? I can't see it being very good for your eyes to have them focused on lasers being projected directly into them at extremely close range for long periods of time on a regular basis.


Your eyeballs are being bombarded with photons emitted by the sun and your monitor right now as we speak. Furthermore, you're perpetually bathed in cosmic background radiation.

A "laser" just means that the emitted photons are phase synchronized, but says nothing about the intensity. The range is also irrelevant. For a radially emitting source, intensity scales as the inverse squared distance, but assuming you control the output, it's a moot point.
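For reference, the inverse-square relation mentioned above applies to a radially emitting point source -- and is exactly what a collimated laser beam largely sidesteps, which is why only the controlled output power matters:

```python
import math

# Inverse-square falloff for a radially emitting point source. As the
# comment notes, this is largely moot for a collimated laser, whose
# beam barely spreads; what matters is the output power you control.

def irradiance_w_per_m2(power_w, distance_m):
    """Power spread over the surface of a sphere of radius distance_m."""
    return power_w / (4 * math.pi * distance_m ** 2)
```

Halving the distance to a point source quadruples the irradiance; a collimated beam delivers roughly the same irradiance regardless of distance.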


I'd also be concerned about, what happens if something goes wrong? What happens if there's some sort of power delivery issue, and the laser burns far too bright just for an instant before dying?

Is that an unfounded concern?


It's analogous to in-ear monitors.

https://blog.64audio.com/protecting-your-hearing-while-weari...

> Most pro-level IEMs are capable of producing SPLs (sound pressure level) well above 120db. A phantom power spike or a microphone falling on the floor can easily produce a signal loud enough to damage your hearing.

The article above recommends putting a brickwall limiter on the monitor mix. However, a brickwall will only do so much for a sudden screech centered at 3kHz, right in the zone where human hearing is the most sensitive. And what if the limiter gets disabled? Murphy's Law guarantees that at some point the safety system will fail.

The only way to guarantee that devices like these won't emit signals which damage your sensory organs is to make them physically incapable of producing such signals.
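For what it's worth, a brickwall limiter really is just a hard ceiling on sample amplitude -- which is why it caps peaks but can't help with a sustained signal right at the ceiling, and does nothing at all if the limiter stage is bypassed or fails:

```python
# A brickwall limiter is just a hard ceiling on sample amplitude.
# It caps instantaneous peaks, but a sustained screech right at the
# ceiling still gets through -- and everything gets through if the
# limiter stage itself is bypassed or fails.

def brickwall(samples, ceiling=0.5):
    """Clamp every sample into [-ceiling, +ceiling]."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]
```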


Computer interfaces are dangerous

I have been wearing headphones a lot recently and I think I got an ear infection from them. I barely slept the last 2 days because my ears are itching so much. Although only when I lie down, not when I sit.

And I purposely have been using over-ear headphones rather than in-ear devices, so I thought you would not get an infection from over-ear headphones.


I can't use in-ear headphones at all. Ever since I was young they've given me ear infections. They're also terrible for your ears. Your tragus is there to help protect your eardrum from loud noises, among other things. Sticking noise-emitting things behind it is a bad idea.

I've been alright with over the ear style ones for the most part. Though I can see why they, noise cancelling ones especially, could cause infections if worn for long periods.

They create moist anaerobic environments liked by a lot of bacteria and because headphones tend to be left around or carried outside, they're not the cleanest things around. So as they sit nice and snug around your ears, bacteria have a nice happy place to live all cuddled up close like to the entrances to your ears.


I mostly don't use ear buds, but when I do/did I'd wipe them with rubbing alcohol before using. Not, like, constantly, but after putting them down for more than a moment, every trip to the gym, etc.


I've been prone to ear and sinus infections since I was a kid. Apparently I have narrow sinuses. I end up with a sinus infection nearly every time I get sick.

To be fair, I find the sound quality lacking on most ear buds, even expensive ones.

When it comes to low frequencies, the size of cone makes a difference. You'll never get the same kind of bass from an earbud as an over the ear headphone or a speaker.

Music just isn't the same without bass or with lackluster bass. Bass in most tracks is the bridge between melody and rhythm, and in many genres it actually carries the song. If I had a choice between high-quality treble with lackluster bass, like most earbuds offer despite the marketing, and a lousy high end with full bass, I'd take the latter every time.


It’s not a one to one analogy. Any electronic system can be made to be failsafe. Audio mixing as it exists today doesn’t have an easy failsafe solution to limiting output, but that doesn’t mean such a system couldn’t easily be made. Literally anything you could name that produces or consumes more than 100 kW is failsafe. There is no reason an eye laser couldn’t be as well.


Sure. It's as implausible as (say) your car not starting, or a water pipe bursting, or your phone arriving DOA.

Personally, as cool as it sounds, I'll pass.


The dose makes the poison.


For acute poisoning. You're forgetting chronic poisoning, such as heavy metal poisoning.

Which is more what I was talking about, low exposure levels over regular long term periods.

Regular screens and displays cause eye strain and retina damage over the long term as it is.


No, I'm not forgetting chronic poisoning. The dose makes the poison, acute or chronic. That's especially true for heavy metals.


So the power from the optical system is spread over the retina by the MEMS mirrors. What happens if the laser stays on and one or both of the scanners stops?


That is a failure that could potentially result in damage to the eye.

MEMS mirrors have built-in angle sensors, and when these report that there's irregular or no detected movement in one or both directions, the lasers are turned off.

The lasers in a MEMS mirror near-eye display operate in the microwatt range. Low power laser pointers operate in the milliwatt range, and the safety mechanism for those is that your eyelid is expected to shut to block off the laser in less than a second. Therefore, you have more time to shut your eyelids or remove your eye from the MEMS mirror display if there's a failure and the lasers fail to shut off.
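A crude way to see that power margin, assuming damage scales roughly with delivered energy (power x time) at these exposure times -- a real analysis would use the IEC 60825 / ANSI Z136 exposure limits, which are more complicated than this:

```python
# Crude energy-dose comparison. Assumption (flagged): damage threshold
# scales with delivered energy = power x duration, which oversimplifies
# real laser safety limits.

BLINK_REFLEX_S = 0.25  # aversion-response budget assumed for a ~1 mW pointer

def reaction_budget_s(power_w,
                      reference_power_w=1e-3,
                      reference_time_s=BLINK_REFLEX_S):
    """Time to deliver the same energy dose at a different beam power."""
    return reference_time_s * reference_power_w / power_w

# At 10 microwatts instead of 1 milliwatt, delivering the same dose
# takes 100x longer -- tens of seconds instead of a blink.
```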


This is the first scary question that everyone asks with these devices, because the answer is "you burn a hole in your retina".

It is an inherent danger in the system, but I would be interested in reading a full risk analysis on this type of system. Industrial processes always have risk analysis and avoidance, with failsafe for all possible failure modes being desired. I wonder what kind of hardware failsafes you could put into this. You never, under any circumstances, want to burn someone's eye. But you also don't want someone's expensive toy to brick itself because they jumped with it on.



You close your eyes immediately and take the glasses off, just like when an errant sunlight reflection hits your eyes just right.


Excellent reality check "Lasers Are Not Magic" http://doc-ok.org/?p=1386

"Can I make a full-field-of-view AR or VR display by directly shining lasers into my eyes?" -- No.


Immediately thought of the Bosch Light Drive[0] product when I saw this. There's been no news since they were unveiled, though, as far as I can tell...

[0] https://www.bosch-sensortec.com/products/optical-microsystem...


That is mentioned in the article -- BML500P is the part number for the unreleased thing.


Ah, good catch. I skimmed for the Bosch product but didn't see it the first time.


Seems like this is something that could eventually be placed onto a contact lens (maybe just the reflective part) to help with alignment.

Also I wonder if a lightfield concept could be used to allow the lens in your eye to focus the image so you don't get that weird 'permanently in focus' effect that has to be somewhat jarring.


VRDs sound great in theory, but the more you learn about them the more you realize that they aren't the right solution for the problem. Compared to other AR optics approaches, they're vastly more expensive, more dangerous, and have more limitations to overcome. For example, a VRD will only display if your eyes are looking in one direction (see the North glasses), so a VRD-based headset would need eye tracking (and vastly more complex optics to move the image around) for basic use.

Diffractive optics can achieve much better effects at a fraction of the cost and without any risk.


I would be really interested in the visual correction opportunities offered by this kind of technology. I suffer from a rare cornea disease called Pellucid Marginal Degeneration that results in a very disfigured cornea. It would be amazing to have a pair of glasses that scan your cornea for abnormalities, and then adjust the angles at which photons hit your eye in a way that "cancels out" the astigmatic abnormalities.


That is what I am interested in too.

I have 3 diopters astigmatism. Or more, the optometrists have trouble measuring it, their measurement fluctuates between 2.5 and 3.5 diopters. If the glasses just slip a little, I can hardly read anything.

I hope I do not develop anything worse. My father had keratoconus.


"The smallest large display would be projected straight onto your retina"

Bosch, yes. I am 100% sure I have read about it before on Ycombinator.

https://spectrum.ieee.org/tech-talk/consumer-electronics/gad...


I won’t be an early adopter, but this technology has been a source of great fascination and excitement for me, and I look forward to the day it becomes a mainstream reality.


The prospect of ciliary muscle atrophy (or exhaustion) from longer term use of this is pretty scary. I wonder if they're considering how to abate that.


How do you deal with ciliary muscle exhaustion from staring at a computer screen for extended periods?


I'm glad you said abate and not ablate.


Something interesting for the wearables sector.

I thought this would go along with it for safety's sake:

https://news.ycombinator.com/item?id=22634890


Alternatively, use your optic nerve or visual cortex.


I'm surprised nobody's mentioned Magic Leap: https://www.theverge.com/2018/8/8/17662040/magic-leap-one-cr...



