Google Glass is getting a second life in the manufacturing industry (npr.org)
305 points by happy-go-lucky on March 18, 2017 | 134 comments



It's fairly clear that their quoted expert, Tsai, never interviewed anyone while they were actually using Google Glass.

"With Google Glass, it may look like you're listening to the person in front of you, but you could actually be watching a movie or looking up sports stats."

Unfortunately the problem is the opposite, and more offensive. If you're having a conversation with someone who's wearing Glass, their eye movements make it very obvious when they're looking at it rather than at you, and it makes the wearer look really weird, fashion issues aside. I found the experience quite a bit more offensive than having someone read email on their laptop while you talk to them. Unlike the laptop case, with Glass you get a very clear, direct view of their eyes as they scan whatever Glass is showing them, all under the obvious false pretense of giving you their full attention.

I think one of the big problems with Glass was that they picked the wrong sort of people to be early public users, who then set the tone for the product. A process that ensured only super-enthusiastic users would bother applying is also the sort that would select for the people least willing to notice how others might find certain uses of it rude and annoying.


> I think one of the big problems with glass was that they picked the wrong sort of people to be early public users

Totally agree. As awesome a technical achievement as Glass was, it was the product of 20 years of work by some very odd people (MIT's borgs) who literally wore desktop computers on their backs while prototyping it, ignoring all social convention in the process. While their insights were valuable for their particular approach to wearable computing (HUD-based, data-recall oriented, etc.), that approach isn't the right one for the mainstream, nor are they the right brand ambassadors.

Google's perennial issue is that they're a technology factory that thinks they're a consumer company. The go-to-market strategy Glass should have adopted was in the enterprise, where there are many valuable use cases like the ones in this article, and Google could have refined it there. But Google isn't good at picking markets or any form of understanding marketing, so they decided that a bunch of geeks walking around cafes, bars, and restaurants was the right way to go. sigh


I'd be a lot more understanding if Neal Stephenson hadn't foreseen and lampooned the concept in 'Snow Crash'. Seriously, how many people on HN took one look at Google Glass and thought, "Oh shit, gargoyles!"


I didn't do it until now, when I looked at the actual borgs :)

https://c.o0bg.com/rf/image_960w/Boston/2011-2020/2012/07/15...


Google could do great in the enterprise. But they have been resistant to offering the support that companies want to buy with products they rely on.


Enterprises are really averse to storing/sharing data outside their premises.


Some are, some are not. They are definitely averse to having it mined, though. That should not be a problem for Google if they are charging for the product, rather than trying to recoup costs by selling the data.


All that can squarely be blamed on Sergey Brin.

I guess he let all the comparisons to Tony Stark go to his head...


> But Google isn't good at picking markets or any form of understanding marketing, so they decided that a bunch of geeks walking around cafes, bars, and restaurants was the right way to go.

Isn't Snap currently proving that the market was right, but you need a different marketing copy?


They did the same thing to Ara: interesting tech anchored by a down-market consumer agenda.


I suspect this is just bad phrasing, I imagine it should have been something closer to this:

> With Google Glass on, others around you have no idea if you're actually watching a movie or looking up sports stats rather than talking to them, which can lead to people being uncomfortable in your presence.

(And yes, those "in the know" will be able to tell you're not actively browsing the internet, based on what you're saying and the weird eye movements you're not making. But they might still suspect you started that movie right before the conversation, or that you're getting live notifications from an in-progress game.)

EDIT: However, I also totally agree with the issues around their selective public beta group.


I feel this could be a general problem with augmented reality going forward, especially in the end game where it is integrated into glasses or even contact lenses. As you say, you will realize people are not paying attention to you anymore when their eyes wander off, and in general there would be this distrust that anyone is giving you their full attention. At least that is how I would probably feel.


Snap tried pretty hard to set a different tone for Spectacles, but they don't seem to be doing very well either.


Spectacles never had anywhere near the awareness or hype that Google Glass originally had, and the news I saw surrounding it was always overshadowed by mentions of what happened with Glass. I'm not saying Glass would have succeeded otherwise, but it would have made a far better impression and not set the tone for other efforts.


Need to separate the product from the business. Spectacles, the product, is widely considered to be a hit and has received near universal praise. Snap's business meanwhile isn't as strong.



I think it's safe to say it's way too early to call it a "hit." Remember that Google Glass was also a "hit" among bloggers and techies and other influential people. Of course, in hindsight we know this was the wrong kind of attention.


Considered a hit by who? The blogosphere? They're not even selling well on eBay.


Why would they sell well? They're selling directly online now.

It's taking quite a bit of willpower to stop myself from buying a pair, and I barely use Snapchat.


Snap did a good job limiting supply and building hype for the rollout. Now that Spectacles are generally available, I have reason to believe that sales are below expectations.


They aren't? I thought their glasses sounded pretty cool. What are they like?


I don't think I've ever cared whether a person was looking at me while conversing.


I've walked out on a date because the lady in question could not break away from texting. It's only happened once but I've got better things to be doing than actively trying to miscommunicate with someone.


Or paid you any attention?


Using Glass in a factory environment surprises me. Factories are notoriously loud, so unless Glass is really good at filtering out background noise, I'd imagine that voice commands wouldn't work very well. My experience has mostly been consumer electronics factories in Asia, but I also see the following problems that may or may not apply to domestic factories:

* Equipment with clear resale value has a tendency to "walk away". We worked hard to avoid using consumer electronics such as PCs or phones on the line. When we did need to use them, we had to establish strict policies and secure storage for when the equipment was not in use. Glass seems like it'd fall in this category.

* Even dedicated, laser-based, handheld barcode scanners could be finicky with part labels. Camera-based scanners were unusable due to poor accuracy and latency.

* Internet connectivity is poor or non-existent. WiFi coverage is usually terrible due to physical and electrical interference from factory equipment.

AR work instructions would be a dream--especially if the technology could flag errors. The environment just seems especially hostile for consumer-oriented technology such as Glass. I don't have firsthand experience with Glass, but I'm really surprised this company is reporting success with it.


Several years ago I was trying to solve the problem of voice recognition in a noisy environment. After brainstorming with some colleagues, we came up with the idea of using a throat mike. The only problem was that we couldn't find reasonably priced hardware that would work with commodity computers.


Jawbone Bluetooth headsets actually contact your jaw to use the conducted vibrations to cancel background noise, similar to your idea. Disclosure: I worked on their first generation Bluetooth headset. I'm not certain the current generations still work this way.


I had that one. It was really great in noisy environments... unless there was also wind. Wind outright killed the sound.


Noiseassassin - yup, the later gen ones do as well, but as I understand it, Jawbone is basically dead at this point.


It could get a second wind with voice recognition, because the problem with using that style of mic for comms is that everyone sounds the same through it.


What the heck? I found cheap ones on Amazon designed for the Baofeng ham radios. Surely Shenzhen has figured it out.


What really needs to happen is for someone to figure out how to decode the electrical signals generated by subvocalization, train a DNN to interpret them, and use that.
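Purely as an illustration of the "train a DNN" half (and assuming you could get a clean multi-channel surface-EMG signal at all, which is the hard part), a minimal sketch in PyTorch; the channel count, window length, and vocabulary size below are made up:

    import torch
    import torch.nn as nn

    class SubvocalNet(nn.Module):
        """Tiny 1D-CNN mapping fixed-length EMG windows to a small word vocabulary."""
        def __init__(self, channels=8, n_words=16):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(channels, 32, kernel_size=7, padding=3), nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.classifier = nn.Linear(64, n_words)

        def forward(self, x):  # x: (batch, channels, samples)
            return self.classifier(self.features(x).squeeze(-1))

    # e.g. 250 ms windows at 1 kHz from 8 hypothetical electrodes -> word logits
    logits = SubvocalNet()(torch.randn(4, 8, 250))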


I don't think that is practical without invasive surgery. Afaik the best we can do with external monitoring is roughly tell whether a person is trying to move or thinking about moving. Reading thoughts doesn't seem realistic with current technology.


If you're trying to tap the brain, that's true, but if you could read out the muscular contractions around the vocal cords, you could translate those instead. It's definitely easier to interpret the simple output at the level of muscles than to read "thoughts" in the brain. There are already some prosthetic limbs which use implants to read muscular contractions to control the prosthetic, but those are far too crude to interpret speech.


Google Glass has physical controls on the right side to navigate the UX.


Ah, makes sense. The article's photo even seems to show the operator using those controls. The article mentioned voice commands specifically though, so I was reacting to that.


Yeh, it'd definitely want to be some form of hardened industrial version of Glass, but the base technology (unobtrusive screen in front of eye with data connectivity) would be a dream for me when I'm doing inspections on plant, for example.


I know some people who are working on this stuff. No voice controls, all gesture-based. The idea is to replace service manuals.


Cool. Presumably they're already doing this, but I'd really recommend observation to understand the environment in which the product will be used and then following up with user tests as they prototype. There can be some surprising or unintuitive ergonomic concerns around heavy machinery. For example, factory equipment with automated movement often requires two-handed operation. Even if it could be operated with one hand, many machines will have two buttons--one on either side of the machine--which must be pressed simultaneously. This ensures the operator's hands are out of harm's way during the automated movement. If the operator takes a hand off of a button to perform a hand-based gesture, the machine comes to a halt. Also, operators tend to develop a flow as they perform repetitive operations. If a tool breaks that flow, it will likely be rejected or abused.

It sounds like your acquaintances are targeting service technicians, not necessarily factory workers, so the above examples might not apply, but I'd imagine their targeted industry has its own ergonomic concerns.

Sorry to ramble at you. Just excited because it sounds cool.


That's an interesting idea, but doesn't it add significant training overhead? The value of voice commands is that the user is already fluent in the basic language of the interface, even if the specific commands still need to be learned.


I played with a demo, and no, it doesn't add significant training overhead. It's less like Minority Report and more like an iPad attached to your face.


Isn't noise cancellation a solved problem when using multiple microphones?


The Amazon Echo has a pretty good microphone array, and it still has a hard time picking up my commands if there's background noise. And the background noise in factories is extreme. Notice that in the article's photos, the operator is wearing earplugs with Glass.


It should be a lot easier for a microphone that's right next to your head. Face one or more microphones away from the user, that gives you the environmental noise. Point one or more microphones at the user's mouth (or use some other method). That gives you what you need (signal much louder than the environmental noise, with the right microphone), along with some of that same noise. Apply some noise subtraction magic.

It's actually pretty different than an Echo, which needs to pick out the user's commands via complex signal processing alone.
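A rough sketch of that noise-subtraction step in Python (numpy only; the frame size and the plain magnitude subtraction are assumptions, and real devices use fancier adaptive filtering or beamforming):

    import numpy as np

    def spectral_subtract(voice, noise_ref, frame=512):
        # voice: mouth-facing mic samples, noise_ref: outward-facing mic samples
        out = np.zeros(len(voice))
        window = np.hanning(frame)
        for start in range(0, len(voice) - frame, frame // 2):
            v = voice[start:start + frame] * window
            n = noise_ref[start:start + frame] * window
            V, N = np.fft.rfft(v), np.fft.rfft(n)
            # keep the voice phase, subtract the noise magnitude, floor at zero
            mag = np.maximum(np.abs(V) - np.abs(N), 0.0)
            out[start:start + frame] += np.fft.irfft(mag * np.exp(1j * np.angle(V)), frame)
        return out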


If your raw SNR is low, then you're boned no matter what.


I wonder, how does one approach Google for such business tech? Or do they only find customers themselves?

I couldn't find the landing page for prospective enterprise Glass customers, though there's a Glass at Work partner list with companies that do have real contact addresses.

https://developers.google.com/glass/distribute/glass-at-work


After having a quick look at the site, my understanding is that you don't approach Google directly for this, you either approach a Glass Certified Partner to do the work or you become one yourself. There are no apparent links for becoming a Glass Certified Partner though, so my guess is you simply can't right now.


A colleague recently interacted with Google in an attempt to acquire Glass. The takeaway I got was Google was only interested in selling if the project was "cool" (no definition in sight) and if the quantity was substantial (minimum 100 units I think.)


There's something ironic and lovely about the much-maligned Google Glass becoming a blue-collar tool.


It's funny, I'm an industrial electrician and my smartphone is my handiest tool. I can take pictures from awkward spots like up a pipe behind some bus bars (even thermal), download a manual and keyword-search my issues, video chat with more/less experienced employees to problem-solve, use my phone to troubleshoot cellular issues (M2M IoT), remotely configure network systems, use an OTG cable to serial-interface with RS232/485, or scan a barcode on existing hardware, compare it to new hardware and order it online. A LOT of mobile technology is wasted on companies thinking I want this to snap a highlight reel of my weekend and other trivial shit. What I really want to do is take a picture with my device, save the location of the picture, send it to all parties concerned, and document my time.
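As a small illustration of the serial bit: once the OTG adapter enumerates as a serial port, it's a few lines with pyserial (the port name, baud rate, and the Modbus-style request bytes below are placeholders for whatever the device actually expects):

    import serial  # pyserial

    # hypothetical USB-RS485 adapter exposed as a serial port
    port = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1,
                         bytesize=8, parity="N", stopbits=1)
    # Modbus RTU example frame: read one holding register from slave 1 (CRC included)
    port.write(bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x01, 0x84, 0x0A]))
    reply = port.read(7)
    print(reply.hex())
    port.close()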


My piano tuner uses an iPhone app called TuneLab.[1]

$500.00 USD. For an app. An interface that looks like the dev Googled all the images needed and used the first result. The first line of the App Store description is "Do not buy this app unless you are a professional piano technician." Yet it's a highly powerful tool and professionals use it.

[1] https://itunes.apple.com/nz/app/tunelab-piano-tuner/id335568...


Wow this is amazing. It really shows how powerful the app store has become.


If anything, this is exactly what the app store should be. Cell phones are incredibly powerful, but it's rare that they're used for anything besides games or a dumb UI for a server.


To be fair, there's an equal amount of trash software available from your laptop/desktop as well, the only difference being that there isn't a centralized repository that it all must go through; the trash heap that rests under the app stores was inevitable, since such junk was always going to exist and people would find a way to sell it.


What I find interesting is that everyone's trying to make the next big game or consumer app, a market that's already insanely saturated.

Meanwhile a single decent professional piano tuning app can sell for $500 (I don't know the sales numbers of course) with the most amateur-looking UI ever and a description that's a warning not to buy it.

Considering that most games get lost in the heap and sell basically nothing, maybe looking to see if there's some niche tool you could make instead isn't a bad idea.


Well to be fair, it's priced at $500, and aside from the single point of reference, I wonder how many have been sold.


These are all hugely underrated uses of mobile phones. Having drawings and documentation to hand, downloading manuals for random pieces of equipment without leaving the field, being able to stick your phone in behind equipment to get photos of stuff you can't see, talking to network equipment... This stuff can save days of wasted messing around.

Also just quickly recording things. Terminal block wiring inside the case of a VSD or other equipment (so when later you think 'oh wait, what was wired in to X5' you don't have to isolate the damn panel again to get in and check). Mark up drawings and take a photo to document it. Take a photo of that scribbled-over page of the manual that the client's been using for 10 years to configure their product and that's not recorded anywhere.


I think it's a lot easier to sell a consumer experience than an enterprise device. And you can tell a consumer how to use a product, whereas an enterprise expects the product to meet their needs.

I wanted my Glass to help me with my business flow (namely, being on call at work), but it wasn't built to do things like... Connect to a non-Gmail account. And Google's requirement of uploading every photo I took meant I couldn't securely take work photos with it.

I have one of my best pictures of my chinchilla because of it though... Only way I've ever been able to hold something with two hands and take a picture.


You would probably be better served with something from Vuzix.

https://www.vuzix.com/

Their products are full Android devices, as best I can tell.


they don't seem to sell anything


Their M100 model seems to be on sale, with the M300 up for preorder.


Hey, I am thinking about becoming an electrician, any advice on how to start out? Union or no? Domestic or industrial?


The "sad" part is that outside of the thermal thing (i think) all this was going on for years before Apple introduced iPhone. But over night it was the "jesusphone" that was the definition of "smartphone". Heck, one reporter tried to redefine it as the "superphone" at some point...


I think this was part of Google's long game all along. Although it seemed to be marketed initially as some sort of lifestyle or social media device, there are striking parallels between Glass and some of Vannevar Bush's concepts - really as a very rough prototype of them. We still have some way to go, I think.


If this was the original goal, they sure took a roundabout way to get there. MS's hololens is/was aimed from the start at this kind of market. Snapchat's glasses are at the other extreme, as illustrated by their marketing campaign featuring mostly attractive naked people:

https://www.google.com/search?tbm=isch&q=snapchat%20spectacl...

I'm not sure Google had a plan at all:

https://www.google.com/search?tbm=isch&q=google%20glass&tbs=...


It kinda was, but then Sergey Brin happened...


Followed by Ivy Ross and Tony Fadell...


This reminds me of the Manna story by Marshall Brain: http://marshallbrain.com/manna1.htm


that was my first thought too! but this does genuinely seem assistive rather than controlling


I'm a bit disappointed that Google didn't continue Glass as a niche product. If I had money, I'd maybe buy one :-)

Does anybody know if there are any "inconspicuous" AR glasses (meaning that they look like e.g. sunglasses and not like the Borg)? Preferably full visual field?

And are there any glasses that work on holographic / light field principles? (AFAIK Hololens doesn't, despite the name!) A few years ago at university, I looked into computer-generated holograms. We generated holograms of simple objects on the CPU, printed them out, and shone a laser through them. Viewing them at a slight angle (so you wouldn't be blinded), you could see the object floating in space - as if the hologram was a window. Back then, we thought maybe you could use a GPU to generate the holographic pattern and an LCD matrix to display it - but the technology was not there yet. Now that you have shader GPUs and high-DPI LCDs, I feel it's almost just a matter of combining the pieces and you'd have immersive glasses, or a holographic screen.
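For the curious, the brute-force point-source way of computing such a hologram looks roughly like this (a sketch only; the wavelength, pixel pitch, and object points are illustrative placeholders, and this isn't necessarily the exact method we used back then):

    import numpy as np

    wavelength = 633e-9                 # HeNe laser, metres
    pitch = 10e-6                       # hologram pixel pitch
    n = 1024
    xs = (np.arange(n) - n / 2) * pitch
    X, Y = np.meshgrid(xs, xs)

    points = [(0.0, 0.0, 0.05), (2e-4, 1e-4, 0.06)]   # (x, y, z) object points
    k = 2 * np.pi / wavelength

    field = np.zeros((n, n), dtype=complex)
    for px, py, pz in points:
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += np.exp(1j * k * r) / r                 # spherical wave from each point

    reference = np.exp(1j * k * X * np.sin(np.radians(1)))   # tilted plane reference wave
    hologram = np.abs(field + reference) ** 2                 # printable intensity pattern
    hologram /= hologram.max()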


Seems like a great opportunity to use video capture from all employees' GG cameras in order to train their robot replacements.


Google Glass is nothing compared to real AR like Microsoft Hololens. And for enterprise solutions, $2k for these glasses vs $3k for a Hololens dev kit is nothing.


> Google Glass is nothing compared to real AR like Microsoft Hololens

You can't wear Microsoft Hololens outside of your office, far from your desktop.

That said, it's a bit silly to entertain this discussion, as they are two completely unrelated products. Microsoft Hololens is AR as in Augmented Reality, while Glass is a UI floating on your face.


> You can't wear Microsoft Hololens outside of your office, far from your desktop.

This is not true. Hololens is untethered and there's nothing preventing you from using it anywhere. Well, nothing except the fact that it's bulkier and heavier than Glass and obscures your face too much for use in social situations.

The real reason why AR isn't better than Glass for this kind of thing is that AR tracking and recognition of objects is not good enough for real applications yet. On Glass as reported in the article you have to scan a code to identify an object you're holding. In people's imaginations, an AR device would magically recognize the object and overlay useful informational displays on it as you turn it in your hands. In reality Hololens is not even close to being able to do this, so you would be reduced to scanning codes just like on Glass.


This is the second misconception I've read in this thread so far. I was at a conference in Finland this year, and a huge company called ABB was there with a Hololens demo. They had a Hololens you could wear to see how to repair some mechanical unit. The Hololens was able to recognise the unit from its previous mapping of the object and project AR onto it. They moved the unit and it was still able to detect it almost instantaneously.

The technology is here. It's ready. And it's already in use.


It's probably best to be skeptical of a demo.


Perhaps but I've used both a Hololens and Google Glass, and the Hololens is light-years ahead. It is easily capable of the 'AR repair' application. Glass on the other hand was very 'meh'. The voice interface was horribly unreliable (think 2010-level speech recognition), and it is really just a convenient small display, not proper AR.


I tried a hololens in real life to measure the distance between bowling balls (English style bowls) and it worked really well superimposing an arrow between the balls with the distance.

For me it was more impressive than Glass, which, when I tried it, mainly had me wondering why the photos were so much fuzzier than you could take with your phone.


Hololens can display 3D annotations that have been prearranged on a mostly static, previously mapped environment. It cannot map moving or non-rigid objects in dynamic environments and it cannot identify or track small objects that you are holding in your hands, unless maybe you stick giant fiducial markers on them.


It can't do that on its own, but it's certainly possible to write an app for Hololens that does. I can't confirm writing such an app at this time.


"There's nothing preventing you from using it. Except (...)" - that's quite a lot of "nothing," IMNSHO, to the tune of "if we handwave away all its problems, there are no problems. Tada!"


It has to map the area around you so it has a really hard time being used outdoors and is almost completely unusable when moving in a vehicle.


> You can't wear Microsoft Hololens outside of your office, far from your desktop.

Yes you can, assuming WiFi is in range. The Hololens is great. There is no cable. The battery life is reasonable. The field of view is awful but usable. We have been using one at the office and I'm very impressed with how accurately and quickly it maps out a space.

I don't think they are so different. In fact, anything Glass can do, Hololens will do better.


Google Glass is a remote display on your eyes, meaning a screen attached to your glasses.

They are entirely different.

Hololens is an entirely new beast.

Comparing them because both can play YouTube is a mistake uneducated journalists created.

Hololens cannot be used outside in bright daylight.

Otherwise, Hololens can be used anywhere. WiFi is not needed all the time; it's needed only for the apps that require it, as with some phone apps. You can connect the Hololens to a WiFi hotspot on any smartphone.


>Mircosoft Hololens is AR as in Augmented Reality while Glass is a UI floating on your face.

Google marketed Glass as AR but it turned out to be just a floating UI. That is why it failed, IMO. Hololens is what we were promised, at a relatively similar price point.


To be fair, Google's AR example for Glass was its marketing concept video, which was never shown or advertised as a feature of the released product.


My impression was that the Hololens kits ran W10, albeit an embedded version, natively.

The fact that you need to spell out the difference between full AR and glass also seems to show how poorly Microsoft has communicated the potential of AR.


Agreed. I got to go to a hands on demo of Hololens last year focusing on potential enterprise applications, and was extremely impressed. What they have right now is very much an alpha (maybe beta) product, but the list of potential applications is enormous.


I tried the Hololens at united this year. Wearing it for 5 minutes was unbearable; I quit the demo early because the apparatus was uncomfortable. The field of vision was really small and the illusion was broken really easily. Obviously the tech should get better, but it was nowhere near prime time.


It depends on what you're doing. Crawling around on or inside a plane, for example, would be a bad idea with a hololens, but perfect for Glass.

It's not a 'one size fits all' situation.


If you are inside a plane, you should already be wearing a hard hat. You could probably mod a Hololens into a hard-hat form factor instead of just a bulky headset?



Oh looks cool. Haven't tried it before so I wonder how it compares to the AR of hololens.


Weight is an issue. The two devices are not completely comparable.


But if both of these were to have production scaled up for industrial use, I feel that a Glass like product would be able to hit a far lower price point.


Calling Hololens "real AR" when the FOV is the size of a postage stamp and needs a large computer just to power it is like calling a Ford Fiesta a supercar.


In a world of Model Ts, a Ford Fiesta is a supercar


I agree that the FOV is very small, but all of the Hololens software runs on the headset itself. It works untethered with its own WiFi and battery.


Glass didn't die because of privacy -- it died because the teams walked out after being threatened with their jobs.


We need more details. Why would they be threatened when they had momentum?


I'm confused. Threatened that they'd lose their jobs over something?


Was Google Glass ever actually meant to be a big success in the short term?

It looks like a typical chicken-and-egg problem: you first have to build it in order for them to come (slowly). Then you have to wait it out until the actually useful applications arise from the dust.

While the smartphone has established its place in everyday life (after years and years of trying and mostly failing with similar approaches), Glass will probably be a tool for a thousand niche applications, mostly in the corporate world, which needs a finished solution before it starts acting.


I'll be expecting my RSUs from Google for telling them this exact strategy in my PM interview 2 years ago, when I was asked "What would you do with Google Glass?". Then again, enterprise is a fairly obvious answer even before they tried the consumer angle. But if they offer Gmail for people that colonize the moon (another interview question I got), then I'm definitely going to need some remuneration for telling them how to do that too :-P


The only problem with Google Glass was that it was ahead of its time. I fully expect Apple to release a 'revolutionary new product' in 10 years time which will essentially be Google Glass with more advanced tech and better styling.


More like premature. The display was too low-res to be useful for anything other than reading 9 or so words at once.


Another area where Google Glass has proven to be very useful is in medicine. You can put a pair on and have a specialist walk you through an interview or examination. The Poison Review toxicology podcast had an interview with a physician who had significant success using Glass in this manner.

http://www.thepoisonreview.com/2016/03/19/tpr-podcast-episod...

Obviously it needed to be de-googlified to make it HIPAA compliant


Yeah, also a doc uses it to get drug allergies and saves a life, as part of Google's "Glass at Work" initiative (2014): http://www.independent.co.uk/life-style/gadgets-and-tech/goo...

Doc uses it to save 1.5 hours a day apparently https://www.washingtonpost.com/news/the-switch/wp/2016/09/27...


According to that page:

"Google Glass has the potential of giving us a new perspective on telemedicine. However, the use of this device presents several problems, especially involving data security, patient privacy and HIPAA compliance."

"Using Google Glass, the consultants were also able to send and receive high-quality pictures of EKGs, pill bottles, etc."

"Although it is possible to send and receive photos using cell phones, computers, and tablets, Dr. Chai made the point that such technology is not HIPAA compliant."

So, they used it to Skype with their attendings, and it has the same compliance issues as ... normal cell phones, computers, and tablets - which absolutely are HIPAA compliant, if you're using secured hardware provided by the hospital (like all the laptops and blackberries used by insurance company employees carrying around identifiable patient data). Hospitals cheap out, though, and make you BYO - which is not a problem I envision Google Glass being a solution to.

Going to the cited publications didn't show any additional benefits.


HIPAA compliance is not some unachievable data security standard. It's so basic that probably all industries should follow it. The problem is that it's surrounded by verbose policy-speak, so it just looks completely opaque. There used to be a table on the HHS website that gave a concise explanation of the rules. It's things like what PII is, that it should be encrypted at rest and in transit, how to get explicit permission to transmit it, that authorization and authentication are required for viewing the data, and that there must be an audit log. Having worked in the financial sector, this is all basic stuff.
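To make the technical half concrete, the controls amount to something like this toy sketch (using the cryptography package's Fernet; obviously not a compliance program, just the shape of encrypt-at-rest plus an access audit log, with made-up names):

    import json, time
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # in practice: from a managed key store, not generated inline
    box = Fernet(key)

    record = {"patient_id": "12345", "allergy": "penicillin"}
    ciphertext = box.encrypt(json.dumps(record).encode())   # PII encrypted at rest

    def audit(user, action, resource):
        # append-only access log; every read/write of protected data gets an entry
        with open("audit.log", "a") as f:
            f.write(json.dumps({"ts": time.time(), "user": user,
                                "action": action, "resource": resource}) + "\n")

    audit("dr_example", "read", "patient:12345")
    plaintext = json.loads(box.decrypt(ciphertext))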


Most of those issues are discussed in the podcast. The real advantage of it seems to be the head mounting and voice control, allowing the doctor's hands to be free (and washed/gloved) at all times.


I could be misremembering this, but haven't head-mounted displays of one sort or another been used in industrial settings for something like 20 years? I don't remember when it was, but I seem to recall something with monochrome (and lower-resolution) displays like this being used in things like airplane repair[1].

[1] After a little searching, I may be remembering reading about the 1996 Boeing conference: https://www.media.mit.edu/wearables/lizzy/timeline.html, or it could be any of the other late-90s systems listed here: https://en.wikipedia.org/wiki/Optical_head-mounted_display that got press at the time.

Edit: Looking at another page, I may actually be remembering something from computer magazines back while I was in college or shortly after - the Private Eye from Reflection Technology (1989) (https://glassdevelopment.wordpress.com/2014/04/17/hmd-histor...) - which would fit, because that plus the Twiddler chorded keyboard became part of the MIT Wearable Computer stuff and I remember wanting one of those keyboards.....


Is there an affordable, Google Glass-like floating screen? With the number of people walking around looking at their screens, it makes sense to just have a floating screen in a nicely designed pair of spectacles. No camera, just a UI to see your phone screen and something to track finger movements and taps.

We already have people walking around with Bluetooth earpieces; no reason not to also have a tiny screen near the eyes.


This is the best option I've seen: https://www.vufine.com


This is where it always should've been. The ability to look at engineering drawings, etc. as you work on something, or to look through work instructions without having to down tools and clean off your hands, is invaluable. This was always the dream product I had in mind for something like Glass.


It's beyond me why Google simply didn't add a red LED on the consumer edition to show when the camera was recording, in an attempt to address the privacy concerns many people had.


While I'm not sure how much it actually was the case with the apps that were available, a lot of the ideas around it seemed to be things that require the camera running, so the LED would be on in a lot of situations where creating a recording would be inappropriate.

(And making a trustworthy indicator that only shows if a recording is actually being created is basically impossible, so "camera running" is the only thing they could show)


Right, which is why it failed: Google tried to be sneaky, it was inappropriate, and instead of realizing they needed to figure out a way around the issue, they abandoned it.

Another, even more trustworthy option would have been a camera cover that was usable and durable and that clearly and visibly expressed the status of the camera.


This was never really a reasonable or valid concern... The camera was never on when the screen wasn't, and the screen was very visible when on.

And the recording capabilities of the device were way overstated. Not just because of battery life, but heat. Trying to record video with it felt like putting yourself at risk of setting one side of your face on fire.

In fact, considering recent concerns with exploding batteries and the size of Glass and how hot it could get, perhaps the biggest marvel is that Google avoided even one Glass exploding.


Did people (in general, not users of the device) know that? I think people got the impression that everyone wearing Glass was always recording everything all the time. So what you're saying is all the more reason why a recording indicator LED would have helped.


I just don't think it makes sense to put something on there that has no practical purpose.

The mere idea that someone could covertly record with Glass is silly, and there's tons of cheaper, far sneakier ways to film someone.

I get what you're saying, that average persons didn't know what the device could or couldn't do, but I don't think that's a justification to add a light that makes no sense to include.


And glass overheated and then died after 20 minutes of video recording...


>> "I just don't think it makes sense to put something on there that has no practical purpose"

The purpose is clear: showing whether the camera is recording.


No surprises there, Vuzix have been selling a less stylish version of it in the same market for years.

Frankly I see little use for the likes of AR and VR in civilian life. Hell, even the smartphone of today is something of a bleak shadow of the business tool it once was, thanks to every OEM trying to cater to consumers who do barely more than message and access social media.


I can see a lot of uses for AR.

- A killer feature for me would be (pedestrian) navigation. Highlight the road I have to take with a subtle color. That would even work on a very low-res display. Or at the train station, highlight the part of the train where my reserved seat is.

- If you have a higher-res screen, you could add additional information to the walls of places, like restaurant ratings etc. Making it look just like a physical sign is key.

- One thing that is controversial, but would be huge for me, is face recognition. I sometimes have a hard time remembering which face belongs to which name. An option to show name tags for people I've seen before would be really convenient.

- How about an app that shows plans when fixing a bike or a car? Or one that tracks where you left your screws? If you have a very good camera, it might be able to even find screws you lost :-)

- Of course AR/VR games could also be a huge thing. Time will tell if that really catches on or not.


I develop on the Vuzix M100 and M300; not nearly as good a user experience as Glass. I also develop for Hololens... it's in another league from Glass and Vuzix.


Same thing with the Segway. Remember that device which was going to revolutionize how we travel? Now it's for park police and airport staff and whatnot. Segway is now owned and made in China iirc.


A similar idea is romanticized in the 2008 Mexican sci-fi movie Sleep Dealer, only there the concept was exploiting cheap labor through full-immersion control of remote robots.


Whatever happened to their big cargo ship docked in the bay?


I am not surprised; Google Glass was a product way ahead of its time. It can still be useful in closed environments like a factory floor. To be really successful in the consumer market it needed an AI capable of identifying every object it sees, which we haven't reached yet. I am personally of the opinion that augmented reality is the future of computers rather than virtual reality. People keep pointing towards privacy as the reason it was not successful, which I feel is wrong. They just don't have the tech yet to make it truly useful in the consumer market.


I wonder how this compares to Lumus -- https://lumusvision.com/?


Medicine still seems like the best use case to me


Do we have to worry about the WiFi antenna being right next to one's head for a full shift?


No, and certainly not when you consider all the significantly higher power gigahertz spectrum light we're already filling the air with.


And medical offices.



