I'm not entirely sure of Glass's capabilities. For instance, battery life, and the ability to keep the camera on and display an overlay for extended periods, seem difficult. Some Glass ideas I like but haven't heard much discussion about:
- Facial recognition with a little popup over people's heads with their name. Never forget someone's name
- Link in with Facebook or your calendar to have a person's popup indicate if it's their birthday. Always be on top of the "Happy birthday!" game
- Have an alternate reality game with digital objects people can place/write on. Only Glass users can see the teddy bear in the corner, and only Glass users can see the giant graffiti wall where people sign their name.
- Identify desired objects and make them stick out, "Dammit, where are my keys again?!"
- Auto translate QR codes and overlay the translation
- Auto tweet images of your life on a schedule
- Many cameras pick up infrared, could overlay this (if the Glass camera does) to expose more of the light spectrum to Glass users
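Sketching the first two ideas: here is a toy version of the popup logic, assuming some separate recognition layer has already mapped a face to a contact ID. The contact store, IDs, and label text are all invented for illustration.

```python
from datetime import date

# Toy contact store; a real app would pull this from the wearer's
# address book, Facebook, or calendar. Birthdays are (month, day).
CONTACTS = {
    "face_001": {"name": "Alice", "birthday": (3, 14)},
    "face_002": {"name": "Bob", "birthday": (12, 25)},
}

def popup_label(face_id, today=None):
    """Return the overlay text for a recognized face, or None if unknown."""
    contact = CONTACTS.get(face_id)
    if contact is None:
        return None
    today = today or date.today()
    label = contact["name"]
    if (today.month, today.day) == contact["birthday"]:
        label += " (birthday today!)"
    return label
```

The hard parts, of course, are the recognition layer and the privacy questions, not this lookup.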
Being a private person, I just shudder when hearing concepts about face recognition or auto-tweets in this context.
^ "Have an alternate reality game with digital objects people can place/write on. Only Glass users can see the teddy bear in the corner, and only google glass users can see the giant graffiti wall where people sign their name."
I'm sorry, since some won't consider this a constructive contribution to the discussion, but there is an anime out there about exactly(!) that.
It's called Dennou Coil. Very good series.
[Sorry, I just had to mention it! Forgive me.]
Hey, no need to apologize. It's relevant to the topic at hand and you've just given me, and quite likely some other folks, a series to check out. That's hardly something worth apologizing over :)
This would work well with the imagery from the movie Thirteen Ghosts (2001 version). The movie wasn't that good, but the depictions of the ghosts, which were only viewable with special glasses, were very well done.
I tried Google Glass a little while ago. I think it's amazing, but one thing I wasn't aware of before is that it's not an augmented overlay at all. Think of it instead as a display in the corner of your field of vision. You can add information about what you're seeing, but you can't effectively overlay things (e.g. highlight parts in an assembly).
Is there an augmented reality aspect to these? I got the impression that there was not, that it was simply a little area in the corner of your field of vision where it would display things.
I believe that's what I heard as well. If I remember correctly, Robert Scoble said something to the effect of "It's like having a Nexus 7 in the corner of your vision." Apparently he's writing a book about Google Glass and has used the device as part of the deal.
> Have an alternate reality game with digital objects people can place/write on. Only Glass users can see the teddy bear in the corner, and only google glass users can see the giant graffiti wall where people sign their name.
There's a small team at Google working on a game that uses the real world as its map, Ingress[0]. I haven't been able to get access to it yet (typical Google closed beta), but from what I've seen of the game it would fit right in with Glass.
Slightly off-topic: I managed to snag an invite and play around with ingress a bit and I have to say, as a game, it is pretty terrible. You have to find portals, get within a few dozen meters, and then interact with them on a cool down. If there are no portals in your area, there is literally nothing to do in it.
It looks like it was designed as an excuse for people to walk around with their GPS enabled and take pictures of "interesting" places. Which is too bad because the concept is kind of cool, but without any actual playability outside of these key locations...what's to keep me playing?
I was pretty disappointed at first too, because I couldn't find any portals. Then one day I discovered a spot with 3. Then I found out that if you log on with your web browser, you can see the whole range of portals in a Google Maps-like interface. I won't say I'm hooked, but I play it every couple of days. There are not a lot of portals where I am either, but new portals are added periodically and you can submit places to become portals. The latter part is probably the whole reason Google launched it: Google created an army of people taking photos of landmarks, interesting businesses, museums, etc.
But that is the problem. There should be SOMETHING to do even when you aren't in a hotspot. In my town there are only 3 portals, and only (apparently) one person in the opposing faction. Why would I bother playing?
You could take photos and submit them as potential portal locations (you can share the photos with ingress from the android camera/gallery). That could encourage more people to play and give the current players more to do.
The downside is that I think it takes a few weeks for portal suggestions to be processed.
Imagine mechanics using it to see an X-ray kind of view of a vehicle or engine.
Location/time awareness could do a "checklist" of sorts when you are leaving your house, to make sure you pick up all of the items from the front table.
A Lego app that helps you find that piece in the big bin of pieces.
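The leaving-the-house checklist could hang off a simple geofence. A minimal sketch, assuming GPS fixes arrive as (lat, lon) pairs; the home coordinates, radius, and checklist items are made up:

```python
import math

HOME = (37.4220, -122.0841)   # hypothetical home location (lat, lon)
LEAVE_RADIUS_M = 30.0         # trigger once the wearer is this far away

CHECKLIST = ["keys", "wallet", "phone", "badge"]

def distance_m(a, b):
    """Approximate ground distance in meters between two (lat, lon) points
    (equirectangular approximation; fine at geofence scales)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return 6371000.0 * math.hypot(x, y)

def checklist_if_leaving(position, was_home):
    """Return the checklist exactly when the wearer crosses the geofence."""
    is_home = distance_m(HOME, position) <= LEAVE_RADIUS_M
    if was_home and not is_home:
        return CHECKLIST
    return None
```

Firing only on the home-to-away transition keeps it from nagging on every GPS fix.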
Yeah, for realtime AR, latency would be a big challenge, but even if in the beginning you could only take a picture and then overlay a schematic on top of that view, I could see it being useful.
I have looked up how to take a laptop apart before, to change a part that required me to remove screws in certain places, unhook cables, and do other steps in a particular order. Half the trouble with that is figuring out where the picture corresponds to the device in your hands. Even if you had to manually advance the instructions, a HUD arrow pointing to the screw would be super useful, I think.
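That HUD arrow is mostly a projection problem. A rough sketch, assuming some calibration step (markers, a CAD model, whatever) has already produced the screw's position in camera coordinates; the focal length and display resolution here are invented:

```python
def project(point_cam, f=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of a 3D point in camera coordinates
    (x right, y down, z forward, meters) to pixel coordinates."""
    x, y, z = point_cam
    if z <= 0:
        return None                        # behind the camera
    return (cx + f * x / z, cy + f * y / z)

def arrow_for_screw(screw_cam, width=640, height=480):
    """Decide what the HUD should draw for the next screw to remove."""
    px = project(screw_cam)
    if px is None:
        return "turn around"
    u, v = px
    if 0 <= u < width and 0 <= v < height:
        return ("arrow_at", round(u), round(v))
    return "look toward edge"              # off-screen: show an edge hint
```

The real difficulty is the pose tracking that feeds `screw_cam`, not the projection itself.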
That video is exactly what I'm talking about. I think it could be done even better, but you can see the possibilities. I don't really understand the need for tint on those glasses, but I'm sure it was just an example ;)
These are wearable computers, not glasses with visual powers that are not yet feasible to implement, period (I'm guessing there would be a few technical issues with head-mounted-x-ray machines).
The average internal combustion engine is really pretty simple. It is no more complex to a mechanic than the innards of a "powerful" desktop PC are to most computer people.
All the glasses could do is display a layout of a known engine, they can't see inside it to highlight a problem. So, it seems like they would be more of an overall nuisance than anything else in that scenario.
The intriguing thing about proper AR, and the proposed "internet of things", is that in a sufficiently connected/smart environment a highly complex mechanical object could be overlaid with a schematic and data about where faults were, as long as the object was capable of reporting its detailed status over a network. That is effectively the X-ray specs the parent mentioned, and more.
AFAIK, _the_ drive for AR in maintenance is that it makes it possible for you to look at a schematic while using both hands for the task at hand, and in whatever position you are.
For example, imagine inspecting a plane's fuel tanks from the inside. You could bring a manual, but getting it out of your pocket may be cumbersome, checking that you took it out after the job takes time, etc (forgetting your glasses is less of a risk; you would need them soon again. That manual? The next tank might have a different 200 page one)
Depending on the PDF process used to create that manual and its overall design sensibility... imagine how hard it would be to navigate the same kind of manual that doesn't behave well on a tablet. Sometimes paper will still end up being the best choice.
Yes, I didn't mean literally X-raying the engine, just giving you an AR view of the schematic. I would assume it would require certain calibration points or barcodes on the engine to orient itself.
And even if it were a nuisance once you learned the engine, it could be a useful tool in schools or for someone learning a new engine.
Imagine a small engine machine shop that sees tens or hundreds of different kinds of engines.
I don't think it would work well in real-time with barcodes since the human head is in constant movement. Current gyroscopic sensors would not make it a seamless experience.
Voice input just isn't ready. Commands, yes. Anything longer than a short sentence and it's unusable.
That means this will forever be a Second device, either separate or connected to your smartphone.
Working speech recognition would be a major revolution, so would useful Augmented Reality, and so would combining them in a lightweight wearable mobile device... but none of these exist. Big jumps in technology don't happen like that, and certainly not all at once at one company.
No, in fact as I'm growing older I find myself taking more and more "technology vacations," where I ditch my phone and don't use a computer for a night or weekend.
It has overall improved my life in many ways. I already carry enough electronic gadgetry, at least with a phone it is in my hand or pocket. Having glasses with a battery pack on my belt? Yeah I'll wait until the tech improves.
I can't tell if you're being sarcastic or not. Do you have some source to back up why that is "not going to happen"?
I can't imagine a scenario where Google wouldn't allow it to link to Twitter and Facebook, especially considering that Google is opening it up to developers.
Apple has already filed patents for a "Glass"-type product, and were Google to disallow developers from linking to Facebook and Twitter, Apple would only have to allow that sort of integration... and then Apple would have a strong selling point over Google.
I'm trying to think of any successful consumer product that blocks this sort of integration and am not coming up with any. Xbox, iPhone, geez even my dad's blu-ray player and people's automobiles (Ford Sync) integrate with Facebook!
The coolest part of this being that it doesn't have any lens; the lens is replaced by a head-up display.
A head-up display or heads-up display—also known as a HUD—is any transparent display that presents data without requiring users to look away from their usual viewpoints.
The frame of mind I have with Glass is to think of problems where the "taking phone out of pocket" motion is inhibitive enough to be a barrier for a successful app.
For example, QR codes today are something that pretty much nobody scans but will be relevant overnight if people start wearing these. Or, any app that is useful for you while you are driving, like parking assistance or finding nearby gas stations.
Also there could be some interesting applications that arise when you consider that if you are wearing Glass software can know when you are looking at a particular display, like your phone or computer.
"the "taking phone out of pocket" motion is inhibitive enough to be a barrier for a successful app."
I think the same goes for any wearable tech. I find the idea of watch phones awesome, but all of them right now (like on aliexpress or Sony's smartwatch) are pretty tacky/nerdy. But with the right execution and design a watch phone could be an awesome way to avoid taking the phone out of your pocket for checking messages or snapping a pic real fast. A phone client rather than a full phone seems like the right way to go. Plus it's way less intrusive than a pair of glasses (not as fun as augmented reality though).
Half the front page of today's The Times in the UK is a picture of Zuckerberg. Though I think it might be more to do with giving the paper a "we're down with tech" image.
The explorer edition is not going to be the same as the commercial edition. This is a developer only release so the specs are bound to change and are not quite as important, imo.
I'd love to see this app for it. I call it real life screen sharing!
Take any task you need to do; say, changing a tire. You post a request to the internet for help, and someone agrees to help you. They then see on their screen what you are seeing through the glasses. They can talk you through doing whatever you need to do, and perhaps even point to places on their screen to highlight that area in your view.
Think how many repairs we'd be reasonably competent to make if only an expert could look over our shoulder and we could ask questions!
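The pointing part of that is mostly coordinate mapping. A minimal sketch, assuming the helper sees the wearer's camera frame scaled into their own window; the resolutions are arbitrary examples:

```python
def map_click(click, helper_res, wearer_res):
    """Map a helper's click in their video window to the matching spot
    in the wearer's overlay, via resolution-independent coordinates."""
    hx, hy = click
    hw, hh = helper_res
    ww, wh = wearer_res
    return (hx / hw * ww, hy / hh * wh)

def highlight(click, helper_res, wearer_res, radius_px=20):
    """Build a highlight-circle instruction for the wearer's display."""
    x, y = map_click(click, helper_res, wearer_res)
    return {"shape": "circle", "x": x, "y": y, "r": radius_px}
```

Video latency and compression are the hard engineering here; the overlay protocol itself can stay this simple.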
Not just "someone willing to help" but also customer support, tutoring, medical emergencies, bomb defusing... The monetization force is strong with this one.
I think it's a great idea. For mundane tasks like changing a tire or setting up a printer (just thinking out loud), even a prerecorded video overlay could be of great help to many. Create this service; it can expand.
Do we know what the augmented reality part will look like from the user's perspective? The skydive jump looks like it is basically a head mounted video camera - and while there is no doubting that was technologically very impressive, full video feeds in that wireless environment, I've not seen what it looks like for the person actually using the glasses.
They did the concept video http://www.youtube.com/watch?v=9c6W4CCU9M4 I guess that's the long term goal. I haven't seen any real-world previews yet. I guess that's what is most exciting about the hardware going to developers soon, we should see some images.
Re-watching the promo video, I'm surprised at how basic it seems. For example, when the user picked up the book about ukuleles, I was expecting a bunch of comparable price listings to pop up, or at least the book's ISBN info.
And face recognition is an obvious use, but I think people would've freaked out if the promo showed how you could look at people and ascertain their identity.
Seriously? How would this be more dangerous than current cellphones? At least with Glass you can keep your eyes facing the road while gathering information, and I'm pretty sure you don't need your hand to answer phone calls either (tilt your head).
Also, they wouldn't be releasing a product that caused eye problems.
> At least with glass you can keep your eyes facing the road
A sheet of anything, however transparent, centimetres away from your eye won't help you see something a hundred meters away.
You're concerned with alignment but focus matters even more. You usually don't notice the focus time because your eyes are doing it while moving (changing the alignment).
HUDs work in planes and cars because they're far away enough to be in the "far sight" range.
Worse: once your eye moves, distracting elements are normally out of the way and you get an unobstructed, non-distracting view. With this, you simply can't look away from the distracting elements, merely focus away, and however transparent and blurry the display appears, it would still obscure the view.
Besides, the attention problem makes an accident an order of magnitude more likely, whereas the hands problem raises the odds by only a small factor. When you're on a call with someone, hands-free or not, your mind is simply not driving.
So, should it be a cause for concern? I don't know, but it's certainly not the solution. Driving is an enjoyable yet serious matter requiring a combination of motor and cognitive skills. The solution, whether people like it or not, is that one should either do nothing but drive, or not drive at all (i.e. have someone else drive or do the task, or take the train, coach, or taxi, or get a Google car). The huge majority of crashes are directly tied to an inadequate level of cognition from the driver (alcohol, testosterone excess, adrenaline addiction, sleep deprivation, psychological pressure, multitasking...), because every dent in one's ability to analyse raises the odds of an event ranging from nuisance to tragedy, given the energies involved relative to our frail physical envelopes.
So, drive responsibly. Don't answer, dial, or text while driving, and don't put an electronic lid on your eye. Even if you think it's safe (it probably isn't), it's way safer not to.
Just to make sure I'm not misunderstanding you, you're saying that having a display in front of your eyes would distract you from other things, for example the road?
Surely in cases where this is important you can simply turn off the display? A piece of glass or plastic isn't going to interrupt anyone's vision, as we can see by everyone who wears glasses when driving.
Furthermore, I don't agree that any loss of optical clarity is "way" more dangerous than perfect vision, nor that something being displayed on the glasses need be perceptible at all.
For example, if a glasses-wearing driver were to swap their moderately rimmed glasses for fancy clear-framed ones, their safety would be imperceptibly improved. At times, it is easy not to notice dust on your glasses.
Now, there is certainly potential for far more obstruction, and the case for changing focus is surely one to consider. However, the claim that it is necessarily "way safer" to have nothing close to your eye that is not completely transparent is incorrect. As a simple counterexample, the glasses could change tint or brightness according to driving conditions, improving the driver's vision. Drivers already do this by putting sunglasses on; these glasses could do it automatically.
I'm talking about the display being enabled (otherwise it would simply be similar to regular glasses), which brings two problems:
- The sight problem: both eyes converge and focus simultaneously (unless you have some serious neurological issue). Unless Google solved the problem of measuring the lens curvature (i.e. the eye's focal distance) and sending light rays the proper way, in a ridiculously tiny and cheap package (compared to the currently available professional equipment that measures such values), it is sending them best-effort at a fixed focal length. Hence the display will look blurred at some focal distances (possibly obscuring all or part of the field of view in a significant manner), while at the correct focal distance the change of focus takes time (possibly preventing you from seeing or noticing things in a timely manner, or at all).
- The cognitive load problem: even simple notifications on your computer eat away at your productivity, so imagine the firehose of your digital life randomly pinging you while driving.
The first is a general concern covering many areas (safety, health, comfort...), while the second makes the point that it brings nothing to safety while driving.
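The sight problem can be put in rough numbers with the usual accommodation relation (demand in diopters is 1/distance in meters); the distances below are illustrative guesses, not Glass's actual optics:

```python
def diopters(distance_m):
    """Accommodation demand, in diopters, for an object at this distance."""
    return 1.0 / distance_m

# Road traffic is effectively at optical infinity.
road = diopters(100.0)       # 0.01 D: essentially relaxed focus
# A display collimated to appear a couple of meters away.
display = diopters(2.0)      # 0.5 D: a small but real refocus on each glance
# A raw screen a few centimeters from the eye, with no collimating optics.
raw_screen = diopters(0.05)  # 20 D: far beyond what an adult eye can manage
```

This is why the optics have to place the virtual image meters away; the physical distance of the screen is irrelevant once the image is collimated.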
According to David Pogue of the NYT who tried them on back in September the display is "invisible" unless you focus on it:
"The biggest triumph — and to me, the biggest surprise — is that the tiny screen is completely invisible when you’re talking or driving or reading. You just forget about it completely. There’s nothing at all between your eyes and whatever, or whomever, you’re looking at.
And yet when you do focus on the screen, shifting your gaze up and to the right, that tiny half-inch display is surprisingly immersive. It’s as though you’re looking at a big laptop screen or something.
(Even though I usually need reading glasses for close-up material, this very close-up display seemed to float far enough away that I didn’t need them. Because, yeah — wearing glasses under Glass might look weird.)"
"At least with glass you can keep your eyes facing the road while gathering information"
But can you keep attending to the road? Fighter pilots can, but they take a lot of training, lots of them don't make it through training, and their display doesn't show "I am contacting you on behalf of my deceased Nigerian aunt" messages.
For mere mortals, I think it may be safer to have separate gaze directions for 'looking at the road' and 'looking at my cellphone's screen', if that screen shows data unrelated to the driving task.
I think the biggest scare will come when geocoded data of the value-added type is overlaid on your view.
Imagine these scenarios:
* The gun owners' address map re-oriented so that when you look at a house, you see a "gun" symbol over it
* Property value over each home (available from county assessor)
* When Facebook makes another privacy-opening-oops and people with their wall/statuses set to semi-public have a little bubble over their head with their latest status update ("I'm so drunk from yesterday!").
* When you look at people, or their children, they automatically assume you are videotaping them. And someone out there is going to write an app that checks your name against a sex-offender registry and will notify anyone if you have a similar name to said person (guess you'd have to reveal your name somehow...which you might through Google Plus and the required real name)
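Mechanically, those bubbles are just a bearing test against a geocoded database. A crude sketch, assuming the headset reports position and compass heading (the 40-degree field of view is a guess):

```python
import math

def bearing_deg(viewer, target):
    """Compass bearing from viewer to target, both (lat, lon) in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*viewer, *target))
    dlon = lon2 - lon1
    x = math.sin(dlon) * math.cos(lat2)
    y = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

def bubbles_in_view(viewer, heading_deg, items, fov_deg=40.0):
    """Labels of geocoded items inside the wearer's field of view."""
    visible = []
    for label, pos in items:
        # Signed angular difference in (-180, 180]
        diff = (bearing_deg(viewer, pos) - heading_deg + 180) % 360 - 180
        if abs(diff) <= fov_deg / 2:
            visible.append(label)
    return visible
```

Which is exactly the scary part: once the data is public and geocoded, the overlay is trivial.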
> Property value over each home (available from county assessor)
As someone looking for a home, in sometimes unfamiliar neighborhoods... this is a great idea! Zillow has something similar in their iphone app, but this would make it a lot more convenient.
All of these things are already possible to implement; Google Glass just makes them more accessible by keeping them in your field of view the entire time you're wearing the device.
Yes, all of these things are already possible to implement, in the same way that it was possible to construct a webpage in which you listed your activities and doings and manually update them -- and track your friends who have their own webpages -- long before Facebook came along.
The idea isn't the point here, it's the ease of use and the visceralness of it. The gun maps debate is a prime example of this. Anyone could go to the government office and get those records. And if you were a thief, you would have much more use out of them as a straight excel spreadsheet which you could cross-reference with other property records...or at least just do quick name searches across a large Excel file.
But as soon as a newspaper put a bunch of red dots on the map (which made it nearly impossible for a user to find anything more than who has a gun and who lives at that red dot without clicking manually thousands of times), the New York state legislature is hot to pass a law to ban those public records.
Usability and location is much more than just "additional features".
> And if you were a thief, you would have much more use out of them as a straight excel spreadsheet which you could cross-reference with other property records...or at least just do quick name searches across a large Excel file.
Are they using this to steal the guns or avoid the houses with guns? Owning a dog is probably the single best way to dissuade break-ins if you are that concerned.
I have to stop looking forward to use it, for just a fraction of a second, because of the different focal distance, the different angle of my eyes, limits in the part of the brain that focuses attention, etc.
It actually helps me avoid accidents.
I don't see how Google Glass would be much different from that mirror. I would love to use it for GPS or other related info. Focal distance in tiny screens is either a solved problem, or one being solved by smart people right now.
Ads and even more severe privacy violations are coming now, not in some distant future.
People with Bluetooth headsets in their ears look funny to me; I guess people with Glass will look even weirder and scarier.
Right now the primary use case of the Google Glasses is as an always-on video camera. Now most people (including myself) don't appreciate being filmed without permission and there are plenty of situations where people could be filmed committing illegal acts.
So I can guarantee you that injury is going to occur at some point, and, especially in the US, there's a not-insignificant chance of someone being shot.
My experience for the last two years is that usually there are 2 or 3 people sitting around in any public place with their phones/tablets out pointed in my direction. Those phones/tablets have a camera (most have two at this point). They can be turned on remotely by software running in a background process. In addition almost all public spaces have more than one surveillance camera.
I don't think merely avoiding google glass wearing people will help much. To be effective you should probably avoid anyone with a phone/tablet that is not in their pocket and stay out of public spaces (maybe live in the woods).
Why not complain about Gorgon Stare instead? Since it is more likely to be used for infringing rights and you have more of a legal case (preventing unwanted photography in public was lost long ago).
>"The system is capable of capturing motion imagery of an entire city, which can then be analyzed by humans or an artificial intelligence, such as the Mind's Eye project being developed by the Defense Advanced Research Projects Agency. [..] It is a spherical array of nine cameras attached to an aerial drone." -http://en.wikipedia.org/wiki/Gorgon_Stare
You know, in public places there are going to be lots of eyeballs recording images directly into human brains. Most of these people will quickly forget about you if you don't do anything unusual, of course, but there's always a chance that there's someone around who recognizes you. I'd say it's always best not to assume you have privacy in any public place.
There's a very, very large difference between eyeballs on a person and wide-scale archived surveillance. The Supreme Court ruling against warrantless GPS trackers on cars (whereas warrantless 24-hour surveillance is okay) is an outstanding example, and that one ignores the fact that human memory is pretty crappy when it comes to evidence.
That being said, I think widespread cameras are simply an inevitability that people are going to have to get used to, warts and all.
My point is more that pervasive, individual, recorded surveillance is a different beast than random acquaintances seeing you around town.
It's really the scale of the thing -- YouTube and Ustream and the like are going to have some serious privacy concerns when masses of people want to start lifestreaming.
Wait, do you think that everybody is going to be uploading everything they see to YouTube? Different people have different comfort levels about what they put online which means that being tagged in Facebook photos can often be embarrassing, but very few people have zero desire for privacy and I don't see share-by-default taking off for lifeloggers.
I see it becoming vastly more prevalent. Maybe not when Glass comes out (particularly due to data caps,) but I imagine that surveillance/sousveillance will be notably less taboo in 10 years.
Of course, I should disclose that I became greatly interested in (at least partially edited) lifelogging once I saw devices like the vaporware Zion Eyez[1] or the Memoto[2].
I attempted to start recording my drives home with my phone and publishing them on YouTube, but I'm not committed enough to keep it up until it becomes frictionless. This would be a perfect example of the potentially scary surveillance -- if a number of people start autopublishing dash cam recordings, it creates a way for people/organizations/governments to data mine someone else's data for their own surveillance purposes.
And when similar capabilities are built into less obvious glasses? What about contacts? At what point will you have decided that since everyone is on camera at all times in public, from someone's camera, it just doesn't matter?
We're all going to have to get used to being sousveilled.
They could prove useful against cops abusing the law. Cops are going to use this sort of tech very soon anyway. Might as well fight back with your own.
I am wondering if Google will implement an Ingress play mode especially suited for Glass. It would be really nice to walk around town, be notified when you are near a portal and be able to hack/link/whatever that portal without taking your phone out of the pocket.
Well, it's democratized cheating, so the playing field is still level. I think people would just adapt.
Either that or casinos would change their games.
Anyway, you can already do card counting via computer pretty easily at a casino if you think about it--I'm pretty sure they don't care enough to stop you as long as you aren't obvious and you aren't robbing them.
Oh, they care. If they have any suspicion that you're counting they'll just politely ask you to leave. Casinos are private property and they have the right to kick/ban anyone they want for any reason. Call it unfair, whatever.
And read up on the history of the early card counters and their shoe computers. It's a colorful story.
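For reference, the computation those shoe computers (and human counters) run is tiny; this is the standard, long-published Hi-Lo system, which is why the casinos' countermeasure is detection rather than secrecy:

```python
# Hi-Lo point values: seeing low cards leaves the shoe rich in tens,
# which favors the player.
HI_LO = {r: +1 for r in "23456"}
HI_LO.update({r: 0 for r in "789"})
HI_LO.update({r: -1 for r in ("10", "J", "Q", "K", "A")})

def true_count(seen_cards, decks_remaining):
    """Running Hi-Lo count normalized by how many decks are left."""
    running = sum(HI_LO[card] for card in seen_cards)
    return running / max(decks_remaining, 0.5)
```

A positive true count suggests raising bets; the arithmetic is trivial, the tell is your betting pattern.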
Since I have no time/desire to tackle this myself: I can see Google Glass being effectively used in a line-cook type setting. Imagine overlays when you look at the grill, seeing how long a steak has been on there and an indicator of the desired doneness. Or glancing up to see your assigned dishes to make next.
This could be applied to many types of environments, but I think chaotic situations would be best suited for these kinds of simple, quick, focused information overlays that are contextual.
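A sketch of that grill-overlay logic, assuming each item on the grill is tagged somehow (ticket entry, voice, whatever); the doneness timings are invented placeholders, not real cooking guidance:

```python
# Hypothetical seconds-on-grill targets; real numbers depend on heat,
# thickness, and the kitchen's standards.
DONENESS_S = {"rare": 150, "medium": 240, "well": 330}

def grill_overlay(item, now_s):
    """Overlay text for one grill item: time remaining or a pull alert."""
    elapsed = now_s - item["on_at"]
    target = DONENESS_S[item["doneness"]]
    if elapsed >= target:
        return "{}: PULL ({}s on)".format(item["name"], elapsed)
    return "{}: {}s to {}".format(item["name"], target - elapsed, item["doneness"])
```

Glancing at the grill would render one such line per item, so nothing gets forgotten in the rush.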
If it turns out that it's actually possible to comfortably read information on these, which I find extremely hard to imagine, they would also be immensely helpful in the emergency room and the hospital, especially with facial recognition software. Being able to instantly see lab results/X-rays/patient data for any patient you're looking at, and being able to instantly look up things you don't know without having to lug around a stack of files or keep going back to the computer, would save an incredible amount of time and make it much easier not to overlook critical information.
That said, the privacy implications are horrifying.
How much would facial recognition help in the ER? There are already ID bracelets, charts, and other ways to track patients and learn information about them. You'd still need a backup system, because babies, identical twins, people wearing sunglasses (think "often worn by blind people"), and people with face damage or head bandages are just some of those who aren't easily tracked via facial recognition. Better to have one system than two, no?
Is there any way to get access now, having not been at Google IO and thus not been able to preorder? I have Android computer vision experience and some projects in mind, and I'd gladly pay the preorder price for access to one of these.
I predict it will be hours, not days, after this hits the street before some joe in a bar realizes that he's "on camera" and there's a confrontation.
Google Glass is something that I highly doubt society is ready for in any way.
A couple of ideas: introduce bugs, so managers wearing Glass in the city pub will see maintenance instructions for cash registers. Make ambient sounds automatically switch on when your boss talks.
Holy shit. If anyone from Google would like to give me a shot with some of these, I would be more than receptive. I've been kicking ideas around ever since I first heard of these in 2011.
I came up with an idea: a Google Glass app, plus a tool controlled by the app. When Glass comes out, there will be more people with app ideas. So, should I patent my idea?
What you just described is basically why the patent system sucks - if your idea is so obvious that other people will think of it, it's not worthy of one. Your motive for getting the patent is to block other people from doing something obvious.
Nah, chances are 100 other people have the same idea. No one is going to steal it. Just build the best version of that idea and don't worry about what everyone else is doing.