"Each step in human technological advancement provides improved methods for the distribution of cat photos. Project Glass is no different."
On a slightly more serious note, I'm glad to see this since I feared that they might keep the API for later, instead opting to try and make a "product" first (a la iPhone 1 or Google+).
This API is definitely not what I was envisioning though - I expected another API add-on to Android, where you can take over and do what you wish with the display, so perhaps it is a bit like webapps-as-apps on the iPhone 1. I'd be interested to see what early adopters do with it and if they find this API too limited.
I was also fairly disappointed; it looks like all five of the things I wanted to do with Glass can't be done with this API. But I'm hoping they wanted to get a basic, easier-to-implement API out as fast as possible while still working on more interesting support on the side (native apps, in particular).
This is... pretty seriously disappointing, if it's supposed to be the whole thing (or close to it). I'll give them the benefit of the doubt and assume that they're going to publish the rest later, and that they're not deliberately hobbling it to prevent the public from getting creeped out by the hardware's capabilities.
Glass is an entirely new kind of device; to have good ideas I think we need to wear it and experience the world through it. The API is fairly constrained, but so is the device (eg always in the field of view). And strict constraints, when enforced for good reason, often lead to interesting products (eg Twitter).
See the work of Prof. Steve Mann, specifically "EyeTap". I understand one of his ex-students is on the Glass team, however "an entirely new kind of device" it is not. It has a heritage of ideas and implementations that are around 30 years old.
That's true, but Glass is the first time something like this has been mass-produced. It's clearly a version 1 product, even though other people have done similar things before Glass.
There's no augmented reality, but from the docs it looks like you can stream audio and video to and from a service, maintain awareness of the user's location whether they're using the app or not, and push interactive notifications. About all it's missing is a HUD, which will no doubt come as soon as the batteries can handle it.
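For what it's worth, the location part seems to be just a simple REST call. Here's a rough Go sketch of polling the wearer's last known position, assuming the endpoint and field names are roughly what the docs describe (`client` would be an OAuth-authorized *http.Client, which I'm only stubbing out here):

```go
// Sketch: fetch the wearer's last known location from the Mirror API.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

type location struct {
	Latitude  float64 `json:"latitude"`
	Longitude float64 `json:"longitude"`
	Accuracy  float64 `json:"accuracy"` // metres, if the docs are right
	Timestamp string  `json:"timestamp"`
}

func latestLocation(client *http.Client) (*location, error) {
	resp, err := client.Get("https://www.googleapis.com/mirror/v1/locations/latest")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var loc location
	if err := json.NewDecoder(resp.Body).Decode(&loc); err != nil {
		return nil, err
	}
	return &loc, nil
}

func main() {
	// http.DefaultClient is a placeholder; a real service would use a
	// client carrying OAuth 2.0 credentials for the glass.location scope.
	loc, err := latestLocation(http.DefaultClient)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("wearer last seen near %.5f, %.5f\n", loc.Latitude, loc.Longitude)
}
```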
This is so exciting. Just as a completely new user experience paradigm was uncovered when high-quality touch-screen devices like the iPhone first launched, this is yet another milestone in how we will continue to interact with the petabytes of data that we as mankind have digitized.
It's worth pondering how significantly new I/O devices change the game -- the first tty, the commercial keyboard & mouse, the touch screen, multi-touch trackpads, and voice activated smartphones.
Amazing technology aside, this is a pretty disappointing API release from a developer standpoint. Basically no access to Glass's amazing hardware, nor any way to receive user input other than a swipe/tap on the side? I'm really hoping it gets more comprehensive.
Glass will surely be hacked six ways to Sunday, but I doubt the official API will ever go much deeper than it already does.
The still-disappointing Plus API pretty much tipped Google's hand on how flexible they want to be about providing APIs for future products. On top of that, there are some pretty substantial privacy issues with giving developers low-level access to the vast amount of personal data Glass will constantly be collecting. I'm already worried enough about Google having that data that I'm sitting out Glass for the foreseeable future (despite suspecting it will be useful for a lot of things), but if random third parties could access that data at a low level I'd be even more worried.
I'm willing to bet that they'll have some Android integration announcements at I/O. The Motorola phones are supposed to have touch controls on the back, which would be a pretty convenient way to interact with Glass.
Actually, from what I see, the only way to receive user input is via the user's selection of menu options. There don't appear to be any callbacks from swipes or taps on the touchpad.
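To make that concrete, here's a minimal sketch of that one input path: you attach menu items to a card when you insert it, and the wearer's selection comes back to your service as a notification (see the callback sketch further down the thread). The field names follow my reading of the Mirror docs and may be slightly off.

```go
// Sketch: a timeline card carrying menu items, the only input mechanism
// the API appears to expose.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type menuValue struct {
	DisplayName string `json:"displayName"`
}

type menuItem struct {
	Action string      `json:"action"`       // e.g. "CUSTOM", "DELETE", "REPLY"
	ID     string      `json:"id,omitempty"` // echoed back when the wearer picks it
	Values []menuValue `json:"values,omitempty"`
}

type timelineItem struct {
	Text      string     `json:"text"`
	MenuItems []menuItem `json:"menuItems,omitempty"`
}

func main() {
	card := timelineItem{
		Text: "Pick up milk",
		MenuItems: []menuItem{
			{Action: "CUSTOM", ID: "done", Values: []menuValue{{DisplayName: "Mark done"}}},
			{Action: "DELETE"},
		},
	}
	body, err := json.MarshalIndent(card, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	// POST this body to https://www.googleapis.com/mirror/v1/timeline
	// with an OAuth-authorized client.
	fmt.Println(string(body))
}
```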
> It's worth pondering how significantly new I/O devices change the game -- the first tty, the commercial keyboard & mouse, the touch screen, multi-touch trackpads, and voice activated smartphones.
This comment actually explains my complete disinterest in Glass :)
I don't see how voice-activated smartphones have changed the game yet, IMHO they're in the same league as Glass - "this might be worth it later" (especially dubious for Siri). I use two touch-screens and a trackpad every day and yet I could probably go back to only a keyboard and die happy.
What has changed my life was connectedness, and Glass does not do more than a smartphone in that area.
(You go out to the Ebay store? Can you tell me where that is?)
My point was, just because something isn't for sale in stores right now doesn't mean it never existed. As far as I know, Steve Mann's EyeTap glasses have never been for sale in stores, but kaolinite's argument was just silly – hence my comment.
If you don't like the 'reel-to-reel tape recorder' example, imagine I said 'enriched plutonium'.
My point wasn't that it didn't exist because it wasn't in stores, but rather that taking a device like that to mass market is a much bigger challenge than making it for one person (not that I'm saying his work isn't incredible).
When discussing UX and how this will affect the general population, this technology is basically brand new.
What would be nice is if Google released an Android app that does the same thing as Glass (ie. location updates and push notifications), for testing purposes. It wouldn't be as nifty as having the thing on your head, but pretty much all the use cases covered by the API would work on that.
Yeah, I assume that works if you want to test how cards display, though there does not seem to be any interaction.
Considering the API only allows viewing cards, taking pictures, sending your current location, and taking textual input, there's nothing preventing them from having a Glass implementation on Android to test things out, other than the time/resources to develop such an implementation.
I've been a Google Glass skeptic. But I just got back from Mexico, where I was walking all over waving my phone at various signs so Word Lens could translate them for me... skeptic no more! Word Lens is a killer app for the platform. Except now I see that there's no API to access the camera. Seems like a huge mistake.
One of their example API uses has users taking photos with the built-in camera and sharing them with your service. See "add a cat to that": https://developers.google.com/glass/stories
No camera API access? That's ridiculous! Not only does it restrict about 70% of the possible usefulness of a head-mounted computer (so it's basically just a fancy news-feed display), but it'll allow competitors to move into the arena having a clear advantage. The only reason I see for Google to be doing this is resource allocation, but I feel like image processing could be offloaded to a linked smartphone, if necessary.
It looks like there will be one standard way to take photos, and you'll register an Intent to handle what you want to do with those photos (see 'Add a cat to that'). Considering people will be wearing these things 23/7, it's not unreasonable that they wouldn't give arbitrary apps shutter control right away.
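For anyone curious what that flow would look like server-side, here's a rough sketch of the callback endpoint: the Mirror API POSTs a small JSON notification to your subscription URL when the wearer shares a photo to your contact or picks a menu item. The payload fields are hedged from my reading of the docs.

```go
// Sketch: a Glassware notification endpoint handling "photo shared" events.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type userAction struct {
	Type    string `json:"type"`    // e.g. "SHARE", "CUSTOM", "REPLY"
	Payload string `json:"payload"` // the menu item id for CUSTOM actions
}

type notification struct {
	Collection  string       `json:"collection"` // "timeline" or "locations"
	ItemID      string       `json:"itemId"`
	Operation   string       `json:"operation"`
	UserActions []userAction `json:"userActions"`
}

func handleNotify(w http.ResponseWriter, r *http.Request) {
	var n notification
	if err := json.NewDecoder(r.Body).Decode(&n); err != nil {
		http.Error(w, "bad payload", http.StatusBadRequest)
		return
	}
	for _, a := range n.UserActions {
		if a.Type == "SHARE" {
			// Fetch the shared item (and its photo attachment) with an
			// authorized GET of /mirror/v1/timeline/{itemId}, then add the cat.
			log.Printf("photo shared: timeline item %s", n.ItemID)
		}
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/notify", handleNotify)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```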
The Google Mirror API allows you to build web-based services, called Glassware, that interact with Google Glass. It provides this functionality over a cloud-based API and does not require running code on Glass.
This API is focused on pushing info to Glass rather than interaction (which I assume will come later).
> but the setup process gets blocked when you need your actual device to sync
Not entirely, it's obvious that it's just pinging some endpoint for a protocol buffer. The server will return a protocol buffer with a "continue" message, but you can just spoof that.
As per the update to that issue, the API is currently only available to developers with physical access to Glass. It is a shame they didn't make this clearer on the developer documentation page itself...
Look, hopefully they'll improve on this a lot, but let's be honest: this API is pathetic. If there are significant technical reasons (power, weight, etc.) why this is all that can be done, then they probably aren't ready for prime time. I'm trying to stay positive and imagine the future for this product is bright, but wow wow wow this is bad.
So the screen resolution is 640 x 360 px. It looks like a lot of interesting applications can be built with that real estate! Weather and maps are a couple that come to mind right away.
It looks like the Mirror API allows you to register callbacks, like Android Intents, to handle events like 'position changed' or 'new photo'. They're really locking down user interaction on the device itself, probably due to processor power, and to keep the experience consistent. Not that I agree with them, but they definitely look afraid of getting a bad name if some devs produce crap - hence the very limited scope.
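As far as I can tell, "registering a callback" just means creating a subscription resource pointing at your HTTPS endpoint. A rough sketch, with the endpoint and field names hedged from the docs and a hypothetical callback URL:

```go
// Sketch: subscribe a Glassware service to location (or timeline) events.
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
)

type subscription struct {
	Collection  string `json:"collection"`  // "locations" or "timeline"
	CallbackURL string `json:"callbackUrl"` // must be HTTPS
	VerifyToken string `json:"verifyToken,omitempty"`
}

func subscribe(client *http.Client, sub subscription) error {
	body, err := json.Marshal(sub)
	if err != nil {
		return err
	}
	resp, err := client.Post(
		"https://www.googleapis.com/mirror/v1/subscriptions",
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	log.Println("subscription status:", resp.Status)
	return nil
}

func main() {
	// http.DefaultClient stands in for an OAuth-authorized client.
	err := subscribe(http.DefaultClient, subscription{
		Collection:  "locations",
		CallbackURL: "https://example.com/notify", // hypothetical endpoint
		VerifyToken: "s3cret",
	})
	if err != nil {
		log.Fatal(err)
	}
}
```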
The specs on the display are pretty bad. In the API examples, the one with the most text is the shopping list, with five lines of short text. I want to look at longer text on a HUD device. Is this a limitation in the ability to create HUD hardware with higher resolution? Also, can someone with experience in this area explain the pros/cons of the "how it appears" spec? I'm talking about the spec where they say "looks like an X-size display X distance away". Here are two HUD specs:
Glass: 640x360, looks like a 25" HD display from 8 ft
Vuzix M100: 400x240, looks like a 4" mobile screen at 14"
If I place a ~4" mobile device 14" from the top right of my field of vision, I think I could live with that amount of obscured vision, but is it feasible to create that with 720p resolution? Why would you want a 25" display 8ft away? That seems like it would just be good for placing display ads and not really for most useful things aside from quick notifications.
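Out of curiosity I ran the numbers on those two "how it appears" claims, converting each into a horizontal field of view and pixels per degree. This assumes both virtual screens are 16:9 and that the quoted sizes are diagonals, which is a guess on my part:

```go
// Back-of-the-envelope comparison of the two HUD specs above.
package main

import (
	"fmt"
	"math"
)

// fovDegrees returns the horizontal angle subtended by a 16:9 screen of the
// given diagonal (inches) viewed from the given distance (inches).
func fovDegrees(diagonalIn, distanceIn float64) float64 {
	width := diagonalIn * 16 / math.Hypot(16, 9)
	return 2 * math.Atan(width/2/distanceIn) * 180 / math.Pi
}

func main() {
	glassFOV := fovDegrees(25, 8*12) // "25-inch HD display from 8 feet"
	vuzixFOV := fovDegrees(4, 14)    // "4-inch mobile screen at 14 inches"

	fmt.Printf("Glass: %.1f° wide, %.0f px/°\n", glassFOV, 640/glassFOV)
	fmt.Printf("Vuzix: %.1f° wide, %.0f px/°\n", vuzixFOV, 400/vuzixFOV)
	// Roughly: Glass ~13° and ~49 px/°, Vuzix ~14° and ~28 px/° — a similar
	// apparent size, but Glass packs noticeably more pixels into it.
}
```

So if those assumptions hold, the two displays occupy a similar slice of your field of view; the "25-inch from 8 feet" phrasing mostly signals higher pixel density, not a bigger apparent screen.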
Go? This is awesome to hear. Unlike Android, Glass will be an instant win for us Gophers. (Well, once they've crossed 1M+ devices sold, so that the potential audience for hackery becomes actually interesting.)
This could be a chance for Go to become more mainstream if there weren't so many languages supported, but I guess it's not Google's agenda to disseminate their own language.
I believe the Go library is automatically generated off the API, effectively just extending the existing Go/Google API client library. Which is not to downplay it in any way (I love Go!), just that it's not vastly exciting.
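If it follows the usual pattern of the generated Go clients, using it probably looks something like this (import path and exact names hedged; `authClient` is an OAuth-authorized *http.Client):

```go
// Sketch: pushing a card via the auto-generated Go client for the Mirror API.
package main

import (
	"log"
	"net/http"

	mirror "google.golang.org/api/mirror/v1"
)

func pushCard(authClient *http.Client) error {
	svc, err := mirror.New(authClient)
	if err != nil {
		return err
	}
	_, err = svc.Timeline.Insert(&mirror.TimelineItem{
		Text: "Hello from Glassware",
	}).Do()
	return err
}

func main() {
	// Placeholder client; a real one would come from the OAuth 2.0 flow.
	if err := pushCard(http.DefaultClient); err != nil {
		log.Fatal(err)
	}
}
```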
It doesn't actually show up as a service yet - follow the instructions on this page [0], see "Getting started".
Basically, create a project in API console, create an OAuth 2.0 client id, add [1] to the valid JavaScript origins and then paste your client id ({number}.apps.googleusercontent.com) into the playground [2].
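The same setup in Go terms, for when you move past the playground, would be roughly this minimal sketch using golang.org/x/oauth2 with the client id/secret from the API console. The redirect URL is hypothetical and the scope strings are the ones I believe the Mirror API uses:

```go
// Sketch: OAuth 2.0 config for a Glassware service.
package main

import (
	"fmt"

	"golang.org/x/oauth2"
	"golang.org/x/oauth2/google"
)

func main() {
	conf := &oauth2.Config{
		ClientID:     "{number}.apps.googleusercontent.com", // from the API console
		ClientSecret: "your-client-secret",
		RedirectURL:  "https://example.com/oauth2callback", // hypothetical
		Scopes: []string{
			"https://www.googleapis.com/auth/glass.timeline",
			"https://www.googleapis.com/auth/glass.location",
		},
		Endpoint: google.Endpoint,
	}

	// Send the user to this URL, exchange the returned code with
	// conf.Exchange, and conf.Client then gives the authorized *http.Client
	// used in the other snippets above.
	fmt.Println(conf.AuthCodeURL("state-token", oauth2.AccessTypeOffline))
}
```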
I'm hoping there's an easy 'be quiet' option. They're 100% right that apps shouldn't be spammy, but we additionally need a standardised option to mute or uninstall apps with a couple of swipes. It's the most walled garden ever, but for something like this with so much potential for spammer abuse, I think we need it. At least they'll be hackable.
I used to think it was ethically questionable to add cameras and trackers to wild animals just so we could investigate their habits. Now human animals are doing this to themselves voluntarily.