Google has already begun to show off the hardware of its anticipated Glass product, but it has been less forthcoming about the software side: what developers will be able to do with the product once it is finally released to the general public. The focus has been on what Glass is, not what it can be.
But now that's no longer the case. Senior developer advocate Timothy Jordan took the stage at a Glass hackathon in Google's San Francisco offices on Tuesday to give developers a sneak peek at the brand new Glass Development Kit (GDK).
"Glass is this really cool thing, as we all know in this room, that gives us access to the technology that we love, without getting in the way," Jordan said. "And that's very cool. But what's even cooler is that it's a way to build services for users that they can use in a natural, everyday way in their lives."
Developers have so far been building on Glass using the Google Mirror API. Only a handful of those apps are available online, yet 83% of Glass users already have at least one of them installed.
There is a lot that developers can do with the Mirror API, said Jordan, but developers want more, and Google wants to give it to them. That's where the GDK comes in.
So what's the difference between the Mirror API and the GDK? There are three major ones.
First, developers can build apps that run both online and offline. Second, developers get "real-time, immediate user response," meaning they will not have to go through the cloud to respond to a user request. And lastly, the GDK gives "deeper access to hardware," including the accelerometer and GPS.
In addition, the GDK offers new elements, such as "live cards," which allow developers to "draw directly into the Glass timeline in real time." The example Jordan gave was a stopwatch with millisecond updates: you can watch the stopwatch ticking while doing something else on the device, and reach it with a swipe.
Glass now also offers "immersions," which are the opposite of live cards. Jordan described an immersion as a "focused user experience, where the rest of Glass fades away," so that the user can focus solely on one task. In that mode, swipes only control features within the app.
"Unlike live cards, where, if you swipe forward, you go to the next card in the timeline, with immersions, if you swipe forward, that's captured by that experience and may be used by that Glassware. So those gestures can be used as part of the user experience," he explained.
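The routing difference Jordan describes can be sketched in a few lines. This is a toy model only, not the actual GDK API: the `Timeline` class, `swipe_forward` method, and gesture names below are all hypothetical, invented here just to illustrate who handles the gesture in each mode.

```python
class Timeline:
    """Toy model of the Glass timeline: an ordered list of cards plus a cursor."""

    def __init__(self, cards):
        self.cards = list(cards)
        self.position = 0

    def swipe_forward(self, in_immersion=False, on_gesture=None):
        if in_immersion:
            # Immersion: the swipe is captured by the Glassware itself,
            # so the app's own handler (if any) decides what happens.
            return on_gesture("SWIPE_FORWARD") if on_gesture else None
        # Live card (or any timeline card): Glass handles the swipe and
        # simply advances to the next card in the timeline.
        if self.position < len(self.cards) - 1:
            self.position += 1
        return self.cards[self.position]


timeline = Timeline(["clock", "stopwatch (live card)", "messages"])

# On a live card, a forward swipe moves through the timeline.
print(timeline.swipe_forward())  # → stopwatch (live card)

# In an immersion, the same swipe goes to the app instead.
print(timeline.swipe_forward(
    in_immersion=True,
    on_gesture=lambda g: f"game handled {g}"))  # → game handled SWIPE_FORWARD
```

In the real GDK the two modes are distinct building blocks (live cards live in the timeline; an immersion takes over the screen), but the essential contrast is the one modeled here: in an immersion, gestures belong to the Glassware rather than to Glass.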
It's always better to show than to tell, of course, so Google has also unveiled five new apps that put these new functions to use.
One of them is Word Lens, an app that lets users translate printed foreign words into English using the Google Glass camera, even when Glass is not connected to the Internet. It is an example of an immersive experience.
Another is a run-tracking app called Strava, which uses the live card function. It lets users view the details of their jog or bike ride on the left side of the Glass home screen while doing something else at the same time.
Google also unveiled an app called Allthecooks, which lets users search for recipes by voice command, and swipe through the different steps.
There is also GolfSight, which uses the GPS to tell users where they are on a golf course, along with accurate pin distances, course data and scoring.
Finally, there is Spellista, another immersive app, which is the first game built specifically for Glass.
The more I learn about Google Glass, the more I want one. And these developments, which highlight some of the really cool things developers will be able to do with it, only make me more excited about what it could one day be used for.
Check out Jordan's whole presentation below: