On day two of Google I/O 2013, Google ran a Glass track of sessions that proved to be the most popular at the conference, with every session filled to capacity and in need of overflow rooms. Glass remains in developer preview; the more than 2,000 developers in Google's Glass Explorer program received the device about a month ago.
How Google Glass Works
The Glass display shows what Google refers to as the Timeline, a series of pages called cards. Cards may contain text, images, video, or instruction menus. Glass users scroll through these cards using the multi-touch panel on the ear lug. Users can activate Glass' basic functionality, web search and photo and video capture, through voice commands. Glass also integrates with Google+ so users can share their photos and videos. Sharing is a core function of the Glass software, available to every app on the device.
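To make the card model concrete, here is a minimal sketch of a timeline card represented the way the Mirror API encodes one, as a JSON object. The field names (`text`, `menuItems`, `notification`) follow the published Mirror API timeline-item resource; the card content itself is illustrative.

```python
import json

# A minimal timeline card as a Mirror API timeline item (sketch).
card = {
    "text": "Hello from Glassware",       # plain-text body shown on the card
    "menuItems": [                        # menu the user sees on tapping the card
        {"action": "REPLY"},              # built-in voice-reply action
        {"action": "DELETE"},             # built-in delete action
    ],
    "notification": {"level": "DEFAULT"}, # chime when the card arrives
}

payload = json.dumps(card, indent=2)
print(payload)
```

Scrolling the Timeline simply moves through a sequence of such cards; apps add to it by inserting new items like this one.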
Some of the key takeaways from around the Web about using Google Glass at I/O 2013 include:
- Communication with Google Glass is through a voice interface, and responses to voice commands are crisp and accurate.
- The multi-touch panel on the ear lug takes a few minutes to get used to, particularly the pace of flipping through page displays.
- Focusing on the display also takes some getting used to.
Developers can build apps for Glass using Google's Mirror API, which uses JSON (JavaScript Object Notation) for encoding and OAuth for user authentication. Google will also release a Glass Development Kit (GDK) with tools for creating Glassware, the term for apps built for the device. Google's core products search, Gmail, and Google+ already have Glassware applications. In addition, the New York Times and Path currently offer Glassware apps, while Glassware from Facebook, CNN, and Evernote is in various stages of development.
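The combination described above, JSON encoding plus OAuth authentication, can be sketched as a single authenticated POST that inserts a card into the user's Timeline. The endpoint URL matches the published Mirror API; the access token here is a placeholder standing in for a real OAuth 2.0 bearer token obtained through Google's authorization flow.

```python
import json
import urllib.request

MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"
ACCESS_TOKEN = "placeholder-token"  # hypothetical; a real app gets this via OAuth 2.0

# The card to insert, JSON-encoded per the Mirror API.
card = {"text": "Breaking news from your Glassware"}

request = urllib.request.Request(
    MIRROR_TIMELINE_URL,
    data=json.dumps(card).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer " + ACCESS_TOKEN,  # OAuth bearer authentication
    },
    method="POST",
)
# urllib.request.urlopen(request) would send the request; it is omitted here
# because the placeholder token would be rejected by the server.
```

Because the Mirror API is an ordinary REST-over-HTTPS service, any language with an HTTP client can produce Glassware; the GDK is aimed at apps that need to run on the device itself.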
Google disappointed many by not announcing any time frame for the commercial availability of Google Glass. However, it did announce that the next group to receive Glass devices will be the 8,000 winners of the company's #ifihadglass contest.