I recently had the pleasure of attending Austin’s first Google Glass meetup, hosted by Brian Moeskau and the good folks of Bleeding Edge Web at Capitol Factory. Pristine founder Kyle Samani gave a detailed introduction to the hardware, software, development, economics, and philosophy of Google Glass, shared his pair, and fielded questions from the diverse and curious audience. Kyle and his team have been working with Glass for months now, and as a result have developed some of the most mature thinking on the platform I’ve encountered.
Self-trackers like me are particularly excited about how Google Glass can enhance our ability to record data about ourselves and understand ourselves better. Chris Hollindale over at The Guardian has been brainstorming a little on the subject. Unlike him, I wouldn’t go so far as to say the possibilities are “endless,” but I do believe Glass can complement existing self-tracking apps and systems in interesting ways.
Here are a few things I’d love to see early consumer Glass apps include:
- Nike+ Running milestone notifications
- Real-time workout feedback from Nike+ FuelBand / FitBit
- Glass-prompted mood input to MoodPanda
- Heart-rate or blood pressure warnings from Polar / Blip
- Geofenced Foursquare check-in prompt (“Yes, I’m here” to confirm)
These are mostly friction-reduced versions of existing features. Far more interesting things are probably in the works as I write this. Kyle anticipates that developers will have built Glass apps to fill the obvious hobby-enhancement categories by the time it launches later this year or early next. That’s great news for consumers, and it’s a necessary step to spur early adoption of such a radically new product. Price point, app maturity, and integration considerations will matter, of course. But Glass’s ability to offer quick, contextual, friction-free interaction seems well suited to aid us self-trackers from the get-go.
Just to be clear, I don’t believe Glass will replace smartphones for consuming and drawing insights from our data. Tablets, iPhones, and the Web already handle those activities very well. Glass can distinguish itself by offering new, useful input channels (always-ready voice, camera, sensors) and a heads-up display for only the most immediately relevant output. Google is adamant that Glass should not “get in the way.” I completely agree, and look forward most to the apps that use Glass for its unique strengths rather than simply porting smartphone functionality.
How would you like to see Glass enhance self-tracking?
In other news…
I’m a NikeFuellionaire!
Somewhere around the time I took this photo on my recent jaunt in San Francisco, I passed ONE MILLION NikeFuel. Back in October of last year I started recording my Nike+ FuelBand activity, and despite a gap of a few days near the beginning when my battery had a minor glitch, I’ve recorded every single day since. I’ll do a more detailed breakdown of this journey in a future post. Until then, hooray for relatively arbitrary milestones!
And as if that weren’t enough…
I’m kicking myself that I completely missed the brain-tastic Melon headband Kickstarter project. This EEG-based marvel looks very stylish when paired with Glass, and I’m a sucker for stuff that feeds my telekinesis fantasies. I look forward to buying this when it eventually launches and controlling origami with my mind.
Oh, and I bought a Memoto. ^_^