The audio speaker for the headset looks like a button (it’s the oval with the Glass logo on it in the picture to the right; photo credit goes to androidchief.com) and is of the bone conduction variety that’s supposed to rest on the bone above your ear. When i first tried it, it felt odd because i could feel the audio vibrating against the side of my head where the speaker rested, and i couldn’t hear it that well. If i pressed it into my head more, it was better, but it still didn’t project the audio in a way that i felt was practical for listening to someone talk if i were out and about. By accident I discovered that the audio quality improved a great deal if i put my thinksound earbuds in (not connected to anything).
I suspect that the earbud ear fittings that expand in my ear pick up the audio and amplify it in the same way that they amplify the sound out of their own speakers. I’ve heard that the actual microphone component of Glass is much more successful for the device than the speaker, but because i can’t get that immediate feedback right now (i haven’t tried to call anyone with Glass on), i don’t trust that yet. When i put my earbuds in and try to make a call, Glass gives me the option to route my incoming and outgoing audio either to Glass (which is connected via Bluetooth) or to the headphones. I went with the headphones because i already know that works. I’m not sure if there’s a software fix for the speaker issue – probably, since they’ve already done software updates that vastly improved the picture and video quality. I’ll have to experiment with it some to see how i feel about it if/once i use it for other calls (something that will be rare for me since i don’t actually make a lot of calls on my smartphone).
Sidebar: When i use Glass to connect to Siri to have it do voice commands, Glass interprets that as a phone call and puts up a timecard for it, cluttering up my Timeline.
Pat told me that he didn’t think that Glass could work directly with Siri, but i discovered that it could – although where i tried it was a pretty windy area, and Siri picked up a lot of ambient noise, making it difficult to give commands effectively. What’s interesting is that Glass interprets that connection as if it were a phone call and puts up a timecard that displays the phone’s telephone number and the duration of my interaction with Siri. This is a minor annoyance because the Timeline is already tedious enough to navigate when it gets cluttered with content.
Positive #3 and closing remarks: Google Glass has a lot of potential.
Glass is still a very new technology, and as such it is still pretty limited in its functionality. A lot of people see that as a criticism – the “it’s cool, but it doesn’t actually do much” sort of answer, or the “if you have that on your head, then why do you have to also pull out your phone?” sort of question. I usually fall into this camp of thinking – i hate technology for the sake of technology and am not the sort of person who gets excited about Shiny New Things on the market, particularly beta or alpha products that in my mind might be fun but offer little practical functionality or application.
But Glass feels a little different from that to me for a few reasons. First, I generally put a lot of faith in Google as a company when it comes to finding the right balance of technological innovation and practical, accessible value. I’m very impressed with the detail and care they take to make the user experience of their products intuitive and flexible for a wide variety of users, and with their continual desire to push and challenge the limits of what they’re capable of delivering to the world. Glass is an innovative product that is already pretty great and will turn into something beyond awesome that a lot of people with a wide variety of backgrounds and interests will want to use.
Second, I tend to think that one of the reasons that Glass is still fairly basic in its functionality is Google’s desire not to lock Glass into too narrow a box at a stage when its future is so open-ended. Glass is an evolving project whose end goal is best directed by that large populace with a wide variety of backgrounds and interests. People like to joke about how the early adopters of Google products are their ultimate UAT employees, but i think there’s a lot of value in the kind of beta philosophy they cultivate among people who are enthusiastic about their innovations and have an interest in their products.
Third, although i don’t know exactly how often i’ll have it on, where i’ll use it, and how i’ll use it (i plan on writing a separate entry about the possibilities that are running through my head), Glass feels very relevant to me and what i’m trying to achieve with my job at the TUMB and my endeavors as a composer and electronic musician. Some of my “use cases” for the device are basic and enhanced versions of things that have already been done by both Glass and other sorts of technology, but some of the loftier, larger-scale ideas i have fall into a category of “very few people have the drive or the context to try to pull this off.” In a future entry i plan to talk “out loud,” as it were, about one of these lofty ideas: how Glass, Leap Motion, Hue, and WeMo can potentially be coupled together to create some sort of immersive musical/visual experience for a willing participant.
And I have to say that it feels pretty neat to play even this small part in shaping the product before its final release, and to share that commonality with other Glass Explorers both locally and across the globe. We’ll see how all of this progresses over the next year or so. I’m pretty excited about it.