When the iPhone was first released, Apple did something very un-Apple-like: it left a lot of empty space on the home screen. As we all know, Apple is a company known for perfecting every detail of its devices. Each curve, icon, and pixel has a purpose. So why leave a quarter of the screen as empty black space?
It left the door open for possibilities; possibilities that now number more than one million.
Google has done the same thing with Glass. When I first picked up my Glass back in June, there were only a handful of voice commands that followed the initial “OK, Glass” prompt: “Take a picture”, “Record a video”, “Google”, “Get directions”, and a few others. Just six months later, that list has expanded to include things like “Start a workout”, “Play a round of golf”, and even “Translate this”.
Soon, that list of voice commands will be endless, yet not overwhelming. Google wants you to be able to say anything, wherever you are, and have its services respond. That’s why Glass was designed to be worn on your face, closer to your mouth than a watch. Ask for something, and Google is right there. Just look at desktop voice search or touchless control on Google-owned Motorola devices for proof.
In a few years, Glass will be the hub for all the smart clothing we’ll be wearing, and will be the way we communicate on the go. That silicon and plastic brick in your pocket? Gone.
When you’re at home, Glass will continue to be that hub for your connected smart devices, allowing you to control every aspect of your house with a simple, natural voice command. In fact, some of this is already possible today. Now imagine if every device in your home had Glass’ core voice recognition functionality built in.
Google has said that it is ultimately striving to build the “Star Trek computer”, but I think it’s on track to give the world something more along the lines of Tony Stark’s JARVIS. And that’s A-OK with me.