Getting map directions is easily one of the best features and use-cases for Google Glass. Seeing turn-by-turn directions at the corner of your eye when you’re out and about is one of the simple pleasures of wearing a computer on your head.
Unfortunately, the only way Google provides to start navigation is with speech recognition, which fails more often than it works. Even though Glass' speech recognition works well enough for simple queries like “Pizza Hut” or “62 King St”, it stumbles on more complicated place names and addresses (especially with an Australian accent). Of course, there’s also the problem of sounding like a crazy person yelling addresses on the street.
Needless to say, this problem had been frustrating me for weeks, and because I had so much fun developing my first Google Glass app, I knew I could solve this one too.
The solution had to be typing, but you can’t type on Glass. So the next best thing was to type in a browser or on your phone, then send the address to Glass, like ChromeToPhone. Thankfully, the Glass Mirror API allows you to send content with a geolocation latitude/longitude and a “NAVIGATE” action for this exact purpose.
So over the Valentine’s Day weekend, I decided there was no better way to spend a romantic evening than with the Mirror API, PHP, SQL Azure and the Google Maps API. After a few hours of trial and error, Map2Glass.com was born.
It’s a simple website that lets you log in with a Google Glass account and opens a map view with an autocomplete search box at the top. Google Maps’ v3 API makes this almost too easy. A “Send to Glass” button then takes the latitude and longitude of a pinned address (along with some other metadata), formats it into a Glass Timeline card and sends it to the Mirror API. Once received on Glass, a simple tap begins navigation to the embedded location.
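The card itself is just a Mirror API timeline item carrying a location and a built-in “NAVIGATE” menu action. Here’s a minimal sketch in Python of what such a payload looks like — the helper name and the example location are my own for illustration, not Map2Glass’s actual code:

```python
# Sketch of a Mirror API timeline item that, when tapped on Glass,
# exposes the built-in NAVIGATE menu action for the pinned location.
# (build_navigation_card is a hypothetical helper, not Map2Glass code.)

def build_navigation_card(name, address, lat, lng):
    return {
        "text": f"Navigate to {name}\n{address}",
        "location": {                 # the geolocation NAVIGATE will use
            "latitude": lat,
            "longitude": lng,
            "displayName": name,
            "address": address,
        },
        "menuItems": [
            {"action": "NAVIGATE"},   # built-in Mirror API menu action
            {"action": "DELETE"},
        ],
        "notification": {"level": "DEFAULT"},  # chime when the card arrives
    }

card = build_navigation_card(
    "Sydney Opera House", "Bennelong Point, Sydney NSW",
    -33.8568, 151.2153,
)
```

POSTing a JSON body like this to the user’s timeline is all the server side really has to do; Glass handles the turn-by-turn part itself once the user taps “Navigate”.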
I threw the code on Windows Azure Web Sites, bought a domain and started spreading it around. A post in the Google+ community of Glass Explorer users earned a comment that was very fitting for Valentine’s Day, and it made it all worthwhile.
What this “phone-to-Glass” workflow has taught me is that even though I strongly believe wearable computing is the future, simple and precise tasks like typing can be perfectly complementary to the wearable experience.
Every time I get to the emoji keyboard, I curse at the “switch keyboard” button.
Just a year after Apple introduced the iPhone, at the very start of the mobile platform wars, Microsoft announced it had acquired Danger Inc. Today, six years later, people barely remember the acquisition, much less the brand and technology that came with it.
Chris DeSalvo, who worked at Danger, later at Google on Android, and now at Voxer, wrote up a very insightful blog post on the long and winding history of Danger from the 2000s, when its product was a keyfob with an LCD screen. It’s a great read for anyone interested in the history of mobile platforms.
I came across a website whose purpose was to provide a super detailed list of every handheld computing environment going back to the early 1970’s. It did a great job except for one glaring omission: the first mobile platform that I helped develop. The company was called Danger, the platform was called hiptop, and what follows is an account of our early days, and a list of some of the “modern” technologies we shipped years before you could buy an iOS or Android device.
His back-of-the-napkin math showed that for about the same cost as building out and maintaining this doomed nationwide FM data network we could instead do the R&D on a two-way data device hosted on GSM cellular networks. The data service on those networks was called GPRS, bleeding edge stuff at the time. This was awesome!
Tons of inputs—being power users of our desktop computers we wanted lots of inputs and lots of ways to tie them together to do extra stuff. We had a 1D roller controller that was also the main action button (later replaced with a 2D trackball), a 4-way d-pad (for games and such), three buttons on the corners of the face of the device (menu, jump, cancel). There was also a full QWERTY keyboard with a dedicated number row. You could chord the menu button with keyboard keys to perform menu actions (cut/copy/paste, etc), or with the jump button to quickly switch between apps. Written out like that it sounds like a lot, but you quickly got used to them, and they allowed you to do a lot of complicated actions without ever having to look at the screen.
We did a demo once at a trade show where we had someone in the audience give us a quote. Our presenter typed the quote into a hiptop and then put it on the ground and dropped a bowling ball on it. The hiptop was destroyed. He then removed the SIM card, plugged it into another hiptop, signed into the same account and seconds later there was the Notes app with the quote fully restored. Much applause.
Around 2005 there was a skunkworks project within Danger to merge a color Gameboy with a hiptop—we called it G1.
We extracted a Gameboy Advance chipset and built it on to the backside of the hiptop’s main board. We then developed a custom chip that would let us mix the video signals of the Gameboy and the hiptop so that on a per-pixel basis we could decide which to show on the screen. We made hiptop software that would let us start and stop the Gameboy, or play/pause a game, etc. The Gameboy inputs came from the hiptop’s d-pad and four corner buttons.
For a company that pivoted so many times and came up with the wildest ideas at each turn, it’s kind of no surprise its run ended (figuratively) with the Microsoft Kin.
P.S. The top image comes courtesy of the Microsoft Careers site which still has a reference to “Microsoft Danger Mobile”.
Microsoft today quietly launched the Xbox Music for Developers program, which allows apps and websites to utilise Xbox Music’s APIs for music-related tasks and upsell Xbox Music subscriptions for a nice affiliate commission.
The API is still in its very early stages and currently only exposes a REST endpoint for basic search and metadata queries, but it does allow for deep-link generation, which can redirect users to hear and purchase music from Xbox Music on the web, Windows Phone, Windows 8 and other platforms.
These deep links can also be tied to an affiliate code that will generate a revenue share every time a user clicks through. Microsoft also provides an “Available on Xbox Music” badge for developers to use.
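Mechanically, tying an affiliate code to a deep link amounts to appending a tracking parameter to the URL the metadata API returns. A minimal sketch in Python — note that the example base URL and the `affiliateId` parameter name are my assumptions, so check the Xbox Music for Developers documentation for the exact query-string contract:

```python
from urllib.parse import urlencode

def make_deep_link(base_link, affiliate_id):
    """Append an affiliate code to an Xbox Music deep link.

    base_link would come from the metadata API's response; the
    "affiliateId" parameter name here is an assumption, not the
    documented contract -- consult the program docs before shipping.
    """
    # Use "&" if the link already carries a query string, "?" otherwise.
    sep = "&" if "?" in base_link else "?"
    return base_link + sep + urlencode({"affiliateId": affiliate_id})

link = make_deep_link("https://music.xbox.com/Album/example-id", "MY-AFFILIATE-CODE")
```

The same pattern works for any affiliate or campaign-tracking scheme: generate the canonical link first, then decorate it just before handing it to the user.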
Other music services like Spotify and Rdio also offer APIs for developers, but theirs additionally allow playback of music to be integrated into mobile apps and websites, which is extremely useful for apps like the ones my startup is developing. I can only assume that will also be the case for Xbox Music in the future.
The Xbox team held an Xbox One global launch party on the oval at the Redmond headquarters with a lot of fireworks.
The 4-minute spectacle is now available for viewing from a pretty kickass vantage point high up in the sky, thanks to an HD camera mounted on top of a quadcopter flying 150 feet (45 metres) off the ground.
(Image above credit microsoftlife / Instagram)
So I received a Glass Explorer invite last week and took the opportunity to gift myself a Christmas present.
I’ve tried Google Glass before for just a couple of minutes, but trying a device and actually living with it are always two different experiences. Here are my impressions from using it every day over the last 5 days.
Hardware and usability
- I’m not used to wearing glasses for extended periods of time, but I find wearing Google Glass no different from wearing sunglasses.
- The Google Glass display is more off than on. It’s a passive device, not an active device. The experience is very notifications driven.
- Your normal field of vision is not obscured in any way. When the device is off, the crystal prism can cause light refractions and reflections, but this does not blind or distort your view (it’s not glare).
- Driving with Google Glass is no different than driving with a pair of sunglasses and no notifications are ever displayed until you tap or look up to activate the screen.
- The standard frame’s design aesthetics leave a lot to be desired, but the “Active Shade” attachment makes it look like a pair of sunglasses. (One person commented they didn’t even notice it was Google Glass until I pointed it out.)
- The photo and video camera works remarkably well across colour reproduction, white-balance, sharpness, field of view, focus, low-light and response speed.
- The swiping touchpad is essential for practical navigation and is easy to perform on the large touch surface.
- Battery life is unusably short at around 5-6 hours of light use with persistent data tethering. Thankfully it uses standard microUSB for charging.
- The lack of an onboard GPS chip makes location-based services and features more difficult than they need to be. The Android “MyGlass” companion app relays GPS from the phone to Glass, but there is no iOS equivalent.
- The bone-conduction speaker is next to useless, since it is inaudible if there is any other noise in the environment. The included mono earbud is perfect, but swapping between the USB charger and the earbud is annoying.
- Bluetooth phone calls work fine, but the lack of an A2DP profile means Glass can’t replace headphones for a mobile phone yet (if using the mono or stereo earbud).
OS software & user experience
- The operating system is fast and responsive. It’s primitive in functionality but practical for the input control.
- The timeline UI paradigm of “future/current/past” display cards takes a lot of getting used to. The lack of categorization or file structures makes navigating structured content appear more difficult than it should be.
- The out-of-the-box experience provided by the native apps and functionality is very bare-bones. There is no native social integration with Facebook or Twitter. (The official Facebook and Twitter apps are currently very primitive, if not broken.)
- Most interactions are heavily biased toward Google+, which means some people will have to go out of their way to manage contacts and cloud backup of photos/videos.
- Voice commands and searches work remarkably well if you desire to use them (I use touch 90% of the time).
- The web management UI makes add/remove/configure third-party apps easy and straightforward.
- Native integration with Google Search, Google Music and Google Now works extremely well, but exposes only a limited subset of each service’s capabilities.
- The two app development models (GDK/Mirror API) provide a great balance for both native-app developers and web developers.
- The list of authorised apps and services is growing. Many more “unauthorised” apps and services are being actively developed which can be easily sideloaded.
- Developing a Mirror API app is based on simple HTTP calls and should be a no-brainer for any modern web developer. Comes with code samples for all popular platforms.
- I made a Mirror API-powered app for Facebook messages and notifications in under a day using PHP, MySQL and Windows Azure.
- From what I can see, developing on the GDK is no different from developing Android apps, a skill set that is already ubiquitous.
- There is much potential for apps and services to deliver quick “glance-and-go” notifications and media content (pictures, video) to Glass users that drive engagement to complement existing apps/websites.
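As noted above, a Mirror API app really does boil down to a few authenticated HTTP calls. Here’s a minimal Python sketch of the request that inserts a text card onto a user’s timeline — the helper name is mine, and obtaining the OAuth 2.0 access token is elided:

```python
import json

# Documented Mirror API endpoint for inserting timeline items.
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def build_card_request(access_token, text):
    """Return the (url, headers, body) of a Mirror API timeline insert.

    Sending this with any HTTP client pushes a simple text card onto
    the user's Glass timeline. Acquiring the OAuth 2.0 access_token
    (via the standard Google authorization flow) is elided here.
    """
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"text": text, "notification": {"level": "DEFAULT"}})
    return MIRROR_TIMELINE_URL, headers, body

url, headers, body = build_card_request("ya29.dummy-token", "Hello from the Mirror API")
```

That single POST, plus a webhook for replies if you want them, is the whole server-side surface area — which is why a weekend project in PHP on Azure is entirely feasible.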
I have no doubt that if Google released the current hardware and software (ecosystem) to the public, it would flop. The display fidelity, frame design, bone-conduction speaker, battery life and iOS compatibility all need to be greatly improved before it can be considered a practical day-to-day tool.
Having said that, the opportunities for developers are bountiful and the GDK and Mirror APIs are some of the most approachable developer platforms I’ve seen.
And in my experience no one on the street, train or bus really cares. I’ve made it a habit to use mine primarily outdoors and take it off when I’m meeting a person face-to-face.