[Image: Microsoft Danger page graphic]

The history of Danger, a Microsoft acquisition long forgotten

Just a year after Apple introduced the iPhone, at the very start of the mobile platform wars, Microsoft announced it had acquired Danger Inc. Six years later, people barely remember the acquisition, much less the brand and technology that came with it.

Chris DeSalvo, who worked at Danger, later at Google on Android, and now at Voxer, wrote a very insightful blog post on the long and winding history of Danger from the early 2000s, when its product was a keyfob with an LCD screen. It’s a great read for anyone interested in the history of mobile platforms.

I came across a website whose purpose was to provide a super detailed list of every handheld computing environment going back to the early 1970s. It did a great job except for one glaring omission: the first mobile platform that I helped develop. The company was called Danger, the platform was called hiptop, and what follows is an account of our early days, and a list of some of the “modern” technologies we shipped years before you could buy an iOS or Android device.

His back-of-the-napkin math showed that for about the same cost as building out and maintaining this doomed nationwide FM data network we could instead do the R&D on a two-way data device hosted on GSM cellular networks. The data service on those networks was called GPRS, bleeding edge stuff at the time. This was awesome!

Tons of inputs—being power users of our desktop computers we wanted lots of inputs and lots of ways to tie them together to do extra stuff. We had a 1D roller controller that was also the main action button (later replaced with a 2D trackball), a 4-way d-pad (for games and such), three buttons on the corners of the face of the device (menu, jump, cancel). There was also a full QWERTY keyboard with a dedicated number row. You could chord the menu button with keyboard keys to perform menu actions (cut/copy/paste, etc), or with the jump button to quickly switch between apps. We’d later add two top-edge shoulder buttons, an “ok” button, and dedicated buttons for answering and hanging up phone calls. Written out like that it sounds like a lot, but you quickly got used to them, and they allowed you to do a lot of complicated actions without ever having to look at the screen.

We did a demo once at a trade show where we had someone in the audience give us a quote. Our presenter typed the quote into a hiptop and then put it on the ground and dropped a bowling ball on it. The hiptop was destroyed. He then removed the SIM card, plugged it into another hiptop, signed into the same account and seconds later there was the Notes app with the quote fully restored. Much applause.

Around 2005 there was a skunkworks project within Danger to merge a color Gameboy with a hiptop—we called it G1.

We extracted a Gameboy Advance chipset and built it on to the backside of the hiptop’s main board. We then developed a custom chip that would let us mix the video signals of the Gameboy and the hiptop so that on a per-pixel basis we could decide which to show on the screen. We made hiptop software that would let us start and stop the Gameboy, or play/pause a game, etc. The Gameboy inputs came from the hiptop’s d-pad and four corner buttons.

For a company that pivoted so many times and came up with the wildest ideas at each turn, it’s perhaps no surprise that its run (figuratively) ended with the Microsoft Kin.

P.S. The top image comes courtesy of the Microsoft Careers site which still has a reference to “Microsoft Danger Mobile”.

Xbox Music for developers

Xbox Music launches developer APIs & affiliate program

Xbox Music today quietly launched the Xbox Music for Developers program, which allows apps and websites to utilise Xbox Music’s APIs for music-related tasks and upsell users Xbox Music subscriptions for a nice affiliate commission.

The API is still in its very early stages and currently only exposes a REST endpoint for basic search and metadata queries, but it does allow for deep-link generation, which can redirect users to hear and purchase music from Xbox Music on the web, Windows Phone, Windows 8 and other platforms.

These deep-links can also be tied to an affiliate code that generates a revenue share every time a user clicks through. Microsoft also provides an “Available on Xbox Music” badge for developers to use.
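To give a rough feel for the workflow, here is a minimal sketch of how a site might search the catalogue and tag the resulting deep-link with its affiliate code. The endpoint URL, parameter names, auth scheme and response shape below are assumptions and placeholders rather than the documented API, so check the Xbox Music for Developers docs for the real values.

```python
import requests

# Illustrative sketch only - the host, path, parameter names, auth scheme and
# affiliate query string are placeholders, not the documented Xbox Music API.
SEARCH_ENDPOINT = "https://music.xbox.com/1/content/music/search"  # assumed endpoint
AFFILIATE_ID = "YOUR_AFFILIATE_ID"                                 # hypothetical affiliate code

def find_album_deep_link(query: str, access_token: str) -> str | None:
    """Search the catalogue and return the first album's deep-link tagged with our affiliate code."""
    resp = requests.get(
        SEARCH_ENDPOINT,
        params={"q": query},
        headers={"Authorization": f"Bearer {access_token}"},  # auth scheme assumed
        timeout=10,
    )
    resp.raise_for_status()
    albums = resp.json().get("Albums", {}).get("Items", [])   # response shape assumed
    if not albums:
        return None
    # Appending the affiliate code is what earns the revenue share on click-through.
    return albums[0]["Link"] + "&affiliateId=" + AFFILIATE_ID

# e.g. find_album_deep_link("Random Access Memories", token)
```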


Other music services like Spotify and Rdio also offer APIs for developers, but they additionally allow music playback to be integrated into mobile apps and websites, which is extremely useful for apps like the ones my startup is developing. I can only assume that will also be the case for Xbox Music in the future.


Exploring Google Glass: impressions from a developer

So I received a Google Glass Explorer invite last week and took the opportunity to gift myself a Christmas present.

I’ve tried Google Glass before for just a couple of minutes, but trying it and actually living with it are very different experiences. Here are my impressions from using it every day over the last five days.


Hardware and usability

  • I’m not used to wearing glasses for extended periods of time, but wearing Google Glass feels about the same as wearing a pair of sunglasses.
  • The Google Glass display is more off than on. It’s a passive device, not an active one, and the experience is very notification-driven.
  • Your normal field of vision is not obscured in any way. When the device is off, the crystal prism can cause light refractions and reflections, but this does not blind or distort your view (it’s not glare).
  • Driving with Google Glass is no different than driving with a pair of sunglasses and no notifications are ever displayed until you tap or look up to activate the screen.
  • The standard frame’s design aesthetics leave a lot to be desired, but the “Active Shade” attachment makes it look like a pair of sunglasses. (One person commented they didn’t even notice it was Google Glass until I pointed it out.)
  • The photo and video camera works remarkably well across colour reproduction, white-balance, sharpness, field of view, focus, low-light and response speed.
  • The swiping touchpad is essential for practical navigation and is easy to perform on the large touch surface.
  • Battery life is unusably short at around 5-6 hours of light use with persistent data tethering. Thankfully it charges over standard microUSB.
  • The lack of an onboard GPS chip makes location-based services and features more difficult than they need to be. The Android “MyGlass” companion app relays GPS from the phone to Glass, but there is no iOS equivalent yet.
  • The bone-conduction speaker is next to useless, since it is inaudible if there is any other noise in the environment. The included mono earbud is perfect, but swapping between the USB charger and the earbud is annoying.
  • Bluetooth phone calls work fine, but the lack of an A2DP profile means it can’t replace headphones for a mobile phone yet (even when using the mono or stereo earbud).


OS software & user experience

  • The operating system is fast and responsive. It’s primitive in functionality but practical given the input controls.
  • The timeline UI paradigm of “future/current/past” display cards takes a lot of getting used to. The lack of categorization or file structures makes navigating structured content appear more difficult than it should be.
  • The out-of-the-box experience provided by the native apps and functionality is very bare-bones. There is no native social integration with Facebook or Twitter. (The official Facebook and Twitter apps are currently very primitive, if not broken.)
  • Most interactions are heavily biased towards Google+, which means some people will have to go out of their way to manage contacts and photo/video cloud backup.
  • Voice commands and searches work remarkably well if you choose to use them (I use touch 90% of the time).
  • The web management UI makes adding, removing and configuring third-party apps easy and straightforward.
  • Native integration with Google Search, Google Music and Google Now works extremely well, but only exposes a limited set of each service’s capabilities.


App ecosystem

  • The two app development models (GDK and Mirror API) strike a great balance for both native-app developers and web developers.
  • The list of authorised apps and services is growing, and many more “unauthorised” apps and services are being actively developed and can be easily sideloaded.
  • Developing a Mirror API app is based on simple HTTP calls and should be a no-brainer for any modern web developer; Google provides code samples for all popular platforms. (See the sketch after this list.)
  • I made a Mirror API-powered app for Facebook messages and notifications in under a day using PHP, MySQL and Windows Azure.
  • From what I can see, developing with the GDK is no different from developing ordinary Android apps, a skill set that is already ubiquitous.
  • There is much potential for apps and services to deliver quick “glance-and-go” notifications and media content (pictures, video) to Glass users, driving engagement that complements existing apps and websites.
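To show how simple those Mirror API HTTP calls are, here is a minimal sketch of pushing a text card to a user’s timeline. It is not the PHP/Azure app mentioned above, just an illustration in Python, and it assumes you already hold a valid OAuth 2.0 access token with the glass.timeline scope.

```python
import requests

# Minimal Mirror API sketch: insert a plain-text card into a Glass user's timeline.
# Assumes an already-obtained OAuth 2.0 access token with the glass.timeline scope.
TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def push_card(access_token: str, text: str) -> dict:
    """Insert a text timeline item; Glass surfaces it as a glanceable notification."""
    resp = requests.post(
        TIMELINE_URL,
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        json={"text": text, "notification": {"level": "DEFAULT"}},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # the created timeline item, including its id

# e.g. push_card(token, "New Facebook message from Alice: see you at 7?")
```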

Conclusion

I have no doubt that if Google released the current hardware and software ecosystem to the public, it would flop. The display fidelity, frame design, bone-conduction speaker, battery life and iOS compatibility all need to be greatly improved before it can be considered a practical day-to-day tool.

Having said that, the opportunities for developers are bountiful and the GDK and Mirror APIs are some of the most approachable developer platforms I’ve seen.

And in my experience no one on the street, train or bus really cares. I’ve made it a habit to use mine primarily outdoors and take it off when I’m meeting a person face-to-face.

Photo samples

[Eight photo samples captured with Google Glass]


Microsoft Research opens “Microsoft Centre for Social Natural User Interface” in Melbourne

Microsoft Research has landed down under!

A kangaroo hop away from the Melbourne city centre, in the suburb of Parkville, the University of Melbourne campus is now home to a Microsoft Research centre dedicated to developing new social interactive technologies.

The Microsoft Centre for Social Natural User Interface is “quite a mouthful”, as noted by the University of Melbourne’s Deputy Vice-Chancellor of Research, and as such it’s abbreviated to SocialNUI. The Victorian Minister for Technology also joked that “a big challenge of setting up the centre is the name”.


Once you look past the buzzword-filled name, the centre’s focus is on natural user interface technologies that include, and often combine, voice, gesture, eye, body and touch inputs, like those found in phones, tablets and devices such as the Xbox Kinect, and on applying them to innovative new social uses and applications.

Microsoft Research Vice President Dr. Tony Hey notes there are four main areas of NUI research: private spaces such as the family home, public spaces such as parks and gatherings, educational settings for formal and informal learning, and health and wellbeing applications.

The $8 million joint research centre between Microsoft Research, the University of Melbourne and the State Government of Victoria is funded for three years.

It will explore how such technologies can enable new forms of social and collaborative behaviours, including how people communicate, play, learn and work together in different settings – in the home, the workplace, in education, health and public spaces.

The centre and its 28 dedicated research staff will join the 13 existing Microsoft Research labs and centres around the world, including Cambridge, Beijing, Bangalore, Cairo, Aachen, Israel and the Redmond headquarters. The program will also offer internship and exchange opportunities for PhD students between this centre and the others around the world.

Although research has not yet officially begun at the centre, a not-yet-published NUI research project from the Cambridge lab was demonstrated as an example of the kind of work that might take place here.


The video demonstrated a gesture add-on for Windows 8 that used hand movements above the keyboard to quickly open the Start menu, peek at and pin applications, and search.

It achieved this with a wall-mounted Kinect sensor unconventionally pointed downwards at the keyboard. The depth sensor can distinguish whether a hand is resting on or hovering above the keyboard (something traditional camera sensors cannot do alone).
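The research prototype itself is unpublished, but the core depth trick is easy to picture. The following minimal sketch, under assumed calibration values and with the hand segmentation taken as given, shows how a downward-facing depth frame alone can separate a resting hand from a hovering one:

```python
import numpy as np

# Illustrative sketch only: the calibration distance and hover margin are assumptions,
# and hand segmentation (hand_mask) is assumed to come from elsewhere.
KEYBOARD_PLANE_MM = 900   # assumed calibrated sensor-to-keyboard distance
HOVER_MARGIN_MM = 40      # a hand this much closer to the sensor counts as hovering

def hand_state(depth_mm: np.ndarray, hand_mask: np.ndarray) -> str:
    """Classify the segmented hand region as 'hovering', 'resting' or 'absent'.

    depth_mm:  2D array of per-pixel distances (mm) from the downward-facing depth sensor.
    hand_mask: boolean 2D array marking pixels that belong to the hand.
    """
    if not hand_mask.any():
        return "absent"

    # Median depth of the hand pixels, so a few noisy readings don't flip the decision.
    hand_depth = np.median(depth_mm[hand_mask])

    # A hovering hand is meaningfully closer to the sensor than the keyboard plane;
    # a resting hand sits right at the plane - a distinction an RGB camera alone can't make.
    if hand_depth < KEYBOARD_PLANE_MM - HOVER_MARGIN_MM:
        return "hovering"   # e.g. trigger the gesture overlay
    return "resting"        # normal typing, ignore gestures

# e.g. hand_state(frame_mm, mask) -> "hovering"
```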

I look forward to the cool research projects that will come out of the Melbourne centre in the years to come.