Every time I get to the emoji keyboard, I curse at the “switch keyboard” button.
Just a year after Apple introduced the iPhone, at the very start of the mobile platform wars, Microsoft announced it had acquired Danger Inc. Six years later, people barely remember the acquisition, much less the brand and technology that came with it.
Chris DeSalvo, who worked at Danger, later worked on Android at Google, and is now at Voxer, wrote a very insightful blog post on the long and winding history of Danger from the early 2000s, when its product was a keyfob with an LCD screen. It’s a great read for anyone interested in the history of mobile platforms.
I came across a website whose purpose was to provide a super detailed list of every handheld computing environment going back to the early 1970s. It did a great job except for one glaring omission: the first mobile platform that I helped develop. The company was called Danger, the platform was called hiptop, and what follows is an account of our early days, and a list of some of the “modern” technologies we shipped years before you could buy an iOS or Android device.
His back-of-the-napkin math showed that for about the same cost as building out and maintaining this doomed nationwide FM data network we could instead do the R&D on a two-way data device hosted on GSM cellular networks. The data service on those networks was called GPRS, bleeding edge stuff at the time. This was awesome!
Tons of inputs—being power users of our desktop computers we wanted lots of inputs and lots of ways to tie them together to do extra stuff. We had a 1D roller controller that was also the main action button (later replaced with a 2D trackball), a 4-way d-pad (for games and such), three buttons on the corners of the face of the device (menu, jump, cancel). There was also a full QWERTY keyboard with a dedicated number row. You could chord the menu button with keyboard keys to perform menu actions (cut/copy/paste, etc), or with the jump button to quickly switch between apps. We’d later add two top-edge shoulder buttons, an “ok” button, and dedicated buttons for answering and hanging up phone calls. Written out like that it sounds like a lot, but you quickly got used to them, and they allowed you to do a lot of complicated actions without ever having to look at the screen.
We did a demo once at a trade show where we had someone in the audience give us a quote. Our presenter typed the quote into a hiptop and then put it on the ground and dropped a bowling ball on it. The hiptop was destroyed. He then removed the SIM card, plugged it into another hiptop, signed into the same account and seconds later there was the Notes app with the quote fully restored. Much applause.
Around 2005 there was a skunkworks project within Danger to merge a color Gameboy with a hiptop—we called it G1.
We extracted a Gameboy Advance chipset and built it on to the backside of the hiptop’s main board. We then developed a custom chip that would let us mix the video signals of the Gameboy and the hiptop so that on a per-pixel basis we could decide which to show on the screen. We made hiptop software that would let us start and stop the Gameboy, or play/pause a game, etc. The Gameboy inputs came from the hiptop’s d-pad and four corner buttons.
For a company that pivoted so many times and came up with the wildest ideas at each turn, it’s perhaps fitting that its run ended with the Microsoft Kin.
P.S. The top image comes courtesy of the Microsoft Careers site which still has a reference to “Microsoft Danger Mobile”.
Xbox Music today quietly launched the Xbox Music for Developers program, which allows apps and websites to utilise Xbox Music’s APIs for music-related tasks and to upsell Xbox Music subscriptions for a nice affiliate commission.
The API is still in its very early stages and currently only exposes a REST endpoint for basic search and metadata queries, but it does allow for deep-link generation, which can take users to hear and purchase music from Xbox Music on the web, Windows Phone, Windows 8 and other platforms.
These deep-links can also be tied to an affiliate code that will generate a revenue share every time a user clicks through. Microsoft also provides an “Available on Xbox Music” badge for developers to use.
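To illustrate the mechanism, here’s a minimal sketch of tying an affiliate code to a deep link. Note the base URL format and the `affiliateId` parameter name are assumptions for illustration, not the documented Xbox Music API format.

```python
from urllib.parse import urlencode

# Hypothetical sketch: append an affiliate code to an Xbox Music deep link.
# The parameter name "affiliateId" is an assumption, not the real API's name.
def build_deep_link(base_link: str, affiliate_code: str) -> str:
    separator = "&" if "?" in base_link else "?"
    return base_link + separator + urlencode({"affiliateId": affiliate_code})

link = build_deep_link("https://music.xbox.com/Album/XYZ", "my-affiliate-id")
# -> "https://music.xbox.com/Album/XYZ?affiliateId=my-affiliate-id"
```

The service would then attribute any purchase or subscription made after the click-through to that affiliate code for the revenue share.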
Other music services like Spotify and Rdio also offer APIs for developers, but additionally allow music playback to be integrated into mobile apps and websites, which is extremely useful for apps like the ones my startup is developing. I can only assume that will also be the case for Xbox Music in the future.
The Xbox team held an Xbox One global launch party on the oval at the Redmond headquarters with a lot of fireworks.
The 4-minute spectacle is now available for viewing from a pretty kickass vantage point high up in the sky, thanks to an HD camera mounted on top of a quad-copter flying 150 feet (45 metres) off the ground.
(Image above credit microsoftlife / Instagram)
So I received an Explorer invite to Google Glass last week, and I took the opportunity to gift myself a Christmas present.
I’ve tried Google Glass before for just a couple of minutes, but trying it is always a different experience from living with it. Here are my impressions from wearing it every day over the last five days.
Hardware and usability
OS software & user experience
I have no doubt that if Google released the current hardware and software (ecosystem) to the public, it would flop. The display fidelity, frame design, bone conduction speakers, battery life and iOS compatibility all need to be greatly improved before it can be considered a practical day-to-day tool.
Having said that, the opportunities for developers are bountiful and the GDK and Mirror APIs are some of the most approachable developer platforms I’ve seen.
And in my experience no one on the street, train or bus really cares. I’ve made it a habit to use mine primarily outdoors and take it off when I’m meeting a person face-to-face.
Microsoft Research has landed down under!
Just a kangaroo hop away from the Melbourne city center, in the suburb of Parkville, the University of Melbourne campus is now home to a Microsoft Research center dedicated to developing new social interactive technologies.
The Microsoft Centre for Social Natural User Interface is “quite a mouthful”, as noted by the University of Melbourne’s Deputy Vice Chancellor of Research, and as such it’s abbreviated as SocialNUI. The State of Victoria Minister for Technology also joked that “a big challenge of setting up the center is the name”.
Once you look past the buzzword-filled name, the center’s focus is on natural user interface technologies that include and often combine voice, gesture, eye, body and touch inputs like those found in phones, tablets and devices like Xbox Kinect for innovative new social uses and applications.
Microsoft Research Vice President Dr. Tony Hey notes there are four main areas of NUI research: private spaces such as the family home, public spaces such as parks and gatherings, educational scenarios for formal and informal learning, as well as health and wellbeing applications.
The $8 million joint research center between Microsoft Research, the University of Melbourne and the State Government of Victoria will be funded to operate for three years.
It will explore how such technologies can enable new forms of social and collaborative behaviours, including how people communicate, play, learn and work together in different settings – in the home, the workplace, in education, health and public spaces.
The center and its 28 dedicated research staff will join the 13 existing Microsoft Research labs and centers around the world, including Cambridge, Beijing, Bangalore, Cairo, Aachen, Israel and the Redmond headquarters. The program will also offer internship and exchange opportunities for PhD students between this center and the others around the world.
Although research has not yet officially begun at this center, a not-yet-published NUI research project from the Cambridge center was demonstrated as an example of the kind of work that might take place here.
The video demonstrated a gesture add-on for Windows 8 that allowed the use of hand movements above the keyboard to quickly open the start menu, peek at and pin applications, and search.
It achieved this with a wall-mounted Kinect sensor unconventionally pointed downwards at the keyboard. The depth sensor makes it possible to distinguish whether a hand is resting on the keyboard or hovering above it (something traditional camera sensors cannot do alone).
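The core idea is simple: pixels belonging to a hovering hand are measurably closer to a downward-facing sensor than the keyboard surface. A minimal sketch of that classification, with entirely assumed depth values and thresholds:

```python
# Sketch of resting-vs-hovering hand detection from a downward-facing
# depth sensor. All distances and thresholds here are assumptions for
# illustration, not values from the actual research project.

KEYBOARD_DEPTH_MM = 900  # assumed sensor-to-keyboard distance
HOVER_MARGIN_MM = 30     # hand must be this much closer to count as hovering

def classify_hand(depth_pixels_mm):
    """Return 'hovering' if most hand pixels sit above the keyboard plane."""
    raised = [d for d in depth_pixels_mm if d < KEYBOARD_DEPTH_MM - HOVER_MARGIN_MM]
    return "hovering" if len(raised) > len(depth_pixels_mm) // 2 else "resting"

print(classify_hand([780, 790, 800, 810]))  # hand well above the keys
print(classify_hand([895, 900, 905, 898]))  # hand at keyboard depth
```

An RGB camera alone sees a hand over the keys in both cases; only the per-pixel distance lets the system tell the two states apart, which is why the Kinect's depth sensor was the key ingredient.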
I look forward to what cool research projects will be coming out of the Melbourne center in the years to come.