Found this amazing Microsoft story via Twitter, told first-hand by David Auerbach, a former Microsoft developer who worked on the MSN Messenger Service team in 1998. It’s a sleuthing story of back-and-forth reverse engineering by Microsoft and sabotage by AOL, all for one simple feature: chatting with AOL Instant Messenger friends inside MSN Messenger.
After we finished the user part of the program, we had some downtime while waiting for the server team to finish the Hotmail integration. We fixed every bug we could find, and then I added another little feature just for fun. One of the problems Microsoft foresaw was getting new users to join Messenger when so many people already used the other chat programs. The trouble was that the programs, then as now, didn’t talk to one another; AOL didn’t talk to Yahoo, which didn’t talk to ICQ, and none of them, of course, would talk to Messenger. AOL had the largest user base, so we discussed the possibility of adding code to allow Messenger to log in to two servers simultaneously, Microsoft’s and AOL’s, so that you could see your Messenger and AIM buddies on a single list and talk to AIM buddies via Messenger. We called it “interop.”
Our client took the surrounding boilerplate and packaged up text messages in it, then sent it to the AOL servers. Did AOL notice that there were some odd messages heading their way from Redmond? Probably not. They had a hundred million users, and after all I was using their own protocol. I didn’t even send that many messages. My program manager and I thought this little stunt would be deemed too dubious by management and taken out of the product before it shipped. But management liked the feature. On July 22, 1999, Microsoft entered the chat markets with MSN Messenger Service. Our AOL “interop” was in it.
So I took the AIM client and checked for differences in what it was sending, then changed our client to mimic it once again. They’d switch it up again; they knew their client, and they knew what it was coded to do and what obscure messages it would respond to in what ways. Every day it’d be something new. At one point they threw in a new protocol wrinkle but cleverly excepted users logging on from Microsoft headquarters, so that while all other Messenger users were getting an error message, we were sitting at Microsoft and not getting it. After an hour or two of scratching our heads, we figured it out.
The messenger war was a rush. Coming in each morning to see whether the client still worked with AOL was thrilling. I’d look through reams of protocol messages to figure out what had changed, fix the client, and try to get an update out the same day. I felt that I was in an Olympic showdown with some unnamed developers over at AOL. I had no idea who my adversaries were, but I had been challenged and I wanted to win.
Web designers rejoice. The jaggies are gone!
Fonts in Chrome on Windows no longer look like they’re from the XP era. Thanks to a new experimental flag added to the latest version of the Google Chrome beta (35.0.1916.27 beta-m), Chrome is finally rendering fonts with the advanced DirectWrite font rendering engine.
For a before-and-after comparison, here’s one I compiled using the Windows.com website. The effect is most noticeable on big curves, like the “a” and the bends of the question mark.
Simply open the Chrome experiments page at chrome://flags/#enable-direct-write, click “Enable DirectWrite”, then relaunch Chrome for the change to take effect.
Missing the train by just a minute is mildly frustrating. Every time I see the train roll away from the platform as I walk towards the station, I wonder what if I had walked a little bit faster or paced up the escalator. (This dilemma keeps me up at night.)
Of course there are plenty of cool mobile apps with public transport timetables, but pulling out a phone and fiddling about while I’m walking is more trouble than it’s worth.
After two nights of hacking, I can now access Melbourne’s bus, train and tram timetables while I’m walking to the station with voice and geolocation, thanks to my Google Glass app PTVGlass! I’ll be the Glasshole shouting “trams near me” in the city.
Download the source code
Unfortunately I know the number of Google Glass Explorers in Melbourne will be in the single digits so this app has very little commercial value. I’m hoping someone can critique my code or be inspired to create more Google Glass apps.
Falling in love with Xamarin
A couple of weeks ago, the state government agency Public Transport Victoria released the first version of their Timetable API. I knew I just had to make an app for Google Glass. Except for one problem: I didn’t know Java or how to write Android apps.
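For context on what an app like this has to do, the Timetable API authenticates each request with a developer ID and an HMAC-SHA1 signature calculated over the request path. Here is a minimal sketch of that signing step (in Python for brevity; the endpoint path, devid and key below are made-up placeholders, and the exact scheme should be checked against PTV’s API documentation):

```python
import hashlib
import hmac

def sign_request(path, dev_id, key):
    """Build a signed Timetable API URL.

    The signature is the HMAC-SHA1 of the request path (including the
    devid query parameter), hex-encoded and appended as a query parameter.
    """
    raw = f"{path}{'&' if '?' in path else '?'}devid={dev_id}"
    signature = hmac.new(key.encode(), raw.encode(), hashlib.sha1).hexdigest().upper()
    return f"https://timetableapi.ptv.vic.gov.au{raw}&signature={signature}"

# Hypothetical credentials and endpoint, for illustration only
url = sign_request("/v3/stops/location/-37.8183,144.9671", "1234567", "secret-key")
```

The nice part of this scheme is that the device never sends the key itself over the wire, only a signature that is valid for one specific request path.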
Unlike my previous Google Glass apps built on the Mirror API, which lets web developers deliver apps in the form of Glass push notifications, this particular scenario required a native GDK app that could be initiated through voice and menu commands.
I made a good effort to quickly get my head around Android development, but the tooling and documentation were simply sub-par compared to C# and .NET development. I’m sure the Glass Development Kit would be a piece of cake for any Android developer, but learning Java, Android and the GDK all at once was just too much.
In an amazing coincidence, rumors of Microsoft acquiring Xamarin floated around the same time. I wasn’t entirely familiar with the Xamarin product, but seeing that it supported developing for Google Glass using C# and Visual Studio piqued my interest.
Even as a C# noob, I could appreciate the straightforwardness of C#. (Objective-C makes me want to cry.)
Although I was skeptical of Xamarin Studio at first, I was pleasantly surprised by the capabilities of the tooling. It was like a Visual Studio Lite: IntelliSense for the full Android SDK, multi-threaded breakpoints, variable explorers, and integrated device deployment and debugging.
I have to admit the only weak aspect of developing with Xamarin is the documentation. Xamarin tries its best in its developer guides to provide an overview of the fundamentals in C#, but this is obviously no match for the official documentation in the native language. Surprisingly though, Stack Overflow was quite saturated with Xamarin questions and answers, which helped with the small quirks here and there.
Unfortunately the current Xamarin licensing cost (from $299 up to $999 a year) would scare off most hobbyist developers. If the rumors of Microsoft’s acquisition are true, I certainly look forward to Microsoft opening up access to this amazing technology and tooling for more developers via MSDN and the Visual Studio Express programs.
Microsoft already has the best developer tools on Windows. Now imagine if their tools were also the best in class for the competing but dominant mobile platforms.
In early 2012 it seemed like everyone jumped on the Kickstarter bandwagon. At TechCrunch Disrupt San Francisco 2012 I was introduced to a project called Memoto, a miniature camera small enough to clip onto your clothing that automatically takes photos of your day-to-day life: lifelogging.
At the time it seemed like a diamond in the rough (remember, this was before Google Glass was announced). It blew past its $50,000 goal, raising over $550,000 from backers. Fast forward to today: after a year of production delays and a company rebrand, the Narrative Clip has finally started shipping.
The Narrative Clip takes the wear in wearables quite literally. Inside a plastic case about half the size of a business card is a 5MP camera, a GPS, an accelerometer, a magnetometer and 8GB of memory. It’s a good example of minimalist Scandinavian design and engineering that weighs less than a pack of gum.
A small pinhole opening on the front side is the camera lens. A metallic clip at the back allows the device to grip onto most pieces of clothing: shirts, pockets, hats and anything with an edge.
Since there are no buttons on the device, a 4-light LED indicates battery life and a microUSB port (with a rubber cover) handles charging and syncing. The front surface is touch sensitive, so you can double-tap to force it to take a photo and show battery life. Without a power button, placing the Clip face down or in a totally dark place (like a bag) puts it in sleep mode.
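Presumably the firmware combines its sensors to decide when to nap. This is not Narrative’s actual logic, but a heuristic of the “dark or face down means sleep” behaviour might look something like this sketch:

```python
def should_sleep(ambient_light, accel_z, light_threshold=0.05, gravity=9.8):
    """Hypothetical sleep-mode heuristic for a clip-on camera.

    ambient_light: normalised brightness reading, 0.0 (pitch black) to 1.0
    accel_z: acceleration along the axis pointing out of the lens (m/s^2);
             assumed to read roughly -gravity when the lens faces down.

    Sleep when it's too dark to take useful photos (inside a bag), or
    when the lens is pressed against a surface (placed face down).
    """
    in_the_dark = ambient_light < light_threshold
    face_down = accel_z < -0.9 * gravity
    return in_the_dark or face_down
```

The axis convention and thresholds here are assumptions for illustration; the point is simply that no power button is needed when the sensors already tell you nothing worth photographing is visible.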
Although you’re supposed to wear a wearable, the Clip can also stand vertically on any of its four sides, and the user guide actually encourages using the camera as a time-lapse tool (to capture the clouds outside a window, say) as a neat secondary use for the device.
The Clip automatically takes a photo every 30 seconds (the ability to change the interval is coming in a future firmware update). Needless to say, the camera in a lifelogging wearable is pretty important, but unfortunately the camera in the Clip leaves a lot to be desired.
The most significant issue is the field of view. At just 70 degrees, the photos capture barely half of what the human eye sees (approximately 120 degrees). The camera on Google Glass, on the other hand, is fantastic, with a wide-angle lens (there’s no official FOV figure, but I’d estimate around 100–120 degrees).
One redeeming factor of the camera is the ability to automatically correct the tilt of each photo. Due to the flexible design of the Clip, in a lot of scenarios it will be attached to clothing at an angle, and therefore also taking photos at an angle. Of course, photos at an angle don’t make for stunning pictures.
Thanks to the sensors built into the device, each photo carries metadata which the Narrative cloud servers process to adjust the rotation of each photo so it is more consistently levelled with the horizon.
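The correction itself is straightforward trigonometry: the accelerometer’s gravity vector tells you how far the camera was rolled off vertical, and rotating the image by the opposite angle levels the horizon. A rough sketch of the idea (not Narrative’s actual algorithm):

```python
import math

def roll_angle_degrees(accel_x, accel_y):
    """Estimate camera roll from the accelerometer's gravity vector.

    accel_x and accel_y are the components of gravity in the image plane
    (x to the right, y downward, in m/s^2). For a perfectly level camera,
    gravity points straight down the y axis: (0, g) -> 0 degrees of roll.
    """
    return math.degrees(math.atan2(accel_x, accel_y))

# A photo whose gravity reading is (g*sin(10 deg), g*cos(10 deg)) was
# rolled about 10 degrees; rotating the image by -10 degrees levels it.
```

The axis convention is an assumption for illustration; the real pipeline would also need to crop the rotated image back to a rectangle, which is presumably part of why Narrative does this server-side.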
The Clip is actually pretty useless without two companion apps: an uploader on a computer and a viewer on the phone.
The uploader program for Windows and OS X allows the Clip to sync its photos to the computer, the cloud or a combination of the two.
The Narrative Cloud is free for all users in the first year (and $9/month after that). Due to the amount and size of the photos you would take day-to-day, the cloud is actually the only reasonable option if you intend to keep all your captures. The Narrative Cloud is also required for the photo post-processing features like tilt-correction, date grouping and location grouping.
Users with slow or limited upload bandwidth are not going to enjoy the fact that you could be uploading gigabytes of photos every few days.
The cloud service tries its best to automatically select a number of photos in each group that it thinks are interesting and represent key “moments” of the day. Whilst this trims browsing from hundreds of photos down to just a handful, the algorithm is a bit hit and miss.
Each moment (trimmed or in full) can also be played back like a time-lapse, but unfortunately there’s no way to export a video or animated GIF of this. You can share individual photos to Facebook, Instagram, Twitter or email.
Wearing a Narrative Clip is somewhere between a watch like the Pebble and a headset like Glass. It’s not as obvious as a display and camera on your head, but someone directly in front of you will definitely still notice a little black box around your neck.
I believe the arguments against wearables for their ability to discreetly take photos and video are invalidated by the fact that video-recording glasses and pens are already readily available on the market, at much more affordable prices and with far more discreetness. However, I do recognise that people behave differently when they are aware they are being recorded, which changes the dynamics of social gatherings.
For that reason I think the Narrative Clip is actually a worse offender than Google Glass, since the Clip is entirely passive. As long as it is worn, it is always recording at a set interval without any interaction or control, until it is put in a bag or placed face down. A device like Glass, on the other hand, is rarely “on” and only takes photos on command.
The funny thing is that I actually felt uncomfortable wearing the Clip myself.
If you’re willing to upload gigabytes of nondescript photos day after day with the anticipation that you might have captured an interesting Kodak moment while out and about, the Narrative Clip is the gadget to check out.
Wearable technologies have come a long way since 2012 and the Narrative Clip is disappointing at $279. Its simple lifelogging functionality only scrapes the surface of what I have come to expect of a device to be worn day-in day-out.
I don’t know what the hell is going on over at the Windows Phone Store, but I believe the following screenshots encapsulate everything that is wrong with Microsoft’s app strategy and approval process for the Store.
Blatant artwork theft, copyright/trademark infringement and deceptive conduct by “developers” who spam the Store with website-wrapper apps. (A problem Microsoft has only itself to blame for, having submitted many unauthorised website-wrapper apps on behalf of popular brands.)
The following screenshots were taken on 15 March 2014 at 10pm, searching for “Facebook” on the Australian Windows Phone Store. The first result, with 4½ stars and over a thousand ratings, is not the official Facebook app. In fact, none of the first 49 results are the official Facebook app (except the second result, Messenger, which is official but not the app we’re looking for).
The circled app is the official app. Very obvious right?
The same ridiculous results are also displayed on the phone.
Thankfully the U.S. Windows Phone Store does not seem to exhibit the same ranking problem for the official app, though it’s still full of filth from the third result onwards.
I can’t begin to imagine the experience of a customer who has just walked out of an Australian mobile phone shop with a new Windows Phone device and tries to download Facebook from the Store.
I suspect there’s some serious app-review manipulation at play here, probably involving bots, artificially bumping malicious apps (and possibly demoting the official app).
With all the management musical chairs happening at Microsoft over the past few months, I’m beginning to wonder if anyone is actually still in charge of the Windows Phone Store.