Microsoft College Tour 09: mindblowing natural user interface concept demos from Microsoft Research

[flv:collegetour1.f4v 655 352]

It turns out 2019 is getting closer every day. At the moment, Microsoft’s chief research and strategy officer Craig Mundie is doing the rounds at a number of prestigious colleges in the States showing off Microsoft’s vision for technology to solve the world’s biggest problems. Of course, one must use the latest in natural user interfaces for this task.

A feature of this year’s tour appears to be a next-generation computer – one that docks and undocks from a transparent glass display and allows for not only pen and voice input, as you’ve come to expect from natural user interfaces, but also touchless gestures and eye-tracking to interact with the information at hand.

Personally I’ve never seen eye-tracking used as an input before, but after seeing this demo, it makes so much sense to skim vast amounts of information with your eyes.

I’ll let these two videos do the rest of the talking.

[flv:collegetour2.f4v 655 352]

With any fancy prototype it’s usually very difficult to distinguish just how much of the demo is real and how much is simulated – either by timers or by remote control – but knowing Microsoft Research and what they’re capable of, I’m willing to bet it’s all real.

On a related note, it appears Microsoft’s new vision is a glass display on every desk. Time to get into the window cleaning business, perhaps?

73 insightful thoughts

  1. That shot in the second video, where the image comes up onto the glass, is amazing.
    Slightly similar concept to 10gui.com to prevent arm-ache when using a touchscreen monitor.

    I wonder if Windex shares have gone up?

  2. It looks fantastic but is anyone else a little bit fed up with seeing “concepts” from Mundie’s team(s) rather than actual products? The only one I can really think of to come through in the last few years was Surface.

    Another recent one was Yusuf Mehdi’s demo earlier this year: http://www.ditii.com/2009/09/28/microsoft-next-gen-office-wall-prototype-demo/ . This stuff really doesn’t feel any closer to coming to fruition, y’know!

    -Jamie

  3. Jamie, you have to understand that much of the work Mundie’s researchers come up with ends up embedded in other products rather than shipping as standalone products. His teams came up with a lot of the features in Aero and Glass in Win7. Project Natal, the Wii-like (but better) interface for Xbox (and more), came out of MS Research.

    RoundTable, the 360-degree video microphone for conference calls, which MS sold off to another company, was another MS Research development.

  4. Regardless of whether or not this is fake, I’m SUPER pleased that this issue is being raised and worked on. The entire idea of controlling a computer with a mouse, while nice because it is accurate, is also at times very slow. Imagine being a 3D artist, and being able to manipulate your 3D object with gestures as if it were sitting right there in front of you. Or imagine organizing the files/folders on your computer the same way you would if you had an actual filing cabinet in front of you. It only makes sense to move in this direction, and I’m so glad this is finally being worked on. I have envisioned this (I keep thinking of Minority Report) for a long time, and I think it is entirely feasible and should be done.

  5. @John: It isn’t fake. Look closely and you’ll see a projector behind the glass (in the table). Looking at the scene from above: in front is the projector, then the thin glass, then his keyboard and finally him. You can also see that the projection is worse at the top end of the glass screen while it is much better at the bottom end.

  6. I may be wrong, but I am not convinced that either of these demos will prove to be useful.

    For the first, suppose I focus on a document, and the computer automatically zooms in on it. Now, if I move my eye to the top left of the magnified window, how is the computer going to figure out whether I want to take a closer look at the zoomed-in-upon document or whether I want to look at a different document? The demo makes it look like the system will zoom in on another document as soon as one’s eye leaves the area where the unzoomed document was. If so, it will be as good as impossible to see more from the zoomed-in document than from the stamp-sized picture.

    For the second, using outstretched arms for anything but occasional UI would get tiring really, really fast. I would rather have, for example, some gadget in a pocket that let me do this 3D pan/zoom/etc. stuff.

    In my opinion, these make for nice demos, but the things they show would be lousy products.
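    To make the first objection concrete: a gaze UI would need some dwell-time and hysteresis logic to decide when a glance means “zoom” versus “look elsewhere”. Here’s a rough sketch of that idea in Python – all names and thresholds are made up for illustration, not anything from the demo:

```python
# Hypothetical sketch of dwell-time + hysteresis gaze zooming.
# Thresholds and names are illustrative assumptions only.

DWELL_TO_ZOOM = 0.8    # seconds of fixation before zooming in
EXIT_MARGIN = 0.25     # zoomed region is "sticky" by 25% per side

class GazeZoomer:
    def __init__(self):
        self.zoomed = None       # currently zoomed document, if any
        self.candidate = None    # document the eye is dwelling on
        self.dwell = 0.0         # accumulated fixation time (seconds)

    def update(self, gaze_xy, dt, documents):
        """documents: list of (doc_id, (x0, y0, x1, y1)) screen rects.
        Returns the id of the zoomed document, or None."""
        # While zoomed, only unzoom when the gaze leaves an *enlarged*
        # rect, so glancing at the corner of the page doesn't unzoom it.
        if self.zoomed is not None:
            if self._inside(gaze_xy, self._expand(self.zoomed[1])):
                return self.zoomed[0]
            self.zoomed = None

        hit = next((d for d in documents if self._inside(gaze_xy, d[1])), None)
        if hit is None or hit != self.candidate:
            self.candidate, self.dwell = hit, 0.0   # new fixation target
            return None
        self.dwell += dt
        if self.dwell >= DWELL_TO_ZOOM:             # dwelled long enough
            self.zoomed = hit
            return hit[0]
        return None

    @staticmethod
    def _inside(p, rect):
        x, y = p
        x0, y0, x1, y1 = rect
        return x0 <= x <= x1 and y0 <= y <= y1

    @staticmethod
    def _expand(rect):
        x0, y0, x1, y1 = rect
        mx, my = (x1 - x0) * EXIT_MARGIN, (y1 - y0) * EXIT_MARGIN
        return (x0 - mx, y0 - my, x1 + mx, y1 + my)
```

    Even with the sticky margin, the commenter’s point stands: the margin can’t grow much before it starts swallowing the neighbouring stamp-sized documents.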

  7. I agree with “Someone” – great eye candy, but completely out of touch with solving real usability problems. How does making you wave your arms around to do something you can currently do by rolling a scroll wheel on a mouse qualify as an improvement? Good for museums and stuff like that, crappy for 99% of computer/human interaction models.

  8. I was especially interested to see that Microsoft is looking at eye tracking as a user interface. I have an eye tracker mounted on my monitor and I use it every day. It replaces the mouse for most pointing actions. It is also fast and intuitive, but a little less accurate than the hand mouse.

    Full disclosure: I am president of EyeTech Digital Systems, the manufacturer.

  9. @ToddF: It does look a lot like eye candy now, but the important thing is that people are thinking about alternate ways to present and interact with data. Even if Microsoft doesn’t get it right, someone else will see this and think “Hey, I know how to make this work.”

  10. The good thing about “eye candy” is that it gives you something to wait for. The bad thing about “eye candy” is that it makes you wait forever. I have no doubt whatsoever that this is cutting-edge technology. But getting it out of the labs and to the general public is not only up to the business developing it, but also a matter of political decision: “What can we release that will make our competition look bad?”

    Last year they announced Project Natal, but I have yet to see it anywhere close to implementation for the public. Before that, it was the Surface. There are some out in selected markets. I haven’t seen one anywhere except online. As long as the bean counters call the shots, this stuff is still cutting edge SF.

    Sorry, Microsoft, you’ve let me down on Project Natal. How am I supposed to keep the faith in pie in the sky handwavium? When I can try it myself, I’ll be impressed.

  11. Good video, but what’s the point of projecting things on glass like this? I mean, you see it in this video, you see it happening in tons of movies and TV shows, but I can’t believe it helps me work any better. I’m almost sure I can see the content on that glass better when I can’t look through it. Why do I need to learn those hand gestures, and hold my arms up that high, when pressing a small button on my keyboard in 0.008 seconds does the same thing? Sure, it’s useful when you’re not able to have a keyboard and don’t want to touch your device, but other than that I fear they are working a lot on eye-candy stuff that doesn’t make any sense for business users.

    Other than that, how are those maps really useful to me? What’s the point of showing me thousands of boxes? Have you ever googled/binged through thousands of pages? Give me 10 to 50 very good results and I’ll be so much happier.

    In my opinion I’m a bit disappointed with both videos. They don’t look like anything useful to me, aren’t built on real scenarios, and seem made for eye-candy purposes rather than improving the stuff we work with today.

  12. Joey1058 — When MS announced Natal it was demoing it and releasing it to developers. At the time MS said it would be spring/summer 2010 before anything hit the consumer market.

  13. Although I believe this to be real as well, it is highly unlikely that we will ever see it. Not to mention I don’t believe this would work in your typical office space, given that most follow the form factor of cubicles, so this type of technology would be left for executives and “higher-ups” (management). I am pleased that MS is willing to push the boundaries of the modern PC, but I have seen too many demos from the MS research group that have never come to fruition. Mind you, this leaves me “warm and fuzzy inside”, but it is highly doubtful that it will ever come. Phodeo, MediaBrowser, TimeQuilt and PhotoTriage were almost all based on the same idea – grouping your media together by way of metadata – and this demo goes a bit further than that with the fancy display and such.

    I also wonder if this is “real”. It very well could be, but at the same time MS has shown lots of fancy slideshow/Macromedia demos in the past showing off technology they weren’t capable of, or things that never came to be.

    As Tom mentioned above, I would find being able to look through my display very distracting, and it would blow “privacy” out the window. Not to mention, if this is what the PC experience will look like in 2019, showing it now is a bit premature, don’t ya think? lol

    peas
    cityboy

  14. It is interesting to see more of these types of interfaces designed to work around people instead of having people work around the limitations of the machine. Where these interfaces could truly make a difference is in the case of those who have mobility issues. An interface with more advanced eye tracking capabilities would certainly be of significant benefit to them.

  15. Personally I think this kind of technology is amazingly cool and futuristic.

    I guess the question then becomes how useful will it actually be. Having cool looking stuff is nice but if the end user experience isn’t improved then there isn’t much point to it.

    I hope Microsoft is thinking about this as they design these next-gen computers.

  16. While very cool, this is a contrived example. The big problem with NUIs is making them generic (think mouse, window, button, radio button). A NUI generally has to be written for every possible use-case, and it’s far more complicated than slapping a few buttons onto a window. If they can crack that barrier they might actually have something *really* special.

  17. This isn’t that far off.

    Metadata sorting is quite feasible – the question is how good is the metadata and how much data are you looking at. An existing application that uses a similar approach is WinDirStat (http://windirstat.info/).

    The glass surface is required in order to be able to read the gestures. I suspect it uses the MS Surface technology that alternately projects dual images (to surface and through surface to optional dynamic secondary surface) and senses through the surface. It just takes the glass off of the table.

    Gesture interfaces are happening – the iPhone uses them extensively. The demo just takes the multi-touch concept (built into Windows 7 AFAIK, and available as an add-on to Vista(?)) to through-the-glass sensors. The hardware cost needs to come down before it gets wide distribution.

    I enjoyed playing with the MS Surface last year, and it is really great – at least for some things. For typing-intensive tasks, maybe less so. But then a mouse is more of a drag than pointing. Once you try it, you won’t quickly forget. Very cool indeed.
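    At its core, the metadata sorting in the demo boils down to clustering items on whatever tags they carry, with the biggest clusters drawn most prominently. A rough Python sketch of that idea – field names and sample data are made up for illustration:

```python
# Minimal sketch of metadata-driven grouping, the idea behind the
# demo's clustered "boxes" view (and treemap tools like WinDirStat).
# The "place" field and sample photos are illustrative assumptions.
from collections import defaultdict

def group_by_metadata(items, key):
    """Cluster items by a metadata field; unknowns go to 'untagged'."""
    groups = defaultdict(list)
    for item in items:
        groups[item.get(key, "untagged")].append(item)
    # Largest clusters first, as a zoomable view would draw them.
    return sorted(groups.items(), key=lambda kv: len(kv[1]), reverse=True)

photos = [
    {"name": "beach.jpg", "place": "Durban"},
    {"name": "turbine.jpg", "place": "Copenhagen"},
    {"name": "sunset.jpg", "place": "Durban"},
    {"name": "scan.jpg"},  # no metadata at all
]
```

    As the comment says, the hard part isn’t the grouping – it’s how much of your data actually carries good metadata, and how the untagged remainder is presented.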

  18. Very cool, especially the flicking of the wind turbine onto the glass screen. However I still have my doubts on speech vs clicking. We think faster than we speak, so the mouse is still more productive in my opinion because you can work faster with it, clicking vs talking.

  19. If I have to have a big piece of glass in front of me, between myself and a client, why not just project the display onto a wall? It would display much bigger and could still have all the functionality. If you’re going to create something, get rid of the glass and create a 3D hologram that I can work with. Then I could spin the object around and zoom in or out on it. You could display the hologram in the middle of a conference table, and it could be the center of focus, as your display should be. You could still control it from your computer while giving your sales pitch, whatever it may be.

  20. It is the direct manipulation of the displayed image that makes it so powerful. It is one of those things you just have to experience.

    Today’s technology requires the glass (it’s not just plain glass) and uses a rear projection and through the glass sensing. Holograms are still a ways off – at least for mainstream application.

  21. Yeah!
    Eye tracking really kicks ass! It is already in use in quite a few areas. A fantastic application can be seen here: http://www.youtube.com/watch?v=8QocWsWd7fc&feature=player_embedded (made with a Tobii Eye Tracker).

    In the area of assistive communication for disabled people it is already working pretty well – even with a Windows system ;-) Companies like Tobii Technology provide modular systems with different possibilities for interaction: touch screens, eye tracking, voice…

    The only thing is: everyone wants to control the computer their own way. Some want to use the cool projection stuff, others the eye-tracking things, and other people again just want to keep the good ole mouse…
    Intuitive interaction is only possible if the computer is intelligent enough to understand what you want and when you want it. But it would just suck if you stretched in front of the computer and it started doing weird stuff. Or if you were talking to your colleague and the computer started googling things…

