Microsoft's Applied Sciences Group has posted a few videos of a project called "The Wedge: Seeing Smart Displays Through A New Lens." It is very interesting user interface work along the same lines as Kinect for Xbox, where user motion and gestures are captured via optical input. One of the results is a surface computing system that does not rely solely on touch.
This is not the first time we've seen touchless surface computing, but the demo shows just how precisely the system recognizes the hand. That level of detail opens up potential well beyond basic gesture recognition; tasks as complicated as translating sign language should be possible with the right software.
The video below shows the system tracking head movement to render three-dimensional perspective, and switching seamlessly between the behind-the-screen Wedge camera and a traditional bezel-mounted camera. More videos are available at The Wedge website. Via Engadget.