Microsoft's Applied Sciences Group has posted a few videos of a project called “The Wedge: Seeing Smart Displays Through A New Lens.” It's very interesting user interface work along the same lines as Kinect for Xbox, where user motion and gestures are captured via optical input. One of the results is a surface computing system that does not rely solely on touch.
This is not the first time we’ve seen touchless surface computing, but the demo shows how much detail the system captures when recognizing a hand. That level of detail opens up potential well beyond basic gesture recognition; with the right software, tasks as complicated as translating sign language should be possible.
The video below shows the system’s ability to track head movement with three-dimensional perspective, as well as seamless switching between the behind-the-screen Wedge camera and a traditional bezel-mounted camera. More videos are available at The Wedge website. Via Engadget.