Among the projects we saw was one that employed a Kinect sensor to translate sign language. In another, 3D graphics on a screen were rendered in full detail wherever the user was looking, while other areas were rendered more coarsely to make better use of the GPU. A third project tracked facial expressions with an ordinary webcam and mirrored them on an on-screen character. And finally, using a smartphone, researchers extended traditional haptic feedback so that certain areas of the screen produce a sensation of friction; the technique could eventually help blind people use touchscreen devices.
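The gaze-based rendering demo rests on a simple idea: spend GPU effort where the user is actually looking. As a rough illustration only (not the researchers' implementation; the function, names, and thresholds below are all made up), a renderer could pick a detail factor for each screen tile from its distance to the current gaze point:

```python
import math

def detail_level(tile_center, gaze, fovea_radius=100.0, falloff=300.0):
    """Return a render-detail factor in [0, 1]: full detail near the gaze,
    fading to a coarse minimum in the periphery. All values are illustrative."""
    dist = math.dist(tile_center, gaze)  # pixel distance from gaze point
    if dist <= fovea_radius:
        return 1.0  # inside the foveal region: render at full detail
    # Linear falloff outside the foveal region, clamped at a coarse floor.
    return max(0.1, 1.0 - (dist - fovea_radius) / falloff)

print(detail_level((110, 100), (100, 100)))  # near the gaze -> 1.0
print(detail_level((600, 400), (100, 100)))  # periphery -> 0.1 (coarse)
```

Because human visual acuity drops off sharply away from the fovea, the coarse periphery is barely noticeable to the user, which is what frees up the GPU budget.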
 
Microsoft shows off its Beijing research center
Monday, November 4, 2013, 14:26