Multimodal Target Selection Techniques for Multi-Display Environments

We compare multimodal interaction techniques in a perspective-corrected multi-display environment (MDE). The performance of multimodal interactions using gestures, eye gaze, and head direction was experimentally examined in an object-manipulation task in MDEs and compared with a mouse-driven perspective cursor. Experimental results showed that gesture-based multimodal interactions achieve task-completion times comparable to the mouse-based perspective cursor. A technique using the user's head direction received positive comments from participants even though it was not the fastest. Based on the experimental results and observations, we discuss the possibilities of multimodal interaction for present and future MDEs. (IEEE 3DUI 2010)