{"id":3531,"date":"2023-01-30T17:03:49","date_gmt":"2023-01-30T08:03:49","guid":{"rendered":"https:\/\/www.icd.riec.tohoku.ac.jp\/en\/?page_id=3531"},"modified":"2023-01-30T17:03:49","modified_gmt":"2023-01-30T08:03:49","slug":"handygaze","status":"publish","type":"page","link":"https:\/\/www.icd.riec.tohoku.ac.jp\/en\/research\/projects\/handygaze\/","title":{"rendered":"HandyGaze: A Gaze Tracking Technique for Room-Scale Environments using a Single Smartphone"},"content":{"rendered":"\n<h1>HandyGaze: A Gaze Tracking Technique for Room-Scale Environments using a Single Smartphone<\/h1>\n\n\n\n<div class=\"is-layout-flex wp-container-3 wp-block-columns\">\n<div class=\"is-layout-flow wp-block-column\">\n<p>We propose HandyGaze, a 6-DoF gaze tracking technique for room-scale environments that can be carried out by simply holding a smartphone naturally without installing any sensors or markers in the environment. Our technique simultaneously employs the smartphone\u2019s front and rear cameras: The front camera estimates the user\u2019s gaze vector relative to the smartphone, while the rear camera (and depth sensor, if available) performs self-localization by reconstructing a pre-obtained 3D map of the environment. To achieve this, we implemented a prototype that works on iOS smartphones by running an ARKit-based algorithm for estimating the user\u2019s 6-DoF head orientation. We additionally implemented a novel calibration method that offsets the user-specific deviation between the head and gaze orientations. We then conducted a user study (N=10) that measured our technique\u2019s positional accuracy to the gaze target under four conditions, based on combinations of use with and without a depth sensor and calibration. 
The results show that the mean absolute angular error of the gaze point was 8.2 degrees and the positional error was 0.53 m when using the depth sensor, suggesting that the technique can be used practically in applications such as gaze-based guidance for museums.<\/p>\n<\/div>\n\n\n\n<div class=\"is-layout-flow wp-block-column\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"1024\" height=\"683\" src=\"https:\/\/www.icd.riec.tohoku.ac.jp\/en\/wp-content\/uploads\/sites\/3\/HandyGaze-1024x683.jpg\" alt=\"\" class=\"wp-image-3530\" srcset=\"https:\/\/www.icd.riec.tohoku.ac.jp\/en\/wp-content\/uploads\/sites\/3\/HandyGaze-1024x683.jpg 1024w, https:\/\/www.icd.riec.tohoku.ac.jp\/en\/wp-content\/uploads\/sites\/3\/HandyGaze-300x200.jpg 300w, https:\/\/www.icd.riec.tohoku.ac.jp\/en\/wp-content\/uploads\/sites\/3\/HandyGaze-768x512.jpg 768w, https:\/\/www.icd.riec.tohoku.ac.jp\/en\/wp-content\/uploads\/sites\/3\/HandyGaze-1536x1024.jpg 1536w, https:\/\/www.icd.riec.tohoku.ac.jp\/en\/wp-content\/uploads\/sites\/3\/HandyGaze.jpg 1920w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n<\/div>\n\n\n\n<figure class=\"wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<div class=\"jetpack-video-wrapper\"><span class=\"embed-youtube\" style=\"text-align:center; display: block;\"><iframe class='youtube-player' width='1170' height='659' src='https:\/\/www.youtube.com\/embed\/t5hGGfdW0ic?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=ja&#038;autohide=2&#038;wmode=transparent' allowfullscreen='true' style='border:0;' sandbox='allow-scripts allow-same-origin allow-popups allow-presentation'><\/iframe><\/span><\/div>\n<\/div><\/figure>\n\n\n\n<h3>Publications<\/h3>\n\n\n\n<h4>International<\/h4>\n\n\n\n<ul><li>Takahiro Nagai, Kazuyuki Fujita, Kazuki 
Takashima, Yoshifumi Kitamura. HandyGaze: A Gaze Tracking Technique for Room-Scale Environments using a Single Smartphone, Proc. of the ACM on Human-Computer Interaction (ISS &#8217;22), Vol. 6, Issue ISS, Article No. 562, pp. 143\u2013160, Nov. 2022.<br><a href=\"https:\/\/doi.org\/10.1145\/3567715\">https:\/\/doi.org\/10.1145\/3567715<\/a><\/li><\/ul>\n\n\n\n<h4>Domestic<\/h4>\n\n\n\n<ul><li>Takahiro Nagai, Kazuyuki Fujita, Kazuki Takashima, Yoshifumi Kitamura. A Study of a Gaze Input Interface for the Surrounding Environment Using Only a Smartphone, Human Interface Society SIG Reports (in Japanese), Jun. 2021.<\/li><\/ul>\n","protected":false},"excerpt":{"rendered":"<p>HandyGaze: A Gaze Tracking Technique for Room-Scale Environments using a Single Smartphone We propose HandyGaz 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":933,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"templates\/page-full-width.php","meta":[],"_links":{"self":[{"href":"https:\/\/www.icd.riec.tohoku.ac.jp\/en\/wp-json\/wp\/v2\/pages\/3531"}],"collection":[{"href":"https:\/\/www.icd.riec.tohoku.ac.jp\/en\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www.icd.riec.tohoku.ac.jp\/en\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www.icd.riec.tohoku.ac.jp\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.icd.riec.tohoku.ac.jp\/en\/wp-json\/wp\/v2\/comments?post=3531"}],"version-history":[{"count":1,"href":"https:\/\/www.icd.riec.tohoku.ac.jp\/en\/wp-json\/wp\/v2\/pages\/3531\/revisions"}],"predecessor-version":[{"id":3532,"href":"https:\/\/www.icd.riec.tohoku.ac.jp\/en\/wp-json\/wp\/v2\/pages\/3531\/revisions\/3532"}],"up":[{"embeddable":true,"href":"https:\/\/www.icd.riec.tohoku.ac.jp\/en\/wp-json\/wp\/v2\/pages\/933"}],"wp:attachment":[{"href":"https:\/\/www.icd.riec.tohoku.ac.jp\/en\/wp-json\/wp\/v2\/media?parent=3531"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}