Title: Toe detection with leg model for wearable input/output interface
Authors: Fumihiro Sato, N. Sakata
DOI: 10.1145/2818427.2818453

Abstract: In recent years, mobile terminals such as smartphones have become widespread, and we now use information services frequently and at a glance: to find a route, check e-mail, or follow updates on social networks. However, a hand-held mobile terminal must be retrieved from a pocket and held in at least one hand while in use, which makes it difficult to use when both hands are occupied.
Title: Twech: a mobile platform to search and share visuo-tactile experiences
Authors: Nobuhisa Hanamitsu, Haruki Nakamura, M. Nakatani, K. Minamizawa
DOI: 10.1145/2818427.2818461

Abstract: When using social networks, users often upload digital media to capture their experiences: videos or photos of meals, landscapes, gatherings of friends, and so on. These events are captured with a camera, recorded with a microphone, or archived on video, and the resulting media provides visual, audible, or integrated audio-visual experiences. Until now, however, sharing the corresponding haptic experience has not been possible. If haptic experiences could be shared, the sensory feedback would be compelling and easy enough to understand to enable a more complete experience from a first-person perspective.
Title: Augmented reality using high fidelity spherical panorama with HDRI
Authors: Zi Siang See, M. Billinghurst, A. Cheok
DOI: 10.1145/2818427.2818445

Abstract: This paper presents an experimental method and apparatus for producing spherical panoramas with high dynamic range imaging (HDRI). Our method is optimized for high-fidelity augmented reality (AR) image-based environment recognition on mobile devices. Previous studies have shown that a pre-produced panorama image can enable AR tracking for mobile AR applications, but there has been little research on the qualities the source panorama must have to create high-fidelity AR experiences. Panorama production faces several challenges that can lead to inaccurately reproduced images in which virtual graphics cannot be correctly registered in the AR scene: parallax error across multiple-angle photographs, the difficulty of the nadir angle, and limited dynamic range. For mobile AR, we developed an HDRI method that requires only a single acquisition per angle, extending the dynamic range from a digital negative. This approach minimizes acquisition time across the multiple angles needed to reconstruct an accurately reproduced spherical panorama with sufficient luminance.
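The single-acquisition idea above can be illustrated with a minimal sketch: derive bracketed pseudo-exposures from one linear capture (the way a digital negative can be pushed or pulled in post) and merge them back in linear space. This is not the authors' pipeline; the stop values, weighting function, and synthetic sensor data are illustrative assumptions.

```python
import numpy as np

def pseudo_exposures(linear_raw, stops=(-2.0, 0.0, 2.0)):
    """Derive bracketed exposures from one linear capture by scaling,
    mimicking how a single digital negative can be pushed or pulled."""
    return [np.clip(linear_raw * (2.0 ** s), 0.0, 1.0) for s in stops]

def merge_hdr(exposures, stops):
    """Weighted average in linear radiance space; well-exposed
    (mid-tone) pixels in each bracket get the most weight."""
    num = np.zeros_like(exposures[0])
    den = np.zeros_like(exposures[0])
    for img, s in zip(exposures, stops):
        w = np.exp(-4.0 * (img - 0.5) ** 2)  # favour well-exposed pixels
        num += w * img / (2.0 ** s)          # undo the exposure scaling
        den += w
    return num / np.maximum(den, 1e-8)

# Synthetic linear sensor data spanning a 1000:1 luminance range.
stops = (-2.0, 0.0, 2.0)
raw = np.linspace(0.001, 1.0, 256)
hdr = merge_hdr(pseudo_exposures(raw, stops), stops)
```

The key point is that only one capture per angle is needed, so the total acquisition time for a full spherical panorama stays low while shadows and highlights are still recovered from the negative's latitude.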
Title: SMASH: synchronization media of athletes and spectator through haptic
Authors: Marie-Stephanie Iekura, H. Hayakawa, Keisuke Onoda, Yoichi Kamiyama, K. Minamizawa, M. Inami
DOI: 10.1145/2818427.2818439

Abstract: What could we do if we were able to feel another person's experience in real time? SMASH is a system that delivers the sports experience of a remote athlete to spectators at the stadium and to television audiences in real time. By holding a device with a built-in actuator, a spectator can feel the athlete's heartbeat and the tactile sensations the player experiences during a game, such as shooting a ball, taking steps, or smashing a shuttle. Watching from the player's viewpoint through an HMD is said to bring the spectator closest to the player, but it isolates the user from other spectators. By feeling the player's sensations in the palm instead, the spectator can feel closer to the player while still watching with others and sharing their emotions. Used with television broadcasts and in stadiums, the system should offer different degrees of synchronization with the athlete depending on the situation. With the Tokyo 2020 Olympic and Paralympic Games, new systems for watching sports are expected to emerge, drawing not only on tactile information but on extensions of any of the human senses.
Title: Tile-based path rendering for mobile device
Authors: Jeong-Joon Yoo, J. D. Lee, Sundeep Krishnadasan, Won-Jong Lee, J. Brothers, Soojung Ryu
DOI: 10.1145/2818427.2818449

Abstract: In this paper, we present a tile-based path rendering scheme that provides fast rendering on mobile devices. Legacy path rendering schemes are memory- or compute-intensive and therefore do not deliver sufficient performance (fps) on mobile devices. To achieve acceptable performance, we propose a tile-based approach with a twofold design goal: 1) minimize memory I/O, and 2) minimize computation. Because our scheme reduces memory I/O and computation simultaneously, it achieves high path rendering performance on mobile devices.
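The core of any tile-based scheme is a binning pass that assigns path geometry to screen tiles so that raster work and framebuffer traffic touch only the tiles a path actually covers. The sketch below shows that binning step for line segments; the 32-pixel tile size and the bounding-box overlap test are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict

TILE = 32  # tile edge in pixels (illustrative choice)

def bin_segments(segments, width, height):
    """Assign each path segment to every tile its bounding box overlaps.
    Tiles with an empty bin are skipped entirely at raster time, which is
    where the memory-I/O and compute savings come from."""
    bins = defaultdict(list)
    for seg in segments:
        (x0, y0), (x1, y1) = seg
        tx0 = max(0, int(min(x0, x1)) // TILE)
        ty0 = max(0, int(min(y0, y1)) // TILE)
        tx1 = min((width - 1) // TILE, int(max(x0, x1)) // TILE)
        ty1 = min((height - 1) // TILE, int(max(y0, y1)) // TILE)
        for ty in range(ty0, ty1 + 1):
            for tx in range(tx0, tx1 + 1):
                bins[(tx, ty)].append(seg)
    return bins

# A short diagonal stroke on a 128x128 canvas touches only 6 of 16 tiles,
# so the other 10 tiles cost nothing in rasterization or framebuffer I/O.
segments = [((10.0, 10.0), (70.0, 40.0))]
bins = bin_segments(segments, 128, 128)
```

After binning, each tile can be processed independently with its coverage data resident in on-chip memory, the same property that makes tile-based GPUs effective on mobile hardware.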
Title: The internet of (showbiz) things: scalability issues in deploying and supporting networked multimedia experience
Authors: Michela Ledwidge, Andrew Burrell
DOI: 10.1145/2818427.2818432

Abstract: Mod is a small studio creating interactive stories and immersive experiences. The bespoke nature of our work, coupled with small budgets, has required ingenuity in our use of mobile graphics and interactive applications. We examine operational challenges and lessons learned across three exhibitions that demonstrate iterative development, and the pitfalls of creating a more playful "plug-and-play" Internet of Things ecosystem for creative multimedia experiences.
Proceedings: SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications
DOI: 10.1145/2818427