ScoringTalk: a tablet system scoring and visualizing conversation for balancing of participation
Hiroyuki Adachi, Seiko Myojin, N. Shimada. DOI: 10.1145/2818427.2818454

We present a tablet system that gives the participants in a face-to-face multi-party conversation real-time feedback in the form of talking scores. In multi-party conversation, one person sometimes speaks too much or too little, which is a problem because the group cannot draw out enough information. Our system adds a game element to break through such imbalances. Using the front and back cameras of each participant's tablet, the system measures and visualizes statistical profiles of the conversation: who speaks to whom, who looks at whom, and the cumulative time of each. It also calculates each participant's score according to the rules of the game: a participant gains points for speaking and for listening, and loses points for speaking too much. The participants therefore talk while trying to maximize their scores, which leads to a balanced conversation. We evaluated the system in a within-subjects experiment consisting of five-minute three-person conversations on a software-development topic, during which the system recorded the participants' utterances. The results suggest that our system adequately balances the participants' utterance amounts.
And he built a crooked camera: a mobile visualization tool to view four-dimensional geometric objects
Nico Li, Daniel J. Rea, J. Young, E. Sharlin, M. Sousa. DOI: 10.1145/2818427.2818430
The limitations of human perception make it impossible to grasp four spatial dimensions simultaneously. Techniques for visualizing four-dimensional (4D) geometric shapes rely on limited projections of the true shape into lower dimensions, which often hinders the viewer's ability to grasp the complete structure or to examine its spatial structure from a natural 3D perspective. We propose a mobile visualization technique that enables viewers to better understand the geometry of 4D shapes, providing spatial freedom and leveraging the viewer's natural knowledge and experience of exploring 3D geometric shapes. Our prototype renders 3D intersections of the 4D object while giving the user continuous control over the value of the fourth dimension, enabling the user to interactively browse and explore a 4D shape using a simple camera-lens-style physical zoom metaphor.
{"title":"And he built a crooked camera: a mobile visualization tool to view four-dimensional geometric objects","authors":"Nico Li, Daniel J. Rea, J. Young, E. Sharlin, M. Sousa","doi":"10.1145/2818427.2818430","DOIUrl":"https://doi.org/10.1145/2818427.2818430","url":null,"abstract":"The limitations of human perception make it impossible to grasp four spatial dimensions simultaneously. Visualization techniques of four-dimensional (4D) geometrical shapes rely on visualizing limited projections of the true shape into lower dimensions, often hindering the viewer's ability to grasp the complete structure, or to access its spatial structure with a natural 3D perspective. We propose a mobile visualization technique that enables viewers to better understand the geometry of 4D shapes, providing spatial freedom and leveraging the viewer's natural knowledge and experience of exploring 3D geometric shapes. Our prototype renders 3D intersections of the 4D object, while allowing the user continuous control of varying values of the fourth dimension, enabling the user to interactively browse and explore a 4D shape using a simple camera-lens-style physical zoom metaphor.","PeriodicalId":328982,"journal":{"name":"SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123509752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SMASH
Marie-Stephanie Iekura, H. Hayakawa, Keisuke Onoda, Yoichi Kamiyama, K. Minamizawa, Masahiko Inami. DOI: 10.1007/978-0-387-30160-0_10590
MAVIS: mobile acquisition and VISualization: hands on
P. Watten, Marco Gilardi, Patrick Holroyd, Paul F. Newbury. DOI: 10.1145/2818427.2818447
With the advancement of mobile technologies, cameras on mobile devices have improved to the point where the quality of their output is sometimes comparable to that of a professional video camera. Moreover, mobile cameras are being used more and more frequently in professional production environments. However, tools that let professional users access and display the accurate information they need to control the technical quality of their filming, and to make informed decisions about the scenes they are shooting, are missing on mobile platforms. In this paper, the Mobile Acquisition and VISualisation (MAVIS) app is presented (see Figure 1). By exploiting the capabilities of modern mobile GPUs, MAVIS integrates the functionality of a vectorscope, a waveform monitor, false colouring, and focus peaking, together with all the standard functionality of a video recording app, in a single tool. With the extra information that MAVIS displays, the user can make informed decisions about how to light and shoot the scene. This enables high-quality videos to be obtained from the mobile camera, which can be used alongside the output of more professional cameras.
{"title":"MAVIS: mobile acquisition and VISualization: hands on","authors":"P. Watten, Marco Gilardi, Patrick Holroyd, Paul F. Newbury","doi":"10.1145/2818427.2818447","DOIUrl":"https://doi.org/10.1145/2818427.2818447","url":null,"abstract":"With the advancement of mobile technologies cameras on mobile devices have improved to the point where the quality of their output is sometimes comparable to that obtained from a professional video camera. Moreover mobile cameras are being used more and more frequently in professional production environments. However, tools that allow professional users to access and display the accurate information they need to control the technical quality of their filming and make informed decisions about the scenes they are filming is missing on mobile platforms. In this paper the Mobile Acquisition and VISualisation (MAVIS) app is presented, see figure 1. By exploiting the capabilities of modern mobile GPUs, MAVIS integrates the functionalities of a vectorscope, waveform monitor, false colouring and focus peaking monitors together with all the standard functionalities of a video recording app into a single tool. With the extra information that MAVIS displays, the user is able to make informed decisions about how to light and shoot the scene. This enables high quality videos to be obtained from the mobile camera, which can be used alongside outputs of more professional cameras.","PeriodicalId":328982,"journal":{"name":"SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126508741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An augmented exhibition podium with free-hand gesture interfaces
Huidong Bai, Gun A. Lee, M. Billinghurst. DOI: 10.1145/2818427.2818464

In this paper we present an augmented exhibition podium that supports natural free-hand 3D interaction for visitors using their own mobile devices. Visitors can hold a smartphone or tablet, or wear smart glasses, and point the mobile camera at the podium deck to see Augmented Reality (AR) content overlaid on an exhibit on the mobile display. They can also use their bare hands to interact with these virtual scenes without adding extra hardware (e.g. a depth sensor) to their own devices.
Latency tolerance techniques for real-time ray tracing on mobile computing platform
Youngsam Shin, S. Hwang, J. D. Lee, Won-Jong Lee, Soojung Ryu. DOI: 10.1145/2818427.2818437
In this paper, we propose an efficient ray scheduling algorithm and a non-blocking cache architecture that hide main-memory access latency, targeting real-time ray tracing on mobile devices. We first analyze the impact of memory latency by examining the memory access patterns of a ray tracing system, and we then present a novel ray scheduling method that uses a non-blocking pipeline feedback and cache architecture for a ray tracing hardware engine. To further improve cache efficiency, we also present a memory-efficient encoding scheme for the scene geometry. To evaluate our approach, we implemented a prototype ray tracing architecture on an FPGA platform. Our experimental results indicate that the approach preserves an average of 85% of performance in the presence of memory latency and improves performance by an average factor of 2.4.
{"title":"Latency tolerance techniques for real-time ray tracing on mobile computing platform","authors":"Youngsam Shin, S. Hwang, J. D. Lee, Won-Jong Lee, Soojung Ryu","doi":"10.1145/2818427.2818437","DOIUrl":"https://doi.org/10.1145/2818427.2818437","url":null,"abstract":"In this paper, we propose an efficient ray scheduling algorithm and non-block cache architecture to hiding main-memory access latency targeting real-time ray tracing on mobile device. We first analyze on the impact of a memory latency by analyzing the memory access patterns for a ray tracing system and present a novel ray scheduling method using a non-block pipeline feedback and cache architecture for ray tracing hardware engine. To achieve more cache efficiency, we also present a memory-efficient encoding scheme for the scene geometry. For an evaluation of our approach, we implemented a prototype ray tracing architecture using our approach on an FPGA platform. Our experimental results indicate that our approach shows that an average performance conservation of 85% and an average performance improves of 2.4 times.","PeriodicalId":328982,"journal":{"name":"SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132684957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
JoggAR: a mixed-modality AR approach for technology-augmented jogging
Chek Tien Tan, Richard Byrne, Simon Lui, Weilong Liu, F. Mueller. DOI: 10.1145/2818427.2818434
JoggAR demonstrates a novel combination of wearable visual, audio, and sensing technology that realizes a game-like, persistent augmented reality (AR) environment to enhance jogging and other exertion experiences that involve changing attention intensities over the course of the activity. In particular, we developed a method for audio-first exploration of 3D virtual spaces, in order to achieve our experiential goal of supporting exertion-focused activities.
{"title":"JoggAR: a mixed-modality AR approach for technology-augmented jogging","authors":"Chek Tien Tan, Richard Byrne, Simon Lui, Weilong Liu, F. Mueller","doi":"10.1145/2818427.2818434","DOIUrl":"https://doi.org/10.1145/2818427.2818434","url":null,"abstract":"JoggAR demonstrates a novel combination of wearable visual, audio and sensing technology to realize a game-like persistent augmented reality (AR) environment to enhance jogging and other exertion experiences that involves changing attention intensities in the course of the activities. In particular we developed a method to perform an audio-first exploration of 3D virtual spaces so as to achieve our experiential goal of supporting exertion-focused activities.","PeriodicalId":328982,"journal":{"name":"SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications","volume":"174 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133124234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extending HMD by chest-worn 3D camera for AR annotation
Alaeddin Nassani, Huidong Bai, Gun A. Lee, M. Billinghurst. DOI: 10.1145/2818427.2818462
A lightweight head-mounted display (HMD) combined with a 3D depth-sensing camera is a pair of wearable technologies that can be used to enhance the AR experience. In this demonstration we present a scenario in which the user wears Glass together with a Tango device mounted on their chest to create and review 3D augmented annotations indoors. Creating AR annotations with the 3D sensor is separate from viewing them on the head-mounted display: users have to turn their body to face an AR tag while creating it, but when viewing AR tags they can simply turn their head to explore the surrounding environment independently of where their body is facing, which makes the experience natural and comfortable.
{"title":"Extending HMD by chest-worn 3D camera for AR annotation","authors":"Alaeddin Nassani, Huidong Bai, Gun A. Lee, M. Billinghurst","doi":"10.1145/2818427.2818462","DOIUrl":"https://doi.org/10.1145/2818427.2818462","url":null,"abstract":"Light head-mounted display (HMD) combined with a 3D depth sensing camera as wearable technologies that can be used to enhance the AR experience. In this demonstration we present a scenario where the user wears Glass together with the Tango mounted on their chest to create and review 3D augmented annotations indoors. The process of creating AR annotations using the 3D sensor is separate from viewing AR annotation using a head-mounted display (HMD). For example, users' have to turn their body to face the AR tag during creation, however for viewing the AR tags, the user can turn their head to explore the surrounding environment, separate from where their body is facing, which makes the experience natural and comfortable.","PeriodicalId":328982,"journal":{"name":"SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127971401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toe detection with leg model for wearable input/output interface
Fumihiro Sato, N. Sakata. DOI: 10.1145/2818427.2818453

In recent years, mobile terminals such as smartphones have become widespread. Accordingly, we tend to use information services frequently and at a glance, for example to find a route, check e-mail, or read social-network updates. However, a hand-held mobile terminal must be retrieved from a pocket and held in at least one hand while in use, so it is difficult to use when both hands are occupied.
Mobile map applications and the democratisation of hazard information
Paul Haimes, Tetsuaki Baba, S. Medley. DOI: 10.1145/2818427.2818440

The geospatial web, exemplified by the popularity of Google Maps, has democratised access to geospatial data that was previously available only to those with expertise in geographic information systems (GIS). This increased accessibility has meant that critical information, such as the location of bushfires in Australia, is now more available to the communities vulnerable to such risks. This paper reports on the findings of a research project in Australia that aimed to present near real-time bushfire information in an interface that community-based users found intuitive and easy to use. It also describes the early prototype stages of an iPhone application that aims to demonstrate how Japanese natural hazard data can be presented in a more intuitive way. The work described here is intended to encourage organisations and individuals who present spatial hazard information to non-expert users to consider the needs, abilities, and concerns of their intended audience. It also describes the technologies and processes used in the design and development of the MyFireWatch and Mapping Hazards in Japan applications.