{"title":"Fingers tale","authors":"Luca Schenato, Sinem Vardarli","doi":"10.1145/2542398.2542472","DOIUrl":"https://doi.org/10.1145/2542398.2542472","url":null,"abstract":"An unusual adventure of a team of toes.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114694730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Social reverse geocoding studies: describing city images using geotagged social tagging","authors":"Koh Sueda","doi":"10.1145/2543651.2543679","DOIUrl":"https://doi.org/10.1145/2543651.2543679","url":null,"abstract":"Owing to the increasing use of social networking in the mobile environment, people today share more than a million geotagged objects that include objects with social tagging on a daily basis. In this paper, we propose social reverse geocoding (SRG). Social reverse geocoding (SRG) provides highly descriptive geographical information to mobile users. GPS provides the user with latitude and longitude values; however, these values are cumbersome for determining a precise location. A traditional reverse geocoding (conversion of the abovementioned values into street addresses) provides location information based on administrative labeling, but people often do not recognize locations or their surrounding environs from street addresses alone. To address this problem with location recognition, we have created SRG, a reverse geocoding system that enhances location data with user-generated information and provides assistance through a mobile interface [Sueda, et al. 2012]. Through a user study of SRG, we found a clear correlation between the number of tags and the locality of the residents. The obtained result indicates that the residents define the area of a city through SRG as closer than that defined by its street address. Further, the result reveals the potential of developing location-based services based on the image of the city obtained using social tagging on the Internet.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116887761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
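The tag-based lookup that the SRG abstract describes can be sketched as follows: aggregate the social tags attached to geotagged objects near a query point and return the most common ones as the location's description. This is an illustrative Python sketch under that reading, not the SRG system itself; the function names (`social_reverse_geocode`, `haversine_m`) and the sample data are hypothetical.

```python
import math
from collections import Counter

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def social_reverse_geocode(lat, lon, tagged_points, radius_m=500, top_n=3):
    """Describe (lat, lon) by the most frequent social tags nearby.

    tagged_points is an iterable of (lat, lon, tag) tuples, e.g. harvested
    from geotagged photos or posts."""
    nearby = [tag for (t_lat, t_lon, tag) in tagged_points
              if haversine_m(lat, lon, t_lat, t_lon) <= radius_m]
    return [tag for tag, _ in Counter(nearby).most_common(top_n)]
```

Unlike street-address reverse geocoding, the output reflects how people actually label the area, which is the effect the user study measures.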
{"title":"Twirled affordances, self-conscious avatars, & inspection gestures","authors":"Michael Cohen, Rasika Ranaweera, Kensuke Nishimura, Y. Sasamoto, Tomohiro Oyama, Tetsunobu Ohashi, Anzu Nakada, J. Villegas, Yong Ping Chen, Sascha Holesch, Jun Yamadera, Hayato Ito, Yasuhiko Saito, Akira Sasaki","doi":"10.1145/2543651.2543691","DOIUrl":"https://doi.org/10.1145/2543651.2543691","url":null,"abstract":"Contemporary smartphones and tablets have magnetometers that can be used to detect yaw, which data can be distributed to adjust ambient media. We have built haptic interfaces featuring smartphones and tablets that use compass-derived orientation sensing to modulate virtual displays. Embedding mobile devices into pointing, swinging, and flailing affordances allows \"padiddle\"-style interfaces, finger spinning, and \"poi\"-style interfaces, whirling tethered devices, for novel interaction techniques [Cohen et al. 2013].","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130706706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
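The compass-derived yaw sensing mentioned above can be approximated as below. This is a minimal sketch assuming the device lies flat (no tilt compensation) and that the magnetometer's x axis points toward magnetic north at heading 0; axis conventions differ across platforms, and this is not the authors' implementation.

```python
import math

def yaw_from_magnetometer(mx, my):
    """Heading in degrees clockwise from magnetic north, assuming the
    device lies flat so the horizontal field components are mx and my."""
    # atan2 gives the angle of the horizontal field vector; wrapping into
    # [0, 360) yields a compass-style heading that can be distributed to
    # ambient-media clients as the abstract describes.
    return math.degrees(math.atan2(my, mx)) % 360.0
```

A padiddle- or poi-style controller would stream this value continuously while the device spins.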
{"title":"GPU-based large-scale visualization","authors":"M. Hadwiger, J. Krüger, J. Beyer, S. Bruckner","doi":"10.1145/2542266.2542273","DOIUrl":"https://doi.org/10.1145/2542266.2542273","url":null,"abstract":"Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size.\u0000 The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections.\u0000 You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization.\u0000 We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous processing and data streaming on CPUs.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130832161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
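The display-aware principle in that course description — making effort proportional to visible pixels rather than data size — can be illustrated with a level-of-detail selection sketch: pick the coarsest mip level whose voxels still project to at least one screen pixel. `select_lod` is a hypothetical helper written for illustration, not code from the course.

```python
import math

def select_lod(voxel_size_world, distance, fov_y_rad, viewport_height_px):
    """Return the coarsest mip level whose projected voxel footprint still
    covers at least one screen pixel (display-aware LOD selection).

    Level 0 is full resolution; each higher level doubles the voxel size."""
    # World-space extent covered by one pixel at this viewing distance,
    # from the vertical field of view and the viewport height.
    pixel_world = 2.0 * distance * math.tan(fov_y_rad / 2.0) / viewport_height_px
    if pixel_world <= voxel_size_world:
        return 0  # a full-resolution voxel already spans a pixel or more
    # Each mip level doubles voxel size; log2 counts how many doublings
    # fit under the pixel footprint.
    return int(math.floor(math.log2(pixel_world / voxel_size_world)))
```

Fetching only the bricks a ray actually touches, at the level this function returns, is what keeps the working set bounded by screen resolution instead of data size.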
{"title":"Human-engine: viewing 4D mesh captures on mobile devices","authors":"M. Poswal, K. Hecker, Debra Isaac Downing, G. Downing, V. Bohossian","doi":"10.1145/2543651.2543674","DOIUrl":"https://doi.org/10.1145/2543651.2543674","url":null,"abstract":"Human-Engine is an innovative new approach to 3D asset creation, using 4D scan data to create lifelike virtual humans, clothing or anything else you can put in front of a camera. Our goal is to bridge the gap between traditional video capture and existing CG technology by creating accurate scans of humans and objects in motion that combine the realism of video footage with the flexibility of CG models.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"469 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117215385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A haptic device based on an approximate plane","authors":"Anzu Kawazoe, Kazuo Ikeshiro, H. Imamura","doi":"10.1145/2542302.2542337","DOIUrl":"https://doi.org/10.1145/2542302.2542337","url":null,"abstract":"In recent years, research on haptic interfaces has attracted growing attention. Haptic devices let people handle 3D objects easily, so they are expected to be used in applications such as medical simulation and the remote control of robots. The Falcon [1] and the PHANToM [2] are among the best-known haptic devices. They provide a controller or pen with which the user can touch virtual objects, and they can also convey a sense of force at each contact point on a virtual object. Such devices are classified as single-point-contact haptic devices: the user experiences the sensation of poking a virtual object. However, single-point-contact devices cannot convey a sense of force and a sense of touch at the same time. We define this sense of touch as the sense of friction caused by different materials. To achieve a more realistic sense of touch, we attempt to present both a sense of force and a sense of touch simultaneously, and for this purpose we developed a novel haptic device.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"307 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117228396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generating abstract paintings in Kandinsky style","authors":"Kang Zhang, Jinhui Yu","doi":"10.1145/2542256.2542257","DOIUrl":"https://doi.org/10.1145/2542256.2542257","url":null,"abstract":"This paper presents a recent project on the automatic generation of Kandinsky-style abstract paintings using the programming language Processing. It first offers an analysis of Kandinsky's paintings based on his art theories and the authors' own understanding and observation. The generation process is described in detail, and sample images styled on four of Kandinsky's paintings are demonstrated and discussed. Our approach is highly scalable, limited only by the memory available to Processing. Because random generation is used, every styled image generated can be unique. A selection of images generated at the required resolution is also submitted, and 70 images are made into a video companion.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133425481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
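The seeded random-generation idea behind that paper can be sketched in Python emitting SVG (the paper itself uses Processing, and its actual composition rules come from Kandinsky's theories). The palette and shape rules below are purely illustrative, not the authors'; the point is that each seed yields a distinct yet reproducible composition.

```python
import random

# Illustrative palette, not taken from the paper.
PALETTE = ["#1d3557", "#e63946", "#f1c453", "#2a9d8f", "#000000"]

def kandinsky_svg(seed, width=400, height=400, n_shapes=12):
    """Compose a random arrangement of circles and lines as SVG markup.

    A fixed seed always reproduces the same composition; different seeds
    give unique images, mirroring the paper's random-generation claim."""
    rng = random.Random(seed)
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">',
             f'<rect width="{width}" height="{height}" fill="#f4efe6"/>']
    for _ in range(n_shapes):
        color = rng.choice(PALETTE)
        if rng.random() < 0.5:
            cx, cy = rng.randrange(width), rng.randrange(height)
            r = rng.randrange(10, 60)
            parts.append(f'<circle cx="{cx}" cy="{cy}" r="{r}" '
                         f'fill="{color}" fill-opacity="0.8"/>')
        else:
            x1, y1 = rng.randrange(width), rng.randrange(height)
            x2, y2 = rng.randrange(width), rng.randrange(height)
            parts.append(f'<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" '
                         f'stroke="{color}" stroke-width="3"/>')
    parts.append('</svg>')
    return "".join(parts)
```

Scalability here is just `n_shapes` and the canvas size, bounded only by memory, which parallels the paper's scalability claim about Processing.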
{"title":"Inside \"elysium\": from earth to the ring","authors":"A. Kaufman","doi":"10.1145/2542398.2542424","DOIUrl":"https://doi.org/10.1145/2542398.2542424","url":null,"abstract":"The visual effects of Neill Blomkamp's latest sci-fi epic, \"Elysium\".","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127885778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hyak-Ki Men: a study of framework for creating mixed reality entertainment","authors":"Toshikazu Ohshima, Y. Shibata, K. Isshiki, Ko Hayami, Chiharu Tanaka","doi":"10.1145/2542302.2542335","DOIUrl":"https://doi.org/10.1145/2542302.2542335","url":null,"abstract":"\"Hyak-Ki Men\" is part of our Mixed Reality (MR) Entertainment Project. The goal is to realize innovative, high-quality entertainment that provides an impressive experience for people of various interests and a wide range of ages by applying MR technology. \"Hyak-Ki Men\" is an MR ninja entertainment in which a player becomes a ninja (Figure 1a) whose mission is to defeat virtual ogres on an MR field. The player can enjoy exciting battles with the ogres using a virtual katana (Figure 1b) and throwing stars (Figure 1c) through a natural gestural user interface with multisensory feedback.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114253143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real time reliability: mixed reality public transportation","authors":"Antti Nurminen, J. Järvi","doi":"10.1145/2543651.2543689","DOIUrl":"https://doi.org/10.1145/2543651.2543689","url":null,"abstract":"In public transportation, quality of service is of paramount importance. In a study on public transportation reliability, approximately half of the riders reduced their use of services due to unreliability, switching to other modes of transportation [Carrel et al. 2013]. It is also known that the perceived wait time at a bus stop is greater than the actual wait time and a real time information diminishes this difference [Mishalani et al. 2006]. However, when real time data itself is unreliable, this is felt particularly frustrating [Carrel et al. 2013].","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129542674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}