Hiroyuki Adachi, Akimune Haruna, Seiko Myojin, N. Shimada
In order to enhance communication, various approaches to supporting it have been researched [Terken and Sturm 2010; Bergstrom and Karahalios 2007]. However, most of these systems are difficult to set up because they require special equipment, for example a worn microphone or a room fitted with a projector. In contrast, our system [Adachi et al. 2014] only requires devices with two cameras and a display, such as tablets and smartphones; since such devices can both sense and visualize, and are widely available, the system has the advantage of being easy to use. In addition, our system can provide different (controlled) information to each individual, since every participant has their own display. We consider the system useful for brainstorming, group meetings, tabletop games with conversation, and so on.
"ScoringTalk and WatchingMeter: utterance and gaze visualization for co-located collaboration." Hiroyuki Adachi, Akimune Haruna, Seiko Myojin, N. Shimada. SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications, 2 Nov 2015. doi:10.1145/2818427.2818455
Many types of display systems have been developed to provide a spatial viewing experience, and surround-sound systems to convey a high level of presence. However, these visual and auditory display systems often require large spaces to be set aside for fixed, specialized equipment, and they tend to be expensive. On the other hand, mobile devices such as smartphones and tablets are now widespread. Thus, it may be possible to build an immersive reality system on mobile devices that users can experience at any time and in any place.
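A core piece of any mobile omnidirectional viewer is mapping the current viewing direction to pixel coordinates in the equirectangular source frame. The abstract does not give the authors' implementation; the sketch below shows only the standard mapping, with the function name and frame size chosen for illustration.

```python
import math

def equirect_pixel(yaw, pitch, width, height):
    """Map a viewing direction (yaw, pitch in radians) to pixel
    coordinates in an equirectangular omnidirectional frame.
    yaw in [-pi, pi), pitch in [-pi/2, pi/2]."""
    u = (yaw + math.pi) / (2.0 * math.pi)   # 0..1 across longitude
    v = (math.pi / 2.0 - pitch) / math.pi   # 0..1 from top (up) to bottom
    x = min(int(u * width), width - 1)      # clamp the seam/pole cases
    y = min(int(v * height), height - 1)
    return x, y

# Looking straight ahead (yaw=0, pitch=0) lands at the image centre.
print(equirect_pixel(0.0, 0.0, 1920, 960))  # -> (960, 480)
```

A streaming client would evaluate this per screen pixel (typically on the GPU) using the device's orientation sensors to obtain yaw and pitch.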
"Mobile-based streaming system for omnidirectional contents." Masanori Hironishi, Wataru Motomura, Tomohito Yamamoto. SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications, 2 Nov 2015. doi:10.1145/2818427.2818435
P. Watten, Marco Gilardi, Patrick Holroyd, Paul F. Newbury
Professional video recording is a complex process which often requires expensive cameras and large amounts of ancillary equipment. With the advancement of mobile technologies, cameras on mobile devices have improved to the point where the quality of their output is sometimes comparable to that of a professional video camera, and they are often used in professional productions. However, tools that allow professional users to access the information they need to control the technical quality of their filming, and to make informed decisions about what they are recording, are missing on mobile platforms. In this paper we present MAVIS (Mobile Acquisition and VISualization), a tool for professional filming on a mobile platform. MAVIS gives users access to a colour vectorscope, waveform monitor, false colouring, focus peaking, and all other information needed to produce high-quality professional videos. This is achieved by exploiting the capabilities of modern mobile GPUs through the use of a number of vertex and fragment shaders. Evaluation with professionals in the film industry shows that the app and its functionalities are well received and that the output and usability of the application align with professional standards.
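False colouring, one of the monitoring tools listed above, replaces each pixel's luminance with a flat diagnostic colour so that exposure problems stand out at a glance. The authors implement it in fragment shaders; as an illustration only, here is the same idea as a minimal NumPy sketch, with hypothetical luminance bands (real false-colour palettes vary by camera vendor).

```python
import numpy as np

# Illustrative luminance bands on a 0-100 (IRE-like) scale; these
# thresholds are assumptions, not MAVIS's actual palette.
BANDS = [
    (0,   2,   (128, 0, 255)),    # crushed blacks  -> purple
    (2,   42,  (64, 64, 64)),     # low mids        -> dark grey
    (42,  58,  (0, 200, 0)),      # skin-tone range -> green
    (58,  98,  (192, 192, 192)),  # highlights      -> light grey
    (98,  101, (255, 0, 0)),      # clipping        -> red
]

def false_colour(luma):
    """Replace each pixel's luminance (0-100) with a flat diagnostic
    colour, mimicking what a GPU fragment shader does per fragment."""
    out = np.zeros(luma.shape + (3,), dtype=np.uint8)
    for lo, hi, rgb in BANDS:
        mask = (luma >= lo) & (luma < hi)
        out[mask] = rgb
    return out

frame = np.array([[0.0, 50.0, 100.0]])
print(false_colour(frame)[0])  # purple, green, red
```

On the GPU the same lookup runs per fragment in parallel, which is why such overlays cost almost nothing even on live 1080p preview.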
"MAVIS: Mobile Acquisition and VISualization: a professional tool for video recording on a mobile platform." P. Watten, Marco Gilardi, Patrick Holroyd, Paul F. Newbury. SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications, 2 Nov 2015. doi:10.1145/2818427.2818448
This demonstration presents an experimental method and apparatus configuration for producing spherical panoramas with high dynamic range imaging (HDRI). Our method is optimized to provide high-fidelity augmented reality (AR) image-based environment recognition for mobile devices. We developed an HDRI method that requires only a single acquisition, extending the dynamic range from a digital negative; this approach is applied at each of the multiple angles necessary to reconstruct an accurately reproduced spherical panorama with sufficient luminance.
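One common way to extend dynamic range from a single digital negative is to derive a simulated exposure bracket from the linear RAW data and merge it with exposure-confidence weights. The abstract does not detail the authors' pipeline, so the following is only a generic sketch of that idea, not their method; the function names and stop values are our own.

```python
import numpy as np

def pseudo_exposures(linear, stops=(-2, 0, 2)):
    """Simulate an exposure bracket from a single linear (RAW-like)
    capture by exposure scaling -- one way to extend dynamic range
    from a digital negative without multiple shots (sketch only)."""
    return [np.clip(linear * (2.0 ** s), 0.0, 1.0) for s in stops]

def merge_hdr(exposures, stops=(-2, 0, 2)):
    """Hat-weighted average favouring well-exposed pixels, mapped
    back to a common radiance scale."""
    num = np.zeros_like(exposures[0])
    den = np.zeros_like(exposures[0])
    for img, s in zip(exposures, stops):
        w = 1.0 - np.abs(img - 0.5) * 2.0  # weight peaks at mid-grey
        num += w * img / (2.0 ** s)        # undo the exposure scaling
        den += w
    return num / np.maximum(den, 1e-6)

# A uniform mid-shadow patch survives the round trip unchanged.
hdr = merge_hdr(pseudo_exposures(np.full((2, 2), 0.25)))
```

For a spherical panorama, the same merge would run on each angular capture before stitching, so every view direction carries consistent luminance.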
"Augmented reality using high fidelity spherical panorama with HDRI: demonstration." Zi Siang See, M. Billinghurst, A. Cheok. SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications, 2 Nov 2015. doi:10.1145/2818427.2819696
A. Nittala, Nico Li, S. Cartwright, Kazuki Takashima, E. Sharlin, M. Sousa
We present our prototype of PlanWell, a spatial augmented reality interface that facilitates collaborative field operations. PlanWell allows a central overseer (in a command and control center) and a remote explorer (an outdoor user in the field) to explore and collaborate within a geographical area. PlanWell provides the overseer with a tangible user interface (TUI) based on a 3D printout of surface geography, which acts as a physical representation of the region to be explored. Augmented reality is used to dynamically overlay properties of the region, as well as the presence of the remote explorer and their actions, onto the 3D representation of the terrain. The overseer can perform actions directly on the TUI, and these actions are then presented as dynamic AR visualizations superimposed on the explorer's view in the field. Although our interface could be applied to many domains, the PlanWell prototype was developed to facilitate petroleum engineering tasks such as well planning and coordination of drilling operations. This paper describes the design and implementation of the current PlanWell prototype in the context of petroleum well planning and drilling, and discusses preliminary reflections from two focus group sessions with domain experts.
"PLANWELL: spatial user interface for collaborative petroleum well-planning." A. Nittala, Nico Li, S. Cartwright, Kazuki Takashima, E. Sharlin, M. Sousa. SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications, 2 Nov 2015. doi:10.1145/2818427.2818443
There are many application areas for using a smartphone to access relevant information. By taking a picture of a physical object with a smartphone app, we use image-recognition technology to provide a quick link between the physical object and its relevant information. However, developing smartphone apps is expensive and time-consuming. We developed MIMAS AR Creator, a web-based software platform for automatically creating smartphone apps that access multimedia through pictures captured by the smartphone camera, combined with its GPS location. The platform allows people without programming skills to create a smartphone app in a few minutes from existing multimedia content, shortening app development time by more than 90%. The digital content can be web pages, videos, audio, images, or 3D graphics with or without animation. The platform can be used for mobile advertising and retail marketing, mobile learning, tour guides, and so on. For example, with the created app running, people can point their phone camera at a picture in a newspaper, a product brochure, or a physical product to obtain further information provided by the advertisers or vendors. They can also point the phone camera at a building or monument to retrieve relevant historical information.
"A platform for mobile augmented reality app creation without programming." Yiqun Li, Aiyuan Guo, Ching-Ling Chin. SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications, 2 Nov 2015. doi:10.1145/2818427.2818452
Takao Kakimori, Makoto Okabe, Keiji Yanai, R. Onai
Recently, many people take pictures of food at home or in restaurants and upload them to a social networking service (SNS) to share with friends. People want to take delicious-looking pictures of food, but this is often difficult, because most of them have no idea how. There are many photography techniques concerning composition [Liu et al. 2010], lighting, color, focus, etc., and the techniques used to take a picture differ across types of subject. The problem lies in the difficulty amateur photographers face in choosing and applying the appropriate ones from among so many techniques. In this paper, we focus on composition and develop a system that supports amateurs in taking a delicious-looking picture of food in a short time. Our target users are amateur food photographers, and our target photographic subjects are foods on dishes. There are four steps to taking a picture with our system: 1) the system automatically recognizes the foods on the dishes; 2) the system suggests a composition and camera tilt with which the user can take a delicious-looking picture; 3) the user arranges the foods and dishes on the table and sets the camera position and tilt; 4) finally, the user takes the picture.
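The abstract does not specify how composition suggestions are scored. One simple heuristic a system like this might build on is the rule of thirds, which places the main subject on one of four "power points" of the frame. The sketch below is purely illustrative; the function and its normalisation are our own assumptions, not the paper's model.

```python
def thirds_score(cx, cy, width, height):
    """Score how close a subject centre (cx, cy) lies to the nearest
    rule-of-thirds power point; 1.0 means exactly on a power point.
    An illustrative composition heuristic, not the paper's method."""
    # The four intersections of the thirds grid.
    points = [(width * i / 3.0, height * j / 3.0)
              for i in (1, 2) for j in (1, 2)]
    d = min(((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
            for px, py in points)
    dmax = (width ** 2 + height ** 2) ** 0.5 / 3.0  # normalisation
    return max(0.0, 1.0 - d / dmax)

# A dish centred on the lower-left power point scores 1.0.
print(thirds_score(100, 200, 300, 300))  # -> 1.0
```

A suggestion engine could evaluate such a score for the detected dish region under candidate crops and camera tilts, then recommend the highest-scoring arrangement.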
"A system to support the amateurs to take a delicious-looking picture of foods." Takao Kakimori, Makoto Okabe, Keiji Yanai, R. Onai. SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications, 2 Nov 2015. doi:10.1145/2818427.2818451
Chih-Hsiang Yu, Wen-Wei Peng, Shys-Fan Yang-Mao, Yuan Wang, W. Chinthammit, H. Duh
Nowadays, in order to overcome the limitations of WIMP interaction, many novel user interfaces have been explored, such as multi-touch user interfaces [Reisman et al. 2009], tangible user interfaces (TUIs) [Jordà et al. 2007], organic user interfaces (OUIs) [Koh et al. 2011], and mid-air gesture detection [Benko and Wilson 2010]. These technologies have the potential to significantly impact the market for smart TVs, desktops, mobile phones, tablets, and wearable devices such as smart watches and smart glasses. Google Glass, for example, provides only a touch pad, located on the right side of the device, which supports touch gestures through simple tapping and sliding of a finger. Hand gesture is not only a powerful human-to-human communication modality [Chen et al. 2007]; it can also change the way we interact with computers. Therefore, implementing a hand-gesture control framework on such glasses could provide an easy-to-use, intuitive, and flexible interaction approach. In this paper, we propose a hand-gesture control framework on smart glasses that supports a variety of gesture controls. The user can load a virtual 3D object through his fingers, just like a magician's trick; rotate the virtual 3D object by moving his hand; and zoom the virtual 3D object by using a particular gesture sign.
"A hand gesture control framework on smart glasses." Chih-Hsiang Yu, Wen-Wei Peng, Shys-Fan Yang-Mao, Yuan Wang, W. Chinthammit, H. Duh. SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications, 2 Nov 2015. doi:10.1145/2818427.2819695
Fabio Zünd, Mattia Ryffel, Stéphane Magnenat, A. Marra, Maurizio Nitti, Mubbasir Kapadia, Gioacchino Noris, Kenny Mitchell, M. Gross, R. Sumner
Augmented Reality (AR) holds unique and promising potential to bridge between real-world activities and digital experiences, allowing users to engage their imagination and boost their creativity. We propose the concept of Augmented Creativity: employing AR on modern mobile devices to enhance real-world creative activities, support education, and open new interaction possibilities. We present six prototype applications that explore and develop Augmented Creativity in different ways, cultivating creativity through AR interactivity. Our coloring book app bridges coloring and computer-generated animation by allowing children to create their own character design in an AR setting. Our music apps provide a tangible way for children to explore different music styles and instruments in order to arrange their own version of popular songs. In the gaming domain, we show how to transform passive game interaction into active real-world movement that requires coordination and cooperation between players, and how AR can be applied to city-wide gaming concepts. We employ the concept of Augmented Creativity to author interactive narratives with an interactive storytelling framework. Finally, we examine how Augmented Creativity can provide a more compelling way to understand complex concepts, such as computer programming.
"Augmented creativity: bridging the real and virtual worlds to enhance creative play." Fabio Zünd, Mattia Ryffel, Stéphane Magnenat, A. Marra, Maurizio Nitti, Mubbasir Kapadia, Gioacchino Noris, Kenny Mitchell, M. Gross, R. Sumner. SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications, 2 Nov 2015. doi:10.1145/2818427.2818460
Chih-Hsiang Yu, Wen-Wei Peng, Shys-Fan Yang-Mao, Yuan Wang, W. Chinthammit, H. Duh
In this paper, we propose a hand-gesture control framework on smart glasses. Three different camera configurations are presented for detecting the hand region, and the Moore-Neighbor tracing algorithm detects the hand contour efficiently and automatically. We not only refined the skin-color model but also improved the Chamfer matching method for robust and effective gesture recognition. A demonstration has been implemented using the hand-gesture control framework. Several gestures are pre-defined for various functions, such as selecting a virtual 3D object, rotating it, zooming in or out, and changing the display properties of the 3D object.
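Moore-Neighbor tracing, mentioned above, follows the boundary of a binary region by walking clockwise around the 8-neighbourhood of each boundary pixel, resuming each scan just after the direction it came from. Below is a minimal sketch of the algorithm, with a simple "returned to start" stopping criterion rather than Jacob's criterion, and not the authors' optimized variant.

```python
import numpy as np

def moore_trace(mask):
    """Trace the outer contour of the first connected component in a
    binary mask using Moore-Neighbor tracing (clockwise scan)."""
    H, W = mask.shape
    # Clockwise 8-neighbourhood offsets (dr, dc), starting due north.
    nbrs = [(-1, 0), (-1, 1), (0, 1), (1, 1),
            (1, 0), (1, -1), (0, -1), (-1, -1)]
    # First foreground pixel in raster order; we "entered" it from the west.
    start = next((r, c) for r in range(H) for c in range(W) if mask[r, c])
    contour = [start]
    cur, back = start, (start[0], start[1] - 1)
    while True:
        # Resume the clockwise scan just after the backtrack direction.
        i = nbrs.index((back[0] - cur[0], back[1] - cur[1]))
        for k in range(1, 9):
            dr, dc = nbrs[(i + k) % 8]
            r, c = cur[0] + dr, cur[1] + dc
            if 0 <= r < H and 0 <= c < W and mask[r, c]:
                # New backtrack: the background cell checked just before.
                back = (cur[0] + nbrs[(i + k - 1) % 8][0],
                        cur[1] + nbrs[(i + k - 1) % 8][1])
                cur = (r, c)
                break
        else:
            break  # isolated pixel: no foreground neighbours
        if cur == start:
            break
        contour.append(cur)
    return contour

# A 3x3 square has exactly its 8 border pixels on the contour.
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True
contour = moore_trace(mask)
```

In a gesture pipeline, the skin-color model would first produce the binary mask, this trace would extract the hand outline, and Chamfer matching would then compare that outline against stored gesture templates.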
"A hand gesture control framework on smart glasses." Chih-Hsiang Yu, Wen-Wei Peng, Shys-Fan Yang-Mao, Yuan Wang, W. Chinthammit, H. Duh. SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications, 2 Nov 2015. doi:10.1145/2818427.2818444