MR coral sea: mixed reality aquarium with physical MR displays
Toshikazu Ohshima, Chiharu Tanaka. DOI: 10.1145/2669047.2669051

MR Coral Sea is a mixed-reality (MR) aquarium in which a user plays with virtual fish through a Coral Display, an MR display device with physical feedback. The virtual fish choose their behavior in response to the user's hand movements, and the device provides physical feedback to the user through illumination, tactile sensation, and sound.
{"title":"MR coral sea: mixed reality aquarium with physical MR displays","authors":"Toshikazu Ohshima, Chiharu Tanaka","doi":"10.1145/2669047.2669051","DOIUrl":"https://doi.org/10.1145/2669047.2669051","url":null,"abstract":"MR Coral Sea is a mixed-reality (MR) aquarium using which a user can play with virtual fish via a Coral Display, which is an MR display device with physical feedback. In response to hand movements, the virtual fish decides its behavior. The device provides physical feedback using illumination and tactile and auditory sensation to the user.","PeriodicalId":118940,"journal":{"name":"SIGGRAPH Asia 2014 Emerging Technologies","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125193393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Touch at a distance: simple perception aid device with user's explorer action
J. Akita, T. Ono, Kiyohide Ito, M. Okamoto. DOI: 10.1145/2669047.2669058

Although we obtain much of the information about our environment through the visual modality, we also obtain rich information through non-visual modalities. In perceiving our environment, we use not only sensor information itself but also how it changes in response to how we act. For example, we obtain haptic information from the haptic receptors in our fingers; as we move a finger along the surface of an object, the haptic information changes with the finger's motion, and we perceive the overall shape of the object through this action-and-sensing process. In other words, we have a strong ability to integrate our body's actions with the resulting sensory data, which effectively improves the accuracy of our bodily senses. Based on this idea, we developed a simple perception-aid device, named FutureBody-Finger, that exploits the user's exploratory actions to perceive objects at a distance; it consists of a range sensor linked to a haptic actuator. The distance sensor measures the distance to an object (20--80 cm), which is converted to the angle of a lever attached to a servo motor (0--60 deg). The user holds the device with an index finger resting on the lever. When the object is far away, the lever leans forward and the user feels nothing; when the object is close, the lever stands upright and the user feels the object's presence. Although the device measures the distance to only a single point on an object, as the user explores the surroundings, richer distance information about nearby objects accumulates, and the user can finally perceive the shape of the whole object.
{"title":"Touch at a distance: simple perception aid device with user's explorer action","authors":"J. Akita, T. Ono, Kiyohide Ito, M. Okamoto","doi":"10.1145/2669047.2669058","DOIUrl":"https://doi.org/10.1145/2669047.2669058","url":null,"abstract":"Although we obtain a lot of information in our environment via the visual modality, we also obtain rich information via the non-visual modality. In the mechanism how we perceive our environment, we use not only the sensor information, but also \"how it changes according to how we act.\" For example, we obtain the haptic information from the haptic sensor on our finger, and when we move our finger along to the surface of the touching object, the haptic information changes according to the finger motion, and we \"perceive\" the whole shape of the object by executing the action-and-sensing process. In other words, we have a high ability to \"integrate\" the relation of our body's action and its related sensing data, so as to improve the accuracy of sensor in our body. Based on this idea, we developed a simple perception aid device with user's explorer action, to perceive the object at a distance, which has a linked range sensor and haptic actuator, which we name \"FutureBody-Finger.\" The distance sensor measures the distance to the object (20--80[cm]), and it is converted to the angle of lever attached at the servo motor (0--60[deg]). The user holds this device in his hand with attaching his index finger on the device's lever. For the long distance to the object, the lever leans to the front, and the user feels nothing. On the other hand, for the short distance to the object, the lever stands vertically, and the user feels the existence of the object. Although the device simply measures the distance to the single point on the object, as the user \"explorers\" around him, the user can obtain more rich distance information of the surrounding object, and hence, finally perceive the shape of the whole object.","PeriodicalId":118940,"journal":{"name":"SIGGRAPH Asia 2014 Emerging Technologies","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122630592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Haptylus: haptic stylus for interaction with virtual objects behind a touch screen
Shingo Nagasaka, Yuuki Uranishi, Shunsuke Yoshimoto, M. Imura, O. Oshiro. DOI: 10.1145/2669047.2669054
Tablet PCs and smartphones have rapidly become popular. People can touch objects shown on the touch-panel display of a tablet or smartphone, but they only feel the sensation of touching the display's surface. Recently, systems have been proposed that reach into the display using a retractable stylus. Beyond [Lee and Ishii 2010] is one such system. It consists of a retractable stylus, a table-top display, an infrared marker, and a camera placed in the environment. A virtual tip of the stylus is rendered when the retractable stylus is pressed against the table-top display. The user's head position is detected with the infrared marker and the camera, and the virtual objects and the stylus tip are rendered according to the head position. The system lets the user interact with virtual objects under the table. However, the stylus does not shrink or extend automatically because it has no actuator such as a motor, so the user cannot feel haptic sensation from the virtual object. To interact with a virtual object more realistically, the user needs to perceive force from it. Another limitation is that the system is stationary. ImpAct [Withana et al. 2010] is another interaction system that combines a smartphone with a retractable stylus. Its force feedback is produced simply by stopping the shrinkage of the stylus, so the system gives only rigid force feedback, without tactile sensation from the virtual objects. In addition, it does not take the user's viewpoint into account.
{"title":"Haptylus: haptic stylus for interaction with virtual objects behind a touch screen","authors":"Shingo Nagasaka, Yuuki Uranishi, Shunsuke Yoshimoto, M. Imura, O. Oshiro","doi":"10.1145/2669047.2669054","DOIUrl":"https://doi.org/10.1145/2669047.2669054","url":null,"abstract":"Tablet PCs and smartphones rapidly become popular nowadays. People can touch objects on the touch panel display of the tablet PC or smartphone, but only get sensation of touching the surface of the display. Recently, some systems capable of inserting themselves into the display by using retractable stylus have been proposed. Beyond [Lee and Ishii 2010] is one of these systems. It consists of a retractable stylus, a table-top display, an infrared marker and a camera set at an environment. A virtual tip of the stylus is rendered when the retractable stylus is pushed to the table-top display. The head position of the user is detected by the infrared marker and the camera, and the virtual objects and the tip of the stylus are rendered properly according to the head's position. The system enables the user to interact with a virtual object under the table. However, the stylus dose not shrink or extend automatically because the stylus dose not have any actuators such as a motor. So the user is unable to feel the haptic sensation from the virtual object. It is necessary for the user to perceive the force from the virtual object to interact with the object more realistically. Another limitation is the fact that the system is stationary. ImpAct [Withana et al. 2010] is another interaction system with a smartphone and a retractable stylus. The force feedback is represented by simply stopping the shrinkage of the stylus. However, the system gives only the rigid force feedback without tactile sensations. And the system does not give a user the tactile sensation from the virtual objects. In addition, the system does not consider the viewpoint of the user.","PeriodicalId":118940,"journal":{"name":"SIGGRAPH Asia 2014 Emerging Technologies","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129611126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Daily support robots that move on me
Tamami Saga, N. Munekata, T. Ono. DOI: 10.1145/2669047.2669055

Today, wearable devices that support us constantly by being worn on a daily basis are attracting attention. In contrast, although personal robots are increasingly common in daily life, "wearable robots" are not yet prevalent. We developed a wearable robot, a partner that moves autonomously on the human body. As an example of daily support, the robot has an application that corrects the wearer's sitting posture. It estimates the wearer's body state from several sensors, and if it detects bad posture or a bad habit, it points this out by moving directly to the problem region. Such a robot could be used not only to correct our posture and bad habits but especially to train children.
{"title":"Daily support robots that move on me","authors":"Tamami Saga, N. Munekata, T. Ono","doi":"10.1145/2669047.2669055","DOIUrl":"https://doi.org/10.1145/2669047.2669055","url":null,"abstract":"Today, wearable devices, that can constantly support us by being worn on a daily basis, are gathering attention. In contrast, although we can also find an increase of personal robots in daily life, \"wearable robots\" are not so prevalent. We developed a wearable robot as a partner, that moves on the human body autonomously. As daily support, the robot has an application to correct wearers' sitting posture. It estimates wearers' body state from some sensors, and if it perceives wearers' bad posture or habit, points them out by moving to a region of the problem directly. We may be able to make use of it, not only to correct our posture or bad habit, but especially, to train children.","PeriodicalId":118940,"journal":{"name":"SIGGRAPH Asia 2014 Emerging Technologies","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125394534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enforced telexistence: teleoperating using photorealistic virtual body and haptic feedback
M. Y. Saraiji, C. Fernando, Yusuke Mizushina, Yoichi Kamiyama, K. Minamizawa, S. Tachi. DOI: 10.1145/2669047.2669048
Telexistence [Tachi 2010] systems require physical limbs for remote object manipulation [Fernando et al. 2012]. Having arms and hands synchronized with the user's voluntary movements allows the user to feel the robot's body as his or her own through visual and haptic sensation. Here, we introduce a novel technique that provides virtual arms for existing telexistence systems that lack physical arms. Previous works [Mine et al. 1997; Poupyrev et al. 1998; Nedel et al. 2003] studied virtual representations of the user's hands for interaction in virtual environments. In this work, the virtual arms serve several interactions in a physical remote environment and, most importantly, give the user a sense of existence in that remote environment. The superimposed virtual arms follow the user's arm movements in real time and react to the dynamic lighting of the real environment, providing photorealistic rendering that adapts to the remote scene's illumination. The user thus experiences embodied enforcement of the remote environment. Furthermore, the virtual arms can be extended to touch and feel otherwise unreachable remote objects, and to grab a functional virtual copy of a physical device through which the device can be controlled. This method not only lets the user experience a non-existing arm in telexistence but also provides the ability to act on the remote environment in various ways.
{"title":"Enforced telexistence: teleoperating using photorealistic virtual body and haptic feedback","authors":"M. Y. Saraiji, C. Fernando, Yusuke Mizushina, Yoichi Kamiyama, K. Minamizawa, S. Tachi","doi":"10.1145/2669047.2669048","DOIUrl":"https://doi.org/10.1145/2669047.2669048","url":null,"abstract":"Telexistence [Tachi 2010] systems require physical limbs for remote object manipulation [Fernando et al. 2012]. Having arms and hands synchronized with voluntary movements allows the user to feel robot's body as his body through visual, and haptic sensation. In this method, we introduce a novel technique that provides virtual arms for existing telexistence systems that does not have physical arms. Previous works [Mine et al. 1997; Poupyrev et al. 1998; Nedel et al. 2003] involved the study of using virtual representation of user hands in virtual environments for interactions. In this work, the virtual arms serves for several interactions in a physical remote environment, and most importantly they provide the user the sense of existence in that remote environment. These superimposed virtual arms follows the user's real-time arm movements and reacts to the dynamic lighting of real environment providing photorealistic rendering adapting to remote place lighting. Thus, it allows the user to have an experience of embodied enforcement towards the remote environment. Furthermore, these virtual arms can be extended to touch and feel unreachable remote objects, and to grab a functional virtual copy of a physical instance where device control is possible. This method does not only allow the user to experience a non-existing arm in telexistence, but also gives the ability to enforce remote environment in various ways.","PeriodicalId":118940,"journal":{"name":"SIGGRAPH Asia 2014 Emerging Technologies","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116695549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One-man orchestra: conducting smartphone orchestra
Chun Kit Tsui, Chi Hei Law, Hongbo Fu. DOI: 10.1145/2669047.2669049

This work presents a new platform for performing a one-man orchestra (Figure 1). The conductor is the only human involved and uses traditional bimanual conducting gestures to interactively direct a performance by smartphones rather than by the human performers of a real-world orchestra. Each smartphone acts as a virtual performer playing a particular musical instrument, such as piano or violin. Our work not only allows ordinary people to experience music conducting but also provides a training platform on which students can practice conducting with a unique listening experience.
{"title":"One-man orchestra: conducting smartphone orchestra","authors":"Chun Kit Tsui, Chi Hei Law, Hongbo Fu","doi":"10.1145/2669047.2669049","DOIUrl":"https://doi.org/10.1145/2669047.2669049","url":null,"abstract":"This work presents a new platform for performing one-man orchestra (Figure 1). The conductor is the only human involved, who uses traditional bimanual conducting gestures to interactively direct the performance of smartphones instead of human performers in a real-world orchestra. Each smartphone acts as a virtual performer who plays a certain music instrument like piano and violin. Our work not only allows ordinary people to experience music conducting but also provides a training platform so that students can practice music conducting with a unique listening experience.","PeriodicalId":118940,"journal":{"name":"SIGGRAPH Asia 2014 Emerging Technologies","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124562889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A-blocks: recognizing and assessing child building processes during play with toy blocks
Toshiki Hosoi, Kazuki Takashima, T. Adachi, Yuichi Itoh, Y. Kitamura. DOI: 10.1145/2669047.2669061
We propose A-Blocks, a novel building-block device that enables detection and recognition of children's actions and interactions when building with blocks. Quantitative data gathered while constructing and breaking A-Blocks can be valuable for various assessment applications (e.g., play therapy, cognitive testing, and education). In our prototype system, each block embeds a wireless measurement device that includes acceleration, angular-velocity, and geomagnetic sensors to measure the block's spatial motion and posture during children's play. A standard set of blocks can be managed via Bluetooth in real time. Using the combined sensor data, the system can estimate how the blocks are stacked on each other by detecting surface collisions (Figure 1) and can recognize many fundamental play-action patterns (e.g., moving, stacking, standing, waving) with an SVM. Unlike existing block-shaped devices with physical constraints on their connections (e.g., electrical hooks, magnets), our solid, traditionally shaped block device supports flexible block play that can include more delicate motions reflecting a child's inner state (e.g., learning stage, stress level, expression of imagination). These benefits of analyzing children's block play can be extended to enable more enjoyable and interactive play, with social impacts that include more constructive play.
{"title":"A-blocks: recognizing and assessing child building processes during play with toy blocks","authors":"Toshiki Hosoi, Kazuki Takashima, T. Adachi, Yuichi Itoh, Y. Kitamura","doi":"10.1145/2669047.2669061","DOIUrl":"https://doi.org/10.1145/2669047.2669061","url":null,"abstract":"We propose A-Blocks, a novel building block device that enables detecttion and recognition of children's actions and interactions when building with blocks. Quantitative data received from constructing and breaking A-Blocks can be valuable for various assessment applications (e.g., play therapy, cognitive testing, and education). In our prototype system, each block embeds a wireless measurement device that inclludes acceleration, angular velocity, and geomagnetic sensors to measure a block's spatial motion and posture during children's play. A standard set of blocks can be managed via Bluetooth in real time. By using combined sensor data, the system can estimate how to stack the blocks on each other by detecting surface collisions (Figure 1) and recognize many fundamental play action patterns (e.g., moving, stacking standing, waving) with SVM. Unlike existing block-shaped devices with phyysical constraints on their connections (e.g., electrical hooks, magnets), our solid and traditional-shaped block device supports flexible block play that could include more delicate motions reflecting a child's inner state (e.g., learning stages, stress level, representation of an imagination). These benefits of analyzing children's block play can be extended to allow for more enjoyable and interactive play, while social impacts include more constructive play.","PeriodicalId":118940,"journal":{"name":"SIGGRAPH Asia 2014 Emerging Technologies","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131868601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dancer-in-a-box
Yuichiro Katsumoto. DOI: 10.1145/2669047.2669053

Dancer-in-a-Box is a project to create a self-propelled cardboard box without an external drive system such as wheels or a propeller. Since the dawn of time, humans have used boxes as static objects for storing and transporting everyday things easily. We therefore began to seek a new use of the box as an active object. Based on this concept, we developed a self-rolling robotic cube that can be installed in a cardboard box. This research also produced several applications that use the robotic cube for entertainment purposes.
{"title":"Dancer-in-a-box","authors":"Yuichiro Katsumoto","doi":"10.1145/2669047.2669053","DOIUrl":"https://doi.org/10.1145/2669047.2669053","url":null,"abstract":"Dancer-in-a-Box is a project to create a self-propelled cardboard box without an external drive system such as a wheel or propeller. Since the dawn of time, humans have used a box as a static object to store and transport everyday things easily. Therefore, we began to seek a new usage of the box as an active object. Based on this concept, we developed a self-rolling robotic cube which can be installed in a cardboard box. Also, this research created several applications using our robotic cube for entertainment purposes.","PeriodicalId":118940,"journal":{"name":"SIGGRAPH Asia 2014 Emerging Technologies","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124915120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SmartSail: visualizing wind force on the sail to learn and enjoy sailing easily
Koh Sueda. DOI: 10.1145/2669047.2669050

We have created SmartSail, a radio-controlled (R/C) sailboat toy for learning to sail in an easy and enjoyable way. SmartSail is an augmented-feedback user interface that visualizes the force on the sail, making the sailboat easier to control.
{"title":"SmartSail: visualizing wind force on the sail to learn and enjoy sailing easily","authors":"Koh Sueda","doi":"10.1145/2669047.2669050","DOIUrl":"https://doi.org/10.1145/2669047.2669050","url":null,"abstract":"We have created SmartSail, a radio controlled (R/C) sailboat toy to learn sailing in easy and enjoyable way. SmartSail, an augmented-feedback user interfaces to visualize force on the sail to make controlling a sailboat easier.","PeriodicalId":118940,"journal":{"name":"SIGGRAPH Asia 2014 Emerging Technologies","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127061114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Click beetle-like jumping device for entertainment
Akihiko Fukushima, Y. Kawaguchi. DOI: 10.1145/2669047.2669059

We developed an interactive jumping device that mimics the jumping behavior of the click beetle. The purpose of this study is to explore applications of biomimetics in digital entertainment. Specifically, we focused on the psychological impact of the click beetle's jump. We expect that a click-beetle-inspired device can become a biomimetic application that affects people's emotions. In addition, the device is designed to be small and light, so it is safe enough for people to touch. This touchable interaction can produce a rich entertainment experience.
{"title":"Click beetle-like jumping device for entertainment","authors":"Akihiko Fukushima, Y. Kawaguchi","doi":"10.1145/2669047.2669059","DOIUrl":"https://doi.org/10.1145/2669047.2669059","url":null,"abstract":"We developed a interactive jumping device mimicking the jumping behavior of click beetle. The purpose of this study is exploring the application of biomimetics to digital entertainment. Specifically, We focused on the psychological impact caused by jumping of click beetle. It is assumed that click beetle-inspired device can create the application of biomimetics to affect the emotion of the people. In addition, the device designed small and light is so secure for people as to can be touched. Consequently, the touchable interaction can produce the dense experience for entertainment.","PeriodicalId":118940,"journal":{"name":"SIGGRAPH Asia 2014 Emerging Technologies","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125663250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}