Water phase transitions are fundamental yet complex natural phenomena. Previous research has typically studied and simulated each process involved in water phase transitions separately, because the overall process is highly complex. In this paper, we propose a novel method to simulate the processes of water phase transitions in a unified way. We first establish PBMR (Position Based Material Representation), built on PBD (Position Based Dynamics), to describe all three states of water. We then design a unified computational process to model water phase transitions, in which the heat transfer and mass transfer mechanisms are the main concerns. With our method, water phase transitions can be simulated within a single framework.
{"title":"A unified simulation framework for water phase transition based on particles","authors":"Chenyu Bian, Shuangjiu Xiao, Zhi Li","doi":"10.1145/3284398.3284419","DOIUrl":"https://doi.org/10.1145/3284398.3284419","url":null,"abstract":"Water phase transitions are fundamental and complex phenomena in nature. Previous researches usually studied every process during water phase transitions individually and simulated it separately. This is because the phase transition process of water is very complex. In this paper, we proposed a novel method to simulate the processes of water phase transitions uniformly. We firstly established PBMR (Position Based Material Representation) which is based on PBD (Position Based Dynamics) to describe all three different kinds of water material. And then, we designed a unified computational process to modeled water phase transitions. In our unified computational process, heat transfer mechanism and mass transfer mechanism were main concerns in our consideration. With our method, it is capable to simulate water phase transition uniformly.","PeriodicalId":340366,"journal":{"name":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129100828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Training and education for enhancing evacuee safety are essential to reduce the deaths, injuries, and damage caused by disasters such as fires and earthquakes. However, traditional training approaches, e.g. evacuation drills, hardly simulate real-world emergencies, which limits their realism and interactivity. In addition, traditional approaches may not support investigating participants' behavior during evacuations or giving feedback after training. As a novel and effective alternative that overcomes these limitations, this paper designs and implements a VR-based training prototype system for enhancing earthquake evacuation safety. Key modules including earthquake scenario simulation, damage representation, interaction, player investigation, and feedback are developed. In the immersive VR environment, players can be provided with learning outcomes as well as behavior feedback, both crucial goals of safety training. Based on the results of our evaluation, the prototype proves promising for enhancing earthquake evacuee safety and shows positive pedagogical value.
{"title":"Development of a VR prototype for enhancing earthquake evacuee safety","authors":"Hui Liang, Fei Liang, Fenglong Wu, Changhai Wang, Jian Chang","doi":"10.1145/3284398.3284417","DOIUrl":"https://doi.org/10.1145/3284398.3284417","url":null,"abstract":"Training and education for enhancing evacuee safety is essential to reduce deaths, injuries and damages from disasters, such as fire and earthquake. However, traditional training approaches, e.g. evacuation drills, hardly simulate the real world emergency, which lead to the limitation of reality and poor interaction. In addition, traditional approaches may not provide investigation of participants' behavior during evacuations and give feedback after training. As a novel and effective alternative to overcome these limitations, in this paper, a VR-based training prototype system is designed and implemented for enhance earthquake evacuation safety. Key modules including earthquake scenario simulation, damage representation, interaction, player investigation and feedback are developed. In the immersive VR environment, players can be provided with learning outcomes as well as behavior feedback as crucial goals for safety training. Based on the result of the evaluation, this prototype has proven to be promising for enhancing earthquake evacuee safety and shows positive pedagogical functions.","PeriodicalId":340366,"journal":{"name":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130023565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper summarizes the findings of a study on the patterns between the levels of pressure exerted on a gamepad's buttons and the way players feel when playing. We designed an experiment to trigger different emotions (boredom, frustration, fun) in players of a 2D space shooter and analyzed the relationship between pressure and experience. The results show clear trends and a close correlation between pressure and specific player characteristics: older players tended to press the button harder, and more experienced players tended to press it more softly. We also found a strong correlation between pressure and aspects such as difficulty, fun, arousal, and dominance, with pressure/fun (76.92%) and pressure/dominance (78.57%) being the most relevant. Finally, frustration, boredom, and valence yielded less clear results, though trends emerged: the more frustrated players were, the harder they pressed the button; boredom was inversely related to pressure; and the results for valence were 61.54% positive, without a solid final conclusion for this parameter. We propose a classification of parameters to carry these results into our next step and outline how an effective estimation method could be designed in the future.
{"title":"Analyzing the relationship between pressure sensitivity and player experience","authors":"Henry Fernández, Koji Mikami, K. Kondo","doi":"10.1145/3284398.3284421","DOIUrl":"https://doi.org/10.1145/3284398.3284421","url":null,"abstract":"This paper summarizes the findings of a study about patterns between the levels of pressure exerted on a gamepad's buttons and the way that players feel when playing. We designed an experiment to trigger different emotions (boredom, frustration, fun) from players when playing a 2D space shooter and analyzed the relationship between pressure and experience. Results show clear trends and a close correlation between pressure and specific players' aspects. Older players tended to press the button harder and players with more experience tended to press it softer. We also found out that there is a strong correlation between pressure and aspects such as difficulty, fun, arousal and dominance, being the correlation: pressure/fun (76.92%) and pressure/dominance (78.57%) the most relevant ones. Finally, frustration, boredom and valence had unclear results, however, trends showed the following: the more frustration, the harder players pressed the button, boredom has an inversely proportional relation with pressure and the results for valence were 61.54% positive, without having a solid final conclusion about this parameter. We propose a parameters classification to carry on with this result in our next step and show how could we design an effective estimation method in the future.","PeriodicalId":340366,"journal":{"name":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115443110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cinematography affects how the audience perceives a movie. The same story can be interpreted differently depending on the camera movements used to present it, which shows the importance of cinematography in filmmaking. Filmmaking is typically costly, and beginners and amateurs rarely have the opportunity to experiment on a film set. In this work, we design and construct a virtual environment for film shooting that allows a user to play multiple roles on a virtual film set, emulating the filmmaking process. Our system provides camera shooting assistance and tools for on-set directing and real-time editing, aiming to help novices learn cinematographic concepts, track the progress of filmmaking, and create a personalized movie. To verify that our system is a user-friendly and effective tool for experiencing filmmaking, we conducted an experiment to observe the behavior of, and obtain feedback from, participants with various cinematographic backgrounds.
{"title":"Design and evaluation of multiple role-playing in a virtual film set","authors":"I-Sheng Lin, Tsai-Yen Li, Quentin Galvane, M. Christie","doi":"10.1145/3284398.3284424","DOIUrl":"https://doi.org/10.1145/3284398.3284424","url":null,"abstract":"Cinematography affects how the audience perceives a movie. A same story plot can be interpreted differently through the presentation of different camera movements, which show the importance of cinematography in filmmaking. Typically, filmmaking is costly, and beginners and amateurs rarely have the opportunity to play and do an experiment on a film set. In this work, we aim to design and construct a virtual environment for film shooting, allowing a user to play multiple roles in a virtual film set and emulating the process of the filmmaking. Our system provides camera shooting assistants, tools for field directing and real-time editing, aiming to help novices learn cinematographic concepts, track the progress of filmmaking, and create a personalized movie. In order to verify that our system is a user-friendly and effective tool for experiencing filmmaking, we have conducted an experiment to observe the behaviors and obtain feedback from participants with various cinematographic backgrounds.","PeriodicalId":340366,"journal":{"name":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122790626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designers of virtual environments have been trying for decades to provide players with more enjoyable, comfortable, and informative user experiences, yet they still cannot ensure that players follow preset instructions, or even implicit suggestions, faithfully and naturally. This is because the designer is invisible at runtime, while players differ from one another and improvise as they play. We believe the camera is the main messenger through which the designer and the player communicate, and we intend to build a bridge between them. By binding the designer's aesthetic ideas to the parameters of the camera's movement, we enable the player to roam the virtual scene under the designer's guidance. We also propose a navigation guiding language (NGL) to assist the binding and guiding process. A user study evaluates the performance of our method; experiments and questionnaires show that it can offer the player a more attentive and pleasing experience through implicit guidance.
{"title":"CamBridge: a bridge of camera aesthetic between virtual environment designers and players","authors":"Chanchan Xu, Guangzheng Fei, Honglei Han","doi":"10.1145/3284398.3284423","DOIUrl":"https://doi.org/10.1145/3284398.3284423","url":null,"abstract":"The designer of the virtual environment have been trying for decades to provide the player with more enjoyable, comfortable and also informative user experiences, and yet still fail to ensure that the player follow the preset instructions and even implicit suggestions faithfully and naturally, due to the designer's invisibility during runtime, and the player's individual diversity and individual impromptu in manipulations. We believe that the camera is the mainly messenger for the designer and the player to communicate, and intend to build a bridge between them. By binding the designer's aesthetic ideas to the parameters of the camera's movement, we enable the player to roam in the virtual scene with the guidance from the designer. We also propose a navigation guiding language (NGL) to assist the binding and the guiding process. A user study is made to evaluate the performance of our method. Experiments and questionnaires have shown that our method can offer a more attentive and pleasing experience to the player with implicit guidance.","PeriodicalId":340366,"journal":{"name":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"280 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131680365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human parametric models provide useful constraints for human shape estimation, producing more accurate results. However, state-of-the-art models are computationally expensive, which limits their use in interactive graphics applications. We present PROME (PROjected MEasures), a novel human parametric model with high expressive power and low computational complexity. Projected measures are sets of 2D contour polylines that capture key measures defined in anthropometry. The PROME model relates 3D shape and pose parameters to these 2D projected measures. We train it in two parts: the shape model formulates deformations of projected measures caused by shape variation, and the pose model formulates deformations caused by pose variation. Building on the PROME model, we further propose a fast shape estimation method that recovers the 3D shape parameters of a subject from a single image in near real time. The method formulates an optimization problem and solves it with a gradient-based strategy. Experimental results show that the PROME model represents human bodies of different shapes and poses as well as existing 3D human parametric models such as SCAPE [Anguelov et al. 2005] and TenBo [Chen et al. 2013], yet at much lower computational cost. Our shape estimation method processes an image in about one second, orders of magnitude faster than state-of-the-art methods, with results very close to the ground truth. The proposed method can be widely used in interactive applications such as virtual try-on and virtual reality collaboration.
{"title":"PROME","authors":"Nianchen Deng, Xubo Yang, Yanqing Zhou","doi":"10.1145/3284398.3284406","DOIUrl":"https://doi.org/10.1145/3284398.3284406","url":null,"abstract":"Human parametric models can provide useful constraints for human shape estimation to produce more accurate results. However, the state-of-art models are computational expensive which limit their wide use in interactive graphics applications. We present PROME (PROjected MEasures) - a novel human parametric model which has high expressive power and low computational complexity. Projected measures are sets of 2D contour poly-lines that capture key measure features defined in anthropometry. The PROME model builds the relationship between 3D shape and pose parameters and 2D projected measures. We train the PROME model in two parts: the shape model formulates deformations of projected measures caused by shape variation, and the pose model formulates deformations of projected measures caused by pose variation. Based on the PROME model we further propose a fast shape estimation method which estimates the 3D shape parameters of a subject from a single image in nearly real-time. The method builds an optimize problem and solves it using gradient optimizing strategy. Experiment results show that the PROME model has well capability in representing human body in different shape and pose comparing to existing 3D human parametric models, such as SCAPE[Anguelov et al. 2005] and TenBo[Chen et al. 2013], yet keeps much lower computational complexity. Our shape estimation method can process an image in about one second, orders of magnitude faster than state-of-art methods, and the estimating result is very close to the ground truth. The proposed method can be widely used in interactive applications such as virtual try-on and virtual reality collaboration.","PeriodicalId":340366,"journal":{"name":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"46 50","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113933970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Indoor localization is an important problem with a wide range of applications such as indoor navigation, robot mapping, and especially augmented reality (AR). One of the most important tasks in AR is estimating a target object's position in the real environment. Existing AR systems mostly rely on specialized markers for localization; some track real 3D objects in the environment but require the positions of index points to be known in advance. These methods are inefficient and limit the applicability of AR systems, so solving the indoor localization problem matters greatly for the development of AR technology. Advances in computer vision (CV) and the ubiquity of camera-equipped intelligent devices provide the foundation for accurate localization services. However, pure CV-based solutions usually require hundreds of photos and pre-calibration to construct a densely sampled 3D model, a labor-intensive overhead for practical deployment, and their computational cost is difficult to sustain on mobile devices. In this paper, we present iStart, a lightweight, easily deployed, image-based indoor localization system that runs on smartphones and VR/AR devices such as the HTC Vive and Google Glass. With core techniques rooted in a data hierarchy scheme over WiFi fingerprints and photos, iStart localizes a user from a single photo of the surroundings with high accuracy and low delay. Extensive experiments in various environments show that 90th-percentile location deviations are below 1 m, and 60th-percentile deviations are below 0.5 m.
{"title":"Fusion of wifi and vision based on smart devices for indoor localization","authors":"Jing Guo, Shaobo Zhang, Wanqing Zhao, Jinye Peng","doi":"10.1145/3284398.3284401","DOIUrl":"https://doi.org/10.1145/3284398.3284401","url":null,"abstract":"Indoor localization is an important problem with a wide range of applications such as indoor navigation, robot mapping, especially augmented reality(AR). One of most important tasks in AR technology is to estimate the target objects' position information in real environment. The existed AR systems mostly utilize specialized marker to locate, some AR systems track real 3D object in real environment but need to get the the position information of index points in environment in advance. The above methods are not efficiency and limit the application of AR system, so that solving indoor localization problem has significant meaning for the development of AR technology. The development of computer vision (CV) techniques and the ubiquity of intelligent devices with cameras provides the foundation for offering accurate localization services. However, pure CV-based solutions usually involve hundreds of photos and pre-calibration to construct an densely sampled 3D model, which is a labor-intensive overhead for practical deployment. And a large amount of computation cost is difficult to satisfy the requirement for efficiency in mobile device. In this paper, we present iStart, a lightweight, easy deployed, image-based indoor localization system, which can be run on smart phone and VR/AR devices like HTC Vive, Google Glasses and so on. With core techniques rooted in data hierarchy scheme of WiFi fingerprints and photos, iStart also acquires user localization with a single photo of surroundings with high accuracy and short delay. Extensive experiments in various environments show that 90 percentile location deviations are less than 1 m, and 60 percentile location deviations are less than 0.5 m.","PeriodicalId":340366,"journal":{"name":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125412110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VR-based simulation can significantly improve the user experience by offering vivid, near-life visual scenes, helping users safely learn to handle dangerous situations such as fires. In this paper, we design a fire evacuation simulation system and propose a hybrid crowd evacuation modeling and simulation approach: a layer-based model that applies local and global techniques in different layers. In essence, the model integrates an agent-based model with an improved dynamic network flow model, allowing it to account for both individual diversity and crowd movement tendencies when simulating evacuations. A video-driven emergency response mechanism is then designed on top of the model: once a fire is detected in video, the system first simulates the accident according to the fire level reported by the monitoring module and then starts an evacuation routine or adjusts evacuation routes. Users can experience the simulation in a virtual environment. Finally, evaluations test the soundness of our model, and the results show that it can efficiently simulate crowd movement and agent behavior in dynamic environments.
{"title":"A VR-based, hybrid modeling approach to fire evacuation simulation","authors":"Tao Gu, Changbo Wang, Gaoqi He","doi":"10.1145/3284398.3284409","DOIUrl":"https://doi.org/10.1145/3284398.3284409","url":null,"abstract":"VR-based simulation could significantly improve the user experience by offering users vivid and near-life visual scenes, hence helping users better handle dangerous situations safely such as fire accidents. In this paper, we design a fire evacuation simulation system and propose a hybrid crowd evacuation modeling and simulation approach, which is a layer-based model adopting both local and global techniques partially into different layers. In essence, this model integrates an agent-based model with an improved dynamical network flow model, which is capable of taking into account issues both from individual diversity and from crowd movement tendency to simulate crowd evacuation. An emergency response mechanism driven by videos is then designed according to the model. Once fire accidents are detected in videos, the system will first simulate accidents according to the fire level provided by the monitoring module and then start an evacuation routine or adjust evacuation routes. The simulation system can be experienced by users in a virtual environment. Finally, evaluations have been conducted to test the rationality of our model and results show that the proposed model can simulate the crowd movement and agent behavior in dynamic environments efficiently.","PeriodicalId":340366,"journal":{"name":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116848510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Avatar facial expression in virtual social spaces is one of the key technologies for conveying people's emotions and facilitating social interaction in virtual social systems. To address the lack of feasible solutions for synchronized facial expressions in current commercial virtual social systems, this paper presents a virtual social system focused on real-time avatar facial expressions. First, cascaded pose regression is adopted to train a dynamic expression model that infers expression coefficients from 2D video frames; the facial landmarks used in the regression are extracted by the supervised descent method instead of 2D cascaded pose regression, achieving better robustness and fault tolerance in facial tracking and animation. Second, we propose a multi-scale adaptive expression coding technique that synchronizes expression and voice data and strikes a balance between real-time performance and expression richness under varied, complex network conditions. Experimental results show that the proposed facial tracking and animation system is practical and feasible, and can produce highly realistic emotional cues in a virtual social system.
{"title":"Facial tracking and animation for digital social system","authors":"Dongjin Huang, Yuanqiu Yao, Wen Tang, Youdong Ding","doi":"10.1145/3284398.3284413","DOIUrl":"https://doi.org/10.1145/3284398.3284413","url":null,"abstract":"Avatar expression appearing in the virtual social space is one of the key technologies to convey people's emotions and facilitate the social interactions effectively via the virtual social system. Aiming at lack of feasible solutions for synchronized facial expressions in current commercial virtual social systems, this paper presented a virtual social system with the focus on real-time avatar facial expressions. Firstly, cascaded pose regression was adopted to train a dynamic expression model to infer the expression coefficients from 2D video frames, and the facial landmarks in regression were extracted by supervised descent method instead of 2D cascaded pose regression to achieve better robustness and fault tolerance in facial tracking and animation. Secondly, we proposed a multi-scale adaptive expression coding technology for expression-voice data synchronization and striking balance between real-time and richness of facial expressions in varied complex network situations. The experimental results show that the proposed facial tracking and animation system is practical and feasible, and could produce a high degree of realistic emotional cues in virtual social system.","PeriodicalId":340366,"journal":{"name":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131069913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Indoor home scene coloring is a hot topic in home design, helping users make coloring decisions. Image-based home scene coloring is preferable for e-commerce customers, since it only requires users to describe coloring expectations or manipulate colors through images, which is intuitive and inexpensive. In contrast, coloring based on 3D scenes is expensive because of the cost and time of obtaining 3D models and constructing 3D scenes. To realize image-based home scene coloring, our framework extracts the coloring of individual pieces of furniture together with the relationships among them, letting us formulate the color structure of the home scene as the basis for color migration. The work is challenging because identifying the coloring of furniture and furniture parts, as well as the coloring relationships among furniture, is not straightforward. This paper presents a new color migration framework for home scenes. We first extract local coloring from a home scene image to form a regional color table. We then generate a matching color table from a template image based on its color structure. Finally, we transform the target image's coloring according to the matching color table while maintaining the boundary transitions between image regions. We also introduce an interactive operation to guide the transformation. Experiments show that our framework produces good results that meet human visual expectations.
{"title":"Image recoloring for home scene","authors":"Xianxuan Lin, Xun Wang, Frederick W. B. Li, Bailin Yang, Kaili Zhang, T. Wei","doi":"10.1145/3284398.3284404","DOIUrl":"https://doi.org/10.1145/3284398.3284404","url":null,"abstract":"Indoor home scene coloring technology is a hot topic for home design, helping users make home coloring decisions. Image based home scene coloring is preferable for e-commerce customers since it only requires users to describe coloring expectations or manipulate colors through images, which is intuitive and inexpensive. In contrast, if home scene coloring is performed based on 3D scenes, the process becomes expensive due to the high cost and time in obtaining 3D models and constructing 3D scenes. To realize image based home scene coloring, our framework can extract the coloring of individual furniture together with their relationship. This allows us to formulate the color structure of the home scene, serving as the basis for color migration. Our work is challenging since it is not intuitive to identify the coloring of furniture and their parts as well as the coloring relationship among furniture. This paper presents a new color migration framework for home scenes. We first extract local coloring from a home scene image forming a regional color table. We then generate a matching color table from a template image based on its color structure. Finally we transform the target image coloring based on the matching color table and well maintain the boundary transitions among image regions. We also introduce an interactive operation to guide such transformation. Experiments show our framework can produce good results meeting human visual expectations.","PeriodicalId":340366,"journal":{"name":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116516870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}