When we see the wriggling movement and tentacle shapes of a sea anemone under the sea, we sense the presence of primitive life. The goal of this research is to realize kinetic or interactive artwork expressing the waving tentacles of sea anemones. Soft actuators that bend in multiple directions have been developed, but each has a complex structure or is expensive, and expressing waving tentacles requires a large number of actuators. We therefore developed a low-cost actuator with a simple structure. Previously, we introduced three motion patterns for controlling an SMA (shape-memory alloy) actuator that can bend in three directions, together with an experimental system of 9 actuators [Nakayasu 2014]. In this paper, we introduce an experimental system with 64 actuators that reacts to hand movement via an optical flow algorithm.
"Waving tentacles 8×8: controlling a SMA actuator by optical flow" — Akira Nakayasu. SIGGRAPH Asia 2015 Posters (November 2, 2015). DOI: https://doi.org/10.1145/2820926.2820931
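The abstract above describes 64 actuators driven by optical flow from a hand's movement. As a minimal sketch of how a dense flow field might be pooled into per-actuator commands for an 8×8 array (this is an assumption for illustration, not the author's implementation; the `flow_to_actuators` helper and grid averaging are hypothetical):

```python
import numpy as np

def flow_to_actuators(flow, grid=8):
    """Average a dense optical-flow field over a grid x grid layout.

    flow: (H, W, 2) array of per-pixel (dx, dy) motion vectors, e.g. from
    a dense optical-flow algorithm. Returns a (grid*grid, 2) array of mean
    motion vectors, one per actuator.
    """
    h, w, _ = flow.shape
    cells = np.zeros((grid * grid, 2))
    for row in range(grid):
        for col in range(grid):
            patch = flow[row * h // grid:(row + 1) * h // grid,
                         col * w // grid:(col + 1) * w // grid]
            cells[row * grid + col] = patch.reshape(-1, 2).mean(axis=0)
    return cells

# Example: uniform rightward motion yields identical commands for all 64 cells.
flow = np.zeros((240, 320, 2))
flow[..., 0] = 1.0  # every pixel moves one pixel to the right
commands = flow_to_actuators(flow)
print(commands.shape)  # (64, 2)
print(commands[0])     # [1. 0.]
```

Each mean vector could then be mapped to a bend direction and magnitude for the corresponding SMA actuator.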
Maps are two-dimensional representations of spatial information, generally designed for a single specific purpose. Our work focuses on representing data relevant to natural-hazard scenarios. Although visualization choices can be made on maps, their fundamental representation is recognizably the same as it was hundreds of years ago. Video representations improve on this by incorporating temporal information about disasters in a linear manner, but video still has restrictions: it requires predetermined decisions about viewpoint and about what information is presented at each point in the narrative. The current work aims to combine the strengths of these methods and expand on their impact. We create a highly customizable visualization tool that combines the Unity 3D game engine with scientific layers of information about natural hazards, and we discuss the development of proof-of-concept work in the bushfire hazard domain.
"Using unity for immersive natural hazards visualization" — F. Woolard, M. Bolger. SIGGRAPH Asia 2015 Posters (November 2, 2015). DOI: https://doi.org/10.1145/2820926.2820970
Wakana Asahina, N. Okada, Naoya Iwamoto, Taro Masuda, Tsukasa Fukusato, S. Morishima
In recent years, many 3D character dance animations have been created by amateur users with 3DCG animation editing tools (e.g. MikuMikuDance), but most of them are animated manually. An automatic facial animation system for dancing characters would therefore be useful for creating dance movies and visualizing impressions effectively. We address the challenging problem of estimating a dancing character's emotions (which we call "dance emotion"). Among previous work considering music features, DiPaola et al. [2006] proposed a music-driven, emotionally expressive face system. To detect the mood of the input music, they used a hierarchical framework (the Thayer model) and generated facial animation matching the music's emotion. However, their model cannot express subtleties between two emotions, because the input music is divided sharply into a few moods using a Gaussian mixture model. In addition, they determine more detailed moods with psychological rules that use score information, so they require MIDI data. In this paper, we propose a "dance emotion model" that visualizes a dancing character's emotion as facial expression. The model is built from frame-by-frame coordinates in an emotional space, obtained through a perceptual experiment on a music and dance-motion database, without MIDI data. Moreover, by considering displacement in the emotional space, we can express not only a single emotion but also subtleties between emotions. As a result, our system achieved higher accuracy than the previous work. A facial expression result can be created immediately by inputting audio data and synchronized motion; its utility is shown through the comparison with previous work in Figure 1.
"Automatic facial animation generation system of dancing characters considering emotion in dance and music" — Wakana Asahina, N. Okada, Naoya Iwamoto, Taro Masuda, Tsukasa Fukusato, S. Morishima. SIGGRAPH Asia 2015 Posters (November 2, 2015). DOI: https://doi.org/10.1145/2820926.2820935
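The abstract above locates emotions as coordinates in a 2D emotional space (valence × arousal, as in the Thayer model) and keeps in-between states rather than snapping to hard categories. A minimal sketch of that idea, under stated assumptions (the quadrant labels and the `dance_emotion` helper are illustrative, not taken from the paper):

```python
import math

# Thayer-style layout: valence (pleasant vs. unpleasant) x arousal
# (calm vs. energetic). These quadrant labels are illustrative only.
QUADRANTS = {(True, True): "exuberant", (False, True): "anxious",
             (False, False): "depressed", (True, False): "content"}

def dance_emotion(valence, arousal):
    """Return a quadrant label and an intensity in [0, 1].

    Distance from the origin serves as a simple intensity, so points near
    the origin read as weak, ambiguous blends instead of hard categories.
    """
    label = QUADRANTS[(valence >= 0, arousal >= 0)]
    intensity = min(1.0, math.hypot(valence, arousal))
    return label, intensity

print(dance_emotion(1.0, 2.0))   # ('exuberant', 1.0)
print(dance_emotion(-0.1, 0.1))  # weak 'anxious' blend near the origin
```

Tracking the frame-by-frame displacement of such a point is what lets the facial expression drift smoothly between emotions instead of switching abruptly.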
People interact with large corpora of documents every day, whether Googling the internet, reading a book, or checking their email. Much of this content has a temporal component: a website was published on a particular date, your email arrived yesterday, and Chapter 2 comes after Chapter 1. As we read this content, we create an internal map that correlates what we read with its place in time and with other parts we have read. The quality of this map is critical to understanding the structure of any large corpus and to locating salient information.
"Timeline visualization of semantic content" — Douglas J. Mason. SIGGRAPH Asia 2015 Posters (November 2, 2015). DOI: https://doi.org/10.1145/2820926.2820974
Monte Carlo path tracing has become increasingly popular in movie production. It is a general and unbiased rendering technique that can easily handle diffuse and glossy surfaces. To trace light paths, most existing path tracers rely on surface BRDFs for directional sampling. This works well for glossy appearance but tends to be ineffective for diffuse surfaces, because in such cases the rendering integral is driven mostly by the incoming radiance distribution, not the BRDF. Therefore, with the same number of samples, it is preferable to sample the incoming radiance distribution for diffuse scenes. Vorba et al. [2014] addressed this sampling problem by using photons to estimate incoming radiance distributions, which can then be represented compactly with Gaussian mixture functions.
"Guided path tracing using clustered virtual point lights" — Binh-Son Hua, Kok-Lim Low. SIGGRAPH Asia 2015 Posters (November 2, 2015). DOI: https://doi.org/10.1145/2820926.2820955
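The core idea above — drawing sample directions in proportion to the estimated incoming radiance rather than the BRDF — can be sketched with a simple inverse-CDF draw over discretized directional bins. This is an illustrative assumption (binned histograms stand in for the Gaussian mixtures; `sample_direction_bin` is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_direction_bin(radiance_bins, n=10000):
    """Draw directional bins proportionally to estimated incoming radiance.

    radiance_bins: nonnegative per-bin radiance estimates (e.g. built from
    nearby photons). Returns sampled bin indices and the per-bin pdf, which
    the unbiased Monte Carlo estimator divides by (contribution / pdf).
    """
    pdf = radiance_bins / radiance_bins.sum()
    cdf = np.cumsum(pdf)
    samples = np.searchsorted(cdf, rng.random(n))
    return samples, pdf

# A bright bin (index 2) should be chosen far more often than dim ones.
bins = np.array([0.1, 0.1, 5.0, 0.1])
samples, pdf = sample_direction_bin(bins)
print(pdf[2])                 # ≈ 0.943
print(np.mean(samples == 2))  # close to pdf[2]
```

For a diffuse surface this concentrates samples where light actually arrives, which is exactly where BRDF-proportional sampling wastes effort.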
Capturing photos with mobile devices is a very common part of daily life. With so many photos captured by the members of a social network, Yin et al. [2014] proposed utilizing the social context available on mobile devices, e.g., geo-tags from the GPS sensor, to help a user capture better photos. Using a photo's geo-tags and an analysis of its image content to construct a 3D model of a scene has been developed since the Photo Tourism project [Snavely et al. 2006]. The scene reconstruction scheme of Snavely et al. [2008] can visualize photos collected from social members in a 3D environment. In addition, Szeliski et al. [2013] indicated that navigating images from social media sites in a 3D geo-located context is a natural way to browse them. Therefore, to provide multimedia visualization with a natural and immersive 3D user experience in a tech-art gallery, we propose a 3D social media browsing system that allows users to employ motion-sensing devices to interactively navigate social photos in a virtual 3D scene constructed from a real physical space.
"An interactive 3D social media browsing system in a tech-art gallery" — Shih-Wei Sun, Jheng-Wei Peng, Wei-Chih Lin, Ying-Ting Chen, Wen-Huang Cheng, K. Hua. SIGGRAPH Asia 2015 Posters (November 2, 2015). DOI: https://doi.org/10.1145/2820926.2820953
Physical simulation has developed rapidly, and recent work shows natural-looking simulated motion driven by motion capture data, with robust adaptation to external perturbations achieved through manually designed balance controllers. However, developing a general controller that can simulate unpredictable or complex motion is still challenging.
"Biped control using multi-segment foot model based on the human feet" — Seokjae Lee, Jehee Lee. SIGGRAPH Asia 2015 Posters (November 2, 2015). DOI: https://doi.org/10.1145/2820926.2820943
Chen-Chi Hu, Tze-Hsiang Wei, Yu-Sheng Chen, Yi-Chieh Wu, Ming-Te Chi
Modeling is a key application in 3D fabrication. Although numerous powerful 3D-modeling software packages exist, few people can freely build their desired model, owing to insufficient background knowledge of geometry and the difficulty of manipulating complex modeling interfaces; the learning curve is steep for most people. For this study, we chose the cubic model, a model assembled from small cubes, to flatten the learning curve of modeling. We propose an intuitive modeling system designed for elementary school students: users sketch a rough 2D contour, and the system enables them to generate the thickness and shape of a 3D cubic model.
"Intuitive 3D cubic style modeling system" — Chen-Chi Hu, Tze-Hsiang Wei, Yu-Sheng Chen, Yi-Chieh Wu, Ming-Te Chi. SIGGRAPH Asia 2015 Posters (November 2, 2015). DOI: https://doi.org/10.1145/2820926.2820956
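The simplest way to picture "sketch a 2D contour, then generate thickness" is extruding a filled contour mask into a stack of unit cubes. A minimal sketch under that assumption (the `extrude_contour` helper is hypothetical, not the paper's system):

```python
def extrude_contour(mask, thickness):
    """Turn a filled 2D contour (rows of 0/1 cells) into cube coordinates.

    Each filled cell becomes a column of `thickness` stacked unit cubes,
    mimicking a rough 2D sketch being given depth as a cubic model.
    """
    cubes = []
    for y, row in enumerate(mask):
        for x, filled in enumerate(row):
            if filled:
                cubes.extend((x, y, z) for z in range(thickness))
    return cubes

# A 2x2 filled square extruded 3 cubes deep yields 12 cubes.
mask = [[1, 1],
        [1, 1]]
cubes = extrude_contour(mask, 3)
print(len(cubes))  # 12
```

The resulting cube list maps directly onto the small physical or rendered cubes a young user assembles, which is what keeps the mental model simple.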
Akira Takeuchi, Hiromitsu Fujii, A. Yamashita, Masayuki Tanaka, R. Kataoka, Y. Miyoshi, M. Okutomi, H. Asama
Three-dimensional analysis of the aurora is important because the aurora's shape depends on the solar wind, which affects electrical equipment such as satellites. Our research group set up two fish-eye cameras in Alaska, U.S.A., and reconstructed the aurora's shape from a pair of stereo images [Fujii et al. 2014]. However, feature-based matching cannot accurately detect sufficiently dense feature points, since they are hard to find in aurora images that are mostly low-contrast. In this paper, we both increase the number of detected feature points and improve accuracy. Applying this method, the 3D shape of the aurora can be visualized from an arbitrary viewpoint at an arbitrary time.
"3D visualization of aurora from optional viewpoint at optional time" — Akira Takeuchi, Hiromitsu Fujii, A. Yamashita, Masayuki Tanaka, R. Kataoka, Y. Miyoshi, M. Okutomi, H. Asama. SIGGRAPH Asia 2015 Posters (November 2, 2015). DOI: https://doi.org/10.1145/2820926.2820967
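Once a feature is matched in both fish-eye views, its 3D position follows from triangulating the two viewing rays. A minimal midpoint-triangulation sketch (an illustrative assumption; `triangulate_midpoint` is not the authors' pipeline):

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint triangulation of two viewing rays.

    o1, o2: camera centers; d1, d2: ray directions toward the matched
    feature. Finds the closest points on the two rays in the least-squares
    sense and returns their midpoint as the 3D estimate.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve [d1, -d2] [t1, t2]^T = o2 - o1 in the least-squares sense.
    A = np.stack([d1, -d2], axis=1)
    t1, t2 = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

# Two cameras 1 unit apart both looking at a point 100 units overhead.
p = np.array([0.0, 0.0, 100.0])
o1, o2 = np.array([-0.5, 0.0, 0.0]), np.array([0.5, 0.0, 0.0])
x = triangulate_midpoint(o1, p - o1, o2, p - o2)
print(np.round(x, 6))  # [  0.   0. 100.]
```

With a short stereo baseline relative to the aurora's altitude, the ray directions are nearly parallel, which is why dense, accurate feature matches matter so much for a stable reconstruction.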
Chinese ink portraiture requires sophisticated skills, and training in Chinese ink painting takes a long time. In this research, a Chinese portrait generation system is proposed that allows the user to convert face images into Chinese ink portraits. We search the image using an Active Shape Model (ASM) and extract facial features from the input face image; the result is a feature-preserving ink-diffused image. To produce a feature-preserving Chinese ink portrait, we use artistic ink brush strokes to enhance the face contour constructed from the facial features. The generated portraits can be used to replace faces in an ink painting.
"Generating face ink portrait from face photograph" — P. Chiang, Kuo-Hao Chang, Tung-Ju Hsieh. SIGGRAPH Asia 2015 Posters (November 2, 2015). DOI: https://doi.org/10.1145/2820926.2820933
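The "ink-diffused image" step above can be crudely imitated by taking gradient-magnitude edges as strokes and softening them with repeated box blurs that stand in for ink spreading on absorbent paper. This is a rough illustrative sketch only (the `ink_diffuse` helper and its parameters are assumptions, not the paper's method):

```python
import numpy as np

def ink_diffuse(gray, passes=2):
    """Rough ink-wash effect: gradient-magnitude 'strokes' softened by
    repeated 3x3 box blurs that imitate ink diffusing into paper.

    gray: 2D float array in [0, 1]. Returns an ink-density array of the
    same shape (0 = blank paper, 1 = solid ink).
    """
    gy, gx = np.gradient(gray)
    ink = np.clip(np.hypot(gx, gy) * 4.0, 0.0, 1.0)
    for _ in range(passes):
        padded = np.pad(ink, 1, mode="edge")
        ink = sum(padded[dy:dy + ink.shape[0], dx:dx + ink.shape[1]]
                  for dy in range(3) for dx in range(3)) / 9.0
    return ink

# A hard vertical edge produces a soft dark band after diffusion.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
out = ink_diffuse(img)
print(out.shape)        # (8, 8)
print(out[:, 0].max())  # far from the edge the paper stays blank (0.0)
```

A real system would additionally follow the ASM-derived face contour with directed brush strokes; the blur here only captures the diffusion aspect.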