Haruki Sato, T. Hirai, Tomoyasu Nakano, Masataka Goto, S. Morishima
This paper presents a system that can automatically add a soundtrack to a video clip by replacing and concatenating musical bars from an existing song while considering the user's preferences. Since a soundtrack makes a video clip attractive, adding one is among the most important steps in video editing. To make a clip more attractive, an editor typically adds a soundtrack with its timing and climax in mind; for example, editors often align chorus sections with the climax of the clip by replacing and concatenating musical bars in an existing song. In doing so, however, the editor must also keep the rearranged soundtrack sounding natural. Editors therefore have to decide how to replace musical bars while balancing timing, climax, and the naturalness of the rearrangement simultaneously, repeatedly listening to the rearranged result and checking both its naturalness and its synchronization with the video clip. This iterative work is time-consuming. Feng et al. [2010] proposed an automatic soundtrack-addition method, but because it is purely data-driven, it cannot account for the timing and climax the user prefers.
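The poster does not spell out its optimization, but the simultaneous trade-off it describes, transition naturalness versus matching the clip's climax, can be illustrated with a toy Viterbi-style dynamic program. All names, costs, and weights below are illustrative assumptions, not the authors' method.

```python
# Toy sketch (not the authors' algorithm): pick one musical bar per video
# segment by minimizing transition ("naturalness") cost plus a mismatch
# cost between each bar's intensity and the video's climax curve.

def rearrange_bars(bar_intensity, transition_cost, video_climax, w=1.0):
    """bar_intensity[j]: perceived intensity of bar j (0..1).
    transition_cost[i][j]: penalty for playing bar j right after bar i.
    video_climax[t]: desired intensity at video segment t (0..1).
    Returns the bar index chosen for each segment."""
    n, T = len(bar_intensity), len(video_climax)
    INF = float("inf")
    cost = [[INF] * n for _ in range(T)]
    back = [[0] * n for _ in range(T)]
    for j in range(n):                        # first segment: mismatch only
        cost[0][j] = w * abs(bar_intensity[j] - video_climax[0])
    for t in range(1, T):
        for j in range(n):
            mismatch = w * abs(bar_intensity[j] - video_climax[t])
            i = min(range(n), key=lambda i: cost[t - 1][i] + transition_cost[i][j])
            cost[t][j] = cost[t - 1][i] + transition_cost[i][j] + mismatch
            back[t][j] = i
    j = min(range(n), key=lambda k: cost[T - 1][k])   # cheapest final bar
    path = [j]
    for t in range(T - 1, 0, -1):             # backtrack to recover the path
        j = back[t][j]
        path.append(j)
    return path[::-1]
```

A real system would derive bar_intensity from something like chorus detection and transition_cost from the acoustic similarity of bar boundaries; the weight w trades naturalness against climax synchronization.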
{"title":"A music video authoring system synchronizing climax of video clips and music via rearrangement of musical bars","authors":"Haruki Sato, T. Hirai, Tomoyasu Nakano, Masataka Goto, S. Morishima","doi":"10.1145/2787626.2792608","DOIUrl":"https://doi.org/10.1145/2787626.2792608","url":null,"abstract":"This paper presents a system that can automatically add a soundtrack to a video clip by replacing and concatenating an existing song's musical bars considering a user's preference. Since a soundtrack makes a video clip attractive, adding a soundtrack to a clip is one of the most important processes in video editing. To make a video clip more attractive, an editor of the clip tends to add a soundtrack considering its timing and climax. For example, editors often add chorus sections to the climax of the clip by replacing and concatenating musical bars in an existing song. However, in the process, editors should take naturalness of rearranged soundtrack into account. Therefore, editors have to decide how to replace musical bars in a song considering its timing, climax, and naturalness of rearranged soundtrack simultaneously. In this case, editors are required to optimize the soundtrack by listening to the rearranged result as well as checking the naturalness and synchronization between the result and the video clip. However, this repetitious work is time-consuming. [Feng et al. 2010] proposed an automatic soundtrack addition method. However, since this method automatically adds soundtrack with data-driven approach, this method cannot consider timing and climax which a user prefers.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127764543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prashanth Bollam, Eesha Gothwal, G. B. C. S. T. Vinnakota, Shailesh Kumar, Soumyajit Deb
The recent boom in the computing capabilities of mobile devices has brought Virtual Reality into the mobile ecosystem. We demonstrate a framework for the Samsung Gear VR headset that allows developers to create a fully immersive AR and VR experience with no need for external devices or cables, making it a truly autonomous mobile VR experience. The significant benefits of this system over existing ones are a fully hands-free experience in which the hands can be used for gesture-based input, the ability to use the head-mounted display (HMD) sensors for improved head and positional tracking, and automatic peer-to-peer network creation for communication between phones. The most important goal of our system is to provide an intuitive way to interact with virtual objects in AR and VR, and to let users switch between the AR and VR worlds seamlessly.
{"title":"Mobile collaborative augmented reality with real-time AR/VR switching","authors":"Prashanth Bollam, Eesha Gothwal, G. B. C. S. T. Vinnakota, Shailesh Kumar, Soumyajit Deb","doi":"10.1145/2787626.2792662","DOIUrl":"https://doi.org/10.1145/2787626.2792662","url":null,"abstract":"The recent boom in computing capabilities of mobile devices has led to the introduction of Virtual Reality into the mobile ecosystem. We demonstrate a framework for the Samsung Gear VR headset that allows developers to create a totally immersive AR & VR experience with no need for interfacing with external devices or cables thereby making it a truly autonomous mobile VR experience. The significant benefits of this system over existing ones are - a fully hands free experience where hands could be used for gesture based input, the ability to use the Head Mounted Display (HMD) sensor for improved head and positional tracking and automatic peer to peer network creation for communication between phones. The most important factor in our system is to provide an intuitive way to interact with virtual objects in AR and VR. And users should be able to switch from AR to VR world and vice versa seamlessly.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134318494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Our voice is an important part of our individuality, but the relationship we have with our own voice is not obvious. We don't hear it the same way others do, and our brain treats it differently from any other sound we hear [Houde et al. 2002]. Yet its sonority is highly linked to our body and mind, and deeply connected with how we are perceived by society and how we see ourselves. The V3 system (Vocal Vibrations Visualization) offers an interactive visualization of vocal vibration patterns. We developed the hexauscultation mask, a headset sensor that measures bioacoustic signals from the voice at six points on the face and throat. These signals are transmitted and processed to provide a real-time visualization of the relative vibration intensities at the six measured points. The system can be used in a variety of situations, such as vocal training, tool design for the deaf community, and the design of HCI for speech-disorder treatment and prosody acquisition, but also simply for personal vocal exploration.
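As a rough illustration of the processing described above, one minimal way to turn six bioacoustic channels into relative intensities is a windowed RMS followed by normalization; the window handling and normalization are assumptions, not the V3 system's actual signal chain.

```python
import numpy as np

def relative_intensities(frames, eps=1e-9):
    """frames: (6, window_len) array, one short signal window per measured
    point on the face and throat. Returns six values summing to ~1."""
    rms = np.sqrt((frames ** 2).mean(axis=1))   # vibration energy per channel
    return rms / (rms.sum() + eps)              # relative share per point
```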
{"title":"V3: an interactive real-time visualization of vocal vibrations","authors":"Rébecca Kleinberger","doi":"10.1145/2787626.2792624","DOIUrl":"https://doi.org/10.1145/2787626.2792624","url":null,"abstract":"Our voice is an important part of our individuality but the relationship we have with our own voice is not obvious. We don't hear it the same way others do, and our brain treats it differently from any other sound we hear [Houde et al. 2002]. Yet its sonority is highly linked to our body and mind, and deeply connected with how we are perceived by society and how we see ourselves. The V3 system (Vocal Vibrations Visualization) offers a interactive visualization of vocal vibration patterns. We developed the hexauscultation mask, a head set sensor that measures bioacoustic signals from the voice at 6 points of the face and throat. Those signals are sent and processed to offer a real-time visualization of the relative vibration intensities at the 6 measured points. This system can be used in various situations such as vocal training, tool design for the deaf community, design of HCI for speech disorder treatment and prosody acquisition but also simply for personal vocal exploration.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134063215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. Pabst, Hansung Kim, L. Polok, V. Ila, Ted Waine, A. Hilton, J. Clifford
Modern digital film production uses large quantities of data captured on set, such as videos, digital photographs, LIDAR scans, and spherical photography, to create the final film frames. Processing and managing this massive amount of heterogeneous data consumes enormous resources. We propose an integrated pipeline for 2D/3D data registration aimed at film production, built around the prototype application Jigsaw, which lets users efficiently manage and process data types ranging from digital photographs to 3D point clouds. A key step in using multi-modal 2D/3D data for content production is registration into a common coordinate frame (match moving): 3D geometric information is reconstructed from the 2D data and registered to reference 3D models using 3D feature matching [Kim and Hilton 2014]. We present several highly efficient and robust approaches to this problem. Additionally, we have developed and integrated a fast algorithm for incremental marginal covariance calculation [Ila et al. 2015], which allows us to estimate and visualize the 3D reconstruction error directly on set, where insufficient coverage and other problems can be addressed right away. We describe the fast hybrid multi-core and GPU-accelerated techniques that let us run these algorithms on a laptop. Jigsaw has been used and evaluated in several major digital film productions and has significantly reduced the time and work required to manage and process on-set data.
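As a sketch of the "common coordinate frame" step, the standard least-squares rigid alignment of matched 3D feature points (the Kabsch algorithm) looks as follows. This illustrates the registration concept only; it is neither Jigsaw's actual matcher nor the covariance recursion of [Ila et al. 2015].

```python
import numpy as np

def rigid_align(src, dst):
    """src, dst: (N, 3) arrays of corresponding 3D points.
    Returns rotation R and translation t with dst ~= src @ R.T + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # proper rotation, det(R) = +1
    t = mu_d - R @ mu_s
    return R, t
```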
{"title":"Jigsaw: multi-modal big data management in digital film production","authors":"S. Pabst, Hansung Kim, L. Polok, V. Ila, Ted Waine, A. Hilton, J. Clifford","doi":"10.1145/2787626.2792617","DOIUrl":"https://doi.org/10.1145/2787626.2792617","url":null,"abstract":"Modern digital film production uses large quantities of data captured on-set, such as videos, digital photographs, LIDAR scans, spherical photography and many other sources to create the final film frames. The processing and management of this massive amount of heterogeneous data consumes enormous resources. We propose an integrated pipeline for 2D/3D data registration aimed at film production, based around the prototype application Jigsaw. It allows users to efficiently manage and process various data types from digital photographs to 3D point clouds. A key step in the use of multi-modal 2D/3D data for content production is the registration into a common coordinate frame (match moving). 3D geometric information is reconstructed from 2D data and registered to the reference 3D models using 3D feature matching [Kim and Hilton 2014]. We present several highly efficient and robust approaches to this problem. Additionally, we have developed and integrated a fast algorithm for incremental marginal covariance calculation [Ila et al. 2015]. This allows us to estimate and visualize the 3D reconstruction error directly on-set, where insufficient coverage or other problems can be addressed right away. We describe the fast hybrid multi-core and GPU accelerated techniques that let us run these algorithms on a laptop. Jigsaw has been used and evaluated in several major digital film productions and significantly reduced the time and work required to manage and process on-set data.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115539387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hiroki Kagiyama, Masahide Kawai, Daiki Kuwahara, Takuya Kato, S. Morishima
In movie and video game production, synthesizing the subtle eye movements and corresponding head movements of a CG character is essential to making content dramatic and impressive. However, producing them costs considerable time and labor, because they often must be created manually by skilled artists.
{"title":"Automatic synthesis of eye and head animation according to duration and point of gaze","authors":"Hiroki Kagiyama, Masahide Kawai, Daiki Kuwahara, Takuya Kato, S. Morishima","doi":"10.1145/2787626.2792607","DOIUrl":"https://doi.org/10.1145/2787626.2792607","url":null,"abstract":"In movie and video game productions, synthesizing subtle eye and corresponding head movements of CG character is essential to make a content dramatic and impressive. However, to complete them costs a lot of time and labors because they often have to be made by manual operations of skilled artists.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129942236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anthousis Andreadis, Robert Gregor, I. Sipiran, P. Mavridis, Georgios Papaioannou, T. Schreck
The problem of restoring objects from eroded fragments, where large parts may be missing, is highly relevant in archaeology. Manual restoration is possible and common in practice, but it is a tedious and error-prone process that does not scale well. Solutions for specific parts of the problem have been proposed, but a complete reassembly and repair pipeline is absent from the literature. We propose a shape restoration pipeline consisting of appropriate methods for automatic fragment reassembly and shape completion, and we demonstrate the effectiveness of our approach on real-world fractured objects.
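The poster specifies the pipeline's shape rather than its internals, so the following skeleton is only a sketch under assumptions: greedy pairwise reassembly followed by a completion step, with the concrete matching and completion methods passed in as callables.

```python
def restore(fragments, pairwise_fit, complete_shape):
    """fragments: list of fragment meshes. pairwise_fit(a, b) -> (score,
    merged_mesh) scores how well two fracture surfaces align and returns
    the merged piece. complete_shape(mesh) fills the missing regions."""
    parts = list(fragments)
    while len(parts) > 1:
        # greedily merge the best-fitting pair of fragments
        (score, merged), i, j = max(
            ((pairwise_fit(parts[a], parts[b]), a, b)
             for a in range(len(parts)) for b in range(a + 1, len(parts))),
            key=lambda x: x[0][0])
        parts = [p for k, p in enumerate(parts) if k not in (i, j)] + [merged]
    return complete_shape(parts[0])
```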
{"title":"Fractured 3D object restoration and completion","authors":"Anthousis Andreadis, Robert Gregor, I. Sipiran, P. Mavridis, Georgios Papaioannou, T. Schreck","doi":"10.1145/2787626.2792633","DOIUrl":"https://doi.org/10.1145/2787626.2792633","url":null,"abstract":"The problem of object restoration from eroded fragments where large parts could be missing is of high relevance in archaeology. Manual restoration is possible and common in practice but it is a tedious and error-prone process, which does not scale well. Solutions for specific parts of the problem have been proposed but a complete reassembly and repair pipeline is absent from the bibliography. We propose a shape restoration pipeline consisting of appropriate methods for automatic fragment reassembly and shape completion. We demonstrate the effectiveness of our approach using real-world fractured objects.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128488664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Various colors, as in a prism, are observed in a properly cut diamond even under white light because of dispersion. A properly cut diamond also produces scintillation when the viewing angle changes, because its large refractive index makes total internal reflection occur frequently inside the stone. Moreover, strong rainbow colors appear because of diamond's high dispersion.
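Wavelength-division rendering, as named in the title, can be sketched by giving each traced wavelength band its own refractive index. The Cauchy coefficients for diamond below are approximate textbook values, and the per-band tracing loop is an assumption about the approach, not the poster's exact renderer.

```python
import numpy as np

def diamond_ior(wavelength_nm, A=2.38, B=1.2e4):
    """Cauchy approximation n(lambda) = A + B / lambda^2 (lambda in nm),
    with rough coefficients for diamond."""
    return A + B / wavelength_nm ** 2

def refract(d, n, eta):
    """Snell refraction of unit direction d at unit normal n, where
    eta = n_incident / n_transmitted. Returns None on total internal
    reflection, the effect behind diamond's scintillation."""
    cos_i = -np.dot(d, n)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None                              # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# Dispersion in action: shorter wavelengths bend more at the same facet.
d = np.array([np.sin(0.5), -np.cos(0.5), 0.0])   # incident ray, ~28.6 degrees
n = np.array([0.0, 1.0, 0.0])                    # facet normal
for lam in (400.0, 550.0, 700.0):                # one trace per wavelength band
    print(int(lam), "nm ->", refract(d, n, 1.0 / diamond_ior(lam)))
```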
{"title":"Display of diamond dispersion using wavelength-division rendering and integral photography","authors":"Nahomi Maki, K. Yanaka","doi":"10.1145/2787626.2792642","DOIUrl":"https://doi.org/10.1145/2787626.2792642","url":null,"abstract":"Various colors, such as in a prism, are observed in properly cut diamond even under white light because of dispersion. Properly-cut diamond brings about scintillation when viewing angle is changed, because total reflection inside a diamond tends to occur frequently due to the large refractive index. Moreover, strong rainbow colors are seen because of high dispersion ratio.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"237 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116330968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Shadow Shooter" is a VR shooter game that uses the "e-Yumi 3D" bow interface and real physical interactive content that changes a 360-degree all-around view in a room into virtual game space (Figure 1). This system was constructed by developing our previous interactive "Light Shooter" content based on "The Electric Bow Interface" [Yasumoto and Ohta 2013]. Shadow Shooter expands the virtual game space to all the walls in a room just as in Jones' "Room Alive" [Jones et al. 2014]; however, it does not require large-scale equipment such as multiple projectors. It only requires the e-Yumi 3D device that consists of a real bow's components added to Willis's interface with a mobile projector [Willis et al. 2013]. Thus, we constructed a unique device for Shadow Shooter that easily changes the 360-degree all-around view into a virtual game space.
{"title":"Shadow shooter: 360-degree all-around virtual 3d interactive content","authors":"Masasuke Yasumoto, Takehiro Teraoka","doi":"10.1145/2787626.2787637","DOIUrl":"https://doi.org/10.1145/2787626.2787637","url":null,"abstract":"\"Shadow Shooter\" is a VR shooter game that uses the \"e-Yumi 3D\" bow interface and real physical interactive content that changes a 360-degree all-around view in a room into virtual game space (Figure 1). This system was constructed by developing our previous interactive \"Light Shooter\" content based on \"The Electric Bow Interface\" [Yasumoto and Ohta 2013]. Shadow Shooter expands the virtual game space to all the walls in a room just as in Jones' \"Room Alive\" [Jones et al. 2014]; however, it does not require large-scale equipment such as multiple projectors. It only requires the e-Yumi 3D device that consists of a real bow's components added to Willis's interface with a mobile projector [Willis et al. 2013]. Thus, we constructed a unique device for Shadow Shooter that easily changes the 360-degree all-around view into a virtual game space.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125987851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Katsutoshi Masai, Yuta Sugiura, Masa Ogata, Katsuhiro Suzuki, Fumihiko Nakamura, S. Shimamura, K. Kunze, M. Inami, M. Sugimoto
Facial expressions are a powerful way to exchange information nonverbally; they can give us insight into how people feel and think. There is a substantial body of work on facial expression detection in computer vision, but most of it focuses on camera-based systems installed in the environment. With such systems, it is difficult to track the user's face when the user is constantly moving, and facial expressions can be recognized only in a limited space.
{"title":"AffectiveWear: toward recognizing facial expression","authors":"Katsutoshi Masai, Yuta Sugiura, Masa Ogata, Katsuhiro Suzuki, Fumihiko Nakamura, S. Shimamura, K. Kunze, M. Inami, M. Sugimoto","doi":"10.1145/2787626.2792632","DOIUrl":"https://doi.org/10.1145/2787626.2792632","url":null,"abstract":"Facial expression is a powerful way for us to exchange information nonverbally. They can give us insights into how people feel and think. There are a number of works related to facial expression detection in computer vision. However, most works focus on camera-based systems installed in the environment. With this method, it is difficult to track user's face if user moves constantly. Moreover, user's facial expression can be recognized at only a limited place.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"7 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128729638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In 3D production for commercials, television, and film, ID mattes are commonly used to modify rendered images without re-rendering. ID mattes are bitmap images used to isolate a specific object or a group of objects, such as all of the buttons on a shirt. Many 3D pipelines are built to provide compositors with ID mattes in addition to the beauty renders, allowing flexibility.
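As context for the workflow described above, a compositor consumes an ID matte roughly as follows: build a mask from a per-pixel object-ID render, then grade only the selected objects. The hard binary mask here is a deliberate simplification, leaving out exactly the motion-blur and transparency coverage the poster's method adds, and all names are assumptions.

```python
import numpy as np

def matte_from_ids(id_image, wanted_ids):
    """id_image: (H, W) integer object IDs from the renderer.
    Returns a float mask selecting the wanted objects."""
    return np.isin(id_image, list(wanted_ids)).astype(np.float32)

def grade(beauty, mask, gain):
    """Apply a per-channel gain only where the matte selects.
    beauty: (H, W, 3) render; mask: (H, W); gain: 3 floats."""
    return beauty * (1.0 + mask[..., None] * (np.asarray(gain) - 1.0))
```

This lets look changes (for example, brightening just the shirt buttons) happen in compositing without re-rendering the 3D scene.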
{"title":"Fully automatic ID mattes with support for motion blur and transparency","authors":"J. Friedman, Andrew C. Jones","doi":"10.1145/2787626.2787629","DOIUrl":"https://doi.org/10.1145/2787626.2787629","url":null,"abstract":"In 3D production for commercials, television, and film, ID mattes are commonly used to modify rendered images without re-rendering. ID mattes are bitmap images used to isolate specific objects, or multiple objects, such as all of the buttons on a shirt. Many 3D pipelines are built to provide compositors with ID mattes in addition to beauty renders to allow flexibility.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116837353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}