Latest publications in Proceedings APGV: ... Symposium on Applied Perception in Graphics and Visualization
Assessment of stereoscopic depth perception in augmented reality environments based on low-cost technologies
Guido Maiello, Clara Silvestro, A. Canessa, Manuela Chessa, A. Gibaldi, S. Sabatini, F. Solari
In the last decade, the development of low-cost commercial interactive systems has grown rapidly, for both professional applications (e.g., scientific visualization, surgery, rehabilitation) and consumer applications (e.g., 3D cinema and video games). Moreover, there is recent interest in developing simple, affordable systems for motor and cognitive rehabilitation that allow patients to perform psychomotor rehabilitation exercises without having to leave their homes [Attygalle et al. 2008].
{"title":"Assessment of stereoscopic depth perception in augmented reality environments based on low-cost technologies","authors":"Guido Maiello, Clara Silvestro, A. Canessa, Manuela Chessa, A. Gibaldi, S. Sabatini, F. Solari","doi":"10.1145/2077451.2077474","DOIUrl":"https://doi.org/10.1145/2077451.2077474","url":null,"abstract":"In the last decade, there has been a rapidly growing development of low-cost and commercial interactive systems, both for professional (e.g. scientific visualization, surgery, rehabilitation), and consumer applications (e.g. 3D cinema and videogames). Moreover, there is a recent interest towards the development of simple and affordable systems for motor and cognitive rehabilitation applications to allow patients to perform psychomotor rehabilitation exercises without having to leave their homes [Attygalle et al. 2008].","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"16 1","pages":"111"},"PeriodicalIF":0.0,"publicationDate":"2011-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79739217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Is perception-action coupling more malleable in virtual than in real environments?
Christine J. Ziemer, Benjamin Chihak, J. Plumert, T. Nguyen, J. Cremer, J. Kearney
Whenever we move, we gain experience with how changes in visual flow are related to movement through the environment. One way that researchers have studied these perception-action linkages is through perturbing the normal relationship between perception and action [Kunz et al. 2009; Rieser et al. 1995]. In these studies, people experience an optic flow rate that is manipulated to be significantly faster or slower than their walking rate. Comparison of distance estimates from before and after this recalibration experience typically shows that people who experience faster optic flow undershoot targets at posttest and people who experience slower optic flow overshoot targets at posttest. Here, we examined how experience with mismatched perception and action (i.e., faster or slower optic flow) in a virtual environment affects subsequent distance estimation in the same virtual environment and in a similar real environment. Of particular interest was whether perception-action coupling is more malleable in the virtual environment than in the real environment.
{"title":"Is perception-action coupling more malleable in virtual than in real environments?","authors":"Christine J. Ziemer, Benjamin Chihak, J. Plumert, T. Nguyen, J. Cremer, J. Kearney","doi":"10.1145/2077451.2077477","DOIUrl":"https://doi.org/10.1145/2077451.2077477","url":null,"abstract":"Whenever we move, we gain experience with how changes in visual flow are related to movement through the environment. One way that researchers have studied these perception-action linkages is through perturbing the normal relationship between perception and action [Kunz et al. 2009; Rieser et al. 1995]. In these studies, people experience an optic flow rate that is manipulated to be significantly faster or slower than their walking rate. Comparison of distance estimates from before and after this recalibration experience typically shows that people who experience faster optic flow undershoot targets at posttest and people who experience slower optic flow overshoot targets at posttest. Here, we examined how experience with mismatched perception and action (i.e., faster or slower optic flow) in a virtual environment affects subsequent distance estimation in the same virtual environment and in a similar real environment. Of particular interest was whether perception-action coupling is more malleable in the virtual environment than in the real environment.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"42 1","pages":"114"},"PeriodicalIF":0.0,"publicationDate":"2011-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83279642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Coordinated interaction for enhanced perception in multiple views
Jan Wojdziak, Martin Zavesky, M. Starke, Rainer Groh
In standard computer graphics environments, there are numerous tools to view and edit data. These tools often use more than one view to support visualization and interaction more efficiently [Maple et al. 2004]. Building on existing work on Coordinated Multiple Views (CMV) [Roberts 2007], our approach introduces atomic modifiers to coordinate views in 3D applications. By combining modifiers, complex and flexible view adjustments can be achieved and adapted to a given context. Thereby, users gain a deeper and better understanding of the visualized data.
{"title":"Coordinated interaction for enhanced perception in multiple views","authors":"Jan Wojdziak, Martin Zavesky, M. Starke, Rainer Groh","doi":"10.1145/2077451.2077483","DOIUrl":"https://doi.org/10.1145/2077451.2077483","url":null,"abstract":"In standard computer graphics environments, there are numerous of tools to view and edit data. Those tools often use more than one view to support visualization and interaction more efficiently [Maple et al. 2004]. Based on existing work of Coordinated Multiple Views (CMV) [Roberts 2007], our approach introduces atomic modifiers to coordinate views in 3D-applications. Adaptable to a given context, complex and flexible view adjustments can be achieved by combining modifiers. Thereby, users gain a deeper and better understanding of the visualized data.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"30 1","pages":"120"},"PeriodicalIF":0.0,"publicationDate":"2011-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88283034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Realistic simulation of human contrast perception after headlight glares in driving simulations
B. Meyer, M. Gonter, C. Grunert, S. Thomschke, M. Vollrath, M. Magnor
This work aims to simulate the experience of short-time glare effects in a driving simulator by adjusting display contrast according to human perception. The simulation is shown on a standard LDR monitor under office conditions, and the prevailing illumination is taken into account. Because contrast perception is highly subjective, a psychophysical experiment was performed under realistic night-driving conditions, including background illumination as well as a representative driving situation, to permit realistic driving behavior.
{"title":"Realistic simulation of human contrast perception after headlight glares in driving simulations","authors":"B. Meyer, M. Gonter, C. Grunert, S. Thomschke, M. Vollrath, M. Magnor","doi":"10.1145/2077451.2077481","DOIUrl":"https://doi.org/10.1145/2077451.2077481","url":null,"abstract":"The aim of this work is to enable the simulation of the experience of short-time glare effects in a driving simulator by adjusting the display contrast according to human perception.\u0000 The simulation is displayed on a standard LDR-monitor under office conditions and the prevailing illumination is incorporated. As contrast perception is highly subjective, a psychophysical experiment was performed under realistic night driving conditions, including background illumination as well as a representative driving situation to permit realistic driving behavior.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"104 1","pages":"118"},"PeriodicalIF":0.0,"publicationDate":"2011-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77342561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using endpoints to judge alterations in self-produced trajectories in an immersive virtual environment
Erin A. McManus, Aysu Erdemir, Stephen W. Bailey, J. Rieser, Bobby Bodenheimer
McManus et al. [2011] studied a user's ability to judge errors in self-produced motion, specifically throwing. We now take a first step toward identifying which cues subjects use to make their judgments. The endpoint of the ball is one such cue; the restricted field of view (FOV) of the head-mounted display (HMD) makes it difficult for users to view the complete trajectory of the ball, making the endpoint one of the more consistent cues available during the experiment. For the current study, we hid the trajectory of the ball and showed only its landing point.
{"title":"Using endpoints to judge alterations in self-produced trajectories in an immersive virtual environment","authors":"Erin A. McManus, Aysu Erdemir, Stephen W. Bailey, J. Rieser, Bobby Bodenheimer","doi":"10.1145/2077451.2077485","DOIUrl":"https://doi.org/10.1145/2077451.2077485","url":null,"abstract":"McManus et al. [2011] studied a user's ability to judge errors in self-produced motion; more specifically, throwing. We now take the first step towards discriminating what cues subjects are using in order to make their judgments. The endpoint of the ball is one such cue; the restricted field of view (FOV) of the head mounted display (HMD) makes it difficult for users to view the complete trajectory of the ball, making the endpoint one of the more consistent cues available during the experiment. For the current study, we hid the trajectory of the ball and showed only the landing point of the ball.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"1 1","pages":"122"},"PeriodicalIF":0.0,"publicationDate":"2011-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82235309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visualization of uncertain scalar data fields using color scales and perceptually adapted noise
Alexandre Coninx, Georges-Pierre Bonneau, J. Droulez, G. Thibault
We present a new method to visualize uncertain scalar data fields by combining color scale visualization techniques with animated, perceptually adapted Perlin noise. The parameters of the Perlin noise are controlled by the uncertainty information to produce animated patterns showing local data value and quality. In order to precisely control the perception of the noise patterns, we perform a psychophysical evaluation of contrast sensitivity thresholds for a set of Perlin noise stimuli. We validate and extend this evaluation using an existing computational model. This allows us to predict the perception of the uncertainty noise patterns for arbitrary choices of parameters. We demonstrate and discuss the efficiency and the benefits of our method with various settings, color maps and data sets.
{"title":"Visualization of uncertain scalar data fields using color scales and perceptually adapted noise","authors":"Alexandre Coninx, Georges-Pierre Bonneau, J. Droulez, G. Thibault","doi":"10.1145/2077451.2077462","DOIUrl":"https://doi.org/10.1145/2077451.2077462","url":null,"abstract":"We present a new method to visualize uncertain scalar data fields by combining color scale visualization techniques with animated, perceptually adapted Perlin noise. The parameters of the Perlin noise are controlled by the uncertainty information to produce animated patterns showing local data value and quality. In order to precisely control the perception of the noise patterns, we perform a psychophysical evaluation of contrast sensitivity thresholds for a set of Perlin noise stimuli. We validate and extend this evaluation using an existing computational model. This allows us to predict the perception of the uncertainty noise patterns for arbitrary choices of parameters. We demonstrate and discuss the efficiency and the benefits of our method with various settings, color maps and data sets.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"1 1","pages":"59-66"},"PeriodicalIF":0.0,"publicationDate":"2011-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90907273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of scene density and richness on traveled distance estimation in virtual environments
T. Nguyen, J. Cremer, J. Kearney, J. Plumert
We conducted an experiment to examine the effects of scene density and richness on people's estimates of traveled distance. Participants wearing HMDs first experienced vision-only simulated self-motion over a distance of 65 meters in either a feature-dense scene (condition 1) or a sparse scene (condition 2), and then attempted to reproduce the same distance by physically walking with vision in a neutral virtual scene. We found that participants' estimates in the first condition were significantly shorter than those in the second condition. Furthermore, condition 1 estimates were significantly below the actual 65 m travel distance, while condition 2 estimates did not differ significantly from 65 m. The results suggest that scene feature density and richness affect traveled distance estimation.
{"title":"Effects of scene density and richness on traveled distance estimation in virtual environments","authors":"T. Nguyen, J. Cremer, J. Kearney, J. Plumert","doi":"10.1145/2077451.2077466","DOIUrl":"https://doi.org/10.1145/2077451.2077466","url":null,"abstract":"We conducted an experiment to examine the effects of scene density and richness on people's estimates of traveled distance. Participants wearing HMDs first experienced vision-only simulated self-motion over the distance of 65 meters in either a feature-dense scene (condition 1) or a sparse scene (condition 2), and then attempted to reproduce the same distance by physically walking with vision in a neutral virtual scene. We found that participants' estimates in the first condition were significantly shorter than those in the second condition. Furthermore, condition 1 estimates were significantly below the actual 65m travel distance, while condition 2 estimates did not differ significantly from 65m. The results suggest that scene feature density and richness affect traveled distance estimation.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"52 1","pages":"83-86"},"PeriodicalIF":0.0,"publicationDate":"2011-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86846585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Egocentric distance perception in real and HMD-based virtual environments: the effect of limited scanning method
Qiufeng Lin, Xianshi Xie, Aysu Erdemir, G. Narasimham, T. McNamara, J. Rieser, Bobby Bodenheimer
We conducted four experiments on egocentric depth perception using blind walking with a restricted scanning method in both a real and a virtual environment. The viewing condition in all experiments was monocular. We varied the field of view (real), scan direction (real), blind walking method (real and virtual), and self-representation (virtual) over distances of 4 to 7 meters. The field of view was either 21.1° or 13.6°. The scan direction was either near-to-far or far-to-near. The blind walking method was either direct blind walking or an indirect method of blind walking that matched the geometry of our laboratory. Self-representation varied among a self-avatar (a fully tracked, animated, first-person representation of the user), a static avatar (a mannequin avatar that did not move), and no avatar (a disembodied camera view of the virtual environment). In the real environment, we find an effect of field of view: participants performed more accurately with the larger field of view. In both the real and virtual environments, we find an effect of blind walking method: participants performed more accurately with direct blind walking. We do not find distance underestimation in any environment, nor do we find an effect of self-representation.
{"title":"Egocentric distance perception in real and HMD-based virtual environments: the effect of limited scanning method","authors":"Qiufeng Lin, Xianshi Xie, Aysu Erdemir, G. Narasimham, T. McNamara, J. Rieser, Bobby Bodenheimer","doi":"10.1145/2077451.2077465","DOIUrl":"https://doi.org/10.1145/2077451.2077465","url":null,"abstract":"We conducted four experiments on egocentric depth perception using blind walking with a restricted scanning method in both the real and a virtual environment. Our viewing condition in all experiments was monocular. We varied the field of view (real), scan direction (real), blind walking method (real and virtual), and self-representation (virtual) over distances of 4 meters to 7 meters. The field of view varied between 21.1° and 13.6°. The scan direction varied between near-to-far scanning and far-to-near scanning. The blind walking method varied between direct blind walking and an indirect method of blind walking that matched the geometry of our laboratory. We varied self-representation between having a self-avatar (a fully tracked, animated, and first-person perspective of the user), having a static avatar (a mannequin avatar that did not move), to having no avatar (a disembodied camera view of the virtual environment). In the real environment, we find an effect of field of view; participants performed more accurately with larger field of view. In both real and virtual environments, we find an effect of blind walking method; participants performed more accurately in direct blind walking. We do not find an effect of distance underestimation in any environment, nor do we find an effect of self-representation.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"88 1","pages":"75-82"},"PeriodicalIF":0.0,"publicationDate":"2011-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83820423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The influence of eye height and avatars on egocentric distance estimates in immersive virtual environments
Markus Leyrer, Sally A. Linkenauger, H. Bülthoff, Uwe Kloos, B. Mohler
It is well known that eye height is an important visual cue for perceiving apparent sizes and affordances in virtual environments. However, the influence of visual eye height on egocentric distances in virtual environments has received less attention. To explore this influence, we conducted an experiment in which we manipulated the virtual eye height of the user in a head-mounted-display virtual environment. As a measurement, we asked participants to verbally judge egocentric distances and to give verbal estimates of the dimensions of the virtual room. In addition, we provided participants with a self-animated avatar to investigate whether this virtual self-representation affects the accuracy of verbal distance judgments, as recently evidenced for distance judgments accessed with an action-based measure. When controlled for ownership, the avatar had a significant influence on verbal estimates of egocentric distances, as found in previous research. Interestingly, we found that the manipulation of eye height has a significant influence on verbal estimates of both egocentric distances and the dimensions of the room. We discuss the implications of these results for those interested in space perception in both immersive virtual environments and the real world.
{"title":"The influence of eye height and avatars on egocentric distance estimates in immersive virtual environments","authors":"Markus Leyrer, Sally A. Linkenauger, H. Bülthoff, Uwe Kloos, B. Mohler","doi":"10.1145/2077451.2077464","DOIUrl":"https://doi.org/10.1145/2077451.2077464","url":null,"abstract":"It is well known that eye height is an important visual cue in the perception of apparent sizes and affordances in virtual environments. However, the influence of visual eye height on egocentric distances in virtual environments has received less attention. To explore this influence, we conducted an experiment where we manipulated the virtual eye height of the user in a head-mounted display virtual environment. As a measurement we asked the participants to verbally judge egocentric distances and to give verbal estimates of the dimensions of the virtual room. In addition, we provided the participants a self-animated avatar to investigate if this virtual self-representation has an impact on the accuracy of verbal distance judgments, as recently evidenced for distance judgments accessed with an action-based measure. When controlled for ownership, the avatar had a significant influence on the verbal estimates of egocentric distances as found in previous research. Interestingly, we found that the manipulation of eye height has a significant influence on the verbal estimates of both egocentric distances and the dimensions of the room. We discuss the implications which these research results have on those interested in space perception in both immersive virtual environments and the real world.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"4 1","pages":"67-74"},"PeriodicalIF":0.0,"publicationDate":"2011-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89304155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spatial localization with only auditory cues: a preliminary study
Stephen W. Bailey, F. Pampel, D. Ashmead, Bobby Bodenheimer
This experiment investigates source-point localization using only auditory cues. The idea is to determine how well humans can localize and reason about space using only sound cues, and how different sound cues affect performance in such tasks. The findings from this line of research can be used to enhance audio in virtual environments. This abstract presents first steps in this direction. The task we study here is: given a point in space near the user and a sound cue modulated by the distance between the user's hand and that point, how quickly can the user locate the point?
{"title":"Spatial localization with only auditory cues: a preliminary study","authors":"Stephen W. Bailey, F. Pampel, D. Ashmead, Bobby Bodenheimer","doi":"10.1145/2077451.2077487","DOIUrl":"https://doi.org/10.1145/2077451.2077487","url":null,"abstract":"This experiment investigates source point localization using only auditory cues. The idea is to determine how well humans can localize and reason about space using only sound cues, and to determine how different sound cues effect performance in such tasks. The findings from this line of research can be used in enhancing audio in virtual environments. This abstract presents first steps in this direction. The task we study here is: given a point in space near the user plus a sound cue that is modulated by the distance between the users hand and point, how well (quickly) can the user locate the point.","PeriodicalId":89458,"journal":{"name":"Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization. Symposium on Applied Perception in Graphics and Visualization","volume":"55 1","pages":"124"},"PeriodicalIF":0.0,"publicationDate":"2011-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77622939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}