Evaluating the Fairness of Predictive Student Models Through Slicing Analysis
Josh Gardner, Christopher A. Brooks, R. Baker. Proceedings of the 9th International Conference on Learning Analytics & Knowledge, 2019-03-04. DOI: 10.1145/3303772.3303791

Predictive modeling has been a core area of learning analytics research over the past decade, with such models currently deployed in a variety of educational contexts from MOOCs to K-12. However, analyses of the differential effectiveness of these models across demographic, identity, or other groups have been scarce. In this paper, we present a method for evaluating unfairness in predictive student models. We define this in terms of differential accuracy between subgroups, and measure it using a new metric we term the Absolute Between-ROC Area (ABROCA). We demonstrate the proposed method through a gender-based "slicing analysis" using five different models replicated from other works and a dataset of 44 unique MOOCs and over four million learners. Our results demonstrate (1) significant differences in model fairness according to (a) statistical algorithm and (b) feature set used; (2) that the gender imbalance ratio, curricular area, and specific course used for a model all display significant association with the value of the ABROCA statistic; and (3) that there is no evidence of a strict tradeoff between performance and fairness. This work provides a framework for quantifying and understanding how predictive models might inadvertently privilege, or disparately impact, different student subgroups. Furthermore, our results suggest that learning analytics researchers and practitioners can use slicing analysis to improve model fairness without necessarily sacrificing performance.
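The ABROCA statistic named in this abstract is, by its definition there, the absolute area between the ROC curves of two subgroups. A minimal sketch of how such a metric could be computed is shown below; the function name, the interpolation grid, and the synthetic data are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of an Absolute Between-ROC Area (ABROCA) computation: build one ROC
# curve per subgroup, interpolate both onto a shared FPR grid, and integrate
# the absolute difference between them. Names and data are illustrative.
import numpy as np
from sklearn.metrics import roc_curve

def abroca(y_true, y_score, group):
    """Absolute area between the two per-group ROC curves on a shared grid."""
    grid = np.linspace(0.0, 1.0, 1001)  # common false-positive-rate grid
    curves = []
    for g in np.unique(group):
        mask = group == g
        fpr, tpr, _ = roc_curve(y_true[mask], y_score[mask])
        curves.append(np.interp(grid, fpr, tpr))
    diff = np.abs(curves[0] - curves[1])
    # Trapezoidal integration over the uniform grid.
    return float(np.sum((diff[:-1] + diff[1:]) / 2) * (grid[1] - grid[0]))

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
score = y * 0.4 + rng.random(500) * 0.6   # weak synthetic predictor
grp = rng.integers(0, 2, 500)             # synthetic binary group label
print(abroca(y, score, grp))
```

A value of 0 means the model is equally accurate (in ROC terms) for both subgroups at every threshold; larger values indicate greater divergence.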
Technologies for automated analysis of co-located, real-life, physical learning spaces: Where are we now?
Y. H. V. Chua, J. Dauwels, S. Tan. DOI: 10.1145/3303772.3303811

This paper is motivated by growing interest among researchers and practitioners in developing technologies that capture, model, and analyze learning and teaching experiences that take place beyond computer-based learning environments. We review case studies of tools and technologies developed to collect and analyze data in educational settings, quantify learning and teaching processes, and support assessment of learning and teaching in an automated fashion. We focus on pipelines that leverage data harnessed from physical spaces and/or integrate collected data across physical and digital spaces. Our review reveals a promising field of physical classroom analysis. We describe some trends and suggest potential future directions. Specifically, more research should be geared towards a) deployable and sustainable data collection set-ups in physical learning environments, b) teacher assessment, c) developing feedback and visualization systems, and d) promoting inclusivity and generalizability of models across populations.
Differences in Student Trajectories via Filtered Time Series Analysis in an Immersive Virtual World
J. Reilly, C. Dede. DOI: 10.1145/3303772.3303832

To scaffold students' investigations of an inquiry-based immersive virtual world for science education without undercutting the affordances an open-ended activity provides, this study explores ways time-stamped log files of groups' actions may enable the automatic generation of formative supports. Groups' logged actions in the virtual world are filtered via principal component analysis to provide a time series trajectory showing the rate of their investigative activities over time. This technique functions well in open-ended environments and examines the entire course of a group's experience in the virtual world rather than specific subsequences. Groups' trajectories are then clustered via k-means to identify typical pathways taken through the immersive virtual world. These different approaches are then correlated with learning gains across several survey constructs (affective dimensions, ecosystem science content, understanding of causality, and experimental methods) to see how various trends are associated with different outcomes. Differences by teacher and school are explored to see how best to support inclusion and success of a diverse array of learners.
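The pipeline this abstract outlines (PCA filtering of logged actions into a 1-D activity trajectory, then k-means clustering of trajectories) can be sketched as follows. The array shapes, counts, and synthetic Poisson data are illustrative assumptions, not the study's own data or code.

```python
# Hedged sketch: project each group's per-interval action counts onto the
# first principal component to get a one-dimensional activity trajectory,
# then cluster the trajectories with k-means to find typical pathways.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_groups, n_intervals, n_action_types = 60, 20, 5
# counts[g, t, a] = how often group g performed action type a in interval t
counts = rng.poisson(3.0, size=(n_groups, n_intervals, n_action_types))

# Fit PCA on all (interval x action-type) rows and keep the first component.
flat = counts.reshape(-1, n_action_types)
pca = PCA(n_components=1).fit(flat)
trajectories = pca.transform(flat).reshape(n_groups, n_intervals)

# Cluster whole trajectories to identify typical pathways.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(trajectories)
print(np.bincount(labels))  # cluster sizes
```

Clustering the full trajectory (rather than summary statistics) is what lets the analysis compare the entire course of each group's experience, as the abstract emphasizes.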
Exploring Programming Semantic Analytics with Deep Learning Models
Yihan Lu, I-Han Hsiao. DOI: 10.1145/3303772.3303823

Numerous studies have reported the effectiveness of example-based programming learning. However, recommending code examples with advanced machine-learning models remains less explored. In this work, we propose a new method for extracting semantic analytics between program code and its annotations. We hypothesize that these semantic analytics capture a large amount of valuable information that can be used as features to build predictive models. We evaluated the proposed semantic analytics extraction method with multiple deep learning algorithms. Results showed that deep learning models outperformed other models and the baseline in most cases. Further analysis indicated that in special cases, the proposed method outperformed deep learning models by restricting false-positive classifications.
Exploring Learner Engagement Patterns in Teach-Outs Using Topic, Sentiment and On-topicness to Reflect on Pedagogy
Wenfei Yan, Nia Dowell, Caitlin Holman, Stephen S. Welsh, Heeryung Choi, Christopher A. Brooks. DOI: 10.1145/3303772.3303836

MOOCs have developed into multiple learning design models with a wide range of objectives. Teach-Outs are one such example, aiming to drive meaningful discussions around topics of pressing social urgency without the use of formal assessments. Given this approach, it is crucial to evaluate learners' engagement in the discussion forum to understand their experiences. This paper presents a pilot study that applied unsupervised natural language processing techniques to understand what and how students discuss in a Teach-Out. We used topic modeling to discover the emerging topics in the discussion forums and evaluated the on-topicness of the discussions (i.e., the degree to which discussions were relevant to the Teach-Out content). We also applied content analysis to investigate the sentiments associated with the discussions. We have taken a step toward extracting structure from students' discussions to understand the learning behaviors that occur in the discussion forum. This is the first study to analyze discussion forums in a Teach-Out.
Predicting the Well-functioning of Learning Groups under Privacy Restrictions
Tobias Hecking, Dorian Doberstein, H. Hoppe. DOI: 10.1145/3303772.3303826

Establishing small learning groups in online courses is a possible way to foster collaborative knowledge building in an engaging and effective learning community. To enable group activities, it is not enough to design collaborative tasks and provide collaboration tools for online scenarios: without explicit guidance, collaboration in such learning groups is prone to fail or may never be initiated at all. In the target situations, interventions and guiding mechanisms have to scale with a growing number of course participants. To achieve this under privacy constraints, we aim at identifying indicators for well-functioning group work that do not rely on any kind of information about individual learners.
Semi-Automatic Generation of Intelligent Curricula to Facilitate Learning Analytics
Angel Fiallos, X. Ochoa. DOI: 10.1145/3303772.3303834

Several Learning Analytics applications are limited by the cost of generating a computer-understandable description of the course domain, known as an Intelligent Curriculum. This work contributes a novel approach to (semi-)automatically generate Intelligent Curricula through ontologies extracted from existing learning materials such as digital books or web content. Through a series of natural language processing steps, the semi-structured information present in existing content is transformed into a concept graph. This work also evaluates the proposed methodology by applying it to learning content for two different courses and measuring the quality of the extracted ontologies against manually generated ones. The results obtained suggest that the technique can be readily used to provide domain information to other Learning Analytics tools.
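The core idea of turning existing learning content into a concept graph can be illustrated with a toy sentence-co-occurrence heuristic. This is far simpler than the NLP pipeline the paper describes (which extracts full ontologies); the text, stopword list, and frequency threshold below are illustrative assumptions.

```python
# Toy sketch of a concept graph: treat frequently recurring words as
# candidate concepts and link concepts that co-occur within a sentence.
# A crude stand-in for the paper's NLP pipeline, for illustration only.
import itertools
import re
from collections import Counter, defaultdict

text = (
    "Photosynthesis converts light energy into chemical energy. "
    "Chlorophyll absorbs light energy in the chloroplast. "
    "The chloroplast stores chemical energy as glucose."
)
sentences = [s for s in re.split(r"[.!?]\s*", text.lower()) if s]
stop = {"the", "into", "in", "as", "a", "an", "and", "of"}
tokens = [[w for w in re.findall(r"[a-z]+", s) if w not in stop]
          for s in sentences]

# Candidate concepts: words appearing in at least two sentences.
freq = Counter(w for sent in tokens for w in set(sent))
concepts = {w for w, c in freq.items() if c >= 2}

# Edges: concept pairs that co-occur within a sentence.
graph = defaultdict(set)
for sent in tokens:
    for a, b in itertools.combinations(sorted(set(sent) & concepts), 2):
        graph[a].add(b)
        graph[b].add(a)

print(sorted(concepts))  # ['chemical', 'chloroplast', 'energy', 'light']
```

A real pipeline would add part-of-speech filtering, multi-word term extraction, and relation labeling before the result could serve as domain input for other Learning Analytics tools.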
Student Centred Design of a Learning Analytics System
E. Quincey, C. Briggs, T. Kyriacou, R. Waller. DOI: 10.1145/3303772.3303793

Current Learning Analytics (LA) systems are primarily designed with university staff members as the target audience; very few are aimed at students, and almost none have been developed with direct student involvement or comprehensively evaluated. This paper describes a HEFCE-funded project that has employed a variety of methods to engage students in the design, development and evaluation of a student-facing LA dashboard. LA was integrated into the delivery of 4 undergraduate modules with 169 student sign-ups. The design of the dashboard takes a novel approach: it seeks to understand the reasons why students want to study at university and maps their engagement and predicted outcomes to these motivations, with weekly personalised notifications and feedback. Students are also given the choice of how to visualise the data, either via a chart-based view or by being represented as themselves. A mixed-methods evaluation has shown that students' feelings of dependability and trust in the underlying analytics and data are variable. However, students were mostly positive about the usability and interface design of the system, and almost all students, once signed up, did interact with their LA. The majority of students could see how the LA system could support their learning and said that it would influence their behaviour. In some cases, this has had a direct impact on their levels of engagement. The main contribution of this paper is the transparent documentation of a User Centred Design approach that has produced forms of LA representation, recommendation and interaction design that go beyond those used in current similar systems and have been shown to motivate students and impact their learning behaviour.
Challenges on implementing Learning Analytics over countrywide K-12 data
Luiz Antonio Buschetto Macarini, C. Cechinel, H. D. Santos, X. Ochoa, Virgínia Rodés, G. E. Alonso, Alén Pérez Casas, Patricia Díaz. DOI: 10.1145/3303772.3303819

This work describes the challenges faced during the development of a countrywide Learning Analytics tool focused on tracking the trajectories of Uruguayan students during their first three years of secondary education. Due to the large scale of the project, which covers an entire national educational system, several challenges and constraints (both technical and legal) were faced during its conception and development. This paper presents the design decisions and solutions found to address or mitigate these problems, together with the current state of the project. Early results point to the feasibility of finding meaningful patterns in the available data (using data mining techniques) which can be embedded into a prototype for tracking students' school trajectories.
The Impact of an Online Tutoring Program for Algebra Readiness on Mathematics Achievements; Results of a Randomized Experiment
Sahba Akhavan Niaki, Clint P. George, G. Michailidis, C. Beal. DOI: 10.1145/3303772.3303777

We study the impact of an online tutoring program for algebra readiness, AnimalWatch, on the mathematics achievement of grade 6 students. We use data from a randomized experiment conducted with 69 teachers and 2025 students in California in the 2011-2012 academic year. After a brief description of the experimental design and the system implementation, we analyze the treatment effect of employing AnimalWatch using hierarchical linear models and find a small positive effect. We further use logged system usage data, such as time spent in the system, modules completed, and correct/incorrect/no-answer records for each login, to analyze how system implementation and usage helped different students. Our results provide insights into the limitations of implementing such a study in a real-world setting and suggest recommendations for future research.
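The nested structure behind the hierarchical analysis in this abstract (students within teachers, treatment assigned at the teacher level) can be illustrated with a simplified two-level estimate: averaging within teachers before comparing arms, rather than a full hierarchical linear model. All quantities below (effect size, noise levels, the near-balanced class size) are synthetic assumptions, not the study's data.

```python
# Toy two-level sketch: simulate students nested in teachers with a small
# teacher-level treatment effect, then estimate that effect from teacher
# means. This is a simplified stand-in for the paper's hierarchical linear
# models, shown only to make the nesting concrete.
import numpy as np

rng = np.random.default_rng(42)
n_teachers, students_per = 69, 29          # roughly the study's scale
treatment = rng.integers(0, 2, n_teachers)  # assigned at the teacher level
teacher_effect = rng.normal(0.0, 0.3, n_teachers)  # between-teacher variance

scores = []
for t in range(n_teachers):
    mu = 0.5 + 0.1 * treatment[t] + teacher_effect[t]  # small true effect
    scores.append(rng.normal(mu, 0.5, students_per))   # student-level noise

# Level-2 estimate: compare mean teacher-level outcomes across arms.
teacher_means = np.array([s.mean() for s in scores])
effect = (teacher_means[treatment == 1].mean()
          - teacher_means[treatment == 0].mean())
print(round(effect, 3))
```

Aggregating to teacher means before comparing respects the unit of randomization; a full hierarchical linear model additionally partitions the between-teacher and within-teacher variance and supports student-level covariates.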