Title: Elaborating data intensive research methods through researcher-practitioner partnerships
Authors: Mingyu Feng, Andrew E. Krumm, Alex J. Bowers, T. Podkul
DOI: 10.1145/2883851.2883908
Published in: Proceedings of the Sixth International Conference on Learning Analytics & Knowledge, 2016-04-25
Abstract: Technologies used by teachers and students generate vast amounts of data that can be analyzed to provide insights into improving teaching and learning. However, practitioners are often left out of this analysis process. We describe the development of an approach by which researchers and practitioners can work together to use data intensive research methods to launch improvement efforts within schools. This paper describes elements of the first year of a researcher-practitioner partnership, highlighting initial findings, challenges, and strategies for overcoming those challenges.
Title: Data literacy for learning analytics
Authors: A. Wolff, John Moore, Z. Zdráhal, Martin Hlosta, Jakub Kuzilek
DOI: 10.1145/2883851.2883864
Published in: Proceedings of the Sixth International Conference on Learning Analytics & Knowledge, 2016-04-25
Abstract: This workshop explores how data literacy impacts on learning analytics, both for practitioners and for end users. The term data literacy is used broadly to describe the set of abilities around the use of data as part of everyday thinking and reasoning for solving real-world problems. It is a skill required both by learning analytics practitioners, who must derive actionable insights from data, and by the intended end users, whose data literacy affects their ability to accurately interpret and critique presented analyses of data. The latter is particularly important, since learning analytics outcomes can be targeted at a wide range of end users, some of whom will be young students and many of whom are not data specialists. Whilst data literacy is rarely an end goal of learning analytics projects, this workshop aims to find where issues related to data literacy have impacted on project outcomes and where important insights have been gained. The workshop will further encourage the sharing of knowledge and experience through practical activities with datasets and visualisations, and aims to highlight the need for a greater understanding of data literacy as a field of study, especially with regard to communicating around large, complex data sets.
Title: Longitudinal engagement, performance, and social connectivity: a MOOC case study using exponential random graph models
Authors: Mengxiao Zhu, Yoav Bergner, Yan Zhang, R. Baker, Y. Wang, L. Paquette
DOI: 10.1145/2883851.2883934
Published in: Proceedings of the Sixth International Conference on Learning Analytics & Knowledge, 2016-04-25
Abstract: This paper explores a longitudinal approach to combining engagement, performance, and social connectivity data from a MOOC using the framework of exponential random graph models (ERGMs). The idea is to model the social network in the discussion forum in a given week not only using performance (assignment scores) and overall engagement (lecture and discussion views) covariates within that week, but also using the same person-level covariates from adjacent previous and subsequent weeks. We find that over all eight weekly sessions, the social networks constructed from the forum interactions are relatively sparse and lack the tendency for preferential attachment. By analyzing data from the second week, we also find that individuals with higher performance scores from current, previous, and future weeks tend to be more connected in the social network. Engagement with lectures had significant but sometimes puzzling effects on social connectivity. However, the relationships between social connectivity, performance, and engagement weakened over time, and results were not stable across weeks.
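ERGMs themselves are typically fit with specialized tooling (e.g., the ergm package in R's statnet suite), but the descriptive properties the abstract reports — sparsity and the absence of preferential attachment — can be checked directly from an edge list. A minimal sketch in pure Python, using a hypothetical forum-reply network (the node names and edges are illustrative, not the paper's data):

```python
from collections import Counter

def density(n_nodes, edges):
    """Fraction of possible undirected ties that are actually present."""
    possible = n_nodes * (n_nodes - 1) / 2
    return len(edges) / possible if possible else 0.0

def degree_distribution(edges):
    """Count how many ties each node participates in."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

# Hypothetical forum-reply network for one weekly session: nodes are
# learners, an edge means at least one reply exchanged between them.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("d", "e")]

print(density(6, edges))           # 4 of 15 possible ties -> sparse
print(degree_distribution(edges))  # flat degrees -> no preferential attachment
```

A heavy-tailed degree distribution (a few hubs with most ties) would suggest preferential attachment; the roughly uniform degrees here illustrate the opposite pattern the authors report.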
Title: What can analytics contribute to accessibility in e-learning systems and to disabled students' learning?
Authors: M. Cooper, Rebecca Ferguson, A. Wolff
DOI: 10.1145/2883851.2883946
Published in: Proceedings of the Sixth International Conference on Learning Analytics & Knowledge, 2016-04-25
Abstract: This paper explores the potential of analytics for improving accessibility of e-learning and supporting disabled learners in their studies. A comparative analysis of completion rates of disabled and non-disabled students in a large five-year dataset is presented and a wide variation in comparative retention rates is characterized. Learning analytics enable us to identify and understand such discrepancies and, in future, could be used to focus interventions to improve retention of disabled students. An agenda for onward research, focused on Critical Learning Paths, is outlined. This paper is intended to stimulate a wider interest in the potential benefits of learning analytics for institutions as they try to assure the accessibility of their e-learning and provision of support for disabled students.
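The comparative analysis described above boils down to contrasting completion rates across two groups of learners. A minimal sketch with made-up counts (the figures below are hypothetical, not the paper's data):

```python
def completion_rate(completed, enrolled):
    """Completion rate as a percentage of enrolled students."""
    return 100.0 * completed / enrolled

# Hypothetical module-level counts, for illustration only.
disabled = {"enrolled": 400, "completed": 220}
non_disabled = {"enrolled": 5000, "completed": 3250}

gap = (completion_rate(non_disabled["completed"], non_disabled["enrolled"])
       - completion_rate(disabled["completed"], disabled["enrolled"]))
print(f"completion gap: {gap:.1f} percentage points")  # 65.0% vs 55.0%
```

Computing this gap per module over several years is what surfaces the "wide variation in comparative retention rates" the paper characterizes.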
Title: Affecting off-task behaviour: how affect-aware feedback can improve student learning
Authors: B. Grawemeyer, M. Mavrikis, Wayne Holmes, S. Santos, Michael Wiedmann, N. Rummel
DOI: 10.1145/2883851.2883936
Published in: Proceedings of the Sixth International Conference on Learning Analytics & Knowledge, 2016-04-25
Abstract: This paper describes the development and evaluation of an affect-aware intelligent support component that is part of a learning environment known as iTalk2Learn. The intelligent support component is able to tailor feedback according to a student's affective state, which is deduced both from speech and from interaction. The affect prediction is used to determine which type of feedback is provided and how that feedback is presented (interruptive or non-interruptive). The system includes two Bayesian networks that were trained with data gathered in a series of ecologically-valid Wizard-of-Oz studies, in which the effect of the type and presentation of feedback on students' affective states was investigated. This paper reports results from an experiment that compared a version that provided affect-aware feedback (affect condition) with one that provided feedback based on performance only (non-affect condition). Results show that students in the affect condition were less bored and less off-task, with the latter difference being statistically significant. Importantly, students in both conditions made statistically significant learning gains, and students in the affect condition had higher learning gains than those in the non-affect condition, although this difference was not statistically significant in this study's sample. Taken together, the results point to the potential and positive impact of affect-aware intelligent support.
Title: Impact of data collection on interpretation and evaluation of student models
Authors: Radek Pelánek, Jirí Rihák, Jan Papousek
DOI: 10.1145/2883851.2883868
Published in: Proceedings of the Sixth International Conference on Learning Analytics & Knowledge, 2016-04-25
Abstract: Student modeling techniques are evaluated mostly using historical data. Researchers typically do not pay attention to the details of how the data sets they use originated. However, the way data are collected can have an important impact on the evaluation and interpretation of student models. We discuss in detail two ways in which data collection in educational systems can influence results: mastery attrition bias and adaptive choice of items. We systematically discuss previous work related to these biases and illustrate the main points using both simulated and real data. We summarize specific consequences for practice -- not just for evaluating student models, but also for data collection and the publication of data sets.
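Mastery attrition bias is easy to reproduce in simulation: if learners stop practicing once they demonstrate mastery, the observed success rate *declines* with attempt number even when every learner's ability is constant, because the able learners leave the data early. A sketch under simple assumptions (fixed per-learner success probability, mastery defined as three consecutive correct answers; both are modeling choices, not the paper's exact setup):

```python
import random

def simulate_learner(p, mastery_run=3, max_attempts=30, rng=random):
    """Answer items until `mastery_run` consecutive correct, then stop."""
    outcomes, streak = [], 0
    for _ in range(max_attempts):
        correct = rng.random() < p
        outcomes.append(correct)
        streak = streak + 1 if correct else 0
        if streak >= mastery_run:
            break
    return outcomes

random.seed(0)
# Learners with *constant* ability: success probability never changes.
data = [simulate_learner(p) for p in [0.9] * 200 + [0.5] * 200]

# Success rate by attempt number declines, because strong learners master
# early and vanish from later attempts -- mastery attrition bias.
for k in range(5):
    at_k = [o[k] for o in data if len(o) > k]
    print(f"attempt {k + 1}: {sum(at_k) / len(at_k):.2f} (n={len(at_k)})")
```

A naive reading of such a curve would conclude that students get *worse* with practice; the bias comes entirely from who remains in the data at each attempt.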
Title: Combining click-stream data with NLP tools to better understand MOOC completion
Authors: S. Crossley, L. Paquette, M. Dascalu, D. McNamara, R. Baker
DOI: 10.1145/2883851.2883931
Published in: Proceedings of the Sixth International Conference on Learning Analytics & Knowledge, 2016-04-25
Abstract: Completion rates for massive open online courses (MOOCs) are notoriously low. Identifying student patterns related to course completion may help to develop interventions that can improve retention and learning outcomes in MOOCs. Previous research predicting MOOC completion has focused on click-stream data, student demographics, and natural language processing (NLP) analyses. However, most of these analyses have not taken full advantage of the multiple types of data available. This study combines click-stream data and NLP approaches to examine whether students' online activity and the language they produce in the discussion forum are predictive of successful class completion. We situate this analysis in a MOOC on educational data mining, using a subsample of 320 students who completed at least one graded assignment and produced at least 50 words in discussion forums. The findings indicate that a mix of click-stream data and NLP indices can predict with substantial accuracy (78%) whether students complete the MOOC. This predictive power suggests that student interaction data and language data within a MOOC can help us both to understand student retention in MOOCs and to develop automated signals of student success.
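The core move here — concatenating click-stream counts with NLP indices into one feature vector and training a classifier on completion labels — can be sketched with a hand-rolled logistic regression. The features, scaling, and toy data below are hypothetical; the paper's actual models and indices are richer:

```python
import math

def train_logreg(X, y, lr=0.1, epochs=2000):
    """Plain logistic regression fit by stochastic gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))       # predicted completion probability
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b >= 0 else 0

# Hypothetical per-student features, each scaled to [0, 1]:
# [lecture views, forum posts, mean forum-post word count] -> completed?
X = [[0.9, 0.8, 0.7], [0.8, 0.9, 0.9], [0.2, 0.1, 0.3],
     [0.1, 0.2, 0.1], [0.7, 0.6, 0.8], [0.3, 0.2, 0.2]]
y = [1, 1, 0, 0, 1, 0]

w, b = train_logreg(X, y)
print([predict(w, b, xi) for xi in X])  # recovers the training labels
```

In practice one would of course evaluate on held-out students (e.g., cross-validation) rather than the training set, as the paper does when reporting its 78% accuracy.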
Title: Measuring financial implications of an early alert system
Authors: Scott Harrison, R. Villano, G. Lynch, George S. Chen
DOI: 10.1145/2883851.2883923
Published in: Proceedings of the Sixth International Conference on Learning Analytics & Knowledge, 2016-04-25
Abstract: The prevalence of early alert systems (EAS) at tertiary institutions is increasing. These systems are designed to assist with targeted student support in order to improve student retention. They also require considerable human and capital resources to implement, with significant costs involved. It is therefore imperative that the systems can demonstrate quantifiable financial benefits to the institution. The purpose of this paper is to report on the financial implications of implementing an EAS at an Australian university as a case study. The case study institution implemented an EAS in 2011 using data generated from a data warehouse. The data set comprises 16,124 students enrolled between 2011 and 2013. Using a treatment effects approach, the study found that the cost of a student discontinuing was on average $4,687. Students identified by the EAS remained enrolled for longer, with the institution benefiting from approximately an additional $4,004 in revenue per student over the length of enrolment. All schools had a significant positive effect associated with the EAS, and the EAS showed significant value to the institution regardless of when a student was identified. The results indicate that the EAS had significant financial benefits to this institution and that the benefits extended to the entire institution beyond the first year of enrolment.
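The reported per-student figures lend themselves to a back-of-envelope cost-benefit check. The cohort size and system cost below are hypothetical placeholders (the paper does not report them here); only the two per-student dollar amounts come from the abstract:

```python
# Figures from the abstract:
cost_per_discontinuation = 4_687   # average revenue lost per discontinuing student ($)
extra_revenue_per_flagged = 4_004  # additional revenue per EAS-identified student ($)

# Hypothetical institution-level assumptions, for illustration only:
flagged_students = 1_000           # students identified by the EAS in a year
eas_annual_cost = 250_000          # annual cost of running the system ($)

benefit = flagged_students * extra_revenue_per_flagged
print(f"gross benefit: ${benefit:,}")
print(f"net benefit:   ${benefit - eas_annual_cost:,}")
```

Under these assumptions the system pays for itself many times over, which is the kind of institution-level argument the paper's treatment-effects analysis is designed to support rigorously.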
Title: The Dutch xAPI experience
Authors: Alan Berg, Maren Scheffel, H. Drachsler, Stefaan Ternier, M. Specht
DOI: 10.1145/2883851.2883968
Published in: Proceedings of the Sixth International Conference on Learning Analytics & Knowledge, 2016-04-25
Abstract: We present the experiences collected since 2012 by the Dutch Special Interest Group (SIG) for Learning Analytics in applying the xAPI standard. We have been experimenting and exchanging best practices around the application of xAPI in various contexts. The practices include different design patterns centered around Learning Record Stores. We present three projects that apply xAPI in very different ways and publish a consistent set of xAPI recipes.
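For readers unfamiliar with the standard, the unit that xAPI recipes constrain is the statement — an "actor verb object" triple serialized as JSON and sent to a Learning Record Store. A minimal sketch (the learner name, email, and activity URI are illustrative; the verb URI is one of the standard ADL verbs):

```python
import json

# A minimal xAPI statement: who did what to which activity.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.org",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.org/courses/la101/module-1",
        "definition": {"name": {"en-US": "Module 1"}},
    },
}

print(json.dumps(statement, indent=2))
```

A "recipe" in the SIG's sense is an agreement on which verbs, activity types, and extensions to use for a given scenario, so that statements from different systems land in the Learning Record Store in a comparable shape.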
Title: Teaching analytics: towards automatic extraction of orchestration graphs using wearable sensors
Authors: L. Prieto, K. Sharma, P. Dillenbourg, M. Rodríguez-Triana
DOI: 10.1145/2883851.2883927
Published in: Proceedings of the Sixth International Conference on Learning Analytics & Knowledge, 2016-04-25
Abstract: 'Teaching analytics' is the application of learning analytics techniques to understand teaching and learning processes, and eventually enable supportive interventions. However, in the case of (often half-improvised) teaching in face-to-face classrooms, such interventions would first require an understanding of what the teacher actually did, as the starting point for teacher reflection and inquiry. Currently, such characterization of teacher enactment requires costly manual coding by researchers. This paper presents a case study exploring the potential of machine learning techniques to automatically extract teaching actions during classroom enactment from five data sources collected using wearable sensors (eye-tracking, EEG, accelerometer, audio, and video). Our results highlight the feasibility of this approach, with high accuracy in determining the social plane of interaction (90%, κ=0.8). Reliably detecting the concrete teaching activity (e.g., explanation vs. questioning) remains challenging (67%, κ=0.56), a fact that will prompt further research on multimodal features and models for teaching activity extraction, as well as the collection of a larger multimodal dataset to improve the accuracy and generalizability of these methods.
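The κ values reported above are Cohen's kappa, which corrects raw accuracy for chance agreement between the gold coding and the classifier's output. A sketch with hypothetical coded segments (the labels below stand in for social planes such as individual / group / whole-class work; they are not the paper's data):

```python
def cohens_kappa(gold, pred):
    """Agreement corrected for chance: kappa = (p_o - p_e) / (1 - p_e)."""
    labels = set(gold) | set(pred)
    n = len(gold)
    p_o = sum(g == p for g, p in zip(gold, pred)) / n   # observed agreement
    p_e = sum((gold.count(l) / n) * (pred.count(l) / n)  # chance agreement
              for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical segments: gold coding vs. classifier output for the social
# plane of interaction (I = individual, G = group, C = whole class).
gold = ["C", "C", "G", "G", "I", "I", "C", "G", "I", "C"]
pred = ["C", "C", "G", "G", "I", "G", "C", "G", "I", "C"]

print(round(cohens_kappa(gold, pred), 2))  # → 0.85
```

Note how 90% raw agreement here yields κ=0.85: kappa discounts the agreement a classifier would achieve just by matching the label frequencies, which is why it accompanies the accuracy figures in the abstract.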