Designing a Cloud-Based Assessment Model
DOI: 10.4018/978-1-7998-0420-8.ch020
T. Rickards, A. Steele
A cloud-based assessment learning environment exists when the collaborative sharing features of cloud computing tools (e.g. Google Docs) are utilised for continuous assessment of student learning activity over an extended period of time. This chapter describes a success story from a New Zealand polytechnic, which used a multi-method approach to investigate student perceptions of a cloud assessment learning environment. The learning environment factors examined in this chapter include progress monitoring, cloud tools (i.e. Google Docs), feedback, cloud storage, technology preference, student achievement, and student engagement. The chapter not only describes this unique learning environment but also provides clear insight into student perceptions of it. In conclusion, the chapter offers outcomes that may be utilised to improve pedagogy and student outcomes in a STEM-based multimedia learning environment.
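As an illustration of the progress-monitoring idea, a shared document's revision history can be read programmatically. The following is a minimal Python sketch using the Google Drive v3 API (google-api-python-client); the file ID is a placeholder and the OAuth credential setup is elided, so this is an assumed usage pattern rather than the chapter's actual tooling.

from googleapiclient.discovery import build

def list_revisions(creds, file_id):
    """Return (modifiedTime, lastModifyingUser) pairs for each saved revision."""
    service = build("drive", "v3", credentials=creds)
    resp = service.revisions().list(
        fileId=file_id,  # placeholder ID of the shared Google Doc
        fields="revisions(modifiedTime,lastModifyingUser/displayName)",
    ).execute()
    return [(r.get("modifiedTime"), r.get("lastModifyingUser", {}).get("displayName"))
            for r in resp.get("revisions", [])]

# Hypothetical usage, assuming `creds` came from the usual OAuth flow:
# for when, who in list_revisions(creds, "your-doc-file-id"):
#     print(when, who)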
{"title":"Designing a Cloud-Based Assessment Model","authors":"T. Rickards, A. Steele","doi":"10.4018/978-1-7998-0420-8.ch020","DOIUrl":"https://doi.org/10.4018/978-1-7998-0420-8.ch020","url":null,"abstract":"A cloud based assessment learning environment exists when the collaborative sharing features of cloud computing tools (e.g. Google Docs) are utilised for a continuous assessment of student learning activity over an extended period of time. This chapter describes a New Zealand Polytechnic based success story which utilised a multi-method approach to investigate student perceptions of a cloud assessment learning environment. The learning environment factors that are examined in this chapter include progress monitoring, cloud tools (i.e. Google Docs), feedback, cloud storage, technology preference, student achievement, and student engagement. This chapter not only describes this unique learning environment, it also provides a clear insight into student perceptions of the cloud assessment learning environment. In concluding, the chapter provides some outcomes that may be utilised to improve pedagogy and student outcomes in a STEM based multimedia learning environment.","PeriodicalId":320077,"journal":{"name":"Learning and Performance Assessment","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134324025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding Your Learner
DOI: 10.4018/978-1-7998-0420-8.ch072
T. Souders
Now more than ever before, health care educators are being challenged to meet the complex and dynamic needs of an expanding health care workforce. Continuing education requirements as well as graduate and undergraduate programs are striving to keep pace with the demands for more highly skilled health care professionals. Likewise, technology and related instructional media have been evolving at an exponential pace. The confluence of these variables requires health care educators to be knowledgeable about the options and tools available to design and deliver instruction using a variety of platforms in more diverse settings. In order to ensure that instruction achieves its intended goals, it is imperative to fully assess the learner characteristics of the target audience. The purpose of this chapter is to discuss the rationale for conducting a learner analysis and utilizing learner characteristics in designing effective instruction.
{"title":"Understanding Your Learner","authors":"T. Souders","doi":"10.4018/978-1-7998-0420-8.ch072","DOIUrl":"https://doi.org/10.4018/978-1-7998-0420-8.ch072","url":null,"abstract":"Now more than ever before, health care educators are being challenged to meet the complex and dynamic needs of an expanding health care workforce. Continuing education requirements as well as graduate and undergraduate programs are striving to keep pace with the demands for more highly skilled health care professionals. Likewise, technology and related instructional media have been evolving at an exponential pace. The confluence of these variables requires health care educators to be knowledgeable about the options and tools available to design and deliver instruction using a variety of platforms in more diverse settings. In order to ensure that instruction achieves its intended goals, it is imperative to fully assess the learner characteristics of the target audience. The purpose of this chapter is to discuss the rationale for conducting a learner analysis and utilizing learner characteristics in designing effective instruction.","PeriodicalId":320077,"journal":{"name":"Learning and Performance Assessment","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130495475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Serious Games for Students' E-Assessment Literacy in Higher Education
DOI: 10.4018/978-1-5225-0531-0.CH014
María Soledad Ibarra-Sáiz, Gregorio Rodríguez-Gómez
This chapter presents partial results from the DevalS Project (Developing Sustainable Assessment – Improving Students' Assessment Competence through Virtual Simulations), financed by the Spanish Ministry of Economy and Competitiveness (Ref. EDU2012-31804). The results focus on the use and usefulness of serious games for e-assessment literacy from the students' point of view. First, the chapter introduces the project. Second, it reviews the serious games that have been developed and implemented in different undergraduate courses. Finally, it presents the results and conclusions of surveys undertaken by students.
{"title":"Serious Games for Students' E-Assessment Literacy in Higher Education","authors":"María Soledad Ibarra-Sáiz, Gregorio Rodríguez-Gómez","doi":"10.4018/978-1-5225-0531-0.CH014","DOIUrl":"https://doi.org/10.4018/978-1-5225-0531-0.CH014","url":null,"abstract":"In this chapter it will present partial results from the DevalS Project (Developing Sustainable Assessment – Improving Students' Assessment Competence through Virtual Simulations), financed by the Spanish Ministry of Economy and Competitiveness (Ref. EDU2012-31804). The results will be focused on the use and usefulness of serious games for e-assessment literacy from a students' point of view. Firstly, it will introduce the project. Secondly, it will review the serious games that have been developed and implemented in different undergraduate courses. Finally, it will present the results and conclusions of surveys undertaken by students.","PeriodicalId":320077,"journal":{"name":"Learning and Performance Assessment","volume":"142 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123400509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Student Participation in Assessment Processes
DOI: 10.4018/978-1-7998-0420-8.ch058
Victoria Quesada, Eduardo García-Jiménez, M. Gómez-Ruiz
The participation of students in higher education assessment processes has been shown to have many benefits. However, there is a diverse range of techniques and options for implementing participative assessment, each offering new possibilities. This chapter focuses on student participation in assessment processes and explores the main stages at which it can be developed: in design, in implementation, and in grading. It also considers the different modalities that can be used, especially self-assessment, peer assessment, and co-assessment, and the three stages that characterise them. Finally, it analyses three experiences of student participation in higher education assessment, highlighting their strengths and weaknesses. These experiences show how participative assessment can be developed in everyday classes, in groups or individually, and how it can occur in different class settings. They also demonstrate the importance of design and assessment literacy, and some difficulties that might appear during the process.
{"title":"Student Participation in Assessment Processes","authors":"Victoria Quesada, Eduardo García-Jiménez, M. Gómez-Ruiz","doi":"10.4018/978-1-7998-0420-8.ch058","DOIUrl":"https://doi.org/10.4018/978-1-7998-0420-8.ch058","url":null,"abstract":"The participation of students in higher education assessment processes has been proven to have many benefits. However, there is a diverse range of techniques and options when implementing participative assessment, with each offering new possibilities. This chapter focuses on the topic of student participation in assessment processes, and it explores the main stages when it can be developed: participation in design, during implementation, and in grading. This chapter also considers the different modalities that can be used, especially self-assessment, peer assessment, and co-assessment and the three stages that characterise them. Finally, it analyses three experiences of student participation in higher education assessment, highlighting their strengths and weaknesses. These experiences show how participative assessment can be developed in everyday classes, in groups, or individually and how participative assessment can occur in different class settings. They also demonstrate the importance of design, assessment literacy, and some difficulties that might appear during the process.","PeriodicalId":320077,"journal":{"name":"Learning and Performance Assessment","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115756852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing Assessment, Assessing Instructional Design
DOI: 10.4018/978-1-5225-9279-2.ch047
Stefanie Panke
Assessment plays a vital role in delivering, evaluating, monitoring, improving, and shaping learning experiences on the Web, at the desk, and in the classroom. In orchestrating educational technologies, instructional designers are often confronted with the challenge of designing or deploying creative and authentic assessment techniques. For an instructional designer, the focus of assessment can be on individual learning, organizational improvement, or the evaluation of educational technologies. A common question across these domains is how to translate pedagogical concepts such as authenticity and creativity into concrete practical applications and metrics. Educational technologies can support creative processes and offer connections to authentic contexts, just as they can curtail creativity and foster standardized testing routines. The chapter discusses theoretical frameworks and provides examples of the conceptual development and implementation of assessment approaches in three areas: needs assessment, impact assessment, and classroom assessment.
{"title":"Designing Assessment, Assessing Instructional Design","authors":"Stefanie Panke","doi":"10.4018/978-1-5225-9279-2.ch047","DOIUrl":"https://doi.org/10.4018/978-1-5225-9279-2.ch047","url":null,"abstract":"Assessment plays a vital role in delivering, evaluating, monitoring, improving and shaping learning experiences on the Web, at the desk and in the classroom. In the process of orchestrating educational technologies instructional designers are often confronted with the challenge of designing or deploying creative and authentic assessment techniques. For an instructional designer, the focus of assessment can be on individual learning, organizational improvement or the evaluation of educational technologies. A common question across these domains is how to translate pedagogical concepts such as authenticity and creativity into concrete practical applications and metrics. Educational technologies can support creative processes and offer connections to authentic contexts, just as well as they can curtail creativity and foster standardized testing routines. The chapter discusses theoretical frameworks and provides examples of the conceptual development and implementation of assessment approaches in three different areas: Needs assessment, impact assessment and classroom assessment.","PeriodicalId":320077,"journal":{"name":"Learning and Performance Assessment","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114307988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Authentic Assessment
DOI: 10.4018/978-1-7998-0420-8.ch054
Simona Iftimescu, Romiță Iucu, Elena Marin, Mihaela Stingu
The purpose of this chapter is to analyze and discuss the concept of authentic assessment at the Master's degree level. First, the chapter attempts to provide a better understanding of the Master's program within the context of the Bologna system, offering a short historical perspective on the evolution of the Bologna process and trying to identify its true beneficiaries. The chapter also addresses some of the challenges of the assessment process around two main themes: the types and the aims of assessment. Furthermore, the authors focus on the role of authentic assessment at the Master's degree level, as reflected in students' perceptions and correlated with its intended purpose. Drawing on the findings, the authors attempt to describe what authentic assessment is and what it should be at the Master's degree level.
{"title":"Authentic Assessment","authors":"Simona Iftimescu, Romiță Iucu, Elena Marin, Mihaela Stingu","doi":"10.4018/978-1-7998-0420-8.ch054","DOIUrl":"https://doi.org/10.4018/978-1-7998-0420-8.ch054","url":null,"abstract":"The purpose of this chapter is to analyze and discuss the concept of authentic assessment at Master's degree level. Firstly, this chapter attempts to provide a better understanding of the Master's program within the context of the Bologna system by providing a short historical perspective on the evolution of the Bologna process, as well as trying to identify the true beneficiaries. The chapter also addresses some of the challenges of the assessment process with two main themes: types and aim of the assessment process. Furthermore, the authors focus on the role of the authentic assessment, at a Master's degree level – as reflected by students' perception and correlated with its intended purpose. Drawing on the findings, the authors attempt to shape a description of what authentic assessment is and what it should be at Master's degree level.","PeriodicalId":320077,"journal":{"name":"Learning and Performance Assessment","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114428226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Formative Assessment and Preservice Elementary Teachers' Mathematical Justification
DOI: 10.4018/978-1-7998-0420-8.ch018
Alden J. Edson, D. Rogers, C. Browning
The focus of this chapter is on elementary preservice teachers' (PSTs') use of justification in problem-solving contexts, based on a semester-long algebra course designed for elementary education mathematics minors. Formative assessment and digital tools facilitated the development of PSTs' understanding and use of justification in algebraic topics. The instructional model used includes the following components: negotiating "taken-as-shared" justification rubric criteria; engaging in problem solving; preparing, digitally recording, and posting justification videos to the cloud; and finally, listening and sharing descriptive feedback on the posted videos. VoiceThread was the digital venue for the preservice teachers to listen to their peers' justifications and post descriptive feedback. Findings from an analysis of one group focus on the PSTs' peer- and self-feedback as it developed through the semester, and on the PSTs' ability to provide a range of descriptive feedback with the potential to promote growth in the understanding and use of mathematical justification.
{"title":"Formative Assessment and Preservice Elementary Teachers' Mathematical Justification","authors":"Alden J. Edson, D. Rogers, C. Browning","doi":"10.4018/978-1-7998-0420-8.ch018","DOIUrl":"https://doi.org/10.4018/978-1-7998-0420-8.ch018","url":null,"abstract":"The focus of this chapter is on elementary preservice teachers' (PSTs') use of justification in problem-solving contexts based on a semester algebra course designed for elementary education mathematics minors. Formative assessment and digital tools facilitated the development of PSTs' understanding and use of justification in algebraic topics. The instructional model used includes the following components: negotiating a “taken-as-shared” justification rubric criteria; engaging in problem solving; preparing, digitally recording, and posting justification videos to the Cloud; and finally, listening and sharing descriptive feedback on the posted videos. VoiceThread was the digital venue for the preservice teachers to listen to their peers' justifications and post descriptive feedback. Findings from an analysis of a group focus on the PSTs' peer- and self-feedback as it developed through a semester and the PSTs' ability to provide a range of descriptive feedback with the potential to promote growth in the understanding and use of mathematical justification.","PeriodicalId":320077,"journal":{"name":"Learning and Performance Assessment","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121712999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Examining the Psychometric Properties of the Standards Assessment Inventory
DOI: 10.4018/IJTEPD.2018010106
William R. Merchant, K. Ciampa, Zora Wolfe
The purpose of this article is to assess the psychometric properties of the Standards Assessment Inventory (SAI) in order to confirm its construct validity using modern statistical procedures. The SAI is a 50-item assessment designed to measure the degree to which professional development programs align with seven factors related to “high quality” teacher learning (Learning Forward, 2011). These seven factors are Learning Communities, Leadership, Resources, Data, Learning Design, Implementation, and Outcomes. In their original evaluation of the factor structure of the SAI, Learning Forward (2011) tested one model containing all 50 items loading onto a single factor, and seven individual factor models, each containing one of the seven standards of professional development. To date there has been no published report related to the psychometric properties of a seven-factor model, which allows each of the seven standards to covary. The initial test of this model produced a poor fit, after which a series of modifications were attempted to improve the functioning of the SAI. After all meaningful modifications were added, the overall fit of the SAI was still outside of a range that would suggest a statistically valid measurement model. Suggestions for SAI modification and use are made as they relate to these findings.
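The seven-factor model in which all factors covary can be specified as a confirmatory factor analysis. Below is a minimal Python sketch using the semopy library; the item names, the item-to-standard mapping, and the data file are invented placeholders, since the actual SAI item assignments are not reproduced here.

from itertools import combinations

import pandas as pd
from semopy import Model, calc_stats

# Placeholder mapping of the seven Learning Forward standards to item columns
# (six standards x 7 items + one x 8 items = 50 items, matching the SAI's length).
factors = {
    "LearningCommunities": [f"lc{i}" for i in range(1, 8)],
    "Leadership":          [f"ld{i}" for i in range(1, 8)],
    "Resources":           [f"rs{i}" for i in range(1, 8)],
    "Data":                [f"dt{i}" for i in range(1, 8)],
    "LearningDesign":      [f"lg{i}" for i in range(1, 8)],
    "Implementation":      [f"im{i}" for i in range(1, 8)],
    "Outcomes":            [f"ou{i}" for i in range(1, 9)],
}

# Measurement model: each standard loads on its own items.
lines = [f"{name} =~ " + " + ".join(items) for name, items in factors.items()]
# Let every pair of factors covary -- the seven-factor model the article tests.
lines += [f"{a} ~~ {b}" for a, b in combinations(factors, 2)]

df = pd.read_csv("sai_responses.csv")  # hypothetical 50-column response file
model = Model("\n".join(lines))
model.fit(df)
print(calc_stats(model).T)  # chi-square, CFI, TLI, RMSEA, etc.

Fit indices such as CFI and RMSEA printed by calc_stats are what one would compare against conventional cut-offs to reach the kind of poor-fit conclusion the article reports.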
{"title":"Examining the Psychometric Properties of the Standards Assessment Inventory","authors":"William R. Merchant, K. Ciampa, Zora Wolfe","doi":"10.4018/IJTEPD.2018010106","DOIUrl":"https://doi.org/10.4018/IJTEPD.2018010106","url":null,"abstract":"The purpose of this article is to assess the psychometric properties of the Standards Assessment Inventory (SAI) in order to confirm its construct validity using modern statistical procedures. The SAI is a 50-item assessment designed to measure the degree to which professional development programs align with seven factors related to “high quality” teacher learning (Learning Forward, 2011). These seven factors are Learning Communities, Leadership, Resources, Data, Learning Design, Implementation, and Outcomes. In their original evaluation of the factor structure of the SAI, Learning Forward (2011) tested one model containing all 50 items loading onto a single factor, and seven individual factor models, each containing one of the seven standards of professional development. To date there has been no published report related to the psychometric properties of a seven-factor model, which allows each of the seven standards to covary. The initial test of this model produced a poor fit, after which a series of modifications were attempted to improve the functioning of the SAI. After all meaningful modifications were added, the overall fit of the SAI was still outside of a range that would suggest a statistically valid measurement model. Suggestions for SAI modification and use are made as they relate to these findings.","PeriodicalId":320077,"journal":{"name":"Learning and Performance Assessment","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121869823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Case Study of Peer Assessment in a Composition MOOC
DOI: 10.4018/978-1-7998-0420-8.ch043
L. Vu
Enrollments of many thousands of students in MOOCs seem to exceed the assessment capacity of instructors; the inability of instructors to grade so many papers is likely responsible for MOOCs turning to peer assessment. However, there has been little empirical research on peer assessment in MOOCs, especially composition MOOCs. This study addressed issues in peer assessment in a composition MOOC, particularly students' perceptions and peer-grading scores versus instructor-grading scores. The findings provide evidence that peer assessment was well received by the majority of students, although many students also expressed negative feelings about the activity. Statistical analysis shows significant differences between the grades given by students and those given by the instructors: the grades students awarded to their peers tended to be higher than the instructor-assigned grades. Based on the results, the study concludes with implications for peer assessment in a composition MOOC context.
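The peer-versus-instructor comparison is a paired design: each submission receives both a peer grade and an instructor grade. A minimal Python sketch of that kind of analysis, using made-up illustrative scores rather than the study's data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n = 120                                                  # hypothetical number of essays
instructor = rng.normal(78, 8, n).clip(0, 100)           # instructor grades
peer = (instructor + rng.normal(4, 5, n)).clip(0, 100)   # peers grade higher on average

# Paired t-test on the same essays graded by both parties.
t, p = stats.ttest_rel(peer, instructor)
print(f"mean peer - instructor difference: {np.mean(peer - instructor):+.2f}")
print(f"paired t = {t:.2f}, p = {p:.4f}")

# Non-parametric check on the same pairs.
w, p_w = stats.wilcoxon(peer, instructor)
print(f"Wilcoxon W = {w:.1f}, p = {p_w:.4f}")

A significant positive mean difference would correspond to the study's finding that peer-awarded grades ran higher than instructor-assigned ones.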
{"title":"A Case Study of Peer Assessment in a Composition MOOC","authors":"L. Vu","doi":"10.4018/978-1-7998-0420-8.ch043","DOIUrl":"https://doi.org/10.4018/978-1-7998-0420-8.ch043","url":null,"abstract":"The large enrollments of multiple thousands of students in MOOCs seem to exceed the assessment capacity of instructors; therefore, the inability for instructors to grade so many papers is likely responsible for MOOCs turning to peer assessment. However, there has been little empirical research about peer assessment in MOOCs, especially composition MOOCs. This study aimed to address issues in peer assessment in a composition MOOC, particularly the students' perceptions and the peer-grading scores versus instructor-grading scores. The findings provided evidence that peer assessment was well received by the majority of students although many students also expressed negative feelings about this activity. Statistical analysis shows that there were significant differences in the grades given by students and those given by the instructors, which means the grades the students awarded to their peers tended to be higher in comparison to the instructor-assigned grades. Based on the results, this study concludes with implementations for peer assessment in a composition MOOC context.","PeriodicalId":320077,"journal":{"name":"Learning and Performance Assessment","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121984305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Competency-Based Assessment
DOI: 10.4018/978-1-7998-0420-8.CH006
M. K. Idrissi, Meriem Hnida, S. Bennani
Competency-based Assessment (CBA) is the measurement of a student's competency against a standard of performance. It is a process of collecting evidence to analyze a student's progress and achievement. In higher education, Competency-based Assessment puts the focus on learning outcomes to constantly improve academic programs and meet labor market demands. To date, competencies are described in natural language but rarely used in e-learning systems, and the common-sense idea is that the way a competency is defined shapes the way it is conceptualized, implemented, and assessed. The main objective of this chapter is to introduce and discuss Competency-based Assessment from methodological and technical perspectives. More specifically, the objective is to highlight ongoing issues regarding competency assessment in higher education in the 21st century, to emphasize the benefits of its implementation, and finally to discuss some competency modeling and assessment techniques.
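The chapter's observation that a competency's definition shapes its implementation can be made concrete: once a competency is expressed as named performance criteria with evidence attached, achievement becomes computable. A minimal Python sketch under invented assumptions (the criteria, the 0-1 scoring, and the 0.8 mastery threshold are illustrative, not from the chapter):

from dataclasses import dataclass, field

@dataclass
class Evidence:
    criterion: str   # which performance criterion this evidence addresses
    score: float     # normalized score in [0, 1] from one assessment task
    source: str      # e.g. "quiz-3", "project-rubric"

@dataclass
class Competency:
    name: str
    criteria: list[str]                     # observable performance criteria
    mastery_threshold: float = 0.8          # assumed cut-off, not a standard
    evidence: list[Evidence] = field(default_factory=list)

    def record(self, ev: Evidence) -> None:
        if ev.criterion not in self.criteria:
            raise ValueError(f"unknown criterion: {ev.criterion}")
        self.evidence.append(ev)

    def achieved(self) -> bool:
        # Mastery here = every criterion has evidence whose mean score clears
        # the threshold; real CBA schemes may weight evidence differently.
        for c in self.criteria:
            scores = [e.score for e in self.evidence if e.criterion == c]
            if not scores or sum(scores) / len(scores) < self.mastery_threshold:
                return False
        return True

comp = Competency("Analyse a dataset", ["choose method", "interpret results"])
comp.record(Evidence("choose method", 0.9, "quiz-3"))
comp.record(Evidence("interpret results", 0.85, "project-rubric"))
print(comp.achieved())  # True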
{"title":"Competency-Based Assessment","authors":"M. K. Idrissi, Meriem Hnida, S. Bennani","doi":"10.4018/978-1-7998-0420-8.CH006","DOIUrl":"https://doi.org/10.4018/978-1-7998-0420-8.CH006","url":null,"abstract":"Competency-based Assessment (CBA) is the measurement of student's competency against a standard of performance. It is a process of collecting evidences to analyze student's progress and achievement. In higher education, Competency-based Assessment puts the focus on learning outcomes to constantly improve academic programs and meet labor market demands. As of to date, competencies are described using natural language but rarely used in e-learning systems, and the common sense idea is that: the way competency is defined shapes the way it is conceptualized, implemented and assessed. The main objective of this chapter is to introduce and discuss Competency-based Assessment from a methodological and technical perspectives. More specifically, the objective is to highlight ongoing issues regarding competency assessment in higher education in the 21st century, to emphasis the benefits of its implementation and finally to discuss some competency modeling and assessment techniques.","PeriodicalId":320077,"journal":{"name":"Learning and Performance Assessment","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129502556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}