Pub Date: 2020-12-15, DOI: 10.1109/ICCES51560.2020.9334637
Abdelfatah M. Mohamed, A. S. T. E. Dein, Reham S. Saad
Visible light communication (VLC) is considered one of the newest wireless communication technologies, providing transmitted signals with a high data rate. Orthogonal frequency division multiplexing (OFDM) is used extensively in VLC because it supports high-speed transmission and gets rid of inter-symbol interference [1]. Enhancing OFDM with wavelets based on the discrete wavelet transform (DWT) has achieved remarkable performance compared with conventional OFDM, thanks to attractive wavelet characteristics. OFDM suffers from a high peak-to-average power ratio (PAPR), which reduces system efficiency, especially in VLC systems, where the nonlinearity of the light-emitting diode (LED) distorts the signal. Wavelets have been shown to improve PAPR reduction of the radiated signal. In our study, an exponential companding PAPR-reduction technique is applied to wavelet-OFDM, achieving a further reduction in the PAPR of the transmitted signal.
{"title":"PAPR Reduction of Wavelet-OFDM Signals Using Exponential Companding in Visible Light Communications","authors":"Abdelfatah M. Mohamed, A. S. T. E. Dein, Reham S. Saad","doi":"10.1109/ICCES51560.2020.9334637","DOIUrl":"https://doi.org/10.1109/ICCES51560.2020.9334637","url":null,"abstract":"Visible light communication (VLC) is considered one of the newest wireless communication technologies that provides transmitted signals with high data rate. Orthogonal frequency division multiplexing (OFDM) is used extensively in VLC due to its ability to support high speed transmission and it gets ride of the inter-symbol interference [1]. OFDM enhancement with wavelet based on discrete wavelet transform (DWT) has achieved remarkable performance compared to OFDM due to interesting wavelet characteristics. OFDM suffers from the issue of peak to average power ratio (PAPR) which reduces the system efficiency especially in VLC system due to the nonlinearity of light emitting diode (LED) which distorts the signal. Wavelet proved an improvement in PAPR reduction of the radiated signal. In our study, an exponential companding PAPR reduction technique is applied on a wavelet-OFDM which satisfies better enhancement in PAPR reduction of the transmitted signal.","PeriodicalId":247183,"journal":{"name":"2020 15th International Conference on Computer Engineering and Systems (ICCES)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133985218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-15, DOI: 10.1109/ICCES51560.2020.9334614
Sherif Zeyada, M. Eladawy, Manal A. Ismail, H. Keshk
Automatically recognizing the prosody, or arud, of modern Arabic poems is a major challenge, not least because very little work exists in this field. We found only a few publications on recognizing the arud of classical poems and, as far as we know, none on applying machine learning techniques to recognize the arud of modern poems, where each verse has no definite number of feet, there is no fixed rhyme, and meters are mixed. In this paper, we introduce a new artificial-intelligence-based system, called “IMAP”, to identify and recognize the arud of modern Arabic poems. A tafhela (foot) is a group of syllables that forms a prosodic unit regardless of word boundaries. The accuracy of our proposed algorithm was 99%.
{"title":"A Proposed System for the Identification of Modem Arabic Poetry Meters (IMAP)","authors":"Sherif Zeyada, M. Eladawy, Manal A. Ismail, H. Keshk","doi":"10.1109/ICCES51560.2020.9334614","DOIUrl":"https://doi.org/10.1109/ICCES51560.2020.9334614","url":null,"abstract":"Recognizing the prosody or arud of modern Arabic poems in an automatic way is a big challenge. One of these challenges is the rare work in this field. We found very few publications on recognizing the classical poems arud, but, as far as we know we did not find any publication on applying machine learning techniques to recognize the arud for modern poems where there are no definite numbers of feet for each verse, no fixed rhyme, and meters are mixed. In this paper, we introduce a new system using Artificial intelligence called “IMAP” to identify and recognize the arud for modern Arabic poems. Tafhela (Foot) is a group of syllables that form a prosodic unit regardless of word boundaries. The accuracy of our proposed algorithm was 99%.","PeriodicalId":247183,"journal":{"name":"2020 15th International Conference on Computer Engineering and Systems (ICCES)","volume":"228 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114380756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-15, DOI: 10.1109/icces51560.2020.9334631
{"title":"Session CN1: Computer Networks & Security I","authors":"","doi":"10.1109/icces51560.2020.9334631","DOIUrl":"https://doi.org/10.1109/icces51560.2020.9334631","url":null,"abstract":"","PeriodicalId":247183,"journal":{"name":"2020 15th International Conference on Computer Engineering and Systems (ICCES)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114807712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-15, DOI: 10.1109/icces51560.2020.9334606
{"title":"Session IOT: Internet of Things","authors":"","doi":"10.1109/icces51560.2020.9334606","DOIUrl":"https://doi.org/10.1109/icces51560.2020.9334606","url":null,"abstract":"","PeriodicalId":247183,"journal":{"name":"2020 15th International Conference on Computer Engineering and Systems (ICCES)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130644440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-15, DOI: 10.1109/icces51560.2020.9334676
{"title":"ICCES 2020 List Reviewers Page","authors":"","doi":"10.1109/icces51560.2020.9334676","DOIUrl":"https://doi.org/10.1109/icces51560.2020.9334676","url":null,"abstract":"","PeriodicalId":247183,"journal":{"name":"2020 15th International Conference on Computer Engineering and Systems (ICCES)","volume":"149 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127266835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-15, DOI: 10.1109/ICCES51560.2020.9334569
Mohamed Elsayed, Hatem M. Abdelkader, A. Abdelwahab
In recent times, big data has been changing the way workplaces operate and think by improving performance in knowledge discovery and decision making. Ever-greater volumes of data are being produced by networks of sensors and communication technologies. Heterogeneous data is a category of unstructured data that arrives in several forms and at an unpredictable pace. Current data-analysis techniques are inadequate for the huge volumes of data produced; such data is difficult to manage, store, handle, interpret, and analyze using traditional techniques. Deep learning (DL) is extremely popular among data scientists and experts thanks to its high precision in speech recognition, image processing, and data analytics. DL has become even more important because it can be applied to large-scale heterogeneous data. It has been applied efficiently in several fields and has outperformed most traditional techniques; DL algorithms can learn from large amounts of unlabeled data and have the ability to select features. This study concentrates on discussing a variety of new algorithms that handle such data and DL models that provide greater accuracy on heterogeneous data.
{"title":"Deep Learning Models for Heterogeneous Big Data Analytics","authors":"Mohamed Elsayed, Hatem M. Abdelkader, A. Abdelwahab","doi":"10.1109/ICCES51560.2020.9334569","DOIUrl":"https://doi.org/10.1109/ICCES51560.2020.9334569","url":null,"abstract":"in recent times, Big data is modifying the style life of workplaces and thinking by improved performance in knowledge discovering and decision making ever-greater volumes of data are being produced data due to the network of sensors and communication technologies Heterogeneous data is a category of unstructured data with an unknown pace in several ways. Current data analysis techniques are inadequate to handle the huge volumes of data produced, this data difficult to manage, store, handle, interpret, analyze using traditional techniques. Deep learning (DL) is extremely popular among many data scientists and experts thanks to the high precision in speech recognition, image handling, and data analytics. DL has become much more important because it can be used for largescale heterogeneous data. DL has been applied efficiently in several fields and has exceeded most of the traditional techniques, DL algorithmic can study large unclassified data with the ability to select features. This study concentrates on the discussion of a variety of new algorithms that handle this data and DL models that provide greater accuracy for heterogeneous data.","PeriodicalId":247183,"journal":{"name":"2020 15th International Conference on Computer Engineering and Systems (ICCES)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127149957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-15, DOI: 10.1109/ICCES51560.2020.9334577
Mohamed M. Elmoslemany, A. S. Eldien, Mazen M.Selim
Software Defined Networking (SDN) is a new network architecture that decouples the control plane from the data plane. It separates routing from forwarding functions by using a controller and provides programmability for the data plane. The SDN controller is the intelligent part responsible for flow paths and optimized network performance. Today, several OpenFlow controllers are in use on the market and in research, so we must evaluate and select the controller that satisfies our requirements and performs our tasks. Performance and capabilities are important factors in selecting a controller. This paper presents a performance analysis of several open-source controllers, such as OpenDaylight, ONOS, Ryu, and POX, based on indicators such as throughput and latency. We benchmark the controllers using a tool called Cbench under different parameters. These analyses serve as a reference and support decision making when selecting a controller. Finally, we discuss the research details and our findings from the SDN controller testbeds. We found that the ONOS controller has the best throughput, POX has the lowest latency, and ONOS has the best scalability.
{"title":"Performance Analysis in Software Defined Network Controllers","authors":"Mohamed M. Elmoslemany, A. S. Eldien, Mazen M.Selim","doi":"10.1109/ICCES51560.2020.9334577","DOIUrl":"https://doi.org/10.1109/ICCES51560.2020.9334577","url":null,"abstract":"Software Defined Networking (SDN) is a new network architecture that decouples the control plane from the data plane. It separates routing from forwarding functions by using a controller and provides programmability for the data plane. SDN Controller is the intelligent part that is responsible for the Flow Path and the optimized Network Performance. Today, there are several OpenFlow controllers currently used in marketing and research. Thus, we must verify and select which controller will satisfy our requirement and performs our tasks. Performance and capabilities are important factors for selecting the controller. This paper presents and studies performance analysis for several open-source Controllers such as OpenDaylight, ONOS, Ryu and POX, based on indicators such as throughput and latency. We benchmark Controllers by using a tool called Cbench according to different parameters. These analyses will be a reference and help us with decision making on selecting the controller. Finally, we discuss research details and our findings in the testbeds for SDN Controller. We found the ONOS controller has the best throughput, Pox has the lowest Latency, and the ONOS controller has the best Scalability","PeriodicalId":247183,"journal":{"name":"2020 15th International Conference on Computer Engineering and Systems (ICCES)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127106375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-15, DOI: 10.1109/ICCES51560.2020.9334628
A. Farag
This talk describes recent efforts to quantify students’ engagement in early engineering coursework by designing, implementing, and testing a system to measure students’ emotional, behavioral, and cognitive engagement states. Engineering programs suffer from a high rate of attrition in the freshman year, primarily due to poor engagement of students with their classes. The project plans to develop a sensor-driven, computational approach to measure emotional and behavioral components of student engagement. This information will be used to identify teaching strategies that increase engagement, with the goal of enhancing student success and retention in STEM education pathways. The project features a multi-disciplinary collaboration between faculty and undergraduate researchers in engineering, the physical sciences, psychological sciences, and education. The project involves students in first- and second-year engineering STEM subjects and the experienced faculty who teach these courses. Findings from the project could be a valuable step toward an early warning system to detect student disengagement and anxiety in STEM and non-STEM courses. Project goals include: (i) establishment of a robust network of non-obtrusive and non-invasive sensors in mid-size classes to enable real-time extraction of facial and vital signs, which will be integrated and displayed on instructors’ dashboards; (ii) identification of robust descriptors for modeling the emotional and behavioral components of engagement using data collected by the sensor networks; (iii) pilot testing of the system’s effectiveness in gathering meaningful data for subsequent work on emotional, behavioral, and cognitive metrics of engagement. The fundamental research question to be addressed relates to improving student learning by the automated capture of non-verbal cues of engagement: How can we use students’ expressions of engagement, based on non-verbal signs such as facial expressions, body and eye movements, physiological reactions, and posture, to enhance learning? Findings from the project will constitute a foundation for multi-disciplinary research to incorporate novel machine learning and artificial intelligence-based models for measuring engagement in STEM classes. This project has been funded by the National Science Foundation (NSF). The talk will describe our latest discoveries in this long-term and multidisciplinary project.
{"title":"Plenary Talk II Measuring Student Engagement in Early Engineering Coursework","authors":"A. Farag","doi":"10.1109/ICCES51560.2020.9334628","DOIUrl":"https://doi.org/10.1109/ICCES51560.2020.9334628","url":null,"abstract":"This talk describes recent efforts for quantifying students’ engagement in early engineering coursework, through designing, implementing, and testing a system to measure the students’ emotional, behavioral, and cognitive engagement states. Engineering programs suffer from a high rate of attrition in the freshman year, primarily due to poor engagement of students with their classes. The project plans to develop a sensor-driven, computational approach to measure emotional and behavioral components of student engagement. This information will be used to identify teaching strategies that increase engagement, with the goal of enhancing student success and retention in STEM education pathways. The project features a multi-disciplinary collaboration between faculty and undergraduate researchers in engineering, the physical sciences, psychological sciences, and education. The project involves students in first- and second-year engineering STEM subjects and the experienced faculty who teach these courses. Findings from the project could be a valuable step toward an early warning system to detect student disengagement and anxiety in STEM and non-STEM courses. Project goals include: (i) establishment of a robust network of non-obtrusive and non-invasive sensors in mid-size classes to enable real-time extraction of facial and vital signs, which will be integrated and displayed on instructors’ dashboards; (ii) identification of robust descriptors for modeling the emotional and behavioral components of engagement using data collected by the sensor networks; (iii) pilot testing of the system’s effectiveness in gathering meaningful data for subsequent work on emotional, behavioral, and cognitive metrics of engagement. The fundamental research question to be addressed relates to improving student learning by the automated capture of non-verbal cues of engagement: How can we use students’ expressions of engagement, based on non-verbal signs such as facial expressions, body and eye movements, physiological reactions, posture, to enhance learning? Findings from the project will constitute a foundation for multi-disciplinary research to incorporate novel machine learning and artificial intelligence-based models for measuring engagement in STEM classes. This project has been funded by the National Science Foundation (NSF). The talk will describe our latest discoveries in this long-term and multidisciplinary project.","PeriodicalId":247183,"journal":{"name":"2020 15th International Conference on Computer Engineering and Systems (ICCES)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129124669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-15, DOI: 10.1109/ICCES51560.2020.9334588
Neila Chettaoui, Ayman Atia, M. Bouhlel
Embodied learning is a contemporary pedagogical theory focused on ensuring an interactive learning experience through full-body movement. Within this pedagogy, several studies in Human-Computer Interaction have been conducted, incorporating gestures and physical interaction in different learning fields. This paper presents the design of a multimodal and adaptive space for embodied learning. The main aim is to give students the possibility to use gestures, body movement, and tangible interaction while interacting with adaptive learning content projected on the wall and the floor. Thus, this study aims to explore how tangible interaction, as a form of implementing embodied learning, can impact students' motivation to learn compared to tablet-based learning. Eighteen primary school students aged nine and ten years old participated in the study. The average percentages of answers on the Questionnaire on Current Motivation (QCM) pointed to higher motivation among students learning via tangible objects. Results revealed a positive score for Interest when learning abstract concepts using the tangible approach, with a mean score of 4.78 compared to 3.77 when learning via a tablet. Furthermore, the Success and Challenge measures, with mean scores of 4.67 and 4.56, indicate that physical interaction via tangible objects leads to significantly higher motivation outcomes. These findings suggest that learning might benefit more from a multimodal and tangible physical interaction approach than from the traditional tablet-based learning process.
{"title":"Exploring the Impact of Multimodal Adaptive Learning with Tangible Interaction on Learning Motivation","authors":"Neila Chettaoui, Ayman Atia, M. Bouhlel","doi":"10.1109/ICCES51560.2020.9334588","DOIUrl":"https://doi.org/10.1109/ICCES51560.2020.9334588","url":null,"abstract":"Embodied learning defines a contemporary pedagogical theory focusing on ensuring an interactive learning experience through full-body movement. Within this pedagogy, several studies in Human-Computer Interaction have been conducted, incorporating gestures, and physical interaction in different learning fields. This paper presents the design of a multimodal and adaptive space for embodied learning. The main aim is to give students the possibility to use gestures, body movement, and tangible interaction while interacting with adaptive learning content projected on the wall and the floor. Thus, this study aims to explore how tangible interaction, as a form of implementing embodied learning, can impact the motivation of students to learn compared to tablet-based learning. Eighteen primary school students aged nine and ten years old participated in the study. The average percentages of answers on the Questionnaire on Current Motivation (QCM) pointed out a higher motivation among students learning via tangible objects. Results revealed a positive score for the Interest of learning abstract concepts using a tangible approach with a mean score of 4.78, compared to 3.77 while learning via a tablet. Furthermore, Success and Challenge measures, with a mean score of 4.67 and 4.56 indicate that physical interaction via tangible objects leads to significantly higher motivation outcomes. These findings suggest that learning might benefit more from a multimodal and tangible physical interaction approach than the traditional tablet-based learning process.","PeriodicalId":247183,"journal":{"name":"2020 15th International Conference on Computer Engineering and Systems (ICCES)","volume":"278 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116070679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-15, DOI: 10.1109/ICCES51560.2020.9334594
Doaa Ebrahim, Amr M. T. Ali-Eldin, H. Moustafa, Hesham A. Arafat
Alzheimer’s disease is the most common cause of dementia and causes memory loss. People who have Alzheimer’s disease suffer from a neurodegenerative disorder that leads to the loss of many brain functions. Researchers have shown that early diagnosis of the disease is the most crucial aspect of enhancing patients’ care and treatment. Traditional approaches to diagnosing Alzheimer’s disease (AD) are slow and lack efficiency, both in the diagnosis itself and in the time required for learning and training. Lately, deep-learning-based approaches have been considered for the classification of neuroimaging data correlated with AD. In this paper, we study the use of convolutional neural networks (CNNs) for early AD detection; a VGG-16 network trained on our datasets is used to extract features for the classification process. Experimental work demonstrates the effectiveness of the proposed approach.
{"title":"Alzheimer Disease Early Detection Using Convolutional Neural Networks","authors":"Doaa Ebrahim, Amr M. T. Ali-Eldin, H. Moustafa, Hesham A. Arafat","doi":"10.1109/ICCES51560.2020.9334594","DOIUrl":"https://doi.org/10.1109/ICCES51560.2020.9334594","url":null,"abstract":"Alzheimer’s disease is the extremely popular cause of dementia that causes memory loss. People who have Alzheimer’s disease suffer from a disorder in neurodegenerative which leads to loss in many brain functions. Nowadays researchers prove that early diagnosis of the disease is the most crucial aspect to enhance the care of patients’ lives and enhance treatment. Traditional approaches for diagnosis of Alzheimer’s disease (AD) suffers from long time with lack both efficiency and the time it takes for learning and training. Lately, deep-learning-based approaches have been considered for the classification of neuroimaging data correlated to AD. In this paper, we study the use of the Convolutional Neural Networks (CNN) in AD early detection, VGG-16 trained on our datasets is used to make feature extractions for the classification process. Experimental work explains the effectiveness of the proposed approach.","PeriodicalId":247183,"journal":{"name":"2020 15th International Conference on Computer Engineering and Systems (ICCES)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133635628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}