A Real-time Global Optimal Path Planning for Mobile Robot in Dynamic Environment Based on Artificial Immune Approach
Pub Date: 2014-08-28, DOI: 10.1037/e527372013-016
A. Eslami, S. Asadi, G. R. Soleymani, V. Azimirad
This paper presents a method for finding a globally optimal path in a dynamic environment of known obstacles for a Mobile Robot (MR) following a moving target. First, the environment is described using practical, standard graph theory. Then, a suboptimal path is obtained with the Dijkstra Algorithm (DA), a standard graph-search method. The advantages of using DA are that it eliminates the uncertainty of heuristic algorithms and increases their speed, precision and performance. Finally, a Continuous Clonal Selection Algorithm (CCSA) combined with a Negative Selection Algorithm (NSA) is used to improve the suboptimal path and derive the globally optimal path. To show the effectiveness of the method, it is compared with other methods in this area.
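As a point of reference for the graph-search stage described in the abstract, here is a minimal Dijkstra sketch over an adjacency-list graph; the node names, edge costs and helper function are illustrative assumptions, not taken from the paper.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted graph given as {node: [(neighbor, cost), ...]}."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(heap, (nd, neighbor))
    # Reconstruct the path from goal back to start.
    path, node = [], goal
    while node in prev or node == start:
        path.append(node)
        if node == start:
            break
        node = prev[node]
    return list(reversed(path)), dist.get(goal, float("inf"))

# Toy environment: free-space waypoints with weighted edges between them.
graph = {
    "A": [("B", 1.0), ("C", 2.5)],
    "B": [("C", 1.0), ("D", 3.0)],
    "C": [("D", 1.5)],
    "D": [],
}
print(dijkstra(graph, "A", "D"))  # (['A', 'B', 'C', 'D'], 3.5)
```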
{"title":"A Real-time Global Optimal Path Planning for mobile robot in Dynamic Environment Based on Artificial Immune Approach","authors":"A. Eslami, S. Asadi, .. G.R.Soleymani, V. Azimirad","doi":"10.1037/e527372013-016","DOIUrl":"https://doi.org/10.1037/e527372013-016","url":null,"abstract":"This paper illustrates a method to finding a global optimal path in a dynamic environment of known obstacles for an Mobile Robot (MR) to following a moving target. Firstly, the environment is defined by using a practical and standard graph theory. Then, a suboptimal path is obtained by using Dijkstra Algorithm (DA) that is a standard graph searching method. The advantages of using DA are; elimination the uncertainness of heuristic algorithms and increasing the speed, precision and performance of them. Finally, Continuous Clonal Selection Algorithm (CCSA) that is combined with Negative Selection Algorithm (NSA) is used to improve the suboptimal path and derive global optimal path. To show the effectiveness of the method it is compared with some other methods in this area.","PeriodicalId":91079,"journal":{"name":"GSTF international journal on computing","volume":"91 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76986261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Locating Reusable Classes Using Dependency in Object-Oriented Software
Pub Date: 2014-08-28, DOI: 10.1037/e527372013-019
Young Lee, Jeong Yang
With an automated measurement tool, a user can locate reusable classes, connected classes and independent classes. This paper describes how an automated tool can guide a programmer through measuring the dependencies of a program for software reuse. Automated identification of reusable software components based on dependency is explored. The case study demonstrates identifying reusable units for software reuse and connected units for a software package.
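To make the idea of dependency-based classification concrete, the following is a small sketch of the kind of analysis such a tool might perform over a toy class-dependency map; the class names, the dependency data and the classify helper are hypothetical, not the paper's tool.

```python
# Toy dependency graph: class name -> set of classes it references.
deps = {
    "OrderService": {"OrderRepo", "Logger"},
    "OrderRepo": {"Logger"},
    "Logger": set(),
    "ReportFormatter": set(),
}

def classify(deps):
    """Split classes into independent (no incoming or outgoing edges) and connected ones."""
    referenced = {target for targets in deps.values() for target in targets}
    independent = [c for c in deps if not deps[c] and c not in referenced]
    connected = [c for c in deps if c not in independent]
    return independent, connected

independent, connected = classify(deps)
print("independent:", independent)  # ['ReportFormatter']
print("connected:", connected)      # classes touched by at least one dependency edge
```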
{"title":"Locating Reusable Classes Using Dependency in Object-Oriented Software","authors":"Young Lee, Jeong Yang","doi":"10.1037/e527372013-019","DOIUrl":"https://doi.org/10.1037/e527372013-019","url":null,"abstract":"Abstract—With automated measurement tool, a user can locate reusable classes, connected classes and independent classes. This paper describes how an automated tool can guide a programmer through measuring dependency of a program for software reuse. Automated identification of reusable software components based on dependency is explored. The case study demonstrates identifying the reusable units for software reuse and connected units for software package.","PeriodicalId":91079,"journal":{"name":"GSTF international journal on computing","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85969112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Strategies in Passing Enterprise Resource Planning Certifications
Pub Date: 2014-08-28, DOI: 10.1037/e527372013-007
Hsing-Yu Hou
The certification pass rate is a hot topic for Taiwan's Universities of Science and Technology, so strategies that improve students' chances of success in computer education need to be discussed. In this paper, action research and data mining played important roles in collecting and analyzing data. The instructor ran experiments in Enterprise Resource Planning (ERP) subjects over three semesters, and an E-Learning platform was set up to provide an additional way to practice. From the first to the second semester, the researcher found that motivation, hard work during the learning process and review were the keys for students to pass the certifications: attitude was the foundation, while more practice and genuine understanding were the two important skills for improving certification performance. In the third semester, two rules emerged for passing the ERP exam successfully. One was that if the time spent on E-Materials was high, the result was a pass. The other was that if the time spent on E-Materials was normal and the gender was female, the outcome was a pass. An aid that helps students review thoroughly is therefore important and can also improve performance. In addition, sharing this experience should contribute to computer science education.
{"title":"The strategies in passing enterprise resource planning certifications","authors":"Hsing-Yu, Hou","doi":"10.1037/e527372013-007","DOIUrl":"https://doi.org/10.1037/e527372013-007","url":null,"abstract":"The passing rate of certification is the hot topic for Taiwan Universities of Science and Technology. Therefore, the strategies for the students in computer education need to be discussed to improve successful opportunities. In this paper, action research and data mining played important roles to collect and analyze data. The instructor arranged three semesters to take experiments in Enterprise Resource Planning subjects. Also E-Learning platform was set up to provide the other practice way. From the first to the second semester, the researcher found that motivation, hard work in the learning process and review were key secrets for the students to pass the certifications. So that attitude was the basis, more practice and truly understanding were two important skills to improve the certification performance. In the third semester, there were two rules in passing the ERP exam successfully. One was that if spending time in E-Materials were high, the result was passing. The other was that if spending time in E-Materials were normal and gender was female, the output was passing. Therefore, the aided tool for students was important to review thoroughly and the performance could also be improved. Besides, the experience sharing would also contribute to the computer science education.","PeriodicalId":91079,"journal":{"name":"GSTF international journal on computing","volume":"44 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78400418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating the Effect of Spatial Distribution and Spatiotemporal Information on Speciation using Individual-Based Ecosystem Simulation
Pub Date: 2014-08-28, DOI: 10.1037/e527372013-015
M. Mashayekhi, R. Gras
In this paper, we investigate the impact of species' spatial and spatiotemporal distribution information on speciation, using an individual-based ecosystem simulation (Ecosim). For this purpose, we use machine learning techniques to predict whether a species will split in the near future. Because of the imbalanced nature of our dataset, we use the SMOTE algorithm to build a relatively balanced dataset and avoid dismissing the minority-class samples. Experimental results show very good predictions for a test set generated from the same run as the learning set, and good results on test sets generated from different runs of Ecosim. We also observe superior results when the learning set comes from a run with more species rather than a run with fewer species. We conclude that spatial and spatiotemporal information is very effective in predicting speciation.
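As a hedged illustration of the balancing step, the sketch below applies the SMOTE implementation from imbalanced-learn to synthetic data standing in for the speciation dataset; the features, labels and downstream classifier are assumptions for demonstration only.

```python
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the speciation dataset: few positive ("will split") samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))             # e.g., spatial/spatiotemporal features
y = (rng.random(1000) < 0.05).astype(int)  # roughly 5% positive labels

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

print("before:", Counter(y_train))
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)
print("after: ", Counter(y_res))           # minority class oversampled to parity

clf = RandomForestClassifier(random_state=0).fit(X_res, y_res)
print("test accuracy:", clf.score(X_test, y_test))
```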
{"title":"Investigating the Effect of Spatial Distribution and Spatiotemporal Information on Speciation using Individual-Based Ecosystem Simulation","authors":"M. Mashayekhi, R. Gras","doi":"10.1037/e527372013-015","DOIUrl":"https://doi.org/10.1037/e527372013-015","url":null,"abstract":"In this paper, we investigate the impact of species’ spatial and spatiotemporal distribution information on speciation, using an individual-based ecosystem simulation (Ecosim). For this purpose, using machine learning techniques, we try to predict if one species will split in near future. Because of the imbalanced nature of our dataset we use smote algorithm to make a relatively balanced dataset to avoid dismissing the minor class samples. Experimental results show very good predictions for the test set generated from the same run as the learning set. It also shows good results on test sets generated from different runs of Ecosim. We also observe superior results when we use, for the learning set, a run with more species compare to a run with less species. Finally we can conclude that spatial and spatiotemporal information are very effective in predicting speciation.","PeriodicalId":91079,"journal":{"name":"GSTF international journal on computing","volume":"20 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81114969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance Evaluation of Multicore Cache Locking using Multimedia Applications
Pub Date: 2014-08-28, DOI: 10.1037/e527372013-017
A. Asaduzzaman
Supporting real-time multimedia applications on multicore systems is a great challenge due to the cache's dynamic behavior. Studies show that cache locking may improve execution-time predictability and the power/performance ratio. However, locking the entire level-1 cache (CL1) may not be efficient if the amount of locked instructions/data is small compared to the cache size. An alternative is way (i.e., partial) locking. For some processors, way locking is possible only at the level-2 cache (CL2). Even though both CL1 and CL2 cache locking improve predictability, it is difficult to justify the performance and power trade-off between the two mechanisms. In this work, we assess the impact of CL1 and CL2 cache locking on the performance, power consumption, and predictability of a multicore system using ISO-standard H.264/AVC, MPEG4, and MPEG3 multimedia applications and FFT and DFT codes. Simulation results show that both performance and predictability can be increased, and total power consumption decreased, by adding a cache locking mechanism to the cache memory hierarchy. Results also show that, for the applications used, CL1 cache locking outperforms CL2 cache locking.
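To illustrate what way (partial) locking means in practice, here is a toy set-associative cache model in which a few ways per set can be pinned for a hot working set; the cache geometry, access trace and class interface are invented for the sketch and do not model the paper's simulator.

```python
from collections import OrderedDict

class SetAssociativeCache:
    """Tiny LRU set-associative cache with optional way locking.

    Locked blocks are pinned into their set and consume `locked_ways` of its
    associativity; the remaining ways use plain LRU replacement.
    """
    def __init__(self, num_sets=16, ways=4, locked_ways=0):
        self.num_sets, self.ways, self.locked_ways = num_sets, ways, locked_ways
        self.locked = [set() for _ in range(num_sets)]        # pinned block ids
        self.lru = [OrderedDict() for _ in range(num_sets)]   # unlocked ways
        self.hits = self.misses = 0

    def lock(self, block):
        """Pin (and effectively preload) a block into its set's locked ways."""
        s = block % self.num_sets
        if len(self.locked[s]) < self.locked_ways:
            self.locked[s].add(block)

    def access(self, block):
        s = block % self.num_sets
        if block in self.locked[s] or block in self.lru[s]:
            self.hits += 1
            if block in self.lru[s]:
                self.lru[s].move_to_end(block)   # refresh LRU position
            return
        self.misses += 1
        capacity = self.ways - self.locked_ways  # ways left for unlocked blocks
        if capacity <= 0:
            return                               # no unlocked way available
        if len(self.lru[s]) >= capacity:
            self.lru[s].popitem(last=False)      # evict least recently used
        self.lru[s][block] = True

# Hot inner-loop blocks plus a streaming pattern large enough to thrash plain LRU.
hot = list(range(8))
trace = (hot + list(range(1000, 1128))) * 50

for locked_ways in (0, 2):
    cache = SetAssociativeCache(locked_ways=locked_ways)
    for b in hot:
        cache.lock(b)                            # no effect when locked_ways == 0
    for b in trace:
        cache.access(b)
    rate = cache.hits / (cache.hits + cache.misses)
    print(f"{locked_ways} locked ways per set -> hit rate {rate:.3f}")
```

With no locking, the hot blocks are repeatedly evicted by the streaming accesses; pinning them keeps their hits deterministic, which is the predictability argument the abstract makes.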
{"title":"Performance Evaluation of Multicore Cache Locking using Multimedia Applications","authors":"A. Asaduzzaman","doi":"10.1037/e527372013-017","DOIUrl":"https://doi.org/10.1037/e527372013-017","url":null,"abstract":"Supporting real-time multimedia applications on multicore systems is a great challenge due to cache’s dynamic behavior. Studies show that cache locking may improve execution time predictability and power/performance ratio. However, entire locking at level-1 cache (CL1) may not be efficient if smaller amount of instructions/data compared to the cache size is locked. An alternative choice may be way (i.e., partial) locking. For some processors, way locking is possible only at level-2 cache (CL2). Even though both CL1 cache locking and CL2 cache locking improve predictability, it is difficult to justify the performance and power trade-off between these two cache locking mechanisms. In this work, we assess the impact of CL1 and CL2 cache locking on the performance, power consumption, and predictability of a multicore system using ISO standard H.264/AVC, MPEG4, and MPEG3 multimedia applications and FFT and DFT codes. Simulation results show that both the performance and predictability can be increased and the total power consumption can be decreased by using a cache locking mechanism added to a cache memory hierarchy. Results also show that for the applications used, CL1 cache locking outperforms CL2 cache locking.","PeriodicalId":91079,"journal":{"name":"GSTF international journal on computing","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77907354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual Machines Performance Modeling with Support Vector Regressions
Pub Date: 2014-08-25, DOI: 10.5176/2010-2283_1.1.19
S. Doong, Ch Lai, J. S. Lee, Chen S. Ouyang, Chih-Hung Wu
Virtualization is a key technology in cloud computing for rendering on-demand provisioning of virtual services. Xen, an open-source paravirtualized virtual machine monitor (hypervisor), has been adopted by many of the world's leading data centers. A scheduler in Xen handles CPU resource sharing among virtual machines hosted on the same physical system. This study focuses on the scheduler in the current Xen release: the Credit scheduler. Credit uses two parameters (weight and cap) to fine-tune CPU resource sharing. Previous studies have shown that these two parameters can impact various performance measures of virtual machines hosted on Xen. In this study, we present a holistic procedure for establishing performance models of virtual machines. Empirical data for two commonly used measures, namely calculation power and network throughput, were collected by simulations under various settings of weight and cap. We then employed a powerful machine learning tool (multi-kernel support vector regression) to learn performance models from the empirical data. These models were evaluated satisfactorily using established procedures in machine learning.
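A minimal sketch of the modeling step, using a single-kernel RBF SVR from scikit-learn as a stand-in for the paper's multi-kernel support vector regression, with synthetic (weight, cap) measurements in place of the benchmark data:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the benchmark data: rows of (weight, cap) -> throughput.
rng = np.random.default_rng(1)
weight = rng.uniform(64, 1024, size=400)
cap = rng.uniform(10, 100, size=400)
throughput = 50 * np.log(weight) + 2.0 * cap + rng.normal(0, 5, size=400)

X = np.column_stack([weight, cap])
X_train, X_test, y_train, y_test = train_test_split(X, throughput, random_state=0)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = GridSearchCV(model,
                    {"svr__C": [1, 10, 100], "svr__gamma": ["scale", 0.1]},
                    cv=5)
grid.fit(X_train, y_train)

print("R^2 on held-out data:", round(grid.score(X_test, y_test), 3))
print("predicted throughput at weight=512, cap=40:", grid.predict([[512, 40]])[0])
```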
{"title":"Virtual Machines Performance Modeling with Support Vector Regressions","authors":"S. Doong, Ch Lai, J. S. Lee, Chen S. Ouyang, Chih-Hung Wu","doi":"10.5176/2010-2283_1.1.19","DOIUrl":"https://doi.org/10.5176/2010-2283_1.1.19","url":null,"abstract":"Virtualization is a key technology in cloud computing to render on-demand provisioning of virtual services. Xen, an open source paravirtualized virtual machine monitor (hypervisor), has been adopted by many leading data centers of the world today. A scheduler in Xen handles CPU resources sharing among virtual machines hosted on the same physical system. This study is focused on a scheduler in the current Xen release - the Credit scheduler. Credit uses two parameters (weight and cap) to fine tune CPU resources sharing. Previous studies have shown that these two parameters can impact various performance measures of virtual machines hosted on Xen. In this study, we present a holistic procedure to establish performance models of virtual machines. Empirical data of two commonly used measures, namely calculation power and network throughput, were collected by simulations under various settings of weight and cap. We then employed a powerful machine learning tool (multi-kernel support vector regression) to learn performance models from the empirical data. These models were evaluated satisfactorily by using established procedures in machine learning.","PeriodicalId":91079,"journal":{"name":"GSTF international journal on computing","volume":"28 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82589393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Case for Medium-Sized Regional Data Centres
Pub Date: 2014-08-25, DOI: 10.5176/2010-2283_1.1.14
Kate Craig-Wood, P. Krause, Nick Craig-Wood
Cloud computing is widely associated with major capital investment in mega data centres housing expensive blade servers and storage area networks. In this paper we argue that a modular approach to building local or regional data centres using commodity hardware and open source hardware can produce a cost-effective solution that better addresses the goals of cloud computing, and provides a scalable architecture that meets the service requirements of a high-quality data centre. In support of this goal, we provide data that supports three research hypotheses: 1. that central processing unit (CPU) resources are not normally limiting; 2. that disk I/O transactions per second (TPS) are more often limiting, but this can be mitigated by maximizing the TPS-to-CPU ratio; 3. that customer CPU loads are generally static and small. Our results indicate that the modular, commodity-hardware-based architecture is near optimal. This is a very significant result, as it opens the door to alternative business models for the provision of data centres that significantly reduce the need for major up-front capital investment.
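As a rough illustration of hypothesis 2, the sketch below sizes how many small virtual machines a server could host under separate CPU and disk-TPS constraints; all capacity and per-VM figures are invented for illustration and are not data from the paper.

```python
# Illustrative sizing check: which resource caps the number of VMs per server?
# All numbers are assumptions for this sketch, not figures from the paper.
configs = {
    "commodity node (SSD)": {"cpu_cores": 16, "disk_tps": 20000},
    "commodity node (HDD)": {"cpu_cores": 16, "disk_tps": 1500},
}
per_vm = {"cpu_cores": 0.25, "disk_tps": 120}   # a small, mostly idle customer VM

for name, cfg in configs.items():
    by_cpu = cfg["cpu_cores"] / per_vm["cpu_cores"]
    by_tps = cfg["disk_tps"] / per_vm["disk_tps"]
    limit = "disk TPS" if by_tps < by_cpu else "CPU"
    print(f"{name}: {int(min(by_cpu, by_tps))} VMs, limited by {limit} "
          f"(TPS/CPU ratio = {cfg['disk_tps'] / cfg['cpu_cores']:.0f})")
```

Raising the TPS-to-CPU ratio of the hardware moves the binding constraint away from disk I/O, which is the mitigation the second hypothesis describes.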
{"title":"The Case for Medium-Sized Regional Data Centres","authors":"Kate Craig-Wood, P. Krause, Nick Craig-Wood","doi":"10.5176/2010-2283_1.1.14","DOIUrl":"https://doi.org/10.5176/2010-2283_1.1.14","url":null,"abstract":"Cloud computing is widely associated with major capital investment in mega data centres, housing expensive blade servers and storage area networks. In this paper we argue that a modular approach to building local or regional data centres using commodity hardware and open source hardware can produce a cost effective solution that better addresses the goals of cloud computing, and provides a scalable architecture that meets the service requirements of a high quality data centre. In support of this goal, we provide data that supports three research hypotheses: 1. that central processor unit (CPU) resources are not normally limiting; 2. that disk I/O transactions (TPS) are more often limiting, but this can be mitigated by maximizing the TPS-CPU ratio; 3. that customer CPU loads are generally static and small. Our results indicate that the modular, commodity hardware based architecture is near optimal. This is a very significant result, as it opens the door to alternative business models for the provision of data centres that significantly reduce the need for major up-front capital investment.","PeriodicalId":91079,"journal":{"name":"GSTF international journal on computing","volume":"264 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79714398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MCQ Exams Correction in an Offline Network Using XML
Pub Date: 2014-08-25, DOI: 10.5176/2010-2283_1.1.29
J. Al-Sadi, Daed Al-Halabi, H. Al-Halabi
One of the vital subjects in any educational institution is student assessment, and a common way to evaluate students' work is through exams. Class sizes generally tend to expand in some societies, so there is growing demand for quick, accurate evaluation. Computerized questions make the process of taking an exam easier and smoother, which has driven the move towards multiple-choice questions (MCQ). The rapid adoption of XML (Extensible Markup Language) for large amounts of structured data, thanks to the time it saves and the ease with which data can be manipulated, makes it suitable for an MCQ exam environment. Moreover, XML data can still be handled on networks that suffer from failures. The main contribution of this paper is an efficient method for transferring data related to online questions between the server and client stations without being affected if the connection fails during the exam. An analytical study of the efficiency of the module is also presented.
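To illustrate how MCQ exams might be stored and corrected from XML, here is a small sketch using Python's standard xml.etree.ElementTree; the element and attribute names are a hypothetical schema, not the one used in the paper.

```python
import xml.etree.ElementTree as ET

# Hypothetical exam format; element and attribute names are assumptions.
EXAM_XML = """
<exam id="CS101-midterm">
  <question id="q1" answer="b">
    <text>Which data structure gives O(1) average lookup?</text>
    <choice id="a">Linked list</choice>
    <choice id="b">Hash table</choice>
    <choice id="c">Binary heap</choice>
  </question>
  <question id="q2" answer="a">
    <text>XML stands for?</text>
    <choice id="a">Extensible Markup Language</choice>
    <choice id="b">Extra Modern Language</choice>
  </question>
</exam>
"""

def grade(exam_xml, responses):
    """Score a student's answers against the key stored in the exam XML."""
    root = ET.fromstring(exam_xml)
    key = {q.get("id"): q.get("answer") for q in root.findall("question")}
    score = sum(1 for qid, ans in responses.items() if key.get(qid) == ans)
    return score, len(key)

# Responses captured locally on the client and graded once connectivity returns.
score, total = grade(EXAM_XML, {"q1": "b", "q2": "c"})
print(f"score: {score}/{total}")   # score: 1/2
```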
{"title":"MCQ Exams Correction in an Offline Network Using XML","authors":"J. Al-Sadi, Daed Al-Halabi, H. Al-Halabi","doi":"10.5176/2010-2283_1.1.29","DOIUrl":"https://doi.org/10.5176/2010-2283_1.1.29","url":null,"abstract":"One of vital subject in education facility is student assesment. A common way used to compute there work is making exams. Generaly class sizes tend to expand in some socities. As a result there is a trend to give a quick accurate evaluation which is become more demand. A computerized questions make the process of taking an exam easier and somther. This caused the move towards the use of multiple-choice questions (MCQ). The rapid progress of using XML (Extensible Markup Language) for large amount of structured data, due to its ability of saving time and manipulate data makes it suitable for MCQ exam environment. Moreover, XML manipulates and deals with networks suffering from failure occurrences. The main contribution of this paper is to present an efficient method of transfer data related to online questions between the server and clients stations without being affected if the connection fails during taking the exam. An analytical study of the efficiency of module is also presented.","PeriodicalId":91079,"journal":{"name":"GSTF international journal on computing","volume":"52 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85156670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Continuous and Reinforcement Learning Methods for First-Person Shooter Games
Pub Date: 2014-08-25, DOI: 10.5176/2010-2283_1.1.02
T. Smith, Jonathan Miles
Machine learning is now widely studied as the basis for artificial intelligence systems within computer games. Most existing work focuses on methods for learning static expert systems, typically emphasizing candidate selection. This paper extends this work by exploring the use of continuous and reinforcement learning techniques to develop fully-adaptive game AI for first-person shooter bots. We begin by outlining a framework for learning static control models for tanks within the game BZFlag, then extend that framework using continuous learning techniques that allow computer controlled tanks to adapt to the game style of other players, extending overall playability by thwarting attempts to infer the underlying AI. We further show how reinforcement learning can be used to create bots that learn how to play based solely through trial and error, providing game engineers with a practical means to produce large numbers of bots, each with individual intelligences and unique behaviours; all from a single initial AI model.
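As a hedged illustration of the reinforcement-learning idea, the sketch below trains a tabular, one-step (bandit-style) action-value policy for a toy bot; the states, actions and reward function are invented abstractions rather than the BZFlag features used in the paper, and full Q-learning would additionally bootstrap from the successor state's value.

```python
import random
from collections import defaultdict

ACTIONS = ["attack", "evade", "seek_flag"]

def reward(state, action):
    """Hand-made reward: evading pays at low health, attacking at high health."""
    health, enemy_near = state
    if enemy_near and health == "low":
        return 1.0 if action == "evade" else -1.0
    if enemy_near and health == "high":
        return 1.0 if action == "attack" else -0.5
    return 1.0 if action == "seek_flag" else 0.0

def train(episodes=5000, alpha=0.1, epsilon=0.2):
    q = defaultdict(float)                       # (state, action) -> value estimate
    states = [(h, e) for h in ("low", "high") for e in (True, False)]
    for _ in range(episodes):
        state = random.choice(states)
        if random.random() < epsilon:            # explore
            action = random.choice(ACTIONS)
        else:                                    # exploit current estimate
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        # One-step update; no successor state in this toy, bandit-style setup.
        q[(state, action)] += alpha * (reward(state, action) - q[(state, action)])
    return q

q = train()
for state in [("low", True), ("high", True), ("high", False)]:
    best = max(ACTIONS, key=lambda a: q[(state, a)])
    print(state, "->", best)
```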
{"title":"Continuous and Reinforcement Learning Methods for First-Person Shooter Games","authors":"T. Smith, Jonathan Miles","doi":"10.5176/2010-2283_1.1.02","DOIUrl":"https://doi.org/10.5176/2010-2283_1.1.02","url":null,"abstract":"Machine learning is now widely studied as the basis for artificial intelligence systems within computer games. Most existing work focuses on methods for learning static expert systems, typically emphasizing candidate selection. This paper extends this work by exploring the use of continuous and reinforcement learning techniques to develop fully-adaptive game AI for first-person shooter bots. We begin by outlining a framework for learning static control models for tanks within the game BZFlag, then extend that framework using continuous learning techniques that allow computer controlled tanks to adapt to the game style of other players, extending overall playability by thwarting attempts to infer the underlying AI. We further show how reinforcement learning can be used to create bots that learn how to play based solely through trial and error, providing game engineers with a practical means to produce large numbers of bots, each with individual intelligences and unique behaviours; all from a single initial AI model.","PeriodicalId":91079,"journal":{"name":"GSTF international journal on computing","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90889501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving Green Computing in Business Intelligence by Measuring Performance of Reverse Supply Chains
Pub Date: 2013-03-01, DOI: 10.5176/2251-3043_3.1.235, pp. 75-81
Maulida Boru Butar Butar, D. Sanders
Increasing attention has been given to green computing in Business Intelligence. This paper specifically considers the measurement of performance in the reverse supply chain, motivated by the increasing value of products and technology at the end of conventional forward supply chains as well as the impact of new green legislation. Unlike forward supply chains, design strategies for reverse supply chains are relatively unexplored and underdeveloped. Meanwhile, measuring supply chain performance is becoming important as the need for data in business intelligence systems grows and as understanding, collaboration and integration between supply chain members increase. It also helps companies to target the most profitable market segments or identify a suitable service definition. This paper describes a synthesis of known theory concerning performance measurement and assesses the state of the art. Strengths and gaps are identified. Some initial results for measuring supply performance in reverse supply chains (using robust methods) are presented, and future research needs are outlined.
{"title":"Improving Green Computing in Business Intelligence by Measuring Performance of Reverse Supply Chains","authors":"Maulida Boru Butar Butar, D. Sanders","doi":"10.5176/2251-3043_3.1.235#STHASH.P814C6AO.DPUF","DOIUrl":"https://doi.org/10.5176/2251-3043_3.1.235#STHASH.P814C6AO.DPUF","url":null,"abstract":"Increasing attention has been given to green computing in Business Intelligence. This paper specifically considers the measurement of performance in the reverse supply chain. That is because of the increasing value of products and technology at the end of general direct supply chains as well as the impact of new green legislation. Unlike forward supply chains, design strategies for reverse supply chains are relatively unexplored and underdeveloped. Meanwhile measuring supply chain performance is becoming important as the need for data in business intelligence systems increases and the understanding, collaboration and integration increases between supply chain members. It also helps companies to target the most profitable market segments or identify a suitable service definition. This paper describes a synthesis of known theory concerning measuring performance and assesses the state of the art. Strengths and gaps are identified. Some initial results are presented for measuring supply performance in reverse supply chains (using robust methods) and are outlined future research needs.","PeriodicalId":91079,"journal":{"name":"GSTF international journal on computing","volume":"34 1","pages":"75-81"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75077469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}