In recent years, the real estate industry has captured government and public attention around the world. The factors influencing real estate prices are diverse and complex, and existing studies, limited by the one-sidedness of their respective views, have not provided a sufficient theoretical basis for the fluctuation of house prices and their influential factors. The purpose of this paper is to build a housing price model that provides a scientific and objective analysis of London's real estate market trends from 1996 to 2016, and to propose some countermeasures to reasonably control house prices. Specifically, the paper analyzes eight factors that affect house prices from two aspects, housing supply and demand, and identifies the factor of vital importance to the increase of housing price per square meter. The problem of high multicollinearity among these factors is solved using principal component analysis.
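To make the dimension-reduction step concrete, here is a minimal sketch of principal-components regression in Python, assuming a synthetic matrix X of eight correlated supply/demand factors and a price-per-square-metre response y; the data and variable names are placeholders, not the paper's.

```python
# Principal-components regression: a minimal sketch of the approach the
# abstract describes. X (eight supply/demand factors) and y (price per
# square metre) are synthetic placeholders, not the paper's data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 252  # e.g. monthly observations, 1996-2016
latent = rng.normal(size=(n, 2))                # two underlying drivers
X = latent @ rng.normal(size=(2, 8)) + 0.1 * rng.normal(size=(n, 8))
y = latent @ np.array([3.0, -1.0]) + rng.normal(size=n)

# Standardize, then project onto the leading components to remove
# the multicollinearity among the eight factors.
Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(Xs)
scores = pca.transform(Xs)

model = LinearRegression().fit(scores, y)
print("explained variance ratios:", pca.explained_variance_ratio_)
print("R^2 on component scores:", model.score(scores, y))
```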
{"title":"What are the most important factors that influence the changes in London Real Estate Prices? How to quantify them?","authors":"Yiyang Gu","doi":"10.1453/jeb.v5i1.1609","DOIUrl":"https://doi.org/10.1453/jeb.v5i1.1609","url":null,"abstract":"In recent years, real estate industry has captured government and public attention around the world. The factors influencing the prices of real estate are diversified and complex. However, due to the limitations and one-sidedness of their respective views, they did not provide enough theoretical basis for the fluctuation of house price and its influential factors. The purpose of this paper is to build a housing price model to make the scientific and objective analysis of London's real estate market trends from the year 1996 to 2016 and proposes some countermeasures to reasonably control house prices. Specifically, the paper analyzes eight factors which affect the house prices from two aspects: housing supply and demand and find out the factor which is of vital importance to the increase of housing price per square meter. The problem of a high level of multicollinearity between them is solved by using principal components analysis.","PeriodicalId":409996,"journal":{"name":"arXiv: Applications","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121968159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-02-05
DOI: 10.1553/giscience2018_01_s65
Julian Bruns, J. Riesterer, Bowen Wang, T. Riedel, M. Beigl
Today we have access to a vast amount of weather, air quality, noise, and radioactivity data collected by individuals around the globe. This volunteered geographic information (VGI) often contains data of uncertain and heterogeneous quality, particularly when compared to official in-situ measurements. This limits its application, as rigorous, work-intensive data cleaning has to be performed, which reduces the amount of data and cannot be done in real time. In this paper, we propose dynamically learning the quality of individual sensors by optimizing a weighted Gaussian process regression with a genetic algorithm. We chose weather stations as our use case, as these are the most common VGI measurements. The evaluation is done for the south-west of Germany in August 2016 with temperature data from the Wunderground network and the Deutscher Wetterdienst (DWD), 1,561 stations in total. Using a 10-fold cross-validation scheme based on the DWD ground truth, we show significant improvements in the predicted sensor readings. In our experiment we obtained a 12.5% improvement in mean absolute error.
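As a rough illustration of the idea, the sketch below pairs scikit-learn's Gaussian process regression (whose per-point `alpha` argument plays the role of station-specific noise weights) with a bare-bones genetic algorithm; the stations, temperature field, and GA settings are synthetic assumptions, not the authors' setup.

```python
# Sketch: learn per-station quality weights for a Gaussian process by
# minimizing held-out error with a simple genetic algorithm. Synthetic
# stand-in for the Wunderground/DWD setting; all names are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
n_sta = 30
locs = rng.uniform(0, 10, size=(n_sta, 2))         # station coordinates
truth = np.sin(locs[:, 0]) + np.cos(locs[:, 1])    # "true" temperature field
quality = rng.uniform(0.05, 1.0, size=n_sta)       # unknown sensor noise sd
obs = truth + rng.normal(0, quality)

train, test = np.arange(0, 20), np.arange(20, 30)  # test plays the DWD role

def fitness(log_noise):
    # Per-station noise enters as the diagonal regularizer alpha.
    gp = GaussianProcessRegressor(kernel=RBF(2.0), alpha=np.exp(log_noise))
    gp.fit(locs[train], obs[train])
    return -np.mean(np.abs(gp.predict(locs[test]) - truth[test]))  # neg. MAE

pop = rng.normal(-2, 1, size=(20, len(train)))     # population of noise vectors
for gen in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                  # keep best half
    children = parents + rng.normal(0, 0.3, parents.shape)   # mutate
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("learned per-station noise:", np.exp(best).round(2))
```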
{"title":"Automated Quality Assessment of (Citizen) Weather Stations","authors":"Julian Bruns, J. Riesterer, Bowen Wang, T. Riedel, M. Beigl","doi":"10.1553/giscience2018_01_s65","DOIUrl":"https://doi.org/10.1553/giscience2018_01_s65","url":null,"abstract":"Today we have access to a vast amount of weather, air quality, noise or radioactivity data collected by individual around the globe. This volunteered geographic information often contains data of uncertain and of heterogeneous quality, in particular when compared to official in-situ measurements. This limits their application, as rigorous, work-intensive data cleaning has to be performed, which reduces the amount of data and cannot be performed in real-time. In this paper, we propose dynamically learning the quality of individual sensors by optimizing a weighted Gaussian process regression using a genetic algorithm. We chose weather stations as our use case as these are the most common VGI measurements. The evaluation is done for the south-west of Germany in August 2016 with temperature data from the Wunderground network and the Deutsche Wetter Dienst (DWD), in total 1561 stations. Using a 10-fold cross-validation scheme based on the DWD ground truth, we can show significant improvements of the predicted sensor reading. In our experiment we were obtain a 12.5% improvement on the mean absolute error.","PeriodicalId":409996,"journal":{"name":"arXiv: Applications","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123818856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
P. Ramos, D. Nascimento, Camila Cocolo, M. J. Nicola, C. Alonso, Luiz Gustavo Ribeiro, A. Ennes, F. Louzada
In this study we consider five generalizations of the standard Weibull distribution to describe the lifetime of two important components of sugarcane harvesting machines. The harvesters considered in the analysis harvest an average of 20 tons of sugarcane per hour, and their malfunction may lead to major losses; an effective maintenance approach is therefore of major interest for cost savings. For the considered distributions, the mathematical background is presented. Maximum likelihood is used for parameter estimation. Further, different discrimination procedures are used to obtain the best fit for each component. Finally, we propose a maintenance schedule for the components of the harvesters using predictive analysis.
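A minimal sketch of the fit-and-discriminate step, assuming synthetic lifetimes and using the exponentiated Weibull as a stand-in for the five generalizations considered in the paper; model choice here is by AIC, one common discrimination procedure.

```python
# Sketch: fit the standard Weibull and one generalization (the
# exponentiated Weibull) to simulated component lifetimes by maximum
# likelihood, then discriminate via AIC.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
lifetimes = stats.exponweib.rvs(a=2.0, c=1.5, scale=100, size=200,
                                random_state=rng)  # hours, synthetic

candidates = {
    "Weibull": stats.weibull_min,
    "Exponentiated Weibull": stats.exponweib,
}
for name, dist in candidates.items():
    params = dist.fit(lifetimes, floc=0)   # MLE with location fixed at zero
    loglik = np.sum(dist.logpdf(lifetimes, *params))
    k = len(params) - 1                    # free parameters (loc is fixed)
    print(f"{name}: AIC = {2 * k - 2 * loglik:.1f}")
```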
{"title":"Reliability-centered maintenance: analyzing failure in harvest sugarcane machine using some generalizations of the Weibull distribution","authors":"P. Ramos, D. Nascimento, Camila Cocolo, M. J. Nicola, C. Alonso, Luiz Gustavo Ribeiro, A. Ennes, F. Louzada","doi":"10.1155/2018/1241856","DOIUrl":"https://doi.org/10.1155/2018/1241856","url":null,"abstract":"In this study we considered five generalizations of the standard Weibull distribution to describe the lifetime of two important components of harvest sugarcane machines. The harvesters considered in the analysis does the harvest of an average of 20 tons of sugarcane per hour and their malfunction may lead to major losses, therefore, an effective maintenance approach is of main interesting for cost savings. For the considered distributions, the mathematical background is presented. Maximum likelihood is used for parameter estimation. Further, different discrimination procedures were used to obtain the best fit for each component. At the end, we propose a maintenance scheduling for the components of the harvesters using predictive analysis.","PeriodicalId":409996,"journal":{"name":"arXiv: Applications","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131926044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this article we differentiate and characterize the standard two-process serial models and the standard two-process parallel models by investigating the behavior of (conditional) distributions of the total completion times and the survival functions of the intercompletion times, without assuming any particular forms for the distributions of processing times. We develop our argument through mathematical proofs and computational methods. We find that for the standard two-process serial models, positive dependence between the total completion times does not hold if no specific distributional forms are imposed on the processing times. By contrast, for the standard two-process parallel models the total completion times are independent. This difference in the nature of process dependence allows one to distinguish a standard two-process serial model from a standard two-process parallel model. We also find that in standard two-process parallel models, the monotonicity of the survival function of the intercompletion time of stage 2, conditional on the completion of stage 1, depends on the monotonicity of the hazard function of the processing time, and that the survival function of the intercompletion time from stage 1 to stage 2 is increasing when the ratio of the hazard functions meets a certain criterion. The empirical finding that intercompletion times grow with the number of recalled words can thus be accounted for by standard parallel models. Finally, if the cumulative hazard function is concave or linear, the survival function from stage 1 to stage 2 is increasing.
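A small simulation conveys the distinguishing behavior, assuming (purely for illustration, since the paper imposes no distributional forms) exponential processing times: with this choice the serial totals come out positively dependent, while the parallel totals are independent.

```python
# Sketch: simulate standard two-process serial and parallel models and
# compare the dependence between the two total completion times.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
t1 = rng.exponential(1.0, n)   # processing time, process 1
t2 = rng.exponential(1.5, n)   # processing time, process 2

# Serial: process 2 starts only when process 1 finishes.
serial_a, serial_b = t1, t1 + t2
# Parallel: both processes run from time zero, independently.
par_a, par_b = t1, t2

print("serial corr(Ta, Tb):  ", np.corrcoef(serial_a, serial_b)[0, 1].round(3))
print("parallel corr(Ta, Tb):", np.corrcoef(par_a, par_b)[0, 1].round(3))
```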
{"title":"A Theoretical Study of Process Dependence for Standard Two-Process Serial Models and Standard Two-Process Parallel Models","authors":"Ru Zhang, Yanjun Liu, J. Townsend","doi":"10.4324/9781315169903-6","DOIUrl":"https://doi.org/10.4324/9781315169903-6","url":null,"abstract":"In this article we differentiate and characterize the standard two-process serial models and the standard two process parallel models by investigating the behavior of (conditional) distributions of the total completion times and survivals of intercompletion times without assuming any particular forms for the distributions of processing times. We address our argument through mathematical proofs and computational methods. It is found that for the standard two-process serial models, positive dependence between the total completion times does not hold if no specific distributional forms are imposed to the processing times. By contrast, for the standard two-process parallel models the total completion times are independent. According to different nature of process dependence, one can distinguish a standard two process serial model from a standard two-process parallel model. We also find that in standard two-process parallel models the monotonicity of survival function of the intercompletion time of stage 2 conditional on the completion of stage 1 depends on the monotonicity of the hazard function of processing time. We also find that the survival of intercompletion time(s) from stage 1 to stage 2 is increasing when the ratio of hazard function meets certain criterion. Then the empirical finding that the intercompletion time is grown with the growth of the number of recalled words can be accounted by standard parallel models. We also find that if the cumulative hazard function is concave or linear, the survival from stage 1 to stage 2 is increasing.","PeriodicalId":409996,"journal":{"name":"arXiv: Applications","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114706373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When sexual violence is a product of organized crime or of a social imaginary, the links between episodes of sexual violence can be understood as a latent structure. With this assumption in place, we can use data science to uncover complex patterns. In this paper we focus on the use of data mining techniques to unveil complex anomalous spatiotemporal patterns of sexual violence. We illustrate their use by analyzing all reported rapes in El Salvador over a period of nine years. Through our analysis, we are able to provide evidence of phenomena that, to the best of our knowledge, have not been previously reported in the literature. We devote special attention to a pattern we discover in the East, where underage victims report their boyfriends as perpetrators at anomalously high rates. Finally, we explain how such analyses could be conducted in real time, enabling early detection of emerging patterns so that law enforcement agencies and policy makers can react accordingly.
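One elementary building block of such an analysis, sketched here on made-up counts, is testing whether a region's rate of some report attribute departs from the national rate; the actual paper uses richer spatiotemporal pattern-detection machinery than this single test.

```python
# Sketch: flag a regional anomaly by testing whether one region's rate of
# a report attribute exceeds the national rate. Counts are synthetic
# placeholders, not figures from the paper.
from scipy import stats

national_rate = 0.08                      # share of reports with the attribute
region_reports, region_hits = 1200, 150   # hypothetical eastern region

result = stats.binomtest(region_hits, region_reports,
                         national_rate, alternative="greater")
print(f"regional rate = {region_hits / region_reports:.3f}, "
      f"p-value vs national rate = {result.pvalue:.2e}")
```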
{"title":"Discovery of Complex Anomalous Patterns of Sexual Violence in El Salvador","authors":"Maria De-Arteaga, A. Dubrawski","doi":"10.5281/zenodo.571551","DOIUrl":"https://doi.org/10.5281/zenodo.571551","url":null,"abstract":"When sexual violence is a product of organized crime or social imaginary, the links between sexual violence episodes can be understood as a latent structure. With this assumption in place, we can use data science to uncover complex patterns. In this paper we focus on the use of data mining techniques to unveil complex anomalous spatiotemporal patterns of sexual violence. We illustrate their use by analyzing all reported rapes in El Salvador over a period of nine years. Through our analysis, we are able to provide evidence of phenomena that, to the best of our knowledge, have not been previously reported in literature. We devote special attention to a pattern we discover in the East, where underage victims report their boyfriends as perpetrators at anomalously high rates. Finally, we explain how such analyzes could be conducted in real-time, enabling early detection of emerging patterns to allow law enforcement agencies and policy makers to react accordingly.","PeriodicalId":409996,"journal":{"name":"arXiv: Applications","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126557946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-11-10
DOI: 10.36334/modsim.2017.a2.bakar
K. Bakar
Areal-level spatial data are often large and sparse, and may come with geographical shapes that are regular or irregular (e.g., postcodes). Moreover, it is sometimes important to obtain predictive inference for regular or irregular areal shapes that are misaligned with the observed spatial geographical boundaries. For example, survey respondents may be asked for their postcode, while for policy-making purposes researchers are interested in information at Statistical Area Level 2 (SA2). The statistical challenge is to obtain spatial predictions at the SA2s, whose geographical boundaries may overlap with those of postcodes. The study is motivated by practical survey data obtained from the Australian National University (ANU) Poll, where the main research question concerns respondents' satisfaction with the way Australia is heading. The data are observed at 1,944 of the 2,516 available postcodes across Australia, and prediction is obtained at the 2,196 SA2s. The proposed method is also explored through a grid-based simulation study, in which data are observed on a regular grid and spatial prediction is done on a second regular grid whose geographical boundaries are misaligned with the first. The real-life example with the ANU Poll data addresses the situation of misaligned irregular geographical boundaries: the model is fitted with postcode data and prediction is then obtained at the SA2 level. A comparison study is also performed to validate the proposed method. In this paper, a Gaussian model is constructed under a Bayesian hierarchy. The novelty lies in the development of basis functions that can address spatial sparsity and localised spatial structure, and that can also address the problem of modelling large-dimensional spatial data by constructing knot-based reduced-dimensional basis functions.
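As a rough sketch of the knot-based reduced-rank idea, the snippet below builds Gaussian radial basis functions on a coarse knot grid and predicts at misaligned target centroids; ridge regression stands in for the paper's Bayesian hierarchy, and all data are synthetic.

```python
# Sketch: a knot-based reduced-rank spatial basis. 25 knots summarize a
# field observed at 500 sites, so prediction at misaligned units needs
# only the low-dimensional basis, not the full spatial covariance.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
sites = rng.uniform(0, 1, size=(500, 2))   # observed areal centroids
y = np.sin(4 * sites[:, 0]) * np.cos(4 * sites[:, 1]) + rng.normal(0, 0.1, 500)

kx = np.linspace(0.1, 0.9, 5)
knots = np.array([(a, b) for a in kx for b in kx])  # 25 knots << 500 sites

def basis(pts, knots, bw=0.15):
    # Gaussian radial basis functions centred at the knots.
    d2 = ((pts[:, None, :] - knots[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw ** 2))

fit = Ridge(alpha=1e-3).fit(basis(sites, knots), y)

# Predict at misaligned target centroids (e.g. SA2-like units).
targets = rng.uniform(0, 1, size=(50, 2))
pred = fit.predict(basis(targets, knots))
print("predictions at misaligned units:", pred[:5].round(2))
```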
{"title":"Bayesian Gaussian models for interpolating large-dimensional data at misaligned areal units","authors":"K. Bakar","doi":"10.36334/modsim.2017.a2.bakar","DOIUrl":"https://doi.org/10.36334/modsim.2017.a2.bakar","url":null,"abstract":"Areal level spatial data are often large, sparse and may appear with geographical shapes that are regular or irregular (e.g., postcode). Moreover, sometimes it is important to obtain predictive inference in regular or irregular areal shapes that is misaligned with the observed spatial areal geographical boundary. For example, in a survey the respondents were asked about their postcode, however for policy making purposes, researchers are often interested to obtain information at the SA2. The statistical challenge is to obtain spatial prediction at the SA2s, where the SA2s may have overlapped geographical boundaries with postcodes. The study is motivated by a practical survey data obtained from the Australian National University (ANU) Poll. Here the main research question is to understand respondents' satisfaction level with the way Australia is heading. The data are observed at 1,944 postcodes among the 2,516 available postcodes across Australia, and prediction is obtained at the 2,196 SA2s. The proposed method also explored through a grid-based simulation study, where data have been observed in a regular grid and spatial prediction has been done in a regular grid that has a misaligned geographical boundary with the first regular grid-set. The real-life example with ANU Poll data addresses the situation of irregular geographical boundaries that are misaligned, i.e., model fitted with postcode data and hence obtained prediction at the SA2. A comparison study is also performed to validate the proposed method. In this paper, a Gaussian model is constructed under Bayesian hierarchy. The novelty lies in the development of the basis function that can address spatial sparsity and localised spatial structure. It can also address the large-dimensional spatial data modelling problem by constructing knot based reduced-dimensional basis functions.","PeriodicalId":409996,"journal":{"name":"arXiv: Applications","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127019917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this dissertation, we develop nonparametric Bayesian models for biomedical data analysis. In particular, we focus on inference for tumor heterogeneity and inference for missing data. First, we present a Bayesian feature allocation model for tumor subclone reconstruction using mutation pairs. The key innovation lies in the use of short reads mapped to pairs of proximal single nucleotide variants (SNVs). In contrast, most existing methods use only marginal reads for unpaired SNVs. In the same context of using mutation pairs, in order to recover the phylogenetic relationship of subclones, we then develop a Bayesian treed feature allocation model. In contrast to commonly used feature allocation models, we allow the latent features to be dependent, using a tree structure to introduce dependence. Finally, we propose a nonparametric Bayesian approach to monotone missing data in longitudinal studies with non-ignorable missingness. In contrast to most existing methods, our method allows for incorporating information from auxiliary covariates and is able to capture complex structures among the response, missingness and auxiliary covariates. Our models are validated through simulation studies and are applied to real-world biomedical datasets.
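For readers unfamiliar with feature allocation models, the sketch below draws a binary feature-allocation matrix from the Indian buffet process, a standard prior in this model family; it illustrates only the generic construction, not the dissertation's paired-SNV or treed extensions.

```python
# Sketch: a draw from the Indian buffet process prior over binary
# feature-allocation matrices. alpha controls the expected number of
# latent features; values here are illustrative.
import numpy as np

rng = np.random.default_rng(5)
alpha, n_samples = 2.0, 8
features = []                          # one indicator list per latent feature

for i in range(1, n_samples + 1):
    for col in features:               # sample i takes an existing feature
        col.append(rng.random() < sum(col) / i)   # w.p. (# owners so far)/i
    for _ in range(rng.poisson(alpha / i)):       # plus Poisson(alpha/i) new
        features.append([False] * (i - 1) + [True])

Z = np.zeros((n_samples, len(features)), dtype=int)
for k, col in enumerate(features):
    Z[:, k] = col
print(Z)                               # rows: samples, cols: latent features
```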
{"title":"Bayesian nonparametric models for biomedical data analysis","authors":"Tianjian Zhou","doi":"10.15781/T2MP4W42K","DOIUrl":"https://doi.org/10.15781/T2MP4W42K","url":null,"abstract":"In this dissertation, we develop nonparametric Bayesian models for biomedical data analysis. In particular, we focus on inference for tumor heterogeneity and inference for missing data. First, we present a Bayesian feature allocation model for tumor subclone reconstruction using mutation pairs. The key innovation lies in the use of short reads mapped to pairs of proximal single nucleotide variants (SNVs). In contrast, most existing methods use only marginal reads for unpaired SNVs. In the same context of using mutation pairs, in order to recover the phylogenetic relationship of subclones, we then develop a Bayesian treed feature allocation model. In contrast to commonly used feature allocation models, we allow the latent features to be dependent, using a tree structure to introduce dependence. Finally, we propose a nonparametric Bayesian approach to monotone missing data in longitudinal studies with non-ignorable missingness. In contrast to most existing methods, our method allows for incorporating information from auxiliary covariates and is able to capture complex structures among the response, missingness and auxiliary covariates. Our models are validated through simulation studies and are applied to real-world biomedical datasets.","PeriodicalId":409996,"journal":{"name":"arXiv: Applications","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127112119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A press release from the National Institute of Standards and Technology (NIST) could potentially impede progress toward improving the analysis of forensic evidence and the presentation of forensic analysis results in courts in the United States and around the world. "NIST experts urge caution in use of courtroom evidence presentation method" was released on October 12, 2017, and was picked up by the this http URL news service. It argues that, except in exceptional cases, the results of forensic analyses should not be reported as "likelihood ratios". The press release, and the journal article by NIST researchers Steven P. Lund & Hari Iyer on which it is based, identify some legitimate points of concern, but make a strawman argument and reach an unjustified conclusion that throws the baby out with the bathwater.
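For context, a likelihood ratio simply compares how probable the evidence is under two competing hypotheses; the toy computation below, with made-up Gaussian score models, shows the form of the quantity under debate.

```python
# Sketch: a forensic likelihood ratio on toy numbers. The Gaussian score
# models and the parameter values are illustrative assumptions only.
from scipy.stats import norm

evidence = 0.8  # some measured similarity score
lr = (norm.pdf(evidence, loc=1.0, scale=0.5)    # p(E | same source)
      / norm.pdf(evidence, loc=0.0, scale=0.5)) # p(E | different source)
print(f"likelihood ratio = {lr:.1f}")  # LR > 1 favours the first hypothesis
```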
{"title":"A Response to: 'Nist Experts Urge Caution in Use of Courtroom Evidence Presentation Method'","authors":"G. Morrison","doi":"10.2139/SSRN.3054092","DOIUrl":"https://doi.org/10.2139/SSRN.3054092","url":null,"abstract":"A press release from the National Institute of Standards and Technology (NIST)could potentially impede progress toward improving the analysis of forensic evidence and the presentation of forensic analysis results in courts in the United States and around the world. \"NIST experts urge caution in use of courtroom evidence presentation method\" was released on October 12, 2017, and was picked up by the this http URL news service. It argues that, except in exceptional cases, the results of forensic analyses should not be reported as \"likelihood ratios\". The press release, and the journal article by NIST researchers Steven P. Lund & Harri Iyer on which it is based, identifies some legitimate points of concern, but makes a strawman argument and reaches an unjustified conclusion that throws the baby out with the bathwater.","PeriodicalId":409996,"journal":{"name":"arXiv: Applications","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128182322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper explores improvements in prediction accuracy and inference capability when allowing for potential correlation in team-level random effects across multiple game-level responses from different assumed distributions. First-order and fully exponential Laplace approximations are used to fit normal-binary and Poisson-binary multivariate generalized linear mixed models with non-nested random effects structures. We have built these models into the R package mvglmmRank, which is used to explore several seasons of American college football and basketball data.
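To make the model structure concrete, the simulation below generates data from the kind of process such a joint model assumes: team-level random effects correlated across a normal response (score margin) and a binary response (win). This is a synthetic sketch of the assumed data-generating process only; actual fitting is done with the R package mvglmmRank, whose API is not shown here.

```python
# Sketch: simulate correlated team-level random effects driving a normal
# response (margin) and a binary response (win). All values illustrative.
import numpy as np

rng = np.random.default_rng(6)
n_teams, n_games = 20, 500
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])   # correlation of effects across responses
effects = rng.multivariate_normal([0, 0], cov, size=n_teams)

home = rng.integers(0, n_teams, n_games)
away = (home + 1 + rng.integers(0, n_teams - 1, n_games)) % n_teams  # != home

# Normal response: score margin with home advantage and noise.
margin = 3.0 + effects[home, 0] - effects[away, 0] + rng.normal(0, 7, n_games)
# Binary response: win indicator from a logistic model on the second effect.
p_win = 1 / (1 + np.exp(-(effects[home, 1] - effects[away, 1])))
win = rng.random(n_games) < p_win

print("mean margin:", margin.mean().round(2),
      " home win rate:", win.mean().round(3))
```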
{"title":"Multivariate Generalized Linear Mixed Models for Joint Estimation of Sporting Outcomes","authors":"Jennifer Broatch, Andrew T. Karl","doi":"10.26398/IJAS.0030-008","DOIUrl":"https://doi.org/10.26398/IJAS.0030-008","url":null,"abstract":"This paper explores improvements in prediction accuracy and inference capability when allowing for potential correlation in team-level random effects across multiple game-level responses from different assumed distributions. First-order and fully exponential Laplace approximations are used to fit normal-binary and Poisson-binary multivariate generalized linear mixed models with non-nested random effects structures. We have built these models into the R package mvglmmRank, which is used to explore several seasons of American college football and basketball data.","PeriodicalId":409996,"journal":{"name":"arXiv: Applications","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125486082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}