Pub Date: 2019-04-26 | DOI: 10.1285/I20705948V12N1P55
Andrew V. Frane
A previous study in this journal used Monte Carlo simulations to compare the power and familywise Type I error rates of ten multiple-testing procedures in the context of pairwise comparisons in balanced three-group designs. The authors concluded that the Benjamini–Hochberg procedure was the "best." However, they did not compare the Benjamini–Hochberg procedure to commonly used multiple-testing procedures that were developed specifically for pairwise comparisons, such as Fisher's protected least significant difference and Tukey's honest significant difference. Simulations in the present study show that in the three-group case, Fisher's method is more powerful than both Tukey's method and the Benjamini–Hochberg procedure. Compared to the Benjamini–Hochberg procedure, Tukey's method is shown to be less powerful in terms of per-pair power (average probability of significance across the tests of false null hypotheses), but more powerful in terms of any-pair power (probability of significance in at least one test of a false null hypothesis). Additionally, the present study shows that small deviations from normality in the population distributions have little effect on the power of pairwise comparisons, and that the previous study's finding to the contrary was based on a methodological inconsistency.
{"title":"Some clarifications regarding power and Type I error control for pairwise comparisons of three groups","authors":"Andrew V. Frane","doi":"10.1285/I20705948V12N1P55","DOIUrl":"https://doi.org/10.1285/I20705948V12N1P55","url":null,"abstract":"A previous study in this journal used Monte Carlo simulations to compare the power and familywise Type I error rates of ten multiple-testing procedures in the context of pairwise comparisons in balanced three-group designs. The authors concluded that the Benjamini–Hochberg procedure was the \"best.\"' However, they did not compare the Benjamini–Hochberg procedure to commonly used multiple-testing procedures that were developed specifically for pairwise comparisons, such as Fisher's protected least significant difference and Tukey's honest significant difference. Simulations in the present study show that in the three-group case, Fisher's method is more powerful than both Tukey's method and the Benjamini–Hochberg procedure. Compared to the Benjamini–Hochberg procedure, Tukey's method is shown to be less powerful in terms of per-pair power (average probability of significance across the tests of false null hypotheses), but more powerful in terms of any-pair power (probability of significance in at least one test of a false null hypothesis). Additionally, the present study shows that small deviations from normality in the population distributions have little effect on the power of pairwise comparisons, and that the previous study's finding to the contrary was based on a methodological inconsistency.","PeriodicalId":44770,"journal":{"name":"Electronic Journal of Applied Statistical Analysis","volume":"12 1","pages":"55-68"},"PeriodicalIF":0.7,"publicationDate":"2019-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49032254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-04-26 | DOI: 10.1285/I20705948V12N1P69
Hassan S. Uraibi
Lasso regression methods are widely used in many scientific applications. However, many practitioners of statistics are unaware that a small change in the data can produce an unstable Lasso solution path. For instance, in the presence of outlying observations, the Lasso may increase the false selection rate among predictors. Moreover, the discussion on how to determine an optimal shrinkage parameter for the Lasso is still ongoing. This paper therefore proposes a robust algorithm to tackle the instability of the Lasso in the presence of outliers. A new weight function is proposed to downweight outlying observations, and the weighted observations are then resampled over a number of subsamples to control false selection by the Lasso. A simulation study and a real-data application are used to assess the performance of the proposed algorithm. The proposed method is more efficient than LAD-Lasso and weighted LAD-Lasso and gives more reliable results.
{"title":"Weighted Lasso Subsampling for HighDimensional Regression","authors":"Hassan S. Uraibi","doi":"10.1285/I20705948V12N1P69","DOIUrl":"https://doi.org/10.1285/I20705948V12N1P69","url":null,"abstract":"Lasso regression methods are widely used for a number of scientic applications.Many practitioners of statistics were not aware that a small changein the data would results in unstable Lasso solution path. For instance, inthe presence of outlying observations, Lasso perhaps leads the increase inthe percentage of the false selection rate of predictors. On the other hand,the discussions on determining an optimal shrinkage parameter of Lasso isstill ongoing. Therefore, this paper proposed a robust algorithm to tacklethe instability of Lasso in the presence of outliers. A new weight function isproposed to overcome the problem of outlying observations. The weightedobservations are subsamples for a certain number of subsamples to controlthe false Lasso selection. The simulation study has been carried out and usesreal data to assess the performance of our proposed algorithm. Consequently,the proposed method shows more eciency than LAD-Lasso and weightedLAD-Lasso and more reliable results.","PeriodicalId":44770,"journal":{"name":"Electronic Journal of Applied Statistical Analysis","volume":"12 1","pages":"69-84"},"PeriodicalIF":0.7,"publicationDate":"2019-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1285/I20705948V12N1P69","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41803004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-10-14 | DOI: 10.1285/I20705948V11N2P397
Abedel-Qader Al-Masri
In this article, acceptance sampling plans are suggested for a life test that is truncated at a preassigned time. The lifetimes of the test units are assumed to follow the inverse-gamma distribution. The minimum sample sizes necessary to ensure the specified mean life are obtained. The operating characteristic function values of the proposed sampling plans and the producer's risk are provided. Some tables are given and the results are illustrated by numerical examples.
{"title":"Acceptance Sampling Plans Based on Truncated Life Tests in the Inverse-Gamma Model","authors":"Abedel-Qader Al-Masri","doi":"10.1285/I20705948V11N2P397","DOIUrl":"https://doi.org/10.1285/I20705948V11N2P397","url":null,"abstract":"In this article, acceptance sampling plans are suggested for the life test that is truncated at a preassigned time. The life time of the test units are assumed to follow the inverse-gamma distribution. The minimum sample sizes necessary to ensure the specified mean life is obtained. The operating characteristic function values of the proposed sampling plans and the producer's risk are provieded. Some tables are given and the results are illustrated by numerical examples.","PeriodicalId":44770,"journal":{"name":"Electronic Journal of Applied Statistical Analysis","volume":"11 1","pages":"397-404"},"PeriodicalIF":0.7,"publicationDate":"2018-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1285/I20705948V11N2P397","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48938329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-10-14 | DOI: 10.1285/I20705948V11N2P447
Giuseppe Pandolfo, Antonio D’Ambrosio, G. C. Porzio
A procedure is developed to deal with the problem of classifying objects in circular statistics. It is fully non-parametric and based on depth functions for directional data. Using the so-called DD-plot, we apply the k-nearest neighbors method to discriminate between competing groups. Three notions of data depth for directional data are considered: the angular simplicial depth, the angular Tukey depth, and the arc distance depth. We investigate and compare their performance through the average misclassification rate under different distributional settings, using simulated and real data sets. Results show that the arc distance depth should generally be preferred, and in some cases it outperforms the classifiers based on the angular simplicial and angular Tukey depths.
{"title":"A note on depth-based classification of circular data","authors":"Giuseppe Pandolfo, Antonio D’Ambrosio, G. C. Porzio","doi":"10.1285/I20705948V11N2P447","DOIUrl":"https://doi.org/10.1285/I20705948V11N2P447","url":null,"abstract":"A procedure is developed in order to deal with the classification problem of objects in circular statistics. It is fully non-parametric and based on depth functions for directional data. Using the so-called DD-plot, we apply the k-nearest neighbors method in order to discriminate between competing groups. Three different notions of data depth for directional data are considered: the angular simplicial, the angular Tukey and the arc distance. We investigate and compare their performances through the average misclassification rate with respect to different distributional settings by using simulated and real data sets. Results show that the use of the arc distance depth should be generally preferred, and in some cases it outperforms the classifier based both on the angular simplicial and Tukey depths.","PeriodicalId":44770,"journal":{"name":"Electronic Journal of Applied Statistical Analysis","volume":"11 1","pages":"447-462"},"PeriodicalIF":0.7,"publicationDate":"2018-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1285/I20705948V11N2P447","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48790738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-10-14 | DOI: 10.1285/I20705948V11N2P655
E. Martinez, J. Achcar, T. R. Icuma
In the context of survival analysis, we introduce Bayesian and maximum likelihood approaches for the bivariate Basu-Dhar geometric model in the presence of covariates and a cure fraction. This distribution is useful for modeling bivariate discrete lifetime data. In the Bayesian estimation, posterior summaries of interest were obtained using standard Markov chain Monte Carlo methods in the OpenBUGS software. Maximum likelihood estimates for the parameters of interest were computed using the "maxLik" package of the R software. The proposed approaches are illustrated with two real data sets.
{"title":"Bivariate Basu-Dhar geometric model for survival data with a cure fraction","authors":"E. Martinez, J. Achcar, T. R. Icuma","doi":"10.1285/I20705948V11N2P655","DOIUrl":"https://doi.org/10.1285/I20705948V11N2P655","url":null,"abstract":"Under a context of survival lifetime analysis, we introduce in this paper Bayesian and maximum likelihood approaches for the bivariate Basu-Dhar geometric model in the presence of covariates and a cure fraction. This distribution is useful to model bivariate discrete lifetime data. In the Bayesian estimation, posterior summaries of interest were obtained using standard Markov Chain Monte Carlo methods in the OpenBUGS software. Maximum likelihood estimates for the parameters of interest were computed using the textquotedblleft maxLik\" package of the R software. Illustrations of the proposed approaches are given for two real data sets.","PeriodicalId":44770,"journal":{"name":"Electronic Journal of Applied Statistical Analysis","volume":"11 1","pages":"655-673"},"PeriodicalIF":0.7,"publicationDate":"2018-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1285/I20705948V11N2P655","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43866792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-10-14 | DOI: 10.1285/I20705948V11N2P622
Alejandro Murua, Annick Nembot-Simo
We present an approximate posterior inference methodology for a Bayesian hierarchical mixed-effect Poisson regression model. The model allows us to address the multiple testing problem in the presence of many group or cluster effects; this is carried out through a specialized Bayesian false discovery rate procedure. The likelihood is simplified by an approximation based on Laplace's approximation for integrals and a trace approximation for the determinants. The posterior marginals are estimated using this approximated likelihood. In particular, we obtain credible regions for the parameters, as well as probability estimates for the difference between risks (Poisson intensities) associated with different groups or clusters, or with different levels of the fixed effects. The methodology is illustrated through an application to a vaccine trial.
{"title":"Approximate Posterior Inference for Multiple Testing using a Hierarchical Mixed-effect Poisson Regression Model","authors":"Alejandro Murua, Annick Nembot-Simo","doi":"10.1285/I20705948V11N2P622","DOIUrl":"https://doi.org/10.1285/I20705948V11N2P622","url":null,"abstract":"We present an approximate posterior inference methodology for a Bayesian hierarchical mixed-effect Poisson regression model. The model serves us to address the multiple testing problem in the presence of many group or cluster effects. This is carried out through a specialized Bayesian false discovery rate procedure. The likelihood is simplified by an approximation based on Laplace's approximation for integrals and a trace approximation for the determinants. The posterior marginals are estimated using this approximated likelihood. In particular, we obtain credible regions for the parameters, as well as probability estimates for the difference between risks (Poisson intensities) associated with different groups or clusters, or different levels of the fixed effects. The methodology is illustrated through an application to a vaccine trial.","PeriodicalId":44770,"journal":{"name":"Electronic Journal of Applied Statistical Analysis","volume":"11 1","pages":"622-654"},"PeriodicalIF":0.7,"publicationDate":"2018-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44941522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-10-14 | DOI: 10.1285/I20705948V11N2P577
Ahmed Quazza, R. Noureddine, Zarrouk Zoubir
The purpose of this paper is to provide a hybrid method combining the Partial Least Squares (PLS) and Bayesian approaches to estimate Structural Equation Models. The main advantage of this new method is that it overcomes the normality assumption required in the Bayesian approach. The results obtained from applications to simulated and real data show that our proposed method outperforms both the PLS and Bayesian approaches in terms of standard errors.
{"title":"A hybrid method combining the PLS and the Bayesian approaches to estimate the Structural Equation Models","authors":"Ahmed Quazza, R. Noureddine, Zarrouk Zoubir","doi":"10.1285/I20705948V11N2P577","DOIUrl":"https://doi.org/10.1285/I20705948V11N2P577","url":null,"abstract":"The purpose of this paper is to provide a hybrid method combining the Partial Least Squares and the Bayesian approaches to estimate the Structural Equation Models. The aim advantage of this new method is to overcome the assumption of normality that is required in Bayesian approach. The results obtained from an application on simulated and on real data show that our proposed method outperforms both PLS and Bayesian approaches in terms of standard errors.","PeriodicalId":44770,"journal":{"name":"Electronic Journal of Applied Statistical Analysis","volume":"11 1","pages":"577-607"},"PeriodicalIF":0.7,"publicationDate":"2018-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1285/I20705948V11N2P577","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43226629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-10-14 | DOI: 10.1285/I20705948V11N2P427
C. Bernini, M. Matteucci, S. Mignani
The investigation of individual and community well-being has acquired a particular relevance over time for governments seeking to develop strategies and identify resources for improving standards of living. To this aim, it is necessary to analyse changes at the overall level and to examine how subjective well-being differs between sub-groups of the population as well as across local areas. Using data measuring the well-being of residents in the Romagna area (Italy), we propose a multidimensional approach within the item response theory (IRT) framework to estimate an overall score of community Subjective Well-Being (SWB) and individual scores reflecting specific dimensions, taking into account the ordinal polytomous nature of the items. The results show that aspects dealing with Life Evaluation mainly affect the overall SWB, while issues pertaining to Community and Environment are less important. The proposed approach is effective in developing an indicator that accounts for the multidimensionality of SWB and in estimating individual scores that reflect the heterogeneity among residents.
{"title":"Modelling Subjective Well-Being dimensions through an IRT bifactor model: Evidences from an Italian study","authors":"C. Bernini, M. Matteucci, S. Mignani","doi":"10.1285/I20705948V11N2P427","DOIUrl":"https://doi.org/10.1285/I20705948V11N2P427","url":null,"abstract":"The investigation of individual and community well-being has acquired a particular relevance over time for governments to develop strategies and identify resources for improving standards of living. To this aim, it is necessary to analyse changes at the overall level and examine how subjective well-being differs between different sub-groups of the population as well as across local areas. Using data measuring the well-being of residents in the Romagna area (Italy), we propose a multidimensional approach within the item response theory (IRT) framework to estimate an overall score of community Subjective Well-Being (SWB) and individual scores reflecting specific dimensions, taking into account for the ordinal polytomous nature of the items. The results show that aspects dealing with Life Evaluation mainly affect the overall SWB, while issues pertaining to Community and Environment are less important. The proposed approach is effective in developing an indicator taking into account the multidimensionality of SWB and estimating individual scores reflecting the heterogeneity among residents.","PeriodicalId":44770,"journal":{"name":"Electronic Journal of Applied Statistical Analysis","volume":"11 1","pages":"427-446"},"PeriodicalIF":0.7,"publicationDate":"2018-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1285/I20705948V11N2P427","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41320672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-10-14 | DOI: 10.1285/I20705948V11N2P489
O. D. L. Torre, Evaristo Galeana-Figueroa, J. Álvarez‐García
In this paper we test the use of Markov-switching models in equity trading strategies, following Brooks and Persand (2001), Kritzman et al. (2012), and Hauptmann et al. (2014), who suggest their use as warning systems for poorly performing periods. We extend their reviews by testing the U.S. and U.K. markets again (with the impact of trading fees) and by extending our tests to the Italian and Mexican cases. The rationale behind our Markov-switching strategy is to invest in equity index-tracking ETFs in low-volatility or "good performing" periods and in the local risk-free asset in high-volatility or "bad performing" ones. Our results show that, in a weekly simulation from January 4, 2001 to July 30, 2017 with a 0.35% trading fee plus taxes, our system generates alpha in all the simulated markets, even though the Italian case showed several periods of deep distress due to financial or political crises.
{"title":"Using Markov-Switching models in Italian, British, U.S. and Mexican equity portfolios: a performance test","authors":"O. D. L. Torre, Evaristo Galeana-Figueroa, J. Álvarez‐García","doi":"10.1285/I20705948V11N2P489","DOIUrl":"https://doi.org/10.1285/I20705948V11N2P489","url":null,"abstract":"In this paper we test the use of Markov Switching models in equity trading strategies, following Brooks and Persand (2001), Kritzman et al. (2012) and Hauptmann et al. (2014), who suggest their use as warning systems of bad performing periods. We extend their reviews by testing again (with the impact of trading fees) the U.S. and U.K. markets and by extending our tests to the Italian and Mexican case. The rationale behind our Markov-Switching strategy is to invest in equity index tracking ETFs in low volatility or ”good performing” periods and in the local risk-free asset in high-volatility or ”bad performing” ones. Our results show that in a weekly simulation from January 4, 2001 to July 30, 2017 with a 0.35% trading fee plus taxes, our system is useful to create alpha in all the simulated markets even if the Italian case showed several deep distress moments due to a financial or political crisis.","PeriodicalId":44770,"journal":{"name":"Electronic Journal of Applied Statistical Analysis","volume":"11 1","pages":"489-505"},"PeriodicalIF":0.7,"publicationDate":"2018-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1285/I20705948V11N2P489","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42529437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-10-14 | DOI: 10.1285/i20705948v11n2p405
Ahmed Bani‐Mustafa, Sondos Abuorf, Raghdah AL-Jumlah, Manar Al-Mutair, Hajar Kattan, Hajar AL-Muzaiel, A. Mazari, Abdulwahed Khalfan, Najmuddin S. Patwa
Understanding the factors that affect lead time can help supply chain management gain a better understanding of the amount of time it takes to deliver products to the market. This investigation sought to determine the factors that influenced shipment lead time at Al-Ghanim Sahra Transportation (AST). The delivery lead time data were collected from AST over four years (2013–2016). The data consist of customers' orders, from the actual time of shipment until the clearance date, across several stages of shipment. Multivariate fixed- and random-effects regression models were employed, using stepwise variable selection to identify significant independent factors and their interactions with respect to lead time. Supp, Commodities, Departure Port, and Shipping Line, along with their interactions, were significant shipping-related contributors to lead time, explaining 38.7% of the total variation in lead time.
{"title":"Predicting Total Shipping and Clearance Time for Al-Ghanim Sahara Transportation","authors":"Ahmed Bani‐Mustafa, Sondos Abuorf, Raghdah AL-Jumlah, Manar Al-Mutair, Hajar Kattan, Hajar AL-Muzaiel, A. Mazari, Abdulwahed Khalfan, Najmuddin S. Patwa","doi":"10.1285/i20705948v11n2p405","DOIUrl":"https://doi.org/10.1285/i20705948v11n2p405","url":null,"abstract":"Understanding factors that affect lead time can help supply chain management to get better understanding for the amount of time it takes to deliver products to the market. This investigation sought to determine factors that influenced lead time shipment at Al-Ghanim Sahra Transportation (AST). The delivery lead time data was collected from AST over four years (2013 – 2016). The information consists of customers’ orders starting from the actual time of shipments until clearance date over several stages of shipment. A multivariate fixed and random regression models were employed using stepwise variable selections to identify significant independent factors; including their interaction to lead time. Supp, Commodities, Departure Port and Shipping line along with their interaction were significant shipping-related contributors to lead time explaining 38.7% of the total variation in lead time.","PeriodicalId":44770,"journal":{"name":"Electronic Journal of Applied Statistical Analysis","volume":"182 5","pages":"405-426"},"PeriodicalIF":0.7,"publicationDate":"2018-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41275268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}