Severe simultaneous recessions are defined to occur when at least half of the countries under investigation (Australia, Canada, Germany, Japan, the United Kingdom, and the United States) are in recession at the same time. I pose two new research questions that extend the stylized facts for US recessions. First, are occurrences of simultaneous recessions predictable? Second, does the yield spread predict future occurrences of simultaneous recessions? I use the indicator for severe simultaneous recessions as the explained variable in probit models. The lagged yield spread is an important explanatory variable: decreasing yield spreads are a leading indicator of severe simultaneous recessions. Both the US and German yield spreads act as leading indicators of severe simultaneous recessions.
C. Christiansen, "Predicting Simultaneous Severe Recessions Using Yield Spreads as Leading Indicators," ERN: Discrete Regression & Qualitative Choice Models (Single) (Topic), 2011-05-31. DOI: 10.2139/ssrn.1855937.
How does neighborhood context affect the relationship between individual racial identity, racial attitudes, and vote choice in an urban mayoral election? Although there have been many studies of racial context, racial attitudes, and political behavior, few have focused on vote choice in an urban election, for a variety of reasons including methodological complexity and a lack of quality contextual exit-poll data. Using exit-poll data from the 2005 Los Angeles election, multilevel logistic random-slope models are developed to explore the relationship between vote choice, racial attitudes, and neighborhood context. Results indicate significant variation across neighborhoods in the effects of race and racial attitudes, as well as significant contextual and cross-level effects on vote choice.
Jason A. McDaniel, "Urban Voters, Racial Attitudes, and Neighborhood Context: A Multilevel Analysis of the 2005 Los Angeles Mayoral Election," ERN: Discrete Regression & Qualitative Choice Models (Single) (Topic), 2011-04-22. DOI: 10.2139/ssrn.1810466.
Due to the increasing interest in market segmentation in modern marketing research, several methods for dealing with consumer heterogeneity and for revealing market segments have been described in the literature. In this study, the authors compare eight two-stage segmentation methods that aim to uncover consumer segments by classifying subject-specific indicator values. Four different indicators are used as a segmentation basis. The forces, which are subject-aggregated gradient values of the likelihood function, and the dfbetas, an outlier-detection measure, are two indicators that express a subject's effect on the estimation of the aggregate partworths in the conditional logit model. Although the conditional logit model is generally estimated at the aggregate level, this research obtains individual-level partworth estimates for segmentation purposes. The respondents' raw choices are the final indicator values. The authors classify the indicators by means of cluster analysis and latent class models. The goal of the study is to compare the segmentation performance of the methods with respect to success rate, membership recovery, and segment mean parameter recovery. With regard to the individual-level estimates, the authors obtain poor segmentation results with both cluster and latent class analysis. The cluster methods based on the forces, the dfbetas, and the choices yield good and similar results. Classification of the forces and the dfbetas deteriorates with the use of latent class analysis, whereas latent class modeling of the choices outperforms its cluster counterpart.
M. Crabbe, B. Jones, M. Vandebroek, "A Comparison of Two-Stage Segmentation Methods for Choice-Based Conjoint Data: A Simulation Study," ERN: Discrete Regression & Qualitative Choice Models (Single) (Topic), 2011-03-01. DOI: 10.2139/ssrn.1846504.
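The second stage of such a two-stage method — clustering subject-level indicator vectors into segments — can be sketched with k-means. The indicator values below are synthetic stand-ins (two hypothetical segments with distinct mean partworths), not the study's simulation design.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Two hypothetical segments of 50 respondents each, with well-separated
# mean partworth vectors and small within-segment noise.
seg_a = rng.normal([1.0, -0.5, 0.2], 0.2, size=(50, 3))
seg_b = rng.normal([-1.0, 0.8, -0.3], 0.2, size=(50, 3))
indicators = np.vstack([seg_a, seg_b])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(indicators)
labels = km.labels_
# Membership recovery: share of the first true segment mapped to one cluster.
recovery = max(labels[:50].mean(), 1 - labels[:50].mean())
print(f"first-segment purity: {recovery:.2f}")
```

With indicators this well separated, recovery is essentially perfect; the study's point is that recovery degrades depending on which indicator (forces, dfbetas, individual estimates, or raw choices) feeds this step.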
We investigate those features of Australian firms that make them likely takeover targets. To this end, we apply a logit probability model similar to the one developed by Palepu (1986). Our findings reveal that takeovers are most likely to be motivated by market under-valuation combined with high levels of tangible assets. Takeover targets may also be financially distressed, with high leverage and low liquidity, and may exhibit declining sales growth with decreasing profitability. Notwithstanding these insights, we find that the prediction models are unable to deliver statistically significant abnormal returns, thereby lending support to market efficiency.
Shuyi Cai, B. Balachandran, M. Dempsey, "The Financial Profiles of Takeover Target Firms and Their Takeover Predictability: Australian Evidence," ERN: Discrete Regression & Qualitative Choice Models (Single) (Topic), 2011-01-01. DOI: 10.22495/COCV8I3C6P1.
This paper considers a new method of uncovering demand information from market-level data on differentiated products. In particular, we propose a globally consistent continuous-choice demand model with distinct advantages over the models currently in use, and describe the econometric techniques for its estimation. The proposed model combines key properties of both the discrete- and continuous-choice traditions: i) it is flexible in the sense of Diewert (1974), ii) it is globally consistent in the sense that it can deal with the entry and exit of products over time, and iii) it incorporates a structural error term. In order to encompass different possible real-world applications, we consider two alternative specifications of the baseline model, depending on the degree of flexibility the researcher is willing to accept for the substitution patterns between inside and outside goods. The estimation procedure follows an analog of the algorithm derived in Berry (1994) and Berry, Levinsohn and Pakes (1995). Depending on the specification considered, the contraction mapping for matching observed and predicted budget shares may or may not be analytical. The case in which the contraction is analytical is relatively simple and fast to estimate, which can prove a key advantage in competition policy issues, where time and transparency are typically crucial factors. For the case in which it is not, we propose an alternative to Berry, Levinsohn and Pakes (1995)'s contraction mapping with a super-linear rate of convergence. The final sections provide a series of Monte Carlo experiments to illustrate the estimation properties of the model and discuss how it can be extended to cope with consumer heterogeneity and dynamic behaviour.
P. Davis, Ricardo Ribeiro, "A Simple Globally Consistent Continuous Demand Model for Market Level Data," ERN: Discrete Regression & Qualitative Choice Models (Single) (Topic), 2010-08-15. DOI: 10.2139/ssrn.1690163.
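The contraction mapping for matching observed and predicted shares can be illustrated in the simplest case, a plain logit with an outside good, where the fixed point is also available analytically. The shares below are hypothetical; this is the textbook Berry (1994) device, not the paper's richer model.

```python
import numpy as np

def logit_shares(delta):
    """Predicted shares of the inside goods under plain logit (outside good has utility 0)."""
    e = np.exp(delta)
    return e / (1.0 + e.sum())

# Hypothetical observed shares for 3 inside goods (outside share = 0.4).
s_obs = np.array([0.2, 0.3, 0.1])

delta = np.zeros(3)                    # initial mean utilities
for _ in range(500):
    # Berry's contraction: add the log-share discrepancy each iteration.
    delta_new = delta + np.log(s_obs) - np.log(logit_shares(delta))
    if np.max(np.abs(delta_new - delta)) < 1e-12:
        delta = delta_new
        break
    delta = delta_new
```

For plain logit the fixed point is simply `delta_j = log(s_j) - log(s_0)`, so the loop is only needed in richer specifications; the paper's point is that one of its two specifications keeps an analytical inversion like this while the other does not.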
The resource-constrained project scheduling problem involves determining a schedule of the project activities that satisfies the precedence relations and resource constraints while minimizing the project duration. In practice, activity durations may be subject to variability, so a stochastic approach to the problem is more appropriate. We propose a methodology for determining a project execution policy and a vector of predictive activity starting times, with the objective of minimizing a cost function that consists of the weighted expected activity starting-time deviations and the penalties or bonuses associated with late or early project completion. In a computational experiment, we show that our procedure greatly outperforms existing algorithms described in the literature.
Filip Deblaere, E. Demeulemeester, W. Herroelen, "Generating Proactive Execution Policies for Resource-Constrained Projects with Uncertain Activity Durations," ERN: Discrete Regression & Qualitative Choice Models (Single) (Topic), 2010-02-23. DOI: 10.2139/ssrn.1588642.
Using credit card application data provided by a major credit card issuer, we estimate the demand for credit cards using a regression discontinuity method. Our method exploits a unique feature of the credit card solicitation campaign design: the issuer offers consumers different interest rates based on cutoff points in consumers' credit scores. This discontinuity in the interest rate offers allows us to obtain a reliable estimate of the effect of the interest rate on consumers' credit demand. We find that consumers' demand for credit cards is close to unit-elastic; the demand elasticity is estimated at -1.14. In addition, consumers with better credit ratings are more responsive to the interest rate than consumers with lower credit ratings. We also find that, without controlling for the endogeneity of contracts, a regression model would give biased estimates.
Dandan Huang, Wei Tan, "Estimating the Demand for Credit Card: A Regression Discontinuity Approach," ERN: Discrete Regression & Qualitative Choice Models (Single) (Topic), 2009-09-02. DOI: 10.2139/ssrn.1466449.
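The identification idea can be sketched on synthetic data: consumers just above and below a score cutoff face different rates, so comparing borrowing in a narrow bandwidth around the cutoff isolates the rate effect. The cutoff, rates, and demand curve below are invented for illustration (with the true elasticity set near the paper's -1.14 estimate), not taken from the issuer's data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000
score = rng.uniform(600, 700, n)               # hypothetical credit scores
cutoff = 650
rate = np.where(score >= cutoff, 0.12, 0.18)   # assumed rate schedule at the cutoff
# Assumed log-demand: elasticity -1.14 plus a smooth effect of score itself.
log_demand = (8.0 - 1.14 * np.log(rate)
              + 0.001 * (score - cutoff)
              + rng.normal(0, 0.05, n))

# Local comparison of mean log demand in a narrow bandwidth around the cutoff.
bw = 5.0
above = log_demand[(score >= cutoff) & (score < cutoff + bw)].mean()
below = log_demand[(score < cutoff) & (score >= cutoff - bw)].mean()
elasticity = (above - below) / (np.log(0.12) - np.log(0.18))
```

Because the smooth score effect barely changes within the bandwidth, the jump in demand is attributable to the rate jump, which is the essence of the regression discontinuity design.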
Motivated by the findings of Ramada-Sarasola (2009) and the lack of robustness to model specification of the previous literature's results on foreign entry mode choice, in this paper I perform an Extreme Bounds Analysis to determine which of almost 60 explanatory variables used in the literature are robust to different model specifications. I do so by following the methodology introduced in Sala-i-Martin (1997b) in a multinomial logit framework, based on 640 entries into foreign countries made by the 22 largest financial MNCs over the last 15 years. I suggest additional hypotheses to capture host-country-level determinants, and I improve the operationalization of industry-level variables in a multi-home-country and host-country setting. Among other results, I find that an MNC's size and its international experience increase the likelihood of greenfield investment (GI) as opposed to M&A or any type of entry mode involving a partner. Greater cultural distance between home and host country, a better-developed local financial sector (or local credit market), a more regulated environment for obtaining licenses, and more macroeconomic sustainability increase the chances of GI, while worse local infrastructure, higher ICT costs, and greater difficulties in registering property and employing workers decrease its odds.
Magdalena Ramada-Sarasola, "Determining the Choice of Entry Mode of Multinationals: My 6 Million Regressions," ERN: Discrete Regression & Qualitative Choice Models (Single) (Topic), 2009-08-07. DOI: 10.2139/ssrn.1445422.
We propose a nonparametric approach for estimating single-index, binary-choice models when parametric models such as Probit and Logit are potentially misspecified. The new approach involves two steps: first, we estimate index coefficients using sliced inverse regression without specifying a parametric probability function a priori; second, we estimate the unknown probability function using kernel regression of the binary choice variable on the single index estimated in the first step. The estimated probability functions for different demographic groups indicate that the conventional dummy variable approach cannot fully capture heterogeneous effects across groups. Using both simulated and labor market data, we demonstrate the merits of this new approach in solving model misspecification and heterogeneity problems.
Pian Chen, M. Velamuri, "Misspecification and Heterogeneity in Single-Index, Binary Choice Models," ERN: Discrete Regression & Qualitative Choice Models (Single) (Topic), 2009-04-21. DOI: 10.2139/ssrn.1393062.
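The two-step idea can be sketched under strong simplifications: with a binary outcome there are only two slices, so sliced inverse regression reduces to the covariance-whitened difference of the slice means of the covariates; the unknown link is then recovered by kernel regression of the outcome on the estimated index. The data-generating process below is an assumed logit, used only to check the direction estimate.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 2000, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -0.5, 0.0])          # assumed index coefficients
index = X @ beta_true
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-index))).astype(int)

# Step 1: SIR with two slices (y=0, y=1). The direction is
# Cov(X)^{-1} (E[X | y=1] - E[X | y=0]), identified up to scale.
x0, x1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
beta_hat = np.linalg.solve(np.cov(X, rowvar=False), x1 - x0)
beta_hat /= np.linalg.norm(beta_hat)

# Step 2: Nadaraya-Watson kernel regression of y on the estimated index,
# estimating the probability function without a parametric link.
z = X @ beta_hat
def prob_at(z0, h=0.2):
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)      # Gaussian kernel weights
    return (w * y).sum() / w.sum()
```

No probit or logit link is imposed in either step, which is what lets the approach detect misspecification of those parametric forms.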
This paper is concerned with the semiparametric estimation of function means that are scaled by an unknown conditional density function. Parameters of this form arise naturally in the consideration of models where interest is focused on the expected value of an integral of a conditional expectation with respect to a continuously distributed “special regressor” with unbounded support. In particular, a consistent and asymptotically normal estimator of an inverse conditional density-weighted average is proposed whose validity does not require data-dependent trimming or the subjective choice of smoothing parameters. The asymptotic normality result is also rate-adaptive in the sense that it allows for the formulation of the usual Wald-type inference procedures without knowledge of the estimator's actual rate of convergence, which depends in general on the tail behaviour of the conditional density weight. The theory developed in this paper exploits recent results of Goh & Knight (2009) concerning the behaviour of estimated regression-quantile residuals. Simulation experiments illustrating the applicability of the procedure proposed here to a semiparametric binary-choice model are suggestive of good small-sample performance.
Chuan Goh, "Nonstandard Estimation of Inverse Conditional Density-Weighted Expectations," ERN: Discrete Regression & Qualitative Choice Models (Single) (Topic), 2009-04-03. DOI: 10.2139/ssrn.1333779.