In recent years, statistical assessments of crash injury severity data have increasingly segmented the available crash data into observational groups to explore whether such groups share the same estimated parameters. This approach is commonly used to account for parameters that may shift over time, with the data often segmented into groups by observational year. Such segmentation, however, can leave each group with a small sample, and this has raised concerns about the statistical reliability of the resulting estimates. These concerns about diminishing sample size are often misplaced and not well understood. In this paper, the impact of data segmentation is assessed by estimating models that address the possibility of temporally shifting parameters. Starting with a sample of 80,000 observations, the data are randomly segmented into groups with sample sizes ranging from 1,000 to 40,000, and the segment-specific models are then compared with the overall model (estimated on all available data) by means of likelihood ratio tests. The results indicate that: 1) model specification is critically important regardless of sample size; 2) the suitability of simpler versus more complex models should be determined by statistical tests, not by sample size; and 3) the variance/covariance structure of the data governs both model-specification and sample-size effects, which means that sample-size requirements are data-specific and that general statements regarding minimum sample-size requirements for specific model types cannot be made.
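The comparison described above follows the standard likelihood ratio test for parameter transferability across data segments: the chi-squared statistic is $X^2 = -2\bigl[LL(\beta_{\text{overall}}) - \sum_g LL(\beta_g)\bigr]$, with degrees of freedom equal to the summed parameter count of the segment models minus that of the overall model. A minimal sketch of this calculation is given below; the function name and all numeric values are hypothetical and serve only to illustrate the mechanics of the test.

```python
import numpy as np
from scipy.stats import chi2


def segmentation_lr_test(ll_overall, k_overall, ll_segments, k_segments):
    """Likelihood ratio test of a pooled model against segment-specific models.

    ll_overall  : log-likelihood at convergence of the pooled (overall) model
    k_overall   : number of estimated parameters in the pooled model
    ll_segments : log-likelihoods at convergence of each segment model
    k_segments  : parameter counts of each segment model
    """
    # Segment models nest the pooled model, so the summed segment
    # log-likelihood can never be lower and the statistic is non-negative.
    stat = -2.0 * (ll_overall - sum(ll_segments))
    df = sum(k_segments) - k_overall        # extra parameters the segments use
    p_value = chi2.sf(stat, df)             # upper-tail chi-squared probability
    return stat, df, p_value


# Hypothetical values for two segments, 15 parameters per model
stat, df, p = segmentation_lr_test(
    ll_overall=-51234.7, k_overall=15,
    ll_segments=[-25480.2, -25702.1], k_segments=[15, 15],
)
print(f"chi-squared = {stat:.1f}, df = {df}, p = {p:.4g}")
```

A small p-value would reject the hypothesis that the segments share a common set of parameters; a large one would favor the simpler pooled model.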