Title: A Note on the Optimum Allocation of Resources to Follow up Unit Nonrespondents in Probability Surveys
Authors: Siu-Ming Tam, A. Holmberg, Summer Wang
DOI: https://doi.org/10.2478/jos-2023-0020
Journal of Official Statistics, published 2023-06-07

Abstract: The common practice for addressing nonresponse in probability surveys at National Statistical Offices is to follow up every nonrespondent with a view to lifting response rates. Since the response rate is an insufficient indicator of data quality, it is argued that nonrespondents should instead be followed up with a view to reducing the mean squared error (MSE) of the estimator of the variable of interest. In this article, we propose a method for allocating nonresponse follow-up resources so as to minimise the MSE under a quasi-randomisation framework. The method is illustrated using the 2018/19 Rural Environment and Agricultural Commodities Survey from the Australian Bureau of Statistics.
Title: Design and Sample Size Determination for Experiments on Nonresponse Followup using a Sequential Regression Model
Authors: Andrew M. Raim, T. Mathew, Kimberly F. Sellers, Renée Ellis, Mikelyn Meyers
DOI: https://doi.org/10.2478/jos-2023-0009
Journal of Official Statistics, published 2023-06-01

Abstract: Statistical agencies depend on responses to inquiries made to the public, and occasionally conduct experiments to improve contact procedures. Agencies may wish to assess whether an operational refinement produces a significant change in response rates. This work considers the assessment of response rates when up to L attempts are made to contact each subject, and subjects receive one of J possible variations of the operation under experimentation. In particular, the continuation-ratio logit (CRL) model facilitates inference on the probability of success at each step of the sequence, given that failures occurred at previous attempts. The CRL model is investigated as a basis for sample size determination (one of the major decisions faced by an experimenter) to attain a desired power under a Wald test of a general linear hypothesis. An experiment conducted for nonresponse followup in the United States 2020 decennial census provides a motivating illustration.
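The CRL structure described in this abstract can be sketched numerically. The block below is a minimal illustration, not the authors' code: for hypothetical per-attempt intercepts and a hypothetical treatment effect on the logit scale, it computes the marginal probability of a first success at each of the L contact attempts.

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def crl_attempt_probs(alphas, effect=0.0):
    """Marginal probability of a *first* success at each attempt l = 1..L
    under a continuation-ratio logit model: the conditional success
    probability at attempt l is logistic(alpha_l + effect), given that
    all earlier attempts failed."""
    probs = []
    survive = 1.0  # probability of having failed at every earlier attempt
    for a in alphas:
        lam = logistic(a + effect)
        probs.append(survive * lam)
        survive *= 1.0 - lam
    return probs

# Hypothetical two-attempt design; the treatment shifts each logit by +0.5.
control = crl_attempt_probs([-1.0, -1.5])
treated = crl_attempt_probs([-1.0, -1.5], effect=0.5)
overall_control = sum(control)  # probability of ever responding, control arm
overall_treated = sum(treated)  # probability of ever responding, treated arm
```

The article's sample size determination then rests on a Wald test of linear hypotheses about such coefficients; this sketch only shows how the CRL model turns logits into attempt-level response probabilities.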
Title: From Quarterly to Monthly Turnover Figures Using Nowcasting Methods
Authors: Daan B. Zult, Sabine Krieg, B. Schouten, P. Ouwehand, Jan van den Brakel
DOI: https://doi.org/10.2478/jos-2023-0012
Journal of Official Statistics, published 2023-06-01

Abstract: Short-term business statistics at Statistics Netherlands are largely based on Value Added Tax (VAT) administrations. Companies may decide to file their tax returns on a monthly, quarterly, or annual basis; most file quarterly, so these VAT-based short-term business statistics have so far been published at a quarterly frequency as well. In this article we compare different methods for compiling monthly figures, even though a major part of the data is observed quarterly. The methods considered must address two issues. The first is combining a high- and a low-frequency series into a single high-frequency series, where both series measure the same phenomenon in the target population; the appropriate method for this purpose is usually referred to as "benchmarking". The second is a missing data problem, because the first and second months of a quarter are published before the corresponding quarterly data are available; a "nowcast" method can be used to estimate these months. The literature on mixed-frequency models provides solutions for both problems, sometimes by dealing with them simultaneously. In this article we combine different benchmarking and nowcasting models and evaluate the combinations. Our evaluation distinguishes between relatively stable periods and periods during and after a crisis, because different approaches might be optimal under these two conditions. We find that during stable periods the so-called Bridge models perform slightly better than the alternatives considered. Until about fifteen months after a crisis, the models that rely more heavily on historical patterns, such as the Bridge, MIDAS, and structural time series models, are outperformed by more straightforward (S)ARIMA approaches.
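The "benchmarking" step this abstract refers to can be illustrated with the simplest possible variant: pro-rata scaling of a preliminary monthly indicator to the observed quarterly totals. This is a toy sketch with invented numbers, not one of the models compared in the article:

```python
def prorata_benchmark(monthly_prelim, quarterly_totals):
    """Scale each quarter's three preliminary monthly values so that they
    sum to the published quarterly total. Pro-rata benchmarking is the
    simplest approach; production methods (e.g., Denton-type) also smooth
    the jump in scaling factors between adjacent quarters."""
    benchmarked = []
    for q, total in enumerate(quarterly_totals):
        months = monthly_prelim[3 * q: 3 * q + 3]
        factor = total / sum(months)
        benchmarked.extend(m * factor for m in months)
    return benchmarked

# Preliminary monthly indicator for two quarters, and the two (illustrative)
# VAT-based quarterly totals it must be made consistent with.
bench = prorata_benchmark([10.0, 11.0, 12.0, 12.0, 12.0, 12.0], [66.0, 30.0])
```

After scaling, each quarter's three months reproduce the quarterly total exactly while preserving the preliminary within-quarter profile.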
Title: Estimating Intra-Regional Inequality with an Application to German Spatial Planning Regions
Authors: Marina Runge
DOI: https://doi.org/10.2478/jos-2023-0010
Journal of Official Statistics, published 2023-06-01

Abstract: Income inequality is a persistent topic of public and political debate, and the focus often shifts from the national level to a more detailed geographical level, where inequality between or within local communities can be assessed. In this article, inequality within regions, that is, between households, is estimated at a regionally disaggregated level. Methodologically, a small area estimation of the Gini coefficient is carried out using an area-level model linking survey data with related administrative data. Specifically, the Fay-Herriot model is applied using a logit transformation followed by a bias-corrected back-transformation. The uncertainty of the point estimate is assessed with a parametric bootstrap procedure that estimates the mean squared error. The validity of the methodology is shown in a model-based simulation for both the point estimator and the uncertainty measure. The proposed methodology is illustrated by estimating model-based Gini coefficients for spatial planning regions in Germany, using survey data from the Socio-Economic Panel and aggregate data from the 2011 Census. The results show that intra-regional inequality is more diverse than a comparison between East and West alone suggests.
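The target parameter of this article, the Gini coefficient, and the logit transformation applied to the direct estimates before area-level modelling can both be sketched in a few lines. This is a generic illustration using the standard rank formula, not the article's estimator:

```python
import math

def gini(incomes):
    """Gini coefficient via the rank formula
    G = 2 * sum_i i * y_(i) / (n * sum_i y_i) - (n + 1) / n,
    where y_(1) <= ... <= y_(n) are the sorted incomes."""
    y = sorted(incomes)
    n = len(y)
    total = sum(y)
    rank_sum = sum(i * yi for i, yi in enumerate(y, start=1))
    return 2.0 * rank_sum / (n * total) - (n + 1.0) / n

def logit(g):
    """Transformation applied before fitting the area-level model; results
    are mapped back with a bias-corrected inverse transformation."""
    return math.log(g / (1.0 - g))

g = gini([12000, 25000, 31000, 58000, 120000])  # illustrative household incomes
z = logit(g)  # modelled on the unbounded logit scale
```

The logit step keeps the modelled quantity unbounded, which is why a bias-corrected back-transformation is needed to return to the (0, 1) Gini scale.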
Title: Adjusting for Selection Bias in Nonprobability Samples by Empirical Likelihood Approach
Authors: Daniela Marella
DOI: https://doi.org/10.2478/jos-2023-0008
Journal of Official Statistics, published 2023-06-01

Abstract: Large amounts of data are available today that are easier and faster to collect than survey data, bringing new challenges. One of them is the nonprobability nature of these big data, which may not represent the target population properly and hence may result in highly biased estimators. In this article two approaches for dealing with selection bias when the selection process is nonignorable are discussed. The first, based on the empirical likelihood, does not require a parametric specification of the population model, but the probability of being in the nonprobability sample needs to be modeled. Auxiliary information known for the population, or estimable from a probability sample, can be incorporated as calibration constraints, thus enhancing the precision of the estimators. The second is a mixed approach based on mass imputation and propensity score adjustment, which requires that big data membership is known throughout a probability sample. Finally, two simulation experiments and an application to income data evaluate the performance of the proposed estimators in terms of robustness and efficiency.
Title: Constructing Building Price Index Using Administrative Data
Authors: Masahiro Higo, Yumi Saita, C. Shimizu, Yuta Tachi
DOI: https://doi.org/10.2478/jos-2023-0011
Journal of Official Statistics, published 2023-06-01

Abstract: Improving the accuracy of deflators is crucial for measuring real GDP and growth rates. However, construction prices are often difficult to measure. This study uses the stratification and hedonic methods to estimate price indices. The estimated indices are based on the actual transaction prices of buildings (contract prices) obtained from the Statistics on Building Starts, survey information from the administrative sector in Japan. Compared with the construction cost deflator (CCD), calculated by compounding input costs, the estimated output price indices show higher rates of increase during the economic expansion phase after 2013. This suggests that the profit surge in the construction sector observed in that period is not fully reflected in the CCD. Furthermore, the difference between the two "output-type" indices obtained by the stratification and hedonic methods shrinks when the estimation methods are precisely configured.
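Of the two methods this study compares, the stratification (mix-adjustment) method is the easier to sketch: average the transaction prices within each stratum and period, then aggregate across strata with fixed weights so that changes in the mix of buildings cannot move the index. The code below is a toy illustration with invented strata and prices, not the study's configuration:

```python
from collections import defaultdict

def stratified_price_index(records, weights, base_period):
    """records: iterable of (period, stratum, price) transactions.
    weights: fixed stratum weights summing to 1.
    Returns index levels with the base period set to 100."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for period, stratum, price in records:
        sums[(period, stratum)] += price
        counts[(period, stratum)] += 1

    def aggregate(period):
        # fixed-weight average of the stratum mean prices in this period
        return sum(w * sums[(period, s)] / counts[(period, s)]
                   for s, w in weights.items())

    periods = {r[0] for r in records}
    base = aggregate(base_period)
    return {p: 100.0 * aggregate(p) / base for p in sorted(periods)}

index = stratified_price_index(
    [(2013, "wood", 100.0), (2013, "steel", 200.0),
     (2014, "wood", 110.0), (2014, "steel", 220.0)],
    weights={"wood": 0.5, "steel": 0.5},
    base_period=2013,
)
```

Because the stratum weights are held fixed, a shift in transactions toward more expensive building types raises only the measured mix, not the index itself.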
Title: Effects of Changing Modes on Item Nonresponse in Panel Surveys
Authors: O. Lipps, M. Voorpostel, Gian-Andrea Monsch
DOI: https://doi.org/10.2478/jos-2023-0007
Journal of Official Statistics, published 2023-06-01

Abstract: To investigate the effect of a change from the telephone mode to the web mode on item nonresponse in panel surveys, we use experimental data from a two-wave panel survey. The treatment group changed from the telephone to the web mode after the first wave, while the control group continued in the telephone mode. We find that when changing to the web, "don't know" answers increase moderately from a low level, while item refusals increase substantially from a very low level. This holds for all person groups, although socio-demographic characteristics have some additional effects on giving a "don't know" answer or a refusal when changing mode.
Title: A Statistical Comparison of Call Volume Uniformity Due to Mailing Strategy
Authors: Andrew M. Raim, E. Nichols, T. Mathew
DOI: https://doi.org/10.2478/jos-2023-0005
Journal of Official Statistics, published 2023-03-01

Abstract: For operations such as a decennial census, the U.S. Census Bureau sends mail to potential respondents inviting a self-response. It is suspected that the mailing strategy affects the distribution of call volumes to the U.S. Census Bureau's telephone helplines. For staffing purposes, more uniform call volumes throughout the week are desirable. In this work, we formulate tests and confidence intervals to compare the uniformity of call volumes resulting from competing mailing strategies. Regarding the data as multinomial observations, we compare pairs of call volume observations to determine whether one mailing strategy has multinomial cell probabilities closer to the uniform probability vector than another strategy. A motivating illustration is provided by call volume data recorded in three studies carried out in advance of the 2020 Decennial Census.
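One simple way to operationalise "closer to the uniform probability vector" is a distance between the observed cell proportions and the uniform vector; the article builds formal tests and confidence intervals around such comparisons. A minimal sketch using Euclidean distance and invented daily call counts:

```python
import math

def distance_to_uniform(counts):
    """Euclidean distance between the multinomial cell proportions implied
    by `counts` and the uniform vector (1/k, ..., 1/k). Zero means call
    volumes were perfectly even across the k days."""
    n = sum(counts)
    k = len(counts)
    return math.sqrt(sum((c / n - 1.0 / k) ** 2 for c in counts))

# Hypothetical Monday-Friday helpline call volumes under two mailing strategies.
strategy_a = [120, 95, 90, 100, 95]  # fairly even across the week
strategy_b = [300, 60, 50, 45, 45]   # concentrated just after the mailout
```

A strategy whose distance is smaller spreads calls more evenly, which is the property the article's staffing motivation favours; the formal comparison additionally accounts for sampling variability in the observed counts.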
Title: A Two-Stage Bennet Decomposition of the Change in the Weighted Arithmetic Mean
Authors: Thomas von Brasch, Håkon S. Grini, Magnus Berglund Johnsen, T. Vigtel
DOI: https://doi.org/10.2478/jos-2023-0006
Journal of Official Statistics, published 2023-03-01

Abstract: The weighted arithmetic mean is used in a wide variety of applications. An infinite number of possible decompositions of the change in the weighted mean are available, and it is therefore an open question which of them should be applied. In this article, we derive a decomposition of the change in the weighted mean based on a two-stage Bennet decomposition. Our proposed decomposition is easy to employ and interpret, and we show that it satisfies the difference counterpart to the index number time reversal test. We illustrate the framework by decomposing aggregate earnings growth from 2020Q4 to 2021Q4 in Norway and compare it with some of the main decompositions proposed in the literature. We find that the wedge between the compositional effects identified by the proposed two-stage Bennet decomposition and by the one-stage Bennet decomposition is substantial, and for some industries, the compositional effects have opposite signs.
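The identity underlying this article can be sketched with the standard one-stage Bennet decomposition, which splits the change in a weighted mean exactly into a compositional (weight) effect and a growth (value) effect; the article's contribution is a two-stage refinement of this. Illustrative numbers below:

```python
def bennet_decomposition(w0, y0, w1, y1):
    """One-stage Bennet decomposition of the change in the weighted mean
    sum_i w_i * y_i between period 0 and period 1:
      composition effect: sum_i 0.5 * (y_i1 + y_i0) * (w_i1 - w_i0)
      growth effect:      sum_i 0.5 * (w_i1 + w_i0) * (y_i1 - y_i0)
    The two effects sum exactly to the change in the weighted mean."""
    comp = sum(0.5 * (ya + yb) * (wa - wb)
               for ya, yb, wa, wb in zip(y1, y0, w1, w0))
    growth = sum(0.5 * (wa + wb) * (ya - yb)
                 for ya, yb, wa, wb in zip(y1, y0, w1, w0))
    return comp, growth

# Two groups of earners: the low-wage group shrinks while both wages rise.
comp, growth = bennet_decomposition(
    w0=[0.5, 0.5], y0=[100.0, 200.0],
    w1=[0.25, 0.75], y1=[110.0, 220.0],
)
change = (0.25 * 110.0 + 0.75 * 220.0) - (0.5 * 100.0 + 0.5 * 200.0)
```

Here the compositional shift toward the high-wage group accounts for more of the change in the mean than wage growth itself, the kind of wedge the article examines at the industry level.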
Title: Using Eye-Tracking Methodology to Study Grid Question Designs in Web Surveys
Authors: C. Neuert, Joss Roßmann, Henning Silber
DOI: https://doi.org/10.2478/jos-2023-0004
Journal of Official Statistics, published 2023-03-01

Abstract: Grid questions are frequently employed in web surveys due to their assumed response efficiency, and many previous studies have found shorter response times for grid questions than for item-by-item formats. Our contribution to this literature is to investigate how altering the question format affects response behavior and the depth of cognitive processing when answering grid and item-by-item formats. To answer these questions, we implemented an experiment with three questions in an eye-tracking study. Each question consisted of a set of ten items which respondents answered either on a single page (large grid), on two pages with five items each (small grid), or on ten separate pages (item-by-item). Overall, we did not find substantial differences in cognitive processing, although processing of the question stem and the response scale labels was significantly greater in the item-by-item design than in the large grid for all three questions. We also found that when answering an item in a grid question, respondents often refer to surrounding items when making a judgement. We discuss the findings and limitations of our study and provide suggestions for practical design decisions.