Pub Date: 2021-08-10. DOI: 10.18148/SRM/2021.V15I2.7773
Roopam Sadh, Rajeev Kumar
The analysis of survey data plays a key role in organizational and behavioral research. Quantitative survey data has several distinct characteristics, such as a small, fixed range of ordinal values and the importance of respondent category labels, which make it a poor fit for existing analysis methods based on aggregate statistics. The literature therefore recommends pattern-based analysis tools instead of aggregate statistics, since patterns reflect respondents’ preferences more informatively and efficiently. We introduce a specialized pattern-based clustering technique for survey data that relies on the direction of responses rather than their magnitude. The method does not require manual setting of clustering parameters; instead, it automatically identifies respondent categories and their representative features through an adaptive procedure. We apply the proposed method to an original academic survey dataset and compare its results with K-Means clustering in terms of interpretability and usability, using stakeholder theory as a benchmark to verify the results. The proposed pattern clustering method segregates survey responses far better according to stakeholder theory, and the clusters it produces are much more meaningful. These results empirically support the view that pattern-based analysis methods are more suitable for analyzing quantitative survey data.
{"title":"Directional Pattern based Clustering for Quantitative Survey Data: Method and Application","authors":"Roopam Sadh, Rajeev Kumar","doi":"10.18148/SRM/2021.V15I2.7773","DOIUrl":"https://doi.org/10.18148/SRM/2021.V15I2.7773","url":null,"abstract":"Analysis of survey data is a matter of significant concern as it plays a key role in organizational and behavioral research. Quantitative survey data possesses several distinct characteristics i.e., fixed small range of ordinal values, importance of respondent category labels etc. Due to such reasons quantitative survey data is not appropriate for existing analysis methods involving aggregate statistics. Literature has advised to utilize pattern based analysis tools instead of aggregate statistics since patterns are more informative and efficient in reflecting respondents’ preferences. Thus, we introduce a specialized pattern based clustering technique for survey data that uses the convention of direction instead of magnitude. Further, it does not require manual setting of clustering parameters whereas it automatically identifies respondent categories and their representative features with the help of an adaptive procedure. We apply proposed method over an original academic survey dataset and compare its results with K-Means clustering method in terms of interpretability and usability. We utilize benchmark stakeholder theory to verify the results. Results suggest that proposed pattern clustering method performs far better in segregating survey responses according to the stakeholder theory and the clusters made by it are much more meaningful. Hence, results empirically validates that pattern based analysis methods are more suitable for analyzing quantitative survey data.","PeriodicalId":46454,"journal":{"name":"Survey Research Methods","volume":"15 1","pages":"169-185"},"PeriodicalIF":4.8,"publicationDate":"2021-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44166497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-10. DOI: 10.18148/SRM/2021.V15I2.7809
Benjamin Küfner, J. Sakshaug, Stefan Zins
The IAB Job Vacancy Survey of the German Institute for Employment Research collects detailed information on job search and vacancy durations for an establishment’s last successful hiring process. The duration questions themselves are burdensome for respondents to answer as they ask for precise dates of the earliest possible hiring for the vacancy, the start of the personnel search, and the decision to hire the selected applicant. Consequently, the nonresponse rates for these items have been relatively high over the years (up to 21 percent). In an effort to reduce item nonresponse, a split-ballot experiment was conducted to test the strategy of providing additional clarifying information and examples to assist respondents in answering the date questions. The results revealed a backfiring effect. Although there was evidence that respondents read the additional clarifying information, this led to even more item nonresponse and lower data quality compared to the control group. Additionally, we observed a negative spillover effect with regard to item nonresponse on a subsequent (non-treated) question. We conclude this article by discussing possible causes of these results and suggestions for further research.
{"title":"More Clarification, Less Item Nonresponse in Establishment Surveys? A Split-Ballot Experiment","authors":"Benjamin Küfner, J. Sakshaug, Stefan Zins","doi":"10.18148/SRM/2021.V15I2.7809","DOIUrl":"https://doi.org/10.18148/SRM/2021.V15I2.7809","url":null,"abstract":"The IAB Job Vacancy Survey of the German Institute for Employment Research collects detailed information on job search and vacancy durations for an establishment’s last successful hiring process. The duration questions themselves are burdensome for respondents to answer as they ask for precise dates of the earliest possible hiring for the vacancy, the start of the personnel search, and the decision to hire the selected applicant. Consequently, the nonresponse rates for these items have been relatively high over the years (up to 21 percent). In an effort to reduce item nonresponse, a split-ballot experiment was conducted to test the strategy of providing additional clarifying information and examples to assist respondents in answering the date questions. The results revealed a backfiring effect. Although there was evidence that respondents read the additional clarifying information, this led to even more item nonresponse and lower data quality compared to the control group. Additionally, we observed a negative spillover effect with regard to item nonresponse on a subsequent (non-treated) question. We conclude this article by discussing possible causes of these results and suggestions for further research.","PeriodicalId":46454,"journal":{"name":"Survey Research Methods","volume":"15 1","pages":"195-206"},"PeriodicalIF":4.8,"publicationDate":"2021-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47805076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-10. DOI: 10.18148/SRM/2021.V15I2.7800
Brittany U Carter, James Bennett, Elric Sims
Survey duration—the time it takes to complete a survey—affects response and completion rates. Equations for estimating survey duration exist; however, there are no studies assessing their use. The objective of this study is to evaluate estimated survey duration equations using a health risk assessment. Six existing estimated survey duration equations were identified. Using health risk assessment data from January 1, 2018 to December 31, 2018, an average participant profile was built to inform the inputs to the estimated survey duration equations. The estimated survey duration of the health risk assessment ranged from 7.64 minutes to 39.6 minutes. Using the same dataset, the estimated survey durations were compared to the actual completion time of the health risk assessment, which averaged 11.27 minutes. The estimated survey duration equations either under- or overestimated the completion time of the health risk assessment. The equation based on word count, number of questions, decisions, and open text boxes is recommended for estimating the duration of a health risk assessment, although it produced an overestimate. Estimated survey duration equations appear to be a suitable alternative to pilot testing, but future studies are needed to further evaluate these equations in other types of surveys.
{"title":"Evaluation of Estimated Survey Duration Equations Using a Health Risk Assessment","authors":"Brittany U Carter, James Bennett, Elric Sims","doi":"10.18148/SRM/2021.V15I2.7800","DOIUrl":"https://doi.org/10.18148/SRM/2021.V15I2.7800","url":null,"abstract":"Survey duration—the time it takes to complete a survey—affects response and completion rates. Estimated survey duration equations may be used to estimate survey duration, however, there are no studies assessing their use. The objective of this study is to evaluate estimated survey duration equations using a health risk assessment. Six existing estimated survey duration equations were identified. Using health risk assessment data from January 1, 2018 to December 31, 2018, an average participant profile was built to inform the inputs into the estimated survey duration equations. Estimated survey duration of the health risk assessment ranged from 7.64 minutes to 39.6 minutes. Using the same dataset, the estimated survey duration was compared to the actual completion time of the health risk assessment. The average completion time of the health risk assessment was 11.27 minutes. The estimated survey duration equations either under- or overestimated the completion time of the health risk assessment. The equation that is based on word count, number of questions, decisions, and open text boxes is recommended for use to estimate the duration of a health risk assessment although it was an overestimate. Using estimated survey duration equations appear to be a suitable alternative to pilot testing but future studies are needed to further evaluate these equations in other types of surveys.","PeriodicalId":46454,"journal":{"name":"Survey Research Methods","volume":"15 1","pages":"187-194"},"PeriodicalIF":4.8,"publicationDate":"2021-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46128605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-10. DOI: 10.18148/SRM/2021.V15I2.7640
Paulina Pankowska, B. Bakker, Daniel L. Oberski, D. Pavlopoulos
Longitudinal surveys often rely on dependent interviewing (DI) to lower the levels of random measurement error in survey data and reduce the incidence of spurious change. DI refers to a data collection technique that incorporates information from prior interview rounds into subsequent waves. While this method is considered an effective remedy for random measurement error, it can also introduce more systematic errors, in particular when respondents are first reminded of their previously provided answer and then asked about their current status. The aim of this paper is to assess the impact of DI on measurement error in employment mobility. We take advantage of a unique experimental situation that was created by the roll-out of dependent interviewing in the Dutch Labour Force Survey (LFS). We apply Hidden Markov Modeling (HMM) to linked LFS and Employment Register (ER) data that cover a period before and after dependent interviewing was abolished, which in turn enables the modeling of systematic errors in the LFS data. Our results indicate that DI lowered the probability of obtaining random measurement error but had no significant effect on the systematic component of the error. The lack of a significant effect might be partially due to the fact that the probability of repeating the same error was extremely high at baseline (i.e., when using standard, independent interviewing); therefore the use of DI could not increase this probability any further.
{"title":"Dependent interviewing: a remedy or a curse for measurement error in surveys?","authors":"Paulina Pankowska, B. Bakker, Daniel L. Oberski, D. Pavlopoulos","doi":"10.18148/SRM/2021.V15I2.7640","DOIUrl":"https://doi.org/10.18148/SRM/2021.V15I2.7640","url":null,"abstract":"Longitudinal surveys often rely on dependent interviewing (DI) to lower thelevels of random measurement error in survey data and reduce the incidenceof spurious change. DI refers to a data collection technique that incorporatesinformation from prior interview rounds into subsequent waves. While thismethod is considered an e\u000bective remedy for random measurement error,it can also introduce more systematic errors, in particular when respondentsare rst reminded of their previously provided answer and then askedabout their current status. The aim of this paper is to assess the impactof DI on measurement error in employment mobility. We take advantageof a unique experimental situation that was created by the roll-out of dependentinterviewing in the Dutch Labour Force Survey (LFS). We applyHidden Markov Modeling (HMM) to linked LFS and Employment Register(ER) data that cover a period before and after dependent interviewing wasabolished, which in turn enables the modeling of systematic errors in theLFS data. Our results indicate that DI lowered the probability of obtainingrandom measurement error but had no signi cant e\u000bect on the systematiccomponent of the error. The lack of a signi cant e\u000bect might be partiallydue to the fact that the probability of repeating the same error was extremelyhigh at baseline (i.e when using standard, independent interviewing);therefore the use of DI could not increase this probability any further.","PeriodicalId":46454,"journal":{"name":"Survey Research Methods","volume":"15 1","pages":"135-146"},"PeriodicalIF":4.8,"publicationDate":"2021-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43192782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-10. DOI: 10.18148/SRM/2021.V15I2.7457
Gianmaria Bottoni, R. Fitzgerald
A number of countries in Europe and beyond have established online or mixed-mode panels with a web component based upon probability samples. This paper assesses the first ever attempt to do this cross-nationally using an input-harmonised approach, a major innovation in cross-national survey methodology. The European Social Survey (ESS) established a panel using the face-to-face interview as the basis for recruitment. The experiment was conducted in Estonia, Slovenia and Great Britain with an input-harmonised design throughout, something never done before across multiple countries simultaneously. The paper outlines how the experiment was conducted. It then compares the web panel respondents to the achieved ESS face-to-face sample in each country, as well as comparing the achieved web panel sample to external benchmarks. Most importantly, since the literature is very scarce, differences in attitudinal and behavioural characteristics are also assessed. By comparing the answers of the total achieved ESS sample to the subset who also answered the CRONOS web panel, we assess changes in representativeness and substantive answers without confounding the findings with other changes such as mode effects. This approach is only possible where ‘piggybacking’ recruitment has been used, which is rare even at the national level; this is the first time survey methodologists have employed it cross-nationally, allowing such an analytical approach. Our findings suggest that the CRONOS sample is not too divergent from the target population or from the ESS, with the exception of the oldest age groups. However, there are cross-national differences, suggesting that cross-national comparability might differ from that of estimates from a face-to-face survey.
{"title":"Establishing a Baseline: Bringing Innovation to the Evaluation of Cross-National Probability-Based Online Panels","authors":"Gianmaria Bottoni, R. Fitzgerald","doi":"10.18148/SRM/2021.V15I2.7457","DOIUrl":"https://doi.org/10.18148/SRM/2021.V15I2.7457","url":null,"abstract":"A number of countries in Europe and beyond have established online or mixed mode panels with a web component based upon probability samples. This paper aims to assess the first ever attempt to do this cross-nationally using an input harmonised approach representing a major innovation in cross-national survey methodology. The European Social Survey established a panel using the face-to-face interview as the basis for recruitment to the panel. This experiment was conducted in Estonia, Slovenia and Great Britain using an input harmonised approach to the design throughout something never done before across multiple countries simultaneously. \u0000The paper outlines how the experiment was conducted. It then moves on to compare the web panel respondents to the ESS achieved face-to-face sample in each country, as well as comparing the web panel achieved sample to external benchmarks. Most importantly, since the literature is very scarce, the differences in attitudinal and behavioural characteristics are also assessed. By comparing the answers of the total achieved sample in the ESS to the subset who also answered the CRONOS web panel we assess changes in representativeness and substantive answers without confounding the findings with other changes such as mode effects. This approach is only possible where ‘piggybacking’ recruitment has been used. This in itself is rare at the national level but this is the first time survey methodologists have employed this cross-nationally allowing such an analytical approach. Our findings suggest that the CRONOS sample is not too divergent from the target population and to the ESS with the exception of the oldest age groups. However, there are cross-national differences suggesting cross-national comparability might be different when compared to estimates from a face-to-face survey.","PeriodicalId":46454,"journal":{"name":"Survey Research Methods","volume":"15 1","pages":"115-133"},"PeriodicalIF":4.8,"publicationDate":"2021-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46476824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-04-10. DOI: 10.18148/SRM/2021.V15I1.7656
Angelica M. Maineri, Vera Lomazzi, R. Luijkx
The measurement of gender role attitudes has been found to be problematic in previous studies, especially in comparative perspective. The present study adopts a novel approach and investigates the position of the gender role attitudes scale in the questionnaire as a potential source of bias. In particular, it assesses the context effect of the family norms question on the measurement of gender role attitudes, adopting the theoretical perspective of the construal model of attitudes, according to which adjacent questions constitute the context for interpreting and answering a stimulus. The study employs data from the CROss-National Online Survey panel, fielded in 2017, which contained an experiment in which the order of the questions under investigation varied. The reliability, validity and invariance of the measurement of gender role attitudes across experimental settings and countries (Estonia, Great Britain and Slovenia) are explored using several analytical techniques, such as regression models and multiple-group confirmatory factor analysis. Differences between experimental settings emerged, suggesting that the questionnaire context matters for the validity and stability of the gender role attitudes items; however, the lack of a consistent pattern prevents general conclusions about which question order yields better measurement of the gender role attitudes scale. Clear differences among the countries indicate that the cultural context may interact with the question context. Finally, we stress that the measurement is poor overall, which calls for a better formulation of the items measuring gender role attitudes.
{"title":"Studying the Context Effect of Family Norms on Gender Role Attitudes: an Experimental Design","authors":"Angelica M. Maineri, Vera Lomazzi, R. Luijkx","doi":"10.18148/SRM/2021.V15I1.7656","DOIUrl":"https://doi.org/10.18148/SRM/2021.V15I1.7656","url":null,"abstract":"The measurement of gender role attitudes has been found to be problematic in previous studies, especially in comparative perspective. The present study adopts a novel approach and investigates the position of the gender role attitudes scale in the questionnaire as a potential source of bias. In particular, the present study aims at assessing the context effect of the family norms question on the measurement of gender role attitudes by adopting the theoretical perspective of the construal model of attitudes, according to which the adjacent questions constitute the context for interpreting and answering a stimulus. The study employs data from the CROss-National Online Survey panel, which was fielded in 2017 and contained an experiment where the order of the questions under investigation varied. The reliability, validity and invariance of the measurement of gender role attitudes across experimental settings and countries (Estonia, Great Britain and Slovenia) are explored adopting several analytical techniques, such as regression models and multiple-group confirmatory factor analysis. Differences between experimental settings emerged, suggesting that the questionnaire context matters for the validity and stability of the gender role attitudes items; however, the lack of patterns hinders general conclusions on what is the order of questions yielding better measurement of the gender role attitudes scale. Clear differences among the countries indicate that the cultural context may interact with the question context. Finally, we stress that the measurement is overall poor, urging to find a better formulation of the items measuring gender role attitudes.","PeriodicalId":46454,"journal":{"name":"Survey Research Methods","volume":"15 1","pages":"43-64"},"PeriodicalIF":4.8,"publicationDate":"2021-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42544899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-04-10. DOI: 10.18148/SRM/2021.V15I1.7727
Tobias Gummer, Ruben L. Bach, Jessica Daikeler, S. Eckman
Response probabilities are used in adaptive and responsive survey designs to guide data collection efforts, often with the goal of diversifying the sample composition. However, if response probabilities are also correlated with measurement error, this approach could introduce bias into survey data. This study analyzes the relationship between response probabilities and data quality in grid questions. Drawing on data from the probability-based GESIS panel, we found that low-propensity cases produced item nonresponse and nondifferentiated answers more frequently than high-propensity cases. However, this effect was observed only among long-time respondents, not among those who joined more recently. We caution that using adaptive or responsive techniques may increase measurement error even as it reduces the risk of nonresponse bias.
{"title":"The Relationship Between Response Probabilities and Data Quality in Grid Questions","authors":"Tobias Gummer, Ruben L. Bach, Jessica Daikeler, S. Eckman","doi":"10.18148/SRM/2021.V15I1.7727","DOIUrl":"https://doi.org/10.18148/SRM/2021.V15I1.7727","url":null,"abstract":"Response probabilities are used in adaptive and responsive survey designs to guide data collection efforts, often with the goal of diversifying the sample composition. However, if response probabilities are also correlated with measurement error, this approach could introduce bias into survey data. This study analyzes the relationship between response probabilities and data quality in grid questions. Drawing on data from the probability-based GESIS panel, we found low propensity cases to more frequently produce item nonresponse and nondifferentiated answers than high propensity cases. However, this effect was observed only among long-time respondents, not among those who joined more recently. We caution that using adaptive or responsive techniques may increase measurement error while reducing the risk of nonresponse bias.","PeriodicalId":46454,"journal":{"name":"Survey Research Methods","volume":"15 1","pages":"65-77"},"PeriodicalIF":4.8,"publicationDate":"2021-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42443442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-04-10. DOI: 10.18148/SRM/2021.V15I1.7594
Lionel Marquis
In this article, I consider the problem of "cheating" in political knowledge tests. This problem has been made more pressing by the transition of many surveys to online interviewing, which opens up the possibility of looking up the correct answers on the internet. Several methods have been proposed to deal with cheating ex ante, including self-reports of cheating, controls for internet browsing, or time limits. Against this background, "response times" (RTs, i.e., the time respondents take to answer a survey question) suggest themselves as a post-hoc, unobtrusive means of detecting cheating. In this paper, I propose a procedure for measuring individual-specific and item-specific RTs, which are then used to identify unusually long but correct answers to knowledge questions as potential cases of cheating. I apply this procedure to the post-electoral survey for the 2015 Swiss national elections. My analysis suggests that extremely slow responses to two out of four questions are definitely suspicious. Accordingly, I propose a method for "correcting" individual knowledge scores and examine its convergent and predictive validity. Based on the finding that a simple revised scale of political knowledge has greater validity than the original additive scale, I conclude that the problem of cheating can be alleviated by the RT method, which is summarized in the conclusion to facilitate its application in empirical research.
{"title":"Using Response Times to Enhance the Reliability of Political Knowledge Items: An Application to the 2015 Swiss Post-Election Survey","authors":"Lionel Marquis","doi":"10.18148/SRM/2021.V15I1.7594","DOIUrl":"https://doi.org/10.18148/SRM/2021.V15I1.7594","url":null,"abstract":"In this article, I consider the problem of \"cheating\" in political knowledge tests. This problem has been made more pressing by the transition of many surveys to online interviewing, opening up the possibility of looking up the correct answers on the internet. Several methods have been proposed to deal with cheating ex-ante, including self-reports of cheating, control for internet browsing, or time limits. Against this background, “response times” (RTs, i.e., the time taken by respondents to answer a survey question) suggest themselves as a post-hoc, unobtrusive means of detecting cheating. In this paper, I propose a procedure for measuring individual-specific and item-specific RTs, which are then used to identify unusually long but correct answers to knowledge questions as potential cases of cheating. I apply this procedure to the post-electoral survey for the 2015 Swiss national elections. My analysis suggests that extremely slow responses to two out of four questions are definitely suspicious. Accordingly, I propose a method for “correcting” individual knowledge scores and examine its convergent and predictive validity. Based on the finding that a simple revised scale of political knowledge has greater validity than the original additive scale, I conclude that the problem of cheating can be alleviated by using the RT method, which is again summarized in the conclusion to ensure its applicability in empirical research.","PeriodicalId":46454,"journal":{"name":"Survey Research Methods","volume":"15 1","pages":"79-100"},"PeriodicalIF":4.8,"publicationDate":"2021-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48109137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-04-10. DOI: 10.18148/SRM/2021.V15I1.7725
T. Jonge, Akiko Kamesaka, R. Veenhoven
Many trend studies draw on survey data and compare responses to questions on the same topic that have been asked over time. A problem with such studies is that the questions often do not remain identical, due to changes in phrasing and response formats. We present ways to deal with this problem, using trend data on life satisfaction in Japan as an illustrative case. Life satisfaction has been measured in the Life in Nation survey in Japan since 1958, and the question used has been changed several times. We examine three methods published by scholars who tried to reconstruct a main trend in life satisfaction from these broken time series and who came to different conclusions. In this paper we discuss their methods and present two new techniques for dealing with changes in survey questions on the same topic.
{"title":"How to Reconstruct a Trend when Survey Questions Have Changed Over Time.","authors":"T. Jonge, Akiko Kamesaka, R. Veenhoven","doi":"10.18148/SRM/2021.V15I1.7725","DOIUrl":"https://doi.org/10.18148/SRM/2021.V15I1.7725","url":null,"abstract":"Many trend studies draw on survey data and compare responses to questions on the same topic that has been asked over time. A problem with such studies is that the questions often do not remain identical, due to changes in phrasing and response formats. We present ways to deal with this problem using trend data on life satisfaction in Japan as an illustrative case. Life satisfaction has been measured in the Life in Nation survey in Japan since 1958 and the question used has been changed several times. We looked at three methods published by scholars who tried to reconstruct a main trend in life satisfaction from these broken time-series, coming to different conclusions. In this paper we discuss their methods and present two new techniques for dealing with changes in survey questions on the same topic.","PeriodicalId":46454,"journal":{"name":"Survey Research Methods","volume":"15 1","pages":"101-113"},"PeriodicalIF":4.8,"publicationDate":"2021-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42254915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-04-10. DOI: 10.18148/SRM/2021.V15I1.7782
R. Becker
This empirical study examines whether the weather conditions during the different seasons in which panel surveys are carried out affect the timing and extent of survey participation. Based on considerations of the panellists’ habits and their assessment of the benefits and costs of participation relative to alternative activities, it is assumed that ‘pleasant’ weather diverts them from completing the questionnaire immediately, while ‘unpleasant’ weather results in a higher degree of participation right after the survey launch. The results of event history analysis based on longitudinal data from a multi-wave panel confirm these assumptions. Additionally, there appears to be an interaction between the season and the weather situation: ‘pleasant’ weather in spring results in a lower participation rate than in summer surveys, while, given the same weather situation, the participation rate is higher in autumn. Finally, it is evident that regardless of the season, heavy rainfall at the beginning of the field period is most beneficial for conducting an online survey in terms of both rapid response and high participation rates.
{"title":"Have You Ever Seen the Rain? It Looks Like It's Going to Rain!","authors":"R. Becker","doi":"10.18148/SRM/2021.V15I1.7782","DOIUrl":"https://doi.org/10.18148/SRM/2021.V15I1.7782","url":null,"abstract":"This empirical study examines, whether the weather situations during the different seasons in which panel surveys are carried out have an impact on the timing and extent of survey participation. Based on considerations regarding the panellists’ habits and their assessment of a participation's benefits and costs compared to alternative action, it is assumed that ‘pleasant’ weather diverts them from immediately completing the questionnaire while ‘unpleasant’ weather results in a higher degree of participation right after survey launch. The results of event history analysis based on longitudinal data from a multi-wave panel confirm these assumptions. Additionally, there seems to be an interaction between the season and the weather situation: ‘Pleasant’ weather in spring results in a lower participation rate compared to surveys in summer while, given the same weather situation, the participation rate is higher in autumn. Finally, it is evident that regardless of the season, heavy rainfall at the beginning of the field period is most beneficial for conducting an online survey in terms of both rapid response and high participation rates.","PeriodicalId":46454,"journal":{"name":"Survey Research Methods","volume":"15 1","pages":"27-41"},"PeriodicalIF":4.8,"publicationDate":"2021-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48920501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}