Pub Date: 2022-05-01 | Epub Date: 2019-11-20 | DOI: 10.1177/0049124119882479
Guangyu Tong, Guang Guo
Meta-analysis is a statistical method that combines quantitative findings from previous studies. It has been increasingly used to obtain more credible results in a wide range of scientific fields. Combining the results of relevant studies allows researchers to leverage study similarities while modeling potential sources of between-study heterogeneity. This paper provides a review of the core methodologies of meta-analysis that we consider most relevant to sociological research. After developing the foundation of the fixed-effects and random-effects models of meta-analysis, this paper illustrates the utility of the method with regression coefficients reported from two sets of social science studies. We explain the various steps of the process, including constructing the meta-sample from primary studies; estimating the fixed- and random-effects models; analyzing the sources of heterogeneity across studies; and assessing publication bias. We conclude with a discussion of steps that could be taken to strengthen the development of meta-analysis in sociological research, which will eventually increase the credibility of sociological inquiry via a knowledge-cumulative process.
"Meta-Analysis in Sociological Research: Power and Heterogeneity." Sociological Methods & Research 51(2): 566-604.
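The fixed- and random-effects pooling described in this abstract can be sketched compactly. Below is a minimal, illustrative Python implementation of inverse-variance pooling with the DerSimonian-Laird estimator of between-study variance (a standard estimator, not necessarily the one used in the paper); the effect sizes and sampling variances are invented.

```python
# Minimal sketch of fixed- and random-effects pooling with the
# DerSimonian-Laird estimator; the effect sizes and sampling
# variances below are invented for illustration.

def fixed_effect(effects, variances):
    """Inverse-variance weighted pooled estimate and the weights used."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    return pooled, weights

def random_effects(effects, variances):
    """DerSimonian-Laird: estimate tau^2 from Cochran's Q, then re-pool."""
    pooled_fe, w = fixed_effect(effects, variances)
    k = len(effects)
    q = sum(wi * (y - pooled_fe) ** 2 for wi, y in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)  # between-study variance, truncated at 0
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled_re = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    return pooled_re, tau2

# Homogeneous toy studies: Q <= k - 1, so tau^2 = 0 and both models agree.
effects, variances = [0.25, 0.50, 0.75], [0.25, 0.25, 0.25]
```

With heterogeneous effects (e.g. `random_effects([0.0, 1.0], [0.25, 0.25])`) the estimated tau^2 becomes positive and the random-effects weights flatten toward equality, which is the mechanism the paper's review builds on.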
Pub Date: 2022-04-21 | DOI: 10.1177/00491241221091755
B. Meuleman, Tomasz Żółtak, A. Pokropek, E. Davidov, B. Muthén, Daniel L. Oberski, J. Billiet, Peter Schmidt
Welzel et al. (2021) claim that non-invariance of instruments is inconclusive and inconsequential in the field of cross-cultural value measurement. In this response, we contend that several key arguments on which Welzel et al. (2021) base their critique of invariance testing are conceptually and statistically incorrect. First, Welzel et al. (2021) claim that value measurement follows a formative rather than reflective logic. Yet they do not provide sufficient theoretical arguments for this conceptualization, nor do they discuss the disadvantages of this approach for the validation of instruments. Second, their claim that strong inter-item correlations cannot be retrieved when means are close to the endpoint of scales ignores the existence of factor-analytic approaches for ordered-categorical indicators. Third, Welzel et al. (2021) propose that rather than relying on invariance tests, comparability can be assessed by studying the connection with theoretically related constructs. However, their proposal ignores that external validation through nomological linkages hinges on the assumption of comparability. By means of two examples, we illustrate that violating the assumptions of measurement invariance can distort conclusions substantially. Following the advice of Welzel et al. (2021) implies discarding a tool that has proven to be very useful for comparativists.
"Why Measurement Invariance is Important in Comparative Research. A Response to Welzel et al. (2021)." Sociological Methods & Research 52(1): 1401-1419.
Pub Date: 2022-04-07 | DOI: 10.1177/00491241221091754
C. Welzel, S. Kruse, Lennart Brunkert
Our original 2021 SMR article “Non-Invariance? An Overstated Problem with Misconceived Causes” disputes the conclusiveness of non-invariance diagnostics in diverse cross-cultural settings. Our critique targets the increasingly fashionable use of Multi-Group Confirmatory Factor Analysis (MGCFA), especially in its mainstream version. We document—both by mathematical proof and an empirical illustration—that non-invariance is an arithmetic artifact of group mean disparity on closed-ended scales. Precisely this artifactualness renders standard non-invariance markers inconclusive of measurement inequivalence under group-mean diversity. Using the Emancipative Values Index (EVI), OA-Section 3 of our original article demonstrates that such artifactual non-invariance is inconsequential for multi-item constructs’ cross-cultural performance in nomological terms, that is, explanatory power and predictive quality. Given these limitations of standard non-invariance diagnostics, we challenge the unquestioned authority of invariance tests as a tool of measurement validation. Our critique provoked two teams of authors to launch counter-critiques. We are grateful for the two comments because they give us a welcome opportunity to restate our position with greater clarity. Before addressing the comments one by one, we reformulate our key propositions more succinctly.
"Against the Mainstream: On the Limitations of Non-Invariance Diagnostics: Response to Fischer et al. and Meuleman et al." Sociological Methods & Research 52(1): 1438-1455.
Pub Date: 2022-02-14 | DOI: 10.1177/00491241211067508
L. Vila‐Henninger, C. Dupuy, Virginie Van Ingelgom, M. Caprioli, Ferdinand Teuber, Damien Pennetreau, Margherita Bussi, Cal Le Gall
Qualitative secondary analysis has generated heated debate regarding the epistemology of qualitative research. We argue that shifting to an abductive approach provides a fruitful avenue for qualitative secondary analysts who are oriented towards theory-building. However, the concrete implementation of abduction remains underdeveloped—especially for coding. We address this key gap by outlining a set of tactics for abductive analysis that can be applied in qualitative analysis. Our approach applies Timmermans and Tavory's three stages of abduction (Timmermans and Tavory 2012; Tavory and Timmermans 2014) in three steps for qualitative (secondary) analysis: Generating an Abductive Codebook, Abductive Data Reduction through Code Equations, and In-Depth Abductive Qualitative Analysis. A key contribution of our article is the development of “code equations”—defined as the combination of codes to operationalize phenomena that span individual codes. Code equations are an important resource for abduction and other qualitative approaches that leverage qualitative data to build theory.
"Abductive Coding: Theory Building and Qualitative (Re)Analysis." Sociological Methods & Research.
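The idea of a code equation (combining codes to operationalize a phenomenon that spans individual codes) can be illustrated with a minimal sketch; the segment IDs, code names, and the equation itself are hypothetical, not drawn from the article.

```python
# Hypothetical sketch of a "code equation": a boolean combination of
# qualitative codes evaluated over coded segments during data reduction.
# Segment IDs and code names are invented, not from the paper.

segments = {
    "interview1_p4": {"economic_anxiety", "distrust_elites"},
    "interview2_p9": {"economic_anxiety"},
    "interview3_p2": {"distrust_elites", "eu_support"},
}

def disaffection_equation(codes):
    """Equation: economic_anxiety AND distrust_elites AND NOT eu_support."""
    return ("economic_anxiety" in codes
            and "distrust_elites" in codes
            and "eu_support" not in codes)

# Abductive data reduction: retain only segments satisfying the equation.
matches = sorted(sid for sid, codes in segments.items()
                 if disaffection_equation(codes))
```

Representing segment codes as sets keeps the equations composable: any boolean combination of membership tests can serve as a reduction filter before the in-depth analysis stage.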
Pub Date: 2022-02-09 | DOI: 10.1177/00491241221077239
A. Remizova, M. Rudnev, E. Davidov
Individual religiosity measures are used by researchers to describe and compare individuals and societies. However, the cross-cultural comparability of the measures has often been questioned but rarely empirically tested. In the current study, we examined the cross-national measurement invariance properties of generalized individual religiosity in the sixth wave of the World Values Survey. For the analysis, we used multiple group confirmatory factor analysis and alignment. Our results demonstrated that a theoretically driven measurement model was not invariant across all countries. We suggested four unidimensional measurement models and four overlapping groups of countries in which these measurement models demonstrated approximate invariance. The indicators that covered praying practices, importance of religion, and confidence in its institutions were more cross-nationally invariant than other indicators.
"In Search of a Comparable Measure of Generalized Individual Religiosity in the World Values Survey." Sociological Methods & Research.
Pub Date: 2022-02-08 | DOI: 10.1177/00491241221077241
Katharina Meitinger, Tanja Kunz
Previous research reveals that the visual design of open-ended questions should match the response task so that respondents can infer the expected response format. Based on a web survey including specific probes in a list-style open-ended question format, we experimentally tested the effects of varying numbers of answer boxes on several indicators of response quality. Our results showed that using multiple small answer boxes instead of one large box had a positive impact on the number and variety of themes mentioned, as well as on the conciseness of responses to specific probes. We found no effect on the relevance of themes and the risk of item non-response. Based on our findings, we recommend using multiple small answer boxes instead of one large box to convey the expected response format and improve response quality in specific probes. This study makes a valuable contribution to the field of web probing, extends the concept of response quality in list-style open-ended questions, and provides a deeper understanding of how visual design features affect cognitive response processes in web surveys.
"Visual Design and Cognition in List-Style Open-Ended Questions in Web Probing." Sociological Methods & Research.
Pub Date: 2022-02-08 | DOI: 10.1177/00491241221077238
Qiong Wu, Li-na Gu
Family income questions in general purpose surveys are usually collected with either a single-question summary design or a multiple-question disaggregation design. It is unclear how well estimates from the two approaches agree with each other. The current paper takes advantage of a large-scale survey that collected family income with both methods. With data from 14,222 urban and rural families in the 2018 wave of the nationally representative China Family Panel Studies, we compare the two estimates and further evaluate factors that might contribute to the discrepancy. We find that the two estimates are loosely matched in only a third of all families, and most of the matched families have a simple income structure. Although the mean of the multiple-question estimate is larger than that of the single-question estimate, the pattern is not monotonic. At lower percentiles up to the median, the single-question estimate is larger, whereas the multiple-question estimate is larger at higher percentiles. Larger family sizes and more income sources contribute to a higher likelihood of inconsistent estimates from the two designs. Families with wage income as the main income source have the highest likelihood of giving consistent estimates compared with all other families. In contrast, families with agricultural income or property income as the main source tend to have a very high probability of larger single-question estimates. Omission of certain income components and rounding can explain over half of the inconsistencies with higher multiple-question estimates and a quarter of the inconsistencies with higher single-question estimates.
"Comparing Single- and Multiple-Question Designs of Measuring Family Income in China Family Panel Studies." Sociological Methods & Research.
Pub Date: 2022-02-07 | DOI: 10.1177/00491241221077237
Natalja Menold, V. Toepoel
Research on mixed devices in web surveys is in its infancy. Using a randomized experiment, we investigated device effects (desktop PC, tablet, and mobile phone) for six response formats and four different numbers of scale points. N = 5,077 members of an online access panel participated in the experiment. We examined an exact test of measurement invariance and composite reliability. The results showed full data comparability across devices and formats, with the exception of the continuous Visual Analog Scale (VAS), but limited comparability across different numbers of scale points. Device effects on reliability emerged in interaction with response formats and numbers of scale points: the VAS, mobile phones, and five-point scales consistently yielded lower reliability. We suggest technically less demanding implementations as well as a unified design for mixed-device surveys.
"Do Different Devices Perform Equally Well with Different Numbers of Scale Points and Response Formats? A Test of Measurement Invariance and Reliability." Sociological Methods & Research.
Pub Date: 2022-02-01 | DOI: 10.1177/0049124119882477
Daniel Schneider, Kristen Harknett
In this paper, we explore the use of Facebook targeted advertisements for the collection of survey data. We illustrate the potential of survey sampling and recruitment on Facebook through the example of building a large employee-employer linked dataset as part of The Shift Project. We describe the workflow process of targeting, creating, and purchasing survey recruitment advertisements on Facebook. We address concerns about sample selectivity and apply post-stratification weighting techniques to adjust for differences between our sample and that of "gold-standard" data sources. We then compare univariate and multivariate relationships in the Shift data against the Current Population Survey and the National Longitudinal Survey of Youth-1997. Finally, we provide an example of the utility of the firm-level nature of the data by showing how firm-level gender composition is related to wages. We conclude by discussing some important remaining limitations of the Facebook approach, as well as highlighting some unique strengths of the targeted-advertisement approach, including rapid data collection in response to research opportunities, rich and flexible sample-targeting capabilities, and low cost, and we suggest broader applications of this technique.
"What's to Like? Facebook as a Tool for Survey Data Collection." Sociological Methods & Research 51(1): 108-140.
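The post-stratification step described above can be sketched in a few lines. This is an illustrative cell-weighting example (weight = benchmark population share divided by sample share of the respondent's stratum) with invented strata, shares, and outcomes, not the paper's actual adjustment cells.

```python
from collections import Counter

# Minimal post-stratification sketch: each respondent is weighted by the
# ratio of the benchmark population share to the sample share of their
# stratum. Strata, shares, and outcomes are invented for illustration.

sample = [("female", 1.0), ("female", 1.0), ("female", 1.0), ("male", 3.0)]
population_share = {"female": 0.5, "male": 0.5}  # e.g., from a benchmark survey

n = len(sample)
counts = Counter(stratum for stratum, _ in sample)
sample_share = {s: c / n for s, c in counts.items()}
weights = [population_share[s] / sample_share[s] for s, _ in sample]

# The weighted mean pulls the estimate toward the under-sampled stratum.
weighted_mean = (sum(w * y for w, (_, y) in zip(weights, sample))
                 / sum(weights))
unweighted_mean = sum(y for _, y in sample) / n
```

Here the over-sampled stratum is down-weighted and the under-sampled one up-weighted, so the weighted estimate moves from the raw sample mean toward the benchmark-adjusted value.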
Pub Date: 2022-02-01 | DOI: 10.1177/00491241211036165
Fernando Rios-Avila, Michelle Lee Maroto
Quantile regression (QR) provides an alternative to linear regression (LR) that allows for the estimation of relationships across the distribution of an outcome. However, as highlighted in recent research on the motherhood penalty across the wage distribution, different procedures for conditional and unconditional quantile regression (CQR, UQR) often result in divergent findings that are not always well understood. In light of such discrepancies, this paper reviews how to implement and interpret a range of LR, CQR, and UQR models with fixed effects. It also discusses the use of Quantile Treatment Effect (QTE) models as an alternative to overcome some of the limitations of CQR and UQR models. We then review how to interpret results in the presence of fixed effects based on a replication of Budig and Hodges’s work on the motherhood penalty using NLSY79 data.
"Moving Beyond Linear Regression: Implementing and Interpreting Quantile Regression Models With Fixed Effects." Sociological Methods & Research.
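The core idea behind quantile regression, minimizing an asymmetric check ("pinball") loss instead of squared error, can be sketched minimally. This toy example recovers unconditional sample quantiles by grid search and is an illustration of the loss only, not the CQR/UQR estimators reviewed in the paper; the wage values are invented.

```python
# Minimal sketch of the check ("pinball") loss that underlies quantile
# regression: the tau-th quantile minimizes asymmetrically weighted
# absolute deviations. Illustrative only; wage values are invented.

def pinball_loss(q, ys, tau):
    """Deviations above q are weighted tau, deviations below 1 - tau."""
    return sum(tau * (y - q) if y >= q else (1.0 - tau) * (q - y) for y in ys)

def sample_quantile(ys, tau):
    """Grid search over observed values for the pinball-loss minimizer."""
    return min(ys, key=lambda q: pinball_loss(q, ys, tau))

wages = [1.0, 2.0, 3.0, 100.0, 200.0]
```

At tau = 0.5 the loss reduces to half the absolute error and the minimizer is the median; at higher tau the minimizer moves up the distribution, which is what lets quantile models describe effects (such as the motherhood penalty) at different points of the wage distribution.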