Pub Date: 2024-01-08 | DOI: 10.1177/10944281231223412
Balázs Kovács
Organizational research increasingly relies on online review data to gauge the perceived valuation and reputation of organizations and products. Online review platforms typically collect ordinal ratings (e.g., 1 to 5 stars); however, researchers often treat them as cardinal data, calculating aggregate statistics such as the average, the median, or the variance of ratings. In calculating these statistics, ratings are implicitly assumed to be equidistant. We test whether star ratings are equidistant using reviews from two large-scale online review platforms: Amazon.com and Yelp.com. We develop a deep learning framework to analyze the text of the reviews in order to assess their overall valuation. We find that 4- and 5-star ratings, as well as 1- and 2-star ratings, are closer to each other than 3-star ratings are to 2- and 4-star ratings. An additional online experiment corroborates this pattern. Using simulations, we show that the distortion caused by non-equidistant ratings is especially harmful when organizations receive only a few reviews and when researchers are interested in estimating variance effects. We discuss potential remedies for rating non-equidistance.
Title: "Five Is the Brightest Star. But by how Much? Testing the Equidistance of Star Ratings in Online Reviews" (Organizational Research Methods)
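The equidistance assumption is easy to see in code: a raw average treats the gap between every adjacent star pair as identical. A minimal sketch, using hypothetical latent valuations (not estimates from the paper) in which 1/2 and 4/5 cluster together, shows how two rating profiles with identical naive means diverge once spacing is non-equidistant:

```python
import numpy as np

# Hypothetical latent valuations per star (NOT estimates from the
# paper): 1- and 2-star sit close together, as do 4- and 5-star,
# while 3-star sits apart from both neighbors.
latent = {1: 0.0, 2: 0.6, 3: 2.0, 4: 3.4, 5: 4.0}

def naive_mean(stars):
    """Average that treats stars as equidistant cardinal values."""
    return float(np.mean(stars))

def latent_mean(stars):
    """Average on the (hypothetical) non-equidistant latent scale."""
    return float(np.mean([latent[s] for s in stars]))

polarized = [1, 5, 5]
moderate = [3, 4, 4]

# Both profiles share the same naive mean (11/3), but their latent
# means differ, so rankings built on raw star averages can distort
# comparisons, especially with only a few reviews per organization.
```

With only three ratings per profile, the two organizations look identical on the naive mean yet differ on the latent scale, which is the small-sample distortion the simulations in the abstract point to.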
Pub Date: 2023-12-13 | DOI: 10.1177/10944281231216381
Arturs Kalnins, Kendall Praitis Hill
Variance inflation factors (VIF scores) are regression diagnostics commonly invoked throughout the social sciences. Researchers typically take the perspective that VIF scores below a numerical rule-of-thumb threshold act as a “silver bullet” to dismiss any and all multicollinearity concerns. Yet, no valid logical basis exists for using VIF thresholds to reject the possibility of multicollinearity-induced type 1 errors. Reporting VIF scores below a threshold does not in any way add to the credibility of statistically significant results among correlated variables. In contrast to this “threshold perspective,” our analysis expands the scope of a perspective that has considered multicollinearity and misspecification. We demonstrate analytically that a regression omitting a relevant variable correlated with included variables that exhibit multicollinearity is susceptible to endogeneity-induced bias inflation and beta polarization, leading to the possible co-existence of type 1 errors and low VIF scores. Further, omitting variables explicitly reduces VIF scores. We conclude that the threshold perspective not only lacks any logical basis but also is fundamentally misleading as a rule of thumb. Instrumental variables represent one clear remedy for endogeneity-induced bias inflation. If exogenous instruments are unavailable, we encourage researchers to test only straightforward, unambiguous theory when using variables that exhibit multicollinearity, and to ensure that correlated co-variates exhibit the expected signs.
Title: "The VIF Score. What is it Good For? Absolutely Nothing" (Organizational Research Methods)
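For readers who want to see what the diagnostic actually computes: VIF_j = 1 / (1 − R_j²), where R_j² comes from regressing predictor j on the remaining predictors. The sketch below, on simulated data with illustrative coefficients, also reproduces the abstract's mechanical point that omitting a correlated variable lowers VIF scores without making the remaining estimates any more trustworthy:

```python
import numpy as np

def vif(X):
    """VIF_j = 1 / (1 - R_j^2), where R_j^2 is the fit from
    regressing column j on the other columns (plus an intercept)."""
    X = np.asarray(X, float)
    n, k = X.shape
    scores = []
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        scores.append(1.0 / (1.0 - r2))
    return np.array(scores)

# Simulated predictors (coefficients are illustrative only):
rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = 0.9 * x1 + 0.3 * rng.normal(size=500)  # strongly correlated with x1
x3 = rng.normal(size=500)                   # independent of both

full = vif(np.column_stack([x1, x2, x3]))
dropped = vif(np.column_stack([x1, x3]))    # omit the correlated x2

# Omitting x2 mechanically drops x1's VIF from roughly 10 toward 1,
# even though omitting a relevant variable is exactly what invites
# endogeneity-induced bias.
```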
Pub Date: 2023-12-03 | DOI: 10.1177/10944281231216323
T. Köhler, Maria N. Rumyantseva, Catherine Welch
Qualitative research methods are deemed best suited to exploring novel phenomena and generating new concepts. Their potential to reevaluate existing theorizing, however, is underestimated. Qualitative restudies that return to the data and settings on which the original theories were built are a well-established tradition in other disciplines (e.g., history, sociology, and anthropology), but have received little recognition in management and organization studies. We introduce qualitative restudies as a powerful means to improve theorizing by revising or challenging theories that have become outdated or obsolete and establishing transferability and longevity of findings and interpretations. We provide a typology of qualitative restudy designs drawing on an integrative review of literature in management, strategy, and the social sciences and humanities. We highlight the main design and ethical considerations for researchers in undertaking a restudy. We argue for the strengths of restudies as lying in their possibilities for retheorizing, above and beyond verifying or updating prior studies. Restudies draw on the strengths of in-depth qualitative work to uncover how interpretations and theorizing are shaped by methodological traditions, historical contexts, existing societal structures, and researcher backgrounds.
Title: "Qualitative Restudies: Research Designs for Retheorizing" (Organizational Research Methods)
Pub Date: 2023-11-13 | DOI: 10.1177/10944281231213068
Philipp Poschmann, Jan Goldenstein, Sven Büchel, Udo Hahn
In this article, we develop a methodological approach for organizational research regarding the construction of multidimensional and relational similarity measures by using the vector space model in natural language processing (NLP). Our vector space approach draws on the well-established premise in organizational research that texts provide a window into social reality and allow measuring theory-based constructs (e.g., organizations’ self-representations). Using a vector space approach allows capturing the multidimensionality of these theory-based constructs and computing relational similarities between organizational entities (e.g., organizations, their members, and subunits) in social spaces and with their environments, such as the organization itself, industries, or countries. Thus, our methodological approach contributes to the recent trend in organizational research to use the potential inherent in big (textual) data by using NLP. In an example, we provide guidance for organizational scholars by illustrating how they can ensure validity when applying our methodological contribution in concrete research practice.
Title: "A Vector Space Approach for Measuring Relationality and Multidimensionality of Meaning in Large Text Collections" (Organizational Research Methods)
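A minimal illustration of the underlying mechanics, with a hypothetical four-word vocabulary and toy term counts (not the authors' data): each organization's text becomes a vector, and relational similarity to an environment, here an industry centroid, reduces to cosine similarity:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two document vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy term-count vectors over a shared (hypothetical) vocabulary:
# ["innovation", "customer", "sustainability", "growth"]
org_a = np.array([4.0, 1.0, 0.0, 2.0])
org_b = np.array([3.0, 2.0, 1.0, 2.0])
org_c = np.array([0.0, 1.0, 5.0, 0.0])

# Relational similarity: compare each organization with its
# environment, represented here as the centroid of all organizations
# in the same "industry".
industry = np.mean([org_a, org_b, org_c], axis=0)
sims = {name: cosine(vec, industry)
        for name, vec in [("a", org_a), ("b", org_b), ("c", org_c)]}
```

In this toy space, organizations a and b (which emphasize the same terms) are more similar to each other than either is to c, and each organization's cosine with the industry centroid quantifies how typical its self-representation is of its environment.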
Pub Date: 2023-11-09 | DOI: 10.1177/10944281231210558
Kevin W. Rockmann, Heather C. Vough
While there has been a great deal of guidance on qualitative research methodology, such scholarship has focused almost exclusively on the first three parts of the qualitative process: study design, data gathering, and coding/analysis. We suggest that writing findings is a fourth stage that involves pre-writing and composing. Our intent is to provide practices for this phase for those who are using qualitative data as the evidentiary basis for their claims. The pre-writing phase is strengthened by structuring claims and storyboarding findings, while the composing phase is improved by critically evaluating how to insert the author's voice. Practices surrounding qualitative writing are discussed, such as which quotes to include, where to place quotes, and how to edit quotes. Annotated examples are also provided that show both recommended and nonrecommended ways of inserting the author's voice into a findings section. A sample structure for writing a claim—a claim table—and a sample storyboard are provided.
Title: "Using Quotes to Present Claims: Practices for the Writing Stages of Qualitative Research" (Organizational Research Methods)
Pub Date: 2023-11-07 | DOI: 10.1177/10944281231210309
Roman Briker, Fabiola H. Gerpott
Management and applied psychology scholars are confronted with a crisis undermining trust in their findings. One solution to this crisis is the publication format of Registered Reports (RRs). Here, authors submit the frontend of their paper for peer review before data collection. While this format can help increase the trustworthiness of research, authors’ usage of RRs—although emerging—has been scarce and scattered. Moreover, common beliefs regarding the (dis)advantages of RRs and a lack of best practices can limit the broad implementation of this approach. To address these issues, we utilized a systematic review process to identify 50 RRs in management and applied psychology and surveyed authors with (N = 86) and without (N = 161) experience in publishing RRs, as well as reviewers/editors who have handled RRs (N = 59). On this basis, we (a) scrutinize prevalent beliefs surrounding the RR format in the fields of management and applied psychology and (b) derive hands-on best practices. In sum, we provide a fact check and guidelines for authors interested in writing RRs, which can also be used by reviewers to evaluate such submissions.
Title: "Publishing Registered Reports in Management and Applied Psychology: Common Beliefs and Best Practices" (Organizational Research Methods)
Pub Date: 2023-11-01 | DOI: 10.1177/10944281231210481
Bo Zhang, Naidan Tu, Lawrence Angrave, Susu Zhang, Tianjun Sun, Louis Tay, Jian Li
Forced-choice (FC) measurement has become increasingly popular due to its robustness to various response biases and reduced susceptibility to faking. Although several current Item Response Theory (IRT) models can extract normative person scores from FC responses, each has its limitations. This study proposes the Generalized Thurstonian Unfolding Model (GTUM) as a more flexible IRT model for FC measures to overcome these limitations. The GTUM (1) adheres to the unfolding response process, (2) accommodates FC scales of any block size, and (3) manages both dichotomous and graded responses. Monte Carlo simulation studies consistently demonstrated that the GTUM exhibited good statistical properties under most realistic conditions. Particularly noteworthy findings include (1) the GTUM's ability to handle FC scales with or without intermediate statements, (2) the consistently superior performance of graded responses over dichotomous responses in person score recovery, and (3) the sufficiency of 10 mixed pairs to ensure robust psychometric performance. Two empirical examples, one with 1,033 responses to a static version of the Tailored Adaptive Personality Assessment System and the other with 759 responses to a graded version of the Forced-Choice Five-Factor Markers, demonstrated the feasibility of the GTUM to handle different types of FC scales. To aid in the practical use of the GTUM, we also developed the R package “fcscoring.”
Title: "The Generalized Thurstonian Unfolding Model (GTUM): Advancing the Modeling of Forced-Choice Data" (Organizational Research Methods)
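The unfolding response process at the heart of the model can be sketched in a few lines. The kernel and choice rule below are simplifications for intuition only, not the GTUM itself: under an ideal-point (unfolding) process, a statement's appeal peaks when the item's location matches the person's trait level and falls off with distance, and a forced-choice pair is resolved by comparing the two appeals:

```python
import numpy as np

def unfold(theta, delta, tau=1.0):
    """Appeal of a single statement under an ideal-point (unfolding)
    process: highest when the person's trait level theta matches the
    item location delta. The squared-distance kernel here is purely
    illustrative; the GTUM's actual link function differs."""
    return np.exp(-((theta - delta) ** 2) / (2 * tau ** 2))

def p_choose_first(theta, delta1, delta2, tau=1.0):
    """Probability of picking statement 1 from a forced-choice pair,
    via a Luce choice rule over the two statements' appeals."""
    a1 = unfold(theta, delta1, tau)
    a2 = unfold(theta, delta2, tau)
    return a1 / (a1 + a2)

# A person at theta = 0.5 strongly prefers a statement located
# nearby (delta = 0.4) over a distant one (delta = -2.0):
p = p_choose_first(0.5, 0.4, -2.0)
```

Note the unfolding signature this captures: unlike a dominance model, appeal is non-monotonic in theta, so a person can reject an intermediate statement from either direction.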
Pub Date: 2023-10-11 | DOI: 10.1177/10944281231205027
Hyun-Soo Woo, Jisun Kim, Albert A. Cannella
The Cox proportional hazard model has often been used for survival analysis in organizational research. The Cox model needs to satisfy one critical assumption—time independence—that the effects of independent variables are constant over survival time (also known as the proportional hazard assumption). However, organizational research often encounters time dependence in the Cox model. Organizational studies have traditionally seemed to view time dependence as an empirical nuisance, but we highlight that it is also a theory-development opportunity. Indeed, from our review of AMJ and SMJ papers published in a recent 10-year period, we found that researchers rarely considered time dependence as a theory-development opportunity, and worse, many of them did not test for (or report tests for) time dependence. The purpose of our study is to change this pattern. To this end, we provide a step-by-step guide to facilitate testing for time dependence and using time dependence as a theory development opportunity. We also demonstrate our step-by-step guide with an empirical example.
Title: "Time Dependence in the Cox Proportional Hazard Model as a Theory Development Opportunity: A Step-by-Step Guide" (Organizational Research Methods)
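One standard way to probe the proportional hazard assumption is to let the coefficient vary with survival time, for example β(t) = b0 + b1·ln(t), and test whether b1 differs from zero. A toy sketch with hypothetical coefficients (not from the article) shows what a time-dependent effect looks like, and why a constant-β Cox model would average it away:

```python
import numpy as np

# Hypothetical time-varying coefficient: beta(t) = b0 + b1 * ln(t).
# b1 != 0 means the covariate's effect is not constant over survival
# time, violating the proportional hazard assumption.
def beta_t(t, b0=0.8, b1=-0.3):
    return b0 + b1 * np.log(t)

def hazard_ratio(t, **kw):
    """Hazard ratio for a one-unit covariate increase at time t."""
    return float(np.exp(beta_t(t, **kw)))

early = hazard_ratio(1.0)    # exp(0.8): strong early effect
late = hazard_ratio(20.0)    # effect has decayed below 1 by t = 20

# A theory-development reading: the covariate matters early in an
# organization's life but fades (or reverses) later, a pattern a
# single time-constant beta cannot express.
```

In practice the interaction term covariate × ln(t) would be estimated inside the Cox partial likelihood (or diagnosed via Schoenfeld residuals); the sketch only illustrates the interpretation of a nonzero b1.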
Pub Date: 2023-10-05 | DOI: 10.1177/10944281231202741
Nale Lehmann-Willenbrock, Hayley Hung
Social signal processing develops automated approaches to detect, analyze, and synthesize social signals in human–human as well as human–machine interactions by means of machine learning and sensor data processing. Most works analyze individual or dyadic behavior, while the analysis of group or team interactions remains limited. We present a case study of an interdisciplinary work process for social signal processing that can develop automatized measures of complex team interaction dynamics, using team task and social cohesion as an example. In a field sample of 25 real project team meetings, we obtained sensor data from cameras, microphones, and a smart ID badge measuring acceleration. We demonstrate how fine-grained behavioral expressions of task and social cohesion in team meetings can be extracted and processed from sensor data by capturing dyadic coordination patterns that are then aggregated to the team level. The extracted patterns act as proxies for behavioral synchrony and mimicry of speech and body behavior which map onto verbal expressions of task and social cohesion in the observed team meetings. We reflect on opportunities for future interdisciplinary collaboration that can move beyond a simple producer–consumer model.
Title: "A Multimodal Social Signal Processing Approach to Team Interactions" (Organizational Research Methods)
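As a rough sketch of the dyadic-coordination idea (not the authors' pipeline): synchrony between two members' behavior series can be proxied by the maximum lagged correlation, and the dyad scores can then be averaged up to the team level. All series below are synthetic:

```python
import numpy as np

def xcorr_max(x, y, max_lag=3):
    """Maximum absolute Pearson correlation between two behavior
    series over small positive and negative lags: a crude proxy for
    dyadic coordination (one member leading/following the other)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            a, b = x[lag:], y[:-lag]
        elif lag < 0:
            a, b = x[:lag], y[-lag:]
        else:
            a, b = x, y
        r = np.corrcoef(a, b)[0, 1]
        best = max(best, abs(r))
    return best

# Synthetic example: member B's activity trails member A's by one
# time step, so their coordination score is near 1; an unrelated
# noise series scores much lower.
signal = np.sin(np.arange(41) / 2.0)
member_a, member_b = signal[1:], signal[:-1]
noise = np.random.default_rng(3).normal(size=40)

sync = xcorr_max(member_a, member_b)
baseline = xcorr_max(member_a, noise)

# Team-level aggregation would average xcorr_max over all dyads
# in the meeting.
```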
Pub Date: 2023-09-28 | DOI: 10.1177/10944281231202740
Ze Zhu, John A. Aitken, Reeshad S. Dalal, Seth A. Kaplan
Organizational researchers are now making widespread use of ecological momentary assessments but have not yet taken the logical next step to ecological momentary interventions, also called Just-in-Time Adaptive Interventions (JITAIs). JITAIs have the potential to test within-person causal theories and maximize practical benefits to participants through two developmental phases: the microrandomized trial and the randomized controlled trial, respectively. In the microrandomized trial design, within-person randomization and experimental manipulation maximize internal validity at the within-person level. In the randomized controlled trial design, interventions are delivered in a timely and ecological manner while avoiding unnecessary and ill-timed interventions that potentially increase participant fatigue and noncompliance. Despite these potential advantages, the development and implementation of JITAIs require consideration of many conceptual and methodological factors. Given the benefits of JITAIs, but also the various considerations involved in using them, this review introduces organizational behavior and human resources researchers to JITAIs, provides guidelines for JITAI design, development, and evaluation, and describes the extensive potential of JITAIs in organizational behavior and human resources research.
Title: "The Promise of Just-in-Time Adaptive Interventions for Organizational Scholarship and Practice: Conceptual Development and Research Agenda" (Organizational Research Methods)
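The logic of the microrandomized trial phase can be sketched with simulated data (all values hypothetical): because the intervention is randomized at every decision point within each person, stable between-person differences cancel out of the effect estimate:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated microrandomized trial: 50 people, 60 decision points each.
# The true within-person intervention effect (0.5) is illustrative.
n_people, n_points, effect = 50, 60, 0.5

# Randomize the intervention independently at every decision point
# for every person (the defining feature of the design).
treat = rng.integers(0, 2, size=(n_people, n_points))

# Outcomes = stable person baseline + intervention effect + noise.
person_baseline = rng.normal(size=(n_people, 1))
outcome = (person_baseline
           + effect * treat
           + rng.normal(size=(n_people, n_points)))

# Within-person contrast: treated-minus-untreated mean per person,
# then averaged across people. Person baselines cancel out, so the
# estimate recovers the within-person causal effect.
within = np.array([
    outcome[i, treat[i] == 1].mean() - outcome[i, treat[i] == 0].mean()
    for i in range(n_people)
])
est = float(within.mean())
```

The same data could not support this inference if each person were assigned to a single condition for the whole study; the per-decision-point randomization is what licenses the within-person causal claim.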