
Organizational Research Methods: Latest Publications

Application of Prototype Analysis to Organizational Research: A Critical Methodological Review
IF 9.5 | CAS Zone 2 (Management) | Q1 MANAGEMENT | Pub Date: 2025-12-23 | DOI: 10.1177/10944281251399210
Sandra Kiffin-Petersen, Sharon Purchase, Doina Olaru
Prototypes—internalized knowledge structures of the most typical or characteristic features of a concept—are important because they influence cognitive processing. Yet prototype analysis, the method used to examine prototypes, appears relatively underutilized in organizational research. To introduce prototype analysis to a wider audience of organizational scholars, we conducted a critical methodological literature review following a six-step procedure. Seventy-three prototype analyses published in 35 journals were categorized and their content analyzed. A prototype analysis typically includes a sequence of independent studies conducted over two stages, recently referred to as the standard procedure. Our review makes several contributions, including development of a taxonomy of prototype analysis applications, clarification of the standard procedure of a prototype analysis and possible variations, and suggestions for organizational research. Benefits of undertaking a prototype analysis include improved understanding of abstract workplace concepts that are difficult to measure directly, the ability to compare cross-cultural prototypes, and an approach for investigating the issue of construct redundancy. We conclude with best-practice recommendations, implications for organizational scholarship, methodological limitations, and future research suggestions.
Citations: 0
Understanding Relative Differences with Magnitude-Based Hypotheses: A Methodological Conceptualization and Data Illustration
IF 9.5 | CAS Zone 2 (Management) | Q1 MANAGEMENT | Pub Date: 2025-10-07 | DOI: 10.1177/10944281251377139
Dane P. Blevins, David J. Skandera, Roberto Ragozzino
Our paper provides a conceptualization of magnitude-based hypotheses (MBHs). We define an MBH as a specific type of hypothesis that tests for relative differences in the independent impact (i.e., effect size difference) of at least two explanatory variables on a given outcome. We reviewed 1,715 articles across eight leading management journals and found that nearly 10% (165) of articles feature an MBH, employing 41 distinct methodological approaches to test them. However, approximately 40% of these papers show missteps in the post-estimation process required to evaluate MBHs. To address this issue, we offer a conceptual framework, an empirical illustration using Bayesian analysis and frequentist statistics, and a decision-tree guideline that outlines key steps for evaluating MBHs. Overall, we contribute a framework for applying MBHs, demonstrating how they can shift theoretical inquiry from binary questions of whether an effect exists, to more comparative questions about how much a construct matters, compared to what, and under which conditions.
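The post-estimation step that the review finds frequently mishandled can be illustrated with a small frequentist sketch. The example below is not the authors' decision-tree guideline; it is a minimal illustration, using synthetic data and hypothetical variable names (x1, x2), of testing whether two regression coefficients differ in magnitude via a Wald test of a linear restriction.

```python
# Minimal sketch: does x1 matter more than x2 for y? (synthetic data)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
x1 = rng.normal(size=n)                     # predictors drawn on the same scale,
x2 = rng.normal(size=n)                     # so coefficients are directly comparable
y = 0.50 * x1 + 0.20 * x2 + rng.normal(size=n)
df = pd.DataFrame({"y": y, "x1": x1, "x2": x2})

model = smf.ols("y ~ x1 + x2", data=df).fit()

# Wald test of the restriction b_x1 - b_x2 = 0: rejecting it supports a
# magnitude-based hypothesis that the two effect sizes differ; the sign of
# the estimated difference indicates which predictor matters more.
print(model.params)
print(model.t_test("x1 - x2 = 0"))
```

A Bayesian analogue would instead examine the posterior distribution of the coefficient difference, for example against zero or a region of practical equivalence.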
Citations: 0
Generative Artificial Intelligence in Qualitative Data Analysis: Analyzing—Or Just Chatting?
IF 9.5 | CAS Zone 2 (Management) | Q1 MANAGEMENT | Pub Date: 2025-09-30 | DOI: 10.1177/10944281251377154
Duc Cuong Nguyen, Catherine Welch
Researchers, engineers, and entrepreneurs are enthusiastically exploring and promoting ways to apply generative artificial intelligence (GenAI) tools to qualitative data analysis. From promises of automated coding and thematic analysis to functioning as a virtual research assistant that supports researchers in diverse interpretive and analytical tasks, the potential applications of GenAI in qualitative research appear vast. In this paper, we take a step back and ask what sort of technological artifact is GenAI and evaluate whether it is appropriate for qualitative data analysis. We provide an accessible, technologically informed analysis of GenAI, specifically large language models (LLMs), and put to the test the claimed transformative potential of using GenAI in qualitative data analysis. Our evaluation illustrates significant shortcomings that, if the technology is adopted uncritically by management researchers, will introduce unacceptable epistemic risks. We explore these epistemic risks and emphasize that the essence of qualitative data analysis lies in the interpretation of meaning, an inherently human capability.
Citations: 0
Unleashing the Creative Potential of Research Tensions: Toward a Paradox Approach to Methods
IF 9.5 | CAS Zone 2 (Management) | Q1 MANAGEMENT | Pub Date: 2025-07-08 | DOI: 10.1177/10944281251346804
Stephanie Schrage, Constantine Andriopoulos, Marianne W. Lewis, Wendy K. Smith
Research is a paradoxical process. Scholars confront conflicting yet interwoven pressures, considering methodologies that engage complexity and simplicity, induction and deduction, novelty and continuity, and more. Paradox theory offers insights that embrace such tensions, providing empirical examples that harness creative friction to foster more novel and useful, rigorous, and relevant research. Leveraging this lens, we open a conversation on research tensions, developing the foundations of a Paradox Approach to Methods applicable to organization studies more broadly. To do so, we first identify tensions raised at six methodological decision points: research scope, construct definition, underlying assumptions, data collection, data analysis, and interpretation. Second, we build on paradox theory to identify navigating practices: accepting, differentiating, integrating, and knotting. By doing so, we contribute to organizational research broadly by embracing methods of tensions to advance scholarly insight.
Citations: 0
The Journey of Forced Choice Measurement Over 80 Years: Past, Present, and Future
IF 9.5 | CAS Zone 2 (Management) | Q1 MANAGEMENT | Pub Date: 2025-07-07 | DOI: 10.1177/10944281251350687
Philseok Lee, Mina Son, Steven Zhou, Sean Joo, Zihao Jia, Virginia Cheng
Over the past two decades, forced-choice (FC) measures have received considerable attention from researchers and practitioners in industrial and organizational psychology. Despite the growing body of research on FC measures, there has not yet been a comprehensive review synthesizing the diverse lines of research. This article bridges this gap by presenting a systematic review of post-2000 literature on FC measures, addressing ten critical questions, including: 1) validity evidence, 2) faking resistance, 3) FC IRT models, 4) FC test design, 5) FC measure development, 6) test-taker reactions and response processes, 7) measurement and predictive bias, 8) reliability, 9) computerized adaptive testing, and 10) random responding. The review adopts a historical perspective, tracing the development of FC measures and highlighting key empirical findings, methodological advances, current trends, and future directions. By synthesizing a substantial body of evidence across multiple research streams, this article serves as a valuable resource, providing insights into the psychometric properties, theoretical underpinnings, and practical applications of FC measures in organizational contexts such as personnel selection, development, and assessment.
Citations: 0
Using Markov Chains to Detect Careless Responding in Survey Research
IF 9.5 | CAS Zone 2 (Management) | Q1 MANAGEMENT | Pub Date: 2025-06-24 | DOI: 10.1177/10944281251334778
Torsten Biemann, Irmela F. Koch-Bayram, Madleen Meier-Barthold, Herman Aguinis
Careless responses by survey participants threaten data quality and lead to misleading substantive conclusions that result in theory and practice derailments. Prior research developed valuable precautionary and post-hoc approaches to detect certain types of careless responding. However, existing approaches fail to detect certain repeated response patterns, such as diagonal-lining and alternating responses. Moreover, some existing approaches risk falsely flagging careful response patterns. To address these challenges, we developed a methodological advancement based on first-order Markov chains called Lazy Respondents (Laz.R) that relies on predicting careless responses based on prior responses. We analyzed two large datasets and conducted an experimental study to compare careless responding indices to Laz.R and provide evidence that its use improves validity. To facilitate the use of Laz.R, we describe a procedure for establishing sample-specific cutoff values for careless respondents using the “kneedle algorithm” and make an R Shiny application available to produce all calculations. We expect that using Laz.R in combination with other approaches will help mitigate the threat of careless responses and improve the accuracy of substantive conclusions in future research.
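As a rough illustration of the underlying idea, and not the authors' Laz.R implementation or its R Shiny companion, the sketch below fits a first-order Markov chain to a single respondent's item responses and scores how predictable each response is from the previous one; repeated patterns such as alternating or straight-lined responses yield noticeably higher scores. Sample-specific cutoffs for flagging respondents (which the paper derives with the kneedle algorithm) are omitted here.

```python
# Illustrative sketch of a first-order Markov-chain predictability score
# for one respondent's Likert responses; NOT the authors' Laz.R code.
import numpy as np

def markov_predictability(responses, n_categories=5):
    """Average probability that a first-order Markov chain fitted to this
    respondent assigns to each observed transition. Higher values indicate
    more repetitive patterns (straight-lining, alternating, diagonal-lining)."""
    r = np.asarray(responses) - 1                     # map 1..k to 0..k-1
    counts = np.ones((n_categories, n_categories))    # Laplace smoothing
    for prev, nxt in zip(r[:-1], r[1:]):
        counts[prev, nxt] += 1
    trans = counts / counts.sum(axis=1, keepdims=True)
    return float(np.mean([trans[p, q] for p, q in zip(r[:-1], r[1:])]))

attentive   = [3, 4, 2, 5, 3, 1, 4, 2, 3, 5, 2, 4]   # irregular responses
alternating = [1, 5, 1, 5, 1, 5, 1, 5, 1, 5, 1, 5]   # careless pattern
print(markov_predictability(attentive))    # markedly lower score
print(markov_predictability(alternating))  # markedly higher score
```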
Citations: 0
Reliability Evidence for AI-Based Scores in Organizational Contexts: Applying Lessons Learned From Psychometrics
IF 9.5 | CAS Zone 2 (Management) | Q1 MANAGEMENT | Pub Date: 2025-06-24 | DOI: 10.1177/10944281251346404
Andrew B. Speer, Frederick L. Oswald, Dan J. Putka
Machine learning and artificial intelligence (AI) are increasingly used within organizational research and practice to generate scores representing constructs (e.g., social effectiveness) or behaviors/events (e.g., turnover probability). Ensuring the reliability of AI scores is critical in these contexts, and yet reliability estimates are reported in inconsistent ways, if at all. The current article critically examines reliability estimation for AI scores. We describe different uses of AI scores and how this informs the data and model needed for estimating reliability. Additionally, we distinguish between reliability and validity evidence within this context. We also highlight how the parallel test assumption is required when relying on correlations between AI scores and established measures as an index of reliability, and yet this assumption is frequently violated. We then provide methods that are appropriate for reliability estimation for AI scores that are sensitive to the generalizations one aims to make. In conclusion, we assert that AI reliability estimation is a challenging task that requires a thorough understanding of the issues presented, but a task that is essential to responsible AI work in organizational contexts.
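One point from the article, that correlating AI scores with an established measure indexes reliability only under the parallel-test assumption, can be seen in a toy simulation. Everything below is hypothetical simulated data rather than the authors' recommended estimators: two comparable AI scorings of the same responses give a parallel-forms-style reliability estimate, while the correlation with a noisier human-rated measure understates it.

```python
# Toy simulation: corr(AI score, established measure) vs. AI score reliability.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
true_score = rng.normal(size=n)                       # latent construct

# Two comparable AI scorings of the same responses (hypothetical setup):
ai_1 = 0.9 * true_score + rng.normal(scale=0.4, size=n)
ai_2 = 0.9 * true_score + rng.normal(scale=0.4, size=n)

# An established human-rated measure with its own, larger error:
human = true_score + rng.normal(scale=1.0, size=n)

reliability_ai = np.corrcoef(ai_1, ai_2)[0, 1]    # parallel-forms style, ~0.84
corr_with_human = np.corrcoef(ai_1, human)[0, 1]  # attenuated by human error, ~0.65

print(f"AI parallel-forms reliability: {reliability_ai:.2f}")
print(f"Correlation with human measure: {corr_with_human:.2f}")
```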
Citations: 0
A Machine Learning Toolkit for Selecting Studies and Topics in Systematic Literature Reviews
IF 9.5 | CAS Zone 2 (Management) | Q1 MANAGEMENT | Pub Date: 2025-05-26 | DOI: 10.1177/10944281251341571
Andrea Simonetti, Michele Tumminello, Pasquale Massimo Picone, Anna Minà
Scholars conduct systematic literature reviews to summarize knowledge and identify gaps in understanding. Machine learning can assist researchers in carrying out these studies. This paper introduces a machine learning toolkit that employs Network Analysis and Natural Language Processing methods to extract textual features and categorize academic papers. The toolkit comprises two algorithms that enable researchers to: (a) select relevant studies for a given theme; and (b) identify the main topics within that theme. We demonstrate the effectiveness of our toolkit by analyzing three streams of literature: cobranding, coopetition, and the psychological resilience of entrepreneurs. By comparing the results obtained through our toolkit with previously published literature reviews, we highlight its advantages in enhancing transparency, coherence, and comprehensiveness in literature reviews. We also provide quantitative evidence about the toolkit's efficacy in addressing the challenges inherent in conducting a literature review, as compared with state-of-the-art Natural Language Processing methods. Finally, we discuss the critical role of researchers in implementing and overseeing a literature review aided by our toolkit.
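The general pipeline the toolkit describes, extracting textual features, building a paper-similarity network, and grouping papers into topics, can be sketched as follows. This is not the authors' toolkit or its two algorithms; it is a minimal illustration with hypothetical abstracts, where TF-IDF features, a cosine-similarity network, and modularity-based community detection stand in for the Network Analysis and NLP steps.

```python
# Minimal "text features -> similarity network -> topic communities" sketch.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [  # hypothetical paper abstracts
    "Co-branding alliances and brand equity spillovers",
    "Brand alliances and consumer evaluations of partner brands",
    "Coopetition between rival firms in technology markets",
    "Managing coopetition tensions between rival firms",
    "Psychological resilience of entrepreneurs after business failure",
    "How entrepreneurs build resilience after failure",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(abstracts)
sim = cosine_similarity(X)

# Build a paper-similarity network, keeping only sufficiently similar pairs.
G = nx.Graph()
G.add_nodes_from(range(len(abstracts)))
threshold = 0.05   # arbitrary illustration value
for i in range(len(abstracts)):
    for j in range(i + 1, len(abstracts)):
        if sim[i, j] > threshold:
            G.add_edge(i, j, weight=sim[i, j])

# Modularity-based communities approximate the main topics in the corpus.
for k, community in enumerate(greedy_modularity_communities(G, weight="weight")):
    print(f"Topic {k}: {[abstracts[i] for i in sorted(community)]}")
```

In practice, the similarity threshold and community-detection choices would need validation against expert judgment, as the article does by comparing its output with previously published reviews.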
Citations: 0
Using Coreference Resolution to Mitigate Measurement Error in Text Analysis
IF 9.5 | CAS Zone 2 (Management) | Q1 MANAGEMENT | Pub Date: 2025-05-21 | DOI: 10.1177/10944281251334777
Farhan Iqbal, Michael D. Pfarrer
Content analysis has enabled organizational scholars to study constructs and relationships that were previously unattainable at scale. One particular area of focus has been on sentiment analysis, which scholars have implemented to examine myriad relationships pertinent to organizational research. This article addresses certain limitations in sentiment analysis. More specifically, we bring attention to the challenge of accurately attributing sentiment in text that mentions multiple firms. Whereas traditional methods often result in measurement error due to misattributing text to firms, we offer coreference resolution—a natural language processing technique that identifies and links expressions referring to the same entity—as a solution to this problem. Across two studies, we demonstrate the potential of this approach to reduce measurement error and enhance the veracity of text analyses. We conclude by offering avenues for theoretical and empirical advances in organizational research.
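The attribution problem is easy to see in a toy example. In the sketch below, resolve_coreferences is a hypothetical placeholder for a real coreference model, and the word lists form a toy sentiment lexicon; the point is only that once pronouns and generic mentions ("it", "the company") are mapped back to the firm they refer to, sentence-level sentiment can be aggregated to the correct firm instead of being misattributed.

```python
# Toy sketch of firm-level sentiment attribution after coreference resolution.
from collections import defaultdict

POSITIVE = {"strong", "growth", "beat", "gains"}
NEGATIVE = {"weak", "decline", "missed", "losses"}

document = (
    "Acme reported strong growth this quarter. It also beat analyst expectations. "
    "Globex posted a weak decline in revenue. The company missed its targets."
)

def resolve_coreferences(text):
    """Hypothetical stand-in for a real coreference model: returns
    (firm, clause) pairs with pronouns and generic mentions resolved."""
    return [
        ("Acme", "Acme reported strong growth this quarter"),
        ("Acme", "it also beat analyst expectations"),         # 'it' -> Acme
        ("Globex", "Globex posted a weak decline in revenue"),
        ("Globex", "the company missed its targets"),          # 'the company' -> Globex
    ]

def sentiment(text):
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

firm_sentiment = defaultdict(int)
for firm, clause in resolve_coreferences(document):
    firm_sentiment[firm] += sentiment(clause)

print(dict(firm_sentiment))   # {'Acme': 3, 'Globex': -3}
```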
Citations: 0
Enhancing Theorization Using Artificial Intelligence: Leveraging Large Language Models for Qualitative Analysis of Online Data
IF 9.5 | CAS Zone 2 (Management) | Q1 MANAGEMENT | Pub Date: 2025-05-21 | DOI: 10.1177/10944281251339144
Diana Garcia Quevedo, Anna Glaser, Caroline Verzat
Online data are constantly growing, providing a wide range of opportunities to explore social phenomena. Large Language Models (LLMs) capture the inherent structure, contextual meaning, and nuance of human language and are the base for state-of-the-art Natural Language Processing (NLP) algorithms. In this article, we describe a method to assist qualitative researchers in the theorization process by efficiently exploring and selecting the most relevant information from a large online dataset. Using LLM-based NLP algorithms, qualitative researchers can efficiently analyze large amounts of online data while still maintaining deep contact with the data and preserving the richness of qualitative analysis. We illustrate the usefulness of our method by examining 5,516 social media posts from 18 entrepreneurs pursuing an environmental mission (ecopreneurs) to analyze their impression management tactics. By helping researchers to explore and select online data efficiently, our method enhances their analytical capabilities, leads to new insights, and ensures precision in counting and classification, thus strengthening the theorization process. We argue that LLMs push researchers to rethink research methods as the distinction between qualitative and quantitative approaches becomes blurred.
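The "explore and select the most relevant posts" step can be approximated with sentence-embedding similarity. The sketch below is an assumption-laden illustration, not the authors' procedure: the embedding model name, the research-focus query, and the example posts are all hypothetical, and the resulting ranking would only be a starting point for the close qualitative reading the article emphasizes.

```python
# Illustrative sketch: rank social-media posts by semantic relevance to a
# research focus using sentence embeddings (model choice is an assumption).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

research_focus = "impression management tactics signaling environmental commitment"

posts = [  # hypothetical ecopreneur posts
    "Proud to announce our packaging is now 100% compostable!",
    "Great team lunch today, happy Friday everyone.",
    "We offset every delivery's carbon footprint - sustainability first.",
    "New office chairs arrived this week.",
]

query_emb = model.encode(research_focus, convert_to_tensor=True)
post_embs = model.encode(posts, convert_to_tensor=True)
scores = util.cos_sim(query_emb, post_embs)[0]

# Keep the most relevant posts for close qualitative reading.
ranked = sorted(zip(posts, scores.tolist()), key=lambda p: p[1], reverse=True)
for post, score in ranked:
    print(f"{score:.2f}  {post}")
```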
Citations: 0