{"title":"Meaningful Theoretical Pathways for Research Contributions","authors":"Elliot Bendoly, Rogelio Oliva","doi":"10.1002/joom.1348","DOIUrl":null,"url":null,"abstract":"<p>Across fields of scholarship, ever since scholarship has existed, there have been numerous discussions opining on what theory is, why it is useful and how best to craft theoretical arguments and frameworks. Every few years, a new discussion particularly relevant to a domain of study emerges. Often the intention of such discussions is to reiterate critical points made in the past as still applicable. In other instances, the discussions attempt to recast and reshape perspectives on theory. Both reiteration and alternate perspectives can prove valuable, as new scholars enter the field and as priorities for journals, editors and review teams evolve.</p><p>These points are also of interest to contemporary discussions at the <i>Journal of Operations Management (JOM)</i>. As an outlet long regarded for impactful empirical work in the field, we have long been interested in the appropriate use of theory and have also had a long history of intervening in our field to re-emphasize the ‘what’, ‘why’ and ‘how’ of meaningful theoretical structures and argumentation. As editors of the journal, we believe it is valuable to reiterate what is well-accepted regarding the role and nature of effective theory in research, whether we are discussing grand theories, theoretical frameworks, mid-range theory or theoretical arguments for specific mechanisms. However, we also strongly believe that it is critically valuable to outline how theoretical contributions may differ, while still offering considerable value to a research effort and the field.</p><p>What is core to the substantive nature of theoretical contributions, of course, must be driven by priorities regarding its role; just as the selection of empirical methods must be driven by the claims emerging from theoretical arguments (even nascent ones), and insights for future scholars driven by observation and analysis. By outlining contemporary priorities that define meaningful theory we are in a far better position to simultaneously expand perspectives on how theoretical contributions can be made, as well as challenge or dispel some often difficult-to-justify criticisms that scholars (authors, reviewers and editors) confront regarding what is ‘good’ theory.</p><p>According to Fried (<span>2020</span>), this “statistical equivalency” is one of the fundamental reasons that we cannot escape the need for well-reasoned theoretical arguments, designed to help us make sense of highly complex settings, in which a wealth of observed signals is accompanied by a wealth of unobserved signals. It is exactly when phenomena are <i>not</i> straightforward and mechanisms are <i>not</i> obvious, where sensemaking, and associated deliberate research inquiry, is critical.</p><p>In the same vein, a ‘complete theory’, akin to a physical law, doesn't present much of a motivator for research—if there is no uncertainty regarding cause and effect, there is little reason to expect that an inquiry into such phenomena would be of interest to a research community. Fortunately, in the domains that are studied in management, we seldom come close to complete theories. Occasionally we find enough evidence to corroborate what we might refer to as grand theories and associated frameworks. 
More often, we observe, or perceive, phenomena that exhibit patterns (either across a body of literature or direct observations in the field) that inspire us to question whether such patterns are repeatable. Indeed, theories are never finished products but rather exist along a continuum of sensemaking from vague hunches to detailed accounts of causal mechanism (Mohr <span>1982</span>; Weick <span>1989</span>), where the initial phases of theorizing often include the creation or definition of constructs and narratives to account for the observed phenomenon.</p><p>With the rise of replication discussions so prominent today, it would be a mistake to forget that methods are merely a means to an end, that they are bound to be imperfectly replicable in observations and analyses they yield. The most critical aspect of replication comes down to whether we can reinforce existing understanding, or whether such attempts at sensemaking require modification, qualification or replacement. That should be the primacy of replication interest for research communities; with a possible exception for communities focused on methodological contributions. Similarly, researchers certainly must be permitted to demonstrate thought that aligns with (replicates) existing theoretical arguments, based on the identification of repeated insights from whatever source, just as they must be permitted to deviate from such arguments if the patterns they encounter do not align. In the complex contexts that characterize management research domains, it is not helpful to expect scholars to identify universal laws, nor is it appropriate to bind them to recognizing or aligning with claims that others have made to that end.</p><p>Furthermore, it should be noted that not all theoretical arguments (hypotheses or propositions) are created equal. There are potential explanations that are clearly better than others. How do we assess the quality of a potential explanation? Bunge (<span>1967</span>), articulates the desired attributes of well-formulated scientific hypotheses as (1) logically sound, (2) grounded in previous knowledge, and (3) empirically testable. We believe that the quality of a conjecture can be judged by the extent to which it fulfills these criteria.\n <sup>1</sup>\n Thus, while two alternative explanations might be equally capable of explaining the data, we can easily assess which has more scientific credibility based on those criteria, for example, ‘a hard object hit and broke the glass’ versus ‘a soft object hit and broke the glass.’</p><p>If we accept the three points listed above as fundamental to the value and role of theory and the desirable attributes of claims, it is also clear, based on our experience with the editorial process, that certain misconceptions regarding what makes “good theory” continue to exist. We outline a few of these fallacies here, along with why they must be deemed to be fundamentally flawed.</p><p>In recognizing what is truly important when it comes to theory, and pushing aside concerns that are not ‘real’ concerns, we can now focus on the fruitful pathways available to authors as their embark on theoretical considerations in their work, and as reviewers and editors approach efforts to further develop such work. 
Figure 1 presents a generalization of two paths available to authors as they leverage observations and theory to build meaningful contributions to the field.</p><p>The common path (Path A) that flows from left to right in Figure 1, often beginning with a more academic-literature inspired motivation, tends to have many recognizable attributes including a front end dominant theoretical positioning and a largely deductive approach to conclusions, albeit benefiting from at least some posteriori theoretical discussion (while avoiding HARK-ing, which we will return to). This research is normally motivated by the identification of research gaps made apparent by reviews of extant bodies of knowledge, leading through grounded argumentation to formal hypothesis testing. While this is, by far, the most common type of submission to <i>JOM</i>, this is clearly not the only approach scholars can and have taken in developing contributions.</p><p>An alternate path (Path B) draws inspiration and motivation predominantly from empirical observations, proceeding largely from right to left in the top of Figure 1. The observation of empirical regularities, which have not yet been fully rationalized by extant research, or the observation of phenomena that contradict existing theories, lead the scholarly effort down the path of “how can we explain what we are seeing?”, rather than “what do we expect to see, given our explanations?”\n <sup>2</sup>\n The outcome of this process does not need to be fully articulated theoretical statements. Rather, it can be tentative definitions of constructs and exploratory language to describe the observed phenomena. This approach, by its very nature, also provides an organic lead into abductive sensemaking, where we are creating theoretical arguments to explain precisely how observations fit into a broader phenomenon in ways that have not been previously articulated. In doing so, we are implicitly anticipating future observations in specific contexts, rather than using existing observations to support theoretical arguments. That is, the claims of such sensemaking arguments often take the form of propositions with the hope that they are eventually followed up by subsequent empirical efforts, utilizing alternate sources of evidence in support of deductive inquiry as well. This can come in the form of separate follow-on studies or a well-crafted multi-method effort. Nevertheless, the process of creating constructs and narratives to describe phenomena and the abductive articulation of theoretical arguments that match the criteria outlined in section 1 are as much as a contribution as the later empirical testing of those propositions.</p><p>How are these paths related to research approaches that we see across our corpus of research at <i>JOM</i>, from largely data-crunching for validation to eliciting real-world responses, to engaging with the real-world in developing theory? Any of these could potentially involve a heavier theory back end (posteriori theorization), with theory motivating approaches at stages of execution and certainly lending motivation, to some minimal degree, at the front end as well. 
Figure 2 presents the processes through which we see theory being inspired by, and opening the door to, a range of empirical tactics that make use of data from real-world processes—the domain of <i>JOM</i> inquiries—to develop or improve theories about those processes and how they should be managed.</p><p>One way of being empirical involves efforts to <i>observe</i> (access, document, and assess) the real-world processes and reflect on the potential causes for the observed regularities (top arc of Figure 2). If the observed regularities are not explained by existing theory or they constitute anomalies from what is expected from the theory, we need to propose potential constructs, language, and explanations; this is pathway B in Figure 1 and is characterized by the abductive process described above. Alternatively, if these observations, even if not inspired by theoretical predictions, do match existing theories and explanations, we can inductively gain confidence on the existing theory from the probabilistic encounters of specific instances.</p><p>A second way of being empirical is to <i>test</i> theoretically derived claims. Ideally, this takes place through experimentation: laboratory experiments attempt to maximize the control and the precision in measurement of variables, while filed experiments maximize the realism and generalizability of the findings (McGrarth <span>1982</span>). Given the high risk and cost of field experiments, efforts to scrutinize design early are clearly of benefit to all parties; hence the recent Registered Reports Review (3R) initiative put in place at <i>JOM</i> (Abdulla, Escamilla, and Oliva <span>2024</span>). Clearly, randomized controlled trials are not always possible and quasi-experimental designs (Shadish, Cook, and Campbell <span>2001</span>) or natural experiments (where the treatment is applied ‘haphazardly’ to some units but denied to others) are valid ways to either refute the claims or, if not rejected, increase their validity. An alternative way to test theoretically-derived claims is to rely on non-experimental data—either explicitly gathered for the study (primary data) or repurposed from other data gathering efforts (secondary data)—and establish causal claims through statistical estimation procedures (Cunningham <span>2021</span>; Pearl and Mackenzie <span>2018</span>). These approaches follow a Path A strategy and correspond to the loops in Figure 2 through “test claims”; one passing through the real-world process reflecting the treatment needed for quasi/experimental work, and the other emblematic of the fact that all observation and data acquisition is guided by the theoretical claims that are being tested.</p><p>A third way of being empirical is to <i>intervene</i> using theory to guide improvements in real-world processes; that is, use the theory to provide solutions. While <i>JOM</i> has explicit editorial policies not to focus on solutions as contributions (JOM <span>2004</span>)\n <sup>3</sup>\n , there is ample potential to learn about the relevance and usefulness of a theory when attempting to use it to control or improve a problem situation. The recent creation of the Intervention-based Research (IBR) department in <i>JOM</i> has opened the path to use interventions to test and develop theory within the context of a situation where the researcher engages with practitioners as an agent of change in the problem situation (Oliva <span>2019</span>). 
The fact that the intervention might require immediate changes to the implementation strategy and that outcomes are not often what was predicted by the theory, create the opportunity to document new data from the real word processes that could lead to modifications to the theory originally used to guide the intervention. As such, intervention-based research uses Path A to design the intervention (deductively from an existing theory) but leverages the data from the intervention to abductively derive insights for theory (Path B); see the loops created through “adapt” in Figure 2.</p><p>Regardless of the chosen empirical strategy (observation, testing, and intervention), the role of theoretical argumentation, both a priori and posteriori, with different degrees of emphasis depending on the chosen path, is fundamental regardless of what we do. It precedes specific actions, but also clearly emerges from others. Its role and placement are contingent on what is being accomplished, but we can't accomplish much of anything without it. At the end of the day, in a scientific endeavor, the criteria to assess the contribution of an empirical study is its contribution to theory. If the process is inductive/abductive and we are only making sense of unexplained regularities or anomalies, clearly the articulation of a new theory that can be subsequently tested is enough of a contribution. However, if the purpose of the study is to test existing theory (whether with secondary data or through experiments and intervention) then placing the findings from the study in the proper context—for example, how theories need to be updated? What are new research questions that are triggered be these results?—is a requirement for the contribution to be meaningful.</p><p>What does all this mean for reviewers and editors?</p><p>As we have affirmed in the <i>JOM</i> editorial team guidelines, all reviewers and their associated reviews are required to be developmental (see https://www.jom-hub.com/editorial-team). This is not wish, it is a mandate. It is also not merely wordplay. Developmental reviews have very specific properties. They identify weakness of papers but make deliberate efforts to help authors shore up those weaknesses. The role of reviewers, at <i>JOM</i>, is not that of ‘gatekeeper’. Their primary role is not to provide an up or down vote. Their primary role is that of providing substantial commentary and guidance. Reviews should never merely state generic grievances without options for redress where that exits.</p><p>Furthermore, with specific regard to theory, a review should never merely state generic disdain for a paper's theoretical elements. Reviews should also not fall victim to the fallacies posed in Section 2, such as a general failure to sufficiently reference extant theory, or comprehensively articulate mechanisms. If a relevant theory exists for use as analogy or comparison, and a reviewer is familiar with that theoretical reference, it is the job of the reviewer to be explicit in guiding the authors towards the consideration of that work. If a mechanism exists that the reviewer feels the authors should describe, it is incumbent on the review to be explicit regarding precisely what that mechanism might be. 
If as a reviewer you feel ‘something is missing, but can't say what’… Don't include that sentiment in your review as such a statement clearly doesn't serve to help develop a paper.</p><p>There are also, certainly, boundaries on the kind of guidance reviews and editors should give regarding theory. For example, reviewers and editors should not create “HARK-ing traps” for authors. That is, it is inappropriate for reviewers to request an author team to develop theoretical arguments to be positioned a priori, if the motivation for such is based on results emerging from the existing analysis demonstrated in the manuscript. While some authors may recognize such recommendations as overtly problematic, some may not and still others may feel it is the only way to get through the review process successfully. To be clear, such action on the part of reviewers or editors is inappropriate. Reviewers should help authors strengthen arguments that they have used to motivate their methods and analysis. It is also fully acceptable to position ‘new’ arguments posteriori (within Discussion sections) in the interest of future research. In both instances, reviewers are obliged to be developmentally constructive in this regard, offering specific recommendations rather than general requests for ‘more’. However, suggesting that unexpected findings brought forward by the analysis be accounted for by the addition of new front end theoretical arguments (as if they existed a priori) is not an acceptable path for reviewers to go down.</p><p>Furthermore, reviewers and editors need to be fully appreciative of the very real possibility that incredibly strong contributions can take on a structure that originates not from an identification of a research-literature gap, but rather from direct observation. If we are to encourage researchers at <i>JOM</i> and other journals to engage with practice, we must imagine that some of that engagement is going to lead to the recognition of regularities and anomalies that have not yet been explained, and that such observations are at least as important (if not more so) than inspirations drawn predominantly from extant published work. We must be open to these highly abductive paths taken by authors, while still expecting authors to fulfill what is required in the form of thoughtful sensemaking that all for impactful theoretical contributions.</p>","PeriodicalId":51097,"journal":{"name":"Journal of Operations Management","volume":"71 1","pages":"4-10"},"PeriodicalIF":6.5000,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/joom.1348","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Operations Management","FirstCategoryId":"91","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/joom.1348","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MANAGEMENT","Score":null,"Total":0}
Citations: 0
Abstract
Across fields of scholarship, ever since scholarship has existed, there have been numerous discussions opining on what theory is, why it is useful and how best to craft theoretical arguments and frameworks. Every few years, a new discussion particularly relevant to a domain of study emerges. Often the intention of such discussions is to reiterate critical points made in the past as still applicable. In other instances, the discussions attempt to recast and reshape perspectives on theory. Both reiteration and alternate perspectives can prove valuable, as new scholars enter the field and as priorities for journals, editors and review teams evolve.
These points are also of interest to contemporary discussions at the Journal of Operations Management (JOM). As an outlet long regarded for impactful empirical work in the field, we have long been interested in the appropriate use of theory and have also had a long history of intervening in our field to re-emphasize the ‘what’, ‘why’ and ‘how’ of meaningful theoretical structures and argumentation. As editors of the journal, we believe it is valuable to reiterate what is well-accepted regarding the role and nature of effective theory in research, whether we are discussing grand theories, theoretical frameworks, mid-range theory or theoretical arguments for specific mechanisms. However, we also strongly believe that it is critically valuable to outline how theoretical contributions may differ, while still offering considerable value to a research effort and the field.
What is core to the substantive nature of theoretical contributions, of course, must be driven by priorities regarding their role; just as the selection of empirical methods must be driven by the claims emerging from theoretical arguments (even nascent ones), and insights for future scholars must be driven by observation and analysis. By outlining contemporary priorities that define meaningful theory, we are in a far better position to simultaneously expand perspectives on how theoretical contributions can be made, and to challenge or dispel some often difficult-to-justify criticisms that scholars (authors, reviewers and editors) confront regarding what is ‘good’ theory.
According to Fried (2020), this “statistical equivalency” is one of the fundamental reasons that we cannot escape the need for well-reasoned theoretical arguments, designed to help us make sense of highly complex settings in which a wealth of observed signals is accompanied by a wealth of unobserved signals. It is exactly when phenomena are not straightforward and mechanisms are not obvious that sensemaking, and the associated deliberate research inquiry, is critical.
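To make the idea of statistical equivalency concrete, the brief sketch below (our illustration, not drawn from the editorial; Python with numpy assumed) simulates two different causal stories—X directly causing Y versus an unobserved common cause driving both—that yield the same observable joint distribution of X and Y. With only the observed data in hand, the two accounts are indistinguishable; choosing between them requires exactly the kind of well-reasoned theoretical argument described above.

```python
# Illustrative sketch (not from the editorial): two causal stories that are
# statistically equivalent with respect to the observed variables X and Y.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Story 1: X directly causes Y.
x1 = rng.normal(size=n)
y1 = 0.6 * x1 + np.sqrt(1 - 0.6**2) * rng.normal(size=n)

# Story 2: X has no effect on Y; both are driven by an unobserved common cause Z.
z = rng.normal(size=n)
x2 = np.sqrt(0.6) * z + np.sqrt(0.4) * rng.normal(size=n)
y2 = np.sqrt(0.6) * z + np.sqrt(0.4) * rng.normal(size=n)

# Both stories imply (approximately) standard-normal X and Y with corr(X, Y) ~ 0.6,
# so data on X and Y alone cannot discriminate between them.
for label, x, y in [("direct cause", x1, y1), ("common cause", x2, y2)]:
    print(f"{label}: var(X)={x.var():.2f}, var(Y)={y.var():.2f}, "
          f"corr(X,Y)={np.corrcoef(x, y)[0, 1]:.2f}")
```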
In the same vein, a ‘complete theory’, akin to a physical law, doesn't present much of a motivator for research—if there is no uncertainty regarding cause and effect, there is little reason to expect that an inquiry into such phenomena would be of interest to a research community. Fortunately, in the domains that are studied in management, we seldom come close to complete theories. Occasionally we find enough evidence to corroborate what we might refer to as grand theories and associated frameworks. More often, we observe, or perceive, phenomena that exhibit patterns (either across a body of literature or direct observations in the field) that inspire us to question whether such patterns are repeatable. Indeed, theories are never finished products but rather exist along a continuum of sensemaking from vague hunches to detailed accounts of causal mechanism (Mohr 1982; Weick 1989), where the initial phases of theorizing often include the creation or definition of constructs and narratives to account for the observed phenomenon.
With replication discussions so prominent today, it would be a mistake to forget that methods are merely a means to an end, and that they are bound to be imperfectly replicable in the observations and analyses they yield. The most critical aspect of replication comes down to whether we can reinforce existing understanding, or whether such attempts at sensemaking require modification, qualification or replacement. That should be the primary replication interest for research communities, with a possible exception for communities focused on methodological contributions. Similarly, researchers certainly must be permitted to demonstrate thinking that aligns with (replicates) existing theoretical arguments, based on the identification of repeated insights from whatever source, just as they must be permitted to deviate from such arguments if the patterns they encounter do not align. In the complex contexts that characterize management research domains, it is not helpful to expect scholars to identify universal laws, nor is it appropriate to bind them to recognizing or aligning with claims that others have made to that end.
Furthermore, it should be noted that not all theoretical arguments (hypotheses or propositions) are created equal. Some potential explanations are clearly better than others. How do we assess the quality of a potential explanation? Bunge (1967) articulates the desired attributes of well-formulated scientific hypotheses as being (1) logically sound, (2) grounded in previous knowledge, and (3) empirically testable. We believe that the quality of a conjecture can be judged by the extent to which it fulfills these criteria.¹ Thus, while two alternative explanations might be equally capable of explaining the data, we can easily assess which has more scientific credibility based on those criteria, for example, ‘a hard object hit and broke the glass’ versus ‘a soft object hit and broke the glass.’
If we accept the three points listed above as fundamental to the value and role of theory and the desirable attributes of claims, it is also clear, based on our experience with the editorial process, that certain misconceptions regarding what makes “good theory” continue to exist. We outline a few of these fallacies here, along with why they must be deemed to be fundamentally flawed.
In recognizing what is truly important when it comes to theory, and pushing aside concerns that are not ‘real’ concerns, we can now focus on the fruitful pathways available to authors as they embark on theoretical considerations in their work, and as reviewers and editors approach efforts to further develop such work. Figure 1 presents a generalization of two paths available to authors as they leverage observations and theory to build meaningful contributions to the field.
The common path (Path A) that flows from left to right in Figure 1, often beginning with a motivation inspired by the academic literature, tends to have many recognizable attributes, including a front-end-dominant theoretical positioning and a largely deductive approach to conclusions, albeit benefiting from at least some a posteriori theoretical discussion (while avoiding HARK-ing, to which we will return). This research is normally motivated by the identification of research gaps made apparent by reviews of extant bodies of knowledge, leading through grounded argumentation to formal hypothesis testing. While this is, by far, the most common type of submission to JOM, it is clearly not the only approach scholars can take, and have taken, in developing contributions.
An alternate path (Path B) draws inspiration and motivation predominantly from empirical observations, proceeding largely from right to left across the top of Figure 1. The observation of empirical regularities that have not yet been fully rationalized by extant research, or of phenomena that contradict existing theories, leads the scholarly effort down the path of “how can we explain what we are seeing?”, rather than “what do we expect to see, given our explanations?”² The outcome of this process does not need to be fully articulated theoretical statements. Rather, it can be tentative definitions of constructs and exploratory language to describe the observed phenomena. This approach, by its very nature, also provides an organic lead into abductive sensemaking, where we create theoretical arguments to explain precisely how observations fit into a broader phenomenon in ways that have not been previously articulated. In doing so, we are implicitly anticipating future observations in specific contexts, rather than using existing observations to support theoretical arguments. That is, the claims of such sensemaking arguments often take the form of propositions, with the hope that they are eventually followed up by subsequent empirical efforts, utilizing alternate sources of evidence in support of deductive inquiry as well. This can come in the form of separate follow-on studies or a well-crafted multi-method effort. Nevertheless, the process of creating constructs and narratives to describe phenomena, and the abductive articulation of theoretical arguments that match the criteria outlined in Section 1, are as much a contribution as the later empirical testing of those propositions.
How are these paths related to the research approaches that we see across our corpus of research at JOM, from largely data-crunching for validation, to eliciting real-world responses, to engaging with the real world in developing theory? Any of these could potentially involve a heavier theory back end (a posteriori theorization), with theory motivating approaches at various stages of execution and certainly lending motivation, to at least some minimal degree, at the front end as well. Figure 2 presents the processes through which we see theory being inspired by, and opening the door to, a range of empirical tactics that make use of data from real-world processes—the domain of JOM inquiries—to develop or improve theories about those processes and how they should be managed.
One way of being empirical involves efforts to observe (access, document, and assess) real-world processes and reflect on the potential causes for the observed regularities (top arc of Figure 2). If the observed regularities are not explained by existing theory, or they constitute anomalies relative to what the theory predicts, we need to propose potential constructs, language, and explanations; this is Path B in Figure 1 and is characterized by the abductive process described above. Alternatively, if these observations, even if not inspired by theoretical predictions, do match existing theories and explanations, we can inductively gain confidence in the existing theory from probabilistic encounters with specific instances.
A second way of being empirical is to test theoretically derived claims. Ideally, this takes place through experimentation: laboratory experiments attempt to maximize control and precision in the measurement of variables, while field experiments maximize the realism and generalizability of the findings (McGrath 1982). Given the high risk and cost of field experiments, efforts to scrutinize design early are clearly of benefit to all parties; hence the recent Registered Reports Review (3R) initiative put in place at JOM (Abdulla, Escamilla, and Oliva 2024). Clearly, randomized controlled trials are not always possible, and quasi-experimental designs (Shadish, Cook, and Campbell 2001) or natural experiments (where the treatment is applied ‘haphazardly’ to some units but denied to others) are valid ways to either refute the claims or, if the claims are not rejected, increase their validity. An alternative way to test theoretically derived claims is to rely on non-experimental data—either explicitly gathered for the study (primary data) or repurposed from other data-gathering efforts (secondary data)—and establish causal claims through statistical estimation procedures (Cunningham 2021; Pearl and Mackenzie 2018). These approaches follow a Path A strategy and correspond to the loops in Figure 2 through “test claims”; one passing through the real-world process, reflecting the treatment needed for experimental and quasi-experimental work, and the other emblematic of the fact that all observation and data acquisition is guided by the theoretical claims being tested.
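As a purely illustrative instance of this second, Path A style of being empirical—again our own sketch, not the authors'—the code below simulates a hypothetical panel of plants, some of which adopt a new scheduling policy, and tests the theoretically derived claim that adoption improves throughput using a difference-in-differences specification (pandas and statsmodels assumed). Note that the causal reading of the interaction coefficient rests on the parallel-trends assumption, itself a theoretical argument that must be defended rather than a fact delivered by the data.

```python
# Illustrative sketch: testing a theoretically derived claim with
# non-experimental (simulated) data via difference-in-differences.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_plants, true_effect = 400, 2.0

plants = pd.DataFrame({
    "plant": np.arange(n_plants),
    "treated": rng.integers(0, 2, n_plants),   # policy adopters (not randomized)
    "baseline": rng.normal(50, 5, n_plants),   # plant-specific baseline throughput
})

rows = []
for post in (0, 1):  # pre- and post-adoption periods
    y = (plants["baseline"]
         + 3.0 * post                               # common time trend
         + 1.5 * plants["treated"]                  # pre-existing group difference
         + true_effect * plants["treated"] * post   # causal effect of interest
         + rng.normal(0, 2, n_plants))
    rows.append(pd.DataFrame({"plant": plants["plant"], "treated": plants["treated"],
                              "post": post, "throughput": y}))
panel = pd.concat(rows, ignore_index=True)

# The 'treated:post' interaction estimates the policy effect under parallel trends.
model = smf.ols("throughput ~ treated * post", data=panel).fit()
print(model.params["treated:post"])  # should recover a value close to true_effect
```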
A third way of being empirical is to intervene, using theory to guide improvements in real-world processes; that is, to use the theory to provide solutions. While JOM has explicit editorial policies not to focus on solutions as contributions (JOM 2004)³, there is ample potential to learn about the relevance and usefulness of a theory when attempting to use it to control or improve a problem situation. The recent creation of the Intervention-based Research (IBR) department at JOM has opened the path to using interventions to test and develop theory within the context of a situation where the researcher engages with practitioners as an agent of change in the problem situation (Oliva 2019). The fact that the intervention might require immediate changes to the implementation strategy, and that outcomes are often not what the theory predicted, creates the opportunity to document new data from the real-world processes that could lead to modifications to the theory originally used to guide the intervention. As such, intervention-based research uses Path A to design the intervention (deductively, from an existing theory) but leverages the data from the intervention to abductively derive insights for theory (Path B); see the loops created through “adapt” in Figure 2.
Regardless of the chosen empirical strategy (observation, testing, or intervention), theoretical argumentation, both a priori and a posteriori, with different degrees of emphasis depending on the chosen path, is fundamental to what we do. It precedes some specific actions, but also clearly emerges from others. Its role and placement are contingent on what is being accomplished, but we can't accomplish much of anything without it. At the end of the day, in a scientific endeavor, the criterion for assessing the contribution of an empirical study is its contribution to theory. If the process is inductive/abductive and we are only making sense of unexplained regularities or anomalies, clearly the articulation of a new theory that can subsequently be tested is enough of a contribution. However, if the purpose of the study is to test existing theory (whether with secondary data or through experiments and intervention), then placing the findings from the study in the proper context—for example, how do theories need to be updated? What new research questions are triggered by these results?—is a requirement for the contribution to be meaningful.
What does all this mean for reviewers and editors?
As we have affirmed in the JOM editorial team guidelines, all reviewers and their associated reviews are required to be developmental (see https://www.jom-hub.com/editorial-team). This is not a wish; it is a mandate. It is also not merely wordplay. Developmental reviews have very specific properties. They identify weaknesses in papers but make deliberate efforts to help authors shore up those weaknesses. The role of reviewers at JOM is not that of ‘gatekeeper’. Their primary role is not to provide an up or down vote. Their primary role is that of providing substantial commentary and guidance. Reviews should never merely state generic grievances without options for redress where such redress exists.
Furthermore, with specific regard to theory, a review should never merely state generic disdain for a paper's theoretical elements. Reviews should also not fall victim to the fallacies posed in Section 2, such as faulting a paper in general terms for failing to sufficiently reference extant theory or to comprehensively articulate mechanisms. If a relevant theory exists for use as analogy or comparison, and a reviewer is familiar with that theoretical reference, it is the job of the reviewer to be explicit in guiding the authors towards the consideration of that work. If a mechanism exists that the reviewer feels the authors should describe, it is incumbent on the reviewer to be explicit regarding precisely what that mechanism might be. If, as a reviewer, you feel ‘something is missing, but can't say what’… don't include that sentiment in your review, as such a statement clearly doesn't serve to help develop a paper.
There are also, certainly, boundaries on the kind of guidance reviewers and editors should give regarding theory. For example, reviewers and editors should not create “HARK-ing traps” for authors. That is, it is inappropriate for reviewers to request that an author team develop theoretical arguments to be positioned a priori if the motivation for doing so is based on results emerging from the existing analysis demonstrated in the manuscript. While some authors may recognize such recommendations as overtly problematic, some may not, and still others may feel it is the only way to get through the review process successfully. To be clear, such action on the part of reviewers or editors is inappropriate. Reviewers should help authors strengthen arguments that they have used to motivate their methods and analysis. It is also fully acceptable to position ‘new’ arguments a posteriori (within Discussion sections) in the interest of future research. In both instances, reviewers are obliged to be developmentally constructive in this regard, offering specific recommendations rather than general requests for ‘more’. However, suggesting that unexpected findings brought forward by the analysis be accounted for by the addition of new front-end theoretical arguments (as if they existed a priori) is not an acceptable path for reviewers to go down.
Furthermore, reviewers and editors need to be fully appreciative of the very real possibility that incredibly strong contributions can take on a structure that originates not from the identification of a research-literature gap, but rather from direct observation. If we are to encourage researchers at JOM and other journals to engage with practice, we must expect that some of that engagement is going to lead to the recognition of regularities and anomalies that have not yet been explained, and that such observations are at least as important as (if not more important than) inspirations drawn predominantly from extant published work. We must be open to these highly abductive paths taken by authors, while still expecting authors to fulfill what is required in the form of the thoughtful sensemaking that allows for impactful theoretical contributions.
Journal Introduction:
The Journal of Operations Management (JOM) is a leading academic publication dedicated to advancing the field of operations management (OM) through rigorous and original research. The journal's primary audience is the academic community, although it also values contributions that attract the interest of practitioners. However, it does not publish articles that are primarily aimed at practitioners, as academic relevance is a fundamental requirement.
JOM focuses on the management aspects of various types of operations, including manufacturing, service, and supply chain operations. The journal's scope is broad, covering both profit-oriented and non-profit organizations. The core criterion for publication is that the research question must be centered around operations management, rather than merely using operations as a context. For instance, a study on charismatic leadership in a manufacturing setting would only be within JOM's scope if it directly relates to the management of operations; the mere setting of the study is not enough.
Published papers in JOM are expected to address real-world operational questions and challenges. While not all research must be driven by practical concerns, there must be a credible link to practice that is considered from the outset of the research, not as an afterthought. Authors are cautioned against assuming that academic knowledge can be easily translated into practical applications without proper justification.
JOM's articles are abstracted and indexed by several prestigious databases and services, including Engineering Information, Inc.; Executive Sciences Institute; INSPEC; International Abstracts in Operations Research; Cambridge Scientific Abstracts; SciSearch/Science Citation Index; CompuMath Citation Index; Current Contents/Engineering, Computing & Technology; Information Access Company; and Social Sciences Citation Index. This ensures that the journal's research is widely accessible and recognized within the academic and professional communities.