Empirically grounding analytics (EGA) research in the Journal of Operations Management
Suzanne de Treville, Tyson R. Browning, Rogelio Oliva
Journal of Operations Management, 69(2), 337-348 (2023). DOI: 10.1002/joom.1242
Abstract
Empirically grounding analytics (EGA) is an area of research that emerges at the intersection of empirical and analytical research. By “empirically grounding,” we mean both the empirical justification of model assumptions and parameters and the empirical assessment of model results and insights. EGA is a critical but largely missing aspect of operations management (OM) research. Spearman and Hopp (2021, p. 805) stated that “since empirical testing and refutation of operations models is not an accepted practice in the IE/OM research community, we are unlikely to leverage these to their full potential.” They named several “examples of overly simplistic building blocks leading to questionable representations of complex systems” (p. 805) and suggested that research using analytical tools like closed queuing network models and the Poisson model of demand processes could incorporate empirical experiments to improve understanding of where they do and do not fit reality, highlighting “the importance of making empirical tests of modeling assumptions, both to ensure the validity of the model for its proposed purpose and to identify opportunities for improving or extending our modeling capabilities. The fact that very few IE/OM papers make such empirical tests is an obstacle to progress in our field” (p. 808). They concluded that “Editors should push authors to compare mathematical models with empirical data. Showing that a result holds in one case but not another adds nuance and practicality to research results. It also provides stimulus for research progress” (p. 814). These arguments remind us of Little's (1970) observation that many potentially useful analytical models are not widely adopted in practice. Thus, EGA research can help to close two major gaps: (1) between the empirical and analytical subdivisions in the OM field and (2) between scholarly output and practical relevance.
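To make Spearman and Hopp's suggestion concrete, the following minimal sketch (in Python) shows one standard way to test a Poisson demand assumption empirically, via a chi-square dispersion test. The demand counts and the interpretation threshold are hypothetical illustrations, not data from the editorial.

```python
# A minimal sketch, assuming hypothetical demand-count data, of an empirical
# test of a Poisson demand assumption using the index of dispersion.
import numpy as np
from scipy import stats

# Hypothetical daily demand counts observed at one stocking location.
demand = np.array([3, 7, 2, 9, 4, 11, 1, 6, 8, 0, 12, 5, 7, 3, 10, 2, 6, 9, 4, 8])

# Under a Poisson model, mean equals variance, so the index of dispersion
# (variance / mean) should be close to 1.
dispersion = demand.var(ddof=1) / demand.mean()

# Classic dispersion test: (n - 1) * dispersion is approximately
# chi-square with n - 1 degrees of freedom under the Poisson null.
n = len(demand)
statistic = (n - 1) * dispersion
p_value = 2 * min(stats.chi2.cdf(statistic, df=n - 1),
                  stats.chi2.sf(statistic, df=n - 1))

print(f"index of dispersion = {dispersion:.2f}, p-value = {p_value:.3f}")
# A dispersion well above 1 (overdispersion) would suggest replacing the
# Poisson assumption with, e.g., a negative binomial demand model.
```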
As a journal focused on empirical research, the Journal of Operations Management (JOM) seeks to encourage EGA submissions and publications, but doing so requires our community of authors, reviewers, and editors to share an understanding of the expectations. While such contributions have been encouraged for some time in the verbiage on the JOM website, a more formal effort to draw out examples of EGA research was driven by an editorial call (Browning & de Treville, 2018), and we have since had many discussions, panels, webinars, and workshops to continue to develop and communicate the expectations. This editorial represents another step in that development.
In a general sense, an EGA paper combines mathematical, stochastic, and/or economic modeling insights with empirical data. Modeling captures non-linearities and elements of distributions and allows these features to be incorporated into decision making, whereas empirical research transforms observations into knowledge. Analytical models are evaluated in terms of their results and insights, which might prompt further extensions to or modifications of the model, including new or different inputs and recalibrations. Most modeling papers stop there because the primary contribution is the analytical model. Although some realism is required, it typically falls short of empirical grounding, and a gap is often left between the model's insights and what implementation in practice will entail. Filling this gap by empirically grounding an analytical model creates knowledge by linking analytical insights to what has been observed using empirical methods (such as case studies, action research, field experiments, interviews, analysis of secondary data, etc.) to establish a theoretically and empirically relevant research question. Moreover, since analytical models tend to make many simplifying assumptions, EGA research can help tease out where these assumptions are valid and where they excessively bias results.
Figure 1 situates two kinds of EGA research relative to traditional analytical models. Typically, publications with analytical models focus on the center of the figure: the model details and the insights derived from it. The left side of the figure refers to the empirical grounding of the model, that is, whether there is empirical evidence to justify the model's assumptions, parameters, and specific calibrations. The right side of the figure refers to empirical evidence of the impact of the model, that is, whether the model fits the problem situation, can be used in real time, and provides useful output.
The concerns expressed above by Spearman and Hopp stem from the expectation that a single paper will present both the model and the empirical testing. This expectation leads to the situation in which empirical testing serves only to demonstrate the model in action, rather than preparing the way for the insights encapsulated in the model to be deployed in practice. Given the lack of openness (among some) to publishing further empirical testing, the model may be accepted by the research community based on its analytical strength—but the first question anyone from practice will ask is, “Who else has used this, and what were the results?” JOM is interested in papers that address questions related to both empirical sides of the development and use of analytical models—their grounding and their impact—that is, either side of Figure 1. Are data available for model parameters? How well do the results work in a variety of real situations? Are the results practically implementable? Are they useful to practitioners? Will managers actually use them? Figure 1 thus highlights important but often undervalued elements encountered in empirically grounding insights from analytical models. Both sides of Figure 1 require a significant amount of empirical research—and it is empirical work on either side of Figure 1 that is the primary contribution of an EGA paper in JOM. It is usually expecting too much of a single paper to address both sides of Figure 1 sufficiently.
On the left side of the figure, analytical models are linked to data and observations of reality: Their assumptions, parameters, and calibration should bear resemblance to a real situation. Here, an empirical contribution focuses on the empirical discovery of a new regularity (new assumption) that leads to the development or revision of analytical models to exploit that new-found regularity. Contributions on the left side of Figure 1 represent the “heavy lifting” of empirically grounding models, transforming mathematical insights into a form that permits measurement and application, and making existing mathematical and modeling insights available to address an observed problem. Finding, collecting, preparing, and analyzing data requires a substantial amount of work—especially when it is impossible to obtain data from the company or situation on which a model was developed. Key parameter values may be unobservable and require estimation from available data. Also, the assumptions that made the model tractable may not hold in key contexts: Empirical research needs to address this tradeoff between parsimony and accuracy. At JOM we want the value of such research to be recognized.
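As an illustration of such estimation work, the sketch below recovers a demand-rate parameter when true demand is unobservable because sales are censored by the available stock. The data, the Poisson assumption, and the stock level are all hypothetical; the point is only to show how a key parameter can be grounded in the data that are actually available.

```python
# A minimal sketch, under hypothetical data and a Poisson demand assumption,
# of estimating an unobservable parameter (true demand rate) from censored sales.
import numpy as np
from scipy import optimize, stats

# Observed daily sales and the stock available each day; on days where
# sales == stock, demand was censored (we only know demand >= stock).
sales = np.array([5, 8, 8, 3, 6, 8, 2, 7, 8, 4])
stock = np.full_like(sales, 8)
censored = sales == stock

def neg_log_likelihood(mu):
    """Poisson log-likelihood with right-censoring at the stock level."""
    ll_exact = stats.poisson.logpmf(sales[~censored], mu).sum()
    # P(demand >= stock) equals the survival function evaluated at stock - 1.
    ll_censored = np.log(stats.poisson.sf(stock[censored] - 1, mu)).sum()
    return -(ll_exact + ll_censored)

result = optimize.minimize_scalar(neg_log_likelihood,
                                  bounds=(0.1, 30), method="bounded")
print(f"naive mean of sales = {sales.mean():.2f}, "
      f"censoring-aware MLE = {result.x:.2f}")
# The censoring-aware estimate exceeds the naive sales average because the
# naive average mistakes stockout days for low-demand days.
```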
Contributions on the right side of Figure 1 assess an existing model's performance in real contexts and address emerging issues. Experiments, field tests, and intervention-based research methods are likely candidates for this type of EGA research. These contributions typically build on the empirical insights from the left side of Figure 1 and the insights/results of prior analytical models, but they add the new knowledge created when the effect on decision making of the nonlinearities captured by analytical models is observed empirically. We classify these contributions as EGA as well, although one could also consider them as “analytically grounded empirics.”
Engaging in either side of Figure 1 will trigger an improvement process in which the model is revised based on new assumptions or the availability of new data, and/or its effectiveness (usefulness) and efficiency are increased in the real-world context. This requires toggling back and forth between inductive reasoning to capture the new empirical evidence, deductive reasoning through the analytical model, and abductive reasoning (Josephson & Josephson, 1994) to reconcile the emerging insights and empirical regularities. The surprising and unexpected results that trigger the abductive logic indicate that both the model and its empirical grounding matter to creating actionable knowledge. Creating space for abduction is one of the reasons why successful EGA contributions are more likely to come from the sides than the center of Figure 1. Again, JOM encourages papers that tackle either side of Figure 1 and empirically motivate a significant revision to existing models (see examples below).
The above-described empirical grounding is often replaced in the modeling literature (where the focus is the model formulation and insights) by either stylized assumptions (explicit simplifications that still capture key elements of the problem situation) or artificial (simulated) data to assess the model performance. Table 1 identifies four types of modeling efforts, depending on the source of assumptions and data for assessing model performance (cf. the left and right sides of Figure 1), together with the key contribution of each type of study (the italicized terms in each cell). The upper-left quadrant (a stylized model tested with artificial data) is common where an analytical insight is a paper's primary contribution. Empirical grounding can take the form of either moving to empirical data applied in an actual situation (lower-left quadrant) or observing areas in practice where the model requires extension (upper-right quadrant).
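The following sketch illustrates the movement among Table 1's quadrants for a simple newsvendor decision: the same stylized model is assessed first with artificial data drawn from its own assumed distribution (the upper-left quadrant) and then against a stand-in for empirical field data. Both samples here are simulated for illustration; the prices, distributions, and parameters are hypothetical, not drawn from the editorial.

```python
# A minimal sketch of assessing a stylized newsvendor model on artificial
# versus (stand-in) empirical demand data. All numbers are hypothetical.
import numpy as np
from scipy import stats

price, cost = 10.0, 6.0
critical_ratio = (price - cost) / price  # = 0.4

# Stylized assumption: demand ~ Normal(100, 20); the model's order quantity.
q_stylized = stats.norm.ppf(critical_ratio, loc=100, scale=20)

def mean_profit(q, demand):
    """Realized average newsvendor profit for order quantity q (no salvage)."""
    return (price * np.minimum(q, demand) - cost * q).mean()

rng = np.random.default_rng(0)
artificial = rng.normal(100, 20, size=10_000)   # data from the model's own world
empirical = rng.lognormal(4.5, 0.5, size=500)   # simulated stand-in for field data

print(f"profit on artificial data: {mean_profit(q_stylized, artificial):.1f}")
print(f"profit on empirical data:  {mean_profit(q_stylized, empirical):.1f}")
# A large gap between the two rows signals that the stylized demand assumption
# needs empirical grounding (lower-left quadrant) or that the model requires
# extension (upper-right quadrant) before field deployment.
```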
An effective EGA process will encourage moving across the quadrants in Table 1: Progress made in any quadrant can open new doors in adjacent quadrants. As we gain fluency in managing the EGA research process, it will become easier to take analytical insights into the field, transforming them into effective interventions through a multi-stage process that links analytical and empirical publication outlets. All else being equal, JOM is more interested in empirical studies that assess the effectiveness and usefulness of a model (despite its simplifying assumptions), that is, the right side of Figure 1. However, it should be noted that items in the lower half of the table—referring to analytical models that are fit to empirical data but provide no insight into how implementation of the model led to new knowledge, understanding, or improvements to the model—typically do not qualify as EGA even though the research is carried out in a real context. The contribution from EGA papers (in JOM) must be first and foremost empirical—even if some of the insights arise from the analytical model—but the use of the data must translate into model improvements that strengthen the results derived from the model. This strategy, however, should not be confused with intervention-based research (IBR), where the outcome of the intervention is to improve existing theories or develop new theoretical insights as a result of the engagement with the problem situation (Chandrasekaran et al., 2020; Oliva, 2019).
Tables 2 and 3 summarize aspects of several EGA papers that we will discuss further in this section. We begin with some example papers (in Table 2) that fit best on the left side of Figure 1, followed by papers (in Table 3) that fit best on the right side. Most of these papers have been published in JOM and exemplify the new space that we are seeking to develop, in which empirical work is done to improve the usability of a model.
EGA papers in JOM must have an empirical focus. The analytical insights to be explored empirically are likely to emerge from a model, or models, that have already been published elsewhere. The JOM paper would be evaluated primarily in terms of its empirical contribution rather than its modeling insights. To make this clear in a manuscript, we often advise authors to summarize the model in an appendix (citing its original publication, of course). When the development of an analytical model takes center stage in a paper, that is a sign that it is probably not a good fit for JOM (because the focus of the paper is on the center of Figure 1 rather than on either side of it).
How much empirical grounding is enough? No paper will ever be able to do this completely; it is a matter of degree. Whether the degree is sufficient is a question of warrant (Ketokivi & Mantere, 2021), and whether it is significant is largely subjective (more on this below). How much new insight does the grounding add, and how much does it change our understanding? A manuscript must provide sufficient warrant for its claims of appropriate grounding and the significance of the new insights, often by showing how and why a model's assumptions, calibrations, factors, and/or results should be significantly different. It is incumbent upon authors to convince reviewers that the grounding is sufficient and leads to something significant.
The requisite empirical grounding can be achieved by a variety of methods, both qualitative and quantitative. Model parameterization should similarly be grounded in empirical data, and assumptions that the model makes must be empirically reasonable. As with all research published in JOM, authors must seek a sense of generality, not just focus on a single instance of a problem. We encourage authors to make use of publicly available data in generating empirical insights from the application of the analytical model, while noting that reviewers are not always accustomed to this use of publicly available data: Authors should be prepared to carefully explain what they are doing and why their data set provides warrant for empirical grounding.
The other usual expectations of a JOM paper also apply. For one, the paper should contribute to OM theory. This contribution distinguishes a JOM EGA paper from an article published in a journal such as the INFORMS Journal of Applied Analytics (formerly called Interfaces), wherein articles are oriented toward practitioners and designed to illustrate the use of analytical models in practice. An EGA contribution in JOM brings new knowledge and understanding, occupying a different space than practitioner-oriented usage guides and mere examples of model deployment and application. As with other types of papers in JOM, the paper's contribution must also be sufficiently significant rather than marginal. This criterion is admittedly subjective, with each reviewer bringing their own perspective on the size of a paper's contribution. As a general OM journal, JOM expects contributions to be generalizable rather than specifically applicable only to niche areas. Other author guidelines apply, including the maximum 40-page manuscript length guideline.
JOM is announcing an open call for papers for a special issue on EGA. This call will mention further example papers from other journals. We expect this special issue to provide opportunities to develop and exhibit what JOM expects from EGA papers.
Journal Introduction:
The Journal of Operations Management (JOM) is a leading academic publication dedicated to advancing the field of operations management (OM) through rigorous and original research. The journal's primary audience is the academic community, although it also values contributions that attract the interest of practitioners. However, it does not publish articles that are primarily aimed at practitioners, as academic relevance is a fundamental requirement.
JOM focuses on the management aspects of various types of operations, including manufacturing, service, and supply chain operations. The journal's scope is broad, covering both profit-oriented and non-profit organizations. The core criterion for publication is that the research question must be centered around operations management, rather than merely using operations as a context. For instance, a study on charismatic leadership in a manufacturing setting would only be within JOM's scope if it directly relates to the management of operations; the mere setting of the study is not enough.
Published papers in JOM are expected to address real-world operational questions and challenges. While not all research must be driven by practical concerns, there must be a credible link to practice that is considered from the outset of the research, not as an afterthought. Authors are cautioned against assuming that academic knowledge can be easily translated into practical applications without proper justification.
JOM's articles are abstracted and indexed by several prestigious databases and services, including Engineering Information, Inc.; Executive Sciences Institute; INSPEC; International Abstracts in Operations Research; Cambridge Scientific Abstracts; SciSearch/Science Citation Index; CompuMath Citation Index; Current Contents/Engineering, Computing & Technology; Information Access Company; and Social Sciences Citation Index. This ensures that the journal's research is widely accessible and recognized within the academic and professional communities.