ISJ editorial: Addressing the implications of recent developments in journal impact factors

Information Systems Journal · IF 6.5 · JCR Q1 (Information Science & Library Science) · CAS Tier 2 (Management) · Pub Date: 2023-01-04 · DOI: 10.1111/isj.12426
Robert M. Davison, Paul Benjamin Lowry
{"title":"ISJ社论:探讨期刊影响因子最新发展的影响","authors":"Robert M. Davison,&nbsp;Paul Benjamin Lowry","doi":"10.1111/isj.12426","DOIUrl":null,"url":null,"abstract":"<p>In 2020, we noticed significant spikes in the JIFs of several leading information systems (IS) journals, both inside and outside the Association for Information Systems (AIS) basket of eight (AIS-8) premier journals. A moderate degree of JIF fluctuation is normal, and a few journals increase steadily year after year, but the changes in 2020 were remarkable. For example, the JIF for the <i>Journal of Strategic Information Systems</i> (<i>JSIS</i>) rose from 3.949 in 2019 to 7.838 in 2020; likewise, the JIF for <i>Information &amp; Organisation</i> (<i>I&amp;O</i>), rose from 3.300 to 6.300. Every other major IS journal saw significant increases. In analysing the patterns, our scope embraces 13 journals, beginning with the AIS-8: <i>European Journal of Information Systems</i> (<i>EJIS</i>), <i>Information Systems Journal</i> (<i>ISJ</i>), <i>Information Systems Research</i> (<i>ISR</i>), <i>Journal of Information Technology</i> (<i>JIT</i>), <i>Journal of the Association for Information Systems</i> (<i>JAIS</i>), <i>Journal of Management Information Systems</i> (<i>JMIS</i>), <i>Journal of Strategic Information Systems</i> (<i>JSIS</i>), and <i>Management Information Systems Quarterly</i> (<i>MISQ</i>). We also included five well-regarded and highly cited IS journals outside the AIS-8: <i>Decision Support Systems</i> (<i>DSS</i>), <i>Information &amp; Management</i> (<i>I&amp;M</i>), <i>Information &amp; Organisation</i> (<i>I&amp;O</i>), <i>International Journal of Information Management</i> (<i>IJIM</i>), and <i>IT &amp; People</i> (<i>IT&amp;P</i>).</p><p>What can we learn from this trawl through the recent journal citation data? What are the implications for the journals and their various stakeholders? Several prominent trends emerge, notably, the short-term skewing effect of highly cited articles, the impact of self-citation, and the great extent to which IS research is cited in articles appearing in many other journals and conferences, some of which are in disciplines far removed from IS. A more detailed analysis of the kinds of article that are highly cited may be valuable; the tentative analysis here suggests that reviews, research agendas, research frameworks, and methods articles are particularly well received. COVID-19 has provided a topic of global interest and significance, and many COVID-19-related articles are well cited, but it may well be that many others are not cited. Indeed, it is instructive to examine the tail of zeros (see Table 11): the articles that are not cited at all and thus contribute only to the attenuating effect of the denominator and not the aggrandizing effect of the numerator in the JIF calculation.</p><p>We suggest that these data may be of considerable concern to editors and publishers. Although a couple of journals have only one uncited article, several journals reach double figures (10–28 articles), with as many as 17.5% of citable articles being uncited. High numbers of uncited articles will naturally diminish a journal's JIF; thus, editors who hope to see a higher JIF must consider how to ensure that the articles they publish are cited. Why does one article languish uncited while another article harvests a rich crop of citations? There are many possible reasons. 
We suggest that these data may be of considerable concern to editors and publishers. Although a couple of journals have only one uncited article, several journals reach double figures (10–28 articles), with as many as 17.5% of citable articles being uncited. High numbers of uncited articles will naturally diminish a journal's JIF; thus, editors who hope to see a higher JIF must consider how to ensure that the articles they publish are cited. Why does one article languish uncited while another harvests a rich crop of citations? There are many possible reasons. For instance, a well-written, rigorously conducted piece of research may fail to excite the imagination of readers, while a hastily cobbled-together, flash-in-the-pan analysis of a trendy topic may simply be timelier and more interesting, thus generating more citations. A key reason could be the relevance and importance of the research and how wide an audience an article targets. For instance, most analytical modelling papers attract far fewer citations than their empirical counterparts, partly because far fewer people read the former, since modelling requires training that most academics do not have. Therefore, relevance, importance, understandability, and accessibility could be key factors. Meanwhile, some papers are cited because everyone doing similar research cites them, even if inappropriately, a trend that seems common in methodological articles on broad topics like common methods bias, structural equation modelling, formative constructs, case studies, action research, and so on.

We also expect that whether a journal is affiliated with a professional publisher has a large influence on these metrics. For example, Elsevier is the largest academic publisher and provides many high-quality services and resources for its journals that are not available at journals like MISQ and JAIS, arguably giving Elsevier journals great advantages (e.g., Scopus, Mendeley, ScienceDirect, Researcher Academy, early view, cross-citation suggestions, full citation support in all major formats, and so on). Similar services are provided by the other professional publishers, such as Wiley, Sage, Springer, Emerald, and Taylor & Francis. The other key factor is that these major publishers have much more extensive coverage in libraries, abstracting services, and indexing services. They also have professional staff dedicated to marketing, article promotion, media outreach, production, copy editing, and so on. As an illustration, if a researcher finds an article on Google Scholar and is brought to a professional publisher's site for the article, the site will typically provide: (1) recommended articles, (2) citing articles, (3) full metadata and article metrics, (4) complete bibliographic information that can be easily downloaded in any format, (5) electronic cross-referencing to every paper cited, (6) links to all the coauthors and their related papers, (7) the ability to track citations and receive alerts, and (8) links to share the article on social media. Little of this is provided by MISQ or JAIS, as both are delivered through the AIS e-library, which has a more basic, conference-paper-like interface and does not even provide direct citation downloading or electronic references to cited articles. Thus, we surmise that if MISQ or JAIS were published by and integrated with a major publisher, their JIFs would be much higher than at present, and they would also have much lower rates of zero-cited items.
It seems to us that the factors that give rise to JIFs could constitute a perilous situation, especially for journals not affiliated with major professional publishers. Although we recognise the value of research frameworks, agendas, reviews, and methods articles, all of which tend to be well cited, the justification for the citation may primarily be that a hard-working author has done all the homework on a particular topic, saving time for others who do not need to read all the background materials (as this one article has done the job for them) and who apply or build on it in their own empirical research. Such articles tend to be few and far between, so their individual impact on the JIF of a particular journal is perhaps limited, if only by the two-year citation window. However, there is also the potential for us to become irremediably hooked on the ever-changing chimera of whatever is novel or interesting while neglecting to undertake more careful investigations into, and analyses of, those critical or "wicked" problems that demand more resources but whose impact may lie far in the future. Moreover, we often fail to address the important scientific task of conducting and publishing replications and extensions, as these are simply not trendy or novel (Brendel et al., 2023). Top IS journals are becoming so notorious for demanding novel theory, topics, methods, and analyses that we might argue that many articles are not replicable and threaten to be one-hit wonders that are highly cited for the wrong reasons. Taken to the extreme, this could lead to a lack of coherence and a failure to cultivate fundamental, important research discourses that build upon one another.

In a world where authors do not need to read an entire article to decide whether or not to cite it, perhaps finding only a relevant snippet to quote via a search engine, the winners in the JIF league may be those journals that publish research articles that are variously high quality, interesting, popular, or fetishized, and that deploy IS most effectively to secure an audience of potential citers for those articles. Early View mechanisms are favoured by publishers because of their revenue implications as well as copyright control, although not all journals have yet implemented this arrangement. Academia.edu, ResearchGate, and similar intermediaries are more controversial, given the potential for revenue loss, yet they are highly popular with authors keen to share their research. Whether the sharing of research takes place on an official platform or that of an intermediary, all stakeholders have an interest in the citations that follow. The publishers should not be neglected; they compete for "intellectual footprint", that is, both the volume of research published and the extent to which it is cited.
The focus on "what may be cited" has implications for the pragmatics of research that may alarm researchers. How would researchers feel if their carefully conducted, rigorous, relevant study were desk-rejected because it is unlikely to garner any citations? Superficially, it may seem unlikely that an editor could know in advance whether a newly submitted article will, after several rounds of revision and after a year or more has elapsed, turn out to be cited or not, but a general obsession with novelty appears to be the common surrogate. A more sophisticated analysis of submissions, rejections, acceptances, and the eventual citation counts could well reveal patterns that make possible the design of an artificial intelligence-based desk-decision (reject or review) recommendation system that achieves precisely that outcome. In practice, we suspect that weak potential for future citations is never likely to be offered as a justification for rejection, but several proxies could be devised, notably the lack of interest or novelty in a research design or theory and the question of an article's fit with the journal's scope or objectives. But who is to say what is interesting or novel? Reviewers certainly wield this power and are free to say whatever they like. We expect (or hope) that associate and senior editors at top IS journals consider reviewer comments with care: they also need to undertake their own careful assessment of the individual merits of a research article (cf. Tarafdar & Davison, 2021). Authors can also help themselves by carefully justifying their choice of research topic and explaining why it is important, worth doing, and of benefit to which stakeholders. This is perhaps no more than a variation on problematization (Chatterjee & Davison, 2021), but it may contribute to ensuring the continued survival of research that leads to solving critical problems, even though some might find it uninteresting and even when it fails to inspire citations by academic researchers.
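The editorial stops at imagining such a system, so the following is purely a thought experiment: a sketch, in Python with scikit-learn, of how historical submissions labelled with eventual citation outcomes might train a desk-decision recommender. Every feature, data point, and threshold here is hypothetical, and the simulated "world" deliberately encodes the worry voiced above, namely that trendiness and fit could dominate rigour.

```python
# Thought-experiment sketch of an AI-based desk-decision recommender.
# All features, data, and thresholds are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=42)

# Hypothetical features per past submission, each scored 0-1:
# [novelty, topical trendiness, methodological rigour, fit with journal scope]
X = rng.random((500, 4))

# Hypothetical label: 1 if the article was cited within two years of publication.
# We simulate a world in which trendiness and fit drive citations more than rigour.
logit = 0.5 * X[:, 0] + 3.0 * X[:, 1] + 0.5 * X[:, 2] + 2.0 * X[:, 3] - 3.0
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# A rigorous, in-scope, but untrendy submission:
new_submission = np.array([[0.4, 0.1, 0.95, 0.8]])
p_cited = model.predict_proba(new_submission)[0, 1]
print(f"Estimated P(cited within two years): {p_cited:.2f}")
print("Desk decision:", "send to review" if p_cited >= 0.5 else "reject")
```

In such a world, the recommender would learn to penalise exactly the kind of careful but unfashionable study the paragraph above worries about, which is the point of the thought experiment rather than an endorsement of the design.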
The continued relevance of JIFs is subject to scrutiny. A journal-level metric cannot indicate either the extent to which a particular research article is of high quality or how it is valued by the readership. Instead, assessing the merits of individual articles is necessary, and this assessment need not be restricted to citation counting. For instance, one can also look at the impact of the research beyond academic contexts, whether in policy or practice, using tools like Altmetrics. Certainly, JIFs are important indicators of journal impact and a proxy for the general quality of a journal, but they are only one of several such indicators. Thus, it is important that we do not misuse JIFs (especially in crucial resource decisions, such as those affecting tenure, promotion, and hiring) and that they be balanced with other sources of input (Dean et al., 2011; Lowry et al., 2013). At the same time, JIFs may provide some indication of the extent to which a given journal is relevant to an academic audience. A higher JIF implies that more people have found the collective research published by that journal to be of value. Because JIF data are subject to spike effects when trendy articles are published, it is safer to look at longer-term trends in the JIF data. The short-term nature of the two-year JIF (the variant most often mentioned) is clearly problematic: many articles create sustained value over much longer periods, which cannot be captured in a two-year window.

Journal editors who wish to improve JIFs also need to be concerned about the depreciating effects of zero-cited papers (Table 11). Why are some articles highly cited and others not cited at all? Some of the answer can be found in the (journal-level) self-citation scores (see Table 8); some journals appear to actively encourage authors to cite papers published in the journal to which they are submitting (see the numbers in the red diagonal). Additionally, while some journals receive large numbers of citations from other IS journals, others attract far fewer. For instance, articles published in MISQ were cited 308 times by the other 12 journals in our analysis, but articles published in IT&P and I&O were cited only 31 and 34 times, respectively, in the other 12 journals, receiving no citations at all from several of them. Editors need to consider carefully which market they aim to exert an impact on, and thus to be cited in, and then examine how to achieve that impact effectively. Alternatively, attention to article-level metrics may be a promising route that champions the quality of individual articles in a more sensitive manner.
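Since Table 8 itself is not reproduced here, the following sketch uses an invented three-journal cross-citation matrix to show the bookkeeping behind such self-citation scores: the diagonal (the "red diagonal" mentioned above) holds each journal's self-citations, and dividing it by the column totals gives a journal-level self-citation rate.

```python
# Illustrative cross-citation bookkeeping (invented counts, not Table 8 data).
import numpy as np

journals = ["Journal A", "Journal B", "Journal C"]

# cites[i, j] = citations from articles in journal i to articles in journal j.
cites = np.array([
    [120, 40, 10],
    [ 35, 90,  5],
    [ 15, 20, 60],
])

incoming = cites.sum(axis=0)       # total citations each journal receives (column sums)
self_cites = np.diag(cites)        # the diagonal: each journal citing itself
self_rate = self_cites / incoming  # share of received citations that are self-citations

for name, total, rate in zip(journals, incoming, self_rate):
    print(f"{name}: {total} citations received, {rate:.0%} from itself")
```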
Going somewhat beyond the data presented here, we suggest that journal editors also need to look at review–revise cycle times. If the review–revise process is slow and if accepted articles conventionally (or on average) go through multiple rounds of revision, then the articles that are published will have taken longer to get through the review system; by the time they are citable, they may no longer be sufficiently current to attract as much attention as those that are reviewed, revised, and published more quickly. The duration and rigour of the review process may be correlated, and we believe that all journal editors would agree on the importance of a rigorous review process. However, the amount of time involved is a slightly different matter; we allow reviewers generous amounts of time, yet, in our multiple experiences as associate editors, senior editors, and editors-in-chief at various IS journals, we find that some reviewers do not even start reading or reviewing until the deadline approaches (or has passed!). Inevitably, some of these reviewers require extensions. Meanwhile, some authors appear not to start thinking about a revision until the deadline approaches, and so they, too, often need more time, especially given COVID-19, ends of semesters, and the usual life events. None of these types of extension is desired by editors or by the audience for the research, yet all are common. The net result is longer review–revise cycle times and, perhaps, lower citation counts as the research becomes more dated.

As editors, we routinely enforce deadlines and chase anyone in any role who is overdue. But the key issue we face is that all of us do this work as unpaid volunteers, so, when push comes to shove, there is a lack of incentive in the journal editing and reviewing system in the IS discipline. We end up with the same small group of people doing most of the work and a large portion of academics contributing little to the review or editing process. It is common today to need to invite ten or even twenty scholars to review an article in order to secure the two who agree to do so. Is it perhaps time to rethink and retool our incentive systems beyond mere thanks and kudos? The financial rewards of academic publishing accrue first to the publishers who sell content and second to authors, who may be rewarded by their institutions. Those who contribute to the review process are primarily rewarded with kudos and, perhaps, with a sense of their power to determine the fate of others' research. That may not be enough. We suggest that if we more consistently reward those involved in the review process, we may also see faster turnaround times (at least from reviewers) and even better-quality reviews. Perhaps it is time to consider charging submission fees and paying reviewers and editors, as is done at all of the top accounting and finance journals?

In this issue of the ISJ, we present eight papers.

In the first paper, Adam et al. (2023) note that IS research on platform control has largely focused on examining how input control (i.e., the mechanisms used to control platform access) affects complementors' intentions and behaviours after their decision to join a digital platform. Against this backdrop, the authors investigate how input control on platforms can also be a salient signal that shapes prospective complementors' expected benefits and costs (i.e., their performance and effort expectancy), and ultimately their decision to join a digital platform. The paper's experimental results show that the overall relationship between perceived input control and intention to join follows an inverted U-shaped curve: a moderate degree of perceived input control, rather than a low or a high degree, maximizes prospective complementors' intention to join. These insights help platform researchers and providers to understand how input control works both as a governing tool and as a signalling tool.

In the second paper, Iannacci et al. (2023) aim to align the Qualitative Comparative Analysis (QCA) counterfactual approach with the logic of retroduction. Drawing on the notion that QCA revolves around an ongoing dialogue between data and theory, Iannacci et al. (2023) show that the minimization of complex solutions in QCA consists of hypothesising the effect(s) of unobserved configurations of causal conditions in a plausible fashion. Accordingly, they argue that plausibility should become a substitute for validity and that, by adding theory or speculative thoughts to empirical data, retroduction provides a compelling account of the synchronous effects stemming from autonomous technologies.

In the third paper, Zou et al. (2023) note that trust effectively mitigates customer uncertainty in initial purchases and plays an important role in customer retention. Despite decades of development, the effect of trust on repurchase behaviour has not yet received sufficient scholarly attention, notably because some studies have settled on an oversimplified diminishing effect. The authors develop a nuanced understanding of the boundary conditions under which trust operates and investigate the role of institutional contexts in online repurchase behaviour. They confirm their hypotheses in two studies of e-commerce and mobile banking using the latent moderated structural (LMS) equations approach.

In the fourth paper, Herterich et al. (2023) report on a case study undertaken at four companies that participate in an industrial smart service ecosystem. Taking an affordance-theoretic perspective, the authors uncover both the antecedents and the process of emergent smart service ecosystems. They find that smart service ecosystems have three socio-technical antecedents: a shared worldview, structural flexibility and integrity, and an architecture of participation. They explain the emergence of smart service ecosystems as a consequence of specialisation in shared affordances and the integration of idiosyncratic affordances into collective affordances. They derive seven propositions regarding the emergence of smart services, outline opportunities for further research, and present practical guidelines for manufacturing firms.
In the fifth paper, Rabl et al. (2023) integrate the IS and digital entrepreneurship literatures with the intrapreneurship literature to establish whether, and under which conditions, support by different types of digital technologies enables employee intrapreneurial behaviour in organisations. Findings from a metric conjoint experiment with 1360 decisions nested within 85 employees show significant positive effects of support by social media, support by collaborative technologies, and support by intelligent decision support systems on employee intrapreneurial behaviour. However, the relative impact of support by these digital technologies varied with different levels of management support for innovation and employee intrapreneurial self-efficacy. The authors not only provide empirical evidence for digital technology support as an enabler of individual intrapreneurial behaviour; in line with an interactionist perspective, they also identify management support for innovation and employee intrapreneurial self-efficacy as important contingencies for leveraging the potential of digital technology support.

In the sixth paper, Yazdanmehr et al. (2023) observe that studies on employee responses to information security policy (ISP) demands show that stress can lead to emotion-focused coping and subsequent ISP violations. Using the Transactional Model of Stress and Coping, the authors argue that inward and outward emotion-focused and problem-focused coping responses to ISP demands coexist and influence ISP violations, and that the effects of security-related stress (SRS) on coping responses are contingent on ISP-related self-efficacy and organisational support. The results indicate that SRS triggers all three coping responses, and that ISP-related self-efficacy and organisational support reduce the effects of SRS on inward and outward emotion-focused coping. Problem-focused coping decreases ISP violation intention, whereas inward and outward emotion-focused coping increase it.

In the seventh paper, Wakefield and Wakefield (2023) examine how affective polarisation drives responses to controversial topics on social media. Their model is built on the framework of social identity theory. Social identity and passion motivate users to distance themselves from rivals, but along non-intuitive paths. Inflated feelings for the in-group, more than dislike of the out-group, drive affective polarisation. Counter to stereotypes, those with obsessive passion hold less animosity toward rivals than others identified with the group. Contrary to expectations that polar opposites attack, Twitter users with greater affective polarisation prefer to shut out rivals by muting, blocking, or unfollowing them, exacerbating echo chambers.

In the eighth paper, Vial et al. (2023) study the management of AI projects in a North American AI consulting firm. They find that the combination of elements drawn from traditional project management, agile practices, and AI workflow practices enables this firm to be effective in delivering AI projects to its customers. At the same time, successfully managing AI projects involves resolving conflicts between the three different logics that underpin these elements. From their case findings, the authors derive four strategies to help organisations better manage their AI projects.

ISJ editorial: Addressing the implications of recent developments in journal impact factors

In 2020, we noticed significant spikes in the JIFs of several leading information systems (IS) journals, both inside and outside the Association for Information Systems (AIS) basket of eight (AIS-8) premier journals. A moderate degree of JIF fluctuation is normal, and a few journals increase steadily year after year, but the changes in 2020 were remarkable. For example, the JIF for the Journal of Strategic Information Systems (JSIS) rose from 3.949 in 2019 to 7.838 in 2020; likewise, the JIF for Information & Organisation (I&O), rose from 3.300 to 6.300. Every other major IS journal saw significant increases. In analysing the patterns, our scope embraces 13 journals, beginning with the AIS-8: European Journal of Information Systems (EJIS), Information Systems Journal (ISJ), Information Systems Research (ISR), Journal of Information Technology (JIT), Journal of the Association for Information Systems (JAIS), Journal of Management Information Systems (JMIS), Journal of Strategic Information Systems (JSIS), and Management Information Systems Quarterly (MISQ). We also included five well-regarded and highly cited IS journals outside the AIS-8: Decision Support Systems (DSS), Information & Management (I&M), Information & Organisation (I&O), International Journal of Information Management (IJIM), and IT & People (IT&P).

What can we learn from this trawl through the recent journal citation data? What are the implications for the journals and their various stakeholders? Several prominent trends emerge, notably, the short-term skewing effect of highly cited articles, the impact of self-citation, and the great extent to which IS research is cited in articles appearing in many other journals and conferences, some of which are in disciplines far removed from IS. A more detailed analysis of the kinds of article that are highly cited may be valuable; the tentative analysis here suggests that reviews, research agendas, research frameworks, and methods articles are particularly well received. COVID-19 has provided a topic of global interest and significance, and many COVID-19-related articles are well cited, but it may well be that many others are not cited. Indeed, it is instructive to examine the tail of zeros (see Table 11): the articles that are not cited at all and thus contribute only to the attenuating effect of the denominator and not the aggrandizing effect of the numerator in the JIF calculation.

We suggest that these data may be of considerable concern to editors and publishers. Although a couple of journals have only one uncited article, several journals reach double figures (10–28 articles), with as many as 17.5% of citable articles being uncited. High numbers of uncited articles will naturally diminish a journal's JIF; thus, editors who hope to see a higher JIF must consider how to ensure that the articles they publish are cited. Why does one article languish uncited while another article harvests a rich crop of citations? There are many possible reasons. For instance, a well-written, rigorously conducted piece of research may fail to excite the imagination of readers while a hastily cobbled together, flash-in-the-pan analysis of a trendy topic may simply be timelier and more interesting, thus generating more citations. A key reason could be relevance and importance of research and how wide an audience an article targets. For instance, most analytical modelling papers have far less citations than their empirical counterparts partly because a considerably smaller number of people read the former as compared to the latter, because modelling requires training most academics do not have. Therefore, relevance, importance, understandability, and accessibility could be key factors. Meanwhile, some papers are cited because everyone doing similar research cites them, even if inappropriately, a trend that seems common in methodological articles on broad topics like common methods bias, structural equation modelling, formative constructs, case studies, action research, and so on.

We also expect that whether a journal is affiliated with a professional publisher also has a large influence on these metrics. For example, Elsevier is the largest academic publisher and provides many high-quality services and resources for its journals that are not provided at journals like MISQ and JAIS, arguably providing great advantages to Elsevier journals (e.g. Scopus, Mendeley, ScienceDirect, Researcher Academy, early view, cross-citation suggestions, full citation support in all major formats, and so on). Similar services are provided at the other professional publishers like Wiley, Sage, Springer, Emerald, and Taylor & Francis. The other key factor is that these major publishers have much more extensive coverage in libraries, abstracting services, and indexing services. They also have professional staff dedicated to marketing, promoting articles, media outreach, production, copy editing, and so on. As an illustration, if a researcher finds an article on Google Scholar and is brought to a professional publisher's site for the article, the site will typically provide: (1) recommended articles, (2) citing articles, (3) full meta-data and article metrics, (4) complete article bibliographic information that can be easily downloaded in any format, (5) electronic cross-referencing to every paper cited, (6) links for all the coauthors and related papers published, (7) ability to track a citation and receive alerts, (8) links to share article on social media, and so on. Little of this is provided by MISQ or JAIS as they both are pushed through the AIS e-library, which has a more basic, conference-paper-like interface and does not even provide direct citation downloading or electronic references to cited articles. Thus, we surmise that if MISQ or JAIS was published by and integrated with a major publisher, then their JIFs would be much higher than at present, and they would also have much lower rates of zero-cited items.

It seems to us that the factors that give rise to JIFs could constitute a perilous situation, especially for journals not affiliated with major professional publishers. Although we recognise the value of research frameworks, agendas, reviews, and methods, all of which tend to be well cited, the justification for the citation may be primarily that a hard-working author has done all the homework on a particular topic, saving time for others who do not need to read all the background materials (as this one article has done the job for them) and who apply or build on it in their own empirical research. Such articles tend to be few and far between, so their individual impact on the JIF of a particular journal is perhaps limited, if only by the two-year citation window. However, there is also the potential for us to become irremediably hooked on the ever-changing chimera of whatever is novel or interesting while neglecting to undertake more careful investigations into and analyses of those critical or “wicked” problems that demand more resources but whose impact may lie far in the future. Moreover, we often fail to address the important scientific task of conducting and publishing replications and extensions, as these are simply not trendy or novel (Brendel et al., 2023). Top IS journals are becoming so notorious for demanding novel theory, topics, methods, and analyses that we might argue that many articles are not replicable and threaten to be one-hit wonders that are highly cited for the wrong reasons. Taken to the extreme, this could lead to a lack of coherence and a failure to cultivate fundamental, important research discourses that build upon one another.

In a world where authors do not need to read an entire article to decide whether to cite it, perhaps finding only a relevant snippet to quote via a search engine, the winners in the JIF league may be those journals that publish research articles that are variously high quality, interesting, popular, or fetishized, and that deploy IS most effectively to secure an audience of potential citers for those articles. Early View mechanisms are favoured by publishers because of their revenue implications as well as copyright control, although not all journals have yet implemented this arrangement. Academia.edu, ResearchGate, and similar intermediaries are more controversial, given the potential for revenue loss, yet are highly popular with authors keen to share their research. Whether the sharing of research takes place on an official platform or that of an intermediary, all stakeholders have an interest in the citations that follow. Nor should the publishers be neglected; they compete for “intellectual footprint”, that is, both the volume of research published and the extent to which it is cited.

The focus on “what may be cited” has implications for the pragmatics of research that may alarm researchers. How would researchers feel if their carefully conducted, rigorous, relevant study were desk rejected because it was unlikely to garner any citations? Superficially, it may seem unlikely that an editor could know in advance whether a newly submitted article will, after several rounds of revision and after a year or more has elapsed, turn out to be cited or not, but a general obsession with novelty appears to be the common surrogate. A more sophisticated analysis of submissions, rejections, acceptances, and eventual citation counts could well reveal patterns that make it possible to design an artificial intelligence-based desk-decision (reject or review) recommendation system that achieves precisely that outcome. In practice, we suspect that weak potential for future citations is unlikely ever to be offered as a justification for rejection, but several proxies could be devised, notably the lack of interest or novelty in a research design or theory and the question of an article's fit with the journal's scope or objectives. But who is to say what is interesting or novel? Reviewers certainly wield this power and are free to say whatever they like. We expect (or hope) that associate and senior editors at top IS journals consider reviewer comments with care: they also need to undertake their own careful assessment of the individual merits of a research article (cf. Tarafdar & Davison, 2021). Authors can also help themselves by carefully justifying their choice of research topic and explaining why it is important, why it is worth doing, and which stakeholders stand to benefit. This is perhaps no more than a variation on problematization (Chatterjee & Davison, 2021), but it may contribute to ensuring the continued survival of research that leads to solving critical problems, even though some might find it uninteresting and even when it fails to inspire citations by academic researchers.
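To be clear, we know of no journal that operates such a system; what follows is a minimal, purely speculative sketch of how such a recommender might be assembled, assuming an editorial office held hand-coded features for past submissions. The feature names, training data, and decision threshold are all invented for illustration:

# Speculative sketch of an AI-based desk-decision recommender.
# All features, data, and the 0.5 threshold are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: novelty score, fit-with-journal-scope score, is_review_or_agenda (0/1)
X_train = np.array([
    [0.9, 0.8, 1],
    [0.2, 0.9, 0],
    [0.7, 0.3, 0],
    [0.8, 0.7, 1],
    [0.1, 0.4, 0],
    [0.6, 0.9, 0],
])
# Label: 1 if the published article was cited within two years, else 0
y_train = np.array([1, 0, 0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

new_submission = np.array([[0.5, 0.8, 0]])
p_cited = model.predict_proba(new_submission)[0, 1]
print(f"Predicted probability of citation within two years: {p_cited:.2f}")
print("Recommendation:", "send to review" if p_cited >= 0.5 else "desk reject")

Even as a sketch, this makes the problem vivid: such a model would learn to reward exactly the surrogates, novelty and trendiness, whose dominance we question above.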

The continued relevance of JIFs is subject to scrutiny. A journal-level metric cannot indicate either the extent to which a particular research article is of high quality or how it is valued by the readership. Instead, the merits of individual articles must be assessed, and this assessment need not be restricted to citation counting. For instance, one can also look at the impact of the research beyond academic contexts, whether in policy or practice, using tools like Altmetrics. Certainly, JIFs are important indicators of journal impact and a proxy for the general quality of a journal, but they are only one of several such indicators. Thus, it is important that we do not misuse JIFs (especially in crucial resource decisions, such as tenure, promotion, and hiring) and that they be balanced with other sources of input (Dean et al., 2011; Lowry et al., 2013). At the same time, JIFs may provide some indication of the extent to which a given journal is relevant to an academic audience: a higher JIF implies that more people have found the collective research published by that journal to be of value. Because JIF data are subject to spike effects when trendy articles are published, it is safer to look at longer-term trends in the data. The short-term nature of the two-year JIF (the most commonly quoted variant) is clearly problematic: many articles create sustained value over much longer periods, which cannot be captured in a two-year window.
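To make the two-year window concrete, the standard calculation can be written as follows (this is the conventional definition of the two-year JIF, not anything specific to the journals analysed here):

\[
\mathrm{JIF}_{Y} = \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}}
\]

where \(C_{Y}(y)\) is the number of citations received in year \(Y\) by items the journal published in year \(y\), and \(N_{y}\) is the number of citable items published in year \(y\). For illustration, a journal that published 100 citable items across years \(Y-1\) and \(Y-2\), and whose items attracted 500 citations in year \(Y\), would have a JIF of \(500/100 = 5.0\); citations arriving in later years, however numerous, contribute nothing to that year's figure.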

Journal editors who wish to improve JIFs also need to be concerned about the depreciating effect of zero-cited papers (Table 11). Why are some articles highly cited and others not cited at all? Part of the answer can be found in the (journal-level) self-citation scores (see Table 8); some journals appear to actively encourage authors to cite papers published in the journal to which they are submitting (see the numbers in the red diagonal). Additionally, while some journals receive large numbers of citations from other IS journals, others attract far fewer. For instance, articles published in MISQ were cited 308 times by the other 12 journals in our analysis, whereas articles published in IT&P and I&O were cited only 31 and 34 times, respectively, in the other 12 journals, receiving no citations at all from several of them. Editors need to consider carefully which market they aim to exert an impact on, and thus be cited in, and then examine how to achieve that impact effectively. Alternatively, attention to article-level metrics may be a promising route that champions the quality of individual articles in a more sensitive manner.
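The arithmetic of the uncited tail is easily illustrated. The figures below are invented for the purpose of the example and are not drawn from Table 11:

# Hypothetical illustration of how zero-cited articles depress a two-year JIF.
# All numbers are invented; none come from the tables discussed above.
citable_items = 100        # items published in the two prior years
total_citations = 500      # citations those items received this year

jif = total_citations / citable_items
print(f"JIF with the full denominator: {jif:.2f}")  # 5.00

# Suppose 20 of those items were never cited: they add nothing to the
# numerator but still inflate the denominator.
uncited = 20
jif_without_tail = total_citations / (citable_items - uncited)
print(f"JIF if the uncited tail were absent: {jif_without_tail:.2f}")  # 6.25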

Going somewhat beyond the data presented here, we suggest that journal editors also need to look at review–revise cycle times. If the review–revise process is slow and if accepted articles conventionally (or on average) go through multiple rounds of revision, then published articles will have taken longer to get through the review system; by the time they are citable, they may no longer be sufficiently current to attract as much attention as articles that are reviewed, revised, and published more quickly. The duration and rigour of the review process may be correlated, and we believe that all journal editors would agree on the importance of a rigorous review process. However, the amount of time involved is a slightly different matter: we allow reviewers generous amounts of time, yet, in our collective experience as associate editors (AEs), senior editors (SEs), and editors-in-chief (EICs) at various IS journals, we find that some reviewers do not even start reading until the deadline approaches (or has passed!). Inevitably, some of these reviewers require extensions. Meanwhile, some authors appear not to start thinking about their revision until the deadline approaches, and so they, too, often need more time, especially given COVID-19, end-of-semester crunches, and the usual life events. None of these types of extension is desired by editors or by the audience for the research, yet all are common. The net result is longer review–revise cycle times and, perhaps, lower citation counts as the research becomes more dated.
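A back-of-the-envelope calculation shows how quickly these delays consume the citation window; the round counts and durations below are assumptions for illustration, not measurements from any journal's editorial system:

# Illustrative arithmetic only; all durations are assumed, not measured.
rounds = 3                 # review-revise rounds before acceptance
review_months = 4          # reviewer and editor time per round
revision_months = 3        # author revision time per round
production_months = 2      # copy editing, typesetting, early view

months_to_publication = rounds * (review_months + revision_months) + production_months
print(f"Submission to publication: {months_to_publication} months")  # 23 months

On these assumptions, an article appears nearly two years after submission and must then earn its citations within the two-year JIF window while competing with work on fresher topics.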

As editors, we routinely enforce deadlines and chase anyone in any role who is overdue. But the key issue we face is that all of us do this work as unpaid volunteers, so, when push comes to shove, there is a lack of incentive in the journal editing and reviewing system in the IS discipline. We end up with the same small group of people doing most of the work while a large portion of academics contribute little to the review or editing process. It is common today to need to invite ten or even twenty scholars to review an article in order to secure two who agree to do so. Is it perhaps time to rethink and retool our incentive systems beyond mere thanks and kudos? The financial rewards of academic publishing accrue first to the publishers, who sell content, and second to authors, who may be rewarded by their institutions. Those who contribute to the review process are rewarded primarily with kudos and, perhaps, with a sense of their power to determine the fate of others' research. That may not be enough. We suggest that if we rewarded those in the review process more consistently, we might also see faster turnaround times (at least from reviewers) and even better-quality reviews. Perhaps it is time to consider charging submission fees and paying reviewers and editors, as is done at all of the top accounting and finance journals?

In this issue of the ISJ, we present eight papers. In the first paper, Adam et al. (2023) note that IS research on platform control has largely focused on examining how input control (i.e. the mechanisms used to control platform access) affects complementors' intentions and behaviours after they have decided to join a digital platform. Against this backdrop, the authors investigate how input control can also act as a salient signal that shapes prospective complementors' expected benefits and costs (i.e. their performance and effort expectancy) and, ultimately, their decision to join a digital platform. The paper's experimental results show that the overall relationship between perceived input control and intention to join follows an inverted U-shaped curve, meaning that a moderate (rather than a low or high) degree of perceived input control maximizes prospective complementors' intention to join. These insights help platform researchers and providers understand how input control works as both a governing tool and a signalling tool.

In the second paper, Iannacci et al. (2023) aim to align the Qualitative Comparative Analysis (QCA) counterfactual approach with the logic of retroduction. Drawing on the notion that QCA revolves around an ongoing dialogue between data and theory, they show that the minimization of complex solutions in QCA consists of hypothesising, in a plausible fashion, the effect(s) of unobserved configurations of causal conditions. Accordingly, they argue that plausibility should become a substitute for validity and that, by adding theory or speculative thought to empirical data, retroduction provides a compelling account of the synchronous effects stemming from autonomous technologies.

In the third paper, Zou et al. (2023) note that trust effectively mitigates customer uncertainty in initial purchases and plays an important role in customer retention. Despite decades of development, the effect of trust on repurchase behaviour has not yet received sufficient scholarly attention, not least because some studies have settled on an oversimplified diminishing effect. The authors develop a nuanced understanding of the boundary conditions under which trust operates and investigate the role of institutional contexts in online repurchase behaviour. They confirm their hypotheses in two studies, of e-commerce and of mobile banking, using the latent moderated structural (LMS) equations approach.

In the fourth paper, Herterich et al. (2023) report on a case study of four companies that participate in an industrial smart service ecosystem. Taking an affordance-theoretic perspective, the authors uncover both the antecedents of smart service ecosystems and the process by which such ecosystems emerge. They find that smart service ecosystems have three socio-technical antecedents: a shared worldview, structural flexibility and integrity, and an architecture of participation. They explain the emergence of smart service ecosystems as a consequence of specialisation in shared affordances and the integration of idiosyncratic affordances into collective affordances. They derive seven propositions regarding the emergence of smart services, outline opportunities for further research, and present practical guidelines for manufacturing firms.

In the fifth paper, Rabl et al. (2023) integrate the IS and digital entrepreneurship literatures with the intrapreneurship literature to establish whether, and under which conditions, support by different types of digital technologies enables employee intrapreneurial behaviour in organisations. Findings from a metric conjoint experiment, with 1360 decisions nested within 85 employees, show significant positive effects of support by social media, by collaborative technologies, and by intelligent decision support systems on employee intrapreneurial behaviour. However, the relative impact of support by these digital technologies varied with the level of management support for innovation and of employee intrapreneurial self-efficacy. The authors not only provide empirical evidence for digital technology support as an enabler of individual intrapreneurial behaviour but also, in line with an interactionist perspective, identify management support for innovation and employee intrapreneurial self-efficacy as important contingencies for leveraging the potential of digital technology support.

In the sixth paper, Yazdanmehr et al. (2023) observe that studies of employee responses to information security policy (ISP) demands show that stress can lead to emotion-focused coping and subsequent ISP violations. Using the Transactional Model of Stress and Coping, the authors argue that inward and outward emotion-focused coping and problem-focused coping responses to ISP demands coexist and influence ISP violations, and that the effects of security-related stress (SRS) on coping responses are contingent on ISP-related self-efficacy and organisational support. The results indicate that SRS triggers all three coping responses and that ISP-related self-efficacy and organisational support reduce the effects of SRS on inward and outward emotion-focused coping. Problem-focused coping decreases ISP violation intention, whereas inward and outward emotion-focused coping increases it.

In the seventh paper, Wakefield & Wakefield (2023) examine how affective polarisation drives responses to controversial topics on social media. Their model is built on the framework of social identity theory. Social identity and passion motivate users to distance themselves from rivals, but along non-intuitive paths: inflated feelings for the in-group, more than dislike of the out-group, drive affective polarisation. Counter to stereotypes, those with obsessive passion hold less animosity toward rivals than others who identify with the group. Contrary to the expectation that polar opposites attack one another, Twitter users with greater affective polarisation prefer to shut out rivals by muting, blocking, or unfollowing them, thereby exacerbating echo chambers.

In the eighth paper, Vial et al. (2023) study the management of AI projects in a North American AI consulting firm. They find that a combination of elements drawn from traditional project management, agile practices, and AI workflow practices enables the firm to deliver AI projects to its customers effectively. At the same time, successfully managing AI projects involves resolving conflicts among the three different logics that underpin these elements. From their case findings, the authors derive four strategies to help organisations better manage their AI projects.
