{"title":"人工智能驱动的虚假信息:组织准备和应对的框架","authors":"Elise Karinshak, Yan Jin","doi":"10.1108/jcom-09-2022-0113","DOIUrl":null,"url":null,"abstract":"PurposeDisinformation, false information designed with the intention to mislead, can significantly damage organizational operation and reputation, interfering with communication and relationship management in a wide breadth of risk and crisis contexts. Modern digital platforms and emerging technologies, including artificial intelligence (AI), introduce novel risks in crisis management (Guthrie and Rich, 2022). Disinformation literature in security and computer science has assessed how previously introduced technologies have affected disinformation, demanding a systematic and coordinated approach for sustainable counter-disinformation efforts. However, there is a lack of theory-driven, evidence-based research and practice in public relations that advises how organizations can effectively and proactively manage risks and crises driven by AI (Guthrie and Rich, 2022).Design/methodology/approachAs a first step in closing this research-practice gap, the authors first synthesize theoretical and technical literature characterizing the effects of AI on disinformation. Upon this review, the authors propose a conceptual framework for disinformation response in the corporate sector that assesses (1) technologies affecting disinformation attacks and counterattacks and (2) how organizations can proactively prepare and equip communication teams to better protect businesses and stakeholders.FindingsThis research illustrates that future disinformation response efforts will not be able to rely solely on detection strategies, as AI-created content quality becomes more and more convincing (and ultimately, indistinguishable), and that future disinformation management efforts will need to rely on content influence rather than volume (due to emerging capabilities for automated production of disinformation). Built upon these fundamental, literature-driven characteristics, the framework provides organizations actor-level and content-level perspectives for influence and discusses their implications for disinformation management.Originality/valueThis research provides a theoretical basis and practitioner insights by anticipating how AI technologies will impact corporate disinformation attacks and outlining how companies can respond. The proposed framework provides a theory-driven, practical approach for effective, proactive disinformation management systems with the capacity and agility to detect risks and mitigate crises driven by evolving AI technologies. Together, this framework and the discussed strategies offer great value to forward-looking disinformation management efforts. 
Subsequent research can build upon this framework as AI technologies are deployed in disinformation campaigns, and practitioners can leverage this framework in the development of counter-disinformation efforts.","PeriodicalId":51660,"journal":{"name":"Journal of Communication Management","volume":" ","pages":""},"PeriodicalIF":3.1000,"publicationDate":"2023-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AI-driven disinformation: a framework for organizational preparation and response\",\"authors\":\"Elise Karinshak, Yan Jin\",\"doi\":\"10.1108/jcom-09-2022-0113\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"PurposeDisinformation, false information designed with the intention to mislead, can significantly damage organizational operation and reputation, interfering with communication and relationship management in a wide breadth of risk and crisis contexts. Modern digital platforms and emerging technologies, including artificial intelligence (AI), introduce novel risks in crisis management (Guthrie and Rich, 2022). Disinformation literature in security and computer science has assessed how previously introduced technologies have affected disinformation, demanding a systematic and coordinated approach for sustainable counter-disinformation efforts. However, there is a lack of theory-driven, evidence-based research and practice in public relations that advises how organizations can effectively and proactively manage risks and crises driven by AI (Guthrie and Rich, 2022).Design/methodology/approachAs a first step in closing this research-practice gap, the authors first synthesize theoretical and technical literature characterizing the effects of AI on disinformation. Upon this review, the authors propose a conceptual framework for disinformation response in the corporate sector that assesses (1) technologies affecting disinformation attacks and counterattacks and (2) how organizations can proactively prepare and equip communication teams to better protect businesses and stakeholders.FindingsThis research illustrates that future disinformation response efforts will not be able to rely solely on detection strategies, as AI-created content quality becomes more and more convincing (and ultimately, indistinguishable), and that future disinformation management efforts will need to rely on content influence rather than volume (due to emerging capabilities for automated production of disinformation). Built upon these fundamental, literature-driven characteristics, the framework provides organizations actor-level and content-level perspectives for influence and discusses their implications for disinformation management.Originality/valueThis research provides a theoretical basis and practitioner insights by anticipating how AI technologies will impact corporate disinformation attacks and outlining how companies can respond. The proposed framework provides a theory-driven, practical approach for effective, proactive disinformation management systems with the capacity and agility to detect risks and mitigate crises driven by evolving AI technologies. Together, this framework and the discussed strategies offer great value to forward-looking disinformation management efforts. 
Subsequent research can build upon this framework as AI technologies are deployed in disinformation campaigns, and practitioners can leverage this framework in the development of counter-disinformation efforts.\",\"PeriodicalId\":51660,\"journal\":{\"name\":\"Journal of Communication Management\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2023-08-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Communication Management\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1108/jcom-09-2022-0113\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMMUNICATION\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Communication Management","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1108/jcom-09-2022-0113","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMMUNICATION","Score":null,"Total":0}
AI-driven disinformation: a framework for organizational preparation and response
Purpose
Disinformation, false information designed with the intention to mislead, can significantly damage organizational operations and reputation, interfering with communication and relationship management across a broad range of risk and crisis contexts. Modern digital platforms and emerging technologies, including artificial intelligence (AI), introduce novel risks in crisis management (Guthrie and Rich, 2022). Disinformation literature in security and computer science has assessed how previously introduced technologies have affected disinformation, demanding a systematic and coordinated approach to sustainable counter-disinformation efforts. However, public relations lacks theory-driven, evidence-based research and practice advising how organizations can effectively and proactively manage risks and crises driven by AI (Guthrie and Rich, 2022).

Design/methodology/approach
As a first step toward closing this research-practice gap, the authors synthesize theoretical and technical literature characterizing the effects of AI on disinformation. Building on this review, the authors propose a conceptual framework for disinformation response in the corporate sector that assesses (1) technologies affecting disinformation attacks and counterattacks and (2) how organizations can proactively prepare and equip communication teams to better protect businesses and stakeholders.

Findings
This research illustrates that future disinformation response efforts will not be able to rely solely on detection strategies, as AI-created content becomes increasingly convincing (and, ultimately, indistinguishable), and that future disinformation management efforts will need to rely on content influence rather than content volume (owing to emerging capabilities for automated production of disinformation). Built upon these fundamental, literature-driven characteristics, the framework provides organizations with actor-level and content-level perspectives on influence and discusses their implications for disinformation management.

Originality/value
This research provides a theoretical basis and practitioner insights by anticipating how AI technologies will impact corporate disinformation attacks and outlining how companies can respond. The proposed framework offers a theory-driven, practical approach to effective, proactive disinformation management systems with the capacity and agility to detect risks and mitigate crises driven by evolving AI technologies. Together, the framework and the discussed strategies offer substantial value to forward-looking disinformation management efforts. Subsequent research can build upon this framework as AI technologies are deployed in disinformation campaigns, and practitioners can leverage it in developing counter-disinformation efforts.
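To make the Findings concrete, the following is a minimal, illustrative Python sketch, not drawn from the article, contrasting a detection-only response with the influence-oriented triage the framework argues for. The training texts, the reach figures, and the prioritize_by_influence helper are hypothetical, and the surface-level classifier is exactly the kind of detection strategy the authors expect to degrade as AI-generated content becomes indistinguishable from human-written text.

```python
# Illustrative sketch only (not the authors' method): a surface-level
# disinformation detector plus an influence-based triage step, assuming scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples (1 = disinformation, 0 = benign).
train_texts = [
    "BREAKING: company X secretly dumps toxic waste, insiders confirm",
    "Leaked memo proves the CEO plans to fire half the workforce tomorrow",
    "Company X publishes its quarterly sustainability report",
    "CEO of company X to speak at the annual industry conference",
]
train_labels = [1, 1, 0, 0]

# Detection-only approach: a lexical classifier. As AI-generated text grows
# more fluent and varied, surface cues like these become unreliable.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(train_texts, train_labels)

# Incoming posts with a hypothetical reach proxy (e.g., shares or impressions).
incoming = [
    {"text": "Insiders confirm company X is dumping toxic waste again", "reach": 48_000},
    {"text": "Rumor: company X memo says layoffs are imminent", "reach": 900},
    {"text": "Company X announces a new community grant program", "reach": 12_000},
]

def prioritize_by_influence(posts, model):
    """Rank flagged posts by estimated influence (reach), not by raw volume."""
    scored = []
    for post in posts:
        p_disinfo = model.predict_proba([post["text"]])[0][1]
        scored.append({**post, "p_disinfo": p_disinfo,
                       "priority": p_disinfo * post["reach"]})
    return sorted(scored, key=lambda x: x["priority"], reverse=True)

for item in prioritize_by_influence(incoming, detector):
    print(f'{item["priority"]:>10.1f}  p={item["p_disinfo"]:.2f}  {item["text"]}')
```

The point of the sketch is the triage logic rather than the specific model: once automated generation makes malicious content cheap and detection scores less trustworthy, ordering responses by potential influence becomes the more durable lever for communication teams.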