Solving the discrete logarithm problem in a finite prime field is an extremely important computing problem in modern cryptography. The hardness of solving the discrete logarithm problem in a finite prime field is the security foundation of numerous cryptography schemes. In this paper, we propose the double index calculus algorithm to solve the discrete logarithm problem in a finite prime field. Our algorithm is faster than the index calculus algorithm, which is the state-of-the-art algorithm for solving the discrete logarithm problem in a finite prime field. Empirical results indicate that our algorithm can be more than 30 times faster than the index calculus algorithm when the order of the prime field is 70 bits long. In addition, our algorithm is more general than the index calculus algorithm. Specifically, when the base of the target discrete logarithm problem is not a multiplicative generator, the index calculus algorithm may fail to solve the problem, while our algorithm still works.
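For context, the sketch below states the discrete logarithm problem concretely (given a prime p, a base g, and h = g^x mod p, recover x) and solves a toy instance with the classic baby-step giant-step method. This is a generic small-parameter baseline, not the paper's double index calculus algorithm and not the index calculus algorithm; the prime and exponent are illustrative.

    # Discrete logarithm problem in a prime field: given p, g, and h = g^x mod p, find x.
    # Baby-step giant-step baseline; NOT the double index calculus algorithm.
    from math import isqrt

    def baby_step_giant_step(g, h, p):
        m = isqrt(p - 1) + 1
        table = {pow(g, j, p): j for j in range(m)}      # baby steps: g^j for j in [0, m)
        factor = pow(g, -m, p)                           # g^(-m) mod p (Python 3.8+)
        gamma = h
        for i in range(m):                               # giant steps: h * g^(-i*m)
            if gamma in table:
                return i * m + table[gamma]              # x = i*m + j
            gamma = (gamma * factor) % p
        return None                                      # h not in the subgroup generated by g

    p, g, x = 1019, 2, 345                               # toy parameters for illustration
    h = pow(g, x, p)
    assert baby_step_giant_step(g, h, p) == x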
{"title":"Double Index Calculus Algorithm: Faster Solving Discrete Logarithm Problem in Finite Prime Field","authors":"Wen Huang, Zhishuo Zhang, Weixin Zhao, Jian Peng, Yongjian Liao, Yuyu Wang","doi":"arxiv-2409.08784","DOIUrl":"https://doi.org/arxiv-2409.08784","url":null,"abstract":"Solving the discrete logarithm problem in a finite prime field is an\u0000extremely important computing problem in modern cryptography. The hardness of\u0000solving the discrete logarithm problem in a finite prime field is the security\u0000foundation of numerous cryptography schemes. In this paper, we propose the\u0000double index calculus algorithm to solve the discrete logarithm problem in a\u0000finite prime field. Our algorithm is faster than the index calculus algorithm,\u0000which is the state-of-the-art algorithm for solving the discrete logarithm\u0000problem in a finite prime field. Empirical experiment results indicate that our\u0000algorithm could be more than a 30-fold increase in computing speed than the\u0000index calculus algorithm when the bit length of the order of prime field is 70\u0000bits. In addition, our algorithm is more general than the index calculus\u0000algorithm. Specifically, when the base of the target discrete logarithm problem\u0000is not the multiplication generator, the index calculus algorithm may fail to\u0000solve the discrete logarithm problem while our algorithm still can work.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Faith in the US electoral system is at risk. This issue stems from trust, or the lack thereof. Poor leaders have ranted, attempted to sow discord in the democratic process, and even tried to influence election results. Historically, the US has relied on paper ballots to cast private votes. Votes are watered down by the Electoral College. Elections are contested over voter IDs and proof of citizenship. Methods of voting are nonsensically complex. In the technology age, this can be addressed with a Smartcard National ID backed by Public-Key Infrastructure (PKI). This could be a method to restore hope in democracy and move the country back towards elections under a Popular Vote. Numbers are empirical and immutable and can address the issue of Election Security in a bipartisan way. NATO allies like Estonia have already broken ground in using technology for eDemocracy and Internet-based voting (iVoting). Acknowledging that cyber attacks will happen, this is an opportunity for DHS and DOD (CYBERCOM) to collaborate on domestic operations and protect critical election infrastructure. This idea will not fix malicious information operations or civil stupidity. However, this is the way forward to securing elections now and forever. The views expressed in this whitepaper are those of the author and do not reflect the official policy or position of Dakota State University, the N.H. Army National Guard, the U.S. Army, the Department of Defense, or the U.S. Government. Cleared for release by DOPSR on 13 SEP 2024.
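As a rough illustration of the kind of PKI-backed workflow the author argues for, and not a design taken from the whitepaper, the sketch below signs a ballot with a key standing in for a voter's smartcard credential and verifies it with the corresponding public key. It assumes the third-party Python "cryptography" package; all identifiers and the ballot format are hypothetical.

    # Rough sketch of PKI-style ballot signing and verification; not from the whitepaper.
    # Requires the third-party "cryptography" package (pip install cryptography).
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # In the envisioned scheme the private key would live on the voter's Smartcard
    # National ID, with the public key bound to the voter by a national CA.
    smartcard_key = Ed25519PrivateKey.generate()        # stands in for the smartcard key
    public_key = smartcard_key.public_key()             # published through the PKI

    ballot = b"election=2024-general;choice=candidate-A;ballot-id=123e4567"
    signature = smartcard_key.sign(ballot)              # casting: the voter signs the ballot

    # Tally side: anyone holding the voter's certificate can check ballot integrity.
    try:
        public_key.verify(signature, ballot)
        print("ballot accepted")
    except InvalidSignature:
        print("ballot rejected")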
{"title":"National Treasure: The Call for e-Democracy and US Election Security","authors":"Adam Dorian Wong","doi":"arxiv-2409.08952","DOIUrl":"https://doi.org/arxiv-2409.08952","url":null,"abstract":"Faith in the US electoral system is at risk. This issue stems from trust or\u0000lack thereof. Poor leaders ranted and attempted to sew discord in the\u0000democratic process and even tried to influence election results. Historically,\u0000the US has relied on paper ballots to cast private votes. Votes are watered\u0000down by the Electoral College. Elections are contested due to voter IDs and\u0000proof of citizenship. Methods of voting are nonsensically complex. In the\u0000technology age, this can be solved with a Smartcard National ID backed by\u0000Public-Key Infrastructure (PKI). This could be a method to restore hope in\u0000democracy and move the country back towards elections under a Popular Vote.\u0000Numbers are empirical and immutable and can solve the issue of Election\u0000Security in a bipartisan way. NATO allies like Estonia have already broken\u0000ground in using technology for eDemocracy or (Internet-based) iVoting.\u0000Acknowledging cyber attacks will happen, this is an opportunity for DHS and DOD\u0000(CYBERCOM) to collaborate on domestic operations and protect critical election\u0000infrastructure. This idea will not fix malicious information operations or\u0000civil stupidity. However, this is the way forward to securing elections now and\u0000forever. The views expressed by this whitepaper are those of the author and do\u0000not reflect the official policy or position of Dakota State University, the\u0000N.H. Army National Guard, the U.S. Army, the Department of Defense, or the U.S.\u0000Government. Cleared for release by DOPSR on 13 SEP 2024.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"47 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cybersecurity software tool evaluation is difficult due to the inherently adversarial nature of the field. A penetration testing (or offensive) tool must be tested against a viable defensive adversary, and a defensive tool must, similarly, be tested against a viable offensive adversary. Characterizing a tool's performance therefore depends on the quality of the adversary, which can vary from test to test. This paper proposes the use of a 'perfect' network, representing computing systems, a network, and the attack pathways through it, as a methodology for testing cybersecurity decision-making tools. This facilitates testing by providing a known and consistent standard for comparison. It also allows testing to include researcher-selected levels of error, noise, and uncertainty, so that cybersecurity tools can be evaluated under these experimental conditions.
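A minimal sketch of what such a 'perfect' reference network might look like is shown below: the topology and attack pathways are fixed ground truth, and researcher-selected error and noise are injected only into the view presented to the tool under test. The structure, rates, and names are assumptions for illustration, not details from the paper.

    # Illustrative 'perfect' network: nodes, edges, and attack pathways are fixed
    # ground truth; noise is added only to what the evaluated tool observes.
    import random

    ground_truth = {
        "hosts": ["web", "app", "db", "workstation"],
        "edges": [("web", "app"), ("app", "db"), ("workstation", "app")],
        "attack_paths": [["web", "app", "db"]],          # known-correct pathway
    }

    def observed_view(truth, false_negative_rate=0.1, false_positive_rate=0.05, seed=0):
        """Noisy view presented to the cybersecurity decision-making tool under test."""
        rng = random.Random(seed)
        edges = [e for e in truth["edges"] if rng.random() > false_negative_rate]
        for a in truth["hosts"]:                         # spurious edges model observation error
            for b in truth["hosts"]:
                if a != b and rng.random() < false_positive_rate:
                    edges.append((a, b))
        return {"hosts": list(truth["hosts"]), "edges": edges}

    view = observed_view(ground_truth, false_negative_rate=0.2)
    # The tool's inferred attack paths are scored against ground_truth["attack_paths"],
    # which stays constant across tests and tools, giving a consistent standard.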
{"title":"Cybersecurity Software Tool Evaluation Using a 'Perfect' Network Model","authors":"Jeremy Straub","doi":"arxiv-2409.09175","DOIUrl":"https://doi.org/arxiv-2409.09175","url":null,"abstract":"Cybersecurity software tool evaluation is difficult due to the inherently\u0000adversarial nature of the field. A penetration testing (or offensive) tool must\u0000be tested against a viable defensive adversary and a defensive tool must,\u0000similarly, be tested against a viable offensive adversary. Characterizing the\u0000tool's performance inherently depends on the quality of the adversary, which\u0000can vary from test to test. This paper proposes the use of a 'perfect' network,\u0000representing computing systems, a network and the attack pathways through it as\u0000a methodology to use for testing cybersecurity decision-making tools. This\u0000facilitates testing by providing a known and consistent standard for\u0000comparison. It also allows testing to include researcher-selected levels of\u0000error, noise and uncertainty to evaluate cybersecurity tools under these\u0000experimental conditions.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantum computing poses a significant global threat to today's security mechanisms. As a result, security experts and public sectors have issued guidelines to help organizations migrate their software to post-quantum cryptography (PQC). Despite these efforts, there is a lack of (semi-)automatic tools to support this transition, especially when software is used and deployed as binary executables. To address this gap, we first propose a set of requirements necessary for a tool to detect quantum-vulnerable software executables. Following these requirements, we introduce QED: a toolchain for Quantum-vulnerable Executable Detection. QED uses a three-phase approach to identify quantum-vulnerable dependencies in a given set of executables, proceeding from file-level to API-level analysis and, finally, to the precise identification of a static trace that triggers a quantum-vulnerable API. We evaluate QED on both a synthetic dataset with four cryptography libraries and a real-world dataset with over 200 software executables. The results demonstrate that: (1) QED discerns quantum-vulnerable from quantum-safe executables with 100% accuracy on the synthetic dataset; (2) QED is practical and scalable, completing analyses on average in less than 4 seconds per real-world executable; and (3) QED reduces the manual workload required by analysts to identify quantum-vulnerable executables in the real-world dataset by more than 90%. We hope that QED can become a crucial tool to facilitate the transition to PQC, particularly for small and medium-sized businesses with limited resources.
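The abstract does not spell out QED's internals, so the following is only a hedged sketch in the spirit of its first, file-level phase: flagging executables whose imported symbols reference classical public-key primitives. The symbol hints and the use of binutils' nm are assumptions, not the toolchain's actual implementation.

    # Hedged sketch of a file-level scan for quantum-vulnerable dependencies,
    # loosely in the spirit of QED's first phase; hint list and tooling are assumptions.
    import subprocess

    QUANTUM_VULNERABLE_HINTS = ("RSA_", "ECDSA_", "EC_KEY", "DH_", "DSA_")  # classical PK APIs

    def imported_symbols(path):
        """Undefined dynamic symbols of an ELF binary, listed via binutils' nm."""
        out = subprocess.run(["nm", "--dynamic", "--undefined-only", path],
                             capture_output=True, text=True, check=False)
        return out.stdout.splitlines()

    def looks_quantum_vulnerable(path):
        hits = [line for line in imported_symbols(path)
                if any(hint in line for hint in QUANTUM_VULNERABLE_HINTS)]
        return hits   # non-empty => candidate for the deeper API-level and trace phases

    # Example: looks_quantum_vulnerable("/usr/bin/ssh")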
{"title":"A Toolchain for Assisting Migration of Software Executables Towards Post-Quantum Crytography","authors":"Norrathep Rattanavipanon, Jakapan Suaboot, Warodom Werapun","doi":"arxiv-2409.07852","DOIUrl":"https://doi.org/arxiv-2409.07852","url":null,"abstract":"Quantum computing poses a significant global threat to today's security\u0000mechanisms. As a result, security experts and public sectors have issued\u0000guidelines to help organizations migrate their software to post-quantum\u0000cryptography (PQC). Despite these efforts, there is a lack of (semi-)automatic\u0000tools to support this transition especially when software is used and deployed\u0000as binary executables. To address this gap, in this work, we first propose a\u0000set of requirements necessary for a tool to detect quantum-vulnerable software\u0000executables. Following these requirements, we introduce QED: a toolchain for\u0000Quantum-vulnerable Executable Detection. QED uses a three-phase approach to\u0000identify quantum-vulnerable dependencies in a given set of executables, from\u0000file-level to API-level, and finally, precise identification of a static trace\u0000that triggers a quantum-vulnerable API. We evaluate QED on both a synthetic\u0000dataset with four cryptography libraries and a real-world dataset with over 200\u0000software executables. The results demonstrate that: (1) QED discerns\u0000quantum-vulnerable from quantum-safe executables with 100% accuracy in the\u0000synthetic dataset; (2) QED is practical and scalable, completing analyses on\u0000average in less than 4 seconds per real-world executable; and (3) QED reduces\u0000the manual workload required by analysts to identify quantum-vulnerable\u0000executables in the real-world dataset by more than 90%. We hope that QED can\u0000become a crucial tool to facilitate the transition to PQC, particularly for\u0000small and medium-sized businesses with limited resources.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Twitter is recognized as a crucial platform for the dissemination and gathering of Cyber Threat Intelligence (CTI). Its capability to provide real-time, actionable intelligence makes it an indispensable tool for detecting security events, helping security professionals cope with ever-growing threats. However, the large volume of tweets and the inherent noise of human-crafted tweets pose significant challenges to accurately identifying security events. While many studies have tried to filter out event-related tweets based on keywords, such approaches are not effective because they fail to capture the semantics of tweets. Another challenge in security event detection from Twitter is the comprehensive coverage of security events. Previous studies emphasized the importance of early detection of security events but overlooked the importance of event coverage. To cope with these challenges, we introduce a novel event attribution-centric tweet embedding method that enables both high precision and high coverage of events. Our experimental results show that the proposed method outperforms existing text- and graph-based tweet embedding methods in identifying security events. Leveraging this novel embedding approach, we have developed and implemented Tweezers, a framework for security event detection from Twitter for CTI gathering. The framework has demonstrated its effectiveness, detecting twice as many events as established baselines. Additionally, we showcase two applications built on Tweezers for the integration and inspection of security events: security event trend analysis and informative security-user identification.
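As a rough illustration of the general pipeline (embed tweets, then group nearby embeddings into candidate events), and not the paper's event attribution-centric embedding, the sketch below uses an off-the-shelf sentence embedder and density-based clustering; the model name and clustering thresholds are assumptions.

    # Generic tweet-embedding + clustering sketch for event detection; NOT the
    # event attribution-centric method proposed in the paper.
    from sentence_transformers import SentenceTransformer   # pip install sentence-transformers
    from sklearn.cluster import DBSCAN

    tweets = [
        "New VPN appliance zero-day exploited in the wild, patch immediately",
        "Patch now: VPN appliance zero-day under active exploitation",
        "My cat learned how to open doors",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")          # generic text embedder
    embeddings = model.encode(tweets, normalize_embeddings=True)

    # Tweets whose embeddings are close in cosine distance form a candidate event;
    # label -1 marks noise, such as the off-topic tweet above.
    labels = DBSCAN(eps=0.4, min_samples=2, metric="cosine").fit_predict(embeddings)
    print(list(zip(labels, tweets)))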
{"title":"Tweezers: A Framework for Security Event Detection via Event Attribution-centric Tweet Embedding","authors":"Jian Cui, Hanna Kim, Eugene Jang, Dayeon Yim, Kicheol Kim, Yongjae Lee, Jin-Woo Chung, Seungwon Shin, Xiaojing Liao","doi":"arxiv-2409.08221","DOIUrl":"https://doi.org/arxiv-2409.08221","url":null,"abstract":"Twitter is recognized as a crucial platform for the dissemination and\u0000gathering of Cyber Threat Intelligence (CTI). Its capability to provide\u0000real-time, actionable intelligence makes it an indispensable tool for detecting\u0000security events, helping security professionals cope with ever-growing threats.\u0000However, the large volume of tweets and inherent noises of human-crafted tweets\u0000pose significant challenges in accurately identifying security events. While\u0000many studies tried to filter out event-related tweets based on keywords, they\u0000are not effective due to their limitation in understanding the semantics of\u0000tweets. Another challenge in security event detection from Twitter is the\u0000comprehensive coverage of security events. Previous studies emphasized the\u0000importance of early detection of security events, but they overlooked the\u0000importance of event coverage. To cope with these challenges, in our study, we\u0000introduce a novel event attribution-centric tweet embedding method to enable\u0000the high precision and coverage of events. Our experiment result shows that the\u0000proposed method outperforms existing text and graph-based tweet embedding\u0000methods in identifying security events. Leveraging this novel embedding\u0000approach, we have developed and implemented a framework, Tweezers, that is\u0000applicable to security event detection from Twitter for CTI gathering. This\u0000framework has demonstrated its effectiveness, detecting twice as many events\u0000compared to established baselines. Additionally, we have showcased two\u0000applications, built on Tweezers for the integration and inspection of security\u0000events, i.e., security event trend analysis and informative security user\u0000identification.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"19 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To address the challenges of internal security policy compliance and dynamic threat response in organizations, we present a novel framework that integrates artificial intelligence (AI), blockchain, and smart contracts. We propose a system that automates the enforcement of security policies, reducing manual effort and potential human error. Utilizing AI, we can rapidly analyse cyber threat intelligence, identify non-compliance, and automatically adjust cyber defence mechanisms. Blockchain technology provides an immutable ledger for transparent logging of compliance actions, while smart contracts ensure uniform application of security measures. The framework's effectiveness is demonstrated through simulations, showing improvements in compliance enforcement rates and response times compared to traditional methods. Ultimately, our approach provides a scalable solution for managing complex security policies, reducing costs and enhancing efficiency while achieving compliance. Finally, we discuss practical implications and propose future research directions to further refine the system and address implementation challenges.
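A minimal sketch of one building block of such a framework is given below: an append-only, hash-chained compliance log that mimics the immutability the authors obtain from a blockchain ledger. It is an illustration under stated assumptions, not the paper's implementation, and the logged events are invented.

    # Append-only, hash-chained compliance log standing in for an immutable ledger;
    # not the paper's implementation, and the sample events are invented.
    import hashlib, json, time

    class ComplianceLog:
        def __init__(self):
            self.entries = []

        def append(self, action, detail):
            prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
            record = {"ts": time.time(), "action": action, "detail": detail, "prev": prev_hash}
            record["hash"] = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()
            self.entries.append(record)
            return record

        def verify(self):
            """Recompute the chain; tampering with any earlier entry breaks verification."""
            prev = "0" * 64
            for rec in self.entries:
                body = {k: v for k, v in rec.items() if k != "hash"}
                if rec["prev"] != prev or rec["hash"] != hashlib.sha256(
                        json.dumps(body, sort_keys=True).encode()).hexdigest():
                    return False
                prev = rec["hash"]
            return True

    log = ComplianceLog()
    log.append("policy_violation_detected", {"host": "app-01", "rule": "TLS<1.2"})
    log.append("auto_remediation", {"host": "app-01", "change": "enforce TLS 1.3"})
    assert log.verify()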
{"title":"Automated Cybersecurity Compliance and Threat Response Using AI, Blockchain & Smart Contracts","authors":"Lampis Alevizos, Vinh Thong Ta","doi":"arxiv-2409.08390","DOIUrl":"https://doi.org/arxiv-2409.08390","url":null,"abstract":"To address the challenges of internal security policy compliance and dynamic\u0000threat response in organizations, we present a novel framework that integrates\u0000artificial intelligence (AI), blockchain, and smart contracts. We propose a\u0000system that automates the enforcement of security policies, reducing manual\u0000effort and potential human error. Utilizing AI, we can analyse cyber threat\u0000intelligence rapidly, identify non-compliances and automatically adjust cyber\u0000defence mechanisms. Blockchain technology provides an immutable ledger for\u0000transparent logging of compliance actions, while smart contracts ensure uniform\u0000application of security measures. The framework's effectiveness is demonstrated\u0000through simulations, showing improvements in compliance enforcement rates and\u0000response times compared to traditional methods. Ultimately, our approach\u0000provides for a scalable solution for managing complex security policies,\u0000reducing costs and enhancing the efficiency while achieving compliance.\u0000Finally, we discuss practical implications and propose future research\u0000directions to further refine the system and address implementation challenges.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"94 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we show that with the ability to jailbreak a GenAI model, attackers can escalate the outcome of attacks against RAG-based GenAI-powered applications in both severity and scale. In the first part of the paper, we show that attackers can escalate RAG membership inference attacks and RAG entity extraction attacks into RAG document extraction attacks, forcing a more severe outcome than existing attacks. We evaluate the results obtained from three extraction methods, the influence of the type and size of the five embedding algorithms employed, the size of the provided context, and the GenAI engine. We show that attackers can extract 80%-99.8% of the data stored in the database used by the RAG of a Q&A chatbot. In the second part of the paper, we show that attackers can escalate the scale of RAG data poisoning attacks from compromising a single GenAI-powered application to compromising the entire GenAI ecosystem, forcing a greater scale of damage. This is done by crafting an adversarial self-replicating prompt that triggers a chain reaction of a computer worm within the ecosystem, forcing each affected application to perform a malicious activity and compromise the RAG of additional applications. We evaluate the performance of the worm in creating a chain of confidential data extraction about users within a GenAI ecosystem of GenAI-powered email assistants, and we analyze how the performance of the worm is affected by the size of the context, the adversarial self-replicating prompt used, the type and size of the embedding algorithm employed, and the number of hops in the propagation. Finally, we review and analyze guardrails to protect RAG-based inference and discuss the tradeoffs.
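To ground what "the database used by the RAG" refers to, the sketch below shows a bare-bones retrieval step: a private document store is ranked against a query and the top result is pasted into the model's prompt. The similarity function is a crude stand-in for embedding similarity, the documents are invented, and no attack logic from the paper is reproduced.

    # Bare-bones RAG retrieval sketch, only to show which data store a document
    # extraction attack targets; similarity is a crude stand-in, no attack code here.
    documents = [
        "Internal memo: quarterly revenue projections for FY2025 ...",
        "HR handbook: employee onboarding checklist ...",
    ]   # this private store is what RAG document extraction attacks try to recover

    def similarity(a, b):
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / max(1, len(ta | tb))       # token overlap, not real embeddings

    def retrieve(query, k=1):
        return sorted(documents, key=lambda d: similarity(query, d), reverse=True)[:k]

    # Retrieved documents are pasted verbatim into the GenAI prompt as context,
    # which is why a jailbroken model can be coaxed into leaking them back.
    context = retrieve("What are the revenue projections?")
    prompt = "Answer using only this context:\n" + "\n".join(context) + "\nQ: ..."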
{"title":"Unleashing Worms and Extracting Data: Escalating the Outcome of Attacks against RAG-based Inference in Scale and Severity Using Jailbreaking","authors":"Stav Cohen, Ron Bitton, Ben Nassi","doi":"arxiv-2409.08045","DOIUrl":"https://doi.org/arxiv-2409.08045","url":null,"abstract":"In this paper, we show that with the ability to jailbreak a GenAI model,\u0000attackers can escalate the outcome of attacks against RAG-based GenAI-powered\u0000applications in severity and scale. In the first part of the paper, we show\u0000that attackers can escalate RAG membership inference attacks and RAG entity\u0000extraction attacks to RAG documents extraction attacks, forcing a more severe\u0000outcome compared to existing attacks. We evaluate the results obtained from\u0000three extraction methods, the influence of the type and the size of five\u0000embeddings algorithms employed, the size of the provided context, and the GenAI\u0000engine. We show that attackers can extract 80%-99.8% of the data stored in the\u0000database used by the RAG of a Q&A chatbot. In the second part of the paper, we\u0000show that attackers can escalate the scale of RAG data poisoning attacks from\u0000compromising a single GenAI-powered application to compromising the entire\u0000GenAI ecosystem, forcing a greater scale of damage. This is done by crafting an\u0000adversarial self-replicating prompt that triggers a chain reaction of a\u0000computer worm within the ecosystem and forces each affected application to\u0000perform a malicious activity and compromise the RAG of additional applications.\u0000We evaluate the performance of the worm in creating a chain of confidential\u0000data extraction about users within a GenAI ecosystem of GenAI-powered email\u0000assistants and analyze how the performance of the worm is affected by the size\u0000of the context, the adversarial self-replicating prompt used, the type and size\u0000of the embeddings algorithm employed, and the number of hops in the\u0000propagation. Finally, we review and analyze guardrails to protect RAG-based\u0000inference and discuss the tradeoffs.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Large Language Models (LLMs) demonstrate impressive capabilities across various fields, yet their increasing use raises critical security concerns. This article reviews recent literature addressing key issues in LLM security, with a focus on accuracy, bias, content detection, and vulnerability to attacks. Issues related to inaccurate or misleading outputs from LLMs are discussed, with emphasis on the use of fact-checking methodologies to enhance response reliability. Inherent biases within LLMs are critically examined through diverse evaluation techniques, including controlled input studies and red teaming exercises. A comprehensive analysis of bias mitigation strategies is presented, covering approaches ranging from pre-processing interventions to in-training adjustments and post-processing refinements. The article also probes the complexity of distinguishing LLM-generated content from human-produced text, introducing detection mechanisms such as DetectGPT and watermarking techniques while noting the limitations of machine-learning-based classifiers under intricate circumstances. Moreover, LLM vulnerabilities, including jailbreak attacks and prompt injection exploits, are analyzed through case studies and large-scale competitions such as HackAPrompt. The review concludes with a retrospective of defense mechanisms to safeguard LLMs, accentuating the need for more extensive research in the field of LLM security.
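For concreteness, the core intuition behind a DetectGPT-style detector mentioned above is that machine-generated text tends to sit near a local maximum of the generating model's log-likelihood, so perturbed rewrites score noticeably lower. The sketch below assumes hypothetical log_prob and perturb functions; neither is a real library API.

    # Sketch of the DetectGPT-style curvature test; log_prob() and perturb() are
    # hypothetical stand-ins for a scoring model and a paraphrasing perturber.
    def detectgpt_score(text, log_prob, perturb, n_perturbations=20):
        """Larger scores suggest machine-generated text (a local likelihood peak)."""
        original = log_prob(text)
        perturbed = [log_prob(perturb(text)) for _ in range(n_perturbations)]
        return original - sum(perturbed) / len(perturbed)

    # Thresholding detectgpt_score() on labeled data yields a classifier; as the
    # review notes, such classifiers degrade under paraphrasing and other
    # intricate circumstances.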
{"title":"Securing Large Language Models: Addressing Bias, Misinformation, and Prompt Attacks","authors":"Benji Peng, Keyu Chen, Ming Li, Pohsun Feng, Ziqian Bi, Junyu Liu, Qian Niu","doi":"arxiv-2409.08087","DOIUrl":"https://doi.org/arxiv-2409.08087","url":null,"abstract":"Large Language Models (LLMs) demonstrate impressive capabilities across\u0000various fields, yet their increasing use raises critical security concerns.\u0000This article reviews recent literature addressing key issues in LLM security,\u0000with a focus on accuracy, bias, content detection, and vulnerability to\u0000attacks. Issues related to inaccurate or misleading outputs from LLMs is\u0000discussed, with emphasis on the implementation from fact-checking methodologies\u0000to enhance response reliability. Inherent biases within LLMs are critically\u0000examined through diverse evaluation techniques, including controlled input\u0000studies and red teaming exercises. A comprehensive analysis of bias mitigation\u0000strategies is presented, including approaches from pre-processing interventions\u0000to in-training adjustments and post-processing refinements. The article also\u0000probes the complexity of distinguishing LLM-generated content from\u0000human-produced text, introducing detection mechanisms like DetectGPT and\u0000watermarking techniques while noting the limitations of machine learning\u0000enabled classifiers under intricate circumstances. Moreover, LLM\u0000vulnerabilities, including jailbreak attacks and prompt injection exploits, are\u0000analyzed by looking into different case studies and large-scale competitions\u0000like HackAPrompt. This review is concluded by retrospecting defense mechanisms\u0000to safeguard LLMs, accentuating the need for more extensive research into the\u0000LLM security field.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"31 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a novel approach to deter unauthorized deepfakes and enable user tracking in generative models, even when the user has full access to the model parameters, by integrating key-based model authentication with watermarking techniques. Our method involves providing users with model parameters accompanied by a unique, user-specific key. During inference, the model is conditioned on the key along with the standard input. A valid key results in the expected output, while an invalid key triggers a degraded output, thereby enforcing key-based model authentication. For user tracking, the model embeds the user's unique key as a watermark within the generated content, facilitating identification of the user. We demonstrate the effectiveness of our approach on two types of models, audio codecs and vocoders, using the SilentCipher watermarking method. Additionally, we assess the robustness of the embedded watermarks against a range of distortions, validating their reliability in various scenarios.
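The toy sketch below illustrates only the key-gating idea described above: a valid user-specific key yields normal output, an invalid key yields a degraded one. The HMAC check and noise injection are illustrative assumptions and are not the paper's conditioning mechanism or the SilentCipher watermark.

    # Toy key-gated inference: valid key -> normal output, invalid key -> degraded.
    # The HMAC gate and added noise are assumptions, not the paper's mechanism.
    import hmac, hashlib
    import numpy as np

    MODEL_SECRET = b"conceptually-baked-into-the-model"

    def issue_key(user_id):
        return hmac.new(MODEL_SECRET, user_id.encode(), hashlib.sha256).hexdigest()

    def generate_audio(x, user_id, key):
        y = np.tanh(x)                                    # stand-in for the real decoder
        expected = issue_key(user_id)
        if not hmac.compare_digest(key, expected):
            y = y + np.random.default_rng(0).normal(0.0, 0.5, y.shape)   # degraded output
        # In the paper, the valid-key path also embeds the user's key as a watermark
        # in the generated content for tracking; that step is omitted here.
        return y

    x = np.linspace(-1.0, 1.0, 16000)
    good = generate_audio(x, "alice", issue_key("alice"))
    bad = generate_audio(x, "alice", "wrong-key")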
{"title":"LOCKEY: A Novel Approach to Model Authentication and Deepfake Tracking","authors":"Mayank Kumar Singh, Naoya Takahashi, Wei-Hsiang Liao, Yuki Mitsufuji","doi":"arxiv-2409.07743","DOIUrl":"https://doi.org/arxiv-2409.07743","url":null,"abstract":"This paper presents a novel approach to deter unauthorized deepfakes and\u0000enable user tracking in generative models, even when the user has full access\u0000to the model parameters, by integrating key-based model authentication with\u0000watermarking techniques. Our method involves providing users with model\u0000parameters accompanied by a unique, user-specific key. During inference, the\u0000model is conditioned upon the key along with the standard input. A valid key\u0000results in the expected output, while an invalid key triggers a degraded\u0000output, thereby enforcing key-based model authentication. For user tracking,\u0000the model embeds the user's unique key as a watermark within the generated\u0000content, facilitating the identification of the user's ID. We demonstrate the\u0000effectiveness of our approach on two types of models, audio codecs and\u0000vocoders, utilizing the SilentCipher watermarking method. Additionally, we\u0000assess the robustness of the embedded watermarks against various distortions,\u0000validating their reliability in various scenarios.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-fungible tokens (NFTs) offer a unique method for representing digital and physical assets on the blockchain. However, the NFT market has recently experienced a downturn in interest, mainly due to challenges related to high entry barriers and limited market liquidity. Fractionalization emerges as a promising solution, allowing multiple parties to hold a stake in a single NFT. By breaking down ownership into fractional shares, this approach lowers the entry barrier for investors, enhances market liquidity, and democratizes access to valuable digital assets. Despite these benefits, the current landscape of NFT fractionalization is fragmented, with no standardized framework to guide the secure and interoperable implementation of fractionalization mechanisms. This paper's contributions are twofold: first, we provide a detailed analysis of the current NFT fractionalization landscape with a focus on security challenges; second, we introduce a standardized approach that addresses these challenges, paving the way for more secure, interoperable, and accessible NFT fractionalization platforms.
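To make "fractional shares of a single NFT" concrete, the sketch below follows the common vault pattern: the NFT is locked and fungible share balances are tracked per holder. It is an illustration of the general mechanism under assumed names, not the standardized framework the paper proposes.

    # Minimal vault-style fractionalization sketch: lock one NFT, track share balances.
    # Illustrative only; not the standard proposed in the paper.
    from dataclasses import dataclass, field

    @dataclass
    class FractionalVault:
        nft_id: str
        total_shares: int
        balances: dict = field(default_factory=dict)

        @classmethod
        def lock(cls, nft_id, owner, total_shares):
            vault = cls(nft_id, total_shares)
            vault.balances[owner] = total_shares          # depositor receives all shares
            return vault

        def transfer(self, sender, receiver, amount):
            if self.balances.get(sender, 0) < amount:
                raise ValueError("insufficient shares")
            self.balances[sender] -= amount
            self.balances[receiver] = self.balances.get(receiver, 0) + amount

    vault = FractionalVault.lock("CryptoArt#42", owner="alice", total_shares=1_000)
    vault.transfer("alice", "bob", 250)                   # bob now holds a 25% stake
    # Buyout rules, custody, and cross-platform interoperability are the security-
    # relevant gaps a standardized framework would need to address.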
{"title":"A Secure Standard for NFT Fractionalization","authors":"Wejdene Haouari, Marios Fokaefs","doi":"arxiv-2409.08190","DOIUrl":"https://doi.org/arxiv-2409.08190","url":null,"abstract":"Non-fungible tokens (NFTs) offer a unique method for representing digital and\u0000physical assets on the blockchain. However, the NFT market has recently\u0000experienced a downturn in interest, mainly due to challenges related to high\u0000entry barriers and limited market liquidity. Fractionalization emerges as a\u0000promising solution, allowing multiple parties to hold a stake in a single NFT.\u0000By breaking down ownership into fractional shares, this approach lowers the\u0000entry barrier for investors, enhances market liquidity, and democratizes access\u0000to valuable digital assets. Despite these benefits, the current landscape of\u0000NFT fractionalization is fragmented, with no standardized framework to guide\u0000the secure and interoperable implementation of fractionalization mechanisms.\u0000This paper contributions are twofold: first, we provide a detailed analysis of\u0000the current NFT fractionalization landscape focusing on security challenges;\u0000second, we introduce a standardized approach that addresses these challenges,\u0000paving the way for more secure, interoperable, and accessible NFT\u0000fractionalization platforms.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}