
arXiv - CS - Cryptography and Security: Latest Publications

A Response to: A Note on "Privacy Preserving n-Party Scalar Product Protocol"
Pub Date: 2024-09-16 DOI: 10.48550/arXiv.2409.10057
Florian van Daalen, Lianne Ippel, Andre Dekker, Inigo Bermejo
We reply to the comments on our proposed privacy preserving n-party scalar product protocol made by Liu. In their comment, Liu raised concerns regarding the security and scalability of the $n$-party scalar product protocol. In this reply, we show that their concerns are unfounded and that the $n$-party scalar product protocol is safe for its intended purposes. Their concerns regarding the security are based on a misunderstanding of the protocol. Additionally, while the scalability of the protocol puts limitations on its use, the protocol still has numerous practical applications when applied in the correct scenarios. Specifically, within vertically partitioned scenarios, which often involve few parties, the protocol remains practical. In this reply we clarify Liu's misunderstanding. Additionally, we explain why the protocol's scaling is not a practical problem in its intended application.
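The abstract does not reproduce the protocol itself, but the general flavor of masking-based secure scalar products can be sketched with a Du–Atallah-style construction, where a semi-trusted commodity server hands out correlated randomness before the parties exchange masked vectors. Everything below (function names, the modulus choice, the message flow) is an illustrative assumption, not the authors' protocol:

```python
import random

P = 2**61 - 1  # illustrative prime modulus for masking

def dot(u, v):
    return sum(a * b for a, b in zip(u, v)) % P

def commodity_setup(n):
    # Semi-trusted commodity server: distributes correlated randomness
    # before the protocol runs and never sees the parties' inputs.
    Ra = [random.randrange(P) for _ in range(n)]
    Rb = [random.randrange(P) for _ in range(n)]
    ra = random.randrange(P)
    rb = (dot(Ra, Rb) - ra) % P
    return (Ra, ra), (Rb, rb)

def scalar_product(x, y):
    (Ra, ra), (Rb, rb) = commodity_setup(len(x))
    x_masked = [(xi + r) % P for xi, r in zip(x, Ra)]  # Alice -> Bob
    t = (dot(x_masked, y) + rb) % P                    # Bob -> Alice
    y_masked = [(yi + r) % P for yi, r in zip(y, Rb)]  # Bob -> Alice
    # Alice unmasks: t - Ra.y_masked + ra == x.y (mod P)
    return (t - dot(Ra, y_masked) + ra) % P

print(scalar_product([3, 1, 4], [1, 5, 9]))  # 44
```

Each party only ever sees the other's masked vector, and the masks cancel exactly in the final unmasking step.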
Citations: 0
Benchmarking Secure Sampling Protocols for Differential Privacy
Pub Date: 2024-09-16 DOI: 10.48550/arXiv.2409.10667
Yucheng Fu, Tianhao Wang
Differential privacy (DP) is widely employed to provide privacy protection for individuals by limiting information leakage from the aggregated data. Two well-known models of DP are the central model and the local model. The former requires a trustworthy server for data aggregation, while the latter requires individuals to add noise, significantly decreasing the utility of aggregated results. Recently, many studies have proposed to achieve DP with Secure Multi-party Computation (MPC) in distributed settings, namely, the distributed model, which has utility comparable to the central model while, under specific security assumptions, preventing parties from obtaining others' information. One challenge of realizing DP in the distributed model is efficiently sampling noise with MPC. Although many secure sampling methods have been proposed, they have different security assumptions and isolated theoretical analyses. There is a lack of experimental evaluations to measure and compare their performances. We fill this gap by benchmarking existing sampling protocols in MPC and performing comprehensive measurements of their efficiency. First, we present a taxonomy of the underlying techniques of these sampling protocols. Second, we extend widely used distributed noise generation protocols to be resilient against Byzantine attackers. Third, we implement discrete sampling protocols and align their security settings for a fair comparison. We then conduct an extensive evaluation to study their efficiency and utility.
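As a reference point for what such protocols must compute, one standard noise distribution for integer-valued DP is the discrete (two-sided geometric) Laplace distribution, which secure sampling protocols typically emulate from shared random bits. The plaintext sketch below (function names are ours, not from the paper) shows only the target distribution, not a secure MPC realization:

```python
import math
import random

def geometric(p):
    # Number of failures before the first success: Geom(p) on {0, 1, 2, ...}
    k = 0
    while random.random() >= p:
        k += 1
    return k

def discrete_laplace(eps, sensitivity=1):
    # The difference of two i.i.d. geometric draws gives the symmetric
    # two-sided geometric ("discrete Laplace") distribution used for
    # integer-valued DP mechanisms.
    p = 1 - math.exp(-eps / sensitivity)
    return geometric(p) - geometric(p)

# A DP count query: integer noise is added to the true count.
noisy_count = 123 + discrete_laplace(eps=1.0)
```

In the distributed model, the hard part is producing such samples jointly so that no single party learns the noise value — that is exactly what the benchmarked protocols differ on.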
Citations: 0
PersonaMark: Personalized LLM watermarking for model protection and user attribution
Pub Date: 2024-09-15 DOI: 10.48550/arXiv.2409.09739
Yuehan Zhang, Peizhuo Lv, Yinpeng Liu, Yongqiang Ma, Wei Lu, Xiaofeng Wang, Xiaozhong Liu, Jiawei Liu
The rapid development of LLMs brings both convenience and potential threats. As customized and private LLMs are widely applied, model copyright protection has become important. Text watermarking is emerging as a promising solution to AI-generated text detection and model protection issues. However, current text watermarks have largely ignored the critical need for injecting different watermarks for different users, which could help attribute the watermark to a specific individual. In this paper, we explore a personalized text watermarking scheme for LLM copyright protection and other scenarios, ensuring accountability and traceability in content generation. Specifically, we propose a novel text watermarking method, PersonaMark, that utilizes sentence structure as the hidden medium for the watermark information and optimizes the sentence-level generation algorithm to minimize disruption to the model's natural generation process. By employing a personalized hashing function to inject unique watermark signals for different users, personalized watermarked text can be obtained. Since our approach operates at the sentence level instead of on token probabilities, the text quality is highly preserved. With the designed multi-user hashing function, the injection of unique watermark signals remains time-efficient even for a large number of users. As far as we know, this is the first work to achieve personalized text watermarking. We conduct an extensive evaluation of four different LLMs in terms of perplexity, sentiment polarity, alignment, readability, etc. The results demonstrate that our method maintains performance with minimal perturbation to the model's behavior, allows for unbiased insertion of watermark information, and exhibits strong watermark recognition capabilities.
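PersonaMark's exact algorithm is not given in the abstract; the toy sketch below only illustrates the general idea of a user-keyed hash steering a sentence-level choice so that the embedded bits can later be attributed to a user. The structural feature used here (word-count parity) and all function names are our own assumptions, standing in for the paper's sentence-structure channel:

```python
import hashlib

def user_bit(user_id, context, salt="wm"):
    # Keyed hash -> a deterministic pseudo-random bit per (user, context).
    digest = hashlib.sha256(f"{salt}|{user_id}|{context}".encode()).digest()
    return digest[0] & 1

def pick_sentence(user_id, context, candidates):
    # Partition candidate sentences by a structural feature (word-count
    # parity as a stand-in for the paper's sentence-structure medium)
    # and take the partition selected by the user's bit, if non-empty.
    bit = user_bit(user_id, context)
    bucket = [s for s in candidates if len(s.split()) % 2 == bit]
    return (bucket or candidates)[0]

# Detection side: recompute user_bit for each sentence's context and
# check how often the observed structural feature matches -- high
# agreement attributes the text to that user.
```

Because the choice happens among full candidate sentences rather than by biasing token probabilities, fluency of each emitted sentence is untouched, which mirrors the quality-preservation argument in the abstract.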
Citations: 0
GLEAN: Generative Learning for Eliminating Adversarial Noise
Pub Date: 2024-09-15 DOI: 10.48550/arXiv.2409.10578
Justin Lyu Kim, Kyoungwan Woo
In the age of powerful diffusion models such as DALL-E and Stable Diffusion, many in the digital art community have suffered style mimicry attacks due to fine-tuning of these models on their works. The ability to mimic an artist's style via text-to-image diffusion models raises serious ethical issues, especially without explicit consent. Glaze, a tool that applies various ranges of perturbations to digital art, has shown significant success in preventing style mimicry attacks, at the cost of artifacts ranging from imperceptible noise to severe quality degradation. The release of Glaze has sparked further discussions regarding the effectiveness of similar protection methods. In this paper, we propose GLEAN: applying image-to-image (I2I) generative networks to strip perturbations from Glazed images, and evaluating the performance of style mimicry attacks on the results of Glaze before and after GLEAN. GLEAN aims to support and enhance Glaze by highlighting its limitations and encouraging further development.
Citations: 0
Revisiting Physical-World Adversarial Attack on Traffic Sign Recognition: A Commercial Systems Perspective
Pub Date: 2024-09-15 DOI: 10.48550/arXiv.2409.09860
Ningfei Wang, Shaoyuan Xie, Takami Sato, Yunpeng Luo, Kaidi Xu, Qi Alfred Chen
Traffic Sign Recognition (TSR) is crucial for safe and correct driving automation. Recent works revealed a general vulnerability of TSR models to physical-world adversarial attacks, which can be low-cost, highly deployable, and capable of causing severe attack effects such as hiding a critical traffic sign or spoofing a fake one. However, so far existing works generally only considered evaluating the attack effects on academic TSR models, leaving the impacts of such attacks on real-world commercial TSR systems largely unclear. In this paper, we conduct the first large-scale measurement of physical-world adversarial attacks against commercial TSR systems. Our testing results reveal that it is possible for existing attack works from academia to have highly reliable (100%) attack success against certain commercial TSR system functionality, but such attack capabilities are not generalizable, leading to much lower-than-expected attack success rates overall. We find that one potential major factor is a spatial memorization design that commonly exists in today's commercial TSR systems. We design new attack success metrics that can mathematically model the impacts of such a design on TSR system-level attack success, and use them to revisit existing attacks. Through these efforts, we uncover 7 novel observations, some of which directly challenge the observations or claims in prior works due to the introduction of the new metrics.
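The abstract does not define the new metrics, but the role of a spatial memorization design can be illustrated with a toy model in which a detection must persist across consecutive frames, so single-frame attack success no longer implies system-level success. The window size, quorum, and voting rule below are all hypothetical, chosen only to show why frame-level and system-level success rates can diverge:

```python
def system_attack_success(frame_hits, window=10, quorum=0.5):
    # Toy model of "spatial memorization": the TSR system commits to a
    # detection only when it persists over a window of frames, so an
    # attack must fool at least `quorum` of some `window` consecutive
    # frames rather than any single frame.
    n = len(frame_hits)
    if n < window:
        return False
    return any(sum(frame_hits[i:i + window]) >= quorum * window
               for i in range(n - window + 1))

# One fooled frame among many is no longer a system-level success:
print(system_attack_success([0] * 9 + [1] + [0] * 9))  # False
```

Under such a model, an attack with a high per-frame success rate in lab benchmarks can still fail at the system level, matching the lower-than-expected success rates the measurement study reports.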
Citations: 0
Taming the Ransomware Threats: Leveraging Prospect Theory for Rational Payment Decisions
Pub Date: 2024-09-15 DOI: 10.48550/arXiv.2409.09744
Pranjal Sharma
Day by day, the frequency of ransomware attacks on organizations is experiencing a significant surge. High-profile incidents involving major entities like Las Vegas giants MGM Resorts, Caesars Entertainment, and Boeing underscore the profound impact, posing substantial business barriers. When a sudden cyberattack occurs, organizations often find themselves at a loss, with a looming countdown to pay the ransom, leading to a cascade of impromptu and unfavourable decisions. This paper adopts a novel approach, leveraging Prospect Theory, to elucidate the tactics employed by cyber attackers to entice organizations into paying the ransom. Furthermore, it introduces an algorithm based on Prospect Theory and an Attack Recovery Plan, enabling organizations to make informed decisions on whether to consent to the ransom demands or resist. This algorithm, Ransomware Risk Analysis and Decision Support (RADS), uses Prospect Theory to re-instantiate the shifted reference point manipulated as perceived gains by attackers, and adjusts for the framing effect created due to time urgency. Additionally, leveraging application criticality and incorporating Prospect Theory's insights into under/over-weighing of probabilities, RADS facilitates informed decision-making that transcends the simplistic framework of "consent" or "resistance," enabling organizations to achieve optimal decisions.
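The Kahneman–Tversky value and probability-weighting functions that RADS builds on can be written down directly; the parameter values below are the commonly cited empirical estimates (alpha = 0.88, lambda = 2.25, gamma = 0.61), and the `prospect` helper is our own illustration of how a gamble would be scored, not the RADS algorithm itself:

```python
def pt_value(x, alpha=0.88, lam=2.25):
    # Kahneman-Tversky value function: concave for gains, convex and
    # steeper for losses (loss aversion lam), relative to a reference point.
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def weight(p, gamma=0.61):
    # Probability weighting: small probabilities are over-weighted,
    # large ones under-weighted.
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect(outcomes):
    # Prospect value of a gamble given as [(probability, payoff), ...],
    # payoffs relative to the decision-maker's reference point.
    return sum(weight(p) * pt_value(x) for p, x in outcomes)

# A certain loss of 10 is valued worse than a 50/50 gamble on losing 20,
# reproducing risk-seeking behavior in the loss domain that attackers
# exploit when framing the ransom as the "safe" option:
sure_loss = prospect([(1.0, -10)])
gamble = prospect([(0.5, -20), (0.5, 0)])
```

The shifted reference point mentioned in the abstract corresponds to attackers framing payment as avoiding a larger certain loss, so that resisting looks like the gamble.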
Citations: 0
Nebula: Efficient, Private and Accurate Histogram Estimation
Pub Date: 2024-09-15 DOI: 10.48550/arXiv.2409.09676
Ali Shahin Shamsabadi, Peter Snyder, Ralph Giles, Aurélien Bellet, Hamed Haddadi
We present Nebula, a system for differentially private histogram estimation of data distributed among clients. Nebula enables clients to locally subsample and encode their data such that an untrusted server learns only data values that meet an aggregation threshold, satisfying differential privacy guarantees. Compared with other private histogram estimation systems, Nebula uniquely achieves all of the following: i) a strict upper bound on privacy leakage; ii) client privacy under realistic trust assumptions; iii) significantly better utility compared to standard local differential privacy systems; and iv) avoiding trusted third parties, multi-party computation, or trusted hardware. We provide both a formal evaluation of Nebula's privacy, utility, and efficiency guarantees, along with an empirical evaluation on three real-world datasets. We demonstrate that clients can encode and upload their data efficiently (only 0.0058 seconds of running time and 0.0027 MB of data communication) and privately (strong differential privacy guarantees with $\varepsilon=1$). On the United States Census dataset, Nebula's untrusted aggregation server estimates histograms with above 88% better utility than the existing local deployment of differential privacy. Additionally, we describe a variant that allows clients to submit multi-dimensional data, with similar privacy, utility, and performance. Finally, we provide an open source implementation of Nebula.
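Nebula's full mechanism (client-side subsampling, encoding, and the DP accounting) is not shown in the abstract; the sketch below illustrates only the threshold-aggregation step on the untrusted server, with the threshold value and data invented for illustration:

```python
from collections import Counter

def aggregate(reports, threshold):
    # Untrusted server: only values reported by at least `threshold`
    # clients enter the released histogram; rare values stay hidden,
    # which is what bounds what the server learns about any one client.
    counts = Counter(reports)
    return {value: c for value, c in counts.items() if c >= threshold}

reports = ["a"] * 40 + ["b"] * 3 + ["c"] * 25
print(aggregate(reports, threshold=10))  # {'a': 40, 'c': 25}
```

The value reported by only three clients never appears in the output, so the server learns popular values accurately while rare (potentially identifying) values are suppressed.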
Citations: 0
Hacking, The Lazy Way: LLM Augmented Pentesting
Pub Date: 2024-09-14 DOI: 10.48550/arXiv.2409.09493
Dhruva Goyal, Sitaraman Subramanian, Aditya Peela
Security researchers are continually challenged by the need to stay current with rapidly evolving cybersecurity research, tools, and techniques. This constant cycle of learning, unlearning, and relearning, combined with the repetitive tasks of sifting through documentation and analyzing data, often hinders productivity and innovation. This has led to a disparity where only organizations with substantial resources can access top-tier security experts, while others rely on firms with less skilled researchers who focus primarily on compliance rather than actual security. We introduce "LLM Augmented Pentesting," demonstrated through a tool named "Pentest Copilot," to address this gap. This approach integrates Large Language Models into penetration testing workflows. Our research includes a "chain of thought" mechanism to streamline token usage and boost performance, as well as a unique Retrieval Augmented Generation implementation to minimize hallucinations and keep models aligned with the latest techniques. Additionally, we propose a novel file analysis approach, enabling LLMs to understand files. Furthermore, we highlight a unique infrastructure system that, if implemented, can support in-browser assisted penetration testing, offering a robust platform for cybersecurity professionals. These advancements mark a significant step toward bridging the gap between automated tools and human expertise, offering a powerful solution to the challenges faced by modern cybersecurity teams.
Citations: 0
Real-world Adversarial Defense against Patch Attacks based on Diffusion Model
Pub Date : 2024-09-14 DOI: arxiv-2409.09406
Xingxing Wei, Caixin Kang, Yinpeng Dong, Zhengyi Wang, Shouwei Ruan, Yubo Chen, Hang Su
Adversarial patches present significant challenges to the robustness of deep learning models, making the development of effective defenses critical for real-world applications. This paper introduces DIFFender, a novel DIFfusion-based DeFender framework that leverages the power of a text-guided diffusion model to counter adversarial patch attacks. At the core of our approach is the discovery of the Adversarial Anomaly Perception (AAP) phenomenon, which enables the diffusion model to accurately detect and locate adversarial patches by analyzing distributional anomalies. DIFFender seamlessly integrates the tasks of patch localization and restoration within a unified diffusion model framework, enhancing defense efficacy through their close interaction. Additionally, DIFFender employs an efficient few-shot prompt-tuning algorithm, facilitating the adaptation of the pre-trained diffusion model to defense tasks without the need for extensive retraining. Our comprehensive evaluation, covering image classification and face recognition tasks as well as real-world scenarios, demonstrates DIFFender's robust performance against adversarial attacks. The framework's versatility and generalizability across various settings, classifiers, and attack methodologies mark a significant advancement in adversarial patch defense strategies. Beyond the popular visible domain, we have identified another advantage of DIFFender: its capability to easily expand into the infrared domain. Consequently, we demonstrate the good flexibility of DIFFender, which can defend against both infrared and visible adversarial patch attacks within a single universal defense framework.
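The localization-then-restoration pipeline the abstract describes can be illustrated with a toy sketch. This is not DIFFender: the diffusion model and AAP scoring are replaced by an already-computed per-cell anomaly map (an assumption), and only the downstream steps are shown — find the region of anomalous cells, then mask it so a restoration step can inpaint it.

```python
def locate_patch(scores, threshold=0.5):
    """Return the bounding box (top, left, bottom, right) of cells whose
    anomaly score exceeds the threshold, or None if no cell does."""
    hits = [(r, c) for r, row in enumerate(scores)
            for c, s in enumerate(row) if s > threshold]
    if not hits:
        return None
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    return (min(rows), min(cols), max(rows), max(cols))

def mask_region(image, box, fill=0.0):
    """Blank out the located region so a restoration model can inpaint it."""
    top, left, bottom, right = box
    return [[fill if top <= r <= bottom and left <= c <= right else v
             for c, v in enumerate(row)] for r, row in enumerate(image)]

# Toy 4x4 anomaly map: high scores mark a 2x2 adversarial patch.
scores = [
    [0.1, 0.1, 0.1, 0.1],
    [0.1, 0.9, 0.8, 0.1],
    [0.1, 0.7, 0.9, 0.1],
    [0.1, 0.1, 0.1, 0.1],
]
box = locate_patch(scores)
print(box)  # (1, 1, 2, 2)
```

In the actual framework both stages run inside one diffusion model, so localization and restoration can inform each other rather than being chained as separate functions.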
Citations: 0
Protecting Vehicle Location Privacy with Contextually-Driven Synthetic Location Generation
Pub Date : 2024-09-14 DOI: arxiv-2409.09495
Sourabh Yadav, Chenyang Yu, Xinpeng Xie, Yan Huang, Chenxi Qiu
Geo-obfuscation is a Location Privacy Protection Mechanism used in location-based services that allows users to report obfuscated locations instead of exact ones. A formal privacy criterion, geo-indistinguishability (Geo-Ind), requires real locations to be hard for attackers to distinguish from nearby locations based on their obfuscated representations. However, Geo-Ind often fails to consider context, such as road networks and vehicle traffic conditions, making it less effective at protecting the location privacy of vehicles, whose mobility is heavily influenced by these factors. In this paper, we introduce VehiTrack, a new threat model that demonstrates the vulnerability of Geo-Ind to context-aware inference attacks on vehicle location privacy. Our experiments demonstrate that VehiTrack can accurately determine exact vehicle locations from obfuscated data, reducing average inference errors by 61.20% with Laplacian noise and 47.35% with linear programming (LP) compared to traditional Bayesian attacks. By using contextual data like road networks and traffic flow, VehiTrack effectively eliminates a significant number of seemingly "impossible" locations during its search for the actual location of the vehicles. Based on these insights, we propose TransProtect, a new geo-obfuscation approach that limits obfuscation to realistic vehicle movement patterns, complicating attackers' ability to differentiate obfuscated from actual locations. Our results show that TransProtect increases VehiTrack's inference error by 57.75% with Laplacian noise and 27.21% with LP, significantly enhancing protection against these attacks.
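The Geo-Ind baseline that VehiTrack attacks is usually instantiated with the planar Laplace mechanism: draw a uniform direction and a radial distance distributed as Gamma(shape=2, scale=1/ε), which is equivalently the sum of two independent exponentials divided by ε. The sketch below is that standard mechanism, not the paper's TransProtect; the function name and the flat-plane treatment of coordinates (reasonable at city scale, in meters) are assumptions for illustration.

```python
import math
import random

def planar_laplace(x, y, epsilon, rng=random):
    """Sample an obfuscated location from the planar Laplace distribution,
    the standard mechanism satisfying epsilon-geo-indistinguishability.
    Coordinates are planar (e.g., meters); epsilon is the privacy
    parameter per unit distance — larger epsilon means less noise."""
    theta = rng.uniform(0.0, 2.0 * math.pi)  # uniform direction
    # Radial distance ~ Gamma(shape=2, scale=1/epsilon), sampled as the
    # sum of two independent Exp(1) draws scaled by 1/epsilon.
    r = (rng.expovariate(1.0) + rng.expovariate(1.0)) / epsilon
    return x + r * math.cos(theta), y + r * math.sin(theta)

random.seed(7)
eps = 0.01  # expected displacement is 2 / eps = 200 units
ox, oy = planar_laplace(0.0, 0.0, eps)
print(ox, oy)
```

Because this noise is drawn independently of road networks and traffic, many sampled points land in places a vehicle cannot be — exactly the "impossible" locations VehiTrack prunes, and the gap TransProtect is designed to close.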
Citations: 0