
Latest ArXiv Publications

Recovering the Pre-Fine-Tuning Weights of Generative Models
Pub Date : 2024-02-15 DOI: 10.48550/arXiv.2402.10208
Eliahu Horwitz, Jonathan Kahana, Yedid Hoshen
The dominant paradigm in generative modeling consists of two steps: i) pre-training on a large-scale but unsafe dataset, ii) aligning the pre-trained model with human values via fine-tuning. This practice is considered safe, as no current method can recover the unsafe, pre-fine-tuning model weights. In this paper, we demonstrate that this assumption is often false. Concretely, we present Spectral DeTuning, a method that can recover the weights of the pre-fine-tuning model using a few low-rank (LoRA) fine-tuned models. In contrast to previous attacks that attempt to recover pre-fine-tuning capabilities, our method aims to recover the exact pre-fine-tuning weights. Our approach exploits this new vulnerability against large-scale models such as a personalized Stable Diffusion and an aligned Mistral.
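The core idea can be illustrated with a small numpy sketch (our own toy reconstruction, not the authors' implementation): given several LoRA fine-tunes W_i = W + B_i A_i of the same base weights W, alternately attribute each model's deviation from the current estimate to its own low-rank update, then re-estimate the shared base from what remains:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_models, n_iters = 20, 2, 8, 100

# Hypothetical setup: one shared base matrix W, and several LoRA-style
# fine-tunes that each add their own rank-r update.
W = rng.standard_normal((d, d))
finetuned = [W + 0.3 * rng.standard_normal((d, r)) @ rng.standard_normal((r, d))
             for _ in range(n_models)]

def truncate(M, rank):
    """Best rank-`rank` approximation of M via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

W_est = np.mean(finetuned, axis=0)          # naive initial guess
err_init = float(np.linalg.norm(W_est - W))
for _ in range(n_iters):
    # Explain each model's deviation by its own low-rank update, then
    # re-estimate the shared base from what the updates cannot explain.
    W_est = np.mean([M - truncate(M - W_est, r) for M in finetuned], axis=0)
err_final = float(np.linalg.norm(W_est - W))

print(err_init, err_final)
```

On this synthetic instance the alternating scheme drives the recovery error far below that of the naive average, mirroring the low-rank structure the attack exploits.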
Citations: 0
Quantized Embedding Vectors for Controllable Diffusion Language Models
Pub Date : 2024-02-15 DOI: 10.48550/arXiv.2402.10107
Cheng Kang, Xinye Chen, Yong Hu, Daniel Novak
Improving the controllability, portability, and inference speed of diffusion language models (DLMs) is a key challenge in natural language generation. While recent research has shown significant success in complex text generation with language models, their memory and computation demands remain high, which naturally results in low portability and instability. To mitigate these issues, numerous well-established methods have been proposed for neural network quantization. To further enhance portability for independent deployment and improve stability as measured by language perplexity, we propose a novel approach called the Quantized Embedding Controllable Diffusion Language Model (QE-CDLM). QE-CDLM builds upon recent successful controllable DLMs by remodeling the task-specific embedding space via quantization. This yields a gradient-based controller for generation tasks and more stable intermediate latent variables, which naturally brings accelerated convergence as well as better controllability. Additionally, an adaptation fine-tuning method is employed to reduce the number of tunable weights. Experimental results on five challenging fine-grained control tasks demonstrate that QE-CDLM compares favorably to existing methods in terms of quality and feasibility, achieving better perplexity and lightweight fine-tuning.
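As a rough illustration of the embedding-quantization ingredient (a generic vector-quantization sketch under our own assumptions; QE-CDLM learns its task-specific codebook rather than drawing it at random), each embedding vector is snapped to its nearest entry in a small codebook:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim, codebook_size = 1000, 16, 64
embeddings = rng.standard_normal((vocab, dim))        # stand-in embedding table
codebook = rng.standard_normal((codebook_size, dim))  # random stand-in codebook

# Squared distance from every embedding to every codeword, then snap
# each embedding to its nearest codeword.
d2 = ((embeddings[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
codes = d2.argmin(axis=1)       # one small integer index per token
quantized = codebook[codes]     # the quantized embedding table

print(codes.shape, quantized.shape)
```

Storing one small index per token instead of a full float vector is what makes the quantized embedding space compact and portable.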
Citations: 0
Digital versus Analog Transmissions for Federated Learning over Wireless Networks
Pub Date : 2024-02-15 DOI: 10.48550/arXiv.2402.09657
Jiacheng Yao, Weihong Xu, Zhaohui Yang, Xiaohu You, M. Bennis, H. V. Poor
In this paper, we quantitatively compare two effective communication schemes for wireless federated learning (FL) over resource-constrained networks, namely digital and analog transmission, highlighting their essential differences as well as their respective application scenarios. We first examine both digital and analog transmission methods, together with a unified and fair comparison scheme under practical constraints. A universal convergence analysis under various imperfections is established for FL performance evaluation in wireless networks. These analytical results reveal that the fundamental difference between the two paradigms lies in whether communication and computation are jointly designed. Digital schemes decouple the communication design from specific FL tasks, making it difficult to support simultaneous uplink transmission from massive numbers of devices with limited bandwidth. In contrast, analog communication allows over-the-air computation (AirComp), thus achieving efficient spectrum utilization. However, computation-oriented analog transmission reduces power efficiency, and its performance is sensitive to computational errors. Finally, numerical simulations are conducted to verify these theoretical observations.
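The contrast can be caricatured in a few lines of numpy (a toy sketch under our own assumptions about noise and quantization, not the paper's system model): analog AirComp lets the channel sum all gradients at once at the cost of receiver noise, while digital transmission sends each gradient on its own noise-free slot at the cost of coarse quantization:

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 10, 100                       # devices, gradient dimension
grads = rng.standard_normal((K, d))
true_avg = grads.mean(axis=0)

# Analog AirComp: all devices transmit simultaneously, the wireless
# channel itself sums the waveforms, and receiver noise corrupts the sum.
noise = 0.1 * rng.standard_normal(d)
aircomp_avg = (grads.sum(axis=0) + noise) / K

# Digital: each device quantizes its gradient to b bits and sends it on
# an orthogonal, error-free slot -- no noise, but coarse values.
b = 4
lo, hi = grads.min(), grads.max()
levels = 2 ** b - 1
dequant = np.round((grads - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo
digital_avg = dequant.mean(axis=0)

err_air = float(np.linalg.norm(aircomp_avg - true_avg))
err_dig = float(np.linalg.norm(digital_avg - true_avg))
print(err_air, err_dig)
```

Which error dominates depends on the noise level, bit budget, and bandwidth, which is exactly the trade-off the paper's convergence analysis formalizes.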
Citations: 0
Why are Sensitive Functions Hard for Transformers?
Pub Date : 2024-02-15 DOI: 10.48550/arXiv.2402.09963
Michael Hahn, Mark Rofin
Empirical studies have identified a range of learnability biases and limitations of transformers, such as a persistent difficulty in learning to compute simple formal languages such as PARITY, and a bias towards low-degree functions. However, theoretical understanding remains limited, with existing expressiveness theory either overpredicting or underpredicting realistic learning abilities. We prove that, under the transformer architecture, the loss landscape is constrained by the input-space sensitivity: Transformers whose output is sensitive to many parts of the input string inhabit isolated points in parameter space, leading to a low-sensitivity bias in generalization. We show theoretically and empirically that this theory unifies a broad array of empirical observations about the learning abilities and biases of transformers, such as their generalization bias towards low sensitivity and low degree, and difficulty in length generalization for PARITY. This shows that understanding transformers' inductive biases requires studying not just their in-principle expressivity, but also their loss landscape.
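The notion of average sensitivity at the heart of this argument is easy to compute exactly for small n (this is the standard Boolean-function definition, not code from the paper): PARITY attains the maximum, while a function like majority is far less sensitive:

```python
import itertools

def sensitivity(f, n):
    """Average sensitivity: expected number of single-bit flips that
    change f(x), over a uniformly random input x in {0,1}^n."""
    total = 0
    for x in itertools.product([0, 1], repeat=n):
        fx = f(x)
        total += sum(f(x[:i] + (1 - x[i],) + x[i + 1:]) != fx
                     for i in range(n))
    return total / 2 ** n

parity = lambda x: sum(x) % 2
majority = lambda x: int(sum(x) * 2 > len(x))

print(sensitivity(parity, 5))    # → 5.0 (every single flip changes parity)
print(sensitivity(majority, 5))  # → 1.875 (a flip rarely swings the vote)
```

The gap grows with n: PARITY's sensitivity is n, whereas majority's grows only on the order of the square root of n, which is the kind of contrast the paper's loss-landscape result explains.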
Citations: 0
Multi-Excitation Projective Simulation with a Many-Body Physics Inspired Inductive Bias
Pub Date : 2024-02-15 DOI: 10.48550/arXiv.2402.10192
Philip A. LeMaitre, Marius Krumm, H. Briegel
With the impressive progress of deep learning, applications relying on machine learning are increasingly being integrated into daily life. However, most deep learning models have an opaque, oracle-like nature making it difficult to interpret and understand their decisions. This problem led to the development of the field known as eXplainable Artificial Intelligence (XAI). One method in this field known as Projective Simulation (PS) models a chain-of-thought as a random walk of a particle on a graph with vertices that have concepts attached to them. While this description has various benefits, including the possibility of quantization, it cannot be naturally used to model thoughts that combine several concepts simultaneously. To overcome this limitation, we introduce Multi-Excitation Projective Simulation (mePS), a generalization that considers a chain-of-thought to be a random walk of several particles on a hypergraph. A definition for a dynamic hypergraph is put forward to describe the agent's training history along with applications to AI and hypergraph visualization. An inductive bias inspired by the remarkably successful few-body interaction models used in quantum many-body physics is formalized for our classical mePS framework and employed to tackle the exponential complexity associated with naive implementations of hypergraphs. We prove that our inductive bias reduces the complexity from exponential to polynomial, with the exponent representing the cutoff on how many particles can interact. We numerically apply our method to two toy environments and a more complex scenario modelling the diagnosis of a broken computer. These environments demonstrate the resource savings provided by an appropriate choice of inductive bias, as well as showcasing aspects of interpretability. A quantum model for mePS is also briefly outlined and some future directions for it are discussed.
Citations: 0
LAPDoc: Layout-Aware Prompting for Documents
Pub Date : 2024-02-15 DOI: 10.48550/arXiv.2402.09841
Marcel Lamott, Yves-Noel Weweler, A. Ulges, Faisal Shafait, Dirk Krechel, Darko Obradovic
Recent advances in training large language models (LLMs) on massive amounts of solely textual data have led to strong generalization across many domains and tasks, including document-specific tasks. In contrast, there is a trend to train multi-modal transformer architectures tailored for document understanding, designed specifically to fuse textual inputs with the corresponding document layout. This involves a separate fine-tuning step for which additional training data is required. At present, no document transformers with generalization comparable to LLMs are available. That raises the question of which type of model is to be preferred for document understanding tasks. In this paper, we investigate the possibility of using purely text-based LLMs for document-specific tasks by means of layout enrichment. We explore drop-in modifications and rule-based methods to enrich purely textual LLM prompts with layout information. In our experiments, we investigate the effects on the commercial ChatGPT model and the open-source LLM Solar. We demonstrate that with our approach both LLMs show improved performance on various standard document benchmarks. In addition, we study the impact of noisy OCR and layout errors, as well as the limitations of LLMs when it comes to utilizing document layout. Our results indicate that layout enrichment can improve the performance of purely text-based LLMs for document understanding by up to 15% compared to using plain document text alone.
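A minimal example of the rule-based flavor of layout enrichment (our own toy serializer; the box format, names, and scaling constants are assumptions, not the paper's method): map OCR word boxes to a plain-text prompt whose whitespace mimics the page geometry, so a text-only LLM can "see" columns and alignment:

```python
def boxes_to_prompt(boxes, char_w=10, line_h=20):
    """Render OCR word boxes (text, x, y in pixels) as plain text whose
    whitespace approximates the original page layout."""
    rows = {}
    for text, x, y in boxes:
        rows.setdefault(y // line_h, []).append((x, text))   # group into lines
    lines = []
    for _, words in sorted(rows.items()):
        line = ""
        for x, text in sorted(words):
            col = x // char_w                       # target text column
            pad = max(col - len(line), 1 if line else 0)
            line += " " * pad + text
        lines.append(line)
    return "\n".join(lines)

boxes = [("Invoice", 0, 0), ("No. 42", 300, 0),
         ("Total", 0, 40), ("$99.00", 300, 40)]
prompt = boxes_to_prompt(boxes)
print(prompt)
```

The rendered prompt places "No. 42" and "$99.00" in the same right-hand column, preserving the key-value alignment that plain text extraction would destroy.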
Citations: 0
How Much Does Each Datapoint Leak Your Privacy? Quantifying the Per-datum Membership Leakage
Pub Date : 2024-02-15 DOI: 10.48550/arXiv.2402.10065
Achraf Azize, Debabrota Basu
We study per-datum Membership Inference Attacks (MIAs), where an attacker aims to infer whether a fixed target datum has been included in the input dataset of an algorithm, thus violating privacy. First, we define the membership leakage of a datum as the advantage of the optimal adversary in identifying it. Then, we quantify the per-datum membership leakage for the empirical mean, and show that it depends on the Mahalanobis distance between the target datum and the data-generating distribution. We further assess the effect of two privacy defences, namely adding Gaussian noise and sub-sampling, and quantify exactly how both of them decrease the per-datum membership leakage. Our analysis builds on a novel proof technique that combines an Edgeworth expansion of the likelihood ratio test and a Lindeberg-Feller central limit theorem. Our analysis connects the existing likelihood ratio and scalar product attacks, and also justifies the different canary selection strategies used in the privacy auditing literature. Finally, our experiments demonstrate the impacts of the leakage score, the sub-sampling ratio and the noise scale on the per-datum membership leakage, as indicated by the theory.
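A small numpy illustration of the intuition (our own sketch, not the paper's analysis): a target datum with a larger Mahalanobis distance from the data-generating distribution perturbs the empirical mean more, and is therefore easier for a membership adversary to detect:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 5, 2000
mean, cov = np.zeros(d), np.eye(d)

def mahalanobis_sq(z, mean, cov):
    """Squared Mahalanobis distance of z from N(mean, cov)."""
    diff = z - mean
    return float(diff @ np.linalg.solve(cov, diff))

def mean_shift(target):
    """How much including `target` moves the empirical mean of n samples."""
    base = rng.multivariate_normal(mean, cov, size=n)
    with_target = np.vstack([base, target])
    return float(np.linalg.norm(with_target.mean(axis=0) - base.mean(axis=0)))

typical = rng.multivariate_normal(mean, cov)   # an ordinary datum
outlier = mean + 6.0                           # far from the distribution

leak_far, leak_near = mean_shift(outlier), mean_shift(typical)
print(mahalanobis_sq(outlier, mean, cov), mahalanobis_sq(typical, mean, cov))
print(leak_far, leak_near)
```

The outlier's larger Mahalanobis distance translates directly into a larger shift of the released statistic, which is the leverage the optimal per-datum adversary exploits.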
Citations: 0
Exploring a Behavioral Model of "Positive Friction" in Human-AI Interaction
Pub Date : 2024-02-15 DOI: 10.48550/arXiv.2402.09683
Zeya Chen, Ruth Schmidt
Designing seamless, frictionless user experiences has long been a dominant trend in both applied behavioral science and artificial intelligence (AI), in which the goal of making desirable actions easy and efficient informs efforts to minimize friction in user experiences. However, in some settings, friction can be genuinely beneficial, such as the insertion of deliberate delays to increase reflection, preventing individuals from resorting to automatic or biased behaviors, and enhancing opportunities for unexpected discoveries. More recently, the popularization and availability of AI on a widespread scale have only increased the need to examine how friction can help or hinder users of AI; it also suggests a need to consider how positive friction can benefit AI practitioners, both during development processes (e.g., working with diverse teams) and to inform how AI is designed into offerings. This paper first proposes a "positive friction" model that can help characterize how friction is currently beneficial in user and developer experiences with AI, diagnose the potential need for friction where it may not yet exist in these contexts, and inform how positive friction can be used to generate solutions, especially as AI continues to advance and new opportunities emerge. It then explores this model in the context of AI users and developers by proposing the value of taking a hybrid "AI+human" lens, and concludes by suggesting questions for further exploration.
Citations: 0
Convex Equilibrium-Free Stability and Performance Analysis of Discrete-Time Nonlinear Systems
Pub Date : 2024-02-15 DOI: 10.48550/arXiv.2402.09870
P. Koelewijn, Siep Weiland, Roland Tóth
This paper considers the equilibrium-free stability and performance analysis of discrete-time nonlinear systems. We consider two types of equilibrium-free notions: the universal shifted concept, which considers stability and performance with respect to all equilibrium points of the system, and the incremental concept, which considers stability and performance between trajectories of the system. In this paper, we show how universal shifted stability and performance of discrete-time systems can be analyzed by making use of the time-difference dynamics. Moreover, we extend the existing results on incremental dissipativity for discrete-time systems, which are based on dissipativity analysis of the differential dynamics, to more general state-dependent storage functions for less conservative results. Finally, we show how both these equilibrium-free notions can be cast as a convex analysis problem by making use of the linear parameter-varying framework, which is also demonstrated by means of an example.
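The two equilibrium-free notions named in the abstract can be written out in standard discrete-time dissipativity form. The following is a generic sketch of those standard definitions, not the paper's exact formulation; the storage functions $V_{\mathrm{s}}$, $V_{\mathrm{i}}$ and the supply rate $s$ are assumed notation:

```latex
% Discrete-time system: x_{k+1} = f(x_k, u_k), y_k = h(x_k, u_k).
% Universal shifted dissipativity: for EVERY equilibrium triple
% (x_*, u_*, y_*) with x_* = f(x_*, u_*) and y_* = h(x_*, u_*), there
% exists a storage function V_s(x, x_*) >= 0 with V_s(x_*, x_*) = 0 s.t.
\[
  V_{\mathrm{s}}(x_{k+1}, x_*) - V_{\mathrm{s}}(x_k, x_*)
  \le s\bigl(u_k - u_*,\; y_k - y_*\bigr), \qquad \forall k \in \mathbb{Z}.
\]
% Incremental dissipativity asks the analogous inequality between any
% two trajectories (x, u, y) and (\tilde{x}, \tilde{u}, \tilde{y}):
\[
  V_{\mathrm{i}}(x_{k+1}, \tilde{x}_{k+1}) - V_{\mathrm{i}}(x_k, \tilde{x}_k)
  \le s\bigl(u_k - \tilde{u}_k,\; y_k - \tilde{y}_k\bigr).
\]
```

With quadratic storage functions and an LPV embedding of the dynamics, inequalities of this shape are what typically reduce to the convex (LMI-based) analysis conditions the abstract refers to.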
{"title":"Convex Equilibrium-Free Stability and Performance Analysis of Discrete-Time Nonlinear Systems","authors":"P. Koelewijn, Siep Weiland, Roland T'oth","doi":"10.48550/arXiv.2402.09870","DOIUrl":"https://doi.org/10.48550/arXiv.2402.09870","url":null,"abstract":"This paper considers the equilibrium-free stability and performance analysis of discrete-time nonlinear systems. We consider two types of equilibrium-free notions. Namely, the universal shifted concept, which considers stability and performance w.r.t. all equilibrium points of the system, and the incremental concept, which considers stability and performance between trajectories of the system. In this paper, we show how universal shifted stability and performance of discrete-time systems can be analyzed by making use of the time-difference dynamics. Moreover, we extend the existing results for incremental dissipativity for discrete-time systems based on dissipativity analysis of the differential dynamics to more general state-dependent storage functions for less conservative results. Finally, we show how both these equilibrium-free notions can be cast as a convex analysis problem by making use of the linear parameter-varying framework, which is also demonstrated by means of an example.","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":"23 22","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139962363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-vertebral CT-based FE models implementing linear isotropic population-based material properties for the intervertebral discs cannot accurately predict strains
Pub Date : 2024-02-15 DOI: 10.48550/arXiv.2402.09790
Chiara Garavelli, A. Aldieri, M. Palanca, Luca Patruno, M. Viceconti
Vertebral fracture prediction in clinics lacks accuracy. The most commonly used scores have limited ability to distinguish between subjects at risk and those who are not. Finite element (FE) models generated from computed tomography (CT) scans of these patients may improve predictive capability. Many models have already been proposed, but most of them consider a single vertebral body, excluding from the analysis the role of the intervertebral discs in distributing the load through the spine. Multi-vertebral models, instead, allow more complex boundary conditions to be examined. However, CT scans do not provide subject-specific information about the material properties of the disc. Consequently, the goal of this study was to validate a multi-vertebral FE model with subject-specific modelling of the vertebral bone and population-based properties assigned to the discs, idealized as a linear isotropic material. Boundary conditions were assigned to reproduce an experimental test performed on the same specimen and recorded using the digital image correlation (DIC) technique. FE and DIC strains on the vertebral surfaces were compared point-wise. Young's modulus values in the range 25-30 MPa achieved a comparable order of magnitude between experimental and computational data; however, the two distributions remained strongly different. To conclude, subject-specific material properties need to be assigned to the discs as well as to the vertebrae to achieve acceptable accuracy in fracture risk assessment.
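A linear isotropic disc material, as described above, is fully determined by a Young's modulus and a Poisson's ratio. As a minimal sketch of what "linear isotropic population-based properties" means in practice (not the authors' FE pipeline; the Poisson's ratio of 0.45 and the function name are assumptions, only the 25-30 MPa modulus range comes from the abstract), the corresponding 6x6 stiffness matrix in Voigt notation can be assembled as:

```python
import numpy as np

def isotropic_stiffness(E, nu):
    """6x6 stiffness matrix (Voigt notation, engineering shear strains)
    for a linear isotropic elastic material with Young's modulus E and
    Poisson's ratio nu (standard Lamé-parameter form)."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))  # first Lamé parameter
    mu = E / (2 * (1 + nu))                   # shear modulus
    C = np.zeros((6, 6))
    C[:3, :3] = lam                # lam on every normal-normal entry
    C[:3, :3] += 2 * mu * np.eye(3)  # diagonal terms become lam + 2*mu
    C[3:, 3:] = mu * np.eye(3)     # shear terms (engineering strain)
    return C

# E = 27.5 MPa sits mid-range of the 25-30 MPa values reported above;
# nu = 0.45 is an assumed, near-incompressible value, not from the abstract.
C = isotropic_stiffness(E=27.5, nu=0.45)

# Stress (MPa) for a uniaxial strain state of 0.1% in Voigt ordering.
strain = np.array([1e-3, 0.0, 0.0, 0.0, 0.0, 0.0])
stress = C @ strain
```

Because the model is linear, sweeping E over 25-30 MPa only rescales the predicted response; it cannot change the shape of the strain distribution, which is consistent with the order-of-magnitude match but mismatched distributions reported above.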
{"title":"Multi-vertebral CT-based FE models implementing linear isotropic population-based material properties for the intervertebral discs cannot accurately predict strains","authors":"Chiara Garavelli, A. Aldieri, M. Palanca, Luca Patruno, M. Viceconti","doi":"10.48550/arXiv.2402.09790","DOIUrl":"https://doi.org/10.48550/arXiv.2402.09790","url":null,"abstract":"Vertebral fracture prediction in clinics lacks accuracy. The most commonly used scores have limited ability to distinguish between subjects at risk and those who are not. Finite element (FE) models generated from computed tomography (CT) scans of these patients may improve predictive capability. Many models have already been proposed, but most of them consider a single vertebral body, excluding from the analysis the role of the intervertebral discs in distributing the load through the spine. Multi-vertebral models, instead, allow more complex boundary conditions to be examined. However, CT scans do not provide subject-specific information about the material properties of the disc. Consequently, the goal of this study was to validate a multi-vertebral FE model with subject-specific modelling of the vertebral bone and population-based properties assigned to the discs, idealized as a linear isotropic material. Boundary conditions were assigned to reproduce an experimental test performed on the same specimen and recorded using the digital image correlation (DIC) technique. FE and DIC strains on the vertebral surfaces were compared point-wise. Young's modulus values in the range 25-30 MPa achieved a comparable order of magnitude between experimental and computational data; however, the two distributions remained strongly different. To conclude, subject-specific material properties need to be assigned to the discs as well as to the vertebrae to achieve acceptable accuracy in fracture risk assessment.","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":"16 12","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139962504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0