
arXiv - CS - Machine Learning: Latest Publications

FedHide: Federated Learning by Hiding in the Neighbors
Pub Date : 2024-09-12 DOI: arxiv-2409.07808
Hyunsin Park, Sungrack Yun
We propose a prototype-based federated learning method designed for embedding networks in classification or verification tasks. Our focus is on scenarios where each client has data from a single class. The main challenge is to develop an embedding network that can distinguish between different classes while adhering to privacy constraints. Sharing true class prototypes with the server or other clients could potentially compromise sensitive information. To tackle this issue, we propose a proxy class prototype that is shared among clients instead of the true class prototype. Our approach generates proxy class prototypes by linearly combining the true prototypes with their nearest neighbors. This technique conceals the true class prototype while enabling clients to learn discriminative embedding networks. We compare our method to alternative techniques, such as adding random Gaussian noise and using random selection with cosine similarity constraints. Furthermore, we evaluate the robustness of our approach against gradient inversion attacks and introduce a measure of prototype leakage, which quantifies the extent of private information revealed when sharing the proposed proxy class prototype. Moreover, we provide a theoretical analysis of the convergence properties of our approach. Our proposed method for federated learning from scratch demonstrates its effectiveness through empirical results on three benchmark datasets: CIFAR-100, VoxCeleb1, and VGGFace2.
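The proxy construction the abstract describes (a linear combination of the true prototype with its nearest neighbors) can be sketched as follows. The mixing weight `alpha`, the neighbor count `k`, and the cosine-similarity selection are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

def proxy_prototype(true_proto, other_protos, k=2, alpha=0.5):
    """Illustrative sketch: blend a client's true class prototype with its
    k nearest neighbors (by cosine similarity) so the true prototype itself
    is never shared. `k` and `alpha` are hypothetical parameters."""
    p = true_proto / np.linalg.norm(true_proto)
    others = other_protos / np.linalg.norm(other_protos, axis=1, keepdims=True)
    sims = others @ p                          # cosine similarity to each candidate
    nearest = others[np.argsort(-sims)[:k]]    # k most similar prototypes
    proxy = alpha * p + (1 - alpha) * nearest.mean(axis=0)
    return proxy / np.linalg.norm(proxy)       # keep the proxy on the unit sphere

rng = np.random.default_rng(0)
true_p = rng.normal(size=8)
pool = rng.normal(size=(5, 8))                 # prototypes available for mixing
proxy = proxy_prototype(true_p, pool)
```

Because the shared vector is a mixture, it stays close to the true prototype's neighborhood (preserving discriminative signal) without revealing the prototype itself.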
Attack End-to-End Autonomous Driving through Module-Wise Noise
Pub Date : 2024-09-12 DOI: arxiv-2409.07706
Lu Wang, Tianyuan Zhang, Yikai Han, Muyang Fang, Ting Jin, Jiaqi Kang
With recent breakthroughs in deep neural networks, numerous tasks within autonomous driving have exhibited remarkable performance. However, deep learning models are susceptible to adversarial attacks, presenting significant security risks to autonomous driving systems. Presently, end-to-end architectures have emerged as the predominant solution for autonomous driving, owing to their collaborative nature across different tasks. Yet, the implications of adversarial attacks on such models remain relatively unexplored. In this paper, we conduct comprehensive adversarial security research on the modular end-to-end autonomous driving model for the first time. We thoroughly consider the potential vulnerabilities in the model inference process and design a universal attack scheme through module-wise noise injection. We conduct large-scale experiments on the full-stack autonomous driving model and demonstrate that our attack method outperforms previous attack methods. We trust that our research will offer fresh insights into ensuring the safety and reliability of autonomous driving systems.
Graph Laplacian-based Bayesian Multi-fidelity Modeling
Pub Date : 2024-09-12 DOI: arxiv-2409.08211
Orazio Pinti, Jeremy M. Budd, Franca Hoffmann, Assad A. Oberai
We present a novel probabilistic approach for generating multi-fidelity data while accounting for errors inherent in both low- and high-fidelity data. In this approach, a graph Laplacian constructed from the low-fidelity data is used to define a multivariate Gaussian prior density for the coordinates of the true data points. In addition, a few high-fidelity data points are used to construct a conjugate likelihood term. Thereafter, Bayes' rule is applied to derive an explicit expression for the posterior density, which is also multivariate Gaussian. The maximum a posteriori (MAP) estimate of this density is selected as the optimal multi-fidelity estimate. It is shown that the MAP estimate and the covariance of the posterior density can be determined through the solution of linear systems of equations. Thereafter, two methods, one based on spectral truncation and another based on a low-rank approximation, are developed to solve these equations efficiently. The multi-fidelity approach is tested on a variety of problems in solid and fluid mechanics with data that represent vectors of quantities of interest and discretized spatial fields in one and two dimensions. The results demonstrate that by utilizing a small fraction of high-fidelity data, the multi-fidelity approach can significantly improve the accuracy of a large collection of low-fidelity data points.
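The claim that the MAP estimate reduces to solving a linear system can be illustrated on a toy chain graph: with a Gaussian prior whose precision is the graph Laplacian (plus a small ridge for invertibility) and a conjugate Gaussian likelihood on a few observed nodes, the posterior mean is one `solve`. The noise level `sigma2`, the ridge `eps`, and the node-selection observation operator are assumptions of this sketch, not the paper's setup:

```python
import numpy as np

def map_estimate(L, obs_idx, y, sigma2=0.01, eps=1e-3):
    """MAP estimate under prior precision (L + eps*I) and Gaussian
    observations y of the nodes in obs_idx with variance sigma2.
    Posterior precision: A = L + eps*I + H^T H / sigma2; MAP solves A x = H^T y / sigma2."""
    n = L.shape[0]
    H = np.zeros((len(obs_idx), n))
    H[np.arange(len(obs_idx)), obs_idx] = 1.0   # observe the selected nodes
    A = L + eps * np.eye(n) + H.T @ H / sigma2  # posterior precision matrix
    b = H.T @ y / sigma2
    return np.linalg.solve(A, b)

# Chain graph on 5 nodes: Laplacian L = D - A
n = 5
adj = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(adj.sum(axis=1)) - adj

# Two "high-fidelity" observations at the endpoints
x_map = map_estimate(L, obs_idx=[0, 4], y=np.array([0.0, 4.0]))
```

The Laplacian prior favors smoothness along edges, so the unobserved interior nodes interpolate between the two observed endpoints.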
Learning Causally Invariant Reward Functions from Diverse Demonstrations
Pub Date : 2024-09-12 DOI: arxiv-2409.08012
Ivan Ovinnikov, Eugene Bykovets, Joachim M. Buhmann
Inverse reinforcement learning methods aim to retrieve the reward function of a Markov decision process based on a dataset of expert demonstrations. The commonplace scarcity and heterogeneous sources of such demonstrations can lead to the absorption of spurious correlations in the data by the learned reward function. Consequently, this adaptation often exhibits behavioural overfitting to the expert data set when a policy is trained on the obtained reward function under distribution shift of the environment dynamics. In this work, we explore a novel regularization approach for inverse reinforcement learning methods based on the causal invariance principle, with the goal of improved reward function generalization. By applying this regularization to both exact and approximate formulations of the learning task, we demonstrate superior policy performance when trained using the recovered reward functions in a transfer setting.
Taylor-Sensus Network: Embracing Noise to Enlighten Uncertainty for Scientific Data
Pub Date : 2024-09-12 DOI: arxiv-2409.07942
Guangxuan Song, Dongmei Fu, Zhongwei Qiu, Jintao Meng, Dawei Zhang
Uncertainty estimation is crucial for machine learning on scientific data. Current uncertainty estimation methods mainly focus on the model's inherent uncertainty, while neglecting explicit modeling of noise in the data. Furthermore, noise estimation methods typically rely on temporal or spatial dependencies, which can pose a significant challenge in structured scientific data where such dependencies among samples are often absent. To address these challenges in scientific research, we propose the Taylor-Sensus Network (TSNet). TSNet innovatively uses a Taylor series expansion to model complex, heteroscedastic noise and proposes a deep Taylor block for noise-distribution awareness. TSNet includes a noise-aware contrastive learning module and a data density perception module for aleatoric and epistemic uncertainty. Additionally, an uncertainty combination operator is used to integrate these uncertainties, and the network is trained using a novel heteroscedastic mean square error loss. TSNet demonstrates superior performance over mainstream and state-of-the-art methods in experiments, highlighting its potential for scientific research and noise resistance. It will be open-sourced to facilitate the "AI for Science" community.
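A heteroscedastic loss of the general kind the abstract mentions is the Gaussian negative log-likelihood with a per-sample predicted variance; the exact TSNet loss is not given in the abstract, so the following is a generic sketch rather than the paper's formulation:

```python
import numpy as np

def heteroscedastic_mse(y, mu, log_var):
    """Generic heteroscedastic (Gaussian NLL) loss sketch: each residual is
    down-weighted by its predicted variance, with a log-variance penalty so
    the model cannot explain everything away as noise."""
    return float(np.mean(0.5 * (np.exp(-log_var) * (y - mu) ** 2 + log_var)))

y = np.array([1.0, 2.0])
mu = np.array([1.0, 0.0])           # second prediction is badly off
loss_homo = heteroscedastic_mse(y, mu, np.zeros(2))         # unit variance everywhere
loss_hetero = heteroscedastic_mse(y, mu, np.array([0.0, 2.0]))  # high variance on the bad point
```

With unit variance the loss reduces to half the MSE; assigning larger predicted variance to the noisy point lowers the loss, which is the attenuation behavior heteroscedastic training exploits.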
CliquePH: Higher-Order Information for Graph Neural Networks through Persistent Homology on Clique Graphs
Pub Date : 2024-09-12 DOI: arxiv-2409.08217
Davide Buffelli, Farzin Soleymani, Bastian Rieck
Graph neural networks have become the default choice of practitioners for graph learning tasks such as graph classification and node classification. Nevertheless, popular graph neural network models still struggle to capture higher-order information, i.e., information that goes beyond pairwise interactions. Recent work has shown that persistent homology, a tool from topological data analysis, can enrich graph neural networks with topological information that they otherwise could not capture. Calculating such features is efficient for dimension 0 (connected components) and dimension 1 (cycles). However, when it comes to higher-order structures, it does not scale well, with a complexity of $O(n^d)$, where $n$ is the number of nodes and $d$ is the order of the structures. In this work, we introduce a novel method that extracts information about higher-order structures in the graph while still using the efficient low-dimensional persistent homology algorithm. On standard benchmark datasets, we show that our method can lead to up to 31% improvements in test accuracy.
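The dimension-0 computation the abstract calls efficient amounts to a union-find sweep over an edge filtration: each merge of two components records one death time. This is generic persistence bookkeeping, not the CliquePH pipeline itself:

```python
def connected_component_persistence(n, weighted_edges):
    """Dimension-0 persistence sketch: process edges (weight, u, v) in
    increasing weight order; each union of two distinct components kills
    one component at that weight. Edges inside a component (cycles) are
    skipped. Near-linear in the number of edges via union-find."""
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    deaths = []
    for w, u, v in sorted(weighted_edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            deaths.append(w)               # a component born at 0 dies at w
    return deaths                          # remaining components persist forever

edges = [(0.5, 0, 1), (0.9, 0, 2), (0.7, 1, 2), (1.2, 2, 3)]
deaths = connected_component_persistence(4, edges)
```

Here the 0.9 edge closes a cycle (a dimension-1 feature), so it records no death; the three merges give the finite part of the dimension-0 persistence diagram.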
Enhanced Online Grooming Detection Employing Context Determination and Message-Level Analysis
Pub Date : 2024-09-12 DOI: arxiv-2409.07958
Jake Street, Isibor Ihianle, Funminiyi Olajide, Ahmad Lotfi
Online Grooming (OG) is a prevalent threat facing predominantly children online, with groomers using deceptive methods to prey on the vulnerability of children on social media and messaging platforms. These attacks can have severe psychological and physical impacts, including a tendency towards revictimization. Current technical measures are inadequate, especially with the advent of end-to-end encryption, which hampers message monitoring. Existing solutions focus on the signature analysis of child abuse media, which does not effectively address real-time OG detection. This paper proposes that OG attacks are complex, requiring the identification of specific communication patterns between adults and children. It introduces a novel approach leveraging advanced models such as BERT and RoBERTa for Message-Level Analysis and a Context Determination approach for classifying actor interactions, including the introduction of Actor Significance Thresholds and Message Significance Thresholds. The proposed method aims to enhance accuracy and robustness in detecting OG by considering the dynamic and multi-faceted nature of these attacks. Cross-dataset experiments evaluate the robustness and versatility of our approach. This paper's contributions include improved detection methodologies and the potential for application in various scenarios, addressing gaps in current literature and practices.
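The two thresholds can be read as a two-stage decision rule. The interpretation below (count messages whose classifier score clears a Message Significance Threshold, then compare the significant fraction to an Actor Significance Threshold) and all numeric values are assumptions for illustration, not the paper's definitions:

```python
def flag_actor(message_scores, msg_threshold=0.8, actor_threshold=0.3):
    """Hypothetical two-stage rule: a message is 'significant' if its
    message-level classifier score clears msg_threshold; an actor is
    flagged if the fraction of significant messages clears actor_threshold.
    Both threshold values are illustrative."""
    significant = [s for s in message_scores if s >= msg_threshold]
    return len(significant) / max(len(message_scores), 1) >= actor_threshold

risky = flag_actor([0.9, 0.95, 0.1, 0.2])   # half the messages are significant
benign = flag_actor([0.1, 0.2, 0.3])        # no message clears the threshold
```

Separating the two thresholds lets the message-level model stay sensitive while the actor-level aggregation suppresses one-off false positives.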
The Role of Deep Learning Regularizations on Actors in Offline RL
Pub Date : 2024-09-11 DOI: arxiv-2409.07606
Denis Tarasov, Anja Surina, Caglar Gulcehre
Deep learning regularization techniques, such as dropout, layer normalization, or weight decay, are widely adopted in the construction of modern artificial neural networks, often resulting in more robust training processes and improved generalization capabilities. However, in the domain of Reinforcement Learning (RL), the application of these techniques has been limited, usually applied to value function estimators (Hiraoka et al., 2021; Smith et al., 2022), and may result in detrimental effects. This issue is even more pronounced in offline RL settings, which bear greater similarity to supervised learning but have received less attention. Recent work in continuous offline RL has demonstrated that while we can build sufficiently powerful critic networks, the generalization of actor networks remains a bottleneck. In this study, we empirically show that applying standard regularization techniques to actor networks in offline RL actor-critic algorithms yields improvements of 6% on average across two algorithms and three different continuous D4RL domains.
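The regularizers in question are easy to state concretely. This sketch applies inverted dropout, layer normalization, and weight decay to a toy actor hidden layer; all shapes and hyperparameters are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def actor_forward(x, W, p_drop=0.1, train=True):
    """One hidden layer of a toy actor network with two of the studied
    regularizers: layer normalization over features, then inverted
    dropout (scaling kept units by 1/(1-p) so eval needs no rescaling)."""
    h = np.tanh(x @ W)
    h = (h - h.mean(-1, keepdims=True)) / (h.std(-1, keepdims=True) + 1e-5)
    if train:
        mask = rng.random(h.shape) >= p_drop
        h = h * mask / (1.0 - p_drop)
    return h

def sgd_step(W, grad, lr=1e-3, weight_decay=1e-4):
    # weight decay: shrink weights toward zero on every update
    return W - lr * (grad + weight_decay * W)

W = rng.normal(size=(4, 8))
x = rng.normal(size=(2, 4))
h_eval = actor_forward(x, W, train=False)       # deterministic eval pass
W_next = sgd_step(W, np.zeros_like(W))          # pure-decay update shrinks W
```

At evaluation time dropout is disabled and layer norm keeps each sample's hidden features zero-mean, while the weight-decay term pulls the parameters toward zero even under zero gradient.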
What to align in multimodal contrastive learning?
Pub Date : 2024-09-11 DOI: arxiv-2409.07402
Benoit Dufumier, Javiera Castillo-Navarro, Devis Tuia, Jean-Philippe Thiran
Humans perceive the world through multisensory integration, blending the information of different modalities to adapt their behavior. Contrastive learning offers an appealing solution for multimodal self-supervised learning. Indeed, by considering each modality as a different view of the same entity, it learns to align features of different modalities in a shared representation space. However, this approach is intrinsically limited, as it only learns shared or redundant information between modalities, while multimodal interactions can arise in other ways. In this work, we introduce CoMM, a Contrastive MultiModal learning strategy that enables communication between modalities in a single multimodal space. Instead of imposing cross- or intra-modality constraints, we propose to align multimodal representations by maximizing the mutual information between augmented versions of these multimodal features. Our theoretical analysis shows that shared, synergistic and unique terms of information naturally emerge from this formulation, allowing us to estimate multimodal interactions beyond redundancy. We test CoMM both in a controlled setting and in a series of real-world settings: in the former, we demonstrate that CoMM effectively captures redundant, unique and synergistic information between modalities. In the latter, CoMM learns complex multimodal interactions and achieves state-of-the-art results on six multimodal benchmarks.
Citations: 0
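CoMM's objective maximizes mutual information between augmented versions of multimodal features. A standard way to lower-bound mutual information in contrastive learning is the InfoNCE estimator, sketched below over plain Python vectors. This is a generic illustration of that estimator under the assumption that matching batch indices are positives; CoMM's exact objective and its multimodal fusion step may differ.

```python
import math

def cosine(u, v):
    """Cosine similarity of two (non-zero) feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(views_a, views_b, temperature=0.1):
    """InfoNCE loss between two batches of augmented embeddings.
    Minimizing it maximizes a lower bound on the mutual information
    between the two views; views_a[i] and views_b[i] are positives."""
    loss = 0.0
    n = len(views_a)
    for i in range(n):
        logits = [cosine(views_a[i], b) / temperature for b in views_b]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        loss += -(logits[i] - log_denom)  # cross-entropy toward index i
    return loss / n
```

When the two views of each item agree and distinct items are dissimilar, the loss approaches zero, reflecting a high mutual-information estimate.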
A Continual and Incremental Learning Approach for TinyML On-device Training Using Dataset Distillation and Model Size Adaption
Pub Date : 2024-09-11 DOI: arxiv-2409.07114
Marcus Rüb, Philipp Tuchel, Axel Sikora, Daniel Mueller-Gritschneder
A new algorithm for incremental learning in the context of Tiny Machine learning (TinyML) is presented, which is optimized for low-performance and energy efficient embedded devices. TinyML is an emerging field that deploys machine learning models on resource-constrained devices such as microcontrollers, enabling intelligent applications like voice recognition, anomaly detection, predictive maintenance, and sensor data processing in environments where traditional machine learning models are not feasible. The algorithm solves the challenge of catastrophic forgetting through the use of knowledge distillation to create a small, distilled dataset. The novelty of the method is that the size of the model can be adjusted dynamically, so that the complexity of the model can be adapted to the requirements of the task. This offers a solution for incremental learning in resource-constrained environments, where both model size and computational efficiency are critical factors. Results show that the proposed algorithm offers a promising approach for TinyML incremental learning on embedded devices. The algorithm was tested on five datasets: CIFAR10, MNIST, CORE50, HAR, and Speech Commands. The findings indicated that, despite using only 43% of the Floating Point Operations (FLOPs) of a larger fixed model, the algorithm experienced a negligible accuracy loss of just 1%. In addition, the presented method is memory efficient. While state-of-the-art incremental learning is usually very memory intensive, the method requires only 1% of the original data set.
Citations: 0
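The method above counters catastrophic forgetting with knowledge distillation. As a point of reference, the classic soft-target distillation loss — a temperature-softened KL divergence between teacher and student output distributions — can be sketched as follows. This is a generic illustration of standard distillation, not the paper's specific dataset-distillation procedure.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax (numerically stabilized)."""
    z = [l / temperature for l in logits]
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl
```

A higher temperature exposes more of the teacher's "dark knowledge" in the non-target classes; the loss is zero exactly when student and teacher logits induce the same softened distribution.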