
Latest publications in Machine Learning and Knowledge Extraction

FairCaipi: A Combination of Explanatory Interactive and Fair Machine Learning for Human and Machine Bias Reduction
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-10-18 DOI: 10.3390/make5040076
Louisa Heidrich, Emanuel Slany, Stephan Scheele, Ute Schmid
The rise of machine-learning applications in domains with critical end-user impact has led to a growing concern about the fairness of learned models, with the goal of avoiding biases that negatively impact specific demographic groups. Most existing bias-mitigation strategies adapt the importance of data instances during pre-processing. Since fairness is a contextual concept, we advocate for an interactive machine-learning approach that enables users to provide iterative feedback for model adaptation. Specifically, we propose to adapt the explanatory interactive machine-learning approach Caipi for fair machine learning. FairCaipi incorporates human feedback in the loop on predictions and explanations to improve the fairness of the model. Experimental results demonstrate that FairCaipi outperforms a state-of-the-art pre-processing bias mitigation strategy in terms of the fairness and the predictive performance of the resulting machine-learning model. We show that FairCaipi can both uncover and reduce bias in machine-learning models and allows us to detect human bias.
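The interactive loop at the heart of a Caipi-style approach can be sketched in a few lines. The following is a minimal, hypothetical simplification (plain uncertainty-based active learning with a stand-in oracle function playing the human, not the FairCaipi implementation): the model flags its most uncertain prediction, the "human" corrects it, and the model is retrained on the augmented data.

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=500):
    # Plain logistic regression via gradient descent (no bias term, for brevity).
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def predict_proba(X, w):
    return 1.0 / (1.0 + np.exp(-X @ w))

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(20, 2))
y_lab = (X_lab[:, 0] > 0).astype(float)      # ground truth: sign of feature 0
X_pool = rng.normal(size=(100, 2))           # unlabeled pool
oracle = lambda x: float(x[0] > 0)           # stand-in for human feedback

for step in range(10):
    w = train_logreg(X_lab, y_lab)
    p = predict_proba(X_pool, w)
    i = int(np.argmin(np.abs(p - 0.5)))      # most uncertain instance
    X_lab = np.vstack([X_lab, X_pool[i]])    # "human" corrects/confirms it
    y_lab = np.append(y_lab, oracle(X_pool[i]))
    X_pool = np.delete(X_pool, i, axis=0)

w = train_logreg(X_lab, y_lab)               # retrain on augmented data
acc = float(((predict_proba(X_lab, w) > 0.5) == y_lab).mean())
```

FairCaipi additionally collects feedback on *explanations* and uses a fairness criterion to select queries; this sketch only shows the query-correct-retrain skeleton.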
Citations: 0
Deep Learning Techniques for Radar-Based Continuous Human Activity Recognition
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-10-14 DOI: 10.3390/make5040075
Ruchita Mehta, Sara Sharifzadeh, Vasile Palade, Bo Tan, Alireza Daneshkhah, Yordanka Karayaneva
Human capability to perform routine tasks declines with age and age-related problems. Remote human activity recognition (HAR) is beneficial for the regular monitoring of the elderly population. This paper addresses the problem of the continuous detection of daily human activities using a mm-wave Doppler radar. In this study, two strategies have been employed: the first method uses un-equalized series of activities, whereas the second method utilizes a gradient-based strategy to equalize the series of activities. The dynamic time warping (DTW) algorithm and long short-term memory (LSTM) techniques have been implemented for the classification of un-equalized and equalized series of activities, respectively. The input for DTW was provided using three strategies. The first approach uses the pixel-level data of frames (UnSup-PLevel). In the other two strategies, a convolutional variational autoencoder (CVAE) is used to extract unsupervised encoded features (UnSup-EnLevel) and supervised encoded features (Sup-EnLevel) from the series of Doppler frames. The second approach, for the equalized data series, applies four distinct feature extraction methods: convolutional neural networks (CNN), supervised and unsupervised CVAE, and principal component analysis (PCA). The extracted features were used as input to the LSTM. This paper presents a comparative analysis of a novel supervised feature extraction pipeline, employing Sup-EnLevel-DTW and Sup-EnLevel-LSTM, against several state-of-the-art unsupervised methods, including UnSup-EnLevel-DTW, UnSup-EnLevel-LSTM, CNN-LSTM, and PCA-LSTM. The results demonstrate the superiority of the Sup-EnLevel-LSTM strategy. However, the UnSup-PLevel strategy performed surprisingly well without using annotations or frame equalization.
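DTW is what allows un-equalized (unequal-length) series to be compared directly. A minimal sketch with toy "activity" templates and a nearest-neighbor rule, not the radar pipeline itself:

```python
import numpy as np

def dtw(a, b):
    # Classic dynamic-time-warping distance between two 1-D sequences.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy "activity" templates of unequal length, classified by 1-NN over DTW,
# mirroring how series can be compared without resampling or equalization.
templates = {"walk": np.sin(np.linspace(0, 4 * np.pi, 50)),
             "sit":  np.zeros(30)}
query = np.sin(np.linspace(0, 4 * np.pi, 65))  # same activity, different pace
label = min(templates, key=lambda k: dtw(query, templates[k]))
```

Because the warping path absorbs the difference in pace, the 65-sample query still matches the 50-sample "walk" template.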
Citations: 0
Similarity-Based Framework for Unsupervised Domain Adaptation: Peer Reviewing Policy for Pseudo-Labeling
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-10-12 DOI: 10.3390/make5040074
Joel Arweiler, Cihan Ates, Jesus Cerquides, Rainer Koch, Hans-Jörg Bauer
The inherent dependency of deep learning models on labeled data is a well-known problem and one of the barriers that slow down the integration of such methods into different fields of applied sciences and engineering, in which experimental and numerical methods can easily generate a colossal amount of unlabeled data. This paper proposes an unsupervised domain adaptation methodology that mimics the peer review process to label new observations in a domain different from the training set. The approach evaluates the validity of a hypothesis using domain knowledge acquired from the training set through a similarity analysis, exploring the projected feature space to examine class centroid shifts. The methodology is tested on a binary classification problem, in which synthetic images of cubes and cylinders in different orientations are generated. In the case of a domain shift in physical feature space, the methodology improves the accuracy of the object classifier from 60% to around 90% without human labeling.
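The centroid idea can be sketched as nearest-centroid pseudo-labeling, with a rejection margin playing the role of the "reviewer" that discards ambiguous hypotheses. This is a simplified stand-in for the paper's peer-reviewing policy; the margin value and the toy shifted-domain data are assumptions:

```python
import numpy as np

def pseudo_label(X_src, y_src, X_tgt, margin=0.1):
    # Class centroids from the labeled source domain.
    classes = np.unique(y_src)
    centroids = np.stack([X_src[y_src == c].mean(axis=0) for c in classes])
    # Distance of every target sample to every centroid.
    d = np.linalg.norm(X_tgt[:, None, :] - centroids[None, :, :], axis=2)
    order = np.sort(d, axis=1)
    # "Reviewer": accept only samples whose two best hypotheses are well separated.
    accept = (order[:, 1] - order[:, 0]) > margin
    return classes[np.argmin(d, axis=1)], accept

rng = np.random.default_rng(1)
X_src = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y_src = np.array([0] * 50 + [1] * 50)
# Target domain: same classes, shifted feature distribution.
X_tgt = np.vstack([rng.normal(0.5, 0.3, (30, 2)), rng.normal(3.5, 0.3, (30, 2))])
labels, accept = pseudo_label(X_src, y_src, X_tgt)
```

Accepted pseudo-labels would then be folded back into training, which is what lifts accuracy under the domain shift.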
Citations: 0
Mssgan: Enforcing Multiple Generators to Learn Multiple Subspaces to Avoid the Mode Collapse
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-10-10 DOI: 10.3390/make5040073
Miguel S. Soriano-Garcia, Ricardo Sevilla-Escoboza, Angel Garcia-Pedrero
Generative Adversarial Networks are powerful generative models used in different areas and with multiple applications. However, this type of model suffers from a training problem called mode collapse, which causes the generator not to learn the complete distribution of the data on which it is trained. To force the network to learn the entire data distribution, MSSGAN is introduced. This model has multiple generators and distributes the training data over multiple subspaces, where each generator is enforced, with the help of a classifier, to learn only one of the groups. We demonstrate that, compared to previous models designed to avoid mode collapse, our model performs better on the FID and Sample Distribution metrics. Experimental results show how each generator learns different information and, in turn, generates samples of satisfactory quality.
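The subspace idea can be illustrated without a full GAN. The sketch below is an illustrative analogy, not the MSSGAN architecture: training data are partitioned into K groups (here by a tiny k-means in place of the classifier), and each "generator" (here a per-group Gaussian sampler) is responsible for exactly one group, so no single model has to cover every mode of the distribution.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # Minimal Lloyd's algorithm; keeps a centroid in place if its cluster empties.
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(np.linalg.norm(X[:, None] - C[None], axis=2), axis=1)
        C = np.stack([X[lab == j].mean(axis=0) if np.any(lab == j) else C[j]
                      for j in range(k)])
    return C, lab

rng = np.random.default_rng(2)
modes = [(-4, 0), (0, 4), (4, 0)]                        # three data modes
X = np.vstack([rng.normal(m, 0.3, (100, 2)) for m in modes])
centers, labels = kmeans(X, k=3)

# One per-subspace "generator" per group: sample around the fitted center.
generators = [lambda n, c=c: rng.normal(c, 0.3, (n, 2)) for c in centers]
samples = np.vstack([g(50) for g in generators])
```

In MSSGAN proper, each group trains its own adversarial generator and the classifier enforces the assignment; the partition-then-specialize structure is what the sketch shows.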
Citations: 0
Explaining Deep Q-Learning Experience Replay with SHapley Additive exPlanations
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-10-09 DOI: 10.3390/make5040072
Robert S. Sullivan, Luca Longo
Reinforcement Learning (RL) has shown promise in optimizing complex control and decision-making processes, but Deep Reinforcement Learning (DRL) lacks interpretability, limiting its adoption in regulated sectors like manufacturing, finance, and healthcare. Difficulties arise from DRL's opaque decision-making, which hinders efficiency and resource use, and this issue is amplified with every advancement. While many seek to move from Experience Replay to A3C, the latter demands more resources. Despite efforts to improve Experience Replay selection strategies, there is a tendency to keep the capacity high. We investigate training a Deep Convolutional Q-learning agent across 20 Atari games while intentionally reducing the Experience Replay capacity from 1×10⁶ to 5×10². We find that a reduction from 1×10⁴ to 5×10³ does not significantly affect rewards, offering a practical path to resource-efficient DRL. To illuminate agent decisions and align them with game mechanics, we employ a novel method: visualizing Experience Replay via the Deep SHAP Explainer. This approach fosters comprehension and transparent, interpretable explanations, though any capacity reduction must be made cautiously to avoid overfitting.
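The capacity experiments only require a replay memory that evicts its oldest transitions once full. A minimal sketch, assuming nothing about the authors' code: a deque with `maxlen` implements the bounded Experience Replay buffer, and shrinking the capacity is a one-argument change.

```python
import random
from collections import deque

class ReplayBuffer:
    """Capacity-bounded Experience Replay: oldest transitions are evicted."""

    def __init__(self, capacity):
        self.memory = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling, as in vanilla DQN replay.
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)

buf = ReplayBuffer(capacity=5_000)        # reduced from the usual 1e6
for t in range(20_000):                   # older transitions fall off the front
    buf.push(t, 0, 0.0, t + 1, False)
batch = buf.sample(32)
```

Only the most recent 5,000 transitions survive, which is exactly the memory behavior whose capacity the paper varies.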
Citations: 0
Machine Learning Method for Changepoint Detection in Short Time Series Data
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-10-05 DOI: 10.3390/make5040071
Veronika Smejkalová, Radovan Šomplák, Martin Rosecký, Kristína Šramková
Analysis of data is crucial in waste management to improve effective planning from both short- and long-term perspectives. Real-world data often present anomalies, but in the waste management sector, anomaly detection is seldom performed. The main goal and contribution of this paper is the proposal of a complex machine learning framework for changepoint detection in a large number of short time series from waste management. In such a case, an expert-based approach alone is not feasible due to the time-consuming nature of the process and its subjectivity. The proposed framework consists of two steps: (1) outliers are detected via an outlier test on trend-adjusted data, and (2) changepoints are identified via a comparison of linear model parameters. Using the proposed method requires a sufficient number of expert assessments of the presence of anomalies in the time series. The proposed framework is demonstrated on waste management data from the Czech Republic. It is observed that certain waste categories in specific regions frequently exhibit changepoints. On the micro-regional level, approximately 31.1% of time series contain at least one outlier and 16.4% exhibit changepoints. Certain groups of waste are more prone to the occurrence of anomalies. The results indicate that even in the case of aggregated data, anomalies are not rare, and their presence should always be checked.
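The two-step procedure can be sketched on a single short series. The thresholds and the slope-comparison rule below are illustrative stand-ins, not the paper's exact tests: (1) outliers are flagged on trend-adjusted residuals, and (2) linear models are fitted before and after each candidate split, with a changepoint declared when the fitted slopes differ sufficiently.

```python
import numpy as np

def detect(series, z_thresh=3.0, slope_thresh=0.5):
    t = np.arange(len(series))
    # Step 1: remove the overall linear trend and flag large residuals.
    slope, intercept = np.polyfit(t, series, 1)
    resid = series - (slope * t + intercept)
    outliers = np.where(np.abs(resid) > z_thresh * resid.std())[0]

    # Step 2: compare linear model slopes on each side of candidate splits.
    changepoints = []
    for s in range(3, len(series) - 3):
        b1 = np.polyfit(t[:s], series[:s], 1)[0]
        b2 = np.polyfit(t[s:], series[s:], 1)[0]
        if abs(b1 - b2) > slope_thresh:
            changepoints.append(s)
    return outliers, changepoints

# Piecewise-linear toy series with a slope change (kink) at index 10.
y = np.concatenate([np.linspace(0, 1, 10), np.linspace(1, 8, 10)])
outliers, cps = detect(y)
```

On this toy series the split at the kink yields slopes of roughly 1/9 and 7/9, so the changepoint is flagged while no single point is a 3-sigma outlier.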
Citations: 0
When Federated Learning Meets Watermarking: A Comprehensive Overview of Techniques for Intellectual Property Protection
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-10-04 DOI: 10.3390/make5040070
Mohammed Lansari, Reda Bellafqira, Katarzyna Kapusta, Vincent Thouvenot, Olivier Bettan, Gouenou Coatrieux
Federated learning (FL) is a technique that allows multiple participants to collaboratively train a Deep Neural Network (DNN) without the need to centralize their data. Among other advantages, it comes with privacy-preserving properties, making it attractive for application in sensitive contexts, such as health care or the military. Although the data are not explicitly exchanged, the training procedure requires sharing information about participants’ models. This makes the individual models vulnerable to theft or unauthorized distribution by malicious actors. To address the issue of ownership rights protection in the context of machine learning (ML), DNN watermarking methods have been developed during the last five years. Most existing works have focused on watermarking in a centralized manner, but only a few methods have been designed for FL and its unique constraints. In this paper, we provide an overview of recent advancements in federated learning watermarking, shedding light on the new challenges and opportunities that arise in this field.
Citations: 0
Entropy-Aware Time-Varying Graph Neural Networks with Generalized Temporal Hawkes Process: Dynamic Link Prediction in the Presence of Node Addition and Deletion
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-10-04 DOI: 10.3390/make5040069
Bahareh Najafi, Saeedeh Parsaeefard, Alberto Leon-Garcia
This paper addresses the problem of learning temporal graph representations, which capture the changing nature of complex evolving networks. Existing approaches mainly focus on adding new nodes and edges to capture dynamic graph structures. However, to achieve a more accurate representation of graph evolution, we consider both the addition and deletion of nodes and edges as events. These events occur at irregular time scales and are modeled using temporal point processes. Our goal is to learn the conditional intensity function of the temporal point process to investigate the influence of deletion events on node representation learning for link-level prediction. We incorporate network entropy, a measure of node and edge significance, to capture the effect of node deletion and edge removal in our framework. Additionally, we leverage the characteristics of a generalized temporal Hawkes process, which considers the inhibitory effects of events where past occurrences can reduce future intensity. This framework enables dynamic representation learning by effectively modeling both addition and deletion events in the temporal graph. To evaluate our approach, we utilize autonomous system graphs, a family of inhomogeneous sparse graphs with instances of node and edge additions and deletions, in a link prediction task. By integrating these enhancements into our framework, we improve the accuracy of dynamic link prediction and enable better understanding of the dynamic evolution of complex networks.
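The inhibitory effect can be made concrete with an exponential-kernel conditional intensity. This is a generic Hawkes sketch, not the paper's parameterization: each past event contributes `alpha * exp(-beta * (t - t_i))` to the base rate `mu`, a negative `alpha` models an inhibitory event (e.g., a deletion damping future activity), and the rate is clipped at zero.

```python
import numpy as np

def intensity(t, event_times, event_marks, mu=0.5, beta=1.0):
    """Generalized Hawkes conditional intensity with an exponential kernel.

    event_marks: positive alpha for excitatory (addition) events,
    negative alpha for inhibitory (deletion) events.
    """
    past = [(ti, a) for ti, a in zip(event_times, event_marks) if ti < t]
    lam = mu + sum(a * np.exp(-beta * (t - ti)) for ti, a in past)
    return max(lam, 0.0)   # intensity cannot go negative

times = [1.0, 2.0, 3.0]
marks = [0.8, 0.8, -0.6]                      # two additions, one deletion
lam_before = intensity(2.9, times, marks)     # before the inhibitory event
lam_after = intensity(3.1, times, marks)      # just after it
```

The intensity drops across the deletion event, which is exactly the "past occurrences can reduce future intensity" behavior the generalized process models.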
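The generalized temporal Hawkes intensity described above — a base rate plus decaying kernels over past events, where deletion events act inhibitorily — can be illustrated with a minimal sketch. The function, parameters, and event weights below are hypothetical stand-ins; in the paper the intensity is learned from node representations, not fixed by hand.

```python
import math

def conditional_intensity(t, history, mu=0.2, beta=1.0):
    """Generalized Hawkes intensity at time t.

    history holds (event_time, alpha) pairs: alpha > 0 excites the
    process (e.g., an edge addition), alpha < 0 inhibits it (e.g., a
    deletion). A softplus keeps the intensity non-negative even when
    inhibitory events dominate, as required of an intensity function.
    """
    raw = mu + sum(a * math.exp(-beta * (t - s)) for s, a in history if s < t)
    return math.log1p(math.exp(raw))  # softplus(raw) >= 0

# Two edge additions excite the process; a deletion at t = 2.0 suppresses it.
events = [(1.0, 0.8), (1.5, 0.8), (2.0, -1.5)]
lam_before = conditional_intensity(1.9, events)
lam_after = conditional_intensity(2.1, events)
```

Just after the inhibitory deletion event, `lam_after` falls below `lam_before` — exactly the behavior a purely excitatory (classical) Hawkes process cannot express.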
Citations: 0
Predicting the Long-Term Dependencies in Time Series Using Recurrent Artificial Neural Networks
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-10-02 DOI: 10.3390/make5040068
Cristian Ubal, Gustavo Di-Giorgi, Javier E. Contreras-Reyes, Rodrigo Salas
Long-term dependence is an essential feature for the predictability of time series. Estimating the parameter that describes long memory is essential to describing the behavior of time series models. However, most long memory estimation methods assume that this parameter has a constant value throughout the time series, and do not consider that the parameter may change over time. In this work, we propose an automated methodology that combines estimation methods for the fractional differentiation parameter (and/or Hurst parameter) with Recurrent Neural Networks (RNNs), so that these networks learn and predict long memory dependencies from information obtained in nonlinear time series. The proposal combines three methods that allow for better approximation in predicting the parameter values for each of the windows obtained, using RNNs as an adaptive method to learn and predict long memory dependencies in time series. For the RNNs, we evaluated four different architectures: the Simple RNN, LSTM, BiLSTM, and GRU. These models are built from blocks with gates controlling the cell state and memory. We evaluated the proposed approach using both synthetic and real-world datasets. For the synthetic data, we simulated ARFIMA models to generate several time series by varying the fractional differentiation parameter, and compared against Whittle's estimates of the Hurst parameter classically obtained in each window. The real-world IPSA stock option index and Tree Ring time series datasets were also evaluated. All of the results show that the proposed approach can predict the Hurst exponent with good performance by selecting the optimal window size and overlap.
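As a concrete illustration of estimating the Hurst exponent per window, here is a minimal sketch using the classical rescaled-range (R/S) estimator rather than the Whittle estimator used in the paper; the function name, the dyadic scales, and the toy series are illustrative only, and the RNN stage that learns the window-to-window dynamics is omitted.

```python
import numpy as np

def rs_hurst(x):
    """Rescaled-range (R/S) estimate of the Hurst exponent.

    Splits the series into non-overlapping blocks at dyadic lengths,
    computes the mean R/S statistic per length, and returns the slope
    of log(R/S) versus log(n), which approximates H.
    """
    x = np.asarray(x, dtype=float)
    sizes, rs = [], []
    n = 16
    while n <= len(x) // 2:
        vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())       # mean-adjusted partial sums
            r, s = z.max() - z.min(), w.std()
            if s > 0:
                vals.append(r / s)
        sizes.append(n)
        rs.append(np.mean(vals))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return float(slope)

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)   # uncorrelated increments: H near 0.5
anti = np.diff(white)               # negatively correlated: H well below 0.5
h_white, h_anti = rs_hurst(white), rs_hurst(anti)
```

A sliding-window version of the paper's pipeline would apply such an estimator to each window and feed the resulting sequence of local H values to the RNN.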
Citations: 0
Optimal Topology of Vision Transformer for Real-Time Video Action Recognition in an End-To-End Cloud Solution
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-09-29 DOI: 10.3390/make5040067
Saman Sarraf, Milton Kabia
This study introduces an optimal topology of vision transformers for real-time video action recognition in a cloud-based solution. Although model performance is a key criterion for real-time video analysis use cases, inference latency plays a more crucial role in adopting such technology in real-world scenarios. Our objective is to reduce the inference latency of the solution while keeping the vision transformer's performance within acceptable limits. Thus, we employed the optimal cloud components as the foundation of our machine learning pipeline and optimized the topology of vision transformers. We utilized the UCF101 action recognition benchmark of video clips. The modeling pipeline consists of a preprocessing module to extract frames from video clips, training two-dimensional (2D) vision transformer models, and deep learning baselines. The pipeline also includes a postprocessing step that aggregates the frame-level predictions to generate the video-level predictions at inference. The results demonstrate that our optimal vision transformer model, with an input dimension of 56 × 56 × 3 and eight attention heads, produces an F1 score of 91.497% on the testing set. The optimized vision transformer reduces the inference latency by 40.70%, measured through a batch-processing approach, with a 55.63% faster training time than the baseline. Lastly, we developed an enhanced skip-frame approach that improves the inference latency by finding an optimal ratio of frames for prediction at inference, further reducing the inference latency by 57.15%. This study reveals that the vision transformer model is highly optimizable for inference latency while maintaining model performance.
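The postprocessing aggregation and the skip-frame idea can be sketched in a few lines; the function names and toy class scores below are illustrative, not the paper's cloud implementation.

```python
def skip_frames(frame_scores, skip=2):
    """Keep every `skip`-th frame, cutting inference cost roughly by a factor of skip."""
    return frame_scores[::skip]

def video_prediction(frame_scores):
    """Aggregate frame-level class scores into one video-level label
    by averaging scores per class and taking the argmax."""
    n_classes = len(frame_scores[0])
    mean = [sum(f[c] for f in frame_scores) / len(frame_scores)
            for c in range(n_classes)]
    return max(range(n_classes), key=mean.__getitem__)

# Toy per-frame softmax scores for a 6-frame clip over 3 action classes.
scores = [[0.1, 0.7, 0.2], [0.2, 0.6, 0.2], [0.3, 0.5, 0.2],
          [0.1, 0.8, 0.1], [0.2, 0.7, 0.1], [0.1, 0.6, 0.3]]
full_label = video_prediction(scores)               # uses all 6 frames
fast_label = video_prediction(skip_frames(scores))  # uses 3 frames
```

When frame-level predictions are stable across a clip, skipping frames halves latency without changing the video-level label; the paper searches for the skip ratio that preserves accuracy.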
Citations: 1