
Transactions on machine learning research: Latest Publications

Federated Learning with Convex Global and Local Constraints.
Pub Date: 2024-01-01, Epub Date: 2024-05-03
Chuan He, Le Peng, Ju Sun

In practice, many machine learning (ML) problems come with constraints, and their applied domains involve distributed sensitive data that cannot be shared with others, e.g., in healthcare. Collaborative learning in such practical scenarios entails federated learning (FL) for ML problems with constraints, or FL with constraints for short. Despite the extensive developments of FL techniques in recent years, these techniques only deal with unconstrained FL problems or FL problems with simple constraints that are amenable to easy projections. There is little work dealing with FL problems with general constraints. To fill this gap, we take the first step toward building an algorithmic framework for solving FL problems with general constraints. In particular, we propose a new FL algorithm for constrained ML problems based on the proximal augmented Lagrangian (AL) method. Assuming convex objective and convex constraints plus other mild conditions, we establish the worst-case complexity of the proposed algorithm. Our numerical experiments show the effectiveness of our algorithm in performing Neyman-Pearson classification and fairness-aware learning with nonconvex constraints, in an FL setting.
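
The abstract does not detail the algorithm, but the overall shape of a proximal augmented Lagrangian step inside a federated loop can be sketched. Below is a minimal NumPy sketch under strong simplifying assumptions: quadratic client losses, a single shared affine inequality constraint, and plain averaging of client solutions. All names and constants are illustrative, not the authors' implementation.

```python
import numpy as np

# Toy setup (illustrative, not from the paper): client i holds
# f_i(x) = 0.5 * ||A_i x - b_i||^2, and all clients share one affine
# inequality constraint c(x) = g @ x - h <= 0.
rng = np.random.default_rng(0)
d, n_clients = 5, 3
clients = [(rng.normal(size=(20, d)), rng.normal(size=20)) for _ in range(n_clients)]
g, h = rng.normal(size=d), 0.5

def constraint(x):
    return g @ x - h

def local_al_grad(x, A, b, lam, rho, anchor, eta):
    """Gradient of one client's proximal augmented Lagrangian."""
    grad = A.T @ (A @ x - b)                      # local loss gradient
    slack = max(0.0, constraint(x) + lam / rho)   # active part of the AL term
    grad += rho * slack * g                       # d/dx of (rho/2) * slack**2
    grad += (x - anchor) / eta                    # proximal term around last global iterate
    return grad

x, lam, rho, eta = np.zeros(d), 0.0, 10.0, 1.0
for rnd in range(50):                             # outer (server) rounds
    updates = []
    for A, b in clients:                          # clients solve subproblems inexactly
        xi = x.copy()
        for _ in range(25):
            xi -= 0.01 * local_al_grad(xi, A, b, lam, rho, x, eta)
        updates.append(xi)
    x = np.mean(updates, axis=0)                  # server aggregates
    lam = max(0.0, lam + rho * constraint(x))     # multiplier update for c(x) <= 0
print("constraint value:", round(float(constraint(x)), 4))
```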

Citations: 0
Online model selection by learning how compositional kernels evolve.
Eura Shin, Predrag Klasnja, Susan A Murphy, Finale Doshi-Velez

Motivated by the need for efficient, personalized learning in mobile health, we investigate the problem of online compositional kernel selection for multi-task Gaussian Process regression. Existing composition selection methods do not satisfy our strict criteria in health; selection must occur quickly, and the selected kernels must maintain the appropriate level of complexity, sparsity, and stability as data arrives online. We introduce the Kernel Evolution Model (KEM), a generative process on how to evolve kernel compositions in a way that manages the bias-variance trade-off as we observe more data about a user. Using pilot data, we learn a set of kernel evolutions that can be used to quickly select kernels for new test users. KEM reliably selects high-performing kernels for a range of synthetic and real data sets, including two health data sets.
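
KEM's learned kernel evolutions are beyond a short snippet, but the search space it operates over — compositional Gaussian Process kernels scored by data fit — can be illustrated with a standard greedy marginal-likelihood search. A scikit-learn sketch follows; this is the generic baseline such methods improve on, not KEM itself, and the data and search depth are toy choices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, DotProduct, ExpSineSquared

# Toy stream standing in for one user's data.
rng = np.random.default_rng(1)
X = rng.uniform(0, 5, size=(40, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=40)

base_kernels = [RBF(), DotProduct(), ExpSineSquared()]

def score(kernel):
    """Log marginal likelihood of a GP fit with this composition."""
    gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-2).fit(X, y)
    return gp.log_marginal_likelihood_value_

current, best = base_kernels[0], score(base_kernels[0])
for _ in range(3):                                # depth of the composition search
    # Grow the composition by adding or multiplying in a base kernel.
    candidates = [current + k for k in base_kernels] + \
                 [current * k for k in base_kernels]
    top_score, top_kernel = max(((score(k), k) for k in candidates),
                                key=lambda t: t[0])
    if top_score <= best:
        break                                     # no composition improves the fit
    current, best = top_kernel, top_score
print("selected kernel:", current)
```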

Citations: 0
Beyond Distribution Shift: Spurious Features Through the Lens of Training Dynamics.
Nihal Murali, Aahlad Puli, Ke Yu, Rajesh Ranganath, Kayhan Batmanghelich

Deep Neural Networks (DNNs) are prone to learning spurious features that correlate with the label during training but are irrelevant to the learning problem. This hurts model generalization and poses problems when deploying them in safety-critical applications. This paper aims to better understand the effects of spurious features through the lens of the learning dynamics of the internal neurons during the training process. We make the following observations: (1) While previous works highlight the harmful effects of spurious features on the generalization ability of DNNs, we emphasize that not all spurious features are harmful. Spurious features can be "benign" or "harmful" depending on whether they are "harder" or "easier" to learn than the core features for a given model. This definition is model and dataset dependent. (2) We build upon this premise and use instance difficulty methods (like Prediction Depth (Baldock et al., 2021)) to quantify "easiness" for a given model and to identify this behavior during the training phase. (3) We empirically show that the harmful spurious features can be detected by observing the learning dynamics of the DNN's early layers. In other words, easy features learned by the initial layers of a DNN early during the training can (potentially) hurt model generalization. We verify our claims on medical and vision datasets, both simulated and real, and justify the empirical success of our hypothesis by showing the theoretical connections between Prediction Depth and information-theoretic concepts like 𝒱-usable information (Ethayarajh et al., 2021). Lastly, our experiments show that monitoring only accuracy during training (as is common in machine learning pipelines) is insufficient to detect spurious features. We, therefore, highlight the need for monitoring early training dynamics using suitable instance difficulty metrics.
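
Observation (2) leans on instance-difficulty scores such as Prediction Depth. A rough sketch of how such a probe can be computed — k-NN classifiers attached after each block of a toy MLP, with an example's depth being the first layer from which all later probes agree with the network's final prediction — is below. The network is untrained and all names are illustrative; details differ from Baldock et al. (2021).

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.neighbors import KNeighborsClassifier

torch.manual_seed(0)
X = torch.randn(600, 10)
y = (X[:, 0] * X[:, 1] > 0).long()             # toy labels
blocks = nn.ModuleList([
    nn.Sequential(nn.Linear(10, 32), nn.ReLU()),
    nn.Sequential(nn.Linear(32, 32), nn.ReLU()),
    nn.Sequential(nn.Linear(32, 32), nn.ReLU()),
])
head = nn.Linear(32, 2)
# (a real use trains blocks + head first; probes on an untrained net
#  are placeholders to keep the sketch short)

split = 500                                    # probe-train / probe-eval split
with torch.no_grad():
    h, probe_preds = X, []
    for blk in blocks:                         # k-NN probe after every block
        h = blk(h)
        feats = h.numpy()
        knn = KNeighborsClassifier(n_neighbors=30)
        knn.fit(feats[:split], y[:split].numpy())
        probe_preds.append(knn.predict(feats[split:]))
    final_pred = head(h[split:]).argmax(1).numpy()

agree = np.stack(probe_preds) == final_pred    # (n_layers, n_eval) agreement
# Depth of an example = first layer from which every later probe agrees.
depth = np.array([next((l for l in range(len(blocks)) if agree[l:, i].all()),
                       len(blocks)) for i in range(agree.shape[1])])
print("mean prediction depth:", depth.mean())
```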

Citations: 0
RIFLE: Imputation and Robust Inference from Low Order Marginals.
Sina Baharlouei, Kelechi Ogudu, Sze-Chuan Suen, Meisam Razaviyayn

The ubiquity of missing values in real-world datasets poses a challenge for statistical inference and can prevent similar datasets from being analyzed in the same study, precluding many existing datasets from being used for new analyses. While an extensive collection of packages and algorithms have been developed for data imputation, the overwhelming majority perform poorly if there are many missing values and low sample sizes, which are unfortunately common characteristics in empirical data. Such low-accuracy estimations adversely affect the performance of downstream statistical models. We develop a statistical inference framework for regression and classification in the presence of missing data without imputation. Our framework, RIFLE (Robust InFerence via Low-order moment Estimations), estimates low-order moments of the underlying data distribution with corresponding confidence intervals to learn a distributionally robust model. We specialize our framework to linear regression and normal discriminant analysis, and we provide convergence and performance guarantees. This framework can also be adapted to impute missing data. In numerical experiments, we compare RIFLE to several state-of-the-art approaches (including MICE, Amelia, MissForest, KNN-imputer, MIDA, and Mean Imputer) for imputation and inference in the presence of missing values. Our experiments demonstrate that RIFLE outperforms other benchmark algorithms when the percentage of missing values is high and/or when the number of data points is relatively small. RIFLE is publicly available at https://github.com/optimization-for-data-driven-science/RIFLE.
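
The core idea — fitting a model directly from low-order moments estimated on incomplete data, rather than imputing — can be sketched for linear regression. The distributionally robust min-max over moment confidence intervals, which is RIFLE's actual contribution, is omitted here; this only shows moment-based normal equations on pairwise-observed entries, with synthetic MCAR data and illustrative constants.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 1000, 4
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=n)
Z = np.column_stack([X, y])
mask = rng.random(Z.shape) < 0.6              # keep ~60% of entries (MCAR)
Z_obs = np.where(mask, Z, np.nan)

def pairwise_second_moment(Z_obs):
    """E[z_i z_j] estimated from rows where both entries are observed."""
    p = Z_obs.shape[1]
    C = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            both = ~np.isnan(Z_obs[:, i]) & ~np.isnan(Z_obs[:, j])
            C[i, j] = np.mean(Z_obs[both, i] * Z_obs[both, j])
    return C

C = pairwise_second_moment(Z_obs)
Cxx, cxy = C[:d, :d], C[:d, d]
# Normal equations built from moments alone -- no row is ever imputed.
theta = np.linalg.solve(Cxx + 1e-3 * np.eye(d), cxy)
print("recovered coefficients:", np.round(theta, 2))
```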

Citations: 0
On the Convergence and Calibration of Deep Learning with Differential Privacy.
Zhiqi Bu, Hua Wang, Zongyu Dai, Qi Long

Differentially private (DP) training usually preserves data privacy at the cost of slower convergence (and thus lower accuracy), as well as more severe mis-calibration than its non-private counterpart. To analyze the convergence of DP training, we formulate a continuous time analysis through the lens of the neural tangent kernel (NTK), which characterizes the per-sample gradient clipping and the noise addition in DP training, for arbitrary network architectures and loss functions. Interestingly, we show that the noise addition only affects the privacy risk but not the convergence or calibration, whereas the per-sample gradient clipping (under both flat and layerwise clipping styles) only affects the convergence and calibration. Furthermore, we observe that DP models trained with a small clipping norm usually achieve the best accuracy but are poorly calibrated and thus unreliable. In sharp contrast, DP models trained with a large clipping norm enjoy the same privacy guarantee and similar accuracy, but are significantly more calibrated. Our code can be found at https://github.com/woodyx218/opacus_global_clipping.
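
For context, the two mechanisms the analysis separates — per-sample gradient clipping and noise addition — appear in every DP-SGD-style update. A minimal NumPy sketch of flat clipping for logistic regression follows; constants are illustrative, and the clipping norm C is the knob whose size the abstract links to the accuracy/calibration trade-off.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 512, 10
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(float)

def per_sample_grads(w, Xb, yb):
    """Per-sample logistic-loss gradients, shape (batch, d)."""
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    return (p - yb)[:, None] * Xb

w, lr, C, sigma, batch = np.zeros(d), 0.5, 1.0, 1.0, 64
for step in range(200):
    idx = rng.choice(n, batch, replace=False)
    g = per_sample_grads(w, X[idx], y[idx])
    # Flat clipping: rescale each sample's gradient to norm <= C.
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g * np.minimum(1.0, C / np.maximum(norms, 1e-12))
    # Gaussian noise calibrated to the clipping norm C.
    noisy_sum = g.sum(axis=0) + sigma * C * rng.normal(size=d)
    w -= lr * noisy_sum / batch
print("train accuracy:", ((X @ w > 0) == (y > 0.5)).mean())
```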

Citations: 0
Traditional Machine Learning Models for Building Energy Performance Prediction: A Comparative Research
Pub Date: 2023-05-29, DOI: 10.11648/j.mlr.20230801.11
Zeyu Wu, Hongyang He
A large proportion of total energy consumption is caused by buildings. Accurately predicting the heating and cooling demand of a building is crucial in the initial design phase in order to determine the most efficient solution from various designs. In this paper, in order to explore the effectiveness of basic machine learning algorithms on this problem, different machine learning models were used to estimate the heating and cooling loads of buildings, utilising data on the energy efficiency of buildings. Notably, this paper also discusses the performance of deep neural network prediction models and concludes that among traditional machine learning algorithms, GradientBoostingRegressor achieves better predictions, with heating prediction reaching 0.998553. Compared with GradientBoostingRegressor, our machine learning algorithm HB-Regressor achieves higher prediction accuracy, reaching 0.998672 and 0.995153 for heating and cooling respectively, but its fitting speed is not as fast as the GradientBoostingRegressor algorithm.
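
For reference, a scikit-learn baseline of the kind the paper compares is short. The sketch below uses synthetic stand-in data; the study presumably uses a building energy-efficiency dataset with features such as surface area and glazing, and its reported figures appear to be R²-style scores.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for the 8 building-energy features and heating load.
X, y_heat = make_regression(n_samples=768, n_features=8, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y_heat, random_state=0)

gbr = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
gbr.fit(X_tr, y_tr)
print("heating-load R^2:", r2_score(y_te, gbr.predict(X_te)))
```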
Citations: 0
Automatic Indexing of Digital Objects Through Learning from User Data
Pub Date: 2023-01-31, DOI: 10.11648/j.mlr.20220702.12
C. Leung, Yuanxi Li
{"title":"Automatic Indexing of Digital Objects Through Learning from User Data","authors":"C. Leung, Yuanxi Li","doi":"10.11648/j.mlr.20220702.12","DOIUrl":"https://doi.org/10.11648/j.mlr.20220702.12","url":null,"abstract":"","PeriodicalId":75238,"journal":{"name":"Transactions on machine learning research","volume":"59 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84342864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts.
Haotao Wang, Junyuan Hong, Jiayu Zhou, Zhangyang Wang

Increasing concerns have been raised on deep learning fairness in recent years. Existing fairness-aware machine learning methods mainly focus on the fairness of in-distribution data. However, in real-world applications, it is common to have distribution shift between the training and test data. In this paper, we first show that the fairness achieved by existing methods can be easily broken by slight distribution shifts. To solve this problem, we propose a novel fairness learning method termed CUrvature MAtching (CUMA), which can achieve robust fairness generalizable to unseen domains with unknown distributional shifts. Specifically, CUMA enforces the model to have similar generalization ability on the majority and minority groups, by matching the loss curvature distributions of the two groups. We evaluate our method on three popular fairness datasets. Compared with existing methods, CUMA achieves superior fairness under unseen distribution shifts, without sacrificing either the overall accuracy or the in-distribution fairness.
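
CUMA's key ingredient is a penalty that aligns loss-curvature statistics between the majority and minority groups. A crude PyTorch sketch — using a Hessian-vector-product norm as the curvature proxy and matching only the two group means, rather than the full curvature distributions as in the paper — might look like this, on toy data with illustrative weights:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 10
X = torch.randn(400, d)
group = (torch.rand(400) < 0.8).long()        # 1 = majority, 0 = minority
y = ((X[:, 0] + 0.5 * group * X[:, 1]) > 0).float()
model = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()

def group_curvature(mask):
    """Squared Hessian-vector-product norm on one group's loss,
    a cheap proxy for loss curvature (kept differentiable)."""
    params = list(model.parameters())
    loss = loss_fn(model(X[mask]).squeeze(-1), y[mask])
    g = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]  # random probe direction
    gv = sum((gi * vi).sum() for gi, vi in zip(g, v))
    hv = torch.autograd.grad(gv, params, create_graph=True)  # HVP via double backprop
    return sum(h.pow(2).sum() for h in hv)

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(200):
    task = loss_fn(model(X).squeeze(-1), y)
    c_maj = group_curvature(group == 1)
    c_min = group_curvature(group == 0)
    loss = task + 0.1 * (c_maj - c_min).pow(2)  # align curvature across groups
    opt.zero_grad()
    loss.backward()
    opt.step()
```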

Citations: 0
Estimating Potential Outcome Distributions with Collaborating Causal Networks.
Tianhui Zhou, William E Carson, David Carlson

Traditional causal inference approaches leverage observational study data to estimate the difference in observed (factual) and unobserved (counterfactual) outcomes for a potential treatment, known as the Conditional Average Treatment Effect (CATE). However, CATE corresponds to the comparison on the first moment alone, and as such may be insufficient in reflecting the full picture of treatment effects. As an alternative, estimating the full potential outcome distributions could provide greater insights. However, existing methods for estimating treatment effect potential outcome distributions often impose restrictive or overly-simplistic assumptions about these distributions. Here, we propose Collaborating Causal Networks (CCN), a novel methodology which goes beyond the estimation of CATE alone by learning the full potential outcome distributions. Estimation of outcome distributions via the CCN framework does not require restrictive assumptions of the underlying data generating process (e.g. Gaussian errors). Additionally, our proposed method facilitates estimation of the utility of each possible treatment and permits individual-specific variation through utility functions (e.g. risk tolerance variability). CCN not only extends outcome estimation beyond traditional risk difference, but also enables a more comprehensive decision making process through definition of flexible comparisons. Under assumptions commonly made in the causal inference literature, we show that CCN learns distributions that asymptotically capture the correct potential outcome distributions. Furthermore, we propose an adjustment approach that is empirically effective in alleviating sample imbalance between treatment groups in observational studies. Finally, we evaluate the performance of CCN in multiple experiments on both synthetic and semi-synthetic data. We demonstrate that CCN learns improved distribution estimates compared to existing Bayesian and deep generative methods as well as improved decisions with respects to a variety of utility functions.
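
CCN itself is more involved, but the step beyond CATE that the abstract emphasizes — estimating a full potential-outcome distribution per treatment arm, not just a mean difference — can be illustrated with a generic per-arm quantile regressor trained with the pinball loss. This is not the authors' method; the randomized toy data and sizes are illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 2000
x = torch.randn(n, 3)
t = (torch.rand(n) < 0.5).long()              # randomized binary treatment
# Outcomes whose spread, not just mean, depends on treatment.
y = x[:, 0] + t * 0.5 + (1 + t) * 0.5 * torch.randn(n)

taus = torch.linspace(0.05, 0.95, 19)
nets = [nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, len(taus)))
        for _ in range(2)]                    # one quantile network per arm

def pinball(pred, target, taus):
    """Quantile (pinball) loss averaged over all requested quantiles."""
    diff = target[:, None] - pred
    return torch.mean(torch.maximum(taus * diff, (taus - 1) * diff))

opt = torch.optim.Adam([p for net in nets for p in net.parameters()], lr=1e-2)
for step in range(500):
    loss = sum(pinball(nets[a](x[t == a]), y[t == a], taus) for a in (0, 1))
    opt.zero_grad()
    loss.backward()
    opt.step()

x0 = torch.zeros(1, 3)                        # potential-outcome quantiles at one covariate value
q0, q1 = nets[0](x0).squeeze(), nets[1](x0).squeeze()
print("0.15-0.85 spread, treated:", (q1[-3] - q1[2]).item(),
      "control:", (q0[-3] - q0[2]).item())
```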

Citations: 0
How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts
Pub Date: 2022-07-04, DOI: 10.48550/arXiv.2207.01168
Haotao Wang, Junyuan Hong, Jiayu Zhou, Zhangyang Wang
Increasing concerns have been raised on deep learning fairness in recent years. Existing fairness-aware machine learning methods mainly focus on the fairness of in-distribution data. However, in real-world applications, it is common to have distribution shift between the training and test data. In this paper, we first show that the fairness achieved by existing methods can be easily broken by slight distribution shifts. To solve this problem, we propose a novel fairness learning method termed CUrvature MAtching (CUMA), which can achieve robust fairness generalizable to unseen domains with unknown distributional shifts. Specifically, CUMA enforces the model to have similar generalization ability on the majority and minority groups, by matching the loss curvature distributions of the two groups. We evaluate our method on three popular fairness datasets. Compared with existing methods, CUMA achieves superior fairness under unseen distribution shifts, without sacrificing either the overall accuracy or the in-distribution fairness.
Citations: 5