
Latest Publications: Journal of Machine Learning Research

An l∞ Eigenvector Perturbation Bound and Its Application to Robust Covariance Estimation.
IF 4.3 | Computer Science (CAS Tier 3) | Q1 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2018-04-01
Jianqing Fan, Weichen Wang, Yiqiao Zhong

In statistics and machine learning, we are interested in the eigenvectors (or singular vectors) of certain matrices (e.g., covariance matrices or data matrices). However, those matrices are usually perturbed by noise or statistical errors, either from random sampling or structural patterns. The Davis-Kahan sin θ theorem is often used to bound the difference between the eigenvectors of a matrix A and those of a perturbed matrix Ã = A + E, in terms of the l2 norm. In this paper, we prove that when A is a low-rank and incoherent matrix, the l∞ norm perturbation bound of singular vectors (or eigenvectors in the symmetric case) is smaller by a factor of √d1 or √d2 for left and right vectors, where d1 and d2 are the matrix dimensions. The power of this new perturbation result is shown in robust covariance estimation, particularly when random variables have heavy tails. There, we propose new robust covariance estimators and establish their asymptotic properties using the newly developed perturbation bound. Our theoretical results are verified through extensive numerical experiments.

Citations: 0
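The √d gain of the entrywise bound over the classical l2 (Davis-Kahan) control can be seen numerically. The sketch below is illustrative only, not the paper's estimator: it perturbs a rank-one incoherent matrix with symmetric noise and compares the l2 and l∞ eigenvector errors.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 400

# Rank-one, incoherent A: a random dense unit eigenvector has entries ~ 1/sqrt(d).
u = rng.standard_normal(d)
u /= np.linalg.norm(u)
A = 10.0 * np.outer(u, u)            # eigenvalue 10, all others 0

# Symmetric noise perturbation E, giving A~ = A + E.
E = 0.1 * rng.standard_normal((d, d))
E = (E + E.T) / 2
A_tilde = A + E

# Leading eigenvector of A~, sign-aligned with u.
v = np.linalg.eigh(A_tilde)[1][:, -1]
v *= np.sign(v @ u)

err_l2 = np.linalg.norm(v - u)       # what Davis-Kahan controls
err_linf = np.max(np.abs(v - u))     # the entrywise error the paper bounds
print(err_l2, err_linf, err_l2 / np.sqrt(d))
```

Because the error vector is delocalized here, the printed l∞ error sits near err_l2/√d rather than near err_l2.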
Simultaneous Clustering and Estimation of Heterogeneous Graphical Models.
IF 6 | Computer Science (CAS Tier 3) | Q1 Mathematics | Pub Date: 2018-04-01
Botao Hao, Will Wei Sun, Yufeng Liu, Guang Cheng

We consider joint estimation of multiple graphical models arising from heterogeneous and high-dimensional observations. Unlike most previous approaches, which assume that the cluster structure is given in advance, an appealing feature of our method is to learn cluster structure while estimating heterogeneous graphical models. This is achieved via a high-dimensional version of the Expectation Conditional Maximization (ECM) algorithm (Meng and Rubin, 1993). A joint graphical lasso penalty is imposed on the conditional maximization step to extract both homogeneity and heterogeneity components across all clusters. Our algorithm is computationally efficient due to fast sparse learning routines and can be implemented without unsupervised learning knowledge. The superior performance of our method is demonstrated by extensive experiments, and its application to a Glioblastoma cancer dataset reveals new insights into understanding Glioblastoma. In theory, a non-asymptotic error bound is established for the output directly from our high-dimensional ECM algorithm, and it consists of two quantities: statistical error (statistical accuracy) and optimization error (computational complexity). Such a result gives a theoretical guideline for terminating our ECM iterations.

Citations: 0
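The E-step / conditional-maximization alternation can be sketched in a few lines. Everything below is a toy stand-in under stated assumptions: sklearn's per-cluster `GraphicalLasso` replaces the paper's joint graphical lasso penalty, and `ecm_cluster_graphs` is an illustrative name, not the authors' code.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def ecm_cluster_graphs(X, K=2, alpha=0.1, n_iter=10, seed=0):
    """Toy ECM loop: the E-step soft-assigns points to clusters, the CM-step
    refits each cluster's sparse precision matrix. GraphicalLasso stands in
    for the joint graphical lasso penalty used in the paper."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    resp = rng.dirichlet(np.ones(K), size=n)          # random soft assignments
    for _ in range(n_iter):
        pis, models, mus = resp.mean(axis=0), [], []
        for k in range(K):
            w = resp[:, k] / resp[:, k].sum()
            mu = w @ X
            # Scale rows so that Xc.T @ Xc / n equals the weighted covariance.
            Xc = (X - mu) * np.sqrt(n * w)[:, None]
            models.append(GraphicalLasso(alpha=alpha, assume_centered=True).fit(Xc))
            mus.append(mu)
        # E-step: Gaussian log-densities under each cluster's fitted model.
        logp = np.empty((n, K))
        for k in range(K):
            diff = X - mus[k]
            quad = np.einsum('ij,jk,ik->i', diff, models[k].precision_, diff)
            logdet = np.linalg.slogdet(models[k].precision_)[1]
            logp[:, k] = np.log(pis[k]) + 0.5 * logdet - 0.5 * quad
        logp -= logp.max(axis=1, keepdims=True)
        resp = np.exp(logp)
        resp /= resp.sum(axis=1, keepdims=True)
    return resp, models

# Two well-separated Gaussian clusters.
rng = np.random.default_rng(1)
X_demo = np.vstack([rng.normal(0, 1, (60, 5)), rng.normal(5, 1, (60, 5))])
resp, models = ecm_cluster_graphs(X_demo, K=2, alpha=0.2, n_iter=5, seed=1)
```

The paper's joint penalty would couple the K precision matrices across clusters; here each is fitted independently, which keeps the sketch short at the cost of the shared-structure component.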
A Robust Learning Approach for Regression Models Based on Distributionally Robust Optimization.
IF 6 | Computer Science (CAS Tier 3) | Q1 Mathematics | Pub Date: 2018-01-01
Ruidi Chen, Ioannis Ch Paschalidis

We present a Distributionally Robust Optimization (DRO) approach to estimating a robustified regression plane in a linear regression setting when the observed samples are potentially contaminated with adversarially corrupted outliers. Our approach mitigates the impact of outliers by hedging against a family of probability distributions on the observed data, some of which assign very low probabilities to the outliers. The set of distributions under consideration is close to the empirical distribution in the sense of the Wasserstein metric. We show that this DRO formulation can be relaxed to a convex optimization problem which encompasses a class of models. By selecting proper norm spaces for the Wasserstein metric, we are able to recover several commonly used regularized regression models. We provide new insights into the regularization term and give guidance on the selection of the regularization coefficient from the standpoint of a confidence region. We establish two types of performance guarantees for the solution to our formulation under mild conditions. One is related to its out-of-sample behavior (prediction bias), and the other concerns the discrepancy between the estimated and true regression planes (estimation bias). Extensive numerical results demonstrate the superiority of our approach over a host of regression models in terms of prediction and estimation accuracy. We also consider the application of our robust learning procedure to outlier detection, and show that our approach achieves a much higher AUC (Area Under the ROC Curve) than M-estimation (Huber, 1964, 1973).

Citations: 0
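One concrete instance of the convex relaxation described above is an l1-loss regression with a norm penalty on the coefficients. The sketch below is illustrative, not the paper's formulation: the penalty form is assumed, and a crude subgradient solver stands in for the convex programming package one would use in practice.

```python
import numpy as np

def dro_lad(X, y, eps=0.05, lr=0.1, n_iter=5000):
    """Subgradient descent on (1/n)*||y - X@beta||_1 + eps*||beta||_2,
    a norm-regularized form of the kind the DRO problem relaxes to."""
    n, d = X.shape
    beta = np.zeros(d)
    best, best_obj = beta.copy(), np.inf
    for t in range(n_iter):
        r = y - X @ beta
        g = -(X.T @ np.sign(r)) / n              # subgradient of the l1 loss
        nb = np.linalg.norm(beta)
        if nb > 0:
            g = g + eps * beta / nb              # subgradient of the penalty
        beta = beta - (lr / np.sqrt(t + 1)) * g  # diminishing step size
        obj = np.abs(y - X @ beta).mean() + eps * np.linalg.norm(beta)
        if obj < best_obj:                       # keep the best iterate
            best, best_obj = beta.copy(), obj
    return best

# Adversarially corrupted outliers: the robust fit resists them, OLS does not.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(200)
y[:20] += 100.0                                  # 10% gross outliers
beta_dro = dro_lad(X, y)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```

On this toy data the l1-loss fit lands near `beta_true` while ordinary least squares is pulled far off by the corrupted responses.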
A constructive approach to L0 penalized regression
IF 6 | Computer Science (CAS Tier 3) | Q1 Mathematics | Pub Date: 2018-01-01 | DOI: 10.5555/3291125.3291135
Jian Huang, Yuling Jiao, Yanyan Liu, Xiliang Lu
We propose a constructive approach to estimating sparse, high-dimensional linear regression models. The approach is a computational algorithm motivated by the KKT conditions for the l0-penalized ...
Citations: 45
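KKT-motivated schemes for the l0 penalty are closely related to hard-thresholding iterations. The following iterative hard thresholding (IHT) sketch is a standard algorithm in that spirit, not the paper's exact procedure; step size and data are illustrative.

```python
import numpy as np

def iht(X, y, s, n_iter=300, step=None):
    """Iterative hard thresholding for l0-constrained least squares: take a
    gradient step, then keep only the s largest coefficients in magnitude."""
    n, d = X.shape
    if step is None:
        step = 1.0 / n       # suits the standardized Gaussian design below
    beta = np.zeros(d)
    for _ in range(n_iter):
        beta = beta + step * (X.T @ (y - X @ beta))
        small = np.argsort(np.abs(beta))[:-s]    # indices outside the top s
        beta[small] = 0.0
    return beta

# Sparse recovery demo: d >> n, 5 nonzero coefficients.
rng = np.random.default_rng(0)
n, d, s = 100, 300, 5
X = rng.standard_normal((n, d))
beta_true = np.zeros(d)
beta_true[:s] = [3.0, -2.0, 4.0, 1.5, -3.0]
y = X @ beta_true + 0.01 * rng.standard_normal(n)
beta_hat = iht(X, y, s)
```

The thresholding step is where the l0 geometry enters: the iterate is projected onto the set of s-sparse vectors after every gradient step.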
Invariant models for causal transfer learning
IF 6 | Computer Science (CAS Tier 3) | Q1 Mathematics | Pub Date: 2018-01-01 | DOI: 10.5555/3291125.3291161
Mateo Rojas-Carulla, Bernhard Schölkopf, Richard Turner, Jonas Peters
Methods of transfer learning try to combine knowledge from several related tasks (or domains) to improve performance on a test task. Inspired by causal methodology, we relax the usual covariate shi...
Citations: 1
Auto-WEKA 2.0: Automatic model selection and hyperparameter optimization in WEKA
IF 6 | Computer Science (CAS Tier 3) | Q1 Mathematics | Pub Date: 2017-01-01 | DOI: 10.1007/978-3-030-05318-5_4
Lars Kotthoff, C. Thornton, H. Hoos, F. Hutter, Kevin Leyton-Brown
Citations: 644
Learning Scalable Deep Kernels with Recurrent Structure.
IF 6 | Computer Science (CAS Tier 3) | Q1 Mathematics | Pub Date: 2017-01-01
Maruan Al-Shedivat, Andrew Gordon Wilson, Yunus Saatchi, Zhiting Hu, Eric P Xing

Many applications in speech, robotics, finance, and biology deal with sequential data, where ordering matters and recurrent structures are common. However, this structure cannot be easily captured by standard kernel functions. To model such structure, we propose expressive closed-form kernel functions for Gaussian processes. The resulting model, GP-LSTM, fully encapsulates the inductive biases of long short-term memory (LSTM) recurrent networks, while retaining the non-parametric probabilistic advantages of Gaussian processes. We learn the properties of the proposed kernels by optimizing the Gaussian process marginal likelihood using a new provably convergent semi-stochastic gradient procedure, and exploit the structure of these kernels for scalable training and prediction. This approach provides a practical representation for Bayesian LSTMs. We demonstrate state-of-the-art performance on several benchmarks, and thoroughly investigate a consequential autonomous driving application, where the predictive uncertainties provided by GP-LSTM are uniquely valuable.

Citations: 0
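A deep kernel of the kind described can be sketched as a base RBF kernel applied to recurrent embeddings of the input sequences. Everything below is a toy stand-in: a plain tanh RNN with fixed random weights replaces the trained LSTM, and all names are illustrative.

```python
import numpy as np

def rnn_features(seq, Wx, Wh, b):
    """Tiny tanh RNN encoder: map a sequence to its final hidden state."""
    h = np.zeros(Wh.shape[0])
    for x_t in seq:
        h = np.tanh(Wx @ np.atleast_1d(x_t) + Wh @ h + b)
    return h

def recurrent_rbf_kernel(seqs, Wx, Wh, b, lengthscale=1.0):
    """Deep kernel in the GP-LSTM spirit: an RBF base kernel on recurrent
    embeddings (the paper uses an LSTM and learns the weights by
    marginal-likelihood optimisation)."""
    H = np.stack([rnn_features(s, Wx, Wh, b) for s in seqs])
    sq = ((H[:, None, :] - H[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale ** 2)

rng = np.random.default_rng(0)
hdim = 8
Wx = 0.5 * rng.standard_normal((hdim, 1))
Wh = 0.5 * rng.standard_normal((hdim, hdim))
b = np.zeros(hdim)
seqs = [rng.standard_normal(10) for _ in range(5)]  # five length-10 sequences
K = recurrent_rbf_kernel(seqs, Wx, Wh, b)
```

Because the base kernel is a valid RBF on the embedded points, the resulting Gram matrix is symmetric and positive semi-definite, so it can be dropped directly into a standard GP regression routine.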
Empirical evaluation of resampling procedures for optimising SVM hyperparameters
IF 6 | Computer Science (CAS Tier 3) | Q1 Mathematics | Pub Date: 2017-01-01 | DOI: 10.5555/3122009.3122024
Jacques Wainer, Gavin Cawley
Tuning the regularisation and kernel hyperparameters is a vital step in optimising the generalisation performance of kernel methods, such as the support vector machine (SVM). This is most often per...
Citations: 0
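The most common form of the resampling procedure being evaluated is k-fold cross-validation over a grid of the regularisation parameter C and the RBF width gamma. A standard sklearn sketch (the dataset and grid values are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic binary classification task with a held-out test split.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# 5-fold cross-validation over a C x gamma grid.
grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5).fit(X_tr, y_tr)
print(search.best_params_, search.score(X_te, y_te))
```

The paper's question is precisely how choices like the number of folds or the use of bootstrap instead of k-fold affect the quality of the selected (C, gamma) pair.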
Community Extraction in Multilayer Networks with Heterogeneous Community Structure.
IF 6 | Computer Science (CAS Tier 3) | Q1 Mathematics | Pub Date: 2017-01-01
James D Wilson, John Palowitch, Shankar Bhamidi, Andrew B Nobel

Multilayer networks are a useful way to capture and model multiple, binary or weighted relationships among a fixed group of objects. While community detection has proven to be a useful exploratory technique for the analysis of single-layer networks, the development of community detection methods for multilayer networks is still in its infancy. We propose and investigate a procedure, called Multilayer Extraction, that identifies densely connected vertex-layer sets in multilayer networks. Multilayer Extraction makes use of a significance-based score that quantifies the connectivity of an observed vertex-layer set through comparison with a fixed-degree random graph model. Multilayer Extraction directly handles networks with heterogeneous layers, where community structure may differ from layer to layer. The procedure can capture overlapping communities, as well as background vertex-layer pairs that do not belong to any community. We establish consistency of the vertex-layer set optimizer of our proposed multilayer score under the multilayer stochastic block model. We investigate the performance of Multilayer Extraction on three applications and a test bed of simulations. Our theoretical and numerical evaluations suggest that Multilayer Extraction is an effective exploratory tool for analyzing complex multilayer networks. Publicly available code is available at https://github.com/jdwilson4/MultilayerExtraction.

Citations: 0
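The idea of scoring a vertex-layer set against a fixed-degree null can be sketched as observed internal edges minus their expectation under a configuration-model-style null, summed over the chosen layers. `layer_score` and the planted-community demo below are simplified illustrations, not the paper's exact significance score.

```python
import numpy as np

def layer_score(A, nodes):
    """Connectivity surplus of a vertex set in one layer: observed internal
    edges minus their expectation under a fixed-degree random-graph null."""
    deg = A.sum(axis=1)
    two_m = deg.sum()
    obs = A[np.ix_(nodes, nodes)].sum() / 2.0
    expected = np.outer(deg[nodes], deg[nodes]).sum() / (2.0 * two_m)
    return obs - expected

def multilayer_score(layers, nodes, active_layers):
    """Aggregate the surplus of a vertex-layer set over its chosen layers."""
    return sum(layer_score(layers[l], nodes) for l in active_layers)

def sym(mat):
    """Keep the strict upper triangle and mirror it: simple undirected graph."""
    upper = np.triu(mat, 1)
    return upper + upper.T

# Two-layer toy network with a dense community planted on vertices 0-9 of layer 0.
rng = np.random.default_rng(0)
n = 100
A0 = (rng.random((n, n)) < 0.05).astype(float)
A0[:10, :10] = (rng.random((10, 10)) < 0.9).astype(float)
A0 = sym(A0)
A1 = sym((rng.random((n, n)) < 0.05).astype(float))

planted = list(range(10))
print(multilayer_score([A0, A1], planted, [0]))
```

The planted set scores well above zero in layer 0 but near zero in the purely random layer 1, which is why the procedure also searches over which layers a community occupies.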
Automatic differentiation in machine learning
IF 6 | Computer Science (CAS Tier 3) | Q1 Mathematics | Pub Date: 2017-01-01 | DOI: 10.5555/3122009.3242010
Atılım Güneş Baydin, Barak A. Pearlmutter, Alexey Andreyevich Radul, Jeffrey Mark Siskind
Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply "auto-diff", is a fa...
Citations: 23
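The forward mode of AD is commonly implemented with dual numbers: each value carries its derivative, and every arithmetic operation propagates both by the chain rule. A minimal self-contained sketch (the textbook construction, covering only the operations this example needs):

```python
import math

class Dual:
    """Forward-mode AD with dual numbers: a (value, derivative) pair."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)  # product rule
    __rmul__ = __mul__

    def sin(self):
        return Dual(math.sin(self.val), math.cos(self.val) * self.dot)

def derivative(f, x):
    """Seed the dual part with 1.0 and read the derivative off the output."""
    return f(Dual(x, 1.0)).dot

# f(x) = x^2 sin(x), so f'(x) = 2x sin(x) + x^2 cos(x).
f = lambda x: x * x * x.sin()
print(derivative(f, 1.5))
```

Unlike numerical differencing, the result is exact to machine precision, and unlike symbolic differentiation there is no expression swell: the derivative is computed alongside the value in a single pass.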