J. Mach. Learn. Res.: Latest Articles

Harry: A Tool for Measuring String Similarity
Pub Date : 2016-01-01 DOI: 10.5281/ZENODO.10074
Konrad Rieck, Christian Wressnegger
Comparing strings and assessing their similarity is a basic operation in many application domains of machine learning, such as in information retrieval, natural language processing and bioinformatics. The practitioner can choose from a large variety of available similarity measures for this task, each emphasizing different aspects of the string data. In this article, we present Harry, a small tool specifically designed for measuring the similarity of strings. Harry implements over 20 similarity measures, including common string distances and string kernels, such as the Levenshtein distance and the Subsequence kernel. The tool has been designed with efficiency in mind and allows for multi-threaded as well as distributed computing, enabling the analysis of large data sets of strings. Harry supports common data formats and thus can interface with analysis environments, such as Matlab, Pylab and Weka.
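The Levenshtein distance named in the abstract is the minimum number of single-character insertions, deletions, and substitutions turning one string into another. A minimal pure-Python sketch of the measure (Harry itself is an optimized, multi-threaded C implementation; this is illustrative only):

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between strings a and b, computed with
    one-row dynamic programming over the (len(a)+1) x (len(b)+1) grid."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,         # deletion
                            curr[j - 1] + 1,     # insertion
                            prev[j - 1] + cost)) # substitution
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # → 3
```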
Citations: 17
Scalable Learning of Bayesian Network Classifiers
Pub Date : 2016-01-01 DOI: 10.5555/2946645.2946689
Ana M. Martínez, Geoffrey I. Webb, Shenglei Chen, Nayyar Zaidi
Ever increasing data quantity makes ever more urgent the need for highly scalable learners that have good classification performance. Therefore, an out-of-core learner with excellent time and space complexity, along with high expressivity (that is, capacity to learn very complex multivariate probability distributions) is extremely desirable. This paper presents such a learner. We propose an extension to the k-dependence Bayesian classifier (KDB) that discriminatively selects a sub-model of a full KDB classifier. It requires only one additional pass through the training data, making it a three-pass learner. Our extensive experimental evaluation on 16 large data sets reveals that this out-of-core algorithm achieves competitive classification performance, and substantially better training and classification time than state-of-the-art in-core learners such as random forest and linear and non-linear logistic regression.
Citations: 57
A General Framework for Constrained Bayesian Optimization using Information-based Search
Pub Date : 2015-11-30 DOI: 10.17863/CAM.6477
José Miguel Hernández-Lobato, M. Gelbart, Ryan P. Adams, Matthew W. Hoffman, Zoubin Ghahramani
We present an information-theoretic framework for solving global black-box optimization problems that also have black-box constraints. Of particular interest to us is to efficiently solve problems with decoupled constraints, in which subsets of the objective and constraint functions may be evaluated independently. For example, when the objective is evaluated on a CPU and the constraints are evaluated independently on a GPU. These problems require an acquisition function that can be separated into the contributions of the individual function evaluations. We develop one such acquisition function and call it Predictive Entropy Search with Constraints (PESC). PESC is an approximation to the expected information gain criterion and it compares favorably to alternative approaches based on improvement in several synthetic and real-world problems. In addition to this, we consider problems with a mix of functions that are fast and slow to evaluate. These problems require balancing the amount of time spent in the meta-computation of PESC and in the actual evaluation of the target objective. We take a bounded rationality approach and develop a partial update for PESC which trades off accuracy against speed. We then propose a method for adaptively switching between the partial and full updates for PESC. This allows us to interpolate between versions of PESC that are efficient in terms of function evaluations and those that are efficient in terms of wall-clock time. Overall, we demonstrate that PESC is an effective algorithm that provides a promising direction towards a unified solution for constrained Bayesian optimization.
Citations: 144
Train and Test Tightness of LP Relaxations in Structured Prediction
Pub Date : 2015-11-04 DOI: 10.17863/CAM.242
Ofer Meshi, M. Mahdavi, Adrian Weller, D. Sontag
Structured prediction is used in areas such as computer vision and natural language processing to predict structured outputs such as segmentations or parse trees. In these settings, prediction is performed by MAP inference or, equivalently, by solving an integer linear program. Because of the complex scoring functions required to obtain accurate predictions, both learning and inference typically require the use of approximate solvers. We propose a theoretical explanation to the striking observation that approximations based on linear programming (LP) relaxations are often tight on real-world instances. In particular, we show that learning with LP relaxed inference encourages integrality of training instances, and that tightness generalizes from train to test data.
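The MAP-as-integer-program view in the abstract can be made concrete on a toy three-node chain model (the scores below are invented for illustration). Brute force over the 2^3 labelings solves the exact integer program that LP relaxations approximate:

```python
import itertools

# Toy pairwise model: per-node scores plus edge scores rewarding agreement.
# All numbers are illustrative, not from the paper.
unary = [{0: 0.2, 1: 0.8},   # node 0
         {0: 0.6, 1: 0.4},   # node 1
         {0: 0.5, 1: 0.5}]   # node 2
pairwise = {(0, 1): lambda a, b: 1.0 if a == b else 0.0,
            (1, 2): lambda a, b: 1.0 if a == b else 0.0}

def score(y):
    """Objective of the integer linear program: sum of unary and edge scores."""
    s = sum(unary[i][yi] for i, yi in enumerate(y))
    s += sum(f(y[i], y[j]) for (i, j), f in pairwise.items())
    return s

# Exact MAP assignment by enumerating all binary labelings.
best = max(itertools.product([0, 1], repeat=3), key=score)
print(best, score(best))  # → (1, 1, 1) 3.7
```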
Citations: 15
Towards an axiomatic approach to hierarchical clustering of measures
Pub Date : 2015-08-15 DOI: 10.5555/2789272.2886812
P. Thomann, Ingo Steinwart, Nico Schmid
We propose some axioms for hierarchical clustering of probability measures and investigate their ramifications. The basic idea is to let the user stipulate the clusters for some elementary measures. This is done without the need of any notion of metric, similarity or dissimilarity. Our main results then show that for each suitable choice of user-defined clustering on elementary measures we obtain a unique notion of clustering on a large set of distributions satisfying a set of additivity and continuity axioms. We illustrate the developed theory by numerous examples including some with and some without a density.
Citations: 6
RLPy: a value-function-based reinforcement learning framework for education and research
Pub Date : 2015-08-01 DOI: 10.5555/2789272.2886799
A. Geramifard, Christoph Dann, Robert H. Klein, Will Dabney, J. How
RLPy is an object-oriented reinforcement learning software package with a focus on value-function-based methods using linear function approximation and discrete actions. The framework was designed for both educational and research purposes. It provides a rich library of fine-grained, easily exchangeable components for learning agents (e.g., policies or representations of value functions), facilitating recently increased specialization in reinforcement learning. RLPy is written in Python to allow fast prototyping, but is also suitable for large-scale experiments through its built-in support for optimized numerical libraries and parallelization. Code profiling, domain visualizations, and data analysis are integrated in a self-contained package available under the Modified BSD License at http://github.com/rlpy/rlpy. All of these properties allow users to compare various reinforcement learning algorithms with little effort.
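As a minimal illustration of the value-function-based methods RLPy focuses on (RLPy's own API is not shown here; the corridor environment, constants, and seed below are invented), tabular Q-learning is the simplest member of that family, a lookup table being the degenerate case of linear function approximation:

```python
import random

# Five-state corridor: start at state 0, reward 1.0 on reaching state 4.
N, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N)]   # Q[state][action]; 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.3    # learning rate, discount, exploration
rng = random.Random(0)

for _ in range(500):                 # episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection from the value table
        a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda a: Q[s][a])
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == GOAL else 0.0
        # one-step Q-learning update
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# Greedy policy learned from the value function: should move right everywhere.
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(GOAL)]
print(policy)
```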
Citations: 69
Optimal estimation of low rank density matrices
Pub Date : 2015-07-17 DOI: 10.5555/2789272.2886806
V. Koltchinskii, Dong Xia
Density matrices are positive semi-definite Hermitian matrices of unit trace that describe the state of a quantum system. The goal of the paper is to develop minimax lower bounds on error rates of estimation of low rank density matrices in trace regression models used in quantum state tomography (in particular, in the case of Pauli measurements) with explicit dependence of the bounds on the rank and other complexity parameters. Such bounds are established for several statistically relevant distances, including quantum versions of Kullback-Leibler divergence (relative entropy distance) and of Hellinger distance (so called Bures distance), and Schatten $p$-norm distances. Sharp upper bounds and oracle inequalities for least squares estimator with von Neumann entropy penalization are obtained showing that minimax lower bounds are attained (up to logarithmic factors) for these distances.
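The defining properties stated in the abstract (Hermitian, unit trace, positive semi-definite) can be checked directly for a 2x2 matrix, whose eigenvalues are (t ± sqrt(t^2 - 4d))/2 with t the trace and d the determinant. The example states below are illustrative:

```python
import math

def is_density_matrix(m, tol=1e-9):
    """Check a 2x2 matrix for Hermiticity, unit trace, and positive
    semi-definiteness (the defining properties of a density matrix)."""
    a, b = complex(m[0][0]), complex(m[0][1])
    c, d = complex(m[1][0]), complex(m[1][1])
    hermitian = abs(a.imag) < tol and abs(d.imag) < tol and abs(b - c.conjugate()) < tol
    t = (a + d).real
    det = (a * d - b * c).real
    unit_trace = abs(t - 1.0) < tol
    # Smallest eigenvalue (t - sqrt(t^2 - 4 det)) / 2 must be >= 0.
    disc = max(t * t - 4.0 * det, 0.0)
    smallest = (t - math.sqrt(disc)) / 2.0
    return hermitian and unit_trace and smallest >= -tol

# Rank-one pure state |+><+| : a valid (low rank) density matrix.
print(is_density_matrix([[0.5, 0.5], [0.5, 0.5]]))  # → True
# Unit trace and Hermitian, but one eigenvalue is negative.
print(is_density_matrix([[0.9, 0.4], [0.4, 0.1]]))  # → False
```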
Citations: 39
Sharp oracle bounds for monotone and convex regression through aggregation
Pub Date : 2015-06-29 DOI: 10.5555/2789272.2886809
P. Bellec, A. Tsybakov
We derive oracle inequalities for the problems of isotonic and convex regression using the combination of $Q$-aggregation procedure and sparsity pattern aggregation. This improves upon the previous results including the oracle inequalities for the constrained least squares estimator. One of the improvements is that our oracle inequalities are sharp, i.e., with leading constant 1. It allows us to obtain bounds for the minimax regret thus accounting for model misspecification, which was not possible based on the previous results. Another improvement is that we obtain oracle inequalities both with high probability and in expectation.
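Isotonic regression, one of the two problems studied, is the least-squares fit of a nondecreasing sequence to the data. The classical pool-adjacent-violators algorithm computes the constrained least squares estimator mentioned in the abstract exactly (this is the standard solver, not the paper's aggregation procedure):

```python
def isotonic_fit(y):
    """Least-squares nondecreasing fit via pool-adjacent-violators:
    maintain blocks of (sum, count); merge while adjacent block means
    violate monotonicity, then expand each block to its mean."""
    blocks = []
    for v in y:
        blocks.append([float(v), 1])
        # mean(prev) > mean(last) checked by cross-multiplication
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, n = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += n
    out = []
    for s, n in blocks:
        out.extend([s / n] * n)
    return out

print(isotonic_fit([1.0, 3.0, 2.0, 4.0]))  # → [1.0, 2.5, 2.5, 4.0]
```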
Citations: 33
Encog: library of interchangeable machine learning models for Java and C#
Pub Date : 2015-06-15 DOI: 10.5555/2789272.2886789
Jeff Heaton
This paper introduces the Encog library for Java and C#, a scalable, adaptable, multiplatform machine learning framework that was first released in 2008. Encog allows a variety of machine learning models to be applied to datasets using regression, classification, and clustering. Various supported machine learning models can be used interchangeably with minimal recoding. Encog uses efficient multithreaded code to reduce training time by exploiting modern multicore processors. The current version of Encog can be downloaded from this http URL
Citations: 78
Exceptional rotations of random graphs: a VC theory
Pub Date : 2015-06-09 DOI: 10.5555/2789272.2886810
L. Addario-Berry, S. Bhamidi, Sébastien Bubeck, L. Devroye, G. Lugosi, R. Oliveira
In this paper we explore maximal deviations of large random structures from their typical behavior. We introduce a model for a high-dimensional random graph process and ask analogous questions to those of Vapnik and Chervonenkis for deviations of averages: how "rich" does the process have to be so that one sees atypical behavior. In particular, we study a natural process of Erdős-Rényi random graphs indexed by unit vectors in $\mathbb{R}^d$. We investigate the deviations of the process with respect to three fundamental properties: clique number, chromatic number, and connectivity. In all cases we establish upper and lower bounds for the minimal dimension $d$ that guarantees the existence of "exceptional directions" in which the random graph behaves atypically with respect to the property. For each of the three properties, four theorems are established, to describe upper and lower bounds for the threshold dimension in the subcritical and supercritical regimes.
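The underlying object is the Erdős-Rényi random graph G(n, p). A minimal sampler together with a check of one of the three properties studied (connectivity) can be sketched as follows; the parameters and seed are illustrative, with p chosen well above the ln(n)/n connectivity threshold:

```python
import random
from collections import deque

def erdos_renyi(n, p, rng):
    """Sample G(n, p): include each of the n*(n-1)/2 edges independently
    with probability p; returns an adjacency-set dict."""
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def is_connected(adj):
    """Breadth-first search from vertex 0; connected iff all are reached."""
    if not adj:
        return True
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u] - seen:
            seen.add(v)
            queue.append(v)
    return len(seen) == len(adj)

rng = random.Random(0)
g = erdos_renyi(50, 0.3, rng)  # p = 0.3 >> ln(50)/50 ≈ 0.078
print(is_connected(g))
```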
Citations: 2