
Latest publications in Machine Learning

An effective keyword search co-occurrence multi-layer graph mining approach
IF 7.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-02 | DOI: 10.1007/s10994-024-06528-9
Janet Oluwasola Bolorunduro, Zhaonian Zou, Mohamed Jaward Bah

Graph mining refers to a collection of tools and methods for analyzing real-world graphs, forecasting how a given graph's structure and properties affect various applications, and building models that generate graphs closely resembling the structure of real-world graphs of interest. However, many graph mining approaches struggle with scalability and with dynamic graphs, which limits their practical use. In machine learning and data mining, graph embedding (also known as network representation learning) encodes complicated graph structures into embeddings using specific pre-defined metrics. Co-occurrence graphs and keyword search are the foundation of search engine optimization for diverse real-world applications. Existing work on keyword search over graphs relies on pre-established information-retrieval search criteria and does not capture semantic linkages, and recent co-occurrence and keyword search methods work well only on single-layer graphs rather than multi-layer ones. Graph neural networks, meanwhile, have become a widely used family of graph models because of their strong performance. This paper proposes an Effective Keyword Search Co-occurrence Multi-Layer Graph mining method built on two core components: multi-layer graph embedding and graph neural networks. We conducted extensive tests using benchmarks on real-world data sets. The experimental findings show that the proposed method, enhanced with the regularization approach, performs substantially better, improving precision, recall, and F1-score by 10%.

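The paper's multi-layer pipeline is not reproduced here, but the co-occurrence-graph idea the abstract builds on can be made concrete. The sketch below is a minimal single-layer illustration in Python: it builds a weighted co-occurrence graph from a toy corpus and ranks nodes by their co-occurrence weight with query keywords. The window size, toy documents, and function names are assumptions for illustration, not the authors' implementation.

```python
from collections import defaultdict

def build_cooccurrence_graph(documents, window=3):
    """Build an undirected, weighted co-occurrence graph: nodes are tokens,
    edge weights count how often two tokens appear within `window` positions."""
    graph = defaultdict(lambda: defaultdict(int))
    for doc in documents:
        tokens = doc.lower().split()
        for i, u in enumerate(tokens):
            for v in tokens[i + 1 : i + window]:
                if u != v:
                    graph[u][v] += 1
                    graph[v][u] += 1
    return graph

def keyword_search(graph, query_terms, top_k=5):
    """Rank nodes by their total co-occurrence weight with the query terms."""
    scores = defaultdict(int)
    for q in query_terms:
        for neighbour, weight in graph.get(q, {}).items():
            scores[neighbour] += weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

if __name__ == "__main__":
    docs = [
        "graph mining evaluates real world graphs",
        "keyword search over co occurrence graphs supports search engines",
        "graph neural networks learn embeddings of graphs",
    ]
    print(keyword_search(build_cooccurrence_graph(docs), ["graph", "search"]))
```

A multi-layer variant would keep one such graph per layer (for example, per relation type) and aggregate scores across layers, which is where the paper's graph embedding and GNN components come in.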
Citations: 0
Training data influence analysis and estimation: a survey
IF 7.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-29 | DOI: 10.1007/s10994-023-06495-7
Zayd Hammoudeh, Daniel Lowd

Good models require good training data. For overparameterized deep models, the causal relationship between training data and model predictions is increasingly opaque and poorly understood. Influence analysis partially demystifies training’s underlying interactions by quantifying the amount each training instance alters the final model. Measuring the training data’s influence exactly can be provably hard in the worst case; this has led to the development and use of influence estimators, which only approximate the true influence. This paper provides the first comprehensive survey of training data influence analysis and estimation. We begin by formalizing the various, and in places orthogonal, definitions of training data influence. We then organize state-of-the-art influence analysis methods into a taxonomy; we describe each of these methods in detail and compare their underlying assumptions, asymptotic complexities, and overall strengths and weaknesses. Finally, we propose future research directions to make influence analysis more useful in practice as well as more theoretically and empirically sound.

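The survey covers many estimators; as a small, self-contained illustration of the gradient-based family (a single-checkpoint, TracIn-style approximation, not any specific method endorsed by the survey), the sketch below scores each training point of a toy logistic-regression model by the dot product of its loss gradient with the test point's loss gradient. The model, data, and function names are illustrative assumptions.

```python
import numpy as np

def logistic_grad(w, x, y):
    """Gradient of the logistic loss for a single example (x, y), y in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - y) * x

def influence_scores(w, X_train, y_train, x_test, y_test, lr=0.1):
    """First-order, single-checkpoint influence of each training point on the
    test loss: roughly lr * <train-loss gradient, test-loss gradient>.
    Positive scores mark proponents (points whose gradient step helps the test example)."""
    g_test = logistic_grad(w, x_test, y_test)
    return np.array([lr * logistic_grad(w, xi, yi) @ g_test
                     for xi, yi in zip(X_train, y_train)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 3))
    y = (X[:, 0] > 0).astype(float)
    w = np.zeros(3)
    for _ in range(200):  # plain full-batch gradient descent
        w -= 0.1 * np.mean([logistic_grad(w, xi, yi) for xi, yi in zip(X, y)], axis=0)
    scores = influence_scores(w, X, y, X[0], y[0])
    print("strongest proponents of test point 0:", np.argsort(-scores)[:3])
```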
Citations: 0
Machine learning with a reject option: a survey
IF 7.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-29 | DOI: 10.1007/s10994-024-06534-x
Kilian Hendrickx, Lorenzo Perini, Dries Van der Plas, Wannes Meert, Jesse Davis

Machine learning models always make a prediction, even when it is likely to be inaccurate. This behavior should be avoided in many decision support applications, where mistakes can have severe consequences. Although studied as early as 1970, machine learning with rejection has recently regained interest. This machine learning subfield enables machine learning models to abstain from making a prediction when likely to make a mistake. This survey aims to provide an overview of machine learning with rejection. We introduce the conditions leading to two types of rejection, ambiguity and novelty rejection, which we carefully formalize. Moreover, we review and categorize strategies to evaluate a model’s predictive and rejective quality. Additionally, we define the existing architectures for models with rejection and describe the standard techniques for learning such models. Finally, we provide examples of relevant application domains and show how machine learning with rejection relates to other machine learning research areas.

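As a minimal illustration of ambiguity rejection (only one of the two rejection types the survey formalizes), the sketch below wraps any probabilistic classifier with a confidence threshold and reports coverage together with accuracy on the accepted examples. The threshold and toy probabilities are assumptions for illustration.

```python
import numpy as np

def predict_with_reject(proba, threshold=0.8):
    """Ambiguity rejection: abstain whenever the top class probability falls
    below `threshold`; rejected examples are marked with -1."""
    top = proba.max(axis=1)
    preds = proba.argmax(axis=1)
    return np.where(top >= threshold, preds, -1)

def coverage_and_accuracy(preds, y_true):
    """Quality of a rejecting classifier: how often it answers (coverage)
    and how accurate it is on the examples it accepts."""
    accepted = preds != -1
    coverage = accepted.mean()
    accuracy = (preds[accepted] == y_true[accepted]).mean() if accepted.any() else float("nan")
    return coverage, accuracy

if __name__ == "__main__":
    # Toy predicted probabilities for 4 examples over 3 classes.
    proba = np.array([[0.90, 0.05, 0.05],
                      [0.40, 0.35, 0.25],
                      [0.10, 0.85, 0.05],
                      [0.34, 0.33, 0.33]])
    y = np.array([0, 0, 1, 2])
    preds = predict_with_reject(proba)
    print(preds, coverage_and_accuracy(preds, y))
```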
Citations: 0
Personalization for web-based services using offline reinforcement learning
IF 7.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-28 | DOI: 10.1007/s10994-024-06525-y
Pavlos Athanasios Apostolopoulos, Zehui Wang, Hanson Wang, Tenghyu Xu, Chad Zhou, Kittipate Virochsiri, Norm Zhou, Igor L. Markov

Large-scale Web-based services present opportunities for improving UI policies based on observed user interactions. We address challenges of learning such policies through offline reinforcement learning (RL). Deployed in a production system for user authentication in a major social network, it significantly improves long-term objectives. We articulate practical challenges, provide insights on training and evaluation of offline RL, and discuss generalizations toward offline RL’s deployment in industry-scale applications.

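The production system described in the abstract cannot be reproduced from this summary; as a generic illustration of offline RL, the sketch below runs tabular fitted Q-iteration on a fixed log of (state, action, reward, next state) transitions, i.e., it learns a policy without any further interaction with the environment. The toy MDP and function names are assumptions, not the authors' system.

```python
import numpy as np

def fitted_q_iteration(logged, n_states, n_actions, gamma=0.9, iters=50):
    """Tabular offline RL: repeatedly regress Q toward the Bellman target
    using only a fixed batch of logged (s, a, r, s_next) transitions."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        targets, counts = np.zeros_like(Q), np.zeros_like(Q)
        for s, a, r, s_next in logged:
            targets[s, a] += r + gamma * Q[s_next].max()
            counts[s, a] += 1
        Q_new = Q.copy()
        mask = counts > 0
        Q_new[mask] = targets[mask] / counts[mask]  # average target per observed (s, a)
        Q = Q_new
    return Q

if __name__ == "__main__":
    # Toy log from a 3-state chain; action 1 moves right, action 0 stays put.
    logged = [(0, 1, 0.0, 1), (1, 1, 1.0, 2), (1, 0, 0.0, 1), (0, 0, 0.0, 0), (2, 0, 0.0, 2)]
    Q = fitted_q_iteration(logged, n_states=3, n_actions=2)
    print("greedy policy per state:", Q.argmax(axis=1))
```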
Citations: 0
Can cross-domain term extraction benefit from cross-lingual transfer and nested term labeling?
IF 7.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-27 | DOI: 10.1007/s10994-023-06506-7
Hanh Thi Hong Tran, Matej Martinc, Andraz Repar, Nikola Ljubešić, Antoine Doucet, Senja Pollak

Automatic term extraction (ATE) is a natural language processing task that eases the effort of manually identifying terms from domain-specific corpora by providing a list of candidate terms. In this paper, we treat ATE as a sequence-labeling task and explore the efficacy of XLMR in evaluating cross-lingual and multilingual learning against monolingual learning in the cross-domain ATE context. Additionally, we introduce NOBI, a novel annotation mechanism enabling the labeling of single-word nested terms. Our experiments are conducted on the ACTER corpus, encompassing four domains and three languages (English, French, and Dutch), as well as the RSDO5 Slovenian corpus, encompassing four additional domains. Results indicate that cross-lingual and multilingual models outperform monolingual settings, showcasing improved F1-scores for all languages within the ACTER dataset. When incorporating an additional Slovenian corpus into the training set, the multilingual model exhibits superior performance compared to state-of-the-art approaches in specific scenarios. Moreover, the newly introduced NOBI labeling mechanism enhances the classifier’s capacity to extract short nested terms significantly, leading to substantial improvements in Recall for the ACTER dataset and consequentially boosting the overall F1-score performance.

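NOBI is the paper's own annotation scheme, so it is not reproduced here; the sketch below shows the standard BIO encoding that frames term extraction as sequence labeling, which NOBI extends with an extra tag for single-word terms nested inside longer terms. The example sentence and spans are illustrative assumptions.

```python
def bio_encode(tokens, term_spans):
    """Encode term spans given as (start, end) token indices (end exclusive)
    into BIO labels: B = first token of a term, I = inside a term, O = outside."""
    labels = ["O"] * len(tokens)
    for start, end in term_spans:
        labels[start] = "B"
        for i in range(start + 1, end):
            labels[i] = "I"
    return labels

if __name__ == "__main__":
    tokens = "automatic term extraction eases manual work".split()
    spans = [(0, 3)]  # "automatic term extraction" annotated as a term
    print(list(zip(tokens, bio_encode(tokens, spans))))
    # A NOBI-style scheme would add a dedicated tag so that a single-word term
    # nested inside this span (e.g., "extraction") can be labeled as well.
```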
Citations: 0
Structure discovery in PAC-learning by random projections
IF 7.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-26 | DOI: 10.1007/s10994-024-06531-0

High dimensional learning is data-hungry in general; however, many natural data sources and real-world learning problems possess some hidden low-complexity structure that permits effective learning from relatively small sample sizes. We are interested in the general question of how to discover and exploit such hidden benign traits when problem-specific prior knowledge is insufficient. In this work, we address this question through random projection’s ability to expose structure. We study both compressive learning and high dimensional learning from this angle by introducing the notions of compressive distortion and compressive complexity. We give user-friendly PAC bounds in the agnostic setting that are formulated in terms of these quantities, and we show that our bounds can be tight when these quantities are small. We then instantiate these quantities in several examples of particular learning problems, demonstrating their ability to discover interpretable structural characteristics that make high dimensional instances of these problems solvable to good approximation in a random linear subspace. In the examples considered, these turn out to resemble some familiar benign traits such as the margin, the margin distribution, the intrinsic dimension, the spectral decay of the data covariance, or the norms of parameters—while our general notions of compressive distortion and compressive complexity serve to unify these, and may be used to discover benign structural traits for other PAC-learnable problems.

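As a minimal illustration of learning in a random linear subspace (the compressive-learning setting the abstract analyzes, not the paper's bounds), the sketch below projects high-dimensional data with a Gaussian random matrix and fits a simple ridge learner in the compressed space. The dimensions, the sparse ground truth, and the function names are illustrative assumptions.

```python
import numpy as np

def random_projection(X, k, rng):
    """Project d-dimensional data into a random k-dimensional subspace using a
    Gaussian matrix scaled by 1/sqrt(k) so that norms are roughly preserved."""
    d = X.shape[1]
    R = rng.normal(scale=1.0 / np.sqrt(k), size=(d, k))
    return X @ R, R

def fit_ridge(X, y, lam=1e-2):
    """Ridge regression used as a stand-in learner in the compressed space."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, k = 200, 500, 20
    # High-dimensional data whose labels depend on a low-complexity (sparse) direction.
    X = rng.normal(size=(n, d))
    w_true = np.zeros(d)
    w_true[:5] = 1.0
    y = np.sign(X @ w_true)
    X_low, R = random_projection(X, k, rng)
    w_low = fit_ridge(X_low, y)
    acc = np.mean(np.sign(X_low @ w_low) == y)
    print(f"training accuracy in {k}-dim random subspace: {acc:.2f}")
```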
Citations: 0
When are they coming? Understanding and forecasting the timeline of arrivals at the FC Barcelona stadium on match days
IF 7.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-26 | DOI: 10.1007/s10994-023-06499-3
Feliu Serra-Burriel, Pedro Delicado, Fernando M. Cucchietti, Eduardo Graells-Garrido, Alex Gil, Imanol Eguskiza

Futbol Club Barcelona operates the largest stadium in Europe (with a seating capacity of almost one hundred thousand people) and manages recurring sports events. These are influenced by multiple conditions (time and day of the week, weather, adversary) and affect city dynamics—e.g., peak demand for related services like public transport and stores. We study fine-grained audience entrances at the stadium, segregated by visitor type and gate, to gain insights and predict the arrival behavior of future games, with a direct impact on the organizational performance and productivity of the business. We can forecast the timeline of arrivals at gate level 72 h prior to kickoff, facilitating operational and organizational decision-making by anticipating potential agglomerations and audience behavior. Furthermore, we can identify patterns for different types of visitors and understand how relevant factors affect them. These findings directly impact commercial and business interests and can alter operational logistics, venue management, and safety.

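The paper's forecasting model is not described in enough detail here to reproduce; purely as an illustrative baseline for the task, the sketch below forecasts a gate's cumulative arrival curve by averaging the historical curves of past matches with the same conditions. The match-type labels and synthetic curves are assumptions.

```python
import numpy as np

def forecast_arrival_curve(history, match_type):
    """Illustrative baseline: forecast the cumulative share of the audience that
    has entered a gate at each minute before kickoff by averaging the historical
    curves of past matches with the same conditions."""
    curves = [curve for mtype, curve in history if mtype == match_type]
    return np.mean(curves, axis=0)

if __name__ == "__main__":
    minutes = np.arange(-120, 1)  # 120 minutes before kickoff up to kickoff

    def synthetic_curve(peak):
        # Toy historical data: a logistic ramp-up centered `peak` minutes before kickoff.
        return 1.0 / (1.0 + np.exp(-(minutes - peak) / 15.0))

    history = [("weekend_evening", synthetic_curve(-40)),
               ("weekend_evening", synthetic_curve(-35)),
               ("weekday_evening", synthetic_curve(-20))]
    forecast = forecast_arrival_curve(history, "weekend_evening")
    print("share arrived 30 min before kickoff:", round(float(forecast[minutes == -30][0]), 2))
```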
Citations: 0
Ijuice: integer JUstIfied counterfactual explanations
IF 7.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-26 | DOI: 10.1007/s10994-024-06530-1
Alejandro Kuratomi, Ioanna Miliou, Zed Lee, Tony Lindgren, Panagiotis Papapetrou

Counterfactual explanations modify the feature values of an instance in order to alter its prediction from an undesired to a desired label. As such, they are highly useful for providing trustworthy interpretations of decision-making in domains where complex and opaque machine learning algorithms are utilized. To guarantee their quality and promote user trust, they need to satisfy the faithfulness desideratum, when supported by the data distribution. We hereby propose a counterfactual generation algorithm for mixed-feature spaces that prioritizes faithfulness through k-justification, a novel counterfactual property introduced in this paper. The proposed algorithm employs a graph representation of the search space and provides counterfactuals by solving an integer program. In addition, the algorithm is classifier-agnostic and is not dependent on the order in which the feature space is explored. In our empirical evaluation, we demonstrate that it guarantees k-justification while showing comparable performance to state-of-the-art methods in feasibility, sparsity, and proximity.

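The paper solves an integer program over a graph of candidate feature values; as a much simpler stand-in that conveys what a counterfactual is, the sketch below brute-forces the nearest feature combination (in L1 distance) that flips a toy classifier's decision. The "loan approval" rule and discrete feature grid are illustrative assumptions and would not scale to realistic feature spaces.

```python
import numpy as np
from itertools import product

def predict(z):
    """Toy 'loan approval' rule: approve (1) if income level + credit level >= 4."""
    return int(z[0] + z[1] >= 4)

def nearest_counterfactual(x, classifier, feature_values, desired=1):
    """Brute-force counterfactual: among all feature combinations the classifier
    maps to `desired`, return the one closest to x in L1 distance."""
    best, best_dist = None, np.inf
    for candidate in product(*feature_values):
        c = np.array(candidate, dtype=float)
        if classifier(c) == desired:
            dist = np.abs(c - x).sum()
            if dist < best_dist:
                best, best_dist = c, dist
    return best, best_dist

if __name__ == "__main__":
    x = np.array([1.0, 1.0])                     # currently rejected instance
    feature_values = [range(0, 5), range(0, 5)]  # discrete levels per feature
    cf, dist = nearest_counterfactual(x, predict, feature_values)
    print("counterfactual:", cf, "L1 distance:", dist)
```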
Citations: 0
Bounding the Rademacher complexity of Fourier neural operators
IF 7.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-26 | DOI: 10.1007/s10994-024-06533-y
Taeyoung Kim, Myungjoo Kang

Recently, several types of neural operators have been developed, including deep operator networks, graph neural operators, and Multiwavelet-based operators. Compared with these models, the Fourier neural operator (FNO), a physics-inspired machine learning method, is computationally efficient and can learn nonlinear operators between function spaces independent of a certain finite basis. This study investigated the bounding of the Rademacher complexity of the FNO based on specific group norms. Using capacity based on these norms, we bound the generalization error of the model. In addition, we investigate the correlation between the empirical generalization error and the proposed capacity of FNO. We infer that the type of group norm determines the information about the weights and architecture of the FNO model stored in capacity. The experimental results offer insight into the impact of the number of modes used in the FNO model on the generalization error. The results confirm that our capacity is an effective index for estimating generalization errors.

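The FNO-specific bounds are not reproduced here; to make the quantity being bounded concrete, the sketch below Monte Carlo-estimates the empirical Rademacher complexity of a norm-bounded linear function class, for which the supremum over the class has a closed form. The choice of class, sample sizes, and norm bound B are illustrative assumptions.

```python
import numpy as np

def empirical_rademacher_linear(X, B=1.0, n_draws=2000, rng=None):
    """Monte Carlo estimate of the empirical Rademacher complexity of the class
    {x -> <w, x> : ||w||_2 <= B} on the sample X.  For this class the supremum
    has the closed form B * || (1/n) * sum_i sigma_i x_i ||_2."""
    rng = rng or np.random.default_rng(0)
    n = X.shape[0]
    sups = []
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)  # Rademacher signs
        sups.append(B * np.linalg.norm(sigma @ X / n))
    return float(np.mean(sups))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    for n in (50, 200, 800):
        X = rng.normal(size=(n, 10))
        # The estimate shrinks roughly like 1/sqrt(n), which is what drives
        # generalization bounds of the kind studied for the FNO.
        print(n, round(empirical_rademacher_linear(X, rng=rng), 4))
```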
Citations: 0
Gradient boosted trees for evolving data streams
IF 7.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-22 | DOI: 10.1007/s10994-024-06517-y
Nuwan Gunasekara, Bernhard Pfahringer, Heitor Gomes, Albert Bifet

Gradient Boosting is a widely-used machine learning technique that has proven highly effective in batch learning. However, its effectiveness in stream learning contexts lags behind bagging-based ensemble methods, which currently dominate the field. One reason for this discrepancy is the challenge of adapting the booster to a new concept following a concept drift. Resetting the entire booster can lead to significant performance degradation as it struggles to learn the new concept. Resetting only some parts of the booster can be more effective, but identifying which parts to reset is difficult, given that each boosting step builds on the previous prediction. To overcome these difficulties, we propose Streaming Gradient Boosted Trees (Sgbt), which is trained using weighted squared loss elicited in XGBoost. Sgbt exploits trees with a replacement strategy to detect and recover from drifts, thus enabling the ensemble to adapt without sacrificing the predictive performance. Our empirical evaluation of Sgbt on a range of streaming datasets with challenging drift scenarios demonstrates that it outperforms current state-of-the-art methods for evolving data streams.

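Sgbt itself combines incremental streaming trees with a drift-aware replacement strategy, which is beyond a short snippet; the sketch below shows only the underlying mechanism of gradient boosting with squared loss, where each stump is fit to the residuals (the negative gradient) of the current ensemble. The 1-D data, stump learner, and hyperparameters are illustrative assumptions.

```python
import numpy as np

def fit_stump(x, residual):
    """Fit a 1-D regression stump to the current residuals: pick the split
    that minimizes the summed squared error of the two leaf means."""
    best = None
    for threshold in np.unique(x):
        left, right = residual[x <= threshold], residual[x > threshold]
        if len(left) == 0 or len(right) == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, threshold, left.mean(), right.mean())
    _, t, left_value, right_value = best
    return lambda z: np.where(z <= t, left_value, right_value)

def boost(x, y, n_trees=20, lr=0.3):
    """Plain gradient boosting with squared loss: each stump fits the residuals
    of the ensemble built so far."""
    pred = np.zeros_like(y)
    trees = []
    for _ in range(n_trees):
        stump = fit_stump(x, y - pred)
        trees.append(stump)
        pred = pred + lr * stump(x)
    return lambda z: sum(lr * tree(z) for tree in trees)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, size=300)
    y = np.where(x > 0.5, 1.0, 0.0) + rng.normal(scale=0.1, size=300)
    model = boost(x, y)
    # In the streaming setting of the paper, the trees would instead be trained
    # incrementally and individual trees replaced when a drift is detected.
    print("prediction at 0.2 and 0.8:",
          round(float(model(np.array([0.2]))[0]), 2),
          round(float(model(np.array([0.8]))[0]), 2))
```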
Citations: 0