An effective keyword search co-occurrence multi-layer graph mining approach
Pub Date: 2024-04-02 | DOI: 10.1007/s10994-024-06528-9
Janet Oluwasola Bolorunduro, Zhaonian Zou, Mohamed Jaward Bah
"Graph mining" refers to a combination of tools and methods used to evaluate real-world graphs, forecast the potential effects of a given graph's structure and properties for various applications, and build models that can generate graphs closely resembling the structure of real-world graphs of interest. However, some graph mining approaches face scalability challenges and difficulties with dynamic graphs, limiting their practical applications. In machine learning and data mining, graph embedding, also known as network representation learning, is a distinctive approach whose representative methods encode complex graph structures into embeddings using specific pre-defined metrics. Co-occurrence graphs and keyword searches are the foundation of search engine optimization for diverse real-world applications. Current work on keyword search over graphs is based on pre-established information retrieval criteria and does not capture semantic linkages. Recent co-occurrence and keyword search methods work effectively only on single-layer graphs rather than multi-layer ones. In recent years, however, graph neural networks have been widely adopted as a branch of graph models owing to their excellent performance. This paper proposes an Effective Keyword Search Co-occurrence Multi-Layer Graph mining method that employs two core approaches: multi-layer graph embedding and graph neural networks. We conducted extensive tests using benchmarks on real-world data sets. The experimental findings show that the proposed method, enhanced with a regularization approach, performs substantially better, with a 10% improvement in precision, recall, and F1-score.
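As a rough illustration of the setting (not the authors' algorithm), the sketch below builds a multi-layer keyword co-occurrence graph with networkx and ranks keywords by weighted co-occurrence with a query across layers; the layer names, window size, and scoring rule are all assumptions.

```python
# A minimal sketch of a multi-layer co-occurrence graph: one undirected
# networkx graph per layer (e.g., one per corpus or relation type).
# Everything here is illustrative, not the paper's construction.
import networkx as nx

def build_cooccurrence_layer(documents, window=5):
    """One layer: keywords are nodes; an edge's weight counts how often
    two keywords appear within `window` tokens of each other."""
    g = nx.Graph()
    for tokens in documents:
        for i, u in enumerate(tokens):
            for v in tokens[i + 1 : i + window]:
                if u == v:
                    continue
                if g.has_edge(u, v):
                    g[u][v]["weight"] += 1
                else:
                    g.add_edge(u, v, weight=1)
    return g

# Each layer shares the keyword node set but has its own edge structure.
corpus_by_layer = {
    "titles":    [["graph", "mining", "survey"]],
    "abstracts": [["graph", "embedding", "mining", "neural"]],
}
multilayer = {name: build_cooccurrence_layer(docs)
              for name, docs in corpus_by_layer.items()}

def keyword_search(multilayer, query, layer_weights):
    """Score nodes by weighted co-occurrence with the query terms across layers."""
    scores = {}
    for name, g in multilayer.items():
        lw = layer_weights.get(name, 1.0)
        for q in query:
            if q not in g:
                continue
            for nbr, data in g[q].items():
                scores[nbr] = scores.get(nbr, 0.0) + lw * data["weight"]
    return sorted(scores, key=scores.get, reverse=True)

print(keyword_search(multilayer, ["graph"], {"titles": 1.0, "abstracts": 0.5}))
```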
{"title":"An effective keyword search co-occurrence multi-layer graph mining approach","authors":"Janet Oluwasola Bolorunduro, Zhaonian Zou, Mohamed Jaward Bah","doi":"10.1007/s10994-024-06528-9","DOIUrl":"https://doi.org/10.1007/s10994-024-06528-9","url":null,"abstract":"<p>A combination of tools and methods known as \"graph mining\" is used to evaluate real-world graphs, forecast the potential effects of a given graph’s structure and properties for various applications, and build models that can yield actual graphs that closely resemble the structure seen in real-world graphs of interest. However, some graph mining approaches possess scalability and dynamic graph challenges, limiting practical applications. In machine learning and data mining, among the unique methods is graph embedding, known as network representation learning where representative methods suggest encoding the complicated graph structures into embedding by utilizing specific pre-defined metrics. Co-occurrence graphs and keyword searches are the foundation of search engine optimizations for diverse real-world applications. Current work on keyword searches on graphs is based on pre-established information retrieval search criteria and does not provide semantic linkages. Recent works on co-occurrence and keyword search methods function effectively on graphs with only one layer instead of many layers. However, the graph neural network has been utilized in recent years as a branch of graph model due to its excellent performance. This paper proposes an Effective Keyword Search Co-occurrence Multi-Layer Graph mining method by employing two core approaches: Multi-layer Graph Embedding and Graph Neural Networks. We conducted extensive tests using benchmarks on real-world data sets. Considering the experimental findings, the proposed method enhanced with the regularization approach is substantially excellent, with a 10% increment in precision, recall, and f1-score.</p>","PeriodicalId":49900,"journal":{"name":"Machine Learning","volume":"32 1","pages":""},"PeriodicalIF":7.5,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140596326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Training data influence analysis and estimation: a survey
Pub Date: 2024-03-29 | DOI: 10.1007/s10994-023-06495-7
Zayd Hammoudeh, Daniel Lowd
Good models require good training data. For overparameterized deep models, the causal relationship between training data and model predictions is increasingly opaque and poorly understood. Influence analysis partially demystifies training’s underlying interactions by quantifying the amount each training instance alters the final model. Measuring the training data’s influence exactly can be provably hard in the worst case; this has led to the development and use of influence estimators, which only approximate the true influence. This paper provides the first comprehensive survey of training data influence analysis and estimation. We begin by formalizing the various, and in places orthogonal, definitions of training data influence. We then organize state-of-the-art influence analysis methods into a taxonomy; we describe each of these methods in detail and compare their underlying assumptions, asymptotic complexities, and overall strengths and weaknesses. Finally, we propose future research directions to make influence analysis more useful in practice as well as more theoretically and empirically sound.
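As a concrete anchor for the exact, retraining-based definition of influence that such estimators approximate, here is a minimal leave-one-out sketch using scikit-learn; the dataset and model are placeholders, not the survey's setup.

```python
# A minimal sketch of leave-one-out influence: retrain without one instance
# and measure the change in test loss. This illustrates the exact definition
# that influence estimators approximate, not any particular estimator.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

X, y = make_classification(n_samples=200, random_state=0)
X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]

def test_loss(X_train, y_train):
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return log_loss(y_te, model.predict_proba(X_te))

base = test_loss(X_tr, y_tr)
for i in range(5):  # influence of the first five training instances
    mask = np.arange(len(X_tr)) != i
    # Positive influence: removing the point hurts, so the point was helpful.
    influence = test_loss(X_tr[mask], y_tr[mask]) - base
    print(f"instance {i}: leave-one-out influence {influence:+.4f}")
```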
{"title":"Training data influence analysis and estimation: a survey","authors":"Zayd Hammoudeh, Daniel Lowd","doi":"10.1007/s10994-023-06495-7","DOIUrl":"https://doi.org/10.1007/s10994-023-06495-7","url":null,"abstract":"<p>Good models require good training data. For overparameterized deep models, the causal relationship between training data and model predictions is increasingly opaque and poorly understood. Influence analysis partially demystifies training’s underlying interactions by quantifying the amount each training instance alters the final model. Measuring the training data’s influence exactly can be provably hard in the worst case; this has led to the development and use of influence estimators, which only approximate the true influence. This paper provides the first comprehensive survey of training data influence analysis and estimation. We begin by formalizing the various, and in places orthogonal, definitions of training data influence. We then organize state-of-the-art influence analysis methods into a taxonomy; we describe each of these methods in detail and compare their underlying assumptions, asymptotic complexities, and overall strengths and weaknesses. Finally, we propose future research directions to make influence analysis more useful in practice as well as more theoretically and empirically sound.</p>","PeriodicalId":49900,"journal":{"name":"Machine Learning","volume":"43 1","pages":""},"PeriodicalIF":7.5,"publicationDate":"2024-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140884710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Machine learning with a reject option: a survey
Pub Date: 2024-03-29 | DOI: 10.1007/s10994-024-06534-x
Kilian Hendrickx, Lorenzo Perini, Dries Van der Plas, Wannes Meert, Jesse Davis
Machine learning models always make a prediction, even when it is likely to be inaccurate. This behavior should be avoided in many decision support applications, where mistakes can have severe consequences. Although already studied as early as 1970, machine learning with rejection has recently regained interest. This subfield enables machine learning models to abstain from making a prediction when they are likely to make a mistake. This survey aims to provide an overview of machine learning with rejection. We introduce the conditions leading to two types of rejection, ambiguity rejection and novelty rejection, which we carefully formalize. Moreover, we review and categorize strategies to evaluate a model's predictive and rejective quality. Additionally, we define the existing architectures for models with rejection and describe the standard techniques for learning such models. Finally, we provide examples of relevant application domains and show how machine learning with rejection relates to other machine learning research areas.
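For intuition, here is a minimal sketch of the simplest ambiguity-rejection design, confidence thresholding: abstain whenever the top class probability falls below a threshold. The base model and threshold are illustrative assumptions; the survey covers far more principled rejector architectures.

```python
# A minimal sketch of ambiguity rejection via confidence thresholding.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, flip_y=0.15, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X[:300], y[:300])

proba = model.predict_proba(X[300:])
confidence = proba.max(axis=1)
tau = 0.75  # abstain below this confidence (assumed value)

accept = confidence >= tau          # reject the rest
preds = proba.argmax(axis=1)
coverage = accept.mean()            # fraction of inputs we dare predict on
accuracy_on_accepted = (preds[accept] == y[300:][accept]).mean()
print(f"coverage {coverage:.2f}, accuracy on accepted {accuracy_on_accepted:.2f}")
```

Raising the threshold trades coverage for accuracy on the accepted inputs, which is exactly the predictive-versus-rejective quality trade-off the survey's evaluation strategies quantify.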
{"title":"Machine learning with a reject option: a survey","authors":"Kilian Hendrickx, Lorenzo Perini, Dries Van der Plas, Wannes Meert, Jesse Davis","doi":"10.1007/s10994-024-06534-x","DOIUrl":"https://doi.org/10.1007/s10994-024-06534-x","url":null,"abstract":"<p>Machine learning models always make a prediction, even when it is likely to be inaccurate. This behavior should be avoided in many decision support applications, where mistakes can have severe consequences. Albeit already studied in 1970, machine learning with rejection recently gained interest. This machine learning subfield enables machine learning models to abstain from making a prediction when likely to make a mistake. This survey aims to provide an overview on machine learning with rejection. We introduce the conditions leading to two types of rejection, ambiguity and novelty rejection, which we carefully formalize. Moreover, we review and categorize strategies to evaluate a model’s predictive and rejective quality. Additionally, we define the existing architectures for models with rejection and describe the standard techniques for learning such models. Finally, we provide examples of relevant application domains and show how machine learning with rejection relates to other machine learning research areas.</p>","PeriodicalId":49900,"journal":{"name":"Machine Learning","volume":"17 1","pages":""},"PeriodicalIF":7.5,"publicationDate":"2024-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140884416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Personalization for web-based services using offline reinforcement learning
Pub Date: 2024-03-28 | DOI: 10.1007/s10994-024-06525-y
Pavlos Athanasios Apostolopoulos, Zehui Wang, Hanson Wang, Tenghyu Xu, Chad Zhou, Kittipate Virochsiri, Norm Zhou, Igor L. Markov
Large-scale Web-based services present opportunities for improving UI policies based on observed user interactions. We address the challenges of learning such policies through offline reinforcement learning (RL). Deployed in a production system for user authentication in a major social network, our approach significantly improves long-term objectives. We articulate practical challenges, provide insights on the training and evaluation of offline RL, and discuss generalizations toward offline RL's deployment in industry-scale applications.
{"title":"Personalization for web-based services using offline reinforcement learning","authors":"Pavlos Athanasios Apostolopoulos, Zehui Wang, Hanson Wang, Tenghyu Xu, Chad Zhou, Kittipate Virochsiri, Norm Zhou, Igor L. Markov","doi":"10.1007/s10994-024-06525-y","DOIUrl":"https://doi.org/10.1007/s10994-024-06525-y","url":null,"abstract":"<p>Large-scale Web-based services present opportunities for improving UI policies based on observed user interactions. We address challenges of learning such policies through offline reinforcement learning (RL). Deployed in a production system for user authentication in a major social network, it significantly improves long-term objectives. We articulate practical challenges, provide insights on training and evaluation of offline RL, and discuss generalizations toward offline RL’s deployment in industry-scale applications.</p>","PeriodicalId":49900,"journal":{"name":"Machine Learning","volume":"20 1","pages":""},"PeriodicalIF":7.5,"publicationDate":"2024-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140884612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Can cross-domain term extraction benefit from cross-lingual transfer and nested term labeling?
Pub Date: 2024-03-27 | DOI: 10.1007/s10994-023-06506-7
Hanh Thi Hong Tran, Matej Martinc, Andraz Repar, Nikola Ljubešić, Antoine Doucet, Senja Pollak
Automatic term extraction (ATE) is a natural language processing task that eases the effort of manually identifying terms from domain-specific corpora by providing a list of candidate terms. In this paper, we treat ATE as a sequence-labeling task and explore the efficacy of XLMR in evaluating cross-lingual and multilingual learning against monolingual learning in the cross-domain ATE context. Additionally, we introduce NOBI, a novel annotation mechanism enabling the labeling of single-word nested terms. Our experiments are conducted on the ACTER corpus, encompassing four domains and three languages (English, French, and Dutch), as well as the RSDO5 Slovenian corpus, encompassing four additional domains. Results indicate that cross-lingual and multilingual models outperform monolingual settings, showcasing improved F1-scores for all languages within the ACTER dataset. When incorporating an additional Slovenian corpus into the training set, the multilingual model exhibits superior performance compared to state-of-the-art approaches in specific scenarios. Moreover, the newly introduced NOBI labeling mechanism enhances the classifier’s capacity to extract short nested terms significantly, leading to substantial improvements in Recall for the ACTER dataset and consequentially boosting the overall F1-score performance.
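For readers unfamiliar with the sequence-labeling framing, the sketch below shows standard BIO decoding for term extraction and the limitation that motivates NOBI: a nested single-word term inside a longer term cannot receive its own label. (The abstract does not spell out NOBI's tag set, so only the BIO baseline is shown.)

```python
# A minimal sketch of ATE as sequence labeling with standard BIO tags. A token
# inside a longer term gets exactly one tag, so a nested single-word term
# ("learning" inside "machine learning") cannot also be marked as a term of
# its own -- the gap NOBI is designed to close.
tokens = ["deep", "machine", "learning", "models", "need", "data"]
bio    = ["O",    "B-Term",  "I-Term",   "O",      "O",    "O"]

def decode_terms(tokens, tags):
    """Collect the spans tagged as terms under the BIO scheme."""
    terms, current = [], []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B"):
            if current:
                terms.append(" ".join(current))
            current = [tok]
        elif tag.startswith("I") and current:
            current.append(tok)
        else:
            if current:
                terms.append(" ".join(current))
            current = []
    if current:
        terms.append(" ".join(current))
    return terms

print(decode_terms(tokens, bio))  # ['machine learning'] -- nested 'learning' is lost
```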
{"title":"Can cross-domain term extraction benefit from cross-lingual transfer and nested term labeling?","authors":"Hanh Thi Hong Tran, Matej Martinc, Andraz Repar, Nikola Ljubešić, Antoine Doucet, Senja Pollak","doi":"10.1007/s10994-023-06506-7","DOIUrl":"https://doi.org/10.1007/s10994-023-06506-7","url":null,"abstract":"<p>Automatic term extraction (ATE) is a natural language processing task that eases the effort of manually identifying terms from domain-specific corpora by providing a list of candidate terms. In this paper, we treat ATE as a sequence-labeling task and explore the efficacy of XLMR in evaluating cross-lingual and multilingual learning against monolingual learning in the cross-domain ATE context. Additionally, we introduce NOBI, a novel annotation mechanism enabling the labeling of single-word nested terms. Our experiments are conducted on the ACTER corpus, encompassing four domains and three languages (English, French, and Dutch), as well as the RSDO5 Slovenian corpus, encompassing four additional domains. Results indicate that cross-lingual and multilingual models outperform monolingual settings, showcasing improved F1-scores for all languages within the ACTER dataset. When incorporating an additional Slovenian corpus into the training set, the multilingual model exhibits superior performance compared to state-of-the-art approaches in specific scenarios. Moreover, the newly introduced NOBI labeling mechanism enhances the classifier’s capacity to extract short nested terms significantly, leading to substantial improvements in Recall for the ACTER dataset and consequentially boosting the overall F1-score performance.</p>","PeriodicalId":49900,"journal":{"name":"Machine Learning","volume":"32 1","pages":""},"PeriodicalIF":7.5,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140310898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Structure discovery in PAC-learning by random projections
Pub Date: 2024-03-26 | DOI: 10.1007/s10994-024-06531-0
High dimensional learning is data-hungry in general; however, many natural data sources and real-world learning problems possess some hidden low-complexity structure that permits effective learning from relatively small sample sizes. We are interested in the general question of how to discover and exploit such hidden benign traits when problem-specific prior knowledge is insufficient. In this work, we address this question through random projection's ability to expose structure. We study both compressive learning and high dimensional learning from this angle by introducing the notions of compressive distortion and compressive complexity. We give user-friendly PAC bounds in the agnostic setting that are formulated in terms of these quantities, and we show that our bounds can be tight when these quantities are small. We then instantiate these quantities in several examples of particular learning problems, demonstrating their ability to discover interpretable structural characteristics that make high dimensional instances of these problems solvable to good approximation in a random linear subspace. In the examples considered, these turn out to resemble some familiar benign traits such as the margin, the margin distribution, the intrinsic dimension, the spectral decay of the data covariance, or the norms of parameters, while our general notions of compressive distortion and compressive complexity serve to unify these, and may be used to discover benign structural traits for other PAC-learnable problems.
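For background, the standard guarantee underlying random projections, the Johnson-Lindenstrauss lemma, is recalled below; the paper's compressive distortion and compressive complexity are separate notions not defined in this abstract.

```latex
% Background only: the Johnson--Lindenstrauss guarantee that a random linear
% map nearly preserves pairwise geometry, the basic tool behind learning in a
% random linear subspace. Not the paper's compressive-complexity bounds.
Let $R \in \mathbb{R}^{k \times d}$ have i.i.d.\ entries
$R_{ij} \sim \mathcal{N}(0, 1/k)$. For any finite set $S \subset \mathbb{R}^d$
with $|S| = n$ and any $\varepsilon \in (0,1)$, if
$k = O(\varepsilon^{-2} \log n)$, then with high probability, for all
$x, y \in S$,
\[
(1-\varepsilon)\,\|x-y\|_2^2 \;\le\; \|Rx - Ry\|_2^2 \;\le\; (1+\varepsilon)\,\|x-y\|_2^2 .
\]
```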
{"title":"Structure discovery in PAC-learning by random projections","authors":"","doi":"10.1007/s10994-024-06531-0","DOIUrl":"https://doi.org/10.1007/s10994-024-06531-0","url":null,"abstract":"<h3>Abstract</h3> <p>High dimensional learning is data-hungry in general; however, many natural data sources and real-world learning problems posses some hidden low-complexity structure that permit effective learning from relatively small sample sizes. We are interested in the general question of how to discover and exploit such hidden benign traits when problem-specific prior knowledge is insufficient. In this work, we address this question through random projection’s ability to expose structure. We study both compressive learning and high dimensional learning from this angle by introducing the notions of compressive distortion and compressive complexity. We give user-friendly PAC bounds in the agnostic setting that are formulated in terms of these quantities, and we show that our bounds can be tight when these quantities are small. We then instantiate these quantities in several examples of particular learning problems, demonstrating their ability to discover interpretable structural characteristics that make high dimensional instances of these problems solvable to good approximation in a random linear subspace. In the examples considered, these turn out to resemble some familiar benign traits such as the margin, the margin distribution, the intrinsic dimension, the spectral decay of the data covariance, or the norms of parameters—while our general notions of compressive distortion and compressive complexity serve to unify these, and may be used to discover benign structural traits for other PAC-learnable problems.</p>","PeriodicalId":49900,"journal":{"name":"Machine Learning","volume":"45 1","pages":""},"PeriodicalIF":7.5,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140311133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When are they coming? Understanding and forecasting the timeline of arrivals at the FC Barcelona stadium on match days
Pub Date: 2024-03-26 | DOI: 10.1007/s10994-023-06499-3
Feliu Serra-Burriel, Pedro Delicado, Fernando M. Cucchietti, Eduardo Graells-Garrido, Alex Gil, Imanol Eguskiza
Futbol Club Barcelona operates the largest stadium in Europe (with a seating capacity of almost one hundred thousand people) and manages recurring sports events. These events are influenced by multiple conditions (time and day of the week, weather, adversary) and affect city dynamics, e.g., peak demand for related services like public transport and stores. We study fine-grained audience entrances at the stadium, segregated by visitor type and gate, to gain insights and predict the arrival behavior at future games, with a direct impact on the organizational performance and productivity of the business. We can forecast the timeline of arrivals at gate level 72 hours prior to kickoff, facilitating operational and organizational decision-making by anticipating potential agglomerations and audience behavior. Furthermore, we can identify patterns for different types of visitors and understand how relevant factors affect them. These findings directly impact commercial and business interests and can alter operational logistics, venue management, and safety.
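As a toy illustration of the forecasting target (not the paper's model), the sketch below produces a per-gate arrival curve by averaging historical arrival shares per time-to-kickoff bucket and scaling by expected attendance; all gate names and figures are invented.

```python
# A naive baseline sketch for an arrival-timeline forecast. The paper's model
# conditions on far richer factors (weekday, weather, adversary); everything
# here is an assumption for illustration.
import numpy as np

# history[h][gate] = attendee counts per 10-minute bucket, from 120 minutes
# before kickoff to kickoff, for historical match h (made-up numbers).
history = [
    {"north": np.array([5, 10, 30, 80, 160, 220, 310, 400, 520, 380, 190, 60])},
    {"north": np.array([8, 12, 25, 90, 150, 240, 300, 420, 500, 360, 200, 55])},
]

def forecast(history, gate, expected_attendance):
    shares = [m[gate] / m[gate].sum() for m in history]  # normalize each match
    profile = np.mean(shares, axis=0)                    # average arrival curve
    return expected_attendance * profile

print(np.round(forecast(history, "north", 30000)))
```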
{"title":"When are they coming? Understanding and forecasting the timeline of arrivals at the FC Barcelona stadium on match days","authors":"Feliu Serra-Burriel, Pedro Delicado, Fernando M. Cucchietti, Eduardo Graells-Garrido, Alex Gil, Imanol Eguskiza","doi":"10.1007/s10994-023-06499-3","DOIUrl":"https://doi.org/10.1007/s10994-023-06499-3","url":null,"abstract":"<p>Futbol Club Barcelona operates the largest stadium in Europe (with a seating capacity of almost one hundred thousand people) and manages recurring sports events. These are influenced by multiple conditions (time and day of the week, weather, adversary) and affect city dynamics—e.g., peak demand for related services like public transport and stores. We study fine grain audience entrances at the stadium segregated by visitor type and gate to gain insights and predict the arrival behavior of future games, with a direct impact on the organizational performance and productivity of the business. We can forecast the timeline of arrivals at gate level 72 h prior to kickoff, facilitating operational and organizational decision-making by anticipating potential agglomerations and audience behavior. Furthermore, we can identify patterns for different types of visitors and understand how relevant factors affect them. These findings directly impact commercial and business interests and can alter operational logistics, venue management, and safety.</p>","PeriodicalId":49900,"journal":{"name":"Machine Learning","volume":"72 1","pages":""},"PeriodicalIF":7.5,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140310938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ijuice: integer JUstIfied counterfactual explanations
Pub Date: 2024-03-26 | DOI: 10.1007/s10994-024-06530-1
Alejandro Kuratomi, Ioanna Miliou, Zed Lee, Tony Lindgren, Panagiotis Papapetrou
Counterfactual explanations modify the feature values of an instance in order to alter its prediction from an undesired to a desired label. As such, they are highly useful for providing trustworthy interpretations of decision-making in domains where complex and opaque machine learning algorithms are utilized. To guarantee their quality and promote user trust, they need to satisfy the faithfulness desideratum, when supported by the data distribution. We hereby propose a counterfactual generation algorithm for mixed-feature spaces that prioritizes faithfulness through k-justification, a novel counterfactual property introduced in this paper. The proposed algorithm employs a graph representation of the search space and provides counterfactuals by solving an integer program. In addition, the algorithm is classifier-agnostic and is not dependent on the order in which the feature space is explored. In our empirical evaluation, we demonstrate that it guarantees k-justification while showing comparable performance to state-of-the-art methods in feasibility, sparsity, and proximity.
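To make the faithfulness desideratum concrete, the sketch below returns the nearest observed instance with the desired predicted label, i.e., a counterfactual supported by the data distribution. The paper's method instead solves an integer program over a graph representation of the search space, which this sketch does not reproduce.

```python
# A minimal sketch of a data-supported counterfactual: among observed
# instances the classifier assigns the desired label, return the one closest
# to the query. Illustrates faithfulness only, not the Ijuice algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=5, random_state=2)
clf = RandomForestClassifier(random_state=2).fit(X, y)

def justified_counterfactual(x, X_pool, desired_label):
    """Closest pool instance predicted as `desired_label` (Euclidean distance)."""
    preds = clf.predict(X_pool)
    candidates = X_pool[preds == desired_label]
    distances = np.linalg.norm(candidates - x, axis=1)
    return candidates[distances.argmin()]

x = X[0]
cf = justified_counterfactual(x, X, desired_label=1 - clf.predict([x])[0])
print("query:         ", np.round(x, 2))
print("counterfactual:", np.round(cf, 2))
```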
{"title":"Ijuice: integer JUstIfied counterfactual explanations","authors":"Alejandro Kuratomi, Ioanna Miliou, Zed Lee, Tony Lindgren, Panagiotis Papapetrou","doi":"10.1007/s10994-024-06530-1","DOIUrl":"https://doi.org/10.1007/s10994-024-06530-1","url":null,"abstract":"<p>Counterfactual explanations modify the feature values of an instance in order to alter its prediction from an undesired to a desired label. As such, they are highly useful for providing trustworthy interpretations of decision-making in domains where complex and opaque machine learning algorithms are utilized. To guarantee their quality and promote user trust, they need to satisfy the <i>faithfulness</i> desideratum, when supported by the data distribution. We hereby propose a counterfactual generation algorithm for mixed-feature spaces that prioritizes faithfulness through <i>k-justification</i>, a novel counterfactual property introduced in this paper. The proposed algorithm employs a graph representation of the search space and provides counterfactuals by solving an integer program. In addition, the algorithm is classifier-agnostic and is not dependent on the order in which the feature space is explored. In our empirical evaluation, we demonstrate that it guarantees k-justification while showing comparable performance to state-of-the-art methods in <i>feasibility</i>, <i>sparsity</i>, and <i>proximity</i>.</p>","PeriodicalId":49900,"journal":{"name":"Machine Learning","volume":"47 1","pages":""},"PeriodicalIF":7.5,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140311335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bounding the Rademacher complexity of Fourier neural operators
Pub Date: 2024-03-26 | DOI: 10.1007/s10994-024-06533-y
Taeyoung Kim, Myungjoo Kang
Recently, several types of neural operators have been developed, including deep operator networks, graph neural operators, and multiwavelet-based operators. Compared with these models, the Fourier neural operator (FNO), a physics-inspired machine learning method, is computationally efficient and can learn nonlinear operators between function spaces independent of a certain finite basis. This study investigates bounds on the Rademacher complexity of the FNO based on specific group norms. Using a capacity measure based on these norms, we bound the generalization error of the model. In addition, we investigate the correlation between the empirical generalization error and the proposed capacity of the FNO. We infer that the type of group norm determines what information about the weights and architecture of the FNO model is stored in the capacity. The experimental results offer insight into the impact of the number of modes used in the FNO model on the generalization error. The results confirm that our capacity is an effective index for estimating generalization errors.
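For reference, the textbook definition of empirical Rademacher complexity and the standard generalization bound it yields (for losses in $[0,1]$) are stated below; the paper's FNO-specific, group-norm-based bounds refine this generic template.

```latex
% Textbook material (Rademacher-complexity generalization bound), not the
% paper's FNO-specific result. Empirical Rademacher complexity of a class
% $\mathcal{F}$ on a sample $S = (x_1, \dots, x_n)$:
\[
\widehat{\mathfrak{R}}_S(\mathcal{F})
  = \mathbb{E}_{\sigma}\!\left[\sup_{f \in \mathcal{F}}
    \frac{1}{n} \sum_{i=1}^{n} \sigma_i f(x_i)\right],
\qquad \sigma_i \overset{\text{i.i.d.}}{\sim} \mathrm{Uniform}\{-1, +1\},
\]
% and, for $\mathcal{F}$ taking values in $[0,1]$, with probability at least
% $1-\delta$ over the draw of $S$, for all $f \in \mathcal{F}$:
\[
\mathbb{E}[f] \;\le\; \frac{1}{n}\sum_{i=1}^{n} f(x_i)
  \;+\; 2\,\widehat{\mathfrak{R}}_S(\mathcal{F})
  \;+\; 3\sqrt{\frac{\log(2/\delta)}{2n}} .
\]
```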
{"title":"Bounding the Rademacher complexity of Fourier neural operators","authors":"Taeyoung Kim, Myungjoo Kang","doi":"10.1007/s10994-024-06533-y","DOIUrl":"https://doi.org/10.1007/s10994-024-06533-y","url":null,"abstract":"<p>Recently, several types of neural operators have been developed, including deep operator networks, graph neural operators, and Multiwavelet-based operators. Compared with these models, the Fourier neural operator (FNO), a physics-inspired machine learning method, is computationally efficient and can learn nonlinear operators between function spaces independent of a certain finite basis. This study investigated the bounding of the Rademacher complexity of the FNO based on specific group norms. Using capacity based on these norms, we bound the generalization error of the model. In addition, we investigate the correlation between the empirical generalization error and the proposed capacity of FNO. We infer that the type of group norm determines the information about the weights and architecture of the FNO model stored in capacity. The experimental results offer insight into the impact of the number of modes used in the FNO model on the generalization error. The results confirm that our capacity is an effective index for estimating generalization errors.</p>","PeriodicalId":49900,"journal":{"name":"Machine Learning","volume":"42 1","pages":""},"PeriodicalIF":7.5,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140316731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gradient boosted trees for evolving data streams
Pub Date: 2024-03-22 | DOI: 10.1007/s10994-024-06517-y
Nuwan Gunasekara, Bernhard Pfahringer, Heitor Gomes, Albert Bifet
Gradient boosting is a widely used machine learning technique that has proven highly effective in batch learning. However, its effectiveness in stream learning contexts lags behind bagging-based ensemble methods, which currently dominate the field. One reason for this discrepancy is the challenge of adapting the booster to a new concept following a concept drift. Resetting the entire booster can lead to significant performance degradation as it struggles to learn the new concept. Resetting only some parts of the booster can be more effective, but identifying which parts to reset is difficult, given that each boosting step builds on the previous prediction. To overcome these difficulties, we propose Streaming Gradient Boosted Trees (Sgbt), which is trained using the weighted squared loss elicited in XGBoost. Sgbt exploits trees with a replacement strategy to detect and recover from drifts, enabling the ensemble to adapt without sacrificing predictive performance. Our empirical evaluation of Sgbt on a range of streaming datasets with challenging drift scenarios demonstrates that it outperforms current state-of-the-art methods for evolving data streams.
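To illustrate the core mechanics (boosting on residuals over mini-batches, with member replacement on a drift signal), here is a self-contained toy sketch; the replacement heuristic, learning rate, and tree settings are assumptions and not Sgbt's actual design.

```python
# A toy streaming gradient booster: each mini-batch fits a new tree to the
# squared-loss residuals, and a crude error-spike rule (a stand-in for a
# proper drift detector) drops the oldest member. Illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

class StreamingBooster:
    def __init__(self, n_trees=10, lr=0.1):
        self.n_trees, self.lr, self.trees = n_trees, lr, []
        self.prev_err = None

    def predict(self, X):
        out = np.zeros(len(X))
        for t in self.trees:
            out += self.lr * t.predict(X)
        return out

    def partial_fit(self, X, y):
        err = np.mean((y - self.predict(X)) ** 2)
        if self.prev_err is not None and err > 2.0 * self.prev_err and self.trees:
            self.trees.pop(0)  # drift heuristic: replace the oldest member
        self.prev_err = err
        residual = y - self.predict(X)  # squared loss => fit residuals
        self.trees.append(DecisionTreeRegressor(max_depth=3).fit(X, residual))
        if len(self.trees) > self.n_trees:
            self.trees.pop(0)

rng = np.random.default_rng(0)
booster = StreamingBooster()
for step in range(20):
    X = rng.normal(size=(64, 4))
    slope = 1.0 if step < 10 else -1.0  # abrupt concept drift at step 10
    y = slope * X[:, 0] + 0.1 * rng.normal(size=64)
    booster.partial_fit(X, y)
print("final batch MSE:", round(float(np.mean((y - booster.predict(X)) ** 2)), 3))
```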
{"title":"Gradient boosted trees for evolving data streams","authors":"Nuwan Gunasekara, Bernhard Pfahringer, Heitor Gomes, Albert Bifet","doi":"10.1007/s10994-024-06517-y","DOIUrl":"https://doi.org/10.1007/s10994-024-06517-y","url":null,"abstract":"<p>Gradient Boosting is a widely-used machine learning technique that has proven highly effective in batch learning. However, its effectiveness in stream learning contexts lags behind bagging-based ensemble methods, which currently dominate the field. One reason for this discrepancy is the challenge of adapting the booster to new concept following a concept drift. Resetting the entire booster can lead to significant performance degradation as it struggles to learn the new concept. Resetting only some parts of the booster can be more effective, but identifying which parts to reset is difficult, given that each boosting step builds on the previous prediction. To overcome these difficulties, we propose Streaming Gradient Boosted Trees (<span>Sgbt</span>), which is trained using weighted squared loss elicited in <span>XGBoost</span>. <span>Sgbt</span> exploits trees with a replacement strategy to detect and recover from drifts, thus enabling the ensemble to adapt without sacrificing the predictive performance. Our empirical evaluation of <span>Sgbt</span> on a range of streaming datasets with challenging drift scenarios demonstrates that it outperforms current state-of-the-art methods for evolving data streams.</p>","PeriodicalId":49900,"journal":{"name":"Machine Learning","volume":"25 1","pages":""},"PeriodicalIF":7.5,"publicationDate":"2024-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140205735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}