
Journal of Intelligent Information Systems: Latest Publications

A novel technique using graph neural networks and relevance scoring to improve the performance of knowledge graph-based question answering systems
IF 3.4, CAS Tier 3 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-01-22. DOI: 10.1007/s10844-023-00839-4
Sincy V. Thambi, P. C. Reghu Raj

A Knowledge Graph-based Question Answering (KGQA) system attempts to answer a given natural language question using a knowledge graph (KG) rather than text data. Current KGQA methods attempt to determine whether there is an explicit relationship between the entities in the question and a well-structured relationship between them in the KG. However, such strategies are difficult to build and train, limiting their consistency and versatility. The use of language models such as BERT has aided the advancement of natural language question answering. In this paper, we present a novel Graph Neural Network (GNN)-based approach with relevance scoring for improving KGQA. GNNs use the weights of nodes and edges to influence information propagation while updating the node features in the network. The suggested method comprises subgraph construction, weighting of nodes and edges, and pruning to obtain meaningful answers. A BERT-based GNN is used to build subgraph node embeddings. We tested the influence of weighting for both nodes and edges and observed that the system performs better for weighted graphs than for unweighted graphs. Additionally, we experimented with several GNN convolutional layers and obtained improved results by combining GENeralised Graph Convolution (GENConv) with node weights for simple questions. Extensive testing on benchmark datasets confirmed the effectiveness of the proposed model in comparison to state-of-the-art KGQA systems.
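To make the weighted propagation step concrete, the following minimal PyTorch sketch shows how node and edge relevance scores can scale the messages exchanged in a subgraph. The toy subgraph, the 8-dimensional features, and the WeightedGraphConv layer are illustrative assumptions, not the authors' implementation, which relies on BERT-based embeddings and GENConv.

```python
import torch

class WeightedGraphConv(torch.nn.Module):
    """Toy graph convolution where edge weights scale neighbour messages
    and node weights scale the self contribution (illustrative only)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin_self = torch.nn.Linear(in_dim, out_dim)
        self.lin_neigh = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x, edge_index, edge_weight, node_weight):
        src, dst = edge_index                       # edges src -> dst
        msg = self.lin_neigh(x[src]) * edge_weight.unsqueeze(-1)
        agg = torch.zeros(x.size(0), msg.size(-1))
        agg.index_add_(0, dst, msg)                 # sum weighted messages per node
        return torch.relu(node_weight.unsqueeze(-1) * self.lin_self(x) + agg)

# Hypothetical subgraph: 4 entities with 8-dim (e.g. BERT-derived) features.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])   # src row, dst row
edge_weight = torch.tensor([0.9, 0.2, 0.7, 0.5])           # relevance of each relation
node_weight = torch.tensor([1.0, 0.8, 0.3, 0.6])           # relevance of each entity

layer = WeightedGraphConv(8, 16)
print(layer(x, edge_index, edge_weight, node_weight).shape)  # torch.Size([4, 16])
```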

Citations: 0
Sentiment analysis of twitter data to detect and predict political leniency using natural language processing
IF 3.4, CAS Tier 3 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-01-19. DOI: 10.1007/s10844-024-00842-3

Abstract

This paper analyses Twitter data to detect the political leaning of a profile by extracting and classifying sentiments expressed through tweets. The work utilizes natural language processing, augmented with sentiment analysis algorithms and machine learning techniques, to classify specific keywords. The proposed methodology initially performs data pre-processing, followed by multi-aspect sentiment analysis to compute the sentiment score of the extracted keywords and to precisely classify users into various clusters based on their similarity score with respect to a sample user in each cluster. The proposed technique also predicts the sentiment of a profile towards unknown keywords and gauges the bias of an unidentified user towards political events or social issues. The proposed technique was tested on a Twitter dataset with 1.72 million tweets taken from over 10,000 profiles and was able to successfully identify the political leniency of the user profiles with a 99% confidence level, and also on a synthetic dataset with 2500 tweets, where the predicted accuracy and F1 score were 0.99 and 0.985 respectively, and 0.97 and 0.975 when neutral users were also considered for classification. The paper also identifies the impact of political decisions on various clusters by analyzing the shift in the number of users belonging to the different clusters.
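A minimal sketch of the cluster-assignment step described above: each profile is reduced to a sentiment vector over a fixed keyword set and assigned to the cluster whose sample user is most similar. The keyword list, the sample users, and the scores are invented placeholders; the paper derives the scores from multi-aspect sentiment analysis of real tweets.

```python
import numpy as np

# Hypothetical sentiment scores (in [-1, 1]) for three political keywords.
KEYWORDS = ["tax_reform", "immigration", "climate_policy"]

def profile_vector(keyword_scores):
    """Build a fixed-order sentiment vector for a profile (0 if keyword unseen)."""
    return np.array([keyword_scores.get(k, 0.0) for k in KEYWORDS])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# One hand-picked sample user per cluster (placeholder values).
cluster_samples = {
    "party_A": profile_vector({"tax_reform": 0.8, "immigration": -0.4, "climate_policy": 0.6}),
    "party_B": profile_vector({"tax_reform": -0.7, "immigration": 0.5, "climate_policy": -0.2}),
}

new_user = profile_vector({"tax_reform": 0.6, "climate_policy": 0.4})
best = max(cluster_samples, key=lambda c: cosine(new_user, cluster_samples[c]))
print(best)  # -> 'party_A' for these toy numbers
```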

Citations: 0
A qualitative analysis of knowledge graphs in recommendation scenarios through semantics-aware autoencoders
IF 3.4, CAS Tier 3 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-01-19. DOI: 10.1007/s10844-023-00830-z
Vito Bellini, Eugenio Di Sciascio, Francesco Maria Donini, Claudio Pomo, Azzurra Ragone, Angelo Schiavone

Knowledge Graphs (KGs) have already proven their strength as a source of high-quality information for different tasks such as data integration, search, text summarization, and personalization. Another prominent research field that has been benefiting from the adoption of KGs is that of Recommender Systems (RSs). Feeding an RS with data coming from a KG improves recommendation accuracy, diversity, and novelty, and paves the way to the creation of interpretable models that can be used for explanations. This possibility of combining a KG with an RS raises the question of whether such an addition can be performed in a plug-and-play fashion – also with respect to the recommendation domain – or whether each combination needs a careful evaluation. To investigate such a question, we consider all possible combinations of (i) three recommendation tasks (books, music, movies); (ii) three recommendation models fed with data from a KG (and in particular, a semantics-aware deep learning model, which we discuss in detail), compared with three baseline models without KG addition; (iii) two main encyclopedic KGs freely available on the Web: DBpedia and Wikidata. Supported by an extensive experimental evaluation, we show the final results in terms of accuracy and diversity of the various combinations, highlighting that the injection of knowledge does not always pay off. Moreover, we show how the choice of the KG, and the form of the data in it, affect the results, depending on the recommendation domain and the learning model.
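One way to picture a semantics-aware autoencoder is to let KG features span the hidden layer, so that a user's hidden activations read as preferences over those features. The sketch below, with an invented item-to-feature matrix, illustrates this idea only at a schematic level and is not the exact model evaluated in the paper.

```python
import torch

# Hypothetical binary matrix: item -> KG features (e.g. genres, directors from DBpedia).
item_kg = torch.tensor([[1, 0, 1, 0],
                        [1, 0, 0, 1],
                        [0, 1, 1, 0],
                        [0, 1, 0, 0],
                        [1, 1, 0, 1],
                        [0, 0, 1, 1]], dtype=torch.float)

class KGAutoencoder(torch.nn.Module):
    """Sketch of a semantics-aware autoencoder: the hidden layer is spanned by
    KG features, so its activations are readable as feature preferences."""
    def __init__(self, item_kg):
        super().__init__()
        self.item_kg = item_kg                         # fixed, KG-derived projection
        self.decode = torch.nn.Linear(item_kg.size(1), item_kg.size(0))

    def forward(self, ratings):                        # ratings: (batch, n_items)
        hidden = torch.sigmoid(ratings @ self.item_kg)  # per-user KG-feature profile
        return torch.sigmoid(self.decode(hidden)), hidden

model = KGAutoencoder(item_kg)
user = torch.tensor([[1., 0., 1., 0., 0., 1.]])        # items the user liked
scores, profile = model(user)
print(profile)   # interpretable preferences over the 4 KG features
print(scores)    # reconstruction = candidate recommendation scores
```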

Citations: 0
Enhancing the fairness of offensive memes detection models by mitigating unintended political bias
IF 3.4, CAS Tier 3 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-01-06. DOI: 10.1007/s10844-023-00834-9

Abstract

This paper tackles the critical challenge of detecting and mitigating unintended political bias in offensive meme detection. Political memes are a powerful tool that can be used to influence public opinion and disrupt voters' mindsets. However, current visual-linguistic models for offensive meme detection exhibit unintended bias and struggle to accurately classify non-offensive and offensive memes. This can harm the fairness of the democratic process either by targeting minority groups or promoting harmful political ideologies. With Hindi being the fifth most spoken language globally and having a significant number of native speakers, it is essential to detect and remove Hindi-based offensive memes to foster a fair and equitable democratic process. To address these concerns, we propose three debiasing techniques to mitigate the overrepresentation of majority group perspectives while addressing the suppression of minority opinions in political discourse. To support our approach, we curate a comprehensive dataset called Pol_Off_Meme, designed especially for the Hindi language. Empirical analysis of this dataset demonstrates the efficacy of our proposed debiasing techniques in reducing political bias in internet memes, promoting a fair and equitable democratic environment. Our debiased model, named DRTIM^{Adv}_{Att}, exhibited superior performance compared to the CLIP-based baseline model. It achieved a significant improvement of +9.72% in the F1-score while reducing the False Positive Rate Difference (FPRD) by -16% and the False Negative Rate Difference (FNRD) by -14.01%. Our efforts strive to cultivate a more informed and inclusive political discourse, ensuring that all opinions, irrespective of their majority or minority status, receive adequate attention and representation.
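As a rough illustration of one family of debiasing techniques, the sketch below reweights a binary cross-entropy loss so that over-represented (group, label) combinations do not dominate training. This is a generic recipe with invented toy data; it is not necessarily one of the three techniques proposed in the paper.

```python
import torch

def group_weighted_bce(logits, labels, groups):
    """Generic reweighting sketch: give each (group, label) cell a weight inversely
    proportional to its frequency, so that an over-represented political group does
    not dominate the loss (one standard debiasing recipe, not the paper's)."""
    n_cells = len(groups.unique()) * 2                 # (group, label) cells
    weights = torch.ones_like(labels, dtype=torch.float)
    for g in groups.unique():
        for y in (0, 1):
            mask = (groups == g) & (labels == y)
            if mask.any():
                weights[mask] = len(labels) / (mask.sum().float() * n_cells)
    return torch.nn.functional.binary_cross_entropy_with_logits(
        logits, labels.float(), weight=weights)

# Toy batch: 6 memes, group 0 = majority ideology, group 1 = minority ideology.
logits = torch.randn(6)
labels = torch.tensor([1, 0, 1, 0, 1, 0])
groups = torch.tensor([0, 0, 0, 0, 1, 1])
print(group_weighted_bce(logits, labels, groups))
```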

Citations: 0
Movie tag prediction: An extreme multi-label multi-modal transformer-based solution with explanation
IF 3.4, CAS Tier 3 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-01-06. DOI: 10.1007/s10844-023-00836-7
Massimo Guarascio, Marco Minici, Francesco Sergio Pisani, Erika De Francesco, Pasquale Lambardi

Providing rich and accurate metadata for indexing media content is a crucial problem for all the companies offering streaming entertainment services. These metadata are commonly employed to enhance search engine results and feed recommendation algorithms to improve the matching with user interests. However, the problem of labeling multimedia content with informative tags is challenging as the labeling procedure, manually performed by domain experts, is time-consuming and prone to error. Recently, the adoption of AI-based methods has been demonstrated to be an effective approach for automating this complex process. However, developing an effective solution requires coping with different challenging issues, such as data noise and the scarcity of labeled examples during the training phase. In this work, we address these challenges by introducing a Transformer-based framework for multi-modal multi-label classification enriched with model prediction explanation capabilities. These explanations can help the domain expert to understand the system’s predictions. Experimentation conducted on two real test cases demonstrates its effectiveness.
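The multi-modal, multi-label setup can be sketched as a late-fusion head over pre-computed text and image embeddings, trained with a sigmoid output per tag. The dimensions, fusion layer, and tag count below are assumptions for illustration; they do not reproduce the Transformer architecture proposed in the paper.

```python
import torch

class MultiModalTagger(torch.nn.Module):
    """Sketch of late fusion for multi-label tag prediction: pre-computed text and
    image embeddings (e.g. from a Transformer encoder) are fused and passed to a
    head with one output per tag."""
    def __init__(self, text_dim=768, image_dim=512, n_tags=50):
        super().__init__()
        self.fuse = torch.nn.Sequential(
            torch.nn.Linear(text_dim + image_dim, 256), torch.nn.ReLU())
        self.head = torch.nn.Linear(256, n_tags)

    def forward(self, text_emb, image_emb):
        z = self.fuse(torch.cat([text_emb, image_emb], dim=-1))
        return self.head(z)                     # raw logits, one per tag

model = MultiModalTagger()
logits = model(torch.randn(2, 768), torch.randn(2, 512))
loss = torch.nn.functional.binary_cross_entropy_with_logits(
    logits, torch.randint(0, 2, (2, 50)).float())   # multi-label target
tags = (torch.sigmoid(logits) > 0.5)                # predicted tag set per movie
print(loss.item(), tags.shape)
```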

Citations: 0
TSUNAMI - an explainable PPM approach for customer churn prediction in evolving retail data environments
IF 3.4, CAS Tier 3 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2023-12-28. DOI: 10.1007/s10844-023-00838-5
Vincenzo Pasquadibisceglie, Annalisa Appice, Giuseppe Ieva, Donato Malerba

Retail companies are greatly interested in performing continuous monitoring of the purchase traces of customers, to identify weak customers and take the necessary actions to improve customer satisfaction and ensure their revenues remain unaffected. In this paper, we formulate the customer churn prediction problem as a Predictive Process Monitoring (PPM) problem to be addressed under the possibly dynamic conditions of evolving retail data environments. To this aim, we propose TSUNAMI as a PPM approach to monitor customer loyalty in the retail sector. It processes online the sales receipt stream produced by the customers of a retail company and learns a deep neural model to detect, at an early stage, purchase traces that will result in future churners. In addition, the proposed approach integrates a mechanism to detect concept drifts in customer purchase traces and adapts the deep neural model to these drifts. Finally, to make the decisions of customer purchase monitoring explainable to potential stakeholders, we analyse the Shapley values of decisions, to explain which characteristics of the customer purchase traces are the most relevant for disentangling churners from non-churners and how these characteristics have possibly changed over time. Experiments with two benchmark retail data sets explore the effectiveness of the proposed approach.
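A minimal sketch of a sliding-window concept-drift check on the stream of prediction errors: when the recent error rate drifts away from a reference window, the deep model would be adapted. The window sizes and threshold are arbitrary, and this generic detector stands in for whatever mechanism TSUNAMI actually uses.

```python
from collections import deque

class WindowDriftDetector:
    """Minimal drift check: compare the error rate of a recent window against a
    reference window and flag drift when the gap exceeds a threshold."""
    def __init__(self, window=200, threshold=0.10):
        self.reference = deque(maxlen=window)
        self.recent = deque(maxlen=window)
        self.threshold = threshold

    def add(self, error):                     # error: 1 if the churn model was wrong
        if len(self.reference) < self.reference.maxlen:
            self.reference.append(error)
        else:
            self.recent.append(error)
        return self.drift()

    def drift(self):
        if len(self.recent) < self.recent.maxlen:
            return False
        ref = sum(self.reference) / len(self.reference)
        rec = sum(self.recent) / len(self.recent)
        return rec - ref > self.threshold     # trigger model adaptation / retraining

detector = WindowDriftDetector(window=100)
# fed online with 0/1 errors of the churn classifier on each completed trace
for err in [0] * 100 + [0, 1] * 60:
    if detector.add(err):
        print("drift detected: adapt the deep neural model")
        break
```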

Citations: 0
A bayesian-neural-networks framework for scaling posterior distributions over different-curation datasets
IF 3.4, CAS Tier 3 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2023-12-26. DOI: 10.1007/s10844-023-00837-6
Alfredo Cuzzocrea, Alessandro Baldo, Edoardo Fadda

In this paper, we propose and experimentally assess an innovative framework for scaling posterior distributions over different-curation datasets, based on Bayesian-Neural-Networks (BNN). Another innovation of our proposed study consists in enhancing the accuracy of the Bayesian classifier via intelligent sampling algorithms. The proposed methodology is relevant in emerging applicative settings, such as provenance detection and analysis and cybercrime. Our contributions are complemented by a comprehensive experimental evaluation and analysis over both static and dynamic image datasets. Derived results confirm the successful application of our proposed methodology to emerging big data analytics settings.
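One inexpensive way to obtain a posterior distribution over predictions, which is what the framework scales across datasets, is Monte-Carlo dropout: keeping dropout active at inference time and sampling several forward passes. The sketch below is only meant to illustrate that idea; the network shape and sample count are assumptions, not the paper's BNN or its sampling algorithms.

```python
import torch

class MCDropoutNet(torch.nn.Module):
    """Monte-Carlo-dropout sketch: stochastic forward passes approximate a
    Bayesian posterior over class probabilities for each input."""
    def __init__(self, in_dim=16, n_classes=3):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(in_dim, 64), torch.nn.ReLU(),
            torch.nn.Dropout(p=0.3),
            torch.nn.Linear(64, n_classes))

    def forward(self, x):
        return self.net(x)

model = MCDropoutNet()
model.train()                         # keep dropout stochastic on purpose
x = torch.randn(8, 16)                # e.g. image features from one dataset
with torch.no_grad():
    samples = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(50)])
mean, std = samples.mean(0), samples.std(0)
print(mean.shape, std.shape)          # per-class posterior mean and spread per input
```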

Citations: 0
Tell me what you Like: introducing natural language preference elicitation strategies in a virtual assistant for the movie domain
IF 3.4, CAS Tier 3 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2023-12-12. DOI: 10.1007/s10844-023-00835-8
Cataldo Musto, Alessandro Francesco Maria Martina, Andrea Iovine, Fedelucio Narducci, Marco de Gemmis, Giovanni Semeraro

Preference elicitation is a crucial step for every recommendation algorithm. In this paper, we present a strategy that allows users to express their preferences and needs through natural language statements. In particular, our natural language preference elicitation pipeline allows users to express preferences on objective movie features (e.g., actors, directors, etc.) as well as on subjective features that are collected by mining user-written movie reviews. To validate our claims, we carried out a user study in the movie domain (N=114). The main finding of our experiment is that users tend to express their preferences by using objective features, whose usage largely exceeds that of subjective features, which are more complicated to express. However, when the users are able to express their preferences also in terms of subjective features, they obtain better recommendations in a lower number of conversation turns. We have also identified the main challenges that arise when users talk to the virtual assistant by using subjective features, and this paves the way for future developments of our methodology.
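The elicitation step can be pictured as mapping a free-text utterance to objective slots (actors, directors) and subjective terms mined from reviews. The vocabularies and the negation pattern in the sketch below are invented placeholders; the actual pipeline relies on full natural language processing rather than substring matching.

```python
import re

# Hypothetical vocabularies: objective features come from structured metadata,
# subjective ones from terms mined in user reviews (both invented here).
OBJECTIVE = {"actor": ["di caprio", "tom hanks"], "director": ["nolan", "spielberg"]}
SUBJECTIVE = ["mind-blowing", "slow paced", "great soundtrack", "plot twist"]

def elicit(utterance):
    """Very small pattern-based sketch of the elicitation step: map a natural
    language statement to objective and subjective preference slots."""
    text = utterance.lower()
    prefs = {"objective": [], "subjective": []}
    for slot, values in OBJECTIVE.items():
        for v in values:
            if v in text:
                prefs["objective"].append((slot, v))
    for term in SUBJECTIVE:
        if term in text:
            prefs["subjective"].append(term)
    prefs["negated"] = bool(re.search(r"\b(don't|do not|hate)\b", text))
    return prefs

print(elicit("I love movies by Nolan with a good plot twist"))
# {'objective': [('director', 'nolan')], 'subjective': ['plot twist'], 'negated': False}
```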

Citations: 0
Audio super-resolution via vision transformer
IF 3.4, CAS Tier 3 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2023-12-12. DOI: 10.1007/s10844-023-00833-w
Simona Nisticò, Luigi Palopoli, Adele Pia Romano

Audio super-resolution refers to techniques that improve the quality of audio signals, usually by exploiting bandwidth extension methods, whereby audio enhancement is obtained by expanding the phase and the spectrogram of the input audio traces. These techniques are therefore highly significant for all those cases where audio traces lack relevant parts of the audible spectrum. In several cases, the given input signal contains the low-band frequencies (the easiest to capture with low-quality recording instruments) whereas the high band must be generated. In this paper, we illustrate techniques implemented in a system for bandwidth extension that works on musical tracks and generates the high-band frequencies starting from the low-band ones. The system, called ViT Super-resolution (ViT-SR), features an architecture based on a Generative Adversarial Network and a Vision Transformer model. In particular, two versions of the architecture are presented in this paper, which work on different input frequency ranges. Experiments reported in the paper prove the effectiveness of our approach. In particular, we demonstrate that it is possible to faithfully reconstruct the high-band signal of an audio file with only its low-band spectrum available as input, including the harmonics occurring in the audio tracks, which are usually difficult to generate synthetically and which contribute significantly to the final perceived sound quality.
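The bandwidth-extension task itself can be set up with a few lines of NumPy: frame the signal, take a magnitude spectrogram, and split it at a cutoff into the observed low band and the high band the generator must predict. The sampling rate, FFT size, and cutoff below are illustrative choices, not the configuration used by ViT-SR, and phase reconstruction is left out.

```python
import numpy as np

def band_split(signal, sr=16000, n_fft=512, cutoff_hz=4000):
    """Sketch of the bandwidth-extension setup: split a magnitude spectrogram into
    the observed low band and the missing high band that a generator must predict."""
    hop = n_fft // 2
    frames = [signal[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(signal) - n_fft, hop)]
    spec = np.abs(np.fft.rfft(np.stack(frames), axis=1))   # (frames, n_fft//2 + 1)
    cutoff_bin = int(cutoff_hz / (sr / n_fft))
    low_band, high_band = spec[:, :cutoff_bin], spec[:, cutoff_bin:]
    return low_band, high_band

sr = 16000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 6000 * t)
low, high = band_split(audio, sr)
print(low.shape, high.shape)   # the model learns low -> high; phase is handled separately
```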

Citations: 0
How can text mining improve the explainability of Food security situations?
IF 3.4, CAS Tier 3 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2023-12-11. DOI: 10.1007/s10844-023-00832-x
Hugo Deléglise, Agnès Bégué, Roberto Interdonato, Elodie Maître d’Hôtel, Mathieu Roche, Maguelonne Teisseire

Food Security (FS) is a major concern in West Africa, particularly in Burkina Faso, which has been the epicenter of a humanitarian crisis since the beginning of this century. Early warning systems for FS and famines rely mainly on numerical data for their analyses, whereas textual data, which are more complex to process, are rarely used. However, these data are easy to access and represent a source of relevant information complementary to commonly used data sources. This study explores methods for obtaining the explanatory context associated with FS from textual data. Based on a corpus of local newspaper articles, we analyze FS over the last ten years in Burkina Faso. We propose an original and dedicated pipeline that combines different textual analysis approaches to obtain an explanatory model evaluated on real-world and large-scale data. Our analyses show that this approach provides distinct and complementary qualitative information on food security and its spatial and temporal characteristics.
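A toy version of the text-mining pipeline: count occurrences of food-security-related terms in each article and fit a linear model against a food-insecurity indicator, so that the signed coefficients serve as explanatory context. The articles, vocabulary, and indicator values are invented placeholders, and the paper's actual pipeline combines several textual analysis approaches rather than a single regression.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LinearRegression

# Toy corpus: one document per (region, month); the FS indicator values and the
# vocabulary below are placeholders for illustration only.
articles = [
    "drought destroys millet harvest, cereal prices rise sharply",
    "good rainfall, markets well supplied with maize and millet",
    "conflict displaces farmers, food aid distribution delayed",
    "stable prices and normal harvest reported across provinces",
]
fs_indicator = [3.1, 1.2, 3.6, 1.0]    # e.g. a food-insecurity severity score

vectorizer = CountVectorizer(vocabulary=["drought", "conflict", "prices",
                                         "harvest", "rainfall", "aid"])
X = vectorizer.fit_transform(articles)

model = LinearRegression().fit(X.toarray(), fs_indicator)
for term, coef in zip(vectorizer.get_feature_names_out(), model.coef_):
    print(f"{term:>10}: {coef:+.2f}")   # signed weights act as explanatory context
```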

Citations: 0