Pub Date: 2024-05-16, DOI: 10.1016/j.datak.2024.102315
Gabriele Scaffidi Militone, Daniele Apiletti, Giovanni Malnati
In many contexts where data is streamed on a large scale, such as video surveillance systems, there is a dual requirement: secure data storage and continuous access to audio and video content by third parties, such as human operators or specific business logic, even while the media files are still being collected. However, using transactions to ensure data persistence often limits system throughput and increases latency. This paper presents a solution that enables both high ingestion rates with transactional data persistence and near real-time, low-latency access to the stream during collection. This immediate access enables the prompt application of specialized data engineering algorithms during data acquisition. The proposed solution is particularly suitable for binary data sources such as audio and video recordings in surveillance systems, and it can be extended to various big data scenarios via well-defined general interfaces. The scalability of the approach is based on a microservice architecture. Preliminary results obtained with Apache Kafka and MongoDB replica sets show that the proposed solution provides up to 3 times higher throughput and 2.2 times lower latency compared to standard multi-document transactions.
{"title":"Hermes, a low-latency transactional storage for binary data streams from remote devices","authors":"Gabriele Scaffidi Militone, Daniele Apiletti, Giovanni Malnati","doi":"10.1016/j.datak.2024.102315","DOIUrl":"10.1016/j.datak.2024.102315","url":null,"abstract":"<div><p>In many contexts where data is streamed on a large scale, such as video surveillance systems, there is a dual requirement: secure data storage and continuous access to audio and video content by third parties, such as human operators or specific business logic, even while the media files are still being collected. However, using transactions to ensure data persistence often limits system throughput and latency. This paper presents a solution that enables both high ingestion rates with transactional data persistence and near real-time, low-latency access to the stream during collection. This immediate access enables the prompt application of specialized data engineering algorithms during data acquisition. The proposed solution is particularly suitable for binary data sources such as audio and video recordings in surveillance systems, and it can be extended to various big data scenarios via well-defined general interfaces. The scalability of the approach is based on the microservice architecture. Preliminary results obtained with Apache Kafka and MongoDB replica sets show that the proposed solution provides up to 3 times higher throughput and 2.2 times lower latency compared to standard multi-document transactions.</p></div>","PeriodicalId":55184,"journal":{"name":"Data & Knowledge Engineering","volume":"153 ","pages":"Article 102315"},"PeriodicalIF":2.5,"publicationDate":"2024-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141042236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-16, DOI: 10.1016/j.datak.2024.102314
Navreen Kaur Boparai, Himanshu Aggarwal, Rinkle Rani
Hotel reviews play a vital role in tourism recommender systems. They should be analyzed effectively to enhance the accuracy of recommendations, which can be generated either from crisp ratings on a fixed scale or from the real sentiments of reviews. However, crisp ratings cannot represent the actual feelings of reviewers. Existing tourism recommender systems mostly recommend hotels on the basis of vague and sparse ratings, resulting in inaccurate recommendations or preferences for online users. This paper presents a semantic approach to analyze online reviews crawled from tripadvisor.in. It discovers the underlying fuzzy semantics of reviews with respect to the multiple criteria of hotels rather than using crisp ratings. The crawled reviews are preprocessed via data cleaning steps such as stopword and punctuation removal, tokenization, lemmatization, and POS tagging to capture the semantics efficiently. Nouns representing frequent hotel features are extracted from the pre-processed reviews and are further used to identify opinion phrases. Fuzzy weights are derived from the normalized frequency of the frequent nouns and combined with the sentiment scores of all the synonyms of the adjectives in the identified opinion phrases. The result is a set of fuzzy semantics that forms an ideal representation of reviews for a multi-criteria tourism recommender system. The proposed work is implemented in Python by crawling recent reviews of Jaipur hotels from TripAdvisor and analyzing their semantics. The resulting fuzzy semantics form a manually tagged dataset of reviews annotated with the sentiments of the identified aspects. Experimental results show an improved sentiment score when all the synonyms of adjectives are considered. The results are further used to fine-tune BERT models to form encodings for a query-based recommender system. The proposed approach can help tourism and hospitality service providers take advantage of such sentiment analysis to examine negative comments or unpleasant experiences of tourists and make appropriate improvements. Moreover, it will help online users get better recommendations while planning their trips.
Title: Analyzing fuzzy semantics of reviews for multi-criteria recommendations (Data & Knowledge Engineering, vol. 152, Article 102314)
Pub Date: 2024-05-11, DOI: 10.1016/j.datak.2024.102310
Corentin Burnay, Mathieu Lega, Sarah Bouraga
Decision makers in organizations strive to improve the quality of their decisions. One way to improve that process is to objectify decisions with facts. Data-driven Decision Support Systems (data-driven DSS), and more specifically business intelligence (BI), intend to achieve this. Organizations invest massively in the development of BI data-driven DSS and expect them to be adopted and to effectively support decision makers. This raises many technical and methodological challenges, especially regarding the design of BI dashboards, which can be seen as the visible tip of the BI data-driven DSS iceberg and which play a major role in the adoption of the entire system. In this paper, dashboard content is investigated as one possible root cause of BI data-driven DSS dashboard adoption or rejection through early empirical research. More precisely, this work is composed of three parts. In the first part, the concept of cognitive load is studied in the context of BI dashboards, and the informational, representational, and non-informational loads are introduced. In the second part, the effects of these loads on the adoption of BI dashboards are studied through an experiment with 167 respondents and a Structural Equation Modeling (SEM) analysis. The result is a Dashboard Adoption Model, enriching the seminal Technology Acceptance Model with new content-oriented variables to support the design of more supportive BI data-driven DSS dashboards. Finally, in the third part, a set of indicators is proposed to help dashboard designers monitor the loads of their dashboards in practice.
{"title":"Business intelligence and cognitive loads: Proposition of a dashboard adoption model","authors":"Corentin Burnay, Mathieu Lega, Sarah Bouraga","doi":"10.1016/j.datak.2024.102310","DOIUrl":"https://doi.org/10.1016/j.datak.2024.102310","url":null,"abstract":"<div><p>Decision makers in organizations strive to improve the quality of their decisions. One way to improve that process is to objectify the decisions with facts. Data-driven Decision Support Systems (data-driven DSS), and more specifically business intelligence (BI) intend to achieve this. Organizations invest massively in the development of BI data-driven DSS and expect them to be adopted and to effectively support decision makers. This raises many technical and methodological challenges, especially regarding the design of BI dashboards, which can be seen as the visible tip of the BI data-driven DSS iceberg and which play a major role in the adoption of the entire system. In this paper, the dashboard content is investigated as one possible root cause for BI data-driven DSS dashboard adoption or rejection through an early empirical research. More precisely, this work is composed of three parts. In the first part, the concept of cognitive loads is studied in the context of BI dashboards and the informational, the representational and the non-informational loads are introduced. In the second part, the effects of these loads on the adoption of BI dashboards are then studied through an experiment with 167 respondents and a Structural Equation Modeling (SEM) analysis. The result is a Dashboard Adoption Model, enriching the seminal Technology Acceptance Model with new content-oriented variables to support the design of more supportive BI data-driven DSS dashboards. Finally, in the third part, a set of indicators is proposed to help dashboards designers in the monitoring of the loads of their dashboards practically.</p></div>","PeriodicalId":55184,"journal":{"name":"Data & Knowledge Engineering","volume":"152 ","pages":"Article 102310"},"PeriodicalIF":2.5,"publicationDate":"2024-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140951807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-08, DOI: 10.1016/j.datak.2024.102303
Thibault Falque , Bertrand Mazure , Karim Tabia
Punctuality is a sensitive issue in large airports and hubs, both for passenger experience and for controlling operational costs. This paper presents a real and challenging problem: predicting and explaining flight off-block delays. We study the case of the Paris Charles de Gaulle international airport (Paris-CDG), starting from the specificities of this problem at Paris-CDG, then proposing models and solutions, and finally analyzing the results on real data covering an entire year of activity. The proof of concept provided in this paper leads us to believe that the proposed approach could help improve the management of delays and reduce the impact of the resulting consequences.
{"title":"Machine learning for predicting off-block delays: A case study at Paris — Charles de Gaulle International Airport","authors":"Thibault Falque , Bertrand Mazure , Karim Tabia","doi":"10.1016/j.datak.2024.102303","DOIUrl":"10.1016/j.datak.2024.102303","url":null,"abstract":"<div><p>Punctuality is a sensitive issue in large airports and hubs for passenger experience and for controlling operational costs. This paper presents a real and challenging problem of predicting and explaining flight off-block delays. We study the case of the international airport Paris Charles de Gaulle (Paris-CDG) starting from the specificities of this problem at Paris-CDG until the proposal of modelings then solutions and the analysis of the results on real data covering an entire year of activity. The proof of concept provided in this paper allows us to believe that the proposed approach could help improve the management of delays and reduce the impact of the resulting consequences.</p></div>","PeriodicalId":55184,"journal":{"name":"Data & Knowledge Engineering","volume":"152 ","pages":"Article 102303"},"PeriodicalIF":2.5,"publicationDate":"2024-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0169023X24000272/pdfft?md5=ff8c7468240914b3ce61469a0954468c&pid=1-s2.0-S0169023X24000272-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141043841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-06, DOI: 10.1016/j.datak.2024.102313
Adel Remadi, Karim El Hage, Yasmina Hobeika, Francesca Bugiotti
Manually integrating data of diverse formats and languages is vital to many artificial intelligence applications. However, the task itself remains challenging and time-consuming. This paper highlights the potential of Large Language Models (LLMs) to streamline data extraction and resolution processes. Our approach aims to address the ongoing challenge of integrating heterogeneous data sources, encouraging advancements in the field of data engineering. Applied to the specific use case of learning disorders in higher education, our research demonstrates LLMs' capability to effectively extract data from unstructured sources. We further highlight that LLMs can enhance data integration by resolving entities originating from multiple data sources. Crucially, the paper underscores the necessity of preliminary data modeling decisions to ensure the success of such technological applications. By merging human expertise with LLM-driven automation, this study advocates for the further exploration of semi-autonomous data engineering pipelines.
Title: To prompt or not to prompt: Navigating the use of Large Language Models for integrating and modeling heterogeneous data (Data & Knowledge Engineering, vol. 152, Article 102313)
Pub Date: 2024-05-03, DOI: 10.1016/j.datak.2024.102309
Milos Chaloupka, Martin Necasky
RDF is a popular and well-documented format for publishing structured data on the web. It enables data to be consumed without knowledge of how the data is internally stored. There are already several native RDF storage solutions that provide a SPARQL endpoint, a web service for querying RDF data with SPARQL. However, native RDF stores are not widely adopted; it is still more common to store data in a relational database. To provide a SPARQL endpoint on top of prevalent relational databases as well, solutions for virtual SPARQL endpoints over a relational database have appeared. To benchmark such solutions, a state-of-the-art tool, the Berlin SPARQL Benchmark (BSBM), is used. However, BSBM was designed primarily to benchmark native RDF stores. Although it can also be used to benchmark virtual SPARQL endpoints, each implementation uses the tool differently for evaluation, so the evaluations are not consistent and therefore hardly comparable. In this paper, we demonstrate how this well-defined benchmarking tool for SPARQL endpoints can be used to evaluate virtual endpoints over relational databases, perform the evaluation on the available implementations, and provide instructions on how to repeat the same evaluation in the future.
{"title":"Using Berlin SPARQL benchmark to evaluate virtual SPARQL endpoints over relational databases","authors":"Milos Chaloupka, Martin Necasky","doi":"10.1016/j.datak.2024.102309","DOIUrl":"https://doi.org/10.1016/j.datak.2024.102309","url":null,"abstract":"<div><p>The RDF is a popular and well-documented format for publishing structured data on the web. It enables data to be consumed without the knowledge of how the data is internally stored. There are already several native RDF storage solutions that provide a SPARQL endpoint. However, native RDF stores are not widely adopted. It is still more common to store data in a relational database. One of the useful features of native RDF storage solutions is providing a SPARQL endpoint, a web service to query RDF data with SPARQL. To provide this feature also on top of prevalent relational databases, solutions for virtual SPARQL endpoints on top of a relational database have appeared. To benchmark these solutions, a state-of-the-art tool, the Berlin SPARQL Benchmark (BSBM), is used. However, BSBM was designed primarily to benchmark native RDF stores. It can also be used to benchmark solutions for virtual SPARQL endpoints. However, since BSBM was not designed for virtual SPARQL endpoints, each implementation uses that tool differently for evaluation. As a result, the evaluation is not consistent and therefore hardly comparable. In this paper, we demonstrate how this well-defined benchmarking tool for SPARQL endpoints can be used to evaluate virtual endpoints over relational databases, perform the evaluation on the available implementations, and provide instructions on how to repeat the same evaluation in the future.</p></div>","PeriodicalId":55184,"journal":{"name":"Data & Knowledge Engineering","volume":"152 ","pages":"Article 102309"},"PeriodicalIF":2.5,"publicationDate":"2024-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140905621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-01, DOI: 10.1016/j.datak.2024.102306
Saman Jamshidi , Mahin Mohammadi , Saeed Bagheri , Hamid Esmaeili Najafabadi , Alireza Rezvanian , Mehdi Gheisari , Mustafa Ghaderzadeh , Amir Shahab Shahabi , Zongda Wu
Text classification plays a critical role in managing large volumes of electronically produced texts. As the number of such texts increases, manual analysis becomes impractical, necessitating an intelligent approach to processing information. Deep learning models have seen widespread application in text classification, including recurrent neural networks such as Many-to-One Long Short-Term Memory (MTO LSTM). Nonetheless, this model is limited by its reliance on only the last token for text labelling. To overcome this limitation, this study introduces a novel hybrid model that combines Bidirectional Encoder Representations from Transformers (BERT), Many-to-Many Long Short-Term Memory (MTM LSTM), and Decision Templates (DT) for text classification. In this new model, the text is first embedded using the BERT model and then trained using MTM LSTM to approximate the target at each token. Finally, the approximations are fused using DT. The proposed model is evaluated on the well-known IMDB movie review dataset for binary classification and the Drug Review dataset for multiclass classification. The results demonstrate superior performance in terms of accuracy, recall, precision, and F1 score compared to previous models. The hybrid model presented in this study holds significant potential for a wide range of text classification tasks and stands as a valuable contribution to the field.
{"title":"Effective text classification using BERT, MTM LSTM, and DT","authors":"Saman Jamshidi , Mahin Mohammadi , Saeed Bagheri , Hamid Esmaeili Najafabadi , Alireza Rezvanian , Mehdi Gheisari , Mustafa Ghaderzadeh , Amir Shahab Shahabi , Zongda Wu","doi":"10.1016/j.datak.2024.102306","DOIUrl":"https://doi.org/10.1016/j.datak.2024.102306","url":null,"abstract":"<div><p>Text classification plays a critical role in managing large volumes of electronically produced texts. As the number of such texts increases, manual analysis becomes impractical, necessitating an intelligent approach for processing information. Deep learning models have witnessed widespread application in text classification, including the use of recurrent neural networks like Many to One Long Short-Term Memory (MTO LSTM). Nonetheless, this model is limited by its reliance on only the last token for text labelling. To overcome this limitation, this study introduces a novel hybrid model that combines Bidirectional Encoder Representations from Transformers (BERT), Many To Many Long Short-Term Memory (MTM LSTM), and Decision Templates (DT) for text classification. In this new model, the text is first embedded using the BERT model and then trained using MTM LSTM to approximate the target at each token. Finally, the approximations are fused using DT. The proposed model is evaluated using the well-known IMDB movie review dataset for binary classification and Drug Review Dataset for multiclass classification. The results demonstrate superior performance in terms of accuracy, recall, precision, and F1 score compared to previous models. The hybrid model presented in this study holds significant potential for a wide range of text classification tasks and stands as a valuable contribution to the field.</p></div>","PeriodicalId":55184,"journal":{"name":"Data & Knowledge Engineering","volume":"151 ","pages":"Article 102306"},"PeriodicalIF":2.5,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140825257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-01, DOI: 10.1016/j.datak.2024.102307
José Antonio García-Díaz , Ghassan Beydoun , Rafel Valencia-García
Author profiling consists of extracting authors' demographic and psychographic information by examining their writings. This information can then be used to improve the reader experience and to detect bots or propagators of hoaxes and/or hate speech. Therefore, author profiling can be applied to build more robust and efficient Knowledge-Based Systems for tasks such as content moderation, user profiling, and information retrieval. Author profiling is typically performed automatically as a document classification task. Recently, language models based on transformers have also proven to be quite effective in this task. However, the size and heterogeneity of novel language models make it necessary to evaluate them in context. The contributions we make in this paper are four-fold. First, we evaluate which language models are best suited to perform author profiling in Spanish; these experiments include basic, distilled, and multilingual models. Second, we evaluate how feature integration can improve performance for this task, comparing two distinct strategies: knowledge integration and ensemble learning. Third, we evaluate the ability of linguistic features to improve the interpretability of the results. Fourth, we evaluate the performance of each language model in terms of memory, training, and inference times. Our results indicate that lightweight models can indeed achieve performance similar to heavy models and that multilingual models are actually less effective than models trained on a single language. Finally, we confirm that the best models and strategies for integrating features ultimately depend on the context of the task.
{"title":"Evaluating Transformers and Linguistic Features integration for Author Profiling tasks in Spanish","authors":"José Antonio García-Díaz , Ghassan Beydoun , Rafel Valencia-García","doi":"10.1016/j.datak.2024.102307","DOIUrl":"https://doi.org/10.1016/j.datak.2024.102307","url":null,"abstract":"<div><p>Author profiling consists of extracting their demographic and psychographic information by examining their writings. This information can then be used to improve the reader experience and to detect bots or propagators of hoaxes and/or hate speech. Therefore, author profiling can be applied to build more robust and efficient Knowledge-Based Systems for tasks such as content moderation, user profiling, and information retrieval. Author profiling is typically performed automatically as a document classification task. Recently, language models based on transformers have also proven to be quite effective in this task. However, the size and heterogeneity of novel language models, makes it necessary to evaluate them in context. The contributions we make in this paper are four-fold: First, we evaluate which language models are best suited to perform author profiling in Spanish. These experiments include basic, distilled, and multilingual models. Second, we evaluate how feature integration can improve performance for this task. We evaluate two distinct strategies: knowledge integration and ensemble learning. Third, we evaluate the ability of linguistic features to improve the interpretability of the results. Fourth, we evaluate the performance of each language model in terms of memory, training, and inference times. Our results indicate that the use of lightweight models can indeed achieve similar performance to heavy models and that multilingual models are actually less effective than models trained with one language. Finally, we confirm that the best models and strategies for integrating features ultimately depend on the context of the task.</p></div>","PeriodicalId":55184,"journal":{"name":"Data & Knowledge Engineering","volume":"151 ","pages":"Article 102307"},"PeriodicalIF":2.5,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0169023X24000314/pdfft?md5=42a482dbed2e2a640c46e89a6f3a69c8&pid=1-s2.0-S0169023X24000314-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140825258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-01, DOI: 10.1016/j.datak.2024.102312
R. Rajesh
We observe and analyze the causal relations among risk factors in a system, considering manufacturing supply chains. Seven major categories of risks are identified and scrutinized, and a detailed analysis of their causal relations using the grey influence analysis (GINA) methodology is outlined. Using an expert-response-based survey, we conduct an initial analysis of the risks with risk matrix analysis (RMA) and identify the high-priority risks. GINA is then applied to understand the causal relations among the various categories of risks, which is particularly useful in group decision-making environments. The results from RMA conclude that capacity risks (CR) and delays (DL) fall in the category of very high priority risks. The GINA results ratify the conclusions from RMA and show that managers need to control and manage capacity risks (CR) and delays (DL) with high priority. Additionally, the GINA results indicate that the causal factors disruptions (DS) and forecast risks (FR) are of primary importance and, if unattended, can lead to the initiation of several other risks in supply chains. Managers are recommended to identify disruptions at an early stage in supply chains and to reduce forecast errors in order to avoid bullwhip effects in supply chains.
{"title":"Managerial risk data analytics applications using grey influence analysis (GINA)","authors":"R. Rajesh","doi":"10.1016/j.datak.2024.102312","DOIUrl":"https://doi.org/10.1016/j.datak.2024.102312","url":null,"abstract":"<div><p>We observe and analyze the causal relations among risk factors in a system, considering the manufacturing supply chains. Seven major categories of risks were identified and scrutinized and the detailed analysis of causal relations using the grey influence analysis (GINA) methodology is outlined. With expert response based survey, we conduct an initial analysis of the risks using risk matrix analysis (RMA) and the risks under high priority are identified. Later, the GINA is implemented to understand the causal relations among various categories of risks, which is particularly useful in group decision-making environments. The results from RMA concludes that the <em>capacity risks (CR)</em> and <em>delays (DL)</em> are in the category of very high priority risks. GINA results also ratify the conclusions from RMA and observes that managers need to control and manage <em>capacity risks (CR)</em> and <em>delays (DL)</em> with high priorities. Additionally from the results of GINA, the causal factors <em>disruptions (DS)</em> and <em>forecast risks (FR)</em> appear to be primary importance and if unattended can lead to the initiation of several other risks in supply chains. Managers are recommended to identify disruptions at an early stage in supply chains and reduce the forecast errors to avoid bullwhips in supply chains.</p></div>","PeriodicalId":55184,"journal":{"name":"Data & Knowledge Engineering","volume":"151 ","pages":"Article 102312"},"PeriodicalIF":2.5,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140879377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-30, DOI: 10.1016/j.datak.2024.102308
Ramla Belalta , Mouhoub Belazzoug , Farid Meziane
Disambiguating name mentions in texts is a crucial task in Natural Language Processing, especially in entity linking; the credibility and efficiency of such systems depend largely on this task. For a given named entity mention in a text, there may be many candidate entities in the knowledge base that it could refer to. Therefore, it is very difficult to assign the correct candidate from the whole set of candidate entities for the mention. To solve this problem, collective entity disambiguation is a prominent approach. In this paper, we present a novel algorithm for collective entity disambiguation, called CPSR, which is based on a graph approach and semantic relatedness. A clique partitioning algorithm is used to find the best clique containing a set of candidate entities; these candidate entities provide the answers to the corresponding mentions in the disambiguation process. To evaluate our algorithm, we carried out a series of experiments on seven well-known datasets, namely AIDA/CoNLL2003-TestB, IITB, MSNBC, AQUAINT, ACE2004, Cweb, and Wiki. The Kensho Derived Wikimedia Dataset (KDWD) is used as the knowledge base for our system. The experimental results show that our CPSR algorithm outperforms both the baselines and other well-known state-of-the-art approaches.
{"title":"A graph based named entity disambiguation using clique partitioning and semantic relatedness","authors":"Ramla Belalta , Mouhoub Belazzoug , Farid Meziane","doi":"10.1016/j.datak.2024.102308","DOIUrl":"https://doi.org/10.1016/j.datak.2024.102308","url":null,"abstract":"<div><p>Disambiguating name mentions in texts is a crucial task in Natural Language Processing, especially in entity linking. The credibility and efficiency of such systems depend largely on this task. For a given name entity mention in a text, there are many potential candidate entities that may refer to it in the knowledge base. Therefore, it is very difficult to assign the correct candidate from the whole set of candidate entities of this mention. To solve this problem, collective entity disambiguation is a prominent approach. In this paper, we present a novel algorithm called CPSR for collective entity disambiguation, which is based on a graph approach and semantic relatedness. A clique partitioning algorithm is used to find the best clique that contains a set of candidate entities. These candidate entities provide the answers to the corresponding mentions in the disambiguation process. To evaluate our algorithm, we carried out a series of experiments on seven well-known datasets, namely, AIDA/CoNLL2003-TestB, IITB, MSNBC, AQUAINT, ACE2004, Cweb, and Wiki. The Kensho Derived Wikimedia Dataset (KDWD) is used as the knowledge base for our system. From the experimental results, our CPSR algorithm outperforms both the baselines and other well-known state-of-the-art approaches.</p></div>","PeriodicalId":55184,"journal":{"name":"Data & Knowledge Engineering","volume":"152 ","pages":"Article 102308"},"PeriodicalIF":2.5,"publicationDate":"2024-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140901817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}