Wendenda Nathanael Kabore, Rong-Terng Juang, Hsin-Piao Lin, B. A. Tesfaw, G. B. Tarekegn
In wireless networks, drone base stations (DBSs) offer significant benefits in terms of Quality of Service (QoS) improvement due to their line-of-sight (LoS) transmission capabilities and adaptability. However, LoS links can suffer degradation in complex propagation environments, especially in urban areas with dense structures such as buildings. Reconfigurable intelligent surfaces (RIS) have emerged as a promising technology for enhancing wireless communication networks in various Internet of Things (IoT) applications: by adjusting the amplitude and phase of reflected signals, they improve signal strength and network efficiency. This study proposes a novel approach to enhance communication coverage and throughput for mobile ground users by intelligently leveraging signal reflection from DBSs using a ground-based RIS. We employ Deep Reinforcement Learning (DRL) to optimize both the DBS location and the RIS phase-shifts. Numerical results demonstrate significant improvements in system performance, including communication quality and network throughput, validating the effectiveness of the proposed approach.
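The abstract does not detail the DRL formulation, but the quantity such an agent optimizes can be illustrated with a minimal NumPy sketch (the element count and all channel realizations below are hypothetical): when the RIS phase-shifts co-phase each reflected path with the direct link, the effective channel gain, and hence the user's SNR, rises well above that of random phases.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # number of RIS elements (illustrative)

# Hypothetical Rayleigh-fading channels: DBS->RIS, RIS->user, and the direct DBS->user link.
h_dr = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
h_ru = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
h_d = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)

def effective_gain(theta):
    """Combined channel power gain for RIS phase-shifts theta (radians)."""
    return np.abs(h_d + np.sum(h_dr * np.exp(1j * theta) * h_ru)) ** 2

# Random phases vs. phases that co-phase every reflected path with the direct link.
random_gain = effective_gain(rng.uniform(0, 2 * np.pi, N))
aligned = np.angle(h_d) - np.angle(h_dr * h_ru)
aligned_gain = effective_gain(aligned)
```

In a DRL setting, `effective_gain` (together with the DBS position, which shapes the channels) would feed the reward signal.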
"Optimizing the Deployment of an Aerial Base Station and the Phase-Shift of a Ground Reconfigurable Intelligent Surface for Wireless Communication Systems Using Deep Reinforcement Learning". Information, 2024-07-01. doi:10.3390/info15070386.
Serious games play a key role in the medical field, particularly in enhancing cognitive abilities in the elderly. However, the sensory organs of the elderly decline over time, and the intervention effect of traditional serious games for older adults is therefore limited. The objective of this study is to identify the evolution and current problems of serious game technology for the elderly by using bibliometric analysis. We selected 319 relevant documents from 2013 to 2024 from the Web of Science (WOS) database. This study uses Publish or Perish (Windows GUI Edition) and VOSviewer (1.6.20) for performance analysis and science mapping. We analyze in depth the early trends, emerging technologies, and publication trends, including citations and journals, subject areas, and regional and institutional distributions. We found that serious games for older adults rely heavily on visual presentation, often utilizing screens for screening, rehabilitation, and therapeutic interventions. This may cause further visual impairment in older adults who are experiencing visual decline. In addition, we propose the combination of rich tactile feedback and external devices as one of the effective solutions to the current problems for future research.
Xin Huang, Nazlena Mohamad Ali, S. Sahrani. "Evolution and Future of Serious Game Technology for Older Adults". Information, 2024-07-01. doi:10.3390/info15070385.
Artificial intelligence (AI) has witnessed an exponential increase in use across various applications. Recently, the academic community has started to research and apply new AI-based approaches to traditional software-engineering problems. However, a comprehensive and holistic understanding of the current status of this research is still lacking. To close this gap, synthetic knowledge synthesis was used to map the research landscape of the contemporary literature on the use of AI in software engineering. The synthesis resulted in 15 research categories and 5 themes, namely: natural language processing in software engineering; use of artificial intelligence in the management of the software development life cycle; use of machine learning in fault/defect prediction and effort estimation; employment of deep learning in intelligent software engineering and code management; and mining software repositories to improve software quality. The most productive country was China (n = 2042), followed by the United States (n = 1193), India (n = 934), Germany (n = 445), and Canada (n = 381). A high percentage of papers (47.4%) were funded, showing the strong interest in this research topic. The convergence of AI and software engineering can significantly reduce the required resources, improve quality, enhance the user experience, and improve the well-being of software developers.
Peter Kokol. "The Use of AI in Software Engineering: A Synthetic Knowledge Synthesis of the Recent Research Literature". Information, 2024-06-14. doi:10.3390/info15060354.
Risk assessment is a critical sub-process in information security risk management (ISRM) that is used to identify an organization’s vulnerabilities and threats as well as to evaluate current and planned security controls. Therefore, adequate resources and return on investment should be considered when reviewing assets. However, many existing frameworks lack granular guidelines and mostly operate on qualitative human input and feedback, which increases subjective and unreliable judgment within organizations. Consequently, current risk assessment methods require additional time and cost to test all information security controls thoroughly. The principal aim of this study is to critically review the Information Security Control Prioritization (ISCP) models that improve the Information Security Risk Assessment (ISRA) process, using literature analysis to investigate ISRA’s main problems and challenges. We recommend that designing a streamlined and standardized Information Security Control Prioritization model would greatly reduce the uncertainty, cost, and time associated with the assessment of information security controls, thereby helping organizations prioritize critical controls reliably and more efficiently based on clear and practical guidelines.
Nadher Alsafwani, Y. Fazea, Fuad Alnajjar. "Strategic Approaches in Network Communication and Information Security Risk Assessment". Information, 2024-06-14. doi:10.3390/info15060353.
Jarmila Horváthová, Martina Mokrišová, Alexander Schneider
Diagnosing the financial health of companies and their performance is currently one of the basic questions attracting the attention of researchers and experts in the fields of finance and management. In this study, we focused on proposing models for measuring the financial health and performance of businesses. These models were built for companies doing business within the Slovak construction industry. Construction companies are characterized by higher liquidity and a different capital structure compared to other industries; therefore, simple classifiers are not able to effectively predict their financial health. In this paper, we investigated whether boosting ensembles are a suitable alternative for performance analysis. The result of the research is the finding that deep learning is a suitable approach for measuring the financial health and performance of the analyzed sample of companies. The developed models achieved perfect classification accuracy when using the AdaBoost and Gradient-boosting algorithms. The application of a decision tree as a base learner also proved to be very appropriate. The result is a decision tree with adequate depth and very good interpretability.
"The Application of Machine Learning in Diagnosing the Financial Health and Performance of Companies in the Construction Industry". Information, 2024-06-14. doi:10.3390/info15060355.
Channel estimation accuracy significantly affects the performance of orthogonal frequency-division multiplexing (OFDM) systems. In the literature, there are quite a few channel estimation methods. However, the performances of these methods deteriorate considerably when the wireless channels suffer from nonlinear distortions and interferences. Machine learning (ML) shows great potential for solving nonparametric problems. This paper proposes ML-based channel estimation methods for systems with comb-type pilot patterns and random pilot symbols, such as ATSC 3.0. We compare their performances with conventional channel estimations in ATSC 3.0 systems for linear and nonlinear channel models. We also evaluate the robustness of the ML-based methods against channel model mismatch and signal-to-noise ratio (SNR) mismatch. The results show that the ML-based channel estimations achieve good mean squared error (MSE) performance for linear and nonlinear channels if the channel statistics used for the training stage match those of the deployment stage. Otherwise, the ML estimation models may overfit the training channel, leading to poor deployment performance. Furthermore, the deep neural network (DNN)-based method does not outperform the linear channel estimation methods in nonlinear channels.
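The paper's ML estimators are not specified in the abstract, but the conventional baseline they are compared against, least-squares (LS) estimation at comb-type pilot subcarriers followed by interpolation, can be sketched as follows (subcarrier count, pilot spacing, tap count, and SNR are assumed values):

```python
import numpy as np

rng = np.random.default_rng(1)
K = 256          # OFDM subcarriers (illustrative)
step = 4         # comb-type pilot spacing
pilots = np.arange(0, K, step)

# Illustrative frequency-selective channel: 8 random taps -> frequency response.
taps = (rng.standard_normal(8) + 1j * rng.standard_normal(8)) / np.sqrt(16)
H = np.fft.fft(taps, K)

# Unit-modulus transmitted symbols (random pilot symbols) plus AWGN at ~20 dB SNR.
x = np.exp(1j * rng.uniform(0, 2 * np.pi, K))
snr_db = 20
noise = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) \
    * np.sqrt(10 ** (-snr_db / 10) / 2)
y = H * x + noise

# LS estimate at pilot positions, then linear interpolation of the
# real and imaginary parts across the remaining subcarriers.
H_ls = y[pilots] / x[pilots]
H_hat = np.interp(np.arange(K), pilots, H_ls.real) \
    + 1j * np.interp(np.arange(K), pilots, H_ls.imag)

mse = np.mean(np.abs(H - H_hat) ** 2)
```

The `mse` computed here is the metric used to compare estimators in the study.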
Yu-Sun Liu, Shingchern You, Yu-Chun Lai. "Machine Learning-Based Channel Estimation Techniques for ATSC 3.0". Information, 2024-06-13. doi:10.3390/info15060350.
In the field of visualization, understanding users’ analytical reasoning is important for evaluating the effectiveness of visualization applications. Several studies have been conducted to capture and analyze user interactions to comprehend this reasoning process. However, few have successfully linked these interactions to users’ reasoning processes. This paper introduces an approach that addresses this limitation by correlating semantic user interactions with analysis decisions using an interactive wire transaction analysis system and a visual state transition matrix, both designed as visual analytics applications. The system enables interactive analysis for evaluating financial fraud in wire transactions. It also allows mapping captured user interactions and analytical decisions back onto the visualization to reveal their decision differences. The visual state transition matrix further aids in understanding users’ analytical flows, revealing their decision-making processes. Classification machine learning algorithms are applied to evaluate the effectiveness of our approach in understanding users’ analytical reasoning process by connecting the captured semantic user interactions to their decisions (i.e., suspicious, not suspicious, and inconclusive) on wire transactions. With these algorithms, an average accuracy of 72% is achieved in classifying the semantic user interactions. For classifying individual decisions, the average accuracy is 70%. Notably, the accuracy for classifying ‘inconclusive’ decisions is 83%. Overall, the proposed approach improves the understanding of users’ analytical decisions and provides a robust method for evaluating user interactions in visualization tools.
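As a hedged sketch of the classification step (the actual interaction taxonomy and feature set are not given in the abstract, so the interaction types, session counts, and class profiles below are invented for illustration), each analysis session can be summarized as a count vector over semantic interaction types and classified into one of the three decisions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

# Hypothetical data: each session is a count vector over semantic interaction
# types (e.g. filter, sort, detail-view, compare, annotate, pan), labeled with
# the analyst's decision: 0 = not suspicious, 1 = suspicious, 2 = inconclusive.
n_sessions, n_interaction_types = 300, 6
labels = rng.integers(0, 3, n_sessions)
class_profiles = rng.uniform(1, 10, (3, n_interaction_types))
X = rng.poisson(class_profiles[labels])

# Cross-validated accuracy of mapping interactions to decisions.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, labels, cv=5).mean()
```

Any classifier family could stand in here; the abstract does not name the specific algorithms used.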
D. H. Jeong, Bong-Keun Jeong, Soo Yeon Ji. "Leveraging Machine Learning to Analyze Semantic User Interactions in Visual Analytics". Information, 2024-06-13. doi:10.3390/info15060351.
K. M. R. Cunha, Sand Correa, Fabrizzio Soares, Maria Ribeiro, Waldir Moreira, Raphael Gomes, Leandro A. Freitas, Antonio Oliveira-Jr
Multi-Access Edge Computing (MEC) reduces latency and provides high-bandwidth applications with real-time performance and reliability, supporting new applications and services for present and future Beyond the Fifth Generation (B5G) networks. The Radio Network Information Service (RNIS) plays a crucial role in obtaining information from the Radio Access Network (RAN). With the advent of 5G, the RNIS requires improvements to handle information from the new generations of RAN. In this scenario, improving the RNIS is essential to enable new applications to meet the strict requirements imposed on them. Hence, this work proposes a new RNIS as a service within the MEC framework in B5G networks to improve MEC applications. The service is validated and evaluated, demonstrating the ability to adequately serve a large number of MEC apps (two, four, six, and eight) and from 100 to 2000 User Equipment (UE) devices.
"A Novel Radio Network Information Service (RNIS) to MEC Framework in B5G Networks". Information, 2024-06-13. doi:10.3390/info15060352.
Jing Bai, Junfeng Zhou, Shuotong Chen, Ming Du, Ziyang Chen, Mengtao Min
SimRank is a widely used metric for evaluating vertex similarity based on graph topology, with diverse applications such as large-scale graph mining and natural language processing. The objective of the single-source top-k SimRank query problem is to retrieve the k vertices with the largest SimRank with respect to the source vertex. However, existing algorithms suffer from inefficiency, as they require computing SimRank for all vertices to retrieve the top-k results. To address this issue, we propose an algorithm named HitSim that utilizes a branch-and-bound strategy for the single-source top-k query. HitSim initially partitions vertices into distinct sets based on their shortest-meeting lengths to the source vertex. Subsequently, it computes an upper bound of SimRank for each set. If the upper bound of a set is no larger than the minimum value of the current top-k results, HitSim efficiently batch-prunes the unpromising vertices within the set. However, when the graph becomes dense, certain sets with large upper bounds may contain numerous vertices with small SimRank, leading to redundant overhead when processing these vertices. To address this issue, we propose an optimized algorithm named HitSim-OPT that computes the upper bound of SimRank for each vertex instead of each set, resulting in a fine-grained and efficient pruning process. Experimental results on six real-world datasets demonstrate the performance of our algorithms in efficiently addressing the single-source top-k query problem.
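HitSim's shortest-meeting-length bounds are not reproduced here, but the baseline it improves on, computing SimRank for all vertices and then selecting the top-k, can be sketched on a toy graph (the graph, decay factor C, and iteration count are illustrative):

```python
import numpy as np

def simrank(adj, C=0.6, iters=10):
    """Naive iterative SimRank for a small directed graph.

    adj[u][v] = 1 means an edge u -> v; in-neighbors of v are {u : adj[u][v]}.
    s(a, a) = 1; s(a, b) = C / (|I(a)||I(b)|) * sum over in-neighbor pairs.
    """
    n = len(adj)
    in_nbrs = [[u for u in range(n) if adj[u][v]] for v in range(n)]
    S = np.eye(n)
    for _ in range(iters):
        S_new = np.eye(n)
        for a in range(n):
            for b in range(n):
                if a != b and in_nbrs[a] and in_nbrs[b]:
                    total = sum(S[i][j] for i in in_nbrs[a] for j in in_nbrs[b])
                    S_new[a][b] = C * total / (len(in_nbrs[a]) * len(in_nbrs[b]))
        S = S_new
    return S

# Tiny example graph and a single-source top-k query from vertex 0.
adj = [[0, 1, 1, 0],
       [0, 0, 1, 1],
       [1, 0, 0, 1],
       [0, 0, 0, 0]]
S = simrank(adj)
k = 2
top_k = sorted((v for v in range(4) if v != 0), key=lambda v: -S[0][v])[:k]
```

This all-pairs computation is exactly the cost HitSim avoids by bounding whole candidate sets and batch-pruning those that cannot enter the top-k.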
"HitSim: An Efficient Algorithm for Single-Source and Top-k SimRank Computation". Information, 2024-06-12. doi:10.3390/info15060348.
The use of large language models (LLMs) is now spreading in several areas of research and development. This work is concerned with systematically reviewing LLMs’ involvement in engineering education. Starting from a general research question, two queries were used to select 370 papers from the literature. Filtering them through several inclusion/exclusion criteria led to the selection of 20 papers. These were investigated based on eight dimensions to identify areas of engineering disciplines that involve LLMs, where they are most present, how this involvement takes place, and which LLM-based tools are used, if any. Addressing these key issues allowed three more specific research questions to be answered, offering a clear overview of the current involvement of LLMs in engineering education. The research outcomes provide insights into the potential and challenges of LLMs in transforming engineering education, contributing to its responsible and effective future implementation. This review’s outcomes could help identify the best ways to involve LLMs in engineering education activities and measure their effectiveness as time progresses. For this reason, this study offers suggestions on how to improve activities in engineering education. The systematic review on which this research is based conforms to the inclusion/exclusion criteria and quality assessments established in the current literature, in order to make the results as objective as possible and easily replicable.
{"title":"Large Language Models (LLMs) in Engineering Education: A Systematic Review and Suggestions for Practical Adoption","authors":"S. Filippi, Barbara Motyl","doi":"10.3390/info15060345","DOIUrl":"https://doi.org/10.3390/info15060345","url":null,"abstract":"The use of large language models (LLMs) is now spreading in several areas of research and development. This work is concerned with systematically reviewing LLMs’ involvement in engineering education. Starting from a general research question, two queries were used to select 370 papers from the literature. Filtering them through several inclusion/exclusion criteria led to the selection of 20 papers. These were investigated based on eight dimensions to identify areas of engineering disciplines that involve LLMs, where they are most present, how this involvement takes place, and which LLM-based tools are used, if any. Addressing these key issues allowed three more specific research questions to be answered, offering a clear overview of the current involvement of LLMs in engineering education. The research outcomes provide insights into the potential and challenges of LLMs in transforming engineering education, contributing to its responsible and effective future implementation. This review’s outcomes could help identify the best ways to involve LLMs in engineering education activities and measure their effectiveness as time progresses. For this reason, this study offers suggestions on how to improve activities in engineering education. 
The systematic review on which this research is based conforms to the inclusion/exclusion criteria and quality assessments established in the current literature, in order to make the results as objective as possible and easily replicable.","PeriodicalId":510156,"journal":{"name":"Information","volume":"15 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141354030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}