L. Nadal, Mumtaz Ali, F. J. Vílchez, J. Fabrega, M. Svaluto Moreolo
In the last 15 years, global data traffic has been doubling approximately every 2–3 years, and there is a strong indication that this pattern will persist. Hence, also driven by the emergence of new applications and services expected within the 6G era, new transmission systems and technologies should be investigated to enhance network capacity and achieve increased bandwidth, improved spectral efficiency, and greater flexibility to effectively accommodate all the expected data traffic. In this paper, an innovative transmission solution based on a multiband (MB) over spatial division multiplexing (SDM) sliceable bandwidth/bitrate variable transceiver (S-BVT) is implemented and assessed with respect to providing sustainable capacity scaling. MB transmission (S+C+L) over 25.4 km of 19-core multicore fibre (MCF) is experimentally assessed and demonstrated, achieving an aggregated capacity of 119.1 Gb/s at a 4.62 × 10⁻³ bit error rate (BER). The proposed modular sliceable transceiver architecture arises as a suitable option towards achieving 500 Tb/s per-fibre transmission, by further enabling more slices covering all the available S+C+L spectrum and the 19 cores of the MCF.
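The scaling argument is multiplicative: aggregate capacity is the product of the rate per slice, the number of slices per core, and the core count. A rough illustration (the per-slice rate and slice count below are assumptions for illustration, not figures from the paper):

```python
def aggregate_capacity_gbps(gbps_per_slice, n_slices, n_cores):
    """Aggregate fibre capacity when every core carries the same set of slices."""
    return gbps_per_slice * n_slices * n_cores

# Hypothetical operating point: ~100 Gb/s per slice, 264 slices spread
# across the S+C+L bands, all 19 cores of the MCF -> just over the
# 500 Tb/s (500,000 Gb/s) per-fibre target.
target_gbps = aggregate_capacity_gbps(100.0, 264, 19)
```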
"The Multiband over Spatial Division Multiplexing Sliceable Transceiver for Future Optical Networks." Future Internet, 2023-11-27. DOI: 10.3390/fi15120381.
Extended Reality (XR) is an emerging technology that enables enhanced interaction between the real world and virtual environments. In this study, we conduct a scoping review of XR engines for developing gamified apps and serious games. Our study revolves around four aspects: (1) existing XR game engines, (2) their primary features, (3) supported serious game attributes, and (4) supported learning activities. We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) model to conduct the scoping review, which included 40 primary studies published between 2019 and 2023. Our findings help us understand how current XR engines support the development of XR-enriched serious games and gamified apps for specific learning activities. Additionally, based on our findings, we suggest a set of pre-established game attributes that could be commonly supported by all XR game engines across the different game categories proposed by Lameras. Hence, this scoping review can help developers (1) select important game attributes for their new games and (2) choose the game engine that provides the most support to these attributes.
Humberto Marín-Vega, G. Alor-Hernández, Maritza Bustos-López, Ignacio López-Martínez, Norma Leticia Hernández-Chaparro. "Extended Reality (XR) Engines for Developing Gamified Apps and Serious Games: A Scoping Review." Future Internet, 2023-11-27. DOI: 10.3390/fi15120379.
Oana Marin, T. Cioara, Liana Toderean, D. Mitrea, I. Anghel
Blockchain and tokens are relatively new research areas that remain insufficiently explored from both technical and economic perspectives. Even though tokens provide benefits such as easier market access, increased liquidity, lower transaction costs, and automated transaction processing, their valuation and price determination are still challenging due to factors such as a lack of intrinsic value, volatility, and regulation that makes trading risky. In this paper, we address this knowledge gap by reviewing the existing literature on token creation and valuation to identify and document the factors affecting their valuation, investment, and creation, as well as the most promising domains of applicability. The study follows the PRISMA methodology and uses the Web of Science database, defining clear research questions and objective inclusion criteria for the articles. We discuss the technical development of tokens, including creating, issuing, and managing tokens on an Ethereum blockchain using smart contracts. The study revealed several key factors that significantly impact the field of tokenomics: demand and supply, social incentives, market conditions, macroeconomics, collective behavior, speculation, and inclusion in index funds. The most relevant use cases of blockchain and tokens are related to the digitization of virtual and physical assets; accountability and traceability, as are usual in smart grids or supply chain management; social governance; and art and gamification, including the metaverse.
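The core semantics of issuing and managing a token on Ethereum (balances plus guarded transfers) can be sketched in a few lines. This is a toy Python ledger mimicking ERC-20-style behaviour, not a real smart contract (which would be written in Solidity); all names here are hypothetical:

```python
class SimpleToken:
    """Toy in-memory ledger with ERC-20-like issue/transfer semantics."""

    def __init__(self, issuer, total_supply):
        # The issuer receives the whole supply at creation, as a typical
        # token contract's constructor would do.
        self.balances = {issuer: total_supply}

    def transfer(self, sender, recipient, amount):
        # Reject transfers the sender cannot cover, mirroring the
        # require(balance >= amount) guard in a Solidity contract.
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
```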
"Review of Blockchain Tokens Creation and Valuation." Future Internet, 2023-11-27. DOI: 10.3390/fi15120382.
The study of business process analysis and optimisation has attracted significant scholarly interest in the recent past, due to its integral role in boosting organisational performance. A specific area of focus within this broader research field is process mining (PM). Its purpose is to extract knowledge and insights from event logs maintained by information systems, thereby discovering process models and identifying process-related issues. On the other hand, statistical model checking (SMC) is a verification technique used to analyse and validate properties of stochastic systems; it employs statistical methods and random sampling to estimate the likelihood of a property being satisfied. In a seamless business setting, it is essential to validate and verify process models. The objective of this paper is to apply the SMC technique in process mining for the verification and validation of process models with stochastic behaviour and large state spaces, where probabilistic model checking is not feasible. We propose a novel methodology in this research direction that integrates SMC and PM by formally modelling discovered and replayed process models and applying statistical methods to estimate the results. The methodology facilitates an automated and proficient evaluation of the extent to which a process model aligns with user requirements and assists in selecting the optimal model. We demonstrate the effectiveness of our methodology with a case study of a loan application process performed at a financial institution that handles loan applications submitted by customers. The case study highlights our methodology’s capability to identify the performance constraints of various process models and aid enhancement efforts.
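The statistical core of SMC, estimating the probability that a property holds by sampling random traces until a statistical guarantee is met, can be sketched generically. This is a minimal Monte Carlo sketch with a Hoeffding-bound sample size, not the paper's methodology; the loan-process model below is a hypothetical example:

```python
import math
import random

def smc_estimate(simulate_trace, satisfies, eps=0.05, delta=0.01, seed=0):
    """Estimate P(property holds) by sampling traces of a stochastic model.
    The sample size n comes from the Hoeffding bound, so that
    |estimate - p| <= eps with probability >= 1 - delta."""
    rng = random.Random(seed)
    n = math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
    hits = sum(satisfies(simulate_trace(rng)) for _ in range(n))
    return hits / n

# Hypothetical stochastic model: loan-application cycle time, exponential
# with mean 10 days; property = "a decision is reached within 20 days".
p_within_20 = smc_estimate(lambda rng: rng.expovariate(1 / 10),
                           lambda t: t <= 20)
```

The true probability here is 1 − e⁻² ≈ 0.86, and the estimate lands within the chosen ±0.05 tolerance with high probability.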
Fawad Ali Mangi, Guoxin Su, Minjie Zhang. "Statistical Model Checking in Process Mining: A Comprehensive Approach to Analyse Stochastic Processes." Future Internet, 2023-11-26. DOI: 10.3390/fi15120378.
Noha Hassan, Xavier Fernando, Isaac Woungang, A. Anpalagan
Given the traffic avalanche foreseen for the next decade, solutions supporting energy-efficient, scalable, and flexible network operation are essential. Considering the myriad of use case requirements, the THz and mmW bands will play key roles in 6G networks. While mmW is known for short-range LOS connections, THz transmission is subject to even more severe propagation losses, resulting in very short-range connections. In this context, we evaluate a dynamic multi-band user association algorithm to optimize connectivity in coexisting RF/mmW/THz networks. The algorithm periodically calculates association scores for each user–base station pair based on real-time channel conditions across bands, accounting for factors like signal strength, link blockage risk, and noise. It then reassociates users in batches to balance loads while considering user priorities and network conditions. We simulate the algorithm’s performance within a realistic propagation model, where high path loss, molecular absorption, blockage, and narrow beam widths contribute to lower coverage at higher frequencies. Results demonstrate the algorithm’s ability to efficiently utilize network resources across diverse operating environments. In addition, our results show that the choice of frequency band depends on the specific requirements of the application, the environment, and the trade-offs between coverage distance, capacity, and interference conditions.
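The score-then-associate loop described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the score weights, the greedy (rather than batched, priority-aware) assignment, and all names are assumptions:

```python
def association_score(rsrp_dbm, blockage_prob, noise_dbm, w_block=20.0):
    """Composite score: stronger signal over noise ranks higher, and a
    likely-blocked link is penalised. The weight is illustrative."""
    return (rsrp_dbm - noise_dbm) - w_block * blockage_prob

def associate(users, stations, measure):
    """Greedy association: each user picks the best-scoring (station, band).
    `measure(user, station, band)` returns (rsrp_dbm, blockage_prob, noise_dbm)."""
    assoc = {}
    for u in users:
        best = max(((s, b) for s, bands in stations.items() for b in bands),
                   key=lambda sb: association_score(*measure(u, *sb)))
        assoc[u] = best
    return assoc
```

With a blocked THz link and a clear RF link, the user falls back to RF, which is the qualitative behaviour the evaluation reports at higher frequencies.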
"User Association Performance Trade-Offs in Integrated RF/mmWave/THz Communications." Future Internet, 2023-11-24. DOI: 10.3390/fi15120376.
D. Ngo, Ons Aouedi, Kandaraj Piamrat, Thomas Hassan, Philippe Raipin-Parvédy
As the complexity and scale of modern networks continue to grow, the need for efficient and secure management and optimization becomes increasingly vital. Digital twin (DT) technology has emerged as a promising approach to address these challenges by providing a virtual representation of the physical network, enabling analysis, diagnosis, emulation, and control. The emergence of software-defined networking (SDN) has facilitated a holistic view of the network topology, enabling the use of graph neural networks (GNNs) as a data-driven technique to solve diverse problems in future networks. This survey explores the intersection of GNNs and network digital twins (NDTs), providing an overview of their applications, enabling technologies, challenges, and opportunities. We discuss how GNNs and NDTs can be leveraged to improve network performance, optimize routing, enable network slicing, and enhance security in future networks. Additionally, we highlight certain advantages of incorporating GNNs into NDTs and present two case studies. Finally, we address the key challenges and promising directions in the field, aiming to inspire further advancements and foster innovation in GNN-based NDTs for future networks.
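The basic operation that lets a GNN reason over a network topology is neighbourhood aggregation. A minimal numpy sketch of one GCN-style propagation step (generic textbook form, not any specific scheme from the surveyed works; the tiny graph below is illustrative):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN-style propagation step over adjacency A, node features X,
    and weights W: H = ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalisation
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# Tiny 3-node fully connected topology with one-hot node features.
A = np.ones((3, 3)) - np.eye(3)
H = gcn_layer(A, np.eye(3), np.ones((3, 2)))
```

In an NDT setting, A would come from the SDN controller's topology view and X from per-node telemetry.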
"Empowering Digital Twin for Future Networks with Graph Neural Networks: Overview, Enabling Technologies, Challenges, and Opportunities." Future Internet, 2023-11-24. DOI: 10.3390/fi15120377.
Christopher J Lynch, Erik J. Jensen, Virginia Zamponi, Kevin O’Brien, Erika F. Frydenlund, Ross Gore
Large language models (LLMs) excel at providing natural language responses that sound authoritative, reflect knowledge of the context area, and can present from a range of varied perspectives. Agent-based models and simulations consist of simulated agents that interact within a simulated environment to explore societal, social, ethical, and other problems. Simulated agents generate large volumes of data, and discerning useful and relevant content from them is an onerous task. LLMs can help communicate agents’ perspectives on key life events by providing natural language narratives. However, these narratives should be factual, transparent, and reproducible. Therefore, we present a structured narrative prompt for sending queries to LLMs, experiment with the narrative generation process using OpenAI’s ChatGPT, and assess statistically significant differences across 11 Positive and Negative Affect Schedule (PANAS) sentiment levels between the generated narratives and real tweets using chi-squared tests and Fisher’s exact tests. The narrative prompt structure effectively yields narratives with the desired components from ChatGPT. In four out of forty-four categories, ChatGPT generated narratives whose sentiment scores were not discernibly different, in terms of statistical significance (alpha level α = 0.05), from the sentiment expressed in real tweets. Three outcomes are provided: (1) a list of benefits and challenges for LLMs in narrative generation; (2) a structured prompt for requesting narratives from an LLM chatbot based on simulated agents’ information; and (3) an assessment of statistical significance in the sentiment prevalence of the generated narratives compared to real tweets. These results indicate significant promise in utilizing LLMs to help connect a simulated agent’s experiences with real people.
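The chi-squared comparison amounts to building a 2 × k contingency table of sentiment-level counts (generated narratives vs. real tweets) and computing the Pearson statistic. A self-contained sketch (illustrative counts, not the study's data; a real analysis would also derive p-values, e.g. via scipy):

```python
def chi2_statistic(obs_a, obs_b):
    """Pearson chi-squared statistic for a 2 x k contingency table built
    from two count vectors, e.g. per-PANAS-level counts for generated
    narratives (obs_a) vs. real tweets (obs_b)."""
    n_a, n_b = sum(obs_a), sum(obs_b)
    n = n_a + n_b
    stat = 0.0
    for j in range(len(obs_a)):
        col = obs_a[j] + obs_b[j]
        for row_total, observed in ((n_a, obs_a[j]), (n_b, obs_b[j])):
            expected = row_total * col / n
            stat += (observed - expected) ** 2 / expected
    return stat
```

Identical sentiment distributions give a statistic of zero; the larger the statistic relative to the chi-squared critical value at α = 0.05, the stronger the evidence that the two sources differ.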
"A Structured Narrative Prompt for Prompting Narratives from Large Language Models: Sentiment Assessment of ChatGPT-Generated Narratives and Real Tweets." Future Internet, 2023-11-23. DOI: 10.3390/fi15120375.
Karthikeyan Saminathan, Sai Tharun Reddy Mulka, Sangeetha Damodharan, Rajagopal Maheswar, J. Lorincz
The COVID-19 pandemic forced organizations and enterprises to work on cloud platforms from home, which greatly facilitates cyberattacks. Employees who work remotely on cloud-based platforms are chosen as targets for cyberattacks. For that reason, cyber security has become a growing concern: it is now incorporated into almost every smart gadget and has become a prerequisite in every software product and service. There are various mitigations for external cyber security attacks, but hardly any for insider security threats, as these are difficult to detect and mitigate. Thus, insider cyber security threat detection has become a serious concern in recent years. Hence, this paper proposes an unsupervised deep learning approach that employs an artificial neural network (ANN)-based autoencoder to detect anomalies in an insider cyber security attack scenario. The proposed approach analyzes the behavioral patterns of users and machines for anomalies and sends an alert when a set security threshold is exceeded. The threshold value for security detection is calculated from the reconstruction errors obtained by testing on normal data. When the model reconstructs a user’s behavior with a reconstruction error no greater than the threshold, the user is flagged as normal; otherwise, the user is flagged as a security intruder. The proposed approach performed well, with an accuracy of 94.3% for security threat detection, a false positive rate of 11.1%, and a precision of 89.1%. From the experimental results, it was found that the proposed method for insider security threat detection outperforms existing methods in terms of reliability, due to the ANN-based autoencoder’s use of a larger number of features in the security threat detection process.
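The fit-threshold-flag pipeline can be sketched compactly. Here a rank-k linear reconstruction (PCA) stands in for the paper's ANN autoencoder, and the mean + z·std threshold rule is an assumption; only the overall logic (learn normal behaviour, flag large reconstruction errors) mirrors the described approach:

```python
import numpy as np

def fit_detector(normal, k=2, z=3.0):
    """Fit a rank-k linear reconstruction model on normal behaviour data
    (rows = observations, columns = behavioural features) and derive the
    alert threshold from its reconstruction errors on that same data."""
    mu = normal.mean(axis=0)
    _, _, Vt = np.linalg.svd(normal - mu, full_matrices=False)
    V = Vt[:k].T                                  # top-k components

    def recon_error(X):
        Z = (X - mu) @ V                          # "encode"
        return np.linalg.norm((X - mu) - Z @ V.T, axis=1)  # "decode" + error

    errs = recon_error(normal)
    threshold = errs.mean() + z * errs.std()      # assumed threshold rule
    return recon_error, threshold

# Normal behaviour lies near a 2D subspace of a 3D feature space.
rng = np.random.default_rng(0)
normal = rng.normal(size=(200, 3)) * np.array([1.0, 1.0, 0.01])
recon_error, thr = fit_detector(normal)
```

A point far off the learned subspace reconstructs poorly and is flagged; points consistent with normal behaviour are not.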
"An Artificial Neural Network Autoencoder for Insider Cyber Security Threat Detection." Future Internet, 2023-11-23. DOI: 10.3390/fi15120373.
Minghui Sha, Dewu Wang, Fei Meng, Wenyan Wang, Yu Han
With the increasing complexity of radar jamming threats, accurate and automatic jamming recognition is essential but remains challenging. Conventional algorithms often suffer from sharply decreased recognition accuracy under low jamming-to-noise ratios (JNR). Artificial intelligence-based jamming signal recognition is currently the main research direction for this issue. This paper proposes a new radar jamming recognition framework called Diff-SwinT. Firstly, the time-frequency representations of jamming signals are generated using the Choi-Williams distribution. Then, a diffusion model with a U-Net backbone is trained by adding Gaussian noise in the forward process and reconstructing in the reverse process, yielding an inverse diffusion model with denoising capability. Next, a Swin Transformer extracts hierarchical multi-scale features from the denoised time-frequency plots, and the features are fed into linear layers for classification. Experiments show that, compared to using a Swin Transformer alone, the proposed framework improves overall accuracy by 15% to 10% as the JNR increases from −16 dB to −8 dB, demonstrating the efficacy of diffusion-based denoising in enhancing model robustness. Compared to VGG-based and feature-fusion-based recognition methods, the proposed framework has an overall accuracy advantage of over 27% at JNRs from −16 dB to −8 dB. This integrated approach significantly enhances intelligent radar jamming recognition capability in complex environments.
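The forward diffusion process mentioned in the abstract has a standard closed form: noise is injected according to a variance schedule, and the reverse (denoising) model is trained to undo it. The sketch below illustrates only the forward step on a toy "time-frequency plot", under assumed values (a linear beta schedule of 1000 steps); the paper's actual schedule and U-Net denoiser are not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative linear variance schedule (assumed, not from the paper).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)  # cumulative signal-retention factor

def forward_diffuse(x0, t):
    """Closed-form forward process: x_t = sqrt(a_bar_t)*x0 + sqrt(1-a_bar_t)*eps."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

# A toy stand-in for a Choi-Williams time-frequency representation.
x0 = np.ones((32, 32))
x_small_t = forward_diffuse(x0, 10)   # early step: mostly signal
x_large_t = forward_diffuse(x0, 999)  # late step: almost pure noise

# The signal coefficient shrinks monotonically as t grows.
print(np.sqrt(alpha_bars[10]), np.sqrt(alpha_bars[999]))
```

Training the reverse model then amounts to predicting `eps` from `x_t` and `t`; at inference time, running that model on a noisy time-frequency plot yields the denoised input passed to the Swin Transformer.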
{"title":"Diff-SwinT: An Integrated Framework of Diffusion Model and Swin Transformer for Radar Jamming Recognition","authors":"Minghui Sha, Dewu Wang, Fei Meng, Wenyan Wang, Yu Han","doi":"10.3390/fi15120374","DOIUrl":"https://doi.org/10.3390/fi15120374","url":null,"abstract":"With the increasing complexity of radar jamming threats, accurate and automatic jamming recognition is essential but remains challenging. Conventional algorithms often suffer from sharply decreased recognition accuracy under low jamming-to-noise ratios (JNR).Artificial intelligence-based jamming signal recognition is currently the main research directions for this issue. This paper proposes a new radar jamming recognition framework called Diff-SwinT. Firstly, the time-frequency representations of jamming signals are generated using Choi-Williams distribution. Then, a diffusion model with U-Net backbone is trained by adding Gaussian noise in the forward process and reconstructing in the reverse process, obtaining an inverse diffusion model with denoising capability. Next, Swin Transformer extracts hierarchical multi-scale features from the denoised time-frequency plots, and the features are fed into linear layers for classification. Experiments show that compared to using Swin Transformer, the proposed framework improves overall accuracy by 15% to 10% at JNR from −16 dB to −8 dB, demonstrating the efficacy of diffusion-based denoising in enhancing model robustness. Compared to VGG-based and feature-fusion-based recognition methods, the proposed framework has over 27% overall accuracy advantage under JNR from −16 dB to −8 dB. 
This integrated approach significantly enhances intelligent radar jamming recognition capability in complex environments.","PeriodicalId":37982,"journal":{"name":"Future Internet","volume":"16 5","pages":""},"PeriodicalIF":3.4,"publicationDate":"2023-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139244107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mohaimenul Azam Khan Raiaan, Nur Mohammad Fahad, Shovan Chowdhury, Debopom Sutradhar, Saadman Sakib Mihad, Md. Motaharul Islam
Significant threats to ecological equilibrium and sustainable agriculture are posed by the extinction of animal species and the subsequent effects on farms. Farmers face difficult decisions, such as installing electric fences to protect their farms, although these measures can harm animals essential for maintaining ecological equilibrium. To tackle these issues, our research introduces an innovative solution in the form of an object-detection system. In this research, we designed and implemented a system that leverages the ESP32-CAM platform in conjunction with the YOLOv8 object-detection model. Our proposed system aims to identify endangered species and harmful animals within farming environments, providing real-time alerts to farmers and endangered wildlife by integrating a cloud-based alert system. To train the YOLOv8 model effectively, we meticulously compiled diverse image datasets featuring these animals in agricultural settings, subsequently annotating them. After that, we tuned the hyperparameters of the YOLOv8 model to enhance its performance. The results from our optimized YOLOv8 model are promising. It achieves a remarkable mean average precision (mAP) of 92.44% and an impressive sensitivity rate of 96.65% on an unseen test dataset, firmly establishing its efficacy. After achieving an optimal result, we deployed the model in our IoT system; when the system detects the presence of these animals, it immediately activates an audible buzzer. Additionally, a cloud-based system was utilized to notify neighboring farmers effectively and alert animals to potential danger. This research’s significance lies in its potential to drive the conservation of endangered species while simultaneously mitigating the agricultural damage inflicted by these animals.
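The detection-to-alert pipeline described above can be sketched as a small decision routine. Everything here is illustrative: the class names, confidence threshold, and the `trigger_buzzer`/`notify_cloud` hooks are assumptions standing in for the ESP32-CAM buzzer and the cloud alert service, and the detections would come from the YOLOv8 model rather than a hard-coded list.

```python
# Hypothetical labels and threshold, not the authors' configuration.
PROTECTED_CLASSES = {"elephant", "deer"}
CONFIDENCE_THRESHOLD = 0.5

def trigger_buzzer():
    # Stand-in for driving the audible buzzer via a GPIO pin.
    print("buzzer on")

def notify_cloud(label):
    # Stand-in for the cloud-based notification to neighboring farmers.
    print(f"cloud alert: {label} detected")

def handle_detections(detections):
    """detections: list of (label, confidence) pairs from the detector."""
    alerts = []
    for label, conf in detections:
        if label in PROTECTED_CLASSES and conf >= CONFIDENCE_THRESHOLD:
            trigger_buzzer()
            notify_cloud(label)
            alerts.append(label)
    return alerts

# Low-confidence and non-protected detections are ignored.
alerts = handle_detections([("elephant", 0.91), ("bird", 0.88), ("deer", 0.32)])
print(alerts)  # → ['elephant']
```

The real system runs this loop on each frame captured by the ESP32-CAM, so the threshold trades off missed animals against false buzzer activations.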
{"title":"IoT-Based Object-Detection System to Safeguard Endangered Animals and Bolster Agricultural Farm Security","authors":"Mohaimenul Azam Khan Raiaan, Nur Mohammad Fahad, Shovan Chowdhury, Debopom Sutradhar, Saadman Sakib Mihad, Md. Motaharul Islam","doi":"10.3390/fi15120372","DOIUrl":"https://doi.org/10.3390/fi15120372","url":null,"abstract":"Significant threats to ecological equilibrium and sustainable agriculture are posed by the extinction of animal species and the subsequent effects on farms. Farmers face difficult decisions, such as installing electric fences to protect their farms, although these measures can harm animals essential for maintaining ecological equilibrium. To tackle these essential issues, our research introduces an innovative solution in the form of an object-detection system. In this research, we designed and implemented a system that leverages the ESP32-CAM platform in conjunction with the YOLOv8 object-detection model. Our proposed system aims to identify endangered species and harmful animals within farming environments, providing real-time alerts to farmers and endangered wildlife by integrating a cloud-based alert system. To train the YOLOv8 model effectively, we meticulously compiled diverse image datasets featuring these animals in agricultural settings, subsequently annotating them. After that, we tuned the hyperparameter of the YOLOv8 model to enhance the performance of the model. The results from our optimized YOLOv8 model are auspicious. It achieves a remarkable mean average precision (mAP) of 92.44% and an impressive sensitivity rate of 96.65% on an unseen test dataset, firmly establishing its efficacy. After achieving an optimal result, we employed the model in our IoT system and when the system detects the presence of these animals, it immediately activates an audible buzzer. Additionally, a cloud-based system was utilized to notify neighboring farmers effectively and alert animals to potential danger. 
This research’s significance lies in its potential to drive the conservation of endangered species while simultaneously mitigating the agricultural damage inflicted by these animals.","PeriodicalId":37982,"journal":{"name":"Future Internet","volume":"40 6","pages":""},"PeriodicalIF":3.4,"publicationDate":"2023-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139254065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}