
Latest articles in Future Generation Computer Systems-The International Journal of Escience

DNA: Dual-radio Dual-constraint Node Activation scheduling for energy-efficient data dissemination in IoT
IF 6.2 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2025-02-11 · DOI: 10.1016/j.future.2025.107746
Hua Qin , Hao Chen , Ni Li , Yaqi Deng , Gelan Yang , Yang Peng
With the rapid expansion of the Internet of Things (IoT), a vast number of IoT gateways are being deployed and continuously disseminating data to proximate devices. As one of the most prevalent wireless technologies in our daily work and life, WiFi has been extensively used for data dissemination because of the widespread availability of WiFi infrastructures. However, data transmission over high-power WiFi can quickly deplete the batteries of IoT devices. Despite the introduction of numerous power saving protocols in WiFi-based IoT systems, achieving both high energy efficiency and low delay remains a challenge due to the unpredictability of data traffic. To address this, we propose a dual-radio Dual-constraint Node Activation (DNA) scheduling scheme, which leverages an auxiliary low-power ZigBee radio to reactively activate the high-power WiFi radio for energy-efficient data dissemination. Besides the delay constraint required by WiFi upper-layer applications, the dual-radio energy optimization problem studied in this work is also limited by the constrained ZigBee bandwidth available for performing radio activation. By jointly scheduling dual-radio duty cycles, DNA dynamically allocates ZigBee bandwidth to balance energy and delay for optimized system performance. Extensive real-world testing was conducted on a prototype dual-radio system equipped with off-the-shelf ZigBee and WiFi radios. Under medium bandwidth and delay constraints, DNA achieves an energy consumption of 7.95 mJ per data packet, which is 95.4% and 36.2% lower than that of the standard WiFi power saving protocol and a contemporary dual-radio scheduling scheme, respectively. Additionally, DNA has demonstrated superior reliability and adaptability in various scenarios.
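As a rough illustration of the reactive activation idea in this abstract, the following Python sketch (not the authors' implementation; the class name, the 0.8 safety margin, and all parameters are hypothetical) wakes the WiFi radio only when a ZigBee notification is pending, the oldest packet is close to its delay bound, and the constrained activation budget is not exhausted:

from dataclasses import dataclass, field
from typing import List

@dataclass
class DualRadioScheduler:
    delay_bound_s: float = 0.5       # max tolerated dissemination delay
    activation_budget: int = 4       # ZigBee wake-up messages allowed per window
    used_activations: int = 0
    pending: List[float] = field(default_factory=list)   # arrival timestamps

    def on_zigbee_notify(self, now: float) -> None:
        """The ZigBee radio reports that the gateway has data queued."""
        self.pending.append(now)

    def should_wake_wifi(self, now: float) -> bool:
        """Wake WiFi when the oldest pending packet nears its deadline and
        the constrained ZigBee activation bandwidth is not exhausted."""
        if not self.pending or self.used_activations >= self.activation_budget:
            return False
        if now - self.pending[0] >= 0.8 * self.delay_bound_s:   # safety margin
            self.used_activations += 1
            self.pending.clear()   # the awakened WiFi radio drains the queue
            return True
        return False

scheduler = DualRadioScheduler()
scheduler.on_zigbee_notify(now=0.0)
print(scheduler.should_wake_wifi(now=0.45))   # True: close to the delay bound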
{"title":"DNA: Dual-radio Dual-constraint Node Activation scheduling for energy-efficient data dissemination in IoT","authors":"Hua Qin ,&nbsp;Hao Chen ,&nbsp;Ni Li ,&nbsp;Yaqi Deng ,&nbsp;Gelan Yang ,&nbsp;Yang Peng","doi":"10.1016/j.future.2025.107746","DOIUrl":"10.1016/j.future.2025.107746","url":null,"abstract":"<div><div>With the fast expansion of the Internet of Things (IoT), a vast number of IoT gateways are being deployed and continuously disseminating data to proximate devices. As one of the most prevalent wireless technologies in our daily work and life, WiFi has been extensively used for data dissemination because of the widespread availability of WiFi infrastructures. However, data transmission over high-power WiFi can quickly deplete the batteries of IoT devices. Despite the introduction of numerous power saving protocols in WiFi-based IoT computer network systems, achieving both high energy efficiency and low delay remains a challenge due to the unpredictability of data traffic. To address this, we propose a dual-radio Dual-constraint Node Activation (DNA) scheduling scheme, which leverages an auxiliary low-power ZigBee radio to reactively activate the high-power WiFi radio for energy-efficient data dissemination. Besides the delay constraint required by WiFi upper-layer applications, the dual-radio energy optimization problem studied in this work is also limited by the constrained ZigBee bandwidth for performing radio activation. By jointly scheduling dual-radio duty cycles, DNA dynamically allocates ZigBee bandwidth to balance energy and delay for optimized system performance. Extensive real-world testing was conducted on a prototype dual-radio system equipped with off-the-shelf ZigBee and WiFi radios. Under medium bandwidth and delay constraints, DNA achieves an energy consumption of 7.95 mJ per data packet, which is 95.4% and 36.2% lower than the WiFi’s standard power saving protocol and a contemporary dual-radio scheduling scheme, respectively. Additionally, DNA has demonstrated superior reliability and adaptability in various scenarios.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107746"},"PeriodicalIF":6.2,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143394643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Blending lossy and lossless data compression methods to support health data streaming in smart cities
IF 6.2 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2025-02-11 · DOI: 10.1016/j.future.2025.107748
Alexandre Andrade , Cristiano André da Costa , Alex Roehrs , Debora Muchaluat-Saade , Rodrigo da Rosa Righi
The digital transformation process has significantly boosted the widespread adoption of telemedicine and the utilization of wearable devices for remote monitoring of vital signs. However, implementing a system for continuous monitoring of the population’s vital signs, with data being streamed from various locations within a smart city context, faces significant challenges. These challenges are related to bandwidth consumption, communication latency, and storage capacity due to the large volume of data. To overcome these challenges, a common practice is to model an edge-fog-cloud layered architecture. The literature lacks software solutions capable of managing the simultaneous transmission of various vital signs data from geographically distributed individuals while maintaining the ability to generate health notifications promptly. In this context, we propose the VSAC (Vital Sign Adaptive Compressor) model, which combines lossy and lossless data compression algorithms in a layered architecture to support healthcare demands in a smart city. The main contribution is how we blend both strategies: we first use lossy compression to retain only the valuable vital sign data for each individual, then apply lossless algorithms to reduce the number of bytes before sending the data to higher layers. We provide a real-time processing protocol that facilitates the collection of heterogeneous data distributed across different city regions. After executing a VSAC prototype, the results indicate that orchestrating the aforementioned two data compression algorithms is more efficient than conventional data reduction methods. In particular, we obtained gains of up to 42% when measuring the compression rate metric.
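To make the blended strategy concrete, here is a minimal Python sketch (not the VSAC implementation; the deadband threshold and function names are hypothetical): a lossy deadband filter drops vital-sign samples that barely change, and zlib then losslessly compresses the surviving bytes before they are sent to higher layers.

import json, zlib

def deadband_filter(samples, threshold=2.0):
    """Lossy step: keep a sample only if it differs from the last kept one by more than threshold."""
    kept, last = [], None
    for timestamp, value in samples:
        if last is None or abs(value - last) > threshold:
            kept.append((timestamp, value))
            last = value
    return kept

def compress_for_uplink(samples, threshold=2.0):
    """Lossless step: serialize the filtered samples and compress them with zlib."""
    kept = deadband_filter(samples, threshold)
    return zlib.compress(json.dumps(kept).encode("utf-8"), level=9)

# Example: heart-rate readings as (timestamp, bpm) pairs.
readings = [(0, 72.0), (1, 72.4), (2, 75.1), (3, 75.3), (4, 80.2)]
packet = compress_for_uplink(readings)
print(len(packet), "bytes sent instead of", len(json.dumps(readings).encode("utf-8")))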
{"title":"Blending lossy and lossless data compression methods to support health data streaming in smart cities","authors":"Alexandre Andrade ,&nbsp;Cristiano André da Costa ,&nbsp;Alex Roehrs ,&nbsp;Debora Muchaluat-Saade ,&nbsp;Rodrigo da Rosa Righi","doi":"10.1016/j.future.2025.107748","DOIUrl":"10.1016/j.future.2025.107748","url":null,"abstract":"<div><div>The digital transformation process has significantly boosted the widespread adoption of telemedicine and the utilization of wearable devices for vital signs remote monitoring. However, implementing a system for continuous monitoring of the population’s vital signs, with data being streamed from various locations within a smart city context, faces significant challenges. These challenges are related to bandwidth consumption, communication latency, and storage capacity due to the large volume of data. To overcome these challenges, a common practice consists in modeling an edge-fog-cloud layered architecture. The literature lacks software solutions capable of managing the simultaneous transmission of various vital signs data from geographically distributed individuals while maintaining the ability to generate health notifications promptly. In this context, we propose the VSAC (Vital Sign Adaptive Compressor) model, which combines lossy and lossless data compression algorithms in a layered architecture to support healthcare demands in a smart city. The main contribution is how we blend both strategies: we first use lossy compression to collect only valuable vital sign data for everyone, applying lossless algorithms afterwards to reduce the number of bytes before sending it to higher layers. We provide a real-time processing protocol that facilitates the collection of heterogeneous data distributed across different city regions. After executing a VSAC prototype, the results indicate that orchestrating the aforementioned two data compression algorithms is more efficient than conventional data reduction methods. In particular, we obtained gains of up to 42% when measuring the compression rate metric.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107748"},"PeriodicalIF":6.2,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143403358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Leading Smart Environments towards the Future Internet through Name Data Networking: A survey
IF 6.2 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2025-02-10 · DOI: 10.1016/j.future.2025.107754
Rafiq Ul Islam, Claudio Savaglio, Giancarlo Fortino
The increasing diffusion of Smart Environments enabled by Internet of Things (IoT) technologies has exposed the limitations of the traditional Internet Protocol (IP), thus pushing for a paradigm shift from host-centric to Information-Centric Networking (ICN). Named Data Networking (NDN) is a particular ICN implementation that promises more efficient and effective communication and service provision, which is why it is widely regarded as an enabler of the Future Internet. Driven by the PRISMA methodology, in this work we systematically survey the current literature and analyze the opportunities and limitations of NDN adoption within Smart Environments, the targeted application areas, the adopted technologies, and the remaining research gaps. In particular, by means of a research framework, we highlight how, by shifting from traditional IP-based networking to NDN, Smart Environments can benefit from unseen degrees of mobility, scalability, security and performance, paving the way to innovative and cutting-edge cyberphysical services.
{"title":"Leading Smart Environments towards the Future Internet through Name Data Networking: A survey","authors":"Rafiq Ul Islam,&nbsp;Claudio Savaglio,&nbsp;Giancarlo Fortino","doi":"10.1016/j.future.2025.107754","DOIUrl":"10.1016/j.future.2025.107754","url":null,"abstract":"<div><div>The increasing diffusion of Smart Environments enabled by the Internet of Things (IoT) technologies has evidenced the limitations of traditional Internet Protocol (IP), thus pushing for a paradigm shift from host-centric to Information-Centric Networking (ICN). The Named Data Networking (NDN) is a particular ICN implementation that prospects more efficient and effective communication and service provision, reason why it is widely considered as an enabler towards Future Internet. Driven by the PRISMA methodology, in this work we systematically survey the current literature and analyze opportunities and limitations of NDN adoption within Smart Environments, targeted application areas, adopted technologies and research gaps. In particular, by means of a research framework, we highlight how, by shifting from the traditional IP-based to NDN, Smart Environments can benefit from unseen degrees of mobility, scalability, security and performance, paving the way to innovative and cutting-edge cyberphysical services.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107754"},"PeriodicalIF":6.2,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143394640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Energy–time modelling of distributed multi-population genetic algorithms with dynamic workload in HPC clusters
IF 6.2 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2025-02-10 · DOI: 10.1016/j.future.2025.107753
Juan José Escobar , Pablo Sánchez-Cuevas , Beatriz Prieto , Rukiye Savran Kızıltepe , Fernando Díaz-del-Río , Dragi Kimovski
Time and energy efficiency are highly relevant objectives in high-performance computing systems, where executing tasks carries high costs. Among these tasks, evolutionary algorithms are of particular interest due to their inherent parallel scalability and usually costly fitness evaluation functions. In this respect, several scheduling strategies for workload balancing in heterogeneous systems have been proposed in the literature, with runtime and energy consumption reduction as their goals. Our hypothesis is that a dynamic workload distribution can be fitted with greater precision using metaheuristics, such as genetic algorithms, instead of linear regression. Therefore, this paper proposes a new mathematical model to predict the energy–time behaviour of applications based on multi-population genetic algorithms, which dynamically distribute the evaluation of individuals among the CPU–GPU devices of heterogeneous clusters. An accurate predictor would save time and energy by selecting the best resource set before running such applications. The estimation of the workload distributed to each device has been carried out by simulation, while the model parameters have been fitted in a two-phase run using another genetic algorithm and the experimental energy–time values of the target application as input. When the new model is analysed and compared with another based on linear regression, the one proposed in this work significantly improves on the baseline approach, showing normalised prediction errors of 0.081 for runtime and 0.091 for energy consumption, compared to 0.213 and 0.256 for the baseline approach.
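As a toy example of fitting model parameters with a genetic algorithm rather than linear regression, the sketch below (purely illustrative; the assumed linear model form, the measurements, and all GA settings are hypothetical and unrelated to the paper's actual model) evolves the coefficients of a runtime predictor to minimize squared error against measured runs:

import random

# (cpu_work, gpu_work) -> measured runtime in seconds (made-up values)
measurements = [((100, 900), 12.1), ((400, 600), 9.8), ((700, 300), 8.4)]

def predict(params, workload):
    a, b, c = params
    n_cpu, n_gpu = workload
    return a + b * n_cpu + c * n_gpu

def fitness(params):
    # Negative sum of squared prediction errors (higher is better).
    return -sum((predict(params, w) - t) ** 2 for w, t in measurements)

def evolve(pop_size=50, generations=200):
    population = [[random.uniform(0, 0.05) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            children.append([(x + y) / 2 + random.gauss(0, 0.001) for x, y in zip(p1, p2)])
        population = survivors + children
    return max(population, key=fitness)

print(evolve())   # fitted (a, b, c) coefficients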
{"title":"Energy–time modelling of distributed multi-population genetic algorithms with dynamic workload in HPC clusters","authors":"Juan José Escobar ,&nbsp;Pablo Sánchez-Cuevas ,&nbsp;Beatriz Prieto ,&nbsp;Rukiye Savran Kızıltepe ,&nbsp;Fernando Díaz-del-Río ,&nbsp;Dragi Kimovski","doi":"10.1016/j.future.2025.107753","DOIUrl":"10.1016/j.future.2025.107753","url":null,"abstract":"<div><div>Time and energy efficiency is a highly relevant objective in high-performance computing systems, with high costs for executing the tasks. Among these tasks, evolutionary algorithms are of consideration due to their inherent parallel scalability and usually costly fitness evaluation functions. In this respect, several scheduling strategies for workload balancing in heterogeneous systems have been proposed in the literature, with runtime and energy consumption reduction as their goals. Our hypothesis is that a dynamic workload distribution can be fitted with greater precision using metaheuristics, such as genetic algorithms, instead of linear regression. Therefore, this paper proposes a new mathematical model to predict the energy–time behaviour of applications based on multi-population genetic algorithms, which dynamically distributes the evaluation of individuals among the CPU–GPU devices of heterogeneous clusters. An accurate predictor would save time and energy by selecting the best resource set before running such applications. The estimation of the workload distributed to each device has been carried out by simulation, while the model parameters have been fitted in a two-phase run using another genetic algorithm and the experimental energy–time values of the target application as input. When the new model is analysed and compared with another based on linear regression, the one proposed in this work significantly improves the baseline approach, showing normalised prediction errors of 0.081 for runtime and 0.091 for energy consumption, compared to 0.213 and 0.256 shown in the baseline approach.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107753"},"PeriodicalIF":6.2,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143386372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
RGAnomaly: Data reconstruction-based generative adversarial networks for multivariate time series anomaly detection in the Internet of Things
IF 6.2 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2025-02-10 · DOI: 10.1016/j.future.2025.107751
Cheng Qian , Wenzhong Tang , Yanyang Wang
The Internet of Things encompasses a variety of components, including sensors and controllers, which generate vast amounts of multivariate time series data. Anomaly detection within this data can reveal patterns of behavior that deviate from normal operating states, providing timely alerts to mitigate potential serious issues or losses. The prevailing methodologies for multivariate time series anomaly detection are based on data reconstruction. However, these methodologies face challenges related to insufficient feature extraction and fusion, as well as instability in the reconstruction effectiveness of a single model. In this article, we propose RGAnomaly, a novel data reconstruction-based generative adversarial network model. This model leverages transformers and cross-attention mechanisms to extract and fuse the temporal and metric features of multivariate time series. RGAnomaly constructs a joint generator comprising an autoencoder and a variational autoencoder, which forms the adversarial structure with a discriminator. The anomaly score is derived from the combined data reconstruction loss and discrimination loss, providing a more comprehensive evaluation for anomaly detection. Comparative experiments and ablation studies on four public multivariate time series datasets demonstrate that RGAnomaly delivers superior performance in anomaly detection, effectively identifying anomalies in time series data within IoT environments.
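For clarity, here is a hedged sketch of the kind of composite anomaly score the abstract describes (the 0.7 weighting and the exact combination are hypothetical, not the authors' formulation): the score blends how poorly a time window is reconstructed with how strongly the discriminator rejects it.

def anomaly_score(window, reconstruction, discriminator_prob_normal, alpha=0.7):
    """window, reconstruction: equal-length lists of floats for one time window;
    discriminator_prob_normal: discriminator's probability that the window is normal."""
    reconstruction_loss = sum((a - b) ** 2 for a, b in zip(window, reconstruction)) / len(window)
    discrimination_loss = 1.0 - discriminator_prob_normal
    return alpha * reconstruction_loss + (1.0 - alpha) * discrimination_loss

# A window is flagged as anomalous when its score exceeds a threshold chosen on
# validation data, e.g. a high quantile of the scores observed on normal traffic.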
{"title":"RGAnomaly: Data reconstruction-based generative adversarial networks for multivariate time series anomaly detection in the Internet of Things","authors":"Cheng Qian ,&nbsp;Wenzhong Tang ,&nbsp;Yanyang Wang","doi":"10.1016/j.future.2025.107751","DOIUrl":"10.1016/j.future.2025.107751","url":null,"abstract":"<div><div>The Internet of Things encompasses a variety of components, including sensors and controllers, which generate vast amounts of multivariate time series data. Anomaly detection within this data can reveal patterns of behavior that deviate from normal operating states, providing timely alerts to mitigate potential serious issues or losses. The prevailing methodologies for multivariate time series anomaly detection are based on data reconstruction. However, these methodologies face challenges related to insufficient feature extraction and fusion, as well as instability in the reconstruction effectiveness of a single model. In this article, we propose RGAnomaly, a novel data reconstruction-based generative adversarial network model. This model leverages transformers and cross-attention mechanisms to extract and fuse the temporal and metric features of multivariate time series. RGAnomaly constructs a joint generator comprising an autoencoder and a variational autoencoder, which forms the adversarial structure with a discriminator. The anomaly score is derived from the combined data reconstruction loss and discrimination loss, providing a more comprehensive evaluation for anomaly detection. Comparative experiments and ablation studies on four public multivariate time series datasets demonstrate that RGAnomaly delivers superior performance in anomaly detection, effectively identifying anomalies in time series data within IoT environments.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107751"},"PeriodicalIF":6.2,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143394641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing federated averaging of self-supervised monocular depth estimators for autonomous vehicles with Bayesian optimization
IF 6.2 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2025-02-10 · DOI: 10.1016/j.future.2025.107752
Elton F. de S. Soares , Emilio Vital Brazil , Carlos Alberto V. Campos
Recent research in computer vision for intelligent transportation systems has prominently focused on image-based depth estimation due to its cost-effectiveness and versatile applications. Monocular depth estimation methods, in particular, have gained attention for their reliance on a single camera, offering high versatility compared to binocular techniques requiring two fixed cameras. While advanced approaches leverage self-supervised deep neural network learning with proxy tasks like pose estimation and semantic segmentation, some overlook crucial requirements for real autonomous vehicle deployment. These include data privacy, reduced network consumption, distributed computational cost, and resilience to connectivity issues. Recent studies highlight the effectiveness of federated learning combined with Bayesian optimization in addressing these requirements without compromising model efficacy. Thus, we introduce BOFedSCDepth, a novel method integrating Bayesian optimization, federated learning, and deep self-supervision to train monocular depth estimators with better efficacy and efficiency than the state-of-the-art method on self-supervised federated learning. Evaluation experiments on KITTI and DDAD datasets demonstrate the superiority of our approach, achieving up to 40.1% test loss improvement over the baseline at the initial rounds of training with up to 33.3% communication cost reduction, linear computational cost overhead at the central server and no overhead at the autonomous vehicles.
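The aggregation step at the heart of federated averaging can be sketched as follows (a simplified illustration, not the BOFedSCDepth code; the Bayesian-optimization step that tunes the training hyperparameters is omitted): the server averages the clients' depth-estimator weights, weighted by each vehicle's number of local samples.

from typing import Dict, List

def federated_average(client_weights: List[Dict[str, List[float]]],
                      client_sizes: List[int]) -> Dict[str, List[float]]:
    """Weighted average of per-client parameter vectors, by local dataset size."""
    total = sum(client_sizes)
    averaged = {}
    for name in client_weights[0]:
        length = len(client_weights[0][name])
        averaged[name] = [
            sum(w[name][i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(length)
        ]
    return averaged

# Example with two vehicles and one toy parameter tensor:
w_a = {"encoder.layer1": [0.2, 0.4]}
w_b = {"encoder.layer1": [0.6, 0.8]}
print(federated_average([w_a, w_b], client_sizes=[100, 300]))   # {'encoder.layer1': [0.5, 0.7]}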
{"title":"Enhancing federated averaging of self-supervised monocular depth estimators for autonomous vehicles with Bayesian optimization","authors":"Elton F. de S. Soares ,&nbsp;Emilio Vital Brazil ,&nbsp;Carlos Alberto V. Campos","doi":"10.1016/j.future.2025.107752","DOIUrl":"10.1016/j.future.2025.107752","url":null,"abstract":"<div><div>Recent research in computer vision for intelligent transportation systems has prominently focused on image-based depth estimation due to its cost-effectiveness and versatile applications. Monocular depth estimation methods, in particular, have gained attention for their reliance on a single camera, offering high versatility compared to binocular techniques requiring two fixed cameras. While advanced approaches leverage self-supervised deep neural network learning with proxy tasks like pose estimation and semantic segmentation, some overlook crucial requirements for real autonomous vehicle deployment. These include data privacy, reduced network consumption, distributed computational cost, and resilience to connectivity issues. Recent studies highlight the effectiveness of federated learning combined with Bayesian optimization in addressing these requirements without compromising model efficacy. Thus, we introduce BOFedSCDepth, a novel method integrating Bayesian optimization, federated learning, and deep self-supervision to train monocular depth estimators with better efficacy and efficiency than the state-of-the-art method on self-supervised federated learning. Evaluation experiments on KITTI and DDAD datasets demonstrate the superiority of our approach, achieving up to 40.1% test loss improvement over the baseline at the initial rounds of training with up to 33.3% communication cost reduction, linear computational cost overhead at the central server and no overhead at the autonomous vehicles.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107752"},"PeriodicalIF":6.2,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143394642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
K-bisimulation: A novel approach for simplifying heterogeneous information networks
IF 6.2 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2025-02-10 · DOI: 10.1016/j.future.2025.107749
Yongjie Liang , Wujie Hu , Jinzhao Wu
Heterogeneous information networks (HINs) are becoming increasingly important and widely used; however, few studies focus on the branch structures within HINs. Based on the commonalities between concurrent systems and heterogeneous information networks, as well as the significant application of bisimulation equivalence in concurrent systems, this article proposes k-bisimulation among nodes of the same node type, aiming to simplify the branching structure of an HIN to obtain a cost-effective model, where k is a positive integer closely related to the degree of similarity between nodes. In this paper, we initially define the notion of k-bisimulation for nodes. Subsequently, we propose a computational method to identify k-bisimulation among nodes of the same type in HINs. With the assistance of this method, we can derive a network that is approximately bisimilar to the original one. Theoretical and practical analysis reveals that the errors in connected paths between the original and bisimilar networks are controllable. Experimental results indicate that, in comparison to the original network, the obtained network exhibits a reduction in the number of nodes and edges, while still preserving the same or similar information.
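The flavour of the computation can be conveyed with a small partition-refinement sketch (a generic view of k-step bisimulation grouping, not the paper's exact algorithm or definition): same-type nodes start in one block and are repeatedly split, k times, according to the multiset of blocks of their neighbours.

def k_bisimulation_blocks(nodes, node_type, edges, k):
    """nodes: iterable of node ids; node_type: dict id -> type label;
    edges: dict id -> list of neighbour ids; k: refinement depth."""
    block = {v: node_type[v] for v in nodes}   # initial partition by node type
    for _ in range(k):
        block = {
            v: (block[v], tuple(sorted(block[u] for u in edges.get(v, []))))
            for v in nodes
        }
    return block   # nodes sharing a block label can be merged to simplify the HIN

# Example: two 'paper' nodes linked to authors with the same structure end up in one block.
blocks = k_bisimulation_blocks(
    nodes=["p1", "p2", "a1", "a2"],
    node_type={"p1": "paper", "p2": "paper", "a1": "author", "a2": "author"},
    edges={"p1": ["a1"], "p2": ["a2"]},
    k=2,
)
print(blocks["p1"] == blocks["p2"])   # True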
{"title":"K-bisimulation: A novel approach for simplifying heterogeneous information networks","authors":"Yongjie Liang ,&nbsp;Wujie Hu ,&nbsp;Jinzhao Wu","doi":"10.1016/j.future.2025.107749","DOIUrl":"10.1016/j.future.2025.107749","url":null,"abstract":"<div><div>Heterogeneous information networks (HINs) are becoming increasingly important and widely used; however, fewer studies are focusing on the branch structures within HINs. Based on the commonalities of concurrent systems and heterogeneous information networks, as well as the significant application of bisimulation equivalence in concurrent systems, this article proposes k-bisimulation among nodes belonging to same node type, aiming to simplify the branching structure of that to obtain a cost-effective model, wherein the k is a positive integrate being closely related to the similarity degree of nodes. In this paper, we initially define the notion of k-bisimulation for nodes. Subsequently, we propose a computational method to identify k-bisimulation among nodes of same type in HINs. With the assistance of this method, we can derive a network that is approximately bisimular to the original one. Theoretical and practical analysis reveals that errors in connected paths between the original and bisimular networks are controllable. Experimental results indicate that, in comparison to the original network, the obtained network exhibits a reduction in the number of nodes and edges, while still preserve same or similar information.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107749"},"PeriodicalIF":6.2,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143386373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Web of Things approach for learning on the Edge–Cloud Continuum
IF 6.2 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2025-02-08 · DOI: 10.1016/j.future.2025.107736
Luca Bedogni , Federico Chiariotti
Internet of Things (IoT) devices provide constant, contextual data that can be leveraged to automatically reconfigure and optimize smart environments. Artificial Intelligence (AI) and deep learning techniques are tools of increasing importance for this, as Deep Reinforcement Learning (DRL) can provide a general solution to this problem. However, the heterogeneity of scenarios in which DRL models may be deployed is vast, making the design of universal plug-and-play models extremely difficult. Moreover, the real deployment of DRL models on the Edge, and in the IoT in particular, is limited by two factors: firstly, the computational complexity of the training procedure, and secondly, the need for a relatively long exploration phase, during which the agent proceeds by trial and error. A natural solution to both these issues is to use simulated environments by creating a Digital Twin (DT) of the environment, which can replicate physical entities in the digital domain, providing a standardized interface to the application layer. DTs allow for simulation and testing of models and services in a simulated environment, which may be hosted on more powerful Cloud servers without the need to exchange all the data generated by the real devices. In this paper, we present a novel architecture based on the emerging Web of Things (WoT) standard, which provides a DT of a smart environment and applies DRL techniques to real-time data. We discuss the theoretical properties of DRL training using DTs, showcasing our system in an existing real deployment and comparing its performance with a legacy system. Our findings show that the implementation of a DT, specifically for DRL models, allows for faster convergence and finer tuning, as well as reducing the computational and communication demands on the Edge network. The use of multiple DTs with different complexities and data requirements can also help accelerate the training by progressing in steps.
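As a conceptual sketch of the train-in-the-twin idea (not the paper's WoT architecture; the toy environment, reward model, and tabular agent are hypothetical stand-ins for a DT and a DRL model), the agent explores a simulated Digital Twin hosted in the Cloud, and only the learned policy would be deployed at the Edge:

import random

class DigitalTwinEnv:
    """Toy simulated environment standing in for a DT: 2 states, 2 actions."""
    def __init__(self):
        self.state = 0
    def step(self, action):
        reward = 1.0 if action == self.state else 0.0   # simplistic reward model
        self.state = random.randint(0, 1)
        return self.state, reward

def train_in_twin(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1):
    q = [[0.0, 0.0], [0.0, 0.0]]
    twin, state = DigitalTwinEnv(), 0
    for _ in range(episodes):
        if random.random() < eps:
            action = random.randint(0, 1)                     # explore in the twin
        else:
            action = max((0, 1), key=lambda a: q[state][a])   # exploit
        next_state, reward = twin.step(action)
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state
    return q   # the greedy policy derived from q is what gets pushed to the Edge

policy_table = train_in_twin()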
{"title":"A Web of Things approach for learning on the Edge–Cloud Continuum","authors":"Luca Bedogni ,&nbsp;Federico Chiariotti","doi":"10.1016/j.future.2025.107736","DOIUrl":"10.1016/j.future.2025.107736","url":null,"abstract":"<div><div>Internet of Things (IoT) devices provide constant, contextual data that can be leveraged to automatically reconfigure and optimize smart environments. Artificial Intelligence (AI) and deep learning techniques are tools of increasing importance for this, as Deep Reinforcement Learning (DRL) can provide a general solution to this problem. However, the heterogeneity of scenarios in which DRL models may be deployed is vast, making the design of universal plug-and-play models extremely difficult. Moreover, the real deployment of DRL models on the Edge, and in the IoT in particular, is limited by two factors: firstly, the computational complexity of the training procedure, and secondly, the need for a relatively long exploration phase, during which the agent proceeds by trial and error. A natural solution to both these issues is to use simulated environments by creating a Digital Twin (DT) of the environment, which can replicate physical entities in the digital domain, providing a standardized interface to the application layer. DTs allow for simulation and testing of models and services in a simulated environment, which may be hosted on more powerful Cloud servers without the need to exchange all the data generated by the real devices. In this paper, we present a novel architecture based on the emerging Web of Things (WoT) standard, which provides a DT of a smart environment and applies DRL techniques on real time data. We discuss the theoretical properties of DRL training using DTs, showcasing our system in an existing real deployment, comparing its performance with a legacy system. Our findings show that the implementation of a DT, specifically for DRL models, allows for faster convergence and finer tuning, as well as reducing the computational and communication demands on the Edge network. The use of multiple DTs with different complexities and data requirements can also help accelerate the training, progressing by steps.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107736"},"PeriodicalIF":6.2,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143394644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hierarchical risk parity: Efficient implementation and real world analysis
IF 6.2 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2025-02-07 · DOI: 10.1016/j.future.2025.107744
Dario Deković , Petra Posedel Šimović
In this paper, we present an efficient implementation of the Hierarchical Risk Parity (HRP) portfolio optimization algorithm. HRP was designed to allocate portfolio weights by building a hierarchical tree of asset clusters and reducing risk through inverse variance allocation across the clusters. Our implementation improves the performance of the original algorithm by reducing its time complexity and making it more suitable for real-time systems. We evaluate the performance of our implementation on various constituents of the S&P 500 index, a market-capitalization-weighted index of 500 leading publicly traded companies in the U.S., using historical price data from 2005 to 2023. We compare the out-of-sample risk-adjusted returns of the HRP algorithm to those of a simple 1/N allocation method and find that the 1/N method outperforms HRP across all experimental setups. However, the HRP-generated portfolios had a standard deviation approximately 1% lower across all experimental setups. These results show that HRP can be of great use in generating portfolios when risk is the primary concern.
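The two core ingredients of HRP named in the abstract can be sketched in a few lines (a simplified illustration under the assumption that assets are already quasi-diagonalized into a given order; not the authors' optimized implementation): inverse-variance weights inside a cluster, and recursive bisection that splits risk between the two halves of each cluster.

import numpy as np

def inverse_variance_weights(cov):
    iv = 1.0 / np.diag(cov)
    return iv / iv.sum()

def cluster_variance(cov, idx):
    sub = cov[np.ix_(idx, idx)]
    w = inverse_variance_weights(sub)
    return float(w @ sub @ w)

def recursive_bisection(cov, order):
    weights = np.ones(len(order))
    stack = [list(order)]
    while stack:
        items = stack.pop()
        if len(items) <= 1:
            continue
        left, right = items[: len(items) // 2], items[len(items) // 2:]
        var_left, var_right = cluster_variance(cov, left), cluster_variance(cov, right)
        alpha = 1.0 - var_left / (var_left + var_right)   # lower-risk half gets more weight
        weights[left] *= alpha
        weights[right] *= 1.0 - alpha
        stack += [left, right]
    return weights

cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
print(recursive_bisection(cov, order=[0, 1, 2]))   # weights sum to 1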
{"title":"Hierarchical risk parity: Efficient implementation and real world analysis","authors":"Dario Deković ,&nbsp;Petra Posedel Šimović","doi":"10.1016/j.future.2025.107744","DOIUrl":"10.1016/j.future.2025.107744","url":null,"abstract":"<div><div>In this paper, we present an efficient implementation of the Hierarchical Risk Parity (HRP) portfolio optimization algorithm. HRP was designed to allocate portfolio weights by building a hierarchical tree of asset clusters and reducing risk through inverse variance allocation across the clusters. Our implementation improves the performance of the original algorithm by reducing its time complexity and making it more suitable for real-time systems. We evaluate the performance of our implementation on various constituents of the S&amp;P 500 index, a market-capitalization-weighted index of 500 leading publicly traded companies in the U.S., using historical price data from 2005 to 2023. We compare the out-of-sample risk-adjusted returns of the HRP algorithm to those of a simple 1/N allocation method and find that the 1/N method outperforms HRP across all experimental setups. However, the HRP generated portfolios had a lower standard deviation by approximately 1% across all experimental setups. These results show that HRP can be of great use in generating portfolios when risk is the primary concern.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107744"},"PeriodicalIF":6.2,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143378304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Real-time unsupervised video object detection on the edge
IF 6.2 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2025-02-06 · DOI: 10.1016/j.future.2025.107737
Paula Ruiz-Barroso, Francisco M. Castro, Nicolás Guil
Object detection in video is an essential computer vision task. Consequently, many efforts have been devoted to developing precise and fast deep-learning models for this task. These models are commonly deployed on discrete and powerful GPU devices to meet both frame rate performance and detection accuracy requirements. Furthermore, model training is usually performed in a strongly supervised way, so that samples must first be labelled by humans in a slow and costly process. In this paper, we develop a real-time implementation for unsupervised object detection in video employing a low-power device. We improve typical approaches for object detection by using information supplied by optical flow to detect moving objects. Besides, we use an unsupervised clustering algorithm to group similar detections, which avoids manual object labelling. Finally, we propose a methodology to optimize the deployment of our resulting framework on an embedded heterogeneous platform. Thus, we illustrate how all the computational resources of a Jetson AGX Xavier (CPU, GPU, and DLAs) can be used to fulfil frame rate, accuracy, and energy consumption requirements. Three different data representations (FP32, FP16 and INT8) are studied for the pipeline networks in order to evaluate the impact of each on our pipeline. Obtained results show that our proposed optimizations can reduce energy consumption by up to 23.6× and execution time by up to 32.2× with respect to the non-optimized pipeline, without penalizing the original mAP (59.44). This computational complexity reduction is achieved through knowledge distillation, using FP16 data precision, and deploying concurrent tasks in different computing units.
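A minimal sketch of the motion-based, label-free detection idea described above (not the optimized Jetson pipeline from the paper; the thresholds and DBSCAN parameters are hypothetical and scene-dependent): dense optical flow highlights moving pixels, and unsupervised clustering groups them into candidate objects.

import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def detect_moving_objects(prev_gray, curr_gray, motion_thresh=2.0):
    """prev_gray, curr_gray: consecutive grayscale frames (uint8 arrays)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    ys, xs = np.where(magnitude > motion_thresh)       # pixels that moved
    if len(xs) == 0:
        return []
    points = np.stack([xs, ys], axis=1)
    labels = DBSCAN(eps=10, min_samples=50).fit_predict(points)
    boxes = []
    for label in set(labels) - {-1}:                   # -1 marks noise points
        cluster = points[labels == label]
        x0, y0 = cluster.min(axis=0)
        x1, y1 = cluster.max(axis=0)
        boxes.append((int(x0), int(y0), int(x1), int(y1)))
    return boxes                                       # unlabeled bounding boxes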
{"title":"Real-time unsupervised video object detection on the edge","authors":"Paula Ruiz-Barroso,&nbsp;Francisco M. Castro,&nbsp;Nicolás Guil","doi":"10.1016/j.future.2025.107737","DOIUrl":"10.1016/j.future.2025.107737","url":null,"abstract":"<div><div>Object detection in video is an essential computer vision task. Consequently, many efforts have been devoted to developing precise and fast deep-learning models for this task. These models are commonly deployed on discrete and powerful GPU devices to meet both frame rate performance and detection accuracy requirements. Furthermore, model training is usually performed in a strongly supervised way so that samples must be previously labelled by humans using a slow and costly process. In this paper, we develop a real-time implementation for unsupervised object detection in video employing a low-power device. We improve typical approaches for object detection using information supplied by optical flow to detect moving objects. Besides, we use an unsupervised clustering algorithm to group similar detections that avoid manual object labelling. Finally, we propose a methodology to optimize the deployment of our resulting framework on an embedded heterogeneous platform. Thus, we illustrate how all the computational resources of a Jetson AGX Xavier (CPU, GPU, and DLAs) can be used to fulfil frame rate, accuracy, and energy consumption requirements. Three different data representations (FP32, FP16 and INT8) are studied for the pipeline networks in order to evaluate the impact of all of them in our pipeline. Obtained results show that our proposed optimizations can improve up to <span><math><mrow><mn>23</mn><mo>.</mo><mn>6</mn><mo>×</mo></mrow></math></span> energy consumption and <span><math><mrow><mn>32</mn><mo>.</mo><mn>2</mn><mo>×</mo></mrow></math></span> execution time with respect to the non-optimized pipeline without penalizing the original mAP (59.44). This computational complexity reduction is achieved through knowledge distillation, using FP16 data precision, and deploying concurrent tasks in different computing units.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"167 ","pages":"Article 107737"},"PeriodicalIF":6.2,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143350726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0