Pub Date: 2023-12-11 | DOI: 10.1109/CommNet60167.2023.10365185
Hafsa Raissouli, Ahmad Alauddin Bin Ariffin, S. Belhaouari
The continuous rise in the number of IoT devices has made the fog computing paradigm increasingly important. Part of the workload is processed locally on the IoT device, and the rest is offloaded and allocated to fog nodes. This allocation decision should minimize delay while also taking energy consumption into account. This study optimizes workload allocation to minimize delay and power consumption using multi-objective evolutionary algorithms, namely NSGA-II, R-NSGA-II, NSGA-III, R-NSGA-III, and CTAEA. The experiments cover two scenarios, the IoT device transmitting at full power and at half power, with varying workload sizes. The results demonstrate the superior performance of NSGA-III and CTAEA in optimizing task allocation in fog computing environments. By demonstrating the effectiveness of NSGA-III and CTAEA, these findings not only advance the understanding of evolutionary algorithms but also provide practical insights for optimizing fog computing systems. This research has broader implications for improving the efficiency and performance of fog computing applications, with potential uses across various scenarios in the field.
{"title":"Workload Allocation in Fog Environment Using Multi-Objective Evolutionary Algorithms for Internet of Things","authors":"Hafsa Raissouli, Ahmad Alauddin Bin Ariffin, S. Belhaouari","doi":"10.1109/CommNet60167.2023.10365185","DOIUrl":"https://doi.org/10.1109/CommNet60167.2023.10365185","url":null,"abstract":"The continuous rise in the number of IoT devices has led to an increasing importance of the fog computing paradigm. Part of the workload that should be processed is executed locally on the IoT device and the rest is offloaded and allocated to the fog nodes. This workload allocation decision should be done in a way that provides the lowest delay but while taking into account the energy consumption as well. This study presents an optimization of the workload allocation that minimizes delay and power consumption using the multi-objective evolutionary algorithms, namely, NSGA II, R-NSGA II, NSGA III, R-NSGA III and CTAEA. The experiments involve two scenarios, full transmission power of the IoT device, and half of its transmission power with varying workload sizes. The results manifested the superior performance of NSGA III and CTAEA in optimizing the allocation of tasks in fog computing environments. By demonstrating NSGA III and CTAEA’s effectiveness, this findings not only advance the understanding of evolutionary algorithms but also provide practical insights for optimizing fog computing systems. This research has broader implications for improving the efficiency and performance of fog computing applications, with potential applications across various scenarios in the field.","PeriodicalId":505542,"journal":{"name":"2023 6th International Conference on Advanced Communication Technologies and Networking (CommNet)","volume":"75 3","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139183367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-11 | DOI: 10.1109/CommNet60167.2023.10365297
Ayush Bhardwaj
The cloud-native ecosystem has witnessed remarkable growth in recent years, resulting in a multitude of new frameworks tailored to the changing needs of developers and operators. This expansion has brought a myriad of challenges, including complexity, security, adoption barriers, and developer experience. Gaining insight into developer perspectives is crucial for unlocking the full potential of cloud-native technologies and optimizing the various components of a cloud-native deployment. In this paper, we present the first comprehensive analysis of the challenges and opportunities within cloud-native deployments, drawing on multiple surveys involving over 1000 participants. We devised key research questions to guide our analysis, addressing developers’ concerns regarding containerization, preferred workloads for autoscaling scenarios, workload isolation methods, challenges faced when integrating observability tools, and the preferences, use cases, and obstacles encountered when adopting WebAssembly (WASM) in the cloud-native ecosystem. Our findings strive to bridge the knowledge gap between the research community and industry, fostering future research directions in the dynamic and constantly evolving cloud-native domain.
{"title":"Navigating the Complexities of the Cloud-Native World: A Study of Developer Perspectives","authors":"Ayush Bhardwaj","doi":"10.1109/CommNet60167.2023.10365297","DOIUrl":"https://doi.org/10.1109/CommNet60167.2023.10365297","url":null,"abstract":"The cloud-native ecosystem has witnessed remarkable growth in recent years, resulting in a multitude of new frameworks tailored to address the changing needs of developers and operators. This expansion has brought forth a myriad of challenges, including complexity, security, adoption barriers, and developer experience. Gaining insight into developer perspectives is crucial for unlocking the full potential of cloud-native technologies and optimizing the various components of a cloud-native deployment. In this paper, we present the first comprehensive analysis of the challenges and opportunities within the cloud-native deployments, drawing from multiple surveys involving over 1000 participants. We have devised vital research questions to guide our analysis, addressing developers’ concerns regarding containerization, preferred workloads for autoscaling scenarios, workload isolation methods, challenges faced when integrating observability tools, and preferences, use cases, and obstacles encountered when adopting WebAssembly (WASM) in the cloud-native ecosystem. Our findings strive to bridge the knowledge gap between the research community and the industry, fostering future research directions in the dynamic and constantly evolving cloud-native domain.","PeriodicalId":505542,"journal":{"name":"2023 6th International Conference on Advanced Communication Technologies and Networking (CommNet)","volume":"28 3","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139183532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-11 | DOI: 10.1109/CommNet60167.2023.10365260
Samira Benkerroum, Khalid Chougdali
In recent years, computers and digital devices have contributed to the global spread of cyber threats and cyber crimes. These cyberattacks leave artefacts on the storage of the target device; they therefore require special treatment and must be the subject of various investigations in order to study their behavior, analyze them, and prevent them from happening again. Despite the continued development of digital forensic investigations for the recovery of evidence, whether volatile or non-volatile, manual investigations remain time-intensive and laborious. The proposed solution is to automate manual forensic investigation tasks (forensic analysis) to reduce human effort and improve time efficiency. This paper presents a summary of the digital forensic investigation process and discusses existing machine learning (ML) solutions for automating the analysis process. Finally, the paper proposes a machine learning-based approach in which binary classification was performed using the K-Nearest Neighbors, Naive Bayes, Random Forest, Support Vector Machine, Decision Tree, Logistic Regression, Gradient Boosted Tree, and Multi-Layer Perceptron algorithms on the CIC-MalMem-2022 dataset to identify malware. The algorithms’ respective performances were compared using the metrics Precision, F1-score, Accuracy, Recall, and Area Under the Curve. The Random Forest and Gradient Boosted Tree algorithms demonstrated superior performance, achieving a remarkable accuracy of 99.98% in detecting malware through memory scans. The Logistic Regression algorithm exhibited the least favorable performance in analyzing malware using memory data, with an accuracy of 95.75%. Overall, many of the algorithms used obtained very satisfactory results.
{"title":"Enhancing Forensic Analysis Using a Machine Learning-based Approach","authors":"Samira Benkerroum, Khalid Chougdali","doi":"10.1109/CommNet60167.2023.10365260","DOIUrl":"https://doi.org/10.1109/CommNet60167.2023.10365260","url":null,"abstract":"In recent years, computers or digital devices contribute to the global spread of cyber threats and cyber crimes. These cyberattacks leave some artefacts on the storage of the target device, for this reason they require special treatment, and which will have to be the subject of various investigations in order to study its behavior and analyze and prevent it so that this never happen again.Despite the continued development of digital forensic investigations for the recovery of evidence whether volatile or non-volatile, manual investigations are both time-intensive and laborious. The proposed solution is to use a method to automate manual forensic investigation tasks (forensic analysis) to reduce human effort and improve time efficiency.This paper presents a summary of the digital forensic investigation process, we discuss existing ML solutions to automate the analysis process.Finally, the paper proposes an approach based on machine learning where the binary classification was performed using the algorithms K-Nearest Neighbors, Naive Bayes, Random Forest, Support Vector Machine, Decision Tree, Logistic Regression, Gradient Boosted Tree, Multi-Layer Perceptron, using CIC-MalMem-2022 dataset to identify malware.The algorithms’ respective performances were contrasted. The performance metrics Precision, F1-score, Accuracy, Recall, and Area Under the Curve were used to assess the outcomes. Consequently, the Random Forest and Gradient Boosted Tree algorithms demonstrated superior performance, achieving a remarkable accuracy level of 99.98% in the detection of malware through memory scans. The Logistic Regression algorithm exhibited the least favorable performance in analyzing malware using memory data, achieving an accuracy rate of 95.75%. According to the results obtained, many algorithms used have obtained very satisfactory results.","PeriodicalId":505542,"journal":{"name":"2023 6th International Conference on Advanced Communication Technologies and Networking (CommNet)","volume":"19 4","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139184003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-11 | DOI: 10.1109/CommNet60167.2023.10365312
Adnane Souha, Charaf Ouaddi, Lamya Benaddi, Abdeslam Jakimi
The emergence of pre-trained models based on deep learning has considerably enhanced the development of many applications, such as chatbots. These models can be fine-tuned for specific tasks to improve chatbot accuracy. The core of a chatbot is its ability to understand the user’s intent through its Natural Language Understanding (NLU) component, within which intent classification is a central task. Recently, transformer models have revolutionized this task by capturing the semantic relations between words in a sentence. This article presents a comparative study and critical analysis of four transformer models, BERT, ALBERT, RoBERTa, and GPT-2, to identify which offers the best accuracy on an existing dataset for the intent classification task.
{"title":"Pre-Trained Models for Intent Classification in Chatbot: Comparative Study and Critical Analysis","authors":"Adnane Souha, Charaf Ouaddi, Lamya Benaddi, Abdeslam Jakimi","doi":"10.1109/CommNet60167.2023.10365312","DOIUrl":"https://doi.org/10.1109/CommNet60167.2023.10365312","url":null,"abstract":"The emergence of pre-trained models based on deep learning has considerably enhanced the development of many applications, such as chatbots. These models can be refined for specific tasks to improve chatbot accuracy. The core of the chatbot is its ability to understand the user’s intent through its Natural Language Understanding (NLU) component. Within NLU, intent classification is a central task. Recently, transformer models have revolutionized the resolution of this task by capturing the semantic relations between words in a sentence. This article presents a comparative study and critical analysis of four transformer models, which are Bert, Albert, Roberta, and Gpt2, to identify which offers the best accuracy for an existing dataset for the intent classification task.","PeriodicalId":505542,"journal":{"name":"2023 6th International Conference on Advanced Communication Technologies and Networking (CommNet)","volume":"156 2","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139184128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-11 | DOI: 10.1109/CommNet60167.2023.10365261
Renata K. Gomes Dos Reis, J. A. Arnez, Caio B. Bezerra De Souza, M. Damasceno, Wederson Medeiros Silva
In mobile networks, handover refers to the process of transferring the communications, data, and radio resources of a mobile user while maintaining the Quality of Service (QoS) of High-Definition (HD) voice calls. Typically, this process moves the user from a serving cell to a neighbor cell based on the received signal power reported by the User Equipment (UE) to the Base Station (BS). The main contribution of this work is a guideline for evaluating an intra-LTE handover procedure and QoS metrics during a Voice Call over LTE (VoLTE), considering the X2 communication interface and a handover decision algorithm based on Reference Signal Received Power (RSRP) values provided by a real-time experimental setup. Packet analyzer software was used to collect packets during real-time execution, and the UE was set to record events that allowed us to track all system activity with the mobile core network and monitor performance. Received power levels were collected throughout the voice call, enabling a deep analysis of QoS and signal-level metrics. The results show how the handover procedure impacts QoS, generating an interruption gap in the UE's signal measurements. Nevertheless, the QoS and signal-level metrics satisfy the requirements during the VoLTE call.
{"title":"An Evaluation of High-Definition Voice Calls over IMS throughout handover procedures","authors":"Renata K. Gomes Dos Reis, J. A. Arnez, Caio B. Bezerra De Souza, M. Damasceno, Wederson Medeiros Silva","doi":"10.1109/CommNet60167.2023.10365261","DOIUrl":"https://doi.org/10.1109/CommNet60167.2023.10365261","url":null,"abstract":"In mobile networks, handover refers to the process of transferring communications, data and radio resources of the mobile user while maintaining the Quality of Service (QoS) of High-Definition (HD) voice calls. Often, this process occurs from a serving cell to a neighbor cell based on the received signal power reported by the User Equipment (UE) to the Base Station (BS). The main contribution of this work is a guideline to evaluate an intra-LTE handover procedure and QoS metrics during a Voice Call over LTE (VoLTE) considering the X2 communication interface and handover decision algorithm based on Reference Signal Received Power (RSRP) values provided by a real-time experimental setup. Furthermore, packet analyzer software was used to collect the packets in real-time execution. In addition, the UE was set to record events that help us to track all the system activity with the mobile core network and monitor the performance. Therefore, received power levels were collected along the voice call, allowing a deep analysis of the QoS and signal level metrics. The results show how the handover procedure impact into QoS, generating an interruption gap during the signal measurements by the UE. In contrast, the results about QoS and signal level metrics satisfy the requirements during the VoLTE call.","PeriodicalId":505542,"journal":{"name":"2023 6th International Conference on Advanced Communication Technologies and Networking (CommNet)","volume":"8 3","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139184255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-11 | DOI: 10.1109/CommNet60167.2023.10365288
Opel N. Mbanzabugabo, Charles Kabiri, Kayalvizhi Jayavel, L. Sibomana
To improve traffic safety and the driving experience, vehicle-to-infrastructure (V2I) networks require ultra-reliable and low-latency connectivity. However, because of the high mobility of vehicles, the channel varies continuously, making it difficult to maintain low latency and high connectivity in V2I networks. Additionally, as the number of vehicles increases, sufficient resources are needed to maintain continuous internet access via a Base Station (BS). By introducing a three-tiered prioritization mechanism, this study aims to mitigate resource constraints in V2I networks. The highest priority (priority 1) is assigned to emergency communications, including critical traffic such as ambulance notifications, accident reports, and alerts regarding very important people. Priority 2 is assigned to data such as road conditions and traffic information. Entertainment data, such as media and ordinary communications, falls into the low-priority category (priority 3). This prioritization strategy ensures efficient resource allocation that accounts for the different levels of urgency among services. An Analytical Priority Resources Allocation Framework (APRAF) based on an M/G/1 queuing model is developed with the objective of enhancing connectivity in the uplink from vehicles to infrastructure networks. The proposed method is evaluated using the M/G/1 queuing model, where queueing and system delays are the main performance metrics. Numerical analysis shows that latency increases gradually from the highest-priority class to the lowest. As a result, the latency (waiting delay in the queue and in the system) for priority-1 vehicles is very low, improving connectivity compared to an existing system without priority. The waiting delay in the system for high-priority vehicles is around 66 ms, whereas in a First-Come-First-Served model without priority the waiting delay is infinite. Reliability is also improved for high-priority vehicles.
{"title":"Performance Analysis of Vehicle to Infrastructure Network","authors":"Opel N. Mbanzabugabo, Charles Kabiri, Kayalvizhi Jayavel, L. Sibomana","doi":"10.1109/CommNet60167.2023.10365288","DOIUrl":"https://doi.org/10.1109/CommNet60167.2023.10365288","url":null,"abstract":"For improving traffic safety and the driving experience, vehicle-to-infrastructure (V2I) networks require ultra-reliable and low-latency connectivity. However, because of the increased mobility of vehicles, the channel is continuously varying, making it difficult to maintain low latency and high connectivity in V2I networks. Additionally, as the number of vehicles increases, sufficient resources is needed to maintain continuous internet access via a Base Station (BS). By introducing a three-tiered prioritization mechanism, this study aims at improving resource constraints in V2I networks. The highest priority (priority 1) is assigned to emergency communications, which include critical communications such as ambulance notifications, accident reports, and alerts regarding very important people. Priority 2 is assigned to data, such as road conditions and traffic information. Data concerning entertainment, such as media and ordinary communications, come into the category of low priority (priority 3). By employing this prioritization strategy, efficient resource allocation is ensured, taking into consideration different levels of urgency within services. An Analytical Priority Resources Allocation Framework (APRAF) based on M/G/1 queuing method, is developed with an objective of enhancing connectivity in the uplink of vehicles to infrastructure networks. The proposed method is evaluated using an M/G/1 queuing method where delays on queue and system are considered as main performance metrics. Using numerical analysis, the lowest latency is obtained gradually from the highest to the lowest class. As a result, the latency (waiting delay on queue and on system) for vehicles on priority 1 is very low, and in this way the connectivity is improved comparatively to existing system without priority. The waiting delay in the system for vehicles in high priority is around 66 ms while in First Come First Served model without priority, the waiting delay is infinite. The reliability is also improved for vehicles with high priority.","PeriodicalId":505542,"journal":{"name":"2023 6th International Conference on Advanced Communication Technologies and Networking (CommNet)","volume":"159 5","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139183460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-11 | DOI: 10.1109/CommNet60167.2023.10365290
Mohammed S. Bahaaelden, Beatriz Ortega, Rafael Perez-Jimenez, V. Almenar
To address the impact of the predominant inherent imaginary interference (IMI), a 3-tap equalizer with flip-filter bank multicarrier (Flip-FBMC) burst is proposed, for the first time in visible light communication (VLC) systems, to overcome the performance degradation observed with 1-tap equalizer estimation. Over a channel profile with high delays, the proposed system offers a 0.7 dB signal-to-noise ratio (SNR) gain at a symbol error rate (SER) of $10^{-3}$, while an identical error is obtained in the high-SNR region corresponding to SER $=10^{-4}$, compared to the Flip-OFDM system. Moreover, its performance can be further optimized through a tail-shortening mechanism, yielding an increment of $\approx$ 11% in spectral efficiency compared to the throughput obtained by the Flip-CP-OFDM system. Furthermore, the theoretical analysis reveals that the IMI induced outside the first-order neighbor zone, which cannot be controlled using IAM, has a negligible impact on the estimation accuracy, in contrast to the DCO-FBMC format, where a significant reduction in estimation level is observed.
{"title":"Enhanced Signal Transmission Performance based on Multitap Equalization in Optical FBMC Burst","authors":"Mohammed S. Bahaaelden, Beatriz Ortega, Rafael Perez-Jimenez, V. Almenar","doi":"10.1109/CommNet60167.2023.10365290","DOIUrl":"https://doi.org/10.1109/CommNet60167.2023.10365290","url":null,"abstract":"Due to the impact of predominant inherent imaginary interference (IMI), for the first time in visible light communication (VLC) systems, 3-taps equalizer with flip-filter bank multicarrier (Flip-FBMC) burst is proposed to overcome the degradation performance along with 1-tap equalizer estimation. Over high delays channel profile, the proposed system offers 0.7 dB signal to noise ratio (SNR) gain at $10^{-3}$ of the symbol error rate (SER), while an identical error is obtained at high SNR region corresponding to SER $=10^{-4}$, compared to Flip-OFDM system. Moreover, the possibility to optimize its performance is developed through a tail-shortening mechanism, where $approx$ 11% increment in its spectral efficiency is obtained when compared to the throughput obtained by Flip-CP-OFDM system. Besides, the theoretical analysis reveals that the induced IMI outside the first order neighbor zone, which cannot be controlled using IAM, has a negligible impact on the estimation accuracy, compared to DCO-FBMC format, where a significant reduction in estimating level is captured.","PeriodicalId":505542,"journal":{"name":"2023 6th International Conference on Advanced Communication Technologies and Networking (CommNet)","volume":"5 4","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139183714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-11 | DOI: 10.1109/CommNet60167.2023.10365284
Salah Eddine Essalhi, Mohamed Janati Idrissi, Mohammed Raiss El Fenni, H. Chafnaji
The rapid expansion of the Internet of Things (IoT) has led to an era dominated by diverse networks, with Mist and Fog computing becoming crucial for closer-to-device data processing. Despite the advantages, the surge of data from numerous devices raises challenges in optimizing system longevity, throughput, and latency. Most past research in Mist-IoT did not fully account for factors like residual energy, device location, workload capacity, buffer size, and communication frequency, leading to high energy consumption during data exchanges between IoT and Fog systems. This study addresses these factors to improve energy efficiency, especially as data volume increases. It introduces a new approach using Takagi-Sugeno-Kang (TSK) Type-2 Fuzzy Inference within a Fog-Mist-IoT architecture for smarter resource management during communication and task offloading in the IoT ecosystem. Simulation results confirm its effectiveness.
{"title":"Optimized Energy Management with Fuzzy Clustering for Heterogeneous Fog-Mist-IoT Networks","authors":"Salah Eddine Essalhi, Mohamed Janati Idrissi, Mohammed Raiss El Fenni, H. Chafnaji","doi":"10.1109/CommNet60167.2023.10365284","DOIUrl":"https://doi.org/10.1109/CommNet60167.2023.10365284","url":null,"abstract":"The rapid expansion of the Internet of Things (IoT) has led to an era dominated by diverse networks, with Mist and Fog computing becoming crucial for closer-to-device data processing. Despite the advantages, the surge of data from numerous devices raises challenges in optimizing system longevity, throughput, and latency. Most past research in Mist-IoT did not fully account for factors like residual energy, device location, workload capacity, butter size, and communication frequency, leading to high energy consumption during data exchanges between IoT and Fog systems. This study aims to address these factors for improved energy efficiency, especially with increasing data volume. It introduces a new approach using a Takagi-Sugeno-Kang (TSK) Type 2 Fuzzy Inference within a Fog-Mist-IoT architecture for smarter resource management during communication and task offloading in IoT ecosystem. Simulation results confirm its effectiveness.","PeriodicalId":505542,"journal":{"name":"2023 6th International Conference on Advanced Communication Technologies and Networking (CommNet)","volume":"192 6","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139183798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-11 | DOI: 10.1109/CommNet60167.2023.10365275
Fahad Rahman, C. Titouna, Farid Naït-Abdesselam, Ahmed Serhrouchni
Blockchain Technology (BCT), characterized by its decentralized and distributed ledger structure, records transactions securely using cryptographic methods. As BCT continues to evolve rapidly, it’s becoming increasingly integral to distributed system architectures. However, scalability remains a critical challenge for researchers, especially as transaction volume, block size, or node count grows, potentially hindering the development of distributed ecosystems. This paper introduces a federated layered model aimed at creating a scalable Blockchain system. This model ensures immediate, atomic updates across all nodes, including cross-zone transactions. The proposed system’s architecture is not only scalable but also flexible and manageable, effectively addressing issues like network congestion and asymmetric node distribution. When compared to traditional blockchain models, this new architecture significantly enhances scalability.
{"title":"Scaling A Blockchain System With Layered Architecture","authors":"Fahad Rahman, C. Titouna, Farid Naït-Abdesselam, Ahmed Serhrouchni","doi":"10.1109/CommNet60167.2023.10365275","DOIUrl":"https://doi.org/10.1109/CommNet60167.2023.10365275","url":null,"abstract":"Blockchain Technology (BCT), characterized by its decentralized and distributed ledger structure, records transactions securely using cryptographic methods. As BCT continues to evolve rapidly, it’s becoming increasingly integral to distributed system architectures. However, scalability remains a critical challenge for researchers, especially as transaction volume, block size, or node count grows, potentially hindering the development of distributed ecosystems. This paper introduces a federated layered model aimed at creating a scalable Blockchain system. This model ensures immediate, atomic updates across all nodes, including cross-zone transactions. The proposed system’s architecture is not only scalable but also flexible and manageable, effectively addressing issues like network congestion and asymmetric node distribution. When compared to traditional blockchain models, this new architecture significantly enhances scalability.","PeriodicalId":505542,"journal":{"name":"2023 6th International Conference on Advanced Communication Technologies and Networking (CommNet)","volume":"184 4","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139183905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-11 | DOI: 10.1109/CommNet60167.2023.10365306
Christos Bouras, Charalampos Chatzigeorgiou, V. Kokkinos, A. Gkamas, P. Pouyioutas
This research paper presents a comprehensive investigation into the optimization of resource allocation in 5G networks through the technique of Downlink and Uplink Decoupling (DUDe). With the growing need for accurate modeling and scenario planning in 5G systems, DUDe allows for the configuration of an additional, lower frequency signal on the uplink to complement the existing uplink signal, rebalancing the uplink/downlink difference at the cell edge and enhancing coverage and network capacity. Drawing on extensive literature and industry trends, this study explores the benefits and challenges of DUDe, considering its impact on network performance, user experience, and future developments. The paper also introduces a simulation-based methodology and experiments to evaluate the effectiveness of DUDe in improving coverage, capacity, latency, and energy efficiency. The findings contribute to the understanding of DUDe’s potential in optimizing 5G networks, providing valuable insights for researchers and network operators in designing and deploying efficient resource allocation strategies for enhanced network performance in various scenarios.
{"title":"Optimizing Network Performance in 5G Systems with Downlink and Uplink Decoupling","authors":"Christos Bouras, Charalampos Chatzigeorgiou, V. Kokkinos, A. Gkamas, P. Pouyioutas","doi":"10.1109/CommNet60167.2023.10365306","DOIUrl":"https://doi.org/10.1109/CommNet60167.2023.10365306","url":null,"abstract":"This research paper presents a comprehensive investigation into the optimization of resource allocation in 5G networks through the technique of Downlink and Uplink Decoupling (DUDe). With the growing need for accurate modeling and scenario planning in 5G systems, DUDe allows for the configuration of an additional, lower frequency signal on the uplink to complement the existing uplink signal, rebalancing the uplink/downlink difference at the cell edge and enhancing coverage and network capacity. Drawing on extensive literature and industry trends, this study explores the benefits and challenges of DUDe, considering its impact on network performance, user experience, and future developments. The paper also introduces a simulation-based methodology and experiments to evaluate the effectiveness of DUDe in improving coverage, capacity, latency, and energy efficiency. The findings contribute to the understanding of DUDe’s potential in optimizing 5G networks, providing valuable insights for researchers and network operators in designing and deploying efficient resource allocation strategies for enhanced network performance in various scenarios.","PeriodicalId":505542,"journal":{"name":"2023 6th International Conference on Advanced Communication Technologies and Networking (CommNet)","volume":"85 4","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139184146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}