Title: Editorial of 6GNet 2023 special issue
Authors: Raouf Boutaba, Guy Pujolle, Amina Boudendir, Abdallah Shami, Daniel Benevides da Costa
Pub Date: 2024-10-23 | DOI: 10.1007/s12243-024-01060-2
Annals of Telecommunications 79(9-10), 603-604. Editorial; no abstract.
Title: On the (in)efficiency of fuzzing network protocols
Authors: Seyed Behnam Andarzian, Cristian Daniele, Erik Poll
Pub Date: 2024-08-13 | DOI: 10.1007/s12243-024-01058-w
Abstract: Fuzzing is a widely used and effective technique for testing software. Unfortunately, certain systems, including network protocols, are more challenging to fuzz than others. An important complication is that fuzzing network protocols tends to be slow, which is problematic because fuzzing involves many test inputs. This article analyzes the root causes of the inefficiency of fuzzing network protocols and strategies to avoid them. It extends our earlier work on network protocol fuzzers, which explored some of these strategies, to give a more comprehensive overview of overheads in fuzzing and ways to reduce them.
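The overheads the article analyzes come on top of the basic mutation loop that every fuzzer runs. As an illustrative sketch (not the authors' tool), the loop below fuzzes a toy in-process parser; a real network-protocol fuzzer would additionally pay per-input connection setup, message round-trips, and server resets, which is exactly the inefficiency discussed above. All names here are hypothetical.

```python
import random

def parse_message(data: bytes) -> str:
    # Toy parser standing in for a real network server's message handler.
    if not data.startswith(b"CMD "):
        raise ValueError("bad header")
    if len(data) > 64:
        raise ValueError("message too long")
    return data[4:].decode("ascii", errors="replace")

def mutate(seed: bytes, rng: random.Random) -> bytes:
    # Flip 1-4 random bytes of the seed input.
    data = bytearray(seed)
    for _ in range(rng.randint(1, 4)):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 1000) -> int:
    # Count inputs the parser rejects; a real fuzzer would instead log
    # crashes and hangs, and reset the target between inputs.
    rng = random.Random(42)
    rejected = 0
    for _ in range(iterations):
        try:
            parse_message(mutate(seed, rng))
        except ValueError:
            rejected += 1
    return rejected

rejected = fuzz(b"CMD hello")
```

Because everything here runs in-process, each iteration is microseconds; over a socket with a stateful server, the same loop can slow down by orders of magnitude, motivating the strategies the article surveys.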
Title: Investigation of LDPC codes with interleaving for 5G wireless networks
Authors: Pooja Pathak, Richa Bhatia
Pub Date: 2024-07-23 | DOI: 10.1007/s12243-024-01054-0
Abstract: Low-density parity check (LDPC) codes are employed for data channels due to their capability of achieving high throughput and good performance. However, the belief propagation decoding algorithm for LDPC codes has high computational complexity. The min-sum approach reduces decoding complexity at the expense of performance loss. In this paper, we investigate the performance of LDPC codes using interleaving. The codes are investigated using BPSK modulation for short to moderate message lengths and various numbers of iterations of the min-sum decoding algorithm. The paper aims to improve the block error rate (BLER) and bit error rate (BER) for the short to moderate block lengths required for massive machine-type communications (mMTC), which supports numerous IoT devices with short data packets, and for the ultra-reliable low-latency communications (URLLC) needed by delay-sensitive 5G services. By incorporating interleaving alongside min-sum decoding, the performance is not only improved but also becomes comparable with established algorithms such as the belief propagation algorithm (BPA) and the sum-product algorithm (SPA). LDPC coding with interleaving and subsequent min-sum decoding is a promising approach for improving the performance metrics of codes at short to moderate block lengths without incurring a significant increase in decoding complexity.
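The abstract above combines min-sum decoding with interleaving. As a hypothetical sketch of the interleaving half (not the authors' exact scheme), a classic block interleaver writes coded bits row-wise into a matrix and reads them out column-wise, so that a burst of channel errors is spread across distant positions of the codeword before decoding:

```python
def interleave(bits, depth):
    # Write row-wise into a (depth x width) matrix, read column-wise.
    # Pads with zeros when len(bits) is not a multiple of depth.
    width = -(-len(bits) // depth)  # ceiling division
    padded = bits + [0] * (depth * width - len(bits))
    rows = [padded[i * width:(i + 1) * width] for i in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]

def deinterleave(bits, depth):
    # Inverse mapping: write column-wise, read row-wise.
    width = len(bits) // depth
    rows = [[0] * width for _ in range(depth)]
    for idx, b in enumerate(bits):
        rows[idx % depth][idx // depth] = b
    return [b for row in rows for b in row]

codeword = list(range(12))
spread = interleave(codeword, depth=3)
```

After deinterleaving at the receiver, `depth` consecutive channel errors land at least `width` positions apart in the codeword, which min-sum decoding handles far better than a contiguous burst.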
Title: Joint MEC selection and wireless resource allocation in 5G RAN
Authors: Tengteng Ma, Chen Li, Yuanmou Chen, Zehui Li, Zhenyu Zhang, Jing Zhao
Pub Date: 2024-07-23 | DOI: 10.1007/s12243-024-01050-4
Abstract: With the vigorous development of the Internet of Things (IoT), the demand for user equipment (UE) computing capacity is increasing. Multiaccess edge computing (MEC) provides users with high-performance, low-latency services by offloading computational tasks to the nearest MEC server deployed in the 5G radio access network (RAN). However, these computationally intensive tasks may sharply increase the energy consumption of UE and cause downtime. To address this challenge, we design an intelligent scheduling and management system (ISMS) that jointly optimizes the allocation of MEC resources and wireless communication resources. The resource allocation problem is a mixed-integer nonlinear programming problem (MINLP), which is NP-hard. The ISMS models it as a Markov decision process (MDP) with a state, action, reward, and policy, and adopts a modified deep deterministic policy gradient (mDDPG) algorithm to minimize a weighted combination of users' energy consumption, latency, and cost. Simulation results show that the ISMS effectively reduces the system's energy consumption, latency, and cost, and that the proposed algorithm provides more stable and efficient performance than competing algorithms.
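The objective above is a weighted minimization over energy, latency, and cost. A minimal sketch of such a weighted objective used to compare offloading options follows; the weights, option names, and numbers are illustrative and not taken from the paper:

```python
def weighted_cost(energy_j, latency_s, price, w_e=0.4, w_l=0.4, w_c=0.2):
    # Weighted sum of energy (J), latency (s), and monetary cost;
    # illustrative weights, assumed already normalized to comparable scales.
    return w_e * energy_j + w_l * latency_s + w_c * price

def best_offload(options):
    # options: name -> (energy, latency, price); pick the cheapest target.
    return min(options, key=lambda name: weighted_cost(*options[name]))

choice = best_offload({
    "local": (3.0, 0.50, 0.0),   # compute on the UE itself
    "mec_a": (0.8, 0.12, 0.3),   # nearby MEC server, pricier
    "mec_b": (0.9, 0.30, 0.1),   # farther MEC server, cheaper
})
```

The actual ISMS learns this trade-off with mDDPG over a full MDP rather than enumerating a static table, but the reward it optimizes has this weighted-sum shape.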
Title: Opportunistic data gathering in IoT networks using an energy-efficient data aggregation mechanism
Authors: Edvar Afonso, Miguel Elias M. Campista
Pub Date: 2024-07-23 | DOI: 10.1007/s12243-024-01055-z
Abstract: Internet of Things (IoT) applications rely on data collection and centralized processing to assist decision-making. Nevertheless, in multi-hop Low-Power and Lossy Network (LLN) scenarios, data forwarding can be troublesome, as it imposes multiple retransmissions and consumes more energy. This paper revisits the concept of mobile agents to collect data from sensors more efficiently. Upon receiving a data request, the IoT gateway performs a cache lookup and, if the data is not cached, promptly dispatches a mobile agent to collect it. Data collection uses closed-loop itineraries, computed with a Traveling Salesman Problem (TSP) heuristic, that start at the network gateway. The itinerary passes through nodes producing both solicited and unsolicited data. We assume that the unsolicited data will be requested soon, so collecting it opportunistically avoids future agent transmissions. We limit the collection capacity of each agent using a knapsack-problem approach. Simulation results show that our proposal reduces network traffic and energy consumption compared with a traditional mobile agent without opportunistic data collection. In addition, we show that data aggregation can further improve the performance of our proposal.
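The itinerary computation described above combines a TSP heuristic with a knapsack-style capacity limit. A hypothetical sketch (node names, coordinates, and payload sizes invented; the paper's actual heuristic may differ) using a greedy nearest-neighbour tour that skips nodes whose payload would exceed the agent's remaining capacity:

```python
def plan_itinerary(gateway, nodes, capacity):
    # nodes: {name: (x, y, payload_size)}. Greedy nearest-neighbour tour
    # from the gateway; a node is visited only if its payload still fits
    # (knapsack-style capacity constraint on the agent).
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    pos, route, load = gateway, [], 0
    remaining = dict(nodes)
    while remaining:
        name = min(remaining, key=lambda n: dist(pos, remaining[n][:2]))
        x, y, size = remaining.pop(name)
        if load + size <= capacity:
            route.append(name)
            load += size
            pos = (x, y)
    return route  # closed loop: the agent then returns to the gateway

route = plan_itinerary((0.0, 0.0), {
    "a": (1.0, 0.0, 2),
    "b": (2.0, 0.0, 2),
    "c": (10.0, 10.0, 2),
}, capacity=4)
```

Here node "c" is skipped because the agent's capacity of 4 is exhausted after "a" and "b"; a second agent (or a later request) would have to collect it.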
Title: Deep learning based channel estimation in PLC systems
Authors: Nasser Sadeghi, Masoumeh Azghani
Pub Date: 2024-07-20 | DOI: 10.1007/s12243-024-01051-3
Abstract: Power line communication (PLC) systems are used for data transmission. Accurate channel state information (CSI) is essential for receiver design in such systems; however, impulsive noise poses a challenge for the channel estimation task. In this paper, we propose a deep learning-based method for PLC channel estimation that is resistant to impulsive noise as well as additive white Gaussian noise (AWGN). The proposed deep neural network consists of three sub-networks: the first is a denoising network that removes noise from the received signal; the second produces a low-accuracy estimate of the channel from the denoised signal; the third is designed for high-accuracy channel estimation. Training proceeds in two stages: first, the denoising sub-network is trained; second, with the trained parameters of the denoising network frozen, the two channel-estimation sub-networks are trained. Moreover, we derive the Cramér-Rao lower bound for the PLC channel estimation problem. The proposed method has been evaluated in various simulation scenarios, which confirm its superiority over its counterparts, and it shows acceptable resistance to both impulsive and Gaussian noise.
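The Cramér-Rao lower bound mentioned above bounds the variance of any unbiased channel estimator. For intuition only, consider a drastically simplified model (a constant channel gain observed in AWGN, with no impulsive component, so not the paper's model): the bound is sigma^2 / N for N observations, and the sample-mean estimator attains it, as a Monte-Carlo check confirms:

```python
import random
import statistics

def empirical_mse(true_h, sigma, n_obs, trials, seed=1):
    # Monte-Carlo mean squared error of the sample-mean estimator of a
    # constant channel gain observed in AWGN. For this toy model the
    # Cramér-Rao lower bound is sigma**2 / n_obs.
    rng = random.Random(seed)
    errors = []
    for _ in range(trials):
        obs = [true_h + rng.gauss(0.0, sigma) for _ in range(n_obs)]
        estimate = statistics.fmean(obs)
        errors.append((estimate - true_h) ** 2)
    return statistics.fmean(errors)

mse = empirical_mse(true_h=1.0, sigma=0.5, n_obs=20, trials=2000)
crlb = 0.5 ** 2 / 20  # sigma^2 / N
```

The measured MSE hovers near the bound; under impulsive (non-Gaussian) noise the bound takes a different form, which is what the paper derives.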
Title: Throughput and latency targeted RL spectrum allocation in heterogeneous OTN
Authors: Sam Aleyadeh, Abbas Javadtalab, Abdallah Shami
Pub Date: 2024-07-17 | DOI: 10.1007/s12243-024-01056-y
Abstract: The increased adoption and development of 5G-based services have greatly increased the dynamic nature of traffic, including its types, sizes, and requirements. Flex-grid elastic optical networks (EONs) have become prolific in supporting these services. However, this transition has led to issues such as lower traffic throughput and resource wastage in the form of bandwidth fragmentation. With the continued growth of these services, proper traffic management to mitigate this issue has become essential. To overcome this challenge, we propose a Throughput- and Latency-First Reinforcement Learning-based spectrum allocation algorithm (TLFRL) for IP-over-fixed/flex-grid optical networks. The main target of TLFRL is to reduce the need to reallocate spectrum by lowering fragmentation and blocking probability. We achieve this by leveraging advanced demand-organization techniques while intelligently using traditional networking infrastructure to offload compatible services, avoiding latency violations. Extensive simulations evaluated traffic throughput, fragmentation, and average latency. The results show that the proposed solution outperforms contemporary fixed-grid and heuristic approaches, and that it provides results comparable to state-of-the-art flex-grid spectrum allocation techniques.
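Fragmentation, one of the quantities TLFRL minimizes, is commonly measured in the EON literature with an external-fragmentation metric over the spectrum slot map (this is a standard formulation, not necessarily the exact reward term used in the paper):

```python
def fragmentation(slots):
    # slots: list of 0 (free) / 1 (occupied) frequency slots.
    # External fragmentation: 1 - (largest contiguous free run / total free).
    # 0.0 means all free capacity is one contiguous block; values near 1.0
    # mean the free capacity is scattered and large demands will be blocked.
    free_runs, run = [], 0
    for s in slots:
        if s == 0:
            run += 1
        else:
            if run:
                free_runs.append(run)
            run = 0
    if run:
        free_runs.append(run)
    total_free = sum(free_runs)
    if total_free == 0:
        return 0.0
    return 1.0 - max(free_runs) / total_free

frag = fragmentation([1, 0, 0, 1, 0, 1, 0, 0, 0, 1])
```

In this example, six slots are free but split into runs of 2, 1, and 3, so the largest run covers only half the free capacity and the metric is 0.5; a demand needing 4 contiguous slots would be blocked despite sufficient total capacity.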
Title: Discreet: distributed delivery service with context-aware cooperation
Authors: Ludovic Paillat, Claudia-Lavinia Ignat, Davide Frey, Mathieu Turuani, Amine Ismail
Pub Date: 2024-07-11 | DOI: 10.1007/s12243-024-01053-1
Abstract: End-to-end encrypted messaging applications such as Signal have become widely popular thanks to their capability to ensure the confidentiality and integrity of online communication. While the highest security guarantees were long reserved for two-party communication, solutions for n-party communication remained either inefficient or less secure until the standardization of the Messaging Layer Security (MLS) protocol. This new protocol offers an efficient way to provide end-to-end secure communication with the same guarantees originally offered by the Signal protocol for two-party communication. However, both solutions still rely on a centralized component for message delivery, called the Delivery Service in the MLS protocol. This centralization makes the delivery service an ideal target for attackers and threatens the availability of any protocol relying on MLS. To overcome this issue, we propose DiSCreet (Distributed delIvery Service with Context-awaRE coopEraTion), a design that allows clients to exchange protocol messages efficiently and without any intermediary. It uses a probabilistic reliable-broadcast mechanism to deliver messages efficiently and the Cascade Consensus protocol to handle messages requiring agreement. Our solution strengthens the availability of the MLS protocol without compromising its security. We compare the theoretical performance of DiSCreet with another distributed solution, the DCGKA protocol, and detail the implementation of our solution.
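Probabilistic reliable broadcast, the dissemination primitive DiSCreet builds on, is typically realized by gossip: each informed peer forwards the message to a few random peers per round, reaching all peers with high probability. A toy simulation of that dynamic follows (fanout, round count, and peer count are arbitrary illustrative choices, not DiSCreet's parameters):

```python
import random

def gossip_broadcast(peers, fanout, rounds, seed=0):
    # Simulate gossip dissemination: starting from one informed peer,
    # every informed peer forwards to `fanout` uniformly random peers
    # in each round. Returns the set of peers that received the message.
    rng = random.Random(seed)
    informed = {peers[0]}
    for _ in range(rounds):
        for _sender in list(informed):
            for receiver in rng.sample(peers, fanout):
                informed.add(receiver)
    return informed

peers = list(range(50))
reached = gossip_broadcast(peers, fanout=3, rounds=4)
coverage = len(reached) / len(peers)
```

Coverage grows roughly exponentially with the number of rounds, which is why a small fanout suffices; messages that additionally require agreement among group members are handed to the consensus layer (Cascade Consensus in DiSCreet) rather than plain gossip.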
Title: A revocable attribute-based access control with non-monotonic access structure
Authors: Maede Ashouri-Talouki, Nafiseh Kahani, Masoud Barati, Zomorod Abedini
Pub Date: 2024-07-10 | DOI: 10.1007/s12243-024-01052-2
Abstract: The Internet of Things (IoT) has revolutionized data manipulation across various applications, particularly in the online healthcare paradigm, where medical data are collected and processed for remote monitoring and analysis. To improve the privacy and security of such sensitive healthcare data, attribute-based encryption (ABE) with non-monotonic access policies has recently provided fine-grained access control within cloud- and IoT-based healthcare ecosystems. Specifically, the adoption of multi-authority ABE with untrusted authorities has eliminated the need for a trusted authority. However, protecting the privacy of users' identities and attribute sets from these untrusted authorities remains a significant challenge. To address it, this paper introduces an enhanced multi-authority ABE approach incorporating a robust attribute revocation mechanism. The enhancement safeguards the privacy of users' identities and attribute sets while remaining resilient against collusion attacks and ensuring backward secrecy. Moreover, the proposed approach provides non-monotonic access policies, supporting positive and negative constraints through the NOT operation as well as AND and OR operations.
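A non-monotonic access structure, as described above, admits NOT alongside AND and OR, so a policy can exclude attribute holders as well as include them. Setting the cryptography aside entirely, the policy-evaluation logic alone can be sketched as follows (the policy syntax and attribute names here are hypothetical, for illustration only):

```python
def evaluate(policy, attrs):
    # policy: an attribute name (string), or a nested tuple of the form
    # ("AND", p1, p2, ...), ("OR", p1, p2, ...), or ("NOT", p).
    # attrs: the set of attributes held by the requesting user.
    if isinstance(policy, str):
        return policy in attrs
    op = policy[0]
    if op == "NOT":
        return not evaluate(policy[1], attrs)
    results = [evaluate(sub, attrs) for sub in policy[1:]]
    return all(results) if op == "AND" else any(results)

# Example: cardiology doctors may decrypt, but trainees may not,
# even if they also hold the other attributes (negative constraint).
policy = ("AND", "doctor", "cardiology", ("NOT", "trainee"))
granted = evaluate(policy, {"doctor", "cardiology"})
```

In an actual ABE scheme this predicate is enforced by the key and ciphertext structure rather than by a runtime check, and the revocation mechanism must ensure that a revoked attribute stops satisfying the policy (backward secrecy).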
Title: CSNet 2022 special issue—decentralized and data-driven security in networking
Authors: Diogo Menezes Ferrazani Mattos, Marc-Oliver Pahl, Carol Fung
Pub Date: 2024-07-01 | DOI: 10.1007/s12243-024-01049-x
Annals of Telecommunications 79(7-8), 455-456. Editorial; no abstract.