
Latest publications from the 2022 IEEE Symposium on Computers and Communications (ISCC)

Enhancing Privacy of Online Chat Apps Utilising Secure Node End-to-End Encryption (SNE2EE)
Pub Date : 2022-06-30 DOI: 10.1109/ISCC55528.2022.9912888
Nithish Velagala, Leandros A. Maglaras, N. Ayres, S. Moschoyiannis, L. Tassiulas
SNE2EE is a messaging service that protects individuals at every stage of the data transfer process: creation, transmission, and reception. The aim of SNE2EE is to protect user communications not only while their data are transported to another user via secure ports/protocols, but also while they are being created.
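A minimal sketch of the idea, not the SNE2EE implementation itself: the message is encrypted on the sending node at the moment of creation, so only ciphertext ever reaches the transport layer. It assumes the third-party Python `cryptography` package and a pre-shared key standing in for whatever key exchange SNE2EE actually uses.

```python
# Minimal sketch (not the SNE2EE implementation): the plaintext is encrypted
# on the sending node at creation time, so only ciphertext reaches transport.
from cryptography.fernet import Fernet

# In a real end-to-end scheme the key would come from a key exchange between
# the two chat clients; here we simply generate one for illustration.
shared_key = Fernet.generate_key()
sender = Fernet(shared_key)
receiver = Fernet(shared_key)

def create_and_protect(message: str) -> bytes:
    """Encrypt the message as it is created, before any transmission."""
    return sender.encrypt(message.encode("utf-8"))

def receive(ciphertext: bytes) -> str:
    """Decrypt only at the receiving endpoint."""
    return receiver.decrypt(ciphertext).decode("utf-8")

wire_payload = create_and_protect("hello over an SNE2EE-style channel")
assert receive(wire_payload) == "hello over an SNE2EE-style channel"
```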
Citations: 0
A First Look at Accurate Network Traffic Generation in Virtual Environments
Pub Date : 2022-06-30 DOI: 10.1109/ISCC55528.2022.9913058
Giuseppe Aceto, Ciro Guida, Antonio Montieri, V. Persico, A. Pescapé
The generation of synthetic network traffic is necessary for several fundamental networking activities, ranging from device testing to path monitoring, with implications for security and management. While the literature has focused on high-rate traffic generation, for many use cases accurate traffic generation is what matters instead. These scenarios have expanded with Network Function Virtualization, Software Defined Networking, and Cloud applications, which introduce further causes of alteration of the generated traffic. Such causes are described and experimentally evaluated in this work, where the generation accuracy of D-ITG, an open-source software traffic generator, is investigated in a virtualized environment. To this end, accuracy is defined in terms of the Mean Absolute Percentage Error of the sequences of Payload Lengths (PLs) and Inter-Departure Times (IDTs). The tool is found to be accurate for all PLs and for IDTs greater than one millisecond, and, after the correction of a systematic error, also from 100 µs.
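A small sketch of the accuracy metric named in the abstract: the Mean Absolute Percentage Error (MAPE) between the requested and generated sequences of Payload Lengths and Inter-Departure Times. The traces below are invented for illustration, not measurements from D-ITG.

```python
# MAPE over PL and IDT sequences, as described in the abstract.
import numpy as np

def mape(expected, observed) -> float:
    """Mean Absolute Percentage Error, in percent."""
    expected = np.asarray(expected, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.mean(np.abs((observed - expected) / expected)) * 100.0)

# Requested vs. captured payload lengths (bytes) and inter-departure times (s).
pl_requested = np.array([512, 1024, 256, 1500])
pl_generated = np.array([512, 1020, 260, 1500])
idt_requested = np.array([0.010, 0.010, 0.010, 0.010])      # 10 ms spacing
idt_generated = np.array([0.0101, 0.0098, 0.0102, 0.0100])

print(f"PL  MAPE: {mape(pl_requested, pl_generated):.2f}%")
print(f"IDT MAPE: {mape(idt_requested, idt_generated):.2f}%")
```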
Citations: 1
A Platform for Federated Learning on the Edge: a Video Analysis Use Case
Pub Date : 2022-06-30 DOI: 10.1109/ISCC55528.2022.9912968
Alessio Catalfamo, A. Celesti, M. Fazio, Giovanni Randazzo, M. Villari
Recently, both the scientific and industrial communities have highlighted the importance of running Machine Learning (ML) applications on Edge computing, closer to the end-user and to the raw data being managed, for many reasons including quality of service (QoS) and security. However, due to the limited computing, storage and network resources at the Edge, several ML algorithms have been re-designed to be deployed on Edge devices. In this paper, we explore in detail Edge Federation for supporting ML-based solutions. In particular, we present a new platform for the deployment and management of complex services at the Edge. It provides an interface that allows applications to be arranged as a collection of interconnected, lightweight, loosely-coupled services (i.e., microservices) and enables their management across federated Edge devices through an abstraction of the underlying clusters of physical devices. The proposed solution is validated by a use case related to video analysis in the morphological field.
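The abstract describes the orchestration platform rather than a specific aggregation rule; as an illustration of the Federated Learning step such Edge nodes typically perform, the sketch below shows sample-weighted federated averaging of model updates (an assumption, not necessarily the method used by the platform).

```python
# Illustrative sketch: federated averaging of model weights reported by Edge
# nodes, weighted by each node's local sample count.
import numpy as np

def federated_average(updates):
    """updates: list of (weights_vector, num_local_samples) from edge nodes."""
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

edge_updates = [
    (np.array([0.10, 0.50, -0.20]), 120),   # edge node A
    (np.array([0.12, 0.45, -0.18]), 300),   # edge node B
    (np.array([0.08, 0.55, -0.25]),  80),   # edge node C
]
global_weights = federated_average(edge_updates)
print("aggregated model weights:", global_weights)
```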
Citations: 5
ActDetector: A Sequence-based Framework for Network Attack Activity Detection
Pub Date : 2022-06-30 DOI: 10.1109/ISCC55528.2022.9912824
Jiaqi Kang, Huiran Yang, Y. Zhang, Yueyue Dai, Mengqi Zhan, Weiping Wang
The cyber security situation has not been optimistic in recent years due to the rapid growth of security threats. More worryingly, threats are tending to become more sophisticated, which poses challenges to attack activity analysis. It is important for analysts to understand attack activities from a holistic perspective rather than just paying attention to individual alerts. Currently, attack activity analysis generally relies on human effort, which imposes a heavy manual-analysis workload. Besides, it is difficult to achieve high detection accuracy due to missed and false-positive alerts. In this paper, we propose a new framework, ActDetector, to detect attack activities automatically from raw Network Intrusion Detection System (NIDS) alerts, which greatly reduces the workload of security analysts. We extract attack phase descriptions from alerts and embed attack activity descriptions to obtain their numerical representation. Finally, we use a temporal-sequence-based model to detect potential attack activities. We evaluate ActDetector with three datasets. Experimental results demonstrate that ActDetector can detect attack activities from raw NIDS alerts with an average of 94.8% Precision, 95.0% Recall, and 94.6% F1-score.
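ActDetector's model is not reproduced here; the sketch below only illustrates how the reported metrics (Precision, Recall, F1-score) are computed over per-window detection decisions, with invented labels.

```python
# Evaluation metrics from the abstract, computed over binary decisions:
# 1 = window contains an attack activity, 0 = benign window.
def precision_recall_f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

ground_truth = [1, 1, 0, 0, 1, 0, 1, 0]
detected     = [1, 0, 0, 0, 1, 0, 1, 1]
p, r, f1 = precision_recall_f1(ground_truth, detected)
print(f"Precision={p:.3f}  Recall={r:.3f}  F1={f1:.3f}")
```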
Citations: 0
Deep-Learning for Cooperative Spectrum Sensing Optimization in Cognitive Internet of Things
Pub Date : 2022-06-30 DOI: 10.1109/ISCC55528.2022.9912823
Hind Boukhairat, M. Koulali
Spectrum sensing is a critical component of the Cognitive Internet of Things. It allows Secondary Users (SUs) to opportunistically access underutilized frequency bands licensed to Primary Users (PUs) without causing harmful interference to them. However, accurate individual spectrum sensing solutions are complex to deploy. Thus, Cooperative Spectrum Sensing (CSS) techniques have flourished. These techniques combine individual sensing results through a weighting mechanism at a fusion center to assess the channel status. The fusion process depends heavily on the individual detection thresholds at each SU and the weights attributed to their sensing results by the fusion center. In this paper, we propose to use a Deep Neural Network to compute the optimal energy detection threshold and fusion weights. Our goal is to develop a solution that optimally adapts to time-varying wireless channel conditions. Furthermore, our DNN-based solution eliminates the need to solve hard optimization problems, thus significantly reducing computational complexity, especially in large networks.
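A hedged illustration of the weighted fusion step described above. In the paper the per-SU detection thresholds and fusion weights come from the Deep Neural Network; here they are fixed constants standing in for the DNN output so that the energy-detection and fusion logic is visible.

```python
# Weighted cooperative spectrum sensing: each SU computes an energy statistic,
# the fusion center combines threshold exceedances with per-SU weights.
import numpy as np

rng = np.random.default_rng(0)

def energy(samples: np.ndarray) -> float:
    """Energy-detector test statistic for one secondary user (SU)."""
    return float(np.mean(np.abs(samples) ** 2))

# Simulated received samples at 3 SUs (noise only vs. PU signal + noise).
noise  = [rng.normal(0, 1.0, 256) for _ in range(3)]
signal = [rng.normal(0, 1.0, 256) + rng.normal(0, 0.8, 256) for _ in range(3)]

thresholds     = np.array([1.15, 1.10, 1.20])   # would come from the DNN
fusion_weights = np.array([0.40, 0.35, 0.25])   # would come from the DNN

def fused_decision(per_su_samples) -> bool:
    """Fusion center: weighted sum of soft exceedances vs. zero."""
    soft = np.array([energy(s) - t for s, t in zip(per_su_samples, thresholds)])
    return float(fusion_weights @ soft) > 0.0

print("PU present (noise only)?   ", fused_decision(noise))
print("PU present (signal+noise)? ", fused_decision(signal))
```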
Citations: 0
Efficient OFDM Channel Estimation with RRDBNet
Pub Date : 2022-06-30 DOI: 10.1109/ISCC55528.2022.9912769
Wei Gao, Meihong Yang, Wei Zhang, Libin Liu
Channel estimation is important for orthogonal frequency division multiplexing (OFDM) in current wireless communication systems. Prevalent channel estimation algorithms, however, cannot be widely deployed for practical reasons such as poor robustness and high computational complexity. To address these problems for OFDM systems, we propose a new channel estimation scheme built on a carefully designed deep learning model, called RRDBNet. By combining a multi-level residual network with dense links, RRDBNet can be trained easily while retaining the advantages of residual learning and increasing structural capacity. Our simulation results show that RRDBNet outperforms the traditional least-squares algorithm and existing DL-based super-resolution schemes by 0.5 to 1 dB at low SNR and by 2 to 3 dB at high SNR. Besides, in terms of the number of pilots, RRDBNet is also superior to existing schemes and approaches LMMSE.
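The abstract does not give RRDBNet's exact architecture; the sketch below assumes the Residual-in-Residual Dense Block structure popularized by ESRGAN, with placeholder layer counts and channel widths, applied to a toy least-squares channel estimate. It requires PyTorch.

```python
# Hedged sketch of a Residual-in-Residual Dense Block applied to a channel
# estimate; layer counts and widths are placeholders, not the paper's values.
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Densely connected convolutions with a scaled residual connection."""
    def __init__(self, channels=32, growth=16):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, growth, 3, padding=1)
        self.conv2 = nn.Conv2d(channels + growth, growth, 3, padding=1)
        self.conv3 = nn.Conv2d(channels + 2 * growth, channels, 3, padding=1)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        f1 = self.act(self.conv1(x))
        f2 = self.act(self.conv2(torch.cat([x, f1], dim=1)))
        out = self.conv3(torch.cat([x, f1, f2], dim=1))
        return x + 0.2 * out          # residual scaling keeps training stable

class RRDB(nn.Module):
    """Residual in Residual: a chain of dense blocks inside an outer residual."""
    def __init__(self, channels=32, num_blocks=3):
        super().__init__()
        self.blocks = nn.Sequential(*[ResidualDenseBlock(channels) for _ in range(num_blocks)])

    def forward(self, x):
        return x + 0.2 * self.blocks(x)

# Toy input: real/imaginary parts of a pilot-based least-squares estimate over
# a (subcarrier x OFDM symbol) grid.
ls_estimate = torch.randn(1, 2, 64, 14)
model = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), RRDB(32), nn.Conv2d(32, 2, 3, padding=1))
refined = model(ls_estimate)
print(refined.shape)   # torch.Size([1, 2, 64, 14])
```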
Citations: 1
Privacy vs Accuracy Trade-Off in Privacy Aware Face Recognition in Smart Systems
Pub Date : 2022-06-30 DOI: 10.1109/ISCC55528.2022.9912465
Wisam Abbasi, Paolo Mori, A. Saracino, V. Frascolla
This paper proposes a novel approach to privacy-preserving face recognition aimed at formally defining a trade-off optimization criterion between data privacy and algorithm accuracy. In our methodology, real-world face images are anonymized with Gaussian blurring for privacy preservation. The anonymized images are then processed for face detection, face alignment, face representation, and face verification. The proposed methodology has been validated with a set of experiments on a well-known dataset and three face recognition classifiers. The results demonstrate the effectiveness of our approach in correctly verifying face images at different levels of privacy and accuracy, and in maximizing privacy with the least negative impact on face detection and face verification accuracy.
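A minimal sketch of the anonymization step: Gaussian blurring applied at increasing strengths, trading privacy against downstream recognition accuracy. It uses OpenCV, and the kernel sizes are illustrative rather than the values evaluated in the paper.

```python
# Gaussian-blur anonymization of a face crop at increasing privacy levels.
import cv2
import numpy as np

def anonymize(face_bgr: np.ndarray, kernel_size: int) -> np.ndarray:
    """Blur a face crop; kernel_size must be odd, larger = more private."""
    return cv2.GaussianBlur(face_bgr, (kernel_size, kernel_size), 0)

face = np.random.randint(0, 256, (112, 112, 3), dtype=np.uint8)  # stand-in image
for k in (3, 9, 21):   # increasing privacy levels
    blurred = anonymize(face, k)
    # In the paper's pipeline the blurred image would then go through face
    # detection / alignment / verification to measure the accuracy impact.
    print(f"kernel={k}: output shape {blurred.shape}")
```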
Citations: 2
Scalable Digital Pathology Platform Over Standard Cloud Native Technologies
Pub Date : 2022-06-30 DOI: 10.1109/ISCC55528.2022.9912933
Tibério Baptista, Rui Jesus, Luís Bastião Silva, C. Costa
The use of digital imaging in medicine has become a cornerstone of modern diagnosis and treatment processes. The new technologies available in this ecosystem have allowed healthcare institutions to improve their workflows, data access, sharing, and visualization using standardized formats. The migration of these services to the cloud enables a remote diagnostic environment, where professionals can review studies remotely and engage in collaborative sessions. Despite the advantages of cloud-ready environments, their adoption has been slowed down by the demanding scenarios that high-resolution medical images pose. A single study can comprise several gigabytes of data that need to be managed and consumed over the network. In this context, performance constraints of the software platforms can result in severe denial of clinical service. This work proposes a highly scalable cloud platform for extreme medical imaging scenarios. It provides scalability through auto-scaling mechanisms that allow dynamic adjustment of computational resources according to the service load.
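The abstract mentions auto-scaling with service load but not the policy; the sketch below uses the common target-utilization rule found in cloud-native horizontal autoscalers as a stand-in.

```python
# Target-utilization autoscaling rule (an assumption, not the paper's policy).
import math

def desired_replicas(current_replicas: int, current_util: float,
                     target_util: float, min_r: int = 1, max_r: int = 20) -> int:
    """Scale replica count proportionally to observed / target utilization."""
    desired = math.ceil(current_replicas * (current_util / target_util))
    return max(min_r, min(max_r, desired))

# Example: 4 image-rendering workers at 85% average CPU, targeting 60%.
print(desired_replicas(4, current_util=0.85, target_util=0.60))  # -> 6
```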
Citations: 0
Real-time Resource Management in Smart Energy-Harvesting Systems
Pub Date : 2022-06-30 DOI: 10.1109/ISCC55528.2022.9912792
M. Abdulla, Audrey Queudet, M. Chetto, Lamia Belouaer
Energy harvesting is an emerging technology that extends the lifetime of Internet of Things (IoT) applications. Satisfying real-time requirements in these systems is challenging. Dedicated real-time schedulers integrating both timing and energy constraints are required, such as the ED-H scheduling algorithm [1]. However, this algorithm has been proved optimal for independent tasks only (i.e., without considering any shared resources), preventing its confident deployment in computing infrastructures in which tasks are mostly interdependent. In this paper, we first derive worst-case blocking times and worst-case blocking energy for tasks sharing resources managed by the DPCP protocol [2] and scheduled under the ED-H scheme. Then, we provide a sufficient schedulability test for ED-H-DPCP guaranteeing offline that both timing and energy constraints will be satisfied, even in the presence of shared resources.
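The actual ED-H-DPCP schedulability test is not reproduced here. As a rough illustration of the kind of check involved, the sketch below combines a classic EDF-style sufficient condition with blocking and an analogous check of average energy demand against the harvested power budget; all task parameters are invented.

```python
# Rough illustration only, NOT the ED-H-DPCP test: utilization-plus-blocking
# timing check plus an average energy-demand check. C = WCET, T = period,
# E = energy per job, B = worst-case blocking time.
def schedulable(tasks, harvested_power):
    """tasks: list of dicts sorted by period (shortest first)."""
    for k in range(len(tasks)):
        demand = sum(t["C"] / t["T"] for t in tasks[:k + 1]) + tasks[k]["B"] / tasks[k]["T"]
        if demand > 1.0:
            return False                                   # timing condition violated
    energy_demand = sum(t["E"] / t["T"] for t in tasks)    # average power needed
    return energy_demand <= harvested_power                 # energy condition

taskset = [
    {"C": 2, "T": 10, "E": 4.0, "B": 1},
    {"C": 3, "T": 20, "E": 5.0, "B": 2},
    {"C": 5, "T": 50, "E": 8.0, "B": 0},
]
print(schedulable(taskset, harvested_power=1.0))   # True for this toy set
```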
Citations: 0
Deep Reinforcement Learning-based Radio Resource Allocation and Beam Management under Location Uncertainty in 5G mmWave Networks
Pub Date : 2022-06-30 DOI: 10.1109/ISCC55528.2022.9912837
Y. Yao, Hao Zhou, M. Erol-Kantarci
Millimeter Wave (mmWave) is an important part of 5G New Radio (NR), in which highly directional beams are adopted to compensate for the substantial propagation loss, based on UE locations. However, the location information may contain errors, such as GPS errors; some degree of uncertainty and localization error is unavoidable in most settings. Using these distorted locations for clustering increases the error of beam management. Meanwhile, the traffic demand may change dynamically in the wireless environment. Therefore, a scheme that can handle both localization uncertainty and dynamic radio resource allocation is needed. In this paper, we propose a UK-means-based clustering and deep reinforcement learning-based resource allocation algorithm (UK-DRL) for radio resource allocation and beam management in 5G mmWave networks. We first apply UK-means as the clustering algorithm to mitigate the localization uncertainty, then deep reinforcement learning (DRL) is adopted to dynamically allocate radio resources. Finally, we compare UK-DRL with a K-means-based clustering and DRL-based resource allocation algorithm (K-DRL); the simulations show that our proposed UK-DRL-based method achieves 150% higher throughput and 61.5% lower delay compared with K-DRL when the traffic load is 4 Mbps.
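A hedged sketch in the spirit of the UK-means step, not the authors' exact algorithm: each UE location is an uncertain observation, represented here by Monte-Carlo samples of its GPS error, and cluster assignment minimizes the expected squared distance to each centroid.

```python
# Clustering UEs under location uncertainty via sampled expected distances.
import numpy as np

rng = np.random.default_rng(1)

def uk_means(reported_xy, location_std, k=2, n_samples=50, iters=20):
    # Draw samples of each UE's true position around its reported position.
    samples = reported_xy[:, None, :] + rng.normal(0, location_std, (len(reported_xy), n_samples, 2))
    centroids = reported_xy[rng.choice(len(reported_xy), k, replace=False)]
    for _ in range(iters):
        # Expected squared distance of each uncertain UE to each centroid.
        d = ((samples[:, :, None, :] - centroids[None, None, :, :]) ** 2).sum(-1).mean(1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = samples[labels == j].reshape(-1, 2).mean(0)
    return labels, centroids

ue_positions = np.array([[0., 0.], [1., 0.5], [0.5, 1.], [10., 10.], [10.5, 9.], [9., 10.5]])
labels, centroids = uk_means(ue_positions, location_std=0.3)
print("cluster labels:", labels)
print("beam centroids:\n", centroids)
```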
Citations: 4