The metaverse aims to provide immersive virtual worlds connected with the physical world. To enable real-time interpersonal communication between users across the globe, the metaverse places high demands on network performance, including low latency and high bandwidth. This paper proposes a novel Media Convergence Metaverse Network (MCMN) framework to address these challenges. Specifically, the META controller serves as MCMN's logically centralized control plane, responsible for holistic orchestration across edge sites and end-to-end path computation between metaverse users. We develop a model-free deep reinforcement learning-based metaverse traffic optimization algorithm that learns to route flows while satisfying Quality of Service (QoS) bounds. The network slicing engine leverages artificial intelligence and machine learning to create isolated, customized virtual networks tailored on demand to metaverse traffic dynamics. It employs unsupervised and reinforcement learning techniques on network telemetry from the META controller to understand application traffic patterns and trains cognitive slicer agents to make QoS-aware decisions accordingly. Optimized delivery of diverse concurrent media types necessitates routing intelligence that meets distinct requirements while mitigating contention over a shared infrastructure. Media-aware routing enhances traditional shortest-path approaches by combining topological metrics with workflow sensitivities. We realize an edge-assisted rendering fabric to offload complex processing from bandwidth-constrained endpoints while retaining visual realism. Extensive simulations demonstrate MCMN's superior performance compared to conventional networking paradigms. MCMN shows great promise for enabling seamless interconnectivity and ultra-high-fidelity communications that unlock the true potential of the metaverse.
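The media-aware routing idea — shortest paths over a composite of topological metrics and per-flow sensitivities — can be sketched as a weighted Dijkstra search. The topology, metric names (`latency`, `loss`), and sensitivity weights below are illustrative assumptions, not the paper's actual cost model:

```python
import heapq

def media_aware_path(graph, src, dst, sensitivity):
    """Dijkstra over a composite link cost: per-link metrics weighted by
    the media flow's sensitivity to each metric."""
    # graph: {node: [(neighbor, {"latency": ms, "loss": pct}), ...]}
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, metrics in graph.get(u, []):
            cost = sum(sensitivity[m] * metrics[m] for m in sensitivity)
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path from the predecessor map.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

topology = {
    "A": [("B", {"latency": 5, "loss": 0.1}), ("C", {"latency": 2, "loss": 2.0})],
    "B": [("D", {"latency": 5, "loss": 0.1})],
    "C": [("D", {"latency": 2, "loss": 2.0})],
}
# A latency-tolerant but loss-sensitive flow avoids the lossy (but faster) path.
print(media_aware_path(topology, "A", "D", {"latency": 1.0, "loss": 50.0}))
```

With pure latency weighting the same flow would take A-C-D; the sensitivity vector is what flips the choice.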
Xin Wang, Jianhui Lv, Achyut Shankar, Carsten Maple, Keqin Li, Qing Li. "Interpersonal Communication Interconnection in Media Convergence Metaverse." ACM Transactions on Internet Technology, 2024-06-05. https://doi.org/10.1145/3670998
We propose a novel framework for achieving precision landing in drone services. The framework consists of two distinct, decoupled modules, each designed to address a specific aspect of landing accuracy. The first module addresses intrinsic errors, introducing new error models, including a spherical error model that takes the drone's orientation into account. Additionally, we propose a live position correction algorithm that employs these error models to correct for intrinsic errors in real time. The second module focuses on external wind forces and presents an aerodynamics model with wind generation to simulate the drone's physical environment. We utilize reinforcement learning to train the drone in simulation with the goal of landing precisely under dynamic wind conditions. Experimental results, obtained through simulations and validated in the physical world, demonstrate that our proposed framework significantly increases landing accuracy while maintaining a low onboard computational cost.
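As a toy illustration of live correction of an intrinsic, orientation-dependent error, the snippet below subtracts a heading-dependent radial bias from a measured position. The bias model and numbers are hypothetical, not the paper's spherical error model:

```python
import math

def correct_position(measured_xy, heading_rad, radial_bias):
    """Subtract a heading-dependent radial bias from a measured position:
    the sensor is assumed (for illustration) to over-report position
    along the drone's current heading."""
    bx = radial_bias * math.cos(heading_rad)
    by = radial_bias * math.sin(heading_rad)
    return (measured_xy[0] - bx, measured_xy[1] - by)

# Drone heading due east (0 rad) with a 0.5 m forward bias:
print(correct_position((10.5, 3.0), 0.0, 0.5))  # (10.0, 3.0)
```

Because the correction is a couple of trigonometric operations per update, it is cheap enough to run onboard in the control loop.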
Sepehr Saryazdi, Balsam Alkouz, Athman Bouguettaya, Abdallah Lakhdari. "Using Reinforcement Learning and Error Models for Drone Precision Landing." ACM Transactions on Internet Technology, 2024-06-04. https://doi.org/10.1145/3670997
Security Operations Centres (SOCs) play a pivotal role in defending organisations against evolving cyber threats. They function as central hubs for detecting, analysing, and responding promptly to cyber incidents, with the primary objective of ensuring the confidentiality, integrity, and availability of digital assets. However, they struggle against the growing problem of alert fatigue, where the sheer volume of alerts overwhelms SOC analysts and raises the risk of overlooking critical threats. In recent times, there has been a growing call for human-AI teaming, wherein humans and AI collaborate with each other, leveraging their complementary strengths and compensating for their weaknesses. The rapid advances in AI and the growing integration of AI-enabled tools and technologies within SOCs give rise to a compelling argument for implementing human-AI teaming within the SOC environment. Therefore, in this position paper, we present our vision for human-AI teaming to address the problem of alert fatigue in SOCs. We propose the \(\mathcal{A}^2\mathcal{C}\) framework, which enables flexible and dynamic decision-making by allowing seamless transitions between automated, augmented, and collaborative modes of operation. Our framework allows AI-powered automation for routine alerts, AI-driven augmentation for expedited expert decision-making, and collaborative exploration for tackling complex, novel threats. By implementing and operationalising \(\mathcal{A}^2\mathcal{C}\), SOCs can significantly reduce alert fatigue while empowering analysts to respond to security incidents efficiently and effectively.
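A minimal sketch of how alerts might be triaged across the three modes of operation; the confidence/novelty scores and their thresholds are hypothetical, not taken from the paper:

```python
def triage_mode(confidence, novelty):
    """Pick an operating mode for an alert: novel threats need full
    human-AI collaboration, high-confidence routine alerts are handled
    automatically, and everything in between gets AI-augmented analyst
    review. (Scores in [0, 1]; thresholds are illustrative.)"""
    if novelty > 0.8:
        return "collaborative"
    if confidence > 0.95:
        return "automated"
    return "augmented"

print(triage_mode(0.99, 0.10))  # routine, high-confidence alert
print(triage_mode(0.60, 0.95))  # previously unseen attack pattern
```

The point of the dispatcher shape is that the mode is chosen per alert, so the SOC can shift seamlessly between modes as the alert stream changes.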
Mohan Baruwal Chhetri, Shahroz Tariq, Ronal Singh, Fateneh Jalalvand, Cecile Paris, Surya Nepal. "Towards Human-AI Teaming to Mitigate Alert Fatigue in Security Operations Centres." ACM Transactions on Internet Technology, 2024-05-30. https://doi.org/10.1145/3670009
With the rapid advancement of the Internet of Things (IoT) and 5G networks in smart cities, the inevitable generation of massive amounts of data, commonly known as big data, has introduced increased latency within the traditional cloud computing paradigm. In response to this challenge, Mobile Edge Computing (MEC) has emerged as a viable solution, offloading a portion of mobile device workloads to nearby edge servers equipped with ample computational resources. Despite significant research in MEC systems, optimizing the placement of edge servers in smart cities to enhance network performance has received little attention. In this paper, we propose RESP, a novel Recursive clustering technique for Edge Server Placement in MEC environments. RESP operates on the median of each cluster, determined by the number of Base Transceiver Stations (BTSs), strategically placing edge servers to balance workload and minimize network traffic between them. Our clustering approach substantially improves load balancing compared to existing methods and demonstrates superior performance in handling traffic dynamics. Through experimental evaluation on real-world data from Shanghai Telecom's base station dataset, our approach outperforms several representative techniques in terms of workload balancing and network traffic optimization. By addressing the edge server placement (ESP) problem with an advanced recursive clustering technique, this work makes a substantial contribution to optimizing mobile edge computing networks in smart cities. The proposed algorithm outperforms alternative methodologies, achieving a 10% average improvement in network traffic optimization and a 53% better result in terms of computational load.
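Median-based recursive clustering of base stations can be sketched as follows. The split rule (widest axis at its median) and the stopping criterion are assumptions for illustration, not RESP's exact procedure:

```python
import statistics

def place_servers(points, max_cluster_size):
    """Recursively split base-station locations at the median of the
    wider axis until clusters are small enough, then place one edge
    server at each cluster's coordinate-wise median."""
    if len(points) <= max_cluster_size:
        mx = statistics.median(p[0] for p in points)
        my = statistics.median(p[1] for p in points)
        return [(mx, my)]
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    axis = 0 if max(xs) - min(xs) >= max(ys) - min(ys) else 1
    pivot = statistics.median(p[axis] for p in points)
    left = [p for p in points if p[axis] <= pivot]
    right = [p for p in points if p[axis] > pivot]
    if not right:  # degenerate split (all points share the pivot value)
        mid = len(points) // 2
        left, right = points[:mid], points[mid:]
    return (place_servers(left, max_cluster_size)
            + place_servers(right, max_cluster_size))

# Two spatial groups of base stations yield two server locations:
bts = [(0, 0), (1, 0), (9, 0), (10, 0)]
print(place_servers(bts, 2))
```

Splitting at the median (rather than the mean) keeps the number of base stations per server balanced, which is the workload-balance property the abstract emphasises.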
Ali Akbar Vali, Sadoon Azizi, Mohammad Shojafar. "RESP: A Recursive Clustering Approach for Edge Server Placement in Mobile Edge Computing." ACM Transactions on Internet Technology, 2024-05-27. https://doi.org/10.1145/3666091
The Internet of Things (IoT) refers to a complex network of interconnected devices that transmit their data via the Internet. Due to their open environment, limited computation power, and absence of built-in security, IoT environments are susceptible to various cyberattacks, and Distributed Denial of Service (DDoS) attacks are among the most destructive. The multi-vector DDoS attack is a contemporary and formidable form of DDoS in which the attacker employs a collection of compromised IoT devices as zombies to launch numerous DDoS attacks against a target server. This paper proposes OTI-IoT, a Blockchain-based Operational Threat Intelligence framework to counter multi-vector DDoS attacks in IoT networks. A "Prevent-then-Detect" methodology deploys the OTI-IoT framework in two distinct stages. During Phase 1, the consortium Blockchain network validators employ the IPS module, composed of a smart contract for attack prevention and access control and a Proof of Voting consensus, to thwart attacks. During Phase 2, validators are outfitted with deep learning-based IDS instances to detect multi-vector DDoS attacks. The IDS module's alert generation and propagation smart contract generates alert messages upon identifying malicious IoT sources, and a feedback loop from the IDS module to the IPS module blocks incoming traffic from those sources. The OTI framework's capabilities are realized by combining and storing the outcomes of the IDS and IPS modules on the consortium Blockchain, where each validator maintains a shared ledger of threat-source information to ensure robust security, transparency, and integrity. OTI-IoT runs on an individual Ethereum Blockchain. The empirical findings indicate that the proposed framework is well suited to real-time applications, owing to its lower attack detection time, decreased block validation time, and higher attack prevention rate.
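Stripped of the Blockchain machinery (smart contracts, consensus, shared ledger), the IDS-to-IPS feedback loop reduces to the sketch below; the packet-counting detector and its threshold are illustrative assumptions:

```python
class IPS:
    """Prevention module: drops traffic from blocklisted sources."""
    def __init__(self):
        self.blocklist = set()

    def allow(self, src):
        return src not in self.blocklist

class IDS:
    """Detection module: after `threshold` suspicious packets from one
    source, it feeds that source back to the IPS blocklist."""
    def __init__(self, ips, threshold=3):
        self.ips, self.threshold, self.counts = ips, threshold, {}

    def inspect(self, src, suspicious):
        if suspicious:
            self.counts[src] = self.counts.get(src, 0) + 1
            if self.counts[src] >= self.threshold:
                self.ips.blocklist.add(src)  # feedback loop: detect -> prevent

ips = IPS()
ids = IDS(ips, threshold=3)
for _ in range(3):
    ids.inspect("zombie-device", suspicious=True)
print(ips.allow("zombie-device"))  # False: further traffic is blocked upstream
```

In OTI-IoT the blocklist update would be a ledger transaction validated by consensus rather than a direct method call, but the prevent-then-detect control flow is the same.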
Aswani Aguru, Suresh Erukala. "OTI-IoT: A Blockchain-based Operational Threat Intelligence Framework for Multi-vector DDoS Attacks." ACM Transactions on Internet Technology, 2024-05-11. https://doi.org/10.1145/3664287
Valerio Bellandi, Paolo Ceravolo, Jonatan Maggesi, Samira Maghool
To gain a comprehensive understanding of a patient’s health, advanced analytics must be applied to the data collected by electronic health record (EHR) systems. However, managing and curating this data requires carefully designed workflows. While digitalization and standardization enable continuous health monitoring, missing data values and technical issues can compromise the consistency and timeliness of the data. In this paper, we propose a workflow for developing prognostic models that leverages the SMART BEAR infrastructure and the capabilities of the Big Data Analytics (BDA) engine to homogenize and harmonize data points. Our workflow improves the quality of the data by evaluating different imputation algorithms and selecting one that maintains the distribution and correlation of features similar to the raw data. We applied this workflow to a subset of the data stored in the SMART BEAR repository and examined its impact on the prediction of emerging health states such as cardiovascular disease and mild depression. We also discussed the possibility of model validation by clinicians in the SMART BEAR project, the transmission of subsequent actions in the decision support system, and the estimation of the required number of data points.
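Selecting an imputation method by how well it preserves the raw data's distribution can be sketched as below. The score (mean/standard-deviation shift) and the two candidate fillers are simplified stand-ins for the paper's distribution-and-correlation criterion:

```python
import random
import statistics

def distribution_shift(observed, imputed):
    """Lower is better: penalize shifts in mean and standard deviation
    relative to the observed (non-missing) values."""
    return (abs(statistics.mean(observed) - statistics.mean(imputed))
            + abs(statistics.stdev(observed) - statistics.stdev(imputed)))

random.seed(0)
# Hypothetical vital-sign feature, e.g. systolic blood pressure readings:
observed = [random.gauss(120, 15) for _ in range(500)]

# Two candidate strategies for filling 50 missing values:
candidates = {
    "zero_fill": observed + [0.0] * 50,
    "mean_fill": observed + [statistics.mean(observed)] * 50,
}
best = min(candidates, key=lambda name: distribution_shift(observed, candidates[name]))
print(best)  # mean filling distorts the distribution far less than zero filling
```

A fuller version would also compare the feature-correlation matrices before and after imputation, per the workflow's stated criterion.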
Valerio Bellandi, Paolo Ceravolo, Jonatan Maggesi, Samira Maghool. "Data management for continuous learning in EHR systems." ACM Transactions on Internet Technology, 2024-05-07. https://doi.org/10.1145/3660634
Vertical federated learning (VFL) revolutionizes privacy-preserving collaboration for small businesses that have distinct but complementary feature sets. However, as the scope of VFL expands, the constant entering and leaving of participants, as well as the subsequent exercise of the "right to be forgotten," poses a great challenge in practice. How to efficiently erase one participant's contribution from the shared model remains largely unexplored in the context of vertical federated learning. In this paper, we introduce a vertical federated unlearning framework that integrates model checkpointing techniques with a hybrid, first-order optimization technique. The core concept is to reduce backpropagation time and improve convergence and generalization by combining the advantages of existing optimizers. We provide an in-depth theoretical and time-complexity analysis to illustrate the effectiveness of the proposed design. We conduct extensive experiments on six public datasets and demonstrate that our method achieves up to a 6.3× speed-up over the baseline, with negligible influence on the original learning task.
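Checkpoint-based unlearning boils down to "roll back to before the departing party joined, then replay training without its updates." The toy model state and training step below are hypothetical bookkeeping, not the paper's hybrid optimizer:

```python
def unlearn(checkpoints, history, removed_party, train_step):
    """Restore the last checkpoint taken before `removed_party` first
    contributed, then replay all later rounds without its updates."""
    # history: ordered list of (round, contributing_parties)
    first_round = min(r for r, parties in history if removed_party in parties)
    restore_from = max(r for r in checkpoints if r < first_round)
    model = checkpoints[restore_from]
    for r, parties in history:
        if r > restore_from:
            model = train_step(model, [p for p in parties if p != removed_party])
    return model

# Toy "model": the list of party groups whose updates were folded in per round.
train_step = lambda model, parties: model + [parties]
checkpoints = {0: []}
history = [(1, ["A", "B", "C"]), (2, ["A", "B", "C"])]
print(unlearn(checkpoints, history, "C", train_step))  # [['A', 'B'], ['A', 'B']]
```

The speed-up comes from only replaying rounds after the restored checkpoint instead of retraining from scratch; the closer the checkpoint to the party's first contribution, the less retraining remains.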
Zichen Wang, Xiangshan Gao, Cong Wang, Peng Cheng, Jiming Chen. "Efficient Vertical Federated Unlearning via Fast Retraining." ACM Transactions on Internet Technology, 2024-04-10. https://doi.org/10.1145/3657290
Yanming Chen, Tong Luo, Weiwei Fang, Neal N. Xiong
Deep learning technology has grown significantly in new application scenarios such as smart cities and driverless vehicles, but its deployment consumes substantial resources. It is usually difficult to execute inference tasks solely on resource-constrained Intelligent Internet-of-Things (IoT) devices while meeting strict service delay requirements, so CNN-based inference tasks are usually offloaded to edge servers or the cloud. However, this may lead to unstable performance and privacy leaks. To address these challenges, this paper designs a low-latency distributed inference framework, EdgeCI, which assigns inference tasks to locally idle, connected, and resource-constrained IoT device clusters. EdgeCI exploits two key optimization knobs: (1) an Auction-based Workload Assignment Scheme (AWAS), which achieves workload balance by assigning each workload partition to the best-matching IoT device; and (2) a Fused-Layer parallelization strategy based on non-recursive Dynamic Programming (DPFL), which aims to further minimize the inference time. We have implemented EdgeCI on PyTorch and evaluated its performance with the VGG-16 and ResNet-34 image recognition models. The experimental results show that the proposed AWAS and DPFL outperform typical state-of-the-art solutions; when combined, they allow EdgeCI to improve inference speed by 34.72% to 43.52%, outperforming state-of-the-art approaches on the tested platform.
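A greedy sketch of auction-style workload assignment: each partition is awarded to the device with the lowest "bid," here its current load plus the partition's runtime on that device. The device speeds and partition sizes are made up, and this is not the exact AWAS protocol:

```python
def auction_assign(partitions, speeds):
    """Award each partition (largest first) to the device whose bid —
    current load plus this partition's runtime there — is lowest, so
    faster or idler devices win more work."""
    load = {d: 0.0 for d in speeds}
    assignment = {}
    for name, work in sorted(partitions.items(), key=lambda kv: -kv[1]):
        bids = {d: load[d] + work / speeds[d] for d in speeds}
        winner = min(bids, key=bids.get)
        assignment[name] = winner
        load[winner] = bids[winner]
    return assignment, max(load.values())  # makespan = slowest device

parts = {"p1": 8.0, "p2": 4.0, "p3": 4.0}   # partition work units
devs = {"fast": 2.0, "slow": 1.0}           # device speeds (units/sec)
assignment, makespan = auction_assign(parts, devs)
print(assignment, makespan)
```

Note that the slow device still wins `p2` because the fast one is already loaded; that load-aware bidding is what balances the cluster rather than piling everything onto the fastest node.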
Yanming Chen, Tong Luo, Weiwei Fang, Neal N. Xiong. "EdgeCI: Distributed Workload Assignment and Model Partitioning for CNN Inference on Edge Clusters." ACM Transactions on Internet Technology, 2024-04-02. https://doi.org/10.1145/3656041
Phishing attacks reached a record high in 2022, as reported by the Anti-Phishing Working Group [1], continuing an upward trend that accelerated during the pandemic. Attackers employ increasingly sophisticated tools to deceive unaware users into divulging confidential information. Recently, the research community has turned to screenshots of legitimate and malicious websites to identify the brands that attackers aim to impersonate. In computer vision, convolutional neural networks (CNNs) have been employed to analyze the visual rendering of websites, addressing the problem of phishing detection. However, along with these new models arose the need to understand their inner workings and the rationale behind each prediction. Answering the question “How is this website attempting to steal the identity of a well-known brand?” becomes crucial when protecting end users from such threats. In cybersecurity, explainable AI (XAI) is an emerging approach that aims to answer such questions. In this paper, we propose VORTEX, a phishing website detection solution equipped with the capability to explain how a screenshot attempts to impersonate a specific brand. We conduct an extensive analysis of XAI methods for the phishing detection problem and demonstrate that VORTEX provides meaningful explanations of its detection results. Additionally, we evaluate the robustness of our model against adversarial example attacks, adapting these attacks to the VORTEX architecture and evaluating their efficacy across multiple models and datasets. Our results show that VORTEX achieves superior accuracy compared to previous models and learns semantically meaningful patterns that yield actionable explanations about phishing websites. Finally, VORTEX demonstrates an acceptable level of robustness against adversarial example attacks.
Title: VORTEX: Visual phishing detectiOns aRe Through EXplanations
Authors: Fabien Charmet, Tomohiro Morikawa, Akira Tanaka, Takeshi Takahashi
DOI: 10.1145/3654665
ACM Transactions on Internet Technology, published 2024-03-28
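To make the idea of explaining a screenshot-based prediction concrete, one simple, model-agnostic XAI technique is occlusion saliency: mask one patch of the screenshot at a time and record how much the brand-confidence score drops. The paper analyzes several XAI methods; the toy model interface below is purely an assumption for illustration.

```python
# Sketch of occlusion-based saliency: regions whose occlusion causes a
# large confidence drop are the regions driving the brand prediction.
import numpy as np

def occlusion_saliency(model, image, patch=8, baseline=0.0):
    """Return a heatmap of confidence drops when patches are occluded.

    model: callable mapping an image array -> confidence score in [0, 1].
    image: 2-D grayscale array (H x W).
    patch: side length of the square occlusion window.
    """
    h, w = image.shape
    base_score = model(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            # Large drop => this patch was important for the prediction.
            heat[i // patch, j // patch] = base_score - model(occluded)
    return heat
```

For a phishing screenshot, the hot cells of such a map would typically land on the spoofed logo or login form, which is the kind of actionable, region-level explanation the paper argues for.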
The smart healthcare system focuses not only on physical health but also on emotional health. Music therapy, a non-pharmacological treatment, is widely used in clinical practice, but music selection and generation still require manual intervention. AI music generation technology can help people relieve stress and provide more personalized and efficient music therapy support. However, existing AI music generation relies heavily on the note generated at the current step to produce the note at the next step, which leads to disharmonious results for two reasons. The first is that small errors in the currently generated note are ignored; these errors accumulate and propagate until the music becomes effectively random. To solve this problem, we propose a music selection module that filters out errors in generated notes. A multi-think mechanism filters the result multiple times so that each generated note is as accurate as possible, eliminating its impact on subsequent generation steps. The second is that multiple generations of the same music clip differ and may not even follow the same musical rules. Therefore, in the inference phase, this paper proposes a voting mechanism that selects, as the final result, the notes following the musical rules obeyed by the majority of generated results. Subjective and objective evaluations demonstrate the superiority of the proposed model in generating smoother music that conforms to musical rules. This model provides strong support for clinical music therapy and offers new ideas for the research and practice of emotional health therapy based on the Internet of Things.
Title: Multi-Think Transformer for Enhancing Emotional Health
Authors: Jiarong Wang, Jiaji Wu, Shaohong Chen, Xiangyu Han, Mingzhou Tan, Jianguo Yu
DOI: 10.1145/3652512
ACM Transactions on Internet Technology, published 2024-03-18
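The inference-time voting idea described above can be sketched in a few lines: sample several candidate continuations for the same position and keep the most common one. The function name and sampling interface are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of majority voting over repeated generations of one note.
from collections import Counter

def vote_next_note(sample_fn, n_runs=5):
    """Run the generator n_runs times and return the majority-vote note.

    sample_fn: callable returning one candidate next note per call.
    """
    candidates = [sample_fn() for _ in range(n_runs)]
    # most_common(1) yields the (note, count) pair with the highest votes.
    note, _count = Counter(candidates).most_common(1)[0]
    return note
```

The same scheme extends from single notes to whole clips by voting on the rule set (key, meter, etc.) that the majority of generated clips obey, then keeping a clip consistent with it.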