Yanming Chen, Tong Luo, Weiwei Fang, Neal N. Xiong
Deep learning technology has advanced rapidly in new application scenarios such as smart cities and driverless vehicles, but its deployment consumes substantial resources. It is usually difficult to execute inference tasks solely on resource-constrained intelligent Internet-of-Things (IoT) devices while meeting strict service-delay requirements. CNN-based inference tasks are therefore usually offloaded to edge servers or the cloud; however, this may lead to unstable performance and privacy leaks. To address these challenges, this paper designs a low-latency distributed inference framework, EdgeCI, which assigns inference tasks to clusters of locally idle, connected, resource-constrained IoT devices. EdgeCI exploits two key optimization knobs: (1) an Auction-based Workload Assignment Scheme (AWAS), which achieves workload balance by assigning each workload partition to the best-matched IoT device; and (2) a Fused-Layer parallelization strategy based on non-recursive Dynamic Programming (DPFL), which further minimizes inference time. We have implemented EdgeCI on PyTorch and evaluated its performance with the VGG-16 and ResNet-34 image recognition models. The experimental results show that our proposed AWAS and DPFL outperform typical state-of-the-art solutions. When they are well combined, EdgeCI improves inference speed by 34.72% to 43.52%, outperforming state-of-the-art approaches on the tested platform.
"EdgeCI: Distributed Workload Assignment and Model Partitioning for CNN Inference on Edge Clusters". ACM Transactions on Internet Technology, DOI: 10.1145/3656041, published 2024-04-02.
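The DPFL strategy described above is a chain-partitioning problem over consecutive CNN layers. As a hedged illustration (not the paper's actual cost model), a non-recursive dynamic program over fusion points might look like the sketch below, where `cost(i, j)` is a hypothetical estimate of the inference time of fusing layers i..j into one block:

```python
def optimal_fusion(cost, n):
    """Non-recursive DP: dp[j] is the minimum total inference time for layers 1..j."""
    dp = [0.0] + [float("inf")] * n
    choice = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            c = dp[i] + cost(i + 1, j)  # fuse layers i+1..j into one block
            if c < dp[j]:
                dp[j], choice[j] = c, i
    # backtrack the chosen fused blocks
    blocks, j = [], n
    while j > 0:
        blocks.append((choice[j] + 1, j))
        j = choice[j]
    return dp[n], blocks[::-1]
```

With a cost function that charges a fixed per-block overhead, the DP fuses everything into one block; with a cost that grows superlinearly in block size, it prefers smaller blocks.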
Phishing attacks reached a record high in 2022, as reported by the Anti-Phishing Working Group [1], following an upward trend accelerated during the pandemic. Attackers employ increasingly sophisticated tools in their attempts to deceive unaware users into divulging confidential information. Recently, the research community has turned to using screenshots of legitimate and malicious websites to identify the brands that attackers aim to impersonate. In the field of computer vision, convolutional neural networks (CNNs) have been employed to analyze the visual rendering of websites, addressing the problem of phishing detection. However, along with the development of these new models came the need to understand their inner workings and the rationale behind each prediction. Answering the question, “How is this website attempting to steal the identity of a well-known brand?” becomes crucial when protecting end-users from such threats. In cybersecurity, explainable AI (XAI) is an emerging approach that aims to answer such questions. In this paper, we propose VORTEX, a phishing website detection solution equipped with the capability to explain how a screenshot attempts to impersonate a specific brand. We conduct an extensive analysis of XAI methods for the phishing detection problem and demonstrate that VORTEX provides meaningful explanations of the detection results. Additionally, we evaluate the robustness of our model against adversarial example attacks: we adapt these attacks to the VORTEX architecture and evaluate their efficacy across multiple models and datasets. Our results show that VORTEX achieves superior accuracy compared to previous models and learns semantically meaningful patterns to provide actionable explanations about phishing websites. Finally, VORTEX demonstrates an acceptable level of robustness against adversarial example attacks.
"VORTEX : Visual phishing detectiOns aRe Through EXplanations". Fabien Charmet, Tomohiro Morikawa, Akira Tanaka, Takeshi Takahashi. ACM Transactions on Internet Technology, DOI: 10.1145/3654665, published 2024-03-28.
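The adversarial example attacks evaluated above are typically gradient-based. As a generic illustration (not the specific attacks adapted to the VORTEX architecture), the Fast Gradient Sign Method perturbs a screenshot's pixels along the sign of the loss gradient:

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.03):
    """FGSM: step each pixel by epsilon in the direction of the loss gradient sign."""
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)  # keep pixel values in the valid [0, 1] range
```

Here `grad` would come from backpropagating the classifier's loss to the input image; `epsilon` bounds the perturbation so it stays visually imperceptible.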
Smart healthcare systems focus not only on physical health but also on emotional health. Music therapy, a non-pharmacological treatment, is widely used in clinical practice, but music selection and generation still require manual intervention. AI music generation can help people relieve stress and provide more personalized and efficient music therapy support. However, existing AI music generation relies heavily on the note generated at the current time step to produce the note at the next step, which can lead to disharmonious results. The first reason is that small errors in the currently generated note are ignored; these errors accumulate and propagate continuously until the music becomes effectively random. To solve this problem, we propose a music selection module that filters out errors in generated notes. The multi-think mechanism filters the result multiple times, so that each generated note is as accurate as possible, eliminating the impact of errors on the next generation step. The second reason is that repeated generations of the same music clip differ and may not even follow the same musical rules. Therefore, in the inference phase, this paper proposes a voting mechanism that selects, as the final result, the note following the musical rules that most generation runs agree on. Subjective and objective evaluations demonstrate the superiority of our proposed model in generating smoother music that conforms to musical rules. This model provides strong support for clinical music therapy and offers new ideas for research and practice in emotional health therapy based on the Internet of Things.
"Multi-Think Transformer for Enhancing Emotional Health". Jiarong Wang, Jiaji Wu, Shaohong Chen, Xiangyu Han, Mingzhou Tan, Jianguo Yu. ACM Transactions on Internet Technology, DOI: 10.1145/3652512, published 2024-03-18.
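The inference-phase voting mechanism can be sketched minimally as a per-time-step majority vote over several generation runs (the note representation and tie-breaking rule here are illustrative assumptions, not the paper's exact design):

```python
from collections import Counter

def vote_notes(candidate_sequences):
    """Pick, at each time step, the note most generation runs agree on.

    Ties are broken by first occurrence, a simplification of any rule the
    paper may actually use."""
    return [Counter(step).most_common(1)[0][0]
            for step in zip(*candidate_sequences)]
```

Given three runs producing ["C","E","G"], ["C","F","G"], and ["D","E","G"], the vote keeps the majority note at each step.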
The advancement of the Internet of Medical Things (IoMT) has led to the emergence of various health and emotion care services, e.g., health monitoring. To cater to the increasing computational requirements of IoMT services, Mobile Edge Computing (MEC) has emerged as an indispensable technology in smart health. Benefiting from cost-effective deployment, unmanned aerial vehicles (UAVs) equipped with MEC servers under Non-Orthogonal Multiple Access (NOMA) have emerged as a promising solution for providing smart health services in proximity to medical devices (MDs). However, the escalating number of MDs and the limited communication resources of UAVs give rise to a significant increase in transmission latency. Moreover, due to the limited communication range of UAVs, the geographically distributed MDs cause workload imbalance among UAVs, which deteriorates the service response delay. To this end, this paper proposes a UAV-enabled Distributed computation Offloading and Power control method with Multi-Agent learning, named DOPMA, for NOMA-based IoMT environments. Specifically, this paper introduces computation and transmission queue models to analyze the dynamic characteristics of task execution latency and energy consumption. Moreover, a credit-assignment-based reward function is designed considering both system-level rewards and rewards tailored to each MD, and an improved multi-agent deep deterministic policy gradient algorithm is developed to derive offloading and power control decisions independently. Extensive simulations demonstrate that the proposed method outperforms existing schemes, achieving a 7.1% reduction in energy consumption and a 16% decrease in average delay.
"Distributed Computation Offloading and Power Control for UAV-Enabled Internet of Medical Things". Jiakun Gao, Xiaolong Xu, Lianyong Qi, Wanchun Dou, Xiaoyu Xia, Xiaokang Zhou. ACM Transactions on Internet Technology, DOI: 10.1145/3652513, published 2024-03-16.
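The queue models and credit-assignment reward described above can be sketched generically (the discrete-time recursion and the blending weight `alpha` are illustrative assumptions, not the paper's exact formulation):

```python
def queue_update(backlog, arrivals, service):
    """Discrete-time queue recursion: unserved work carries over to the next slot."""
    return max(backlog + arrivals - service, 0)

def shaped_reward(system_reward, device_reward, alpha=0.5):
    """Credit assignment: blend the system-level reward with a per-device reward."""
    return alpha * system_reward + (1 - alpha) * device_reward
```

In a multi-agent setting, each MD's agent would receive `shaped_reward` so that it is credited both for global performance and for its own task latency.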
Han Liang, Jincai Chen, Fazlullah Khan, Gautam Srivastava, Jiangfeng Zeng
Human perception relies heavily on two primary senses, vision and hearing, which are closely interconnected and capable of complementing each other. Consequently, various multimodal learning tasks have emerged, with audio-visual event localization (AVEL) being a prominent example. AVEL is a popular multimodal learning task whose primary objective is to identify the presence of events within each video segment and predict their respective categories. This task holds significant utility in domains such as healthcare monitoring and surveillance, among others. Generally speaking, audio-visual co-learning offers a more comprehensive information landscape than single-modal learning, as it allows for a more holistic perception of ambient information, in line with real-world applications. Nevertheless, the inherent heterogeneity of audio and visual data can introduce challenges related to inconsistent event semantics, potentially leading to incorrect predictions. To tackle these challenges, we propose a multi-task hybrid attention network (MHAN) to acquire high-quality representations of multimodal data. Specifically, our network incorporates hybrid attention of uni- and parallel cross-modal (HAUC) modules, consisting of a uni-modal attention block and a parallel cross-modal attention block, to leverage complementary and hidden multimodal information for better representations. Furthermore, we advocate the use of a uni-modal visual task as auxiliary supervision to enhance the performance of the multimodal task through a multi-task learning strategy. Extensive experiments conducted on the AVE dataset show that our proposed model outperforms the state-of-the-art results.
"Audio-Visual Event Localization using Multi-task Hybrid Attention Networks for Smart Healthcare Systems". ACM Transactions on Internet Technology, DOI: 10.1145/3653018, published 2024-03-16.
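Cross-modal attention blocks like those in HAUC are typically built on scaled dot-product attention. A minimal NumPy sketch (not the paper's architecture) in which queries from one modality, e.g., audio, attend to keys/values from the other:

```python
import numpy as np

def cross_modal_attention(q, k, v):
    """Scaled dot-product attention: q from one modality, k/v from the other."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    # numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

Each query vector produces a convex combination of the other modality's value vectors, weighted by query-key similarity.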
Li Wen, Lingfeng Bao, Jiachi Chen, John Grundy, Xin Xia, Xiaohu Yang
The cryptocurrency market capitalization has increased greatly in recent years. However, large price fluctuations demonstrate the need for governance structures and for identifying whether market manipulation occurs. In this paper, we conducted three analyses – social media data analysis, blockchain data analysis, and price bubble analysis – to investigate whether market manipulation exists on the Bitcoin, Ethereum, and Dogecoin platforms. Social media data analysis aims to find the reasons for price fluctuations. Blockchain data analysis is used to uncover the detailed behavior of manipulators. Price bubble analysis is used to investigate the relation between price fluctuations and manipulators’ behavior. Using these three analyses, we show that market manipulation exists on Bitcoin, Ethereum, and Dogecoin. However, manipulation of Bitcoin is limited, and for most of Bitcoin’s price fluctuations we found other explanations. The price of Ethereum is most sensitive to technical updates: technology companies and teams often hype new concepts, e.g., ICOs and DeFi, which causes price spikes. The price of Dogecoin is highly correlated with Elon Musk’s Twitter activity, which shows that influential individuals have the ability to manipulate its price. In addition, the poor monetary liquidity of Dogecoin allows some users to manipulate its price.
"Market manipulation of Cryptocurrencies: Evidence from Social Media and Transaction Data". ACM Transactions on Internet Technology, DOI: 10.1145/3643812, published 2024-01-30.
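Correlation findings like the Dogecoin/Twitter-activity result are commonly quantified with the Pearson coefficient. A self-contained sketch (the data series here are hypothetical, not the paper's dataset):

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

A value near +1 would indicate that price moves track tweet activity closely; values near 0 indicate no linear relationship.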
Traditional medical prescriptions based on physical paper documents are prone to manipulation, errors, and unauthorized reproduction due to their format. Addressing the limitations of the traditional prescription system, e-prescription systems have been introduced in several countries. However, e-prescription systems raise several concerns, such as the risk of privacy loss, double-spending of prescriptions, lack of interoperability, and single points of failure, all of which need to be addressed immediately. We propose an AI-assisted, blockchain-enabled smart and secure e-prescription management framework to address these issues. Our proposed system overcomes the problems of centralized e-prescription systems and enables efficient consent management for accessing prescriptions by incorporating blockchain-based smart contracts. Our work incorporates the Umbral proxy re-encryption scheme, avoiding the need for repeated encryption and decryption of prescriptions as they are transferred between different entities in the network. We employ two machine learning models (a Random Forest classifier and a LightGBM classifier) to assist the doctor in prescribing medicines. One is a drug recommendation model aimed at providing drug recommendations that consider the patient's medical history and the general prescription pattern for the particular ailment. We have also fine-tuned the SciBERT model for adverse drug reaction detection. Extensive experimentation shows that the proposed e-prescription framework is secure, scalable, and interoperable. Further, the proposed machine learning models achieve results above 95%.
"AI-assisted Blockchain-enabled Smart and Secure E-prescription Management Framework". Siva Sai, Vinay Chamola. ACM Transactions on Internet Technology, DOI: 10.1145/3641279, published 2024-01-23.
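The smart-contract consent management can be mimicked off-chain with a minimal registry sketch (the class and method names are hypothetical and do not reflect the paper's contract interface):

```python
class ConsentRegistry:
    """Toy stand-in for a smart-contract access-control check on prescriptions."""

    def __init__(self):
        self._grants = set()

    def grant(self, patient, grantee, rx_id):
        # patient authorizes a grantee (e.g., a pharmacy) to read one prescription
        self._grants.add((patient, grantee, rx_id))

    def revoke(self, patient, grantee, rx_id):
        self._grants.discard((patient, grantee, rx_id))

    def can_access(self, patient, grantee, rx_id):
        return (patient, grantee, rx_id) in self._grants
```

On-chain, the grant set would live in contract storage and `can_access` would gate the release of re-encryption keys for the proxy re-encryption step.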
Existing intersection management systems in urban cities fail to meet current requirements for self-configuration, lightweight computing, and software-defined control, which are essential for congested road-lane networks. To satisfy these requirements, this work proposes an effective, scalable, multi-input multi-output, congestion-prevention-enabled intersection management system built on a software-defined control interface. The system not only monitors traffic regularly to prevent congestion and minimize queue length and waiting time, but also offers a computationally efficient solution in real time. For effective intersection management, a modified linear-quadratic regulator, the Quantized Linear Quadratic Regulator (QLQR), is designed together with a Software-Defined Networking (SDN) control interface to maximize throughput and vehicle speed and minimize queue length and waiting time at the intersection. Experimental results show that the proposed SDN-QLQR improves comparative performance by 24.94%–49.07%, 35.78%–68.86%, 36.67%–59.08%, and 29.94%–57.87% on average queue length, average waiting time, throughput, and average speed, respectively.
{"title":"SDN-enabled Quantized LQR for Smart Traffic Light Controller to Optimize Congestion","authors":"Anuj Sachan, Neetesh Kumar","doi":"10.1145/3641104","DOIUrl":"https://doi.org/10.1145/3641104","url":null,"abstract":"<p>Existing intersection management systems, in urban cities, lack in meeting the current requirements of self-configuration, lightweight computing, and software-defined control, which are necessarily required for congested road-lane networks. To satisfy these requirements, this work proposes effective, scalable, multi-input and multi-output, and congestion prevention enabled intersection management system utilizing a software-defined control interface that not only regularly monitors the traffic to prevent congestion for minimizing queue length and waiting time, it also offers a computationally efficient solution in real-time. For effective intersection management, a modified linear-quadratic regulator, i.e., Quantized Linear Quadratic Regulator (QLQR), is designed along with Software-Defined Networking (SDN) enabled control interface to maximize throughput and vehicles speed and minimize queue length and waiting time at the intersection. 
Experimental results prove that the proposed SDN-QLQR improves the comparative performance in the interval of 24.94% – 49.07%, 35.78% – 68.86%, 36.67% – 59.08%, and 29.94% – 57.87% for various performance metrics, i.e., average queue length, average waiting time, throughput, and average speed respectively.</p>","PeriodicalId":50911,"journal":{"name":"ACM Transactions on Internet Technology","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139474811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
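The quantized-LQR idea behind the controller above can be sketched minimally: compute a standard discrete-time LQR feedback gain, then snap the continuous control to a discrete set of admissible signal-timing levels. The scalar queue model, the quantization levels, and all parameter values below are illustrative assumptions, not the paper's actual traffic model.

```python
def lqr_gain_scalar(a, b, q, r, iters=500):
    # Solve the scalar discrete-time algebraic Riccati equation by
    # fixed-point iteration for dynamics x[k+1] = a*x[k] + b*u[k]
    # with stage cost q*x^2 + r*u^2.
    p = q
    for _ in range(iters):
        p = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
    # Optimal state-feedback gain: u = -k * x
    return (b * p * a) / (r + b * p * b)

def quantize(u, levels):
    # Snap the continuous control to the nearest admissible discrete level
    # (e.g. a small menu of green-time durations).
    return min(levels, key=lambda v: abs(v - u))

def qlqr_step(x, a, b, q, r, levels):
    # One quantized-LQR step: continuous LQR control, then quantization.
    k = lqr_gain_scalar(a, b, q, r)
    return quantize(-k * x, levels)
```

With a = 1, b = -1 (serving vehicles shrinks the queue) and q = r = 1, the Riccati fixed point is the golden ratio, giving a gain of about -0.618; a queue of 10 then maps to a continuous control of about 6.18, quantized to the nearest level.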
In edge computing, Internet of Things (IoT) devices with weak computing power offload tasks to nearby edge servers for execution, reducing task completion time and supporting delay-sensitive tasks. However, if a task is offloaded to a malicious edge server, the system suffers losses. It is therefore important to identify trusted edge servers and offload tasks only to them, which improves the performance of edge computing; doing so, however, remains challenging. In this paper, a trust Active Detecting based Task Offloading (ADTO) scheme is proposed to maximize revenue in edge computing. The main innovations of our work are as follows: (a) The ADTO scheme proposes a method to actively acquire trust information through trust detection. It offloads microtasks to edge servers whose trustworthiness is unknown and then quickly infers their trustworthiness from how well they complete these tasks. Based on this identification, tasks can be offloaded to trusted edge servers, improving the task success rate. (b) Although the trustworthiness of edge servers can be identified by our detection, it comes at a cost. Therefore, to maximize system revenue, finding the most suitable number of trusted edge servers under various conditions is formulated as an optimization problem. Finally, theoretical and experimental analysis shows the effectiveness of the proposed strategy, which effectively distinguishes trusted from untrusted edge servers. Compared with a strategy without trust detection, the proposed trust-detection-based task offloading strategy increases the task success rate by 40.27% and yields a significant increase in revenue, which fully demonstrates its effectiveness.
{"title":"ADTO: A Trust Active Detecting based Task Offloading Scheme in Edge Computing for Internet of Things","authors":"Xuezheng Yang, Zhiwen Zeng, Anfeng Liu, Neal N. Xiong, Shaobo Zhang","doi":"10.1145/3640013","DOIUrl":"https://doi.org/10.1145/3640013","url":null,"abstract":"<p>In edge computing, Internet of Things (IoT) devices with weak computing power offload tasks to nearby edge servers for execution, so the task completion time can be reduced and delay sensitive tasks can be facilitated. However, if the task is offloaded to malicious edge servers, the system will suffer losses. Therefore, it is significant to identify the trusted edge servers and offload tasks to trusted edge servers, which can improve the performance of edge computing. However, it is still challenging. In this paper, a trust Active Detecting based Task Offloading (ADTO) scheme is proposed to maximize revenue in edge computing. The main innovation points of our work are as follows: (a) The ADTO scheme innovatively proposes a method to actively get trust by trust detection. This method offloads microtasks to edge servers whose trust needs to be identified, and then quickly identifies the trust of edge servers according to the completion of tasks by edge servers. Based on the identification of the trust, tasks can be offloaded to trusted edge servers, so as to improve the success rate of tasks. (b) Although the trust of edge servers can be identified by our detection, it needs to pay a price. Therefore, to maximize system revenue, searching the most suitable number of trusted edge servers for various conditions is transformed into an optimization problem. Finally, theoretical and experimental analysis shows the effectiveness of the proposed strategy, which can effectively identify the trusted and untrusted edge servers. 
The task offloading strategy based on trust detection proposed in this paper greatly improves the success rate of tasks, compared with the strategy without trust detection, the task success rate is increased by 40.27%, and there is a significant increase in revenue, which fully demonstrates the effectiveness of the strategy.</p>","PeriodicalId":50911,"journal":{"name":"ACM Transactions on Internet Technology","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139461497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
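The active trust detection in innovation (a) can be illustrated with a toy probing loop: offload m microtasks to each candidate server and classify it as trusted when the observed completion rate meets a threshold. Modelling each server as a fixed Bernoulli success probability, and the names `probe_server` and `identify_trusted`, are assumptions for illustration; the paper's scheme additionally weighs the probing cost against revenue, which this sketch omits.

```python
import random

def probe_server(success_prob, m, rng):
    # Offload m microtasks and count how many the server completes
    # (each completion modelled as an independent Bernoulli trial).
    return sum(rng.random() < success_prob for _ in range(m))

def identify_trusted(servers, m, threshold, rng):
    # Classify a server as trusted iff its observed microtask
    # success rate reaches the threshold.
    return [s for s in servers if probe_server(s, m, rng) / m >= threshold]
```

A reliable server (success probability near 1) passes the probe, while a malicious one that drops tasks fails it, so subsequent full-size tasks go only to the servers that survived probing.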
Recent years have witnessed the increasing prevalence of wearable devices among the public, and atrial fibrillation (AF) detection is a popular application on these devices. AF detection is generally performed in the cloud, whereas this paper describes an on-device AF detection method. Technically, compressed sensing (CS) is first used for electrocardiograph (ECG) acquisition. QRS detection is then performed directly on the compressed CS measurements, rather than on signals reconstructed on a powerful cloud server. Based on the extracted QRS information, AF is determined by quantitatively analyzing the (RR, dRR) plot. Databases with ECG samples collected from both medical-grade (MIT-BIH afdb) and wearable ECG devices (PhysioNet Challenge 2017) are used for performance validation. The experimental results demonstrate that our on-device AF detection algorithm approaches the performance of algorithms operating on the raw signals. Our proposal is thus suitable for AF screening directly on wearable devices, without relying on a data center for signal reconstruction and intelligent analysis.
{"title":"Atrial Fibrillation Detection from Compressed ECG Measurements for Wireless Body Sensor Network","authors":"Yongyong Chen, Junxin Chen, Shuang Sun, Jingyong Su, Qiankun Li, Zhihan Lyu","doi":"10.1145/3637440","DOIUrl":"https://doi.org/10.1145/3637440","url":null,"abstract":"<p>Recent years have witnessed an increasing prevalence of wearable devices in the public, where atrial fibrillation (AF) detection is a popular application in these devices. Generally, AF detection is performed on cloud whereas this paper describes an on-device AF detection method. Technically, compressed sensing (CS) is first used for electrocardiograph (ECG) acquisition. Then QRS detection is proposed to be performed directly on the compressed CS measurements, rather than on the reconstructed signals on the powerful cloud server. Based on the extracted QRS information, AF is determined by quantitatively analyzing the (<i>RR</i>, <i>dRR</i>) plot. Databases with ECG samples collected from both medical-level (MIT-BIH afdb) and wearable ECG devices (Physionet Challenge 2017) are introduced for performance validation. The experiment results well demonstrate that our on-device AF detection algorithm can approach the performance of those implemented on the raw signals. 
Our proposal is suitable for AF screening directly on the wearable devices, without the support of the data center for signal reconstruction and intelligent analysis.</p>","PeriodicalId":50911,"journal":{"name":"ACM Transactions on Internet Technology","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139413496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
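The (RR, dRR) analysis can be sketched as follows: in sinus rhythm successive RR intervals are nearly equal, so points cluster on the dRR = 0 line, while AF scatters them away from it. The particular score below (the fraction of points with |dRR| above a tolerance) and its thresholds are simplifying assumptions for illustration, not the paper's quantitative criterion.

```python
def drr_series(rr):
    # Successive RR-interval differences: the dRR axis of the (RR, dRR) plot.
    return [b - a for a, b in zip(rr, rr[1:])]

def af_score(rr, tol=0.05):
    # Fraction of points whose |dRR| exceeds tol seconds; sinus rhythm
    # keeps this near 0, AF pushes it toward 1.
    d = drr_series(rr)
    return sum(abs(x) > tol for x in d) / len(d)

def is_af(rr, tol=0.05, threshold=0.5):
    # Flag AF when the irregularity score crosses the decision threshold.
    return af_score(rr, tol) >= threshold
```

A steady 75 bpm series (RR = 0.8 s throughout) scores 0 and is not flagged, whereas beat-to-beat swings of hundreds of milliseconds, typical of AF, drive the score toward 1.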