
Latest Publications: 2022 IEEE World AI IoT Congress (AIIoT)

Fake News Detection in Social Networks Using Data Mining Techniques
Pub Date : 2022-06-06 DOI: 10.1109/aiiot54504.2022.9817287
Hebah Alquran, Shadi Banitaan
Fake news propagates through the intentional spread of false information on social media platforms. It is intended to mislead the public and damage the reputation of a person or entity. Detecting misinformation on digital platforms is essential to minimizing its adverse effects. While false comments and news can easily be posted on social media without any oversight, distinguishing real information from false information is often the most challenging part. This work examined the most relevant features that can be used for fake news detection. After selecting the significant features, prediction models were built and compared in terms of precision, recall, and F-score using the Naive Bayes, Bayesian Network, and J48 classification methods. Based on our experiments on a benchmark dataset, we obtained an overall F-score of 69.7% by employing the J48 classifier on the politician's brief statement and the counts of the speaker's statement history feature set.
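The precision, recall, and F-score metrics used to compare the classifiers can be computed directly from confusion-matrix counts. A minimal sketch (the counts below are illustrative, not taken from the paper):

```python
# Precision/recall/F1 from confusion-matrix counts; the counts here are
# made-up example values, not results from the study.

def precision_recall_f1(tp, fp, fn):
    """Return (precision, recall, F1) from true/false positive and false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=70, fp=30, fn=30)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.7 0.7 0.7
```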
{"title":"Fake News Detection in Social Networks Using Data Mining Techniques","authors":"Hebah Alquran, Shadi Banitaan","doi":"10.1109/aiiot54504.2022.9817287","DOIUrl":"https://doi.org/10.1109/aiiot54504.2022.9817287","url":null,"abstract":"Fake news is propagated by intentionally spreading false information on social media platforms. Fake news intends to mislead the public and damage the reputation of a person or entity. Detecting misinformation over digital platforms is essential to minimizing its adverse effects. While false comments and news can be easily posted on social media without any oversight, identifying real information from false information is often the most challenging part. This work examined the most relevant features that can be used for fake news detection. After selecting the significant features, prediction models are built and compared in terms of precision, recall, and F-score evaluation metrics using Naive Bayes, Bayesian Network, and J48 classification methods. Based on our experiments on a benchmark dataset, we obtained an overall F-score of 69.7% by employing the J48 classifier on the politician's brief statement, and the counts of the speaker's statement history feature set.","PeriodicalId":409264,"journal":{"name":"2022 IEEE World AI IoT Congress (AIIoT)","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124558846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Distributed Average Cost Reinforcement Learning approach for Power Control in Wireless 5G Networks
Pub Date : 2022-06-06 DOI: 10.1109/aiiot54504.2022.9817168
A. Ornatelli, A. Giuseppi, A. Tortorelli
This paper deals with the transmission power control problem in wireless networks. This problem is well known and relevant, as solving it allows efficient management of the network's energy requirements and of the interference experienced by end-users. With the widespread diffusion of smart devices, the relevance of this aspect has further increased, and it is recognized as such in 5G standards. The problem has been formalized as a Multi-Agent Reinforcement Learning (MARL) approach to guarantee scalability and robustness. These two aspects also drove the development of an original Distributed Average-Cost Temporal-Difference (TD) Learning algorithm. To apply this algorithm, a Markov Game formulation of the power control problem has also been derived. The effectiveness of the proposed distributed framework in reducing the network's total transmission power has been demonstrated by means of simulations in a specific case study.
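The average-cost TD idea behind the paper can be illustrated in tabular form: the learner maintains differential state values together with a running estimate of the average cost, both updated from the same TD error. A toy single-agent sketch on an uncontrolled two-state chain (the paper's distributed multi-agent version and the power-control Markov game are beyond this illustration):

```python
import random

# Tabular average-cost TD(0) on a toy two-state chain with uniform random
# transitions; state 1 incurs unit cost, so the true average cost is 0.5.
# This is a hedged single-agent sketch, not the paper's distributed algorithm.

def average_cost_td(steps=20000, alpha=0.05, beta=0.01, seed=0):
    rng = random.Random(seed)
    V = [0.0, 0.0]   # differential values of the two states
    rho = 0.0        # running estimate of the average cost
    s = 0
    for _ in range(steps):
        s_next = rng.choice([0, 1])       # toy uncontrolled dynamics
        cost = 1.0 if s == 1 else 0.0     # state 1 incurs unit cost
        delta = cost - rho + V[s_next] - V[s]   # average-cost TD error
        V[s] += alpha * delta
        rho += beta * delta
        s = s_next
    return rho, V

rho, V = average_cost_td()
print(round(rho, 2))  # close to 0.5: each state is visited about half the time
```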
{"title":"A Distributed Average Cost Reinforcement Learning approach for Power Control in Wireless 5G Networks","authors":"A. Ornatelli, A. Giuseppi, A. Tortorelli","doi":"10.1109/aiiot54504.2022.9817168","DOIUrl":"https://doi.org/10.1109/aiiot54504.2022.9817168","url":null,"abstract":"This paper deals with the transmission power control problem in wireless networks. Such a problem represents a well known and relevant issue as it allows to efficiently manage the network's required energy and the interference experienced by end-users. With the widespread diffusion of smart devices, the relevance of this aspect further increased and has been identified as such also in 5G standards. The problem has been formalized as a Multi-Agent Reinforcement Learning approach (MARL) to guarantee scalability and robustness. These two aspects also drove the development of an original Distributed Average-Cost Temporal-Difference (TD) Learning algorithm. To adopt such an algorithm, a Markov Game formulation of the power control problem has also been derived. The effectiveness of the proposed distributed framework in reducing the total network's transmission power has been proved by means of simulations in a specific case study.","PeriodicalId":409264,"journal":{"name":"2022 IEEE World AI IoT Congress (AIIoT)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121172374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
COVID-19 Prediction based on Infected Cases and Deaths of Bangladesh using Deep Transfer Learning
Pub Date : 2022-06-06 DOI: 10.1109/aiiot54504.2022.9817160
Khan Md Hasib, S. Sakib, J. Mahmud, Kamruzzaman Mithu, Md. Saifur Rahman, Mohammad Shafiul Alam
The severely infectious virus known as "COVID-19" has wreaked havoc on the planet, with billions of people staying inside to keep the disease from spreading. Experts and professionals in many disciplines are working tirelessly to create vaccines and preventative techniques to help the globe overcome this difficult crisis. In Bangladesh, the number of persons infected with Coronavirus is particularly alarming. An accurate prognosis of the epidemic, on the other hand, may aid in the management of this contagious illness until a remedy is discovered. This study aims to forecast impending COVID-19 infected cases and fatalities from a time series dataset using a proposed deep transfer learning model, in which an encoder-decoder CNN-LSTM is combined with deep CNN pretrained models: ResNet-50, DenseNet-201, MobileNet-V2, and Inception-ResNet-V2. We also predict the daily infected cases and fatalities over the following 180 days in the data visualization segment using AIC and BIC selection criteria. The suggested models are also used to anticipate Bangladesh's daily confirmed cases and deaths, with error evaluated against three performance criteria. We found that ResNet-50 performs best at predicting infected cases and deaths owing to COVID-19 in Bangladesh in terms of MAPE, MAE, and RMSE evaluations.
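The AIC/BIC model-selection step mentioned above has a standard least-squares form: AIC = n·ln(RSS/n) + 2k and BIC = n·ln(RSS/n) + k·ln(n), and the candidate with the lower score is preferred. A minimal sketch with illustrative numbers (not the Bangladesh case data):

```python
import math

# AIC/BIC for least-squares fits; lower is better. RSS values and model
# sizes below are illustrative assumptions, not results from the paper.

def aic_bic(rss, n, k):
    """Return (AIC, BIC) for a least-squares fit with residual sum of
    squares rss, n observations, and k free parameters."""
    ll_term = n * math.log(rss / n)
    return ll_term + 2 * k, ll_term + k * math.log(n)

# Two candidate fits of the same 180-point series:
aic1, bic1 = aic_bic(rss=400.0, n=180, k=2)   # simpler model
aic2, bic2 = aic_bic(rss=390.0, n=180, k=6)   # more flexible model
best = "simple" if aic1 < aic2 else "flexible"
print(best)  # simple: the small RSS gain does not pay for 4 extra parameters
```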
{"title":"COVID-19 Prediction based on Infected Cases and Deaths of Bangladesh using Deep Transfer Learning","authors":"Khan Md Hasib, S. Sakib, J. Mahmud, Kamruzzaman Mithu, Md. Saifur Rahman, Mohammad Shafiul Alam","doi":"10.1109/aiiot54504.2022.9817160","DOIUrl":"https://doi.org/10.1109/aiiot54504.2022.9817160","url":null,"abstract":"The severely infectious virus known as “COVID- 19” has wreaked havoc on the planet, trapping to keep the disease from spreading, while billions of people are staying inside. Every experts and professionals in many disciplines are working tirelessly to create a vaccine and preventative techniques to help the globe overcome this difficult crisis. In Bangladesh, the number of persons infected with Coronavirus is particularly alarming. A accurate prognosis of the epidemic, on the other hand, may aid in the management of this contagious illness until a remedy is discovered. This study aims to forecast impending COVID-19 exposed instances and fatalities using a time series dataset utilizing proposed deep transfer learning model where encoder-decoder CNN-LSTM along with deep CNN pretrained models such as: ResNet-50, DenseNet-201, MobileNet-V2, and Inception-ResNet-V2 performed. We also predict the regular exposed instances and fatalities throughout the following 180 days in data visualization segment using AIC and BIC selection criteria. The suggested paradigms are also used to anticipate Bangladesh's daily confirmed cases and daily which is evaluated by error based on three performance criteria. 
We discovered that ResNet-50 performs better among others for predicting infected case and deaths owing to COVID-19 in Bangladesh in terms of MAPE, MAE and RMSE evaluations.","PeriodicalId":409264,"journal":{"name":"2022 IEEE World AI IoT Congress (AIIoT)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129292040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
Comparing Pretrained Image-Net CNN with a Siamese Architecture for Few-Shot Learning Applications in Radar Systems
Pub Date : 2022-06-06 DOI: 10.1109/aiiot54504.2022.9817228
Cesar Martinez Melgoza, Kayla Lee, Tyler Groom, Nate Ruppert, K. George, Henry Lin
Over the years, the increase in electronic devices and innovation in technological capabilities have resulted in increased traffic in the electromagnetic spectrum, making it harder for radar systems to distinguish multiple emitters under added interference. Traditional classification methods, such as machine learning, prove to be a suitable solution to this problem; however, these models require an enormous amount of data to train and evaluate. This experiment implements a Few-Shot learning framework and evaluates the performance of different neural network architectures, such as a standard Convolutional Neural Network and a Siamese Network from a previous experiment. The experiment utilizes several kinds of hardware: the ZCU104 FPGA board, the AD-FMCOMMS2-EBZ RF module, the Jetson TX2, and the NVIDIA Titan RTX. The hardware is evaluated using performance metrics such as hardware acceleration to find the best balance among computational power, acceleration speed, and evaluation accuracy.
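The few-shot decision rule underlying a Siamese setup can be sketched without the learned network: a query is assigned the label of the support example whose embedding lies closest. Here the "embedding" is a stand-in (mean and variance of the signal), whereas the paper learns one with convolutional twins; emitter names and signals are made up for illustration:

```python
import math

# Nearest-embedding few-shot classification. embed() is a toy stand-in for
# a learned Siamese encoder; signals and labels are hypothetical.

def embed(signal):
    """Map a signal to a 2-D feature: (mean, variance)."""
    m = sum(signal) / len(signal)
    v = sum((x - m) ** 2 for x in signal) / len(signal)
    return (m, v)

def few_shot_classify(query, support):
    """support: list of (signal, label); return the label of the support
    example whose embedding is nearest to the query's embedding."""
    q = embed(query)
    return min(support, key=lambda item: math.dist(q, embed(item[0])))[1]

support = [([0, 1, 0, 1], "emitter_A"), ([5, 6, 5, 6], "emitter_B")]
print(few_shot_classify([0, 1, 1, 0], support))  # emitter_A
```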
{"title":"Comparing Pretrained Image-Net CNN with a Siamese Architecture for Few-Shot Learning Applications in Radar Systems","authors":"Cesar Martinez Melgoza, Kayla Lee, Tyler Groom, Nate Ruppert, K. George, Henry Lin","doi":"10.1109/aiiot54504.2022.9817228","DOIUrl":"https://doi.org/10.1109/aiiot54504.2022.9817228","url":null,"abstract":"Over the years, the increase in electronic devices and innovation towards technological capabilities have resulted in an increase in traffic in the electromagnetic spectrum, thus making it harder for radar systems to distinguish multiple emitters with added interference. Traditional methods for classification, such as machine learning, prove to be a suitable solution for this problem, however these models require an enormous amount of data to train and evaluate. This experiment implements a Few-Shot learning framework and evaluates the performance of different Neural Network Architectures such as a standard Convolutional Neural Network, and a Siamese Network from a previous experiment. The experiment will utilize different kinds of hardware equipment. This includes the ZCU104 FPGA board, AD-FMCOMMS2-EBZ RF module, the Jetson TX2, and NVIDIA Titan RTX. The hardware equipment will be evaluated using performance metrics such as hardware acceleration, to find the best medium between computational power, acceleration speed, and evaluation accuracy.","PeriodicalId":409264,"journal":{"name":"2022 IEEE World AI IoT Congress (AIIoT)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116975154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Employing Edge Computing to Enhance Self-Defense Capabilities of IoT Devices
Pub Date : 2022-06-06 DOI: 10.1109/aiiot54504.2022.9817368
Jack Li, Yim-Fun Hu
Although the success of Internet applications has quickly pushed the development and adoption of the Internet-of-Things (IoT) into the market, IoT devices are also exposed to attacks from the network, which raises security issues for these devices. Most IoT devices are embedded systems, and little work has treated device security as part of the device design, because most applications force engineers to focus mainly on implementing the system's functions with minimal hardware, software, and power consumption. Many new technologies, such as AI and machine learning, could provide good solutions for device security. However, these technologies rely on complex computation, large amounts of memory, and other resources that most IoT devices, such as smart sensors, lack. Using edge computing to provide security solutions for IoT devices is one approach to solving IoT security problems. This work proposes detecting malfunctions in an IoT device's system through edge computing to make the device more secure.
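One lightweight way an edge node could flag device malfunctions, in the spirit of the abstract, is to keep running statistics of a sensor's readings and flag values that drift far outside the learned band. The warm-up length and 3-sigma rule below are illustrative choices, not the paper's method:

```python
# Edge-side anomaly check using Welford's online mean/variance and a
# k-sigma band. Warm-up of 10 samples and k=3 are illustrative assumptions.

class EdgeAnomalyDetector:
    def __init__(self, k=3.0, warmup=10):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.k, self.warmup = k, warmup

    def update(self, x):
        """Judge x against the current baseline, then fold it in.
        Returns True if x looks anomalous."""
        if self.n >= self.warmup:
            std = (self.m2 / self.n) ** 0.5
            anomalous = abs(x - self.mean) > self.k * max(std, 1e-9)
        else:
            anomalous = False  # no baseline yet
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)
        return anomalous

det = EdgeAnomalyDetector()
readings = [20.0, 20.1, 19.9, 20.0, 20.2, 19.8, 20.1, 20.0, 19.9, 20.1, 35.0]
flags = [det.update(r) for r in readings]
print(flags[-1])  # True: the 35.0 spike falls far outside the learned band
```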
{"title":"Employing Edge Computing to Enhance Self-Defense Capabilities of IoT Devices","authors":"Jack Li, Yim-Fun Hu","doi":"10.1109/aiiot54504.2022.9817368","DOIUrl":"https://doi.org/10.1109/aiiot54504.2022.9817368","url":null,"abstract":"Although the success in application of the Internet pushes development and applications of the Internet-of-Things (IoT) on market quickly, IoT devices are also exposed to attacks from the network, which raises the security issues of IoT devices. Most IoT devices are embedded systems, and there was little work on device security as a part of the device design because most applications force engineers to mainly focus on how to implement the systems' functions with less hardware and software design as well as less power consumption. There are many new technologies, such as AI, machine learning that could provide good solutions to device security. However, all these new technologies rely on complex calculation and large amount of memory etc., which is not part of most IoT devices, such as a smart sensor. Using edge computing to provide some security solutions for IoT devices is one approach to solve the IoT security problems. Detecting some malfunctions in the system at an IoT device by edge computing is proposed in this work to make an IoT device more secure.","PeriodicalId":409264,"journal":{"name":"2022 IEEE World AI IoT Congress (AIIoT)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116750036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Evaluation of Naïve Bayesian Algorithms for Cyber-Attacks Detection in Wireless Sensor Networks
Pub Date : 2022-06-06 DOI: 10.1109/aiiot54504.2022.9817298
Shereen S. Ismail, H. Reza
Wireless Sensor Networks (WSNs) are one of the operating platforms of the Internet of Things (IoT) and have proliferated into a wide range of applications. These networks comprise many sensors with restricted sensing, communication, storage, and power resources. Security becomes a critical concern when protecting such resource-scarce networks from malicious activities that target them. Several solutions have been presented in the literature; however, machine learning has proven its appropriateness for designing energy-efficient detection measures against cyber-attacks targeting WSNs. This paper presents a WSN security performance evaluation of three Naïve Bayesian machine learning classification variants: Gaussian Naïve Bayes, Multinomial Naïve Bayes, and Bernoulli Naïve Bayes, compared against three well-known baseline algorithms: K-Nearest Neighbors, Support Vector Machine, and Multilayer Perceptron. We applied Spearman correlation for univariate feature selection. The specialized dataset WSN-DS was used for training and testing.
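The Spearman-based univariate selection step can be sketched from first principles: rank-correlate each feature with the class label and keep the strongest. The toy data and feature names below are hypothetical, not the WSN-DS features; the no-ties shortcut formula is used, so with tied labels the score is approximate but still usable for ranking:

```python
# Univariate feature ranking by absolute Spearman correlation with the
# label. Feature names and values are made-up illustrations.

def rank(values):
    """1-based ranks of values (ties broken by position; stable sort)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos + 1.0
    return r

def spearman(x, y):
    """Spearman rho via the no-ties shortcut 1 - 6*sum(d^2)/(n(n^2-1))."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

label = [0, 0, 1, 1, 1, 0]
features = {
    "pkt_rate": [1, 2, 9, 8, 7, 3],   # tracks the label closely
    "node_id":  [4, 1, 6, 2, 5, 3],   # unrelated
}
scores = {name: abs(spearman(col, label)) for name, col in features.items()}
print(max(scores, key=scores.get))  # pkt_rate
```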
{"title":"Evaluation of Naïve Bayesian Algorithms for Cyber-Attacks Detection in Wireless Sensor Networks","authors":"Shereen S. Ismail, H. Reza","doi":"10.1109/aiiot54504.2022.9817298","DOIUrl":"https://doi.org/10.1109/aiiot54504.2022.9817298","url":null,"abstract":"Wireless Sensor Network (WSN) is one of the Internet of Things (IoT) operating platforms, which has proliferated into a wide range of applications. These networks comprise many resource-restricted sensors in terms of sensing, communication, storage, and power. Security becomes a critical concern to protect the network of scarce resources from any malicious activities that target the network. Several solutions have been presented in the literature; however, machine learning has proven its appropriateness in designing energy-efficient detection measures for cyber-attacks targeting WSNs. This paper presents a WSN security performance evaluation of three Naïve Bayesian machine learning classification technique variants: Gaussian Naïve Bayes, Multinomial Naïve Bayes, and Bernoulli Naïve Bayes, compared to three well-known base algorithms: K-Nearest Neighbors, Support Vector Machine, and Multilayer Perceptron. We applied Spearman correlation as a univariate feature selection. The specialized dataset, WSN-DS, was used for training and testing purposes. 
The performance of the six classifiers was evaluated in terms of accuracy, probability of detection, positive prediction value, probability of false alarm, probability of misdetection, memory usage, processing time, prediction time, and complexity.","PeriodicalId":409264,"journal":{"name":"2022 IEEE World AI IoT Congress (AIIoT)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115463968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
MusCare+: Muscle Monitoring for Anomalies
Pub Date : 2022-06-06 DOI: 10.1109/aiiot54504.2022.9817161
Nicholas Foley, Chen-Hsiang Yu
Muscles are an essential part of everyday life, and any damage or illness that affects them can cause massive problems. Patients who are diagnosed with muscle injuries and illnesses largely remain unmonitored, apart from the few appointments they have with doctors each year. Moving from no monitoring to constant monitoring can not only paint a better picture of how a muscle condition is progressing, but can also inform medical professionals whether their treatment regimen is actually working. In this paper, we propose a new system that can monitor a patient's muscle health and predict muscle conditions. The system mainly focuses on the shoulder but could be expanded to other areas of the body. By utilizing the strength of machine learning and the Android platform, we created a platform that can monitor muscle health quickly and easily. The current prototype system is not only able to display live data gathered from an EMG sensor, but can also predict whether the muscle is currently flexed or relaxed. Although the current prototype system has limitations, a more robust machine learning algorithm could be trained to give a wide array of muscle health predictions.
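The flexed-vs-relaxed decision rests on a simple signal property: a flexed muscle produces a higher-amplitude EMG signal, so a root-mean-square measure over a window separates the two states. A minimal sketch with a hypothetical threshold and synthetic windows (the prototype trains a classifier on real sensor data instead):

```python
# RMS-threshold sketch of the flexed/relaxed decision. The 0.5 threshold
# and the sample windows are illustrative assumptions, not sensor data.

def rms(window):
    """Root-mean-square amplitude of an EMG window."""
    return (sum(x * x for x in window) / len(window)) ** 0.5

def classify_window(window, threshold=0.5):
    return "flexed" if rms(window) > threshold else "relaxed"

relaxed = [0.05, -0.03, 0.04, -0.06, 0.02]   # low-amplitude signal
flexed = [0.9, -1.1, 0.8, -0.7, 1.0]         # high-amplitude signal
print(classify_window(relaxed), classify_window(flexed))  # relaxed flexed
```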
{"title":"MusCare+: Muscle Monitoring for Anomalies","authors":"Nicholas Foley, Chen-Hsiang Yu","doi":"10.1109/aiiot54504.2022.9817161","DOIUrl":"https://doi.org/10.1109/aiiot54504.2022.9817161","url":null,"abstract":"- Muscles are an essential part of everyday life and any damage or illness that affects them can cause massive problems. Patients who are diagnosed with muscle injuries and illnesses largely remain unmonitored, even though a few appointments they have with doctors annually. Moving from unmonitored to constant monitoring can not only paint a better picture of how a muscle condition is progressing, but it also can inform medical professionals if their treatment regimen is actually working. In this paper, we propose a new system that can monitor muscle health of a patient and predict the muscle conditions. This system mainly focuses on the shoulder but could be expanded to other areas of the body. By utilizing the strength of machine learning and the Android platform, we created a platform that can monitor muscle health quickly and easily. The current prototype system is not only able to display live data gathered from an EMG sensor, but it can also predict whether the muscle is currently flexed or relaxed. 
Although there is a limitation in current prototype system, a more robust machine learning algorithm could be trained to give a wide array of muscle health predictions.","PeriodicalId":409264,"journal":{"name":"2022 IEEE World AI IoT Congress (AIIoT)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122002038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Energy Efficient Double Critic Deep Deterministic Policy Gradient Framework for Fog Computing
Pub Date : 2022-06-06 DOI: 10.1109/aiiot54504.2022.9817157
Bhargavi Krishnamurthy, S. Shiva
Nowadays, data is growing at a faster pace, and big data applications are required to be more agile and flexible. There is a need for a decentralized model to carry out the required substantial amount of computation across edge devices, a need that has led to the innovation of fog computing. Energy consumption among edge devices is one of the potentially threatening issues in fog computing; their high energy demand also contributes to higher computation cost. In this paper, a Double Critic (DC) approach is applied on top of the Deep Deterministic Policy Gradient (DDPG) technique to design the DC-DDPG framework, which formulates high-quality energy-efficiency policies for fog computing. The performance of the proposed framework is outstanding compared to existing works on metrics such as energy consumption, response time, total cost, and throughput. These are measured under two different fog computing scenarios: a fog layer with multiple entities in one region, and a fog layer with multiple entities in multiple regions. Mathematical modeling reveals that the energy-efficiency policies formulated are of high quality, as they satisfy quality-assurance metrics such as empirical correctness, robustness, model relevance, and data privacy.
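The double-critic idea can be shown in miniature: keep two independent Q estimates and use the pessimistic (minimum) one when forming the bootstrap target, which damps the overestimation bias of a single critic. A tabular toy sketch (the paper uses neural critics inside DDPG; the numbers are illustrative):

```python
# Double-critic bootstrap target: both critics regress toward
# reward + gamma * min(Q1', Q2'). Scalar toy version of the idea.

def double_critic_target(reward, q1_next, q2_next, gamma=0.99):
    """Pessimistic one-step target using the smaller next-state estimate."""
    return reward + gamma * min(q1_next, q2_next)

def update_critics(q1, q2, reward, q1_next, q2_next, lr=0.5):
    """Move both critic estimates toward the shared pessimistic target."""
    target = double_critic_target(reward, q1_next, q2_next)
    return q1 + lr * (target - q1), q2 + lr * (target - q2)

q1, q2 = update_critics(q1=0.0, q2=0.0, reward=1.0, q1_next=2.0, q2_next=3.0)
print(q1, q2)  # both move toward 1 + 0.99 * min(2, 3) = 2.98
```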
{"title":"Energy Efficient Double Critic Deep Deterministic Policy Gradient Framework for Fog Computing","authors":"Bhargavi Krishnamurthy, S. Shiva","doi":"10.1109/aiiot54504.2022.9817157","DOIUrl":"https://doi.org/10.1109/aiiot54504.2022.9817157","url":null,"abstract":"-Nowadays the data is growing at a faster pace and the big data applications are required to be more agile and flexible. There is a need for a decentralized model to carry out the required substantial amount of computation across edge devices as they has led to the innovation of fog computing. Energy consumption among the edge devices is one of the potential threatening issues in fog computing. Their high energy demand also contributes to higher computation cost. In this paper Double Critic (DC) approach is enforced over the Deep Deterministic Policy Gradient (DDPG) technique to design the DC-DDPG framework which formulates high quality energy efficiency policies for fog computing. The performance of the proposed framework is outstanding compared to existing works based on the metrics like energy consumption, response time, total cost, and throughput. They are measured under two different fog computing scenarios i.e., fog layer with multiple entities in a region and fog layer with multiple entities in multiple regions. 
Mathematical modeling reveals that the energy efficiency policies formulated are of high quality as they satisfy the quality assurance metrics, such as empirical correctness, robustness, model relevance, and data privacy.","PeriodicalId":409264,"journal":{"name":"2022 IEEE World AI IoT Congress (AIIoT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126705267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Heart failure survival prediction using machine learning algorithm: am I safe from heart failure?
Pub Date : 2022-06-06 DOI: 10.1109/aiiot54504.2022.9817303
M. Mamun, Afia Farjana, Miraz Al Mamun, Md Salim Ahammed, Md Minhazur Rahman
Heart Failure (HF) is a prevalent ailment worldwide, and despite significant medical advancements in the past few decades, cardiovascular disease is still the leading cause of death. Although HF itself is a critical risk to patient survival, other co-existing pathophysiological conditions can also present a significant threat. Because so many elements contribute to a patient's survival in heart failure, predicting the chances of survival without a computational technique can be difficult for cardiac doctors, eventually preventing the patient from receiving correct care. Fortunately, categorization and prediction models exist that can assist cardiologists in designing proper treatment schemes from relevant medical data. This study aims to develop prediction models for patient survival under HF conditions. In this paper, we analyzed the UCI heart failure dataset containing relevant medical information on 299 HF patients. We applied several machine learning classifiers to predict patient survival from HF-related pathophysiological parameters and analyzed the features corresponding to the most crucial risk factors using the correlation matrix. Our prediction models used the following machine learning techniques: Logistic Regression, Decision Tree, Support Vector Machine, XGBoost, LightGBM, Random Forest, KNN, and Bagging. This paper also presents a comparative study of the performance of the different machine learning algorithms. Our analysis indicates that LightGBM achieved the highest accuracy of 85% and AUC of 93% in predicting survival of HF patients, compared to the other machine learning algorithms.
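The AUC metric the study reports has a direct probabilistic reading: it is the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative one. A minimal sketch computing it by pairwise comparison (the labels and scores below are toy values, not model output from the paper):

```python
# AUC as the fraction of positive/negative pairs ranked correctly
# (ties count half). Labels and scores are illustrative.

def auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
print(round(auc(labels, scores), 3))  # 0.889: 8 of 9 pairs ranked correctly
```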
{"title":"Heart failure survival prediction using machine learning algorithm: am I safe from heart failure?","authors":"M. Mamun, Afia Farjana, Miraz Al Mamun, Md Salim Ahammed, Md Minhazur Rahman","doi":"10.1109/aiiot54504.2022.9817303","DOIUrl":"https://doi.org/10.1109/aiiot54504.2022.9817303","url":null,"abstract":"Heart Failure (HF) is a prevalent ailment worldwide, and despite significant medical advancements in the past few decades, cardiovascular disease is still the leading cause of death. Although HF itself is a critical risk for patient survival, other co-existing pathophysiological conditions can present a significant threat to patient survival. Because so many elements contribute to a patient's survival in heart failure, predicting the chances of survival without using a computational technique can be difficult for cardiac doctors, eventually preventing the patient from receiving correct care. Fortunately, categorization and prediction models exist, which can assist cardiologists in designing proper treatment schemes using relevant medical data. This study aims to develop prediction models for patient survival in HF conditions. In this paper, we analyzed the UCI heart failure dataset containing relevant medical information of 299 HF patients. We applied several machine learning classifiers to predict the patient survival from HF-related pathophysiological parameters and analyzed the features corresponding to the most crucial risk factors using the correlation matrix. Our prediction models used the following machine learning techniques- Logistic Regression, Decision Tree, Support Vector Machine, XGBoost, LightGBM, Random Forest, KNN, and Bagging and were able to find a better result. Also, this paper presents a comparative study by analyzing the performance of different machine learning algorithms. Our analysis indicates that LightGBM achieved the highest Accuracy of 85% and AUC of 93% in predicting patient survival of HF patients compared to other machine learning algorithms.","PeriodicalId":409264,"journal":{"name":"2022 IEEE World AI IoT Congress (AIIoT)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125228930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23
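As an illustration of the kind of pipeline this abstract describes (train several classifiers, then compare accuracy and AUC on held-out patients), here is a minimal sketch. The UCI heart-failure records are not bundled here, so synthetic data of the same shape (299 rows, 12 features) stands in for them, and a Random Forest, one of the classifiers the paper lists, stands in for the full model comparison; the scores it prints are for the synthetic data, not the paper's reported 85%/93%.

```python
# Hypothetical sketch: synthetic stand-in for the 299-patient UCI
# heart-failure table; Random Forest as one of the paper's classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# 299 samples, 12 clinical-style features, binary survival label.
X, y = make_classification(n_samples=299, n_features=12, n_informative=6,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_tr, y_tr)

# Evaluate with the same two metrics the abstract reports.
pred = clf.predict(X_te)
prob = clf.predict_proba(X_te)[:, 1]
acc = accuracy_score(y_te, pred)
auc = roc_auc_score(y_te, prob)
print(f"accuracy={acc:.2f}  AUC={auc:.2f}")
```

Swapping `RandomForestClassifier` for LightGBM, XGBoost, or the other listed models and repeating the evaluation is what the paper's comparative study amounts to.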
MVE-based Reinforcement Learning Framework with Explainability for improving Quality of Experience of Application Placement in Fog Computing
Pub Date : 2022-06-06 DOI: 10.1109/aiiot54504.2022.9817331
Bhargavi Krishnamurthy, S. Shiva, Saikat Das, Ph.D.
Fog computing can process the big data generated by Internet of Things (IoT) architectures. The hierarchical, heterogeneous, and distributed nature of fog computing makes application placement a challenging task. IoT applications are time-sensitive, and their placement decision depends on the user's Quality of Experience (QoE). This paper proposes an explainable Model Value Evaluation based Reinforcement Learning (MVERL) framework for placing applications on appropriate fog nodes. The resulting application placement policies score well on quality-related metrics such as correctness, model relevance, ε-differential privacy, and robustness. The performance of the proposed MVERL is evaluated on fog nodes with both limited and unlimited processors. Simulation results show that the proposed MVERL outperforms existing works on several performance metrics.
{"title":"MVE-based Reinforcement Learning Framework with Explainability for improving Quality of Experience of Application Placement in Fog Computing","authors":"Bhargavi Krishnamurthy, S. Shiva, Saikat Das, Ph.D.","doi":"10.1109/aiiot54504.2022.9817331","DOIUrl":"https://doi.org/10.1109/aiiot54504.2022.9817331","url":null,"abstract":"Fog computing can process big data generated by the Internet of Things (IoT) architectures. The hierarchical, heterogeneous and distributed form of fog computing makes the application placement a challenging task. IoT applications are time-sensitive, and their placement decision is dependent on the user's Quality of Experience (QoE). This paper proposes an explainable Model Value Evaluation based Reinforcement Learning (MVERL) framework for placing applications among appropriate fog nodes. The quality of the application placement policies is good in terms of metrics related to quality like correctness, model relevance, ε-differential privacy, and robustness. The performance results of the proposed MVERL are evaluated considering fog nodes with both limited and unlimited processors. The simulation found that the proposed MVERL outperforms existing works concerning a few performance metrics.","PeriodicalId":409264,"journal":{"name":"2022 IEEE World AI IoT Congress (AIIoT)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134147491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
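To make the RL-based placement idea concrete: the abstract's core loop is an agent that chooses a fog node for each incoming application and learns from a QoE-style reward. The sketch below is not the authors' MVERL framework; it is a generic, heavily simplified tabular value-learning toy (each placement step treated as a bandit, with hypothetical latency and capacity numbers) that only illustrates the reward-driven placement mechanic.

```python
# Toy illustration (NOT the paper's MVERL): learn which of 3 fog nodes to
# place each of 6 applications on, rewarding low latency and penalizing
# overloaded nodes. All numbers are made up for the example.
import random

random.seed(0)

LATENCY = [1.0, 2.0, 4.0]   # per-node latency cost (hypothetical)
CAPACITY = [1, 2, 3]        # apps a node can host before overload (hypothetical)

def place_apps(n_apps=6, episodes=500, alpha=0.1, eps=0.1):
    # q[step][node]: learned value of placing the step-th app on that node.
    q = [[0.0] * 3 for _ in range(n_apps)]
    for _ in range(episodes):
        load = [0, 0, 0]
        for step in range(n_apps):
            # epsilon-greedy action selection over the 3 fog nodes.
            if random.random() < eps:
                a = random.randrange(3)
            else:
                a = max(range(3), key=lambda i: q[step][i])
            load[a] += 1
            # QoE-style reward: negative latency, big penalty on overload.
            reward = -LATENCY[a] - (5.0 if load[a] > CAPACITY[a] else 0.0)
            q[step][a] += alpha * (reward - q[step][a])
    return q

q = place_apps()
best = [max(range(3), key=lambda i: q[s][i]) for s in range(6)]
print("greedy placement per step:", best)
```

Because the toy state ignores accumulated load, this is a bandit-style simplification; a full RL treatment (as in the paper) would condition the policy on the current node loads and application requirements.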
Journal
2022 IEEE World AI IoT Congress (AIIoT)