
Latest publications from the 2022 IEEE International Conference on Internet of Things and Intelligence Systems (IoTaIS)

IoTaIS 2022 Cover Page
Pub Date : 2022-11-24 DOI: 10.1109/iotais56727.2022.9975933
{"title":"IoTaIS 2022 Cover Page","authors":"","doi":"10.1109/iotais56727.2022.9975933","DOIUrl":"https://doi.org/10.1109/iotais56727.2022.9975933","url":null,"abstract":"","PeriodicalId":138894,"journal":{"name":"2022 IEEE International Conference on Internet of Things and Intelligence Systems (IoTaIS)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127830888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Accuracy-Time Efficient Hyperparameter Optimization Using Actor-Critic-based Reinforcement Learning and Early Stopping in OpenAI Gym Environment
Pub Date : 2022-11-24 DOI: 10.1109/IoTaIS56727.2022.9975984
Albert Budi Christian, Chih-Yu Lin, Y. Tseng, Lan-Da Van, Wan-Hsun Hu, Chia-Hsuan Yu
In this paper, we present accuracy-time efficient hyperparameter optimization (HPO) using advantage actor-critic (A2C)-based reinforcement learning (RL) and early stopping in an OpenAI Gym environment. The A2C RL agent improves the hyperparameter selection so that machine learning (ML) algorithms, including XGBoost, the support vector classifier (SVC), and random forest, achieve comparable accuracy. Given a specified target accuracy for the ML algorithms, the early stopping scheme saves computation cost. Ten standard datasets are used to validate the accuracy-time efficient HPO. Experimental results show that the presented accuracy-efficient HPO architecture improves accuracy by 0.77% on average over the default hyperparameters for random forest, and early stopping saves 64% of the computation cost on average for random forest compared with running without it.
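As an illustration of the environment-style HPO loop described in this abstract, the sketch below frames random-forest tuning as a Gym-like environment whose reward is validation accuracy and whose episode terminates early once a target accuracy is reached. It is only a minimal stand-in for the paper's method: the `HPOEnv` class, the discrete search space, the target accuracy, and the random-search agent that replaces the A2C actor-critic are all illustrative assumptions.

```python
# Minimal sketch: HPO as a Gym-style environment with early stopping.
# Assumptions: a plain Python env (no gym dependency), a random agent as a
# stand-in for A2C, and random forest on the sklearn breast-cancer dataset.
import random
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

SEARCH_SPACE = [  # hypothetical discrete action space
    {"n_estimators": n, "max_depth": d}
    for n in (50, 100, 200) for d in (3, 5, None)
]

class HPOEnv:
    """Reward = validation accuracy; episode stops early at target accuracy."""
    def __init__(self, target_acc=0.96, max_steps=10):
        X, y = load_breast_cancer(return_X_y=True)
        self.X_tr, self.X_va, self.y_tr, self.y_va = train_test_split(
            X, y, test_size=0.3, random_state=0)
        self.target_acc, self.max_steps = target_acc, max_steps

    def reset(self):
        self.steps = 0
        return 0.0  # trivial observation: accuracy so far

    def step(self, action_idx):
        params = SEARCH_SPACE[action_idx]
        model = RandomForestClassifier(random_state=0, **params).fit(self.X_tr, self.y_tr)
        acc = accuracy_score(self.y_va, model.predict(self.X_va))
        self.steps += 1
        done = acc >= self.target_acc or self.steps >= self.max_steps  # early stopping
        return acc, acc, done, {"params": params}

env = HPOEnv()
obs, done = env.reset(), False
while not done:
    action = random.randrange(len(SEARCH_SPACE))  # an A2C policy would choose here
    obs, reward, done, info = env.step(action)
    print(info["params"], f"accuracy={reward:.4f}")
```

Plugging an actual A2C policy from an RL library in place of the random action choice recovers the structure the abstract describes.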
Citations: 0
Genetic profiling of olives for location of origin and variety discrimination using Machine Learning
Pub Date : 2022-11-24 DOI: 10.1109/IoTaIS56727.2022.9975888
M. Mavroforakis, H. Georgiou
Genetic profiling via biomarkers in the food industry is a technology that is gaining momentum in the context of quality assurance and protection against fraud, as well as securing commercial assets such as designation of origin. However, current solutions are based on methods that require significant computational resources and the management of large data volumes, making them unsuitable for applications in the context of the Internet of Things (IoT), edge computing, and microcontrollers (MCUs). This study presents a novel, computationally efficient and robust approach for fully field-integrated, low-complexity and high-accuracy classification of olive variety and location of origin, based on genetic ‘fingerprinting’ via a minimal set of information-rich features. The method is tested on real-world datasets, achieving accuracy rates above 96% and 99%, respectively, using various instance-based and tree-ensemble classification models.
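The pipeline implied here (a minimal set of information-rich features feeding instance-based and tree-ensemble classifiers) can be sketched as follows. This is a hedged stand-in, not the authors' code: the synthetic data from `make_classification`, the `SelectKBest` feature selector, and the two classifiers are assumptions in place of the real genetic-marker features and models.

```python
# Hedged sketch: minimal-feature classification with an instance-based and a
# tree-ensemble model, evaluated by cross-validation on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline

# Synthetic stand-in for genetic marker profiles (not the paper's dataset).
X, y = make_classification(n_samples=400, n_features=60, n_informative=8,
                           n_classes=4, random_state=0)

candidates = {
    "kNN (instance-based)": KNeighborsClassifier(n_neighbors=5),
    "Random forest (tree ensemble)": RandomForestClassifier(
        n_estimators=200, random_state=0),
}

for name, clf in candidates.items():
    # Keep only a minimal set of information-rich features before classifying.
    pipe = Pipeline([("select", SelectKBest(f_classif, k=10)), ("clf", clf)])
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```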
Citations: 0
An End-To-End Explainable AI System for Analyzing Breast Cancer Prediction Models
Pub Date : 2022-11-24 DOI: 10.1109/IoTaIS56727.2022.9975896
Revanth Reddy Kontham, Akhilesh Kumar Kondoju, M. Fouda, Z. Fadlullah
Deep learning-based predictive models for identifying malignant tumor data in various cancer types have emerged as a hot research topic in the Internet of Medical Things (IoMT). Such models can be deployed onto biomedical machines acting as IoMT devices to provide highly accurate breast cancer screening. While there has been significant advancement in deep learning models for classifying cancer imaging data, a key shortcoming is their use as black-box algorithms, which renders them unexplainable or non-interpretable. Caregivers, such as oncologists and radiologists, however, need to understand the nature of the model outcome. In this paper, we address this by providing an end-to-end explainable AI framework for analyzing breast cancer prediction models based on a publicly available mammography dataset. In addition, we demonstrate how the various methods in such an end-to-end system can be effectively evaluated with appropriate performance measures.
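As a hedged illustration of pairing a breast cancer prediction model with a generic post-hoc explanation step and a performance measure, the sketch below uses the tabular scikit-learn breast cancer dataset and permutation importance. It is a stand-in only: the paper works with a mammography imaging dataset and an end-to-end XAI system, and the model and explanation method chosen here are assumptions.

```python
# Hedged sketch: one generic explanation step (permutation importance) plus a
# performance measure, on tabular data as a stand-in for the paper's
# mammography pipeline. Not the authors' end-to-end system.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"Test ROC-AUC: {auc:.3f}")

# Explain: which input features drive the prediction the most?
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
ranking = sorted(zip(result.importances_mean, data.feature_names), reverse=True)
for importance, feature in ranking[:5]:
    print(f"{feature}: mean accuracy drop = {importance:.4f}")
```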
Citations: 0
A Two-Step Machine Learning Model for Stage-Specific Disease Survivability Prediction
Pub Date : 2022-11-24 DOI: 10.1109/IoTaIS56727.2022.9975966
Aya Farrag, Z. Fadlullah, M. Fouda
While traditional medical informatics focuses primarily on disease classification problems, disease survivability prediction for patients suffering from multi-stage conditions (e.g., congestive cardiac disorders, cancer types, diabetes, chronic kidney disorder, and so forth) surprisingly remains an overlooked research topic. In this paper, we address this topic, and among the numerous multi-stage chronic diseases we select the breast cancer use case, because survivability analysis and prediction for breast cancer patients help healthcare providers make informed decisions on recommended treatment pathways for different patients. We then combine two main strategies to solve the breast cancer survivability prediction problem using machine learning techniques. In the first strategy, we model the survivability prediction task as a two-step problem: 1) a classification problem to predict whether or not a patient survives for five years, and 2) a regression problem to forecast the number of remaining months for those who are predicted not to survive for five years. The second strategy is to develop stage-specific models, where each model is trained on instances belonging to a certain cancer stage, instead of using all stages together, in order to predict the survivability of patients from the same stage. We investigate the impact of adopting these strategies, along with applying different balancing techniques, on the model performance using the National Cancer Institute’s Surveillance, Epidemiology, and End Results (SEER) dataset. The results demonstrate that the proposed methods are effective in both survivability classification and regression.
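A minimal sketch of the two-step, stage-specific structure described in this abstract follows: for each stage, a classifier predicts five-year survival and a regressor, trained only on non-survivors, forecasts the remaining months. Synthetic data stands in for the SEER dataset, and the random-forest models and feature layout are assumptions rather than the authors' configuration.

```python
# Hedged sketch of the two-step, stage-specific structure: per cancer stage,
# a classifier predicts 5-year survival and a regressor, trained only on
# non-survivors, predicts remaining months. Synthetic data stands in for SEER.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
n = 1500
X = rng.normal(size=(n, 10))
stage = rng.integers(1, 5, size=n)                     # stages 1..4
survived_5y = (rng.random(n) > stage / 6).astype(int)  # later stage -> lower survival
months = rng.integers(1, 60, size=n)                   # remaining months (non-survivors)

stage_models = {}
for s in np.unique(stage):
    idx = stage == s
    clf = RandomForestClassifier(random_state=0).fit(X[idx], survived_5y[idx])
    non_survivors = idx & (survived_5y == 0)
    reg = RandomForestRegressor(random_state=0).fit(X[non_survivors], months[non_survivors])
    stage_models[int(s)] = (clf, reg)

def predict(x, s):
    """Two-step prediction for one patient record x belonging to stage s."""
    clf, reg = stage_models[s]
    if clf.predict(x.reshape(1, -1))[0] == 1:
        return "predicted to survive 5 years"
    return f"predicted survival: {reg.predict(x.reshape(1, -1))[0]:.0f} months"

print(predict(X[0], int(stage[0])))
```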
Citations: 1
Accuracy vs Efficiency: Machine Learning Enabled Anomaly Detection on the Internet of Things
Pub Date : 2022-11-24 DOI: 10.1109/IoTaIS56727.2022.9975889
Xin-Wen Wu, Yongtao Cao, Richard Dankwa
Anomaly detection is an important security mechanism for the Internet of Things (IoT). Existing work has focused on developing accurate anomaly detection models. However, due to the resource-constrained nature of IoT networks and the requirement of real-time security operation, cost-efficient approaches (in terms of computational efficiency and memory consumption) for anomaly detection are highly desirable in IoT applications. In this paper, we investigated machine learning (ML)-enabled anomaly detection models for the IoT with regard to multi-objective (Pareto) optimization that minimizes the detection error, execution time, and memory consumption simultaneously. Making use of well-known datasets consisting of network traffic traces captured in an IoT environment, we studied a variety of machine learning algorithms through the H2O AI platform. Our experimental results show that the Gradient Boosting Machine, Random Forest, and Deep Learning models are the most accurate and fastest anomaly detection models, while the Gradient Boosting Machine and Random Forest are the most accurate and memory-efficient models. These ML models form the Pareto-optimal set of anomaly detection models. Our results can be used by industry to guide the selection of ML models for anomaly detection on various IoT networks based on their security requirements and system constraints.
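The Pareto-optimal selection described above can be made concrete with a small dominance check over (detection error, execution time, memory) triples, as sketched below. The metric values are hypothetical placeholders for illustration only, not the paper's measurements.

```python
# Hedged sketch: selecting the Pareto-optimal set of anomaly detection models
# when minimizing detection error, execution time, and memory simultaneously.
# The metric values below are illustrative placeholders, not the paper's results.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(models):
    return {name: m for name, m in models.items()
            if not any(dominates(other, m) for o, other in models.items() if o != name)}

# (detection error, execution time in s, memory in MB) -- hypothetical values
candidates = {
    "GBM":          (0.020, 1.8, 120),
    "RandomForest": (0.021, 2.5,  90),
    "DeepLearning": (0.020, 1.5, 300),
    "NaiveBayes":   (0.080, 0.3,  20),
    "GLM":          (0.060, 0.4,  25),
    "kMeans":       (0.090, 0.5,  30),  # dominated, so it drops out of the front
}

for name, metrics in pareto_front(candidates).items():
    print(name, metrics)
```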
Citations: 0
Improving IndoBERT for Sentiment Analysis on Indonesian Stock Trader Slang Language
Pub Date : 2022-11-24 DOI: 10.1109/IoTaIS56727.2022.9975975
Enrico Fernandez, Anderies, Michael Gilbert Winata, Fadly Haikal Fasya, A. A. Gunawan
Recently, more people are accessing mobile stock trading apps, where investors send messages, comments, and posts. Performing sentiment analysis on these messages to predict stock price changes requires ever-improving machine learning models, and in particular the ability to identify Indonesian (Bahasa Indonesia) slang phrases in comments and posts. To develop a model that performs sentiment analysis related to stock price changes, we retrieved comments and posts from third-party applications. In this paper, we present such a model and evaluate it using datasets manually labelled by the authors. Our sentiment analysis approach is implemented with a fine-tuned IndoBERT model and achieves 60.35% accuracy in predicting the sentiment of 1289 comments and posts, which is better than previous studies. The tested model can perform sentiment analysis on stock price changes and is also capable of identifying the number of slang phrases in the comments and posts of Indonesian traders.
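A minimal sketch of loading an IndoBERT checkpoint for three-class sentiment classification with Hugging Face transformers is shown below. It assumes the `indobenchmark/indobert-base-p1` checkpoint, three sentiment labels, and a hypothetical slang-heavy comment; the classification head is randomly initialized here and would have to be fine-tuned on the manually labelled trader comments before its outputs mean anything.

```python
# Hedged sketch: loading an IndoBERT checkpoint for sentiment classification.
# The checkpoint name, num_labels=3, and the example sentence are assumptions;
# the classification head is untrained and needs fine-tuning on labelled data.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "indobenchmark/indobert-base-p1"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

labels = ["negative", "neutral", "positive"]
text = "Saham ini to the moon, cuan banget!"  # hypothetical slang-heavy comment

inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1).squeeze()
print({label: round(float(p), 3) for label, p in zip(labels, probs)})
```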
Citations: 2
Real time air quality monitoring with fog computing enabled IoT system: an experimental study
Pub Date : 2022-11-24 DOI: 10.1109/IoTaIS56727.2022.9975988
Kemal Cagri Serdaroglu, S. Baydere, Boonyarith Saovapakhiran
Fog computing helps handle and reduce the data traffic load towards the central cloud in IoT systems. These benefits are realized through offloaded fog services that participate in the decision-making process. Moreover, fog-based systems have the potential to mitigate scalability bottlenecks that occur in cloud-based systems. In this study, we elaborate on a fog-based design for a scalable real-time air quality monitoring and alert generation system. We established an emulation test bed with real data collected from air quality sensing nodes deployed around Bangkok and its vicinity to understand the behavior of the proposed solution in terms of waiting time characteristics. We analyzed the performance of the system in two design scenarios: the first scenario is built with the proposed fog solution and the second with the cloud-based approach. The performance results reveal the advantages of the proposed model for up to 120 air-box nodes and up to 200 client nodes.
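A minimal sketch of the fog node's role described above follows: raw air-quality readings are processed locally, alerts are raised at the edge, and only periodic aggregates are forwarded to the cloud, which is how the fog tier reduces upstream traffic and waiting time. The PM2.5 threshold, window size, and message format are assumptions, not the paper's design.

```python
# Hedged sketch of a fog node: filter and aggregate raw air-quality readings
# locally, raise alerts at the edge, and forward only summaries to the cloud.
# The PM2.5 threshold, window size, and message format are assumptions.
from statistics import mean

ALERT_PM25 = 55.0   # hypothetical alert threshold (ug/m3)
WINDOW = 5          # readings aggregated per cloud message

class FogNode:
    def __init__(self):
        self.buffer = []

    def ingest(self, reading):
        """Handle one sensor reading; return messages to send upstream, if any."""
        messages = []
        if reading["pm25"] >= ALERT_PM25:          # real-time decision at the edge
            messages.append({"type": "alert", "sensor": reading["sensor"],
                             "pm25": reading["pm25"]})
        self.buffer.append(reading["pm25"])
        if len(self.buffer) == WINDOW:             # periodic aggregate to the cloud
            messages.append({"type": "aggregate", "pm25_avg": mean(self.buffer)})
            self.buffer.clear()
        return messages

node = FogNode()
stream = [{"sensor": "bkk-01", "pm25": v} for v in (31, 42, 58, 49, 40, 36)]
for reading in stream:
    for msg in node.ingest(reading):
        print("-> cloud:", msg)
```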
Citations: 2
A Deep Learning based IoT Framework for Assistive Healthcare using Gesture Based Interface
Pub Date : 2022-11-24 DOI: 10.1109/IoTaIS56727.2022.9975885
Somayya Avadut, S. Udgata
Around the world, the proportion of senior citizens is increasing and is expected to reach around 20 percent of the population by 2050. Recognizing its importance, the United Nations has identified Health and Wellness as one of the Sustainable Development Goals (SDGs). The COVID-19 pandemic opened up new challenges for contactless interaction with, and control of, devices to ensure the well-being of citizens. In this paper, our main aim is to develop an intelligent framework based on a gesture-based interface that helps senior citizens and physically challenged people interact with and control different devices using only gestures. We focus on dynamic gesture recognition using a deep learning-based Convolutional Neural Network (CNN) model. The proposed system records continuous real-time data streams from non-invasive wearable sensors. This continuous real-time data stream is fragmented, using the Adaptive Threshold Setting algorithm, into data segments that are most likely to contain meaningful gesture data frames. The segmented data frames are provided as input to train, test, and validate the CNN model, which classifies them into predefined gesture classes. We use the MPU6050 Inertial Measurement Unit (IMU) sensor to collect hand/finger movement data, and the popular and widely used ESP8266 controller for data gathering, processing, and communication. We created a dataset of 36 gestures, covering the ten digits and the 26 English letters. For each gesture, 300 samples were collected from 5 subjects aged between 21 and 30, so the final dataset consists of 10800 samples belonging to 36 gestures. Six features, comprising linear acceleration and angular rotation along the three axes, are used for training and validation. The adaptive threshold selection algorithm segments 93.75% of the data correctly, and the CNN classification algorithm classifies 98.67% of gestures correctly.
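A minimal sketch of the segmentation step described above follows: a continuous 6-axis IMU stream is split into candidate gesture segments whenever its motion energy exceeds an adaptively chosen baseline. The threshold rule, window length, and synthetic signal are assumptions standing in for the paper's Adaptive Threshold Setting algorithm, and the CNN classifier that would consume the segments is omitted.

```python
# Hedged sketch of adaptive-threshold segmentation of a continuous 6-axis IMU
# stream into candidate gesture segments. The threshold update rule, window
# length, and synthetic signal are assumptions; the CNN classifier is omitted.
import numpy as np

rng = np.random.default_rng(0)
n, fs = 600, 50                               # samples, sampling rate (Hz)
data = rng.normal(0, 0.05, size=(n, 6))       # 3-axis accel + 3-axis gyro at rest
data[100:180] += rng.normal(0, 0.8, size=(80, 6))   # injected "gesture" burst
data[350:440] += rng.normal(0, 0.8, size=(90, 6))   # second burst

energy = np.linalg.norm(data, axis=1)         # per-sample motion energy

def segment(energy, win=25, k=3.0):
    """Mark windows whose mean energy exceeds an adaptive baseline times k."""
    baseline = np.median(energy)              # adapts to the sensor's rest noise
    segments, start = [], None
    for i in range(0, len(energy) - win, win):
        active = energy[i:i + win].mean() > k * baseline
        if active and start is None:
            start = i
        elif not active and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(energy)))
    return segments

for s, e in segment(energy):
    print(f"candidate gesture: samples {s}-{e} ({(e - s) / fs:.1f} s)")
```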
Citations: 0
Indoor Navigation Using Hybrid Personal Pedestrian Dead Reckoning (Hybrid P-PDR)
Pub Date : 2022-11-24 DOI: 10.1109/IoTaIS56727.2022.9975916
Onur Önder, G. Ghinea, Tor-Morten Grønli, T. Serif
The proliferation of smart devices has dramatically changed how people live their daily lives. Today, on top of their initial communicator role, smart devices act as guides, companions, and aids. For a long time, people have been using navigation systems and mobile phones as navigators in their cars, and there has been interest in implementing similar indoor navigation systems using technologies such as Wi-Fi, Bluetooth, and ultra-wideband. However, the proposed indoor navigation solutions have been either too expensive to implement and maintain, or not accurate enough for wider acceptance. Accordingly, this paper proposes a hybrid pedestrian dead reckoning (PDR) approach for indoor navigation that utilizes the built-in sensors of smart devices. As part of this study, the authors implement three approaches to pedestrian dead reckoning (PDR, Personal PDR, and Hybrid P-PDR) and evaluate them in a real-world setting. The findings of the evaluation show that the Hybrid P-PDR approach, which harnesses the user’s walking pattern and signals from low-energy beacons, can navigate users in an indoor environment with an average distance error between a minimum of 0.77 and a maximum of 1.35 meters.
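A minimal sketch of the dead-reckoning core underlying the PDR approaches follows: steps are detected as peaks in the accelerometer magnitude and the position is advanced by a step length along the current heading. The step length, peak threshold, and synthetic trace are assumptions, and the beacon-based correction that makes the scheme hybrid is omitted.

```python
# Hedged sketch of the dead-reckoning core: detect steps as peaks in the
# accelerometer magnitude and advance position by step_length along the
# current heading. Step length, peak threshold, and the synthetic trace are
# assumptions; the BLE-beacon correction of the hybrid scheme is omitted.
import math

STEP_LENGTH = 0.7     # meters per detected step (personalised in P-PDR)
PEAK_THRESHOLD = 11.5 # m/s^2, above gravity, marks a step impact

def detect_steps(acc_magnitude):
    """Indices where the magnitude crosses the threshold upward (one per step)."""
    steps, above = [], False
    for i, a in enumerate(acc_magnitude):
        if a > PEAK_THRESHOLD and not above:
            steps.append(i)
            above = True
        elif a <= PEAK_THRESHOLD:
            above = False
    return steps

def dead_reckon(acc_magnitude, headings_rad, start=(0.0, 0.0)):
    """Update (x, y) once per detected step using the heading at that sample."""
    x, y = start
    track = [start]
    for i in detect_steps(acc_magnitude):
        x += STEP_LENGTH * math.cos(headings_rad[i])
        y += STEP_LENGTH * math.sin(headings_rad[i])
        track.append((round(x, 2), round(y, 2)))
    return track

# Synthetic trace: four steps walking east, then two steps heading north.
acc = [9.8, 12.0, 9.6, 12.1, 9.7, 12.2, 9.8, 12.0, 9.7, 12.3, 9.8, 12.1, 9.8]
hdg = [0.0] * 8 + [math.pi / 2] * 5
print(dead_reckon(acc, hdg))
```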
Citations: 0