
2022 4th International Conference on Advanced Science and Engineering (ICOASE): Latest Publications

Energy Efficiency Parameters Evaluation for 5G Application
Pub Date : 2022-09-21 DOI: 10.1109/ICOASE56293.2022.10075603
Fatimah H. Mohialdeen, Y. E. Mohammed Ali, F. Mahmood
The deployment of mobile telecommunication networks has increased dramatically in recent decades. This growth in the number of mobile devices and towers leads to an increase in consumed energy. Hence, the need for energy efficiency (EE) has increased in order to reduce cost and pollution. In this paper, the following parameters are studied to enhance EE: the number of base station antennas, the number of user equipment (UEs), and other parameters such as channel state information (CSI). The purpose of this study is to examine how such improvement might be achieved. Using MATLAB, this article analyzes and enhances EE using a mathematical model of fifth-generation (5G) massive multiple-input multiple-output (Massive-MIMO) wireless communication. Simulation results demonstrate the EE gains and show how different parameter selections affect the fundamental trade-off between EE and spectral efficiency (SE), or the EE alone. The results show that several parameters improve the EE-SE curve, such as the number of base station antennas, transmit bandwidth, circuit power, number of users, and the availability of CSI. Increasing the number of base station antennas is considered a simple way to increase EE, up to the point where the added circuit power dominates. Increasing the number of antennas also reduces the impact of imperfect CSI. The results show that increasing the number of antennas relative to the number of users, from 4 to 10, does not increase EE, yet increases SE by around 55%.
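To make the EE-SE trade-off concrete, the following is a minimal Python sketch of a simplified single-cell massive-MIMO model: sum spectral efficiency grows with the antenna-to-user ratio, while total consumed power grows with the per-antenna circuit power. The power figures, SNR, and scaling law here are illustrative assumptions, not the authors' MATLAB model.

```python
import numpy as np

def ee_se_tradeoff(num_antennas, num_users, bandwidth_hz=20e6,
                   tx_power_w=1.0, circuit_power_per_antenna_w=0.5,
                   fixed_power_w=10.0, snr_per_user=10.0):
    """Illustrative EE/SE figures for a single-cell massive-MIMO downlink.

    Assumes a simplified model: per-user SINR grows with the array gain
    (num_antennas / num_users), and total consumed power is transmit power
    plus per-antenna circuit power plus a fixed overhead.
    """
    effective_sinr = snr_per_user * num_antennas / num_users   # rough array-gain model
    se_bps_hz = num_users * np.log2(1.0 + effective_sinr)      # sum spectral efficiency
    throughput_bps = bandwidth_hz * se_bps_hz
    total_power_w = tx_power_w + num_antennas * circuit_power_per_antenna_w + fixed_power_w
    ee_bits_per_joule = throughput_bps / total_power_w
    return se_bps_hz, ee_bits_per_joule

if __name__ == "__main__":
    for m in (16, 32, 64, 128):
        se, ee = ee_se_tradeoff(num_antennas=m, num_users=10)
        print(f"M={m:4d}  SE={se:7.1f} bit/s/Hz  EE={ee / 1e6:7.2f} Mbit/J")
```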
Citations: 0
Predicted of Software Fault Based on Random Forest and K-Nearest Neighbor
Pub Date : 2022-09-21 DOI: 10.1109/ICOASE56293.2022.10075596
Mustafa Zaki Mohammed, I. Saleh
Software systems have become increasingly complicated and adaptable in today's computing world. As a result, it is critical to track down and fix software design flaws on a regular basis. Software fault prediction in the early phases is useful for enhancing software quality and for reducing software testing time and expense; it is a technique for predicting problems using historical data. To anticipate software flaws from historical databases, several machine learning approaches are applied. This paper focuses on creating a predictor of software defects based on previous data. For this purpose, supervised machine learning techniques were utilized to forecast future software failures: the K-Nearest Neighbor (KNN) and Random Forest (RF) techniques were applied to defective data sets belonging to NASA's PROMISE repository. A set of performance measures, including accuracy, precision, recall, and F1 measure, was used to evaluate the performance of the models. The paper shows good performance of the RF model compared to the KNN model, with maximum and minimum accuracies of 99% and 88% on the MC1 and KC1 data sets, respectively. In general, the study's findings suggest that software defect metrics may be used to identify problematic modules, and that the RF model can be used to anticipate software errors.
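A minimal scikit-learn sketch of the described pipeline is shown below: train RF and KNN on a PROMISE-style defect dataset and report accuracy, precision, recall, and F1. The file name `kc1.csv` and the `defects` label column are placeholders; the paper's exact preprocessing is not reproduced.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# "kc1.csv" and the "defects" label column are placeholders for a PROMISE-style dataset.
data = pd.read_csv("kc1.csv")
X = data.drop(columns=["defects"])
y = data["defects"].astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

models = {
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(name,
          "acc=%.3f" % accuracy_score(y_test, pred),
          "prec=%.3f" % precision_score(y_test, pred),
          "rec=%.3f" % recall_score(y_test, pred),
          "f1=%.3f" % f1_score(y_test, pred))
```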
Citations: 0
PSO Algorithm for Three Phase Induction Motor with V/F Speed Control
Pub Date : 2022-09-21 DOI: 10.1109/ICOASE56293.2022.10075610
Qusay Hussein Mirdas, N. Yasin, N. Alshamaa
Because induction motors are used in most industries, induction motor (IM) control is essential, and optimization approaches are becoming more common for improving the three-phase induction motor (TIM). In addition, Volt/Hertz (V/f) control is utilized to minimize the harmonics level compared with other control and modulation approaches. This study concerns tuning the PI controller parameters for use with a TIM. To optimize the speed-response performance of the TIM, the Particle Swarm Optimization (PSO) algorithm is used to adjust each parameter of the PI speed controller. The Kp and Ki parameters of the PI speed controller are optimized for TIM operation with V/f control by designing an appropriate PSO algorithm. The PI speed controller's performance on the TIM is measured through the changes in speed and torque during speed-response events. With PSO, the PI controller performs well in terms of overshoot, settling time, and steady-state error.
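The sketch below illustrates the general idea of PSO-based PI tuning: each particle is a (Kp, Ki) pair, and its fitness is a time-weighted error cost of the simulated step response. The first-order plant, swarm coefficients, and search bounds are stand-in assumptions rather than the paper's TIM model with V/f control.

```python
import numpy as np

def speed_response_cost(kp, ki, t_end=2.0, dt=1e-3):
    """Time-weighted absolute-error cost of a PI speed loop on a crude
    first-order stand-in plant (the real study simulates a full TIM)."""
    t = np.arange(0.0, t_end, dt)
    speed, integ, cost = 0.0, 0.0, 0.0
    for ti in t:
        err = 1.0 - speed                    # unit step speed reference
        integ += err * dt
        u = kp * err + ki * integ            # PI control effort
        speed += dt * (-speed + u) / 0.1     # first-order plant, tau = 0.1 s
        cost += ti * abs(err) * dt
    return cost

rng = np.random.default_rng(0)
n_particles, n_iter = 20, 50
lb, ub = np.array([0.1, 0.1]), np.array([20.0, 50.0])     # bounds for (Kp, Ki)
pos = rng.uniform(lb, ub, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([speed_response_cost(*p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lb, ub)
    cost = np.array([speed_response_cost(*p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best Kp=%.2f  Ki=%.2f  cost=%.4f" % (gbest[0], gbest[1], pbest_cost.min()))
```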
Citations: 0
DHFogSim: Smart Real-Time Traffic Management Framework for Fog Computing Systems
Pub Date : 2022-09-21 DOI: 10.1109/ICOASE56293.2022.10075605
D. Abdullah, H. Mohammed
Clouds are the most powerful computation architecture; nevertheless, some applications are delay sensitive and need real-time responses. Offloading tasks from a user device to the cloud takes a relatively long time and consumes network bandwidth. This motivates the appearance of fog computing. In fog computing, an additional layer falls between the user device layer and the cloud. Offloading tasks to the fog layer is faster and saves network bandwidth. Fog computing has spread widely, but it is difficult to build and test such systems in the real world. This has led developers to use fog simulation frameworks to simulate and test their own systems. In this paper, we adopt a fog simulation framework that adds a smart agent layer between the user device and the fog layer. The framework uses a multilevel queue instead of a single queue at the Ethernet layer; these queues are scheduled according to weighted round robin, and tasks are dispatched to the queues according to the value of the Type of Service (ToS) bits, which fall in the second byte of the IP header. The value of the ToS bits is assigned by the smart agent layer according to task constraints. The framework's behavior is compared with the mFogSim framework, and the results show that the proposed framework significantly decreases the delay on both brokers and fog nodes. Furthermore, the packet drop count and packet error rate are slightly improved.
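A minimal sketch of the queueing idea follows: incoming tasks are dispatched to one of several queues by their ToS value, and the queues are then drained by weighted round robin. The ToS-to-queue mapping and the weights are hypothetical; the framework's actual broker and fog-node logic is not reproduced.

```python
from collections import deque

# Hypothetical mapping from the IP ToS value to a priority queue,
# plus per-queue weights for weighted round-robin service.
TOS_TO_QUEUE = {0x10: 0, 0x08: 1, 0x00: 2}   # e.g. low-delay, high-throughput, best-effort
QUEUE_WEIGHTS = [4, 2, 1]                     # packets served per WRR cycle

queues = [deque(), deque(), deque()]

def dispatch(packet):
    """Place an incoming task/packet in a queue according to its ToS bits."""
    q = TOS_TO_QUEUE.get(packet["tos"], len(queues) - 1)   # unknown ToS -> best effort
    queues[q].append(packet)

def weighted_round_robin():
    """Serve each queue up to its weight, then move on to the next queue."""
    served = []
    for q, weight in zip(queues, QUEUE_WEIGHTS):
        for _ in range(weight):
            if q:
                served.append(q.popleft())
    return served

for i, tos in enumerate([0x00, 0x10, 0x10, 0x08, 0x00, 0x10]):
    dispatch({"id": i, "tos": tos})
print([p["id"] for p in weighted_round_robin()])   # higher-priority ToS values are served first
```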
Citations: 0
Improving the Clustering Performance of the K-Means Algorithm for Non-linear Clusters
Pub Date : 2022-09-21 DOI: 10.1109/ICOASE56293.2022.10075614
Naaman Omar, Adel Al-zebari, A. Şengur
K-means clustering is known to be the most traditional approach in machine learning. It has been put to a lot of different uses. However, it has difficulty with initialization and performs poorly for non-linear clusters. Several approaches have been offered in the literature to circumvent these restrictions. Kernel K-means (KK-M) is a type of K-means that falls under this group. In this paper, a two-step approach is developed to increase the clustering performance of the K-means algorithm. A transformation procedure is applied in the first step, where the low-dimensional input space is transferred to a high-dimensional feature space. To this end, the hidden layer of a Radial Basis Function (RBF) network is used. The typical K-means method is used in the second step of our approach. We offer experimental results comparing against KK-M on simulated data sets to assess the correctness of the suggested approach. The results of the experiments show the efficiency of the proposed method, and the clustering accuracy attained is higher than that of the KK-M algorithm. We also applied the proposed clustering algorithm to an image segmentation application, and a series of segmentation results is given accordingly.
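The two-step idea can be sketched as follows: map each sample to its RBF similarity to a set of centres (the "hidden layer"), then run ordinary K-means in that feature space. The number of centres and the gamma value are illustrative choices, not the paper's tuned settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons
from sklearn.metrics.pairwise import rbf_kernel

# Non-linear two-moons data that plain K-means cannot separate well.
X, y_true = make_moons(n_samples=400, noise=0.05, random_state=0)

# Step 1: RBF "hidden layer": map each sample to its similarity to a set of centres.
rng = np.random.default_rng(0)
centres = X[rng.choice(len(X), size=50, replace=False)]   # number of centres is a free choice
features = rbf_kernel(X, centres, gamma=15.0)             # high-dimensional feature space

# Step 2: ordinary K-means in the transformed space.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# Simple accuracy check (cluster labels may be permuted, so take the better matching).
acc = max(np.mean(labels == y_true), np.mean(labels != y_true))
print("clustering accuracy on two moons: %.3f" % acc)
```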
Citations: 1
Review on Image Segmentation Methods Using Deep Learning
Pub Date : 2022-09-21 DOI: 10.1109/ICOASE56293.2022.10075607
Nabeel N. Ali, N. Kako, A. Abdi
In recent years, the machine learning field has been inundated with a variety of deep learning methods. Different deep learning model types, including recurrent neural networks (RNNs), convolutional neural networks (CNNs), adversarial neural networks, and autoencoders, are successfully tackling challenging computer vision problems, including image detection and segmentation in unconstrained environments. Although image segmentation has received a lot of interest, several new deep learning methods have also been developed for object detection and recognition. An academic review of deep learning image segmentation methods is presented in this article. The major goal of this study is to offer a sensible comprehension of the basic approaches that have already made a substantial contribution to the domain of image segmentation throughout the years. The article describes the existing state of image segmentation and argues that deep learning has revolutionized this field. Afterwards, segmentation algorithms are scientifically classified, each with its own special contribution. With a variety of informative narratives, the reader may be able to understand the internal workings of these processes more quickly.
Citations: 0
Investigation of Healthcare Security Using Blockchain Technology: A review
Pub Date : 2022-09-21 DOI: 10.1109/ICOASE56293.2022.10075578
M. A. Omer, Shimal Sh. Taher, S. Ameen
Telemedicine and telehealth care systems show a revolutionary and modern way to deal with the coronavirus 2019 pandemic. However, such systems are facing increased security risks. As a result, healthcare providers and academic institutions must be well informed, safe, and prepared to respond to any cyber-attack. The aim of this paper is to conduct a review of healthcare information systems together with how security can be provided for such systems. The paper's main focus is the adoption of blockchain technology to support the security of the healthcare system. This adoption has been investigated and assessed to show its benefits compared with other conventional technologies. Finally, a recommendation is given for securing healthcare with the use of blockchain technology.
Citations: 0
Diseases Diagnosis Using Machine Learning of Medical Images
Pub Date : 2022-09-21 DOI: 10.1109/ICOASE56293.2022.10075581
Shakir M. Abas, Omer Mohammed Salih Hassan, Imad Manaf Ali, Safin Saber Nori, Hamza Sardar Hassan
Recently, various diseases have been infecting humans due to their living environment and environmental changes. It is very important to identify and predict such diseases at earlier stages to prevent outbreaks. Identifying these diseases manually is difficult for doctors. There are many chronic diseases that affect humans. One of them is brain tumors, which arise from the abnormal growth and division of brain cells and can lead to brain cancer. Computer vision plays an important role in the human health field, giving accurate results that help clinicians make the right decision. In addition, traditional techniques are time consuming and expensive, and the addressed problem requires expert knowledge. This research focuses on using a simple deep learning architecture that gives accurate results. A Convolutional Neural Network (CNN) is used for reliable classification of brain tumor images. The proposed models show very good results and reach almost 96.4% accuracy on the Brain MRI Images for Brain Tumor Detection dataset.
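A minimal Keras sketch of a CNN classifier of this kind is given below. The input size, layer widths, and directory layout are assumptions for illustration; the paper's exact architecture and training setup are not reproduced.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Input size and the dataset directory layout are assumptions for illustration.
IMG_SIZE = (128, 128)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "brain_mri/train", image_size=IMG_SIZE, batch_size=32)   # folders: tumor/, no_tumor/
val_ds = tf.keras.utils.image_dataset_from_directory(
    "brain_mri/val", image_size=IMG_SIZE, batch_size=32)

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),    # binary output: tumor vs. no tumor
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```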
Citations: 0
Generating Masked Facial Datasets Using Dlib-Machine Learning Library
Pub Date : 2022-09-21 DOI: 10.1109/ICOASE56293.2022.10075601
Waleed Ayad Mahdi, S. Q. Mahdi, Ali Al-Naji
In 2020, the COVID-19 pandemic spread globally, leading countries to impose health restrictions on people, including wearing masks, to prevent the spread of the disease. Wearing a mask significantly decreases recognition ability because it conceals the main facial features. After the outbreak of the pandemic, existing datasets became unsuitable because they did not contain images of people wearing masks. To address the shortage of large-scale masked face datasets, a method was developed to generate artificial masks and place them on the faces of an unmasked-face dataset to produce a masked-face dataset. Following the proposed method, masked faces are generated in two steps. First, the face is detected in the unmasked image, and the detected face image is aligned. Second, the mask is overlaid on the cropped face image using the dlib-ml library. Using the proposed method, two masked-face datasets, called masked-dataset-1 and masked-dataset-2, were created. Promising results were obtained when they were evaluated with the Labeled Faces in the Wild (LFW) dataset using two state-of-the-art facial recognition systems, FaceNet and ArcFace: the accuracies of the two systems were 96.1 and 97, respectively, with masked-dataset-1, and 87.6 and 88.9, respectively, with masked-dataset-2.
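The two-step mask-overlay procedure can be sketched with dlib and OpenCV as follows. The detector and 68-point landmark predictor are standard dlib APIs; the image paths, the landmark indices used for placement, and the alpha-blending recipe are illustrative assumptions rather than the authors' exact method.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# The 68-point landmark model must be downloaded separately from dlib.net.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

face_img = cv2.imread("face.jpg")                        # unmasked input image (placeholder path)
mask_img = cv2.imread("mask.png", cv2.IMREAD_UNCHANGED)  # RGBA artificial mask (placeholder path)

gray = cv2.cvtColor(face_img, cv2.COLOR_BGR2GRAY)
for rect in detector(gray, 1):                           # step 1: detect faces
    pts = predictor(gray, rect)
    # In the standard 68-point indexing, points 1/15 lie on the jaw sides, 28 on the
    # nose bridge, and 8 at the chin; using them for sizing/placement is an
    # illustrative choice (assumes the mask region stays inside the image).
    left, right = pts.part(1), pts.part(15)
    top, bottom = pts.part(28), pts.part(8)
    w, h = right.x - left.x, bottom.y - top.y
    resized = cv2.resize(mask_img, (w, h))
    alpha = resized[:, :, 3:] / 255.0                    # step 2: alpha-blend the mask patch
    roi = face_img[top.y:top.y + h, left.x:left.x + w]
    face_img[top.y:top.y + h, left.x:left.x + w] = (
        alpha * resized[:, :, :3] + (1 - alpha) * roi).astype(np.uint8)

cv2.imwrite("face_masked.jpg", face_img)
```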
Citations: 0
Salp Swarm Algorithm-based Position Control of a BLDC Motor
Pub Date : 2022-09-21 DOI: 10.1109/ICOASE56293.2022.10075598
O. M. Hussein, N. Yasin
In this paper, the best P and PI controller parameters for the cascade control of a BLDC system are determined using a new artificial-intelligence-based optimization method called the salp swarm algorithm (SSA). The algorithm's simplicity allows for precise tuning of the optimal P and PI controller values. The integral time absolute error (ITAE) was chosen as the fitness function to optimize the controller parameters. According to the transient-response study, compared with the classical PID control technique, the SSA approach was found to give good tuning, achieving a shorter rise time and less (approximately zero) overshoot, and is more efficient in improving the step response of the BLDC system.
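A minimal sketch of SSA-based tuning is shown below: the ITAE of a simulated step response is the fitness, the leader salp moves around the best solution found so far, and followers average toward the salp ahead of them. The first-order plant and search bounds are stand-ins; the paper's BLDC cascade model is not reproduced here.

```python
import numpy as np

def itae_cost(kp, ki, t_end=1.0, dt=1e-3):
    """Integral of time-weighted absolute error for a PI loop on a stand-in
    first-order plant (the paper's BLDC cascade model is not reproduced)."""
    t = np.arange(0.0, t_end, dt)
    y, integ, cost = 0.0, 0.0, 0.0
    for ti in t:
        err = 1.0 - y                       # unit step reference
        integ += err * dt
        u = kp * err + ki * integ
        y += dt * (-y + u) / 0.05           # assumed plant time constant
        cost += ti * abs(err) * dt
    return cost

lb, ub = np.array([0.1, 0.1]), np.array([50.0, 200.0])    # search bounds for (Kp, Ki)
rng = np.random.default_rng(1)
salps = rng.uniform(lb, ub, size=(30, 2))
fitness = np.array([itae_cost(*s) for s in salps])
food = salps[fitness.argmin()].copy()                      # best solution found so far

n_iter = 60
for it in range(1, n_iter + 1):
    c1 = 2.0 * np.exp(-(4.0 * it / n_iter) ** 2)            # exploration/exploitation coefficient
    for i in range(len(salps)):
        if i == 0:                                           # leader moves around the food source
            c2, c3 = rng.random(2), rng.random(2)
            step = c1 * ((ub - lb) * c2 + lb)
            salps[i] = np.where(c3 < 0.5, food + step, food - step)
        else:                                                # followers move toward the salp ahead
            salps[i] = (salps[i] + salps[i - 1]) / 2.0
        salps[i] = np.clip(salps[i], lb, ub)
    fitness = np.array([itae_cost(*s) for s in salps])
    if fitness.min() < itae_cost(*food):
        food = salps[fitness.argmin()].copy()

print("best Kp=%.2f  Ki=%.2f  ITAE=%.5f" % (food[0], food[1], itae_cost(*food)))
```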
Citations: 0