
Latest publications from the 2023 IEEE International Conference on Electro Information Technology (eIT)

Endoscopic Image Enhanced Deep Learning Algorithm for Inflammatory Bowel Disease (IBD) Polyp Detection: Feasibility Study
Pub Date : 2023-05-18 DOI: 10.1109/eIT57321.2023.10187234
J. Fetzer, Renisha Redij, Joshika Agarwal, Anjali Rajagopal, K. Gopalakrishnan, A. Cherukuri, John B. League, D. Vinsard, C. Leggett, Coelho-Prabhu Nayantara, S. P. Arunachalam
Gastrointestinal endoscopy is a commonly used diagnostic procedure for surveillance colonoscopies in patients with Inflammatory Bowel Disease (IBD). Patients with IBD can have benign, inflammatory, and/or malignant polyps that require further testing and evaluation. Endoscopic image acquisition often suffers from poor image quality, so image preprocessing steps are essential when developing artificial intelligence (AI)-assisted models to improve IBD polyp detection. Through artificial intelligence, detection and differentiation of these polyps can be made more efficient, as it reduces the potential for human error. The purpose of this work was to evaluate the utility of several digital filters, namely the average filter (AF), median filter (MF), Gaussian filter (GF), and Savitzky-Golay filter (SG), for enhancing images to improve deep learning detection of IBD polyps, and to compare performance against models trained without enhancement. IBD polyp images from high-definition white light endoscopy (HDWLE), provided by the Mayo Clinic GIH Division, were used to develop a You-Only-Look-Once (YOLO) model, which employs convolutional neural networks (CNNs) to detect IBD polyps. Filter kernels of 3x3, 5x5, 7x7, 9x9, 11x11, and 13x13 were evaluated for all four filter types, and the YOLO model was trained for each case. Performance was measured using precision-recall curves and the area under the curve (AUC). 80% of the data was used for training and validation and 20% for testing. A moderate 5-10% improvement in deep learning model performance was observed. Further testing with different model parameters and filter settings is required to validate these findings.
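As a rough illustration of the enhancement step described in this abstract, the sketch below applies the four filter types at several kernel sizes with OpenCV and SciPy; the input file name, output naming, and the choice to smooth with a 1-D Savitzky-Golay pass along each image axis are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the image-enhancement step: apply average, median, Gaussian, and
# Savitzky-Golay filters at several kernel sizes before polyp detection.
# Paths are placeholders; the downstream YOLO model is not shown here.
import cv2
import numpy as np
from scipy.signal import savgol_filter

KERNELS = [3, 5, 7, 9, 11, 13]  # odd kernel sizes, as in the paper

def enhance(image: np.ndarray, method: str, k: int) -> np.ndarray:
    """Return a filtered copy of `image` using the named filter and kernel size k."""
    if method == "average":
        return cv2.blur(image, (k, k))
    if method == "median":
        return cv2.medianBlur(image, k)
    if method == "gaussian":
        return cv2.GaussianBlur(image, (k, k), 0)
    if method == "savitzky_golay":
        # Assumed approach: run the 1-D Savitzky-Golay filter along rows, then columns.
        smoothed = savgol_filter(image.astype(float), window_length=k, polyorder=2, axis=0)
        smoothed = savgol_filter(smoothed, window_length=k, polyorder=2, axis=1)
        return np.clip(smoothed, 0, 255).astype(np.uint8)
    raise ValueError(f"unknown filter: {method}")

frame = cv2.imread("polyp_frame.png")  # placeholder HDWLE frame
for method in ("average", "median", "gaussian", "savitzky_golay"):
    for k in KERNELS:
        enhanced = enhance(frame, method, k)
        cv2.imwrite(f"enhanced_{method}_{k}x{k}.png", enhanced)
```

Each enhanced set of images would then be used to train and evaluate a separate detection model, so that the precision-recall behavior of every filter/kernel combination can be compared against the unfiltered baseline.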
Citations: 0
The Importance of High Speed Storage in Deep Learning Training
Pub Date : 2023-05-18 DOI: 10.1109/eIT57321.2023.10187241
Solene Bechelli, D. Apostal, A. Bergstrom
With the increase in computational power and techniques over the past decades, the use of Deep Learning (DL) algorithms in the biomedical field has grown significantly. One of the remaining challenges in using deep neural networks is properly tuning a model's performance beyond its raw accuracy. In this work, we therefore combine the NVIDIA DALI API for high-speed storage access with the TensorFlow framework, applied to the task of skin cancer image classification. To that end, we use the VGG16 model, which is known to perform accurately on skin cancer classification. We compare training on CPU, GPU, and multi-GPU devices in terms of both accuracy and runtime performance. These measurements are also carried out on additional models as a means of comparison. Our work shows the importance of model choice and of fine-tuning tailored to a particular application. Moreover, we show that the use of high-speed storage considerably increases the performance of DL models, particularly when handling images and large databases, where the improvement is most significant.
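A minimal sketch of how a DALI pipeline can feed a Keras VGG16 is shown below, assuming NVIDIA's DALI TensorFlow plugin (nvidia.dali.plugin.tf); the directory layout, batch size, class count, and training settings are illustrative assumptions, not the authors' configuration.

```python
# Sketch: GPU-accelerated data loading with DALI feeding a Keras VGG16.
# Directory "skin_images/train" with one subfolder per class is an assumption.
import tensorflow as tf
from nvidia.dali import pipeline_def, fn, types
import nvidia.dali.plugin.tf as dali_tf

BATCH = 32

@pipeline_def(batch_size=BATCH, num_threads=4, device_id=0)
def skin_pipeline(data_dir):
    # Read JPEGs from class subfolders, decode on the GPU ("mixed"), resize, normalize.
    jpegs, labels = fn.readers.file(file_root=data_dir, random_shuffle=True, name="Reader")
    images = fn.decoders.image(jpegs, device="mixed")
    images = fn.resize(images, resize_x=224, resize_y=224)
    images = fn.crop_mirror_normalize(images, dtype=types.FLOAT, output_layout="HWC",
                                      mean=[127.5] * 3, std=[127.5] * 3)
    return images, labels

pipe = skin_pipeline(data_dir="skin_images/train")
with tf.device("/gpu:0"):
    # Expose the DALI pipeline as a tf.data-compatible dataset kept on the GPU.
    train_ds = dali_tf.DALIDataset(pipeline=pipe, batch_size=BATCH,
                                   output_dtypes=(tf.float32, tf.int32))

model = tf.keras.applications.VGG16(weights=None, input_shape=(224, 224, 3), classes=2)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5, steps_per_epoch=1000)  # DALI dataset loops, so steps must be given
```

Because decoding and augmentation run on the GPU and prefetch from storage, the input pipeline is less likely to starve the accelerators, which is the effect the paper measures when comparing CPU, GPU, and multi-GPU training.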
Citations: 0
Hardware Description of Event-driven Systems by Translation of UML Statecharts to VHDL
Pub Date : 2023-05-18 DOI: 10.1109/eIT57321.2023.10187218
Cristinel Ababei, S. Schneider
We present a complete implementation prototype of the classic Fly-n-Shoot game on an FPGA. This is a well-known game that has been described in the past using UML statecharts as an event-driven embedded system. Because it has rather complex functionality, describing it in a hardware description language (HDL) such as VHDL or Verilog, with the goal of deploying it on a real FPGA, becomes challenging. As such, brute-force attempts to write HDL descriptions are prone to errors and subject to long design times. Hence, in this paper, we describe a practical approach for translating UML statecharts used to specify event-driven embedded systems into VHDL code written in the popular two-process coding style. The approach consists of a set of mapping rules from statechart concepts into VHDL constructs. The efficacy and correct-by-design characteristics of the presented approach stem from the use of two-process VHDL coding to describe the hierarchical finite state machine (FSM) corresponding to the UML statecharts. This gives the designer better control over the current- and next-state signals of the FSMs, is more modular and object-oriented, and makes development and debugging much easier. We apply the proposed approach to implement a prototype of the classic Fly-n-Shoot game. The implementation is verified successfully on real hardware, the DE1-SoC FPGA development board, which uses a Cyclone IV FPGA chip.
Citations: 0
Experimental Validation of Event-Triggered Model Predictive Control for Autonomous Vehicle Path Tracking
Pub Date : 2023-05-18 DOI: 10.1109/eIT57321.2023.10187304
Zhao-Ying Zhou, Jun Chen, Mingyuan Tao, P. Zhang, Meng Xu
This paper presents an experimental validation of event-triggered model predictive control (MPC) for autonomous vehicle (AV) path-tracking control using real-world testing. Path tracking is a critical aspect of AV control, and MPC is a popular control method for this task. However, traditional MPC requires extensive computational resources to solve real-time optimization problems, which can be challenging to implement in the real world. To address this issue, event-triggered MPC, which solves the optimization problem only when a triggering event occurs, has been proposed in the literature to reduce computational requirements. This paper then conducts experimental validation in which event-triggered MPC is compared to traditional time-triggered MPC through real-world testing, and the results demonstrate that the event-triggered MPC method not only offers a significant reduction in computation compared to time-triggered MPC but also improves control performance.
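The sketch below illustrates the event-triggering idea on a toy 1-D double-integrator plant, with a generic optimizer standing in for the real-time MPC solver; the dynamics, horizon, deviation threshold, and disturbance level are illustrative assumptions, not the vehicle controller validated in the paper.

```python
# Event-triggered MPC sketch: re-solve the optimization only when the measured
# state drifts from the last predicted trajectory by more than a threshold.
import numpy as np
from scipy.optimize import minimize

DT, HORIZON, THRESHOLD = 0.1, 15, 0.05
A = np.array([[1.0, DT], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([0.5 * DT**2, DT])

def rollout(x0, u_seq):
    """Predict the state trajectory under a control sequence."""
    xs, x = [], x0.copy()
    for u in u_seq:
        x = A @ x + B * u
        xs.append(x.copy())
    return np.array(xs)

def solve_mpc(x0, ref):
    """Finite-horizon tracking problem; a stand-in for a real-time QP solver."""
    def cost(u_seq):
        xs = rollout(x0, u_seq)
        return np.sum((xs[:, 0] - ref) ** 2) + 0.01 * np.sum(u_seq ** 2)
    return minimize(cost, np.zeros(HORIZON)).x

x, ref = np.array([0.0, 0.0]), 1.0
u_seq = solve_mpc(x, ref)
x_pred = rollout(x, u_seq)
k, num_solves = 0, 1

for t in range(100):
    u = u_seq[k]
    x = A @ x + B * u + np.random.normal(0.0, 0.005, size=2)  # plant with small disturbance
    # Event trigger: re-solve only if the plant has drifted from the prediction,
    # or the stored control sequence is about to run out.
    if np.linalg.norm(x - x_pred[k]) > THRESHOLD or k == HORIZON - 1:
        u_seq = solve_mpc(x, ref)
        x_pred = rollout(x, u_seq)
        k = 0
        num_solves += 1
    else:
        k += 1

print(f"final position={x[0]:.3f}, optimizations solved={num_solves} of 100 steps")
```

A time-triggered controller would call the solver at every one of the 100 steps; the trigger condition is what allows most steps to reuse the stored control sequence.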
Citations: 0
Application of Machine Learning and Image Recognition for Driver Attention Monitoring
Pub Date : 2023-05-18 DOI: 10.1109/eIT57321.2023.10187230
Manav Tailor, J. Ali, Xinrui Yu, Won-Jae Yi, J. Saniie
This paper presents a real-time prototype system for monitoring a driver's level of distraction. Given the heavy traffic conditions commonly seen today, accidents are highly likely to occur because drivers cannot always recognize their own level of exhaustion. We utilize a single low-cost camera facing the driver, connected to a single-board computer; a series of frames captured from the camera is fed to a neural network, and a pattern detection algorithm is used to predict the driver's distraction level. All training is conducted on personalized training sets to increase accuracy and to match an individual's driving patterns as closely as possible. This system is designed to serve as a baseline for further system development, and many vital sub-components can be exchanged with respect to input data type and choice of machine learning algorithms.
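The following is a minimal sketch of such a capture-and-classify loop using OpenCV and a Keras model; the model file, class labels, and majority-vote smoothing window are hypothetical placeholders rather than the authors' trained system.

```python
# Sketch: capture frames from a driver-facing camera, classify each frame with a CNN,
# and smooth per-frame predictions into a distraction level over a short window.
from collections import deque
import cv2
import numpy as np
import tensorflow as tf

LABELS = ["attentive", "drowsy", "distracted"]                 # hypothetical classes
model = tf.keras.models.load_model("driver_attention_cnn.h5")  # placeholder model file
recent = deque(maxlen=15)                                      # ~0.5 s of predictions at 30 fps

cap = cv2.VideoCapture(0)  # single low-cost camera facing the driver
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess: resize to the network's input size and scale pixel values to [0, 1].
    x = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    probs = model.predict(x[np.newaxis], verbose=0)[0]
    recent.append(int(np.argmax(probs)))
    # Simple pattern detection: majority vote over the recent window of frames.
    level = max(set(recent), key=recent.count)
    print(f"driver state: {LABELS[level]}")
cap.release()
```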
Citations: 0
Digit-DM: A Sustainable Data Mining Modell for Continuous Digitization in Manufacturing
Pub Date : 2023-05-18 DOI: 10.1109/eIT57321.2023.10187390
Christian Weber, P. Czerner, M. Fathi
Manufacturing as an industry is under continuous pressure to deliver the right product, at the right quality and quantity, and on time. To do so, it becomes increasingly important to detect the source of manufacturing problems quickly and also to prevent further occurrences of known problems. Data Mining focuses on identifying problem patterns and inferring the right interpretation to trace and resolve the root cause in time. However, lessons learned are rarely carried over into digital solutions that would then make it possible to automate the detection and resolution of incidents. Data mining models exist, but there is no structured approach for digitally transforming and sustaining the solutions that are found. We introduce Digit-DM as a structured and strategic process for digitizing analytical results. Digit-DM builds on top of existing data mining models but defines a strategic process for continuous digitization, enabling sustainable, digital manufacturing support by utilizing analytical lessons learned.
Citations: 0
Deep Learning Application for Detection of Malaria
Pub Date : 2023-05-18 DOI: 10.1109/eIT57321.2023.10187342
Md. Saifur Rahman, Nafiz Rifat, M. Ahsan, Sabrina Islam, Md. Chowdhury, Rahul Gomes
Malaria continues to be a significant burden on global health, with 247 million clinical episodes and 619,000 deaths. Alongside biomedical science, technology and informatics have begun participating in the fight against malaria. Microscopy techniques are frequently used to detect malaria parasites in infected red blood cells. Giemsa stain has been used to stain blood parasites for over a century. The stain is applied after fixing blood smears in methyl alcohol for 25 to 30 minutes [1]. When stained slides are examined under a microscope, the parasites are easily discernible based on morphology and color. We observed that automating the detection of these slides using deep learning is possible with high accuracy. A comparison between deep learning models reveals that ResNets provide better performance.
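As a hedged illustration of this kind of classifier, the sketch below fine-tunes a Keras ResNet50 on cell images organized into class subfolders; the directory name, image size, and training settings are assumptions, not the authors' experimental setup.

```python
# Sketch: transfer learning with ResNet50 for parasitized-vs-uninfected cell images.
# Assumes a folder "cell_images" with one subfolder per class.
import tensorflow as tf

IMG_SIZE = (224, 224)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "cell_images", validation_split=0.2, subset="training", seed=1,
    image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "cell_images", validation_split=0.2, subset="validation", seed=1,
    image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False  # start by training only the new classification head

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # infected vs. uninfected
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```

Unfreezing the last residual stages of the base network after this first pass is a common follow-up step when the frozen-backbone accuracy plateaus.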
Citations: 0
An investigation of the Intrusion detection system for the NSL-KDD dataset using machine-learning algorithms
Pub Date : 2023-05-18 DOI: 10.1109/eIT57321.2023.10187360
Y. Al-Khassawneh
Over the last few years, the use of an Intrusion Detection System (IDS) has proven to be an effective method for achieving higher levels of security by detecting potentially harmful actions. Because it is unable to accurately identify all types of attacks, the current approach to anomaly detection is frequently associated with high false-alarm rates and low accuracy and detection rates. When it comes to establishing reliable and all-encompassing security, intrusion detection systems (IDS) are invaluable tools for managed service providers (MSPs). Since there are so many IDS options, it can be hard to figure out which one is best for your business and your customers. When it comes to training and testing an IDS, having access to a dataset with a large amount of high-quality data representative of real-world conditions is invaluable. In this work, the NSL-KDD dataset is analyzed and used to assess the effectiveness of various classification algorithms in detecting anomalies in network traffic patterns. In addition, we investigate the relationship between hacker attacks and the protocols found in the commonly used network protocol stack. These investigations were carried out to determine how attackers generate abnormal network traffic. The investigation has yielded a wealth of information about the relationship between the protocols and network attacks. Furthermore, the proposed model not only improves IDS precision but also opens up a new research avenue in this field.
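A small sketch of this kind of classifier comparison on NSL-KDD-style records is shown below, using scikit-learn; the file name and column names are assumptions about how the dataset has been exported, and the three classifiers are examples rather than the full set evaluated in the paper.

```python
# Sketch: compare classifiers on NSL-KDD-style records, one-hot encoding the
# categorical protocol fields (protocol_type, service, flag) before training.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

df = pd.read_csv("KDDTrain+.csv")           # placeholder path to an NSL-KDD export with headers
X = df.drop(columns=["label"])
y = (df["label"] != "normal").astype(int)   # 1 = attack, 0 = normal traffic

categorical = ["protocol_type", "service", "flag"]
prep = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
    remainder="passthrough")

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

for name, clf in [("DecisionTree", DecisionTreeClassifier()),
                  ("RandomForest", RandomForestClassifier(n_estimators=100)),
                  ("LinearSVM", LinearSVC())]:
    model = make_pipeline(prep, clf)
    model.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, model.predict(X_test)))
```

The per-class precision and recall in the report make the false-alarm behavior visible, which is the weakness of anomaly detection that the abstract highlights.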
Citations: 0
Detection of Situational Context From Minimal Sensor Modality of A Smartphone Using Machine Learning Algorithm
Pub Date : 2023-05-18 DOI: 10.1109/eIT57321.2023.10187382
Nabonita Mitra, B. Morshed
Early detection and continuous monitoring can help reduce the complexity of treatment and recovery. For this purpose, many modern technologies, such as smart wearable devices, are being used to diagnose different types of human diseases and to build automated tutoring systems. Artificial intelligence (AI) has driven vast improvements in human healthcare and education delivery. For these AI algorithms, error rates can be high if situational context is ignored. Currently, there is no automated approach to detect situational context. In this work, we propose a novel approach to automatically detect situational context with a smartphone context-detection app using AI with a minimal sensor modality. We begin the process by converting a small amount of sensor data from the smartphone app into a multitude of axes, then determine situational context from these axes using a machine learning algorithm. At first, we evaluated the performance of the k-means algorithm on the converted data and grouped the data into different clusters according to context. However, the k-means algorithm has many shortcomings that negatively affect its clustering performance. For this reason, to detect situational contexts automatically and more accurately, we applied different machine learning (ML) algorithms and compared their characteristic parameters and attributes. To train and test the ML models, 145 features were extracted from the dataset. In our case, we used a dataset with 53,679 distinct values to evaluate the performance of the different algorithms in detecting five situational contexts of the users. Experimental results show that the accuracies of the Support Vector Machine, Random Forest, Artificial Neural Network, and Decision Tree classifiers are 95%, 99%, 97%, and 98%, respectively. The most effective classifier overall is Random Forest. This preliminary work shows the feasibility of detecting situational context automatically from a small amount of sensor data collected from a smartphone app by converting the sensor data into multiple axes and applying a machine learning algorithm.
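The sketch below illustrates the windowing and feature-extraction step followed by a Random Forest, using scikit-learn; the sensor columns, window length, and the handful of per-axis statistics are illustrative stand-ins for the 145 features used in the paper.

```python
# Sketch: turn raw smartphone sensor streams into per-window feature vectors,
# then classify the situational context of each window with a Random Forest.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("smartphone_sensors.csv")  # placeholder export from the context-detection app
SENSORS = ["acc_x", "acc_y", "acc_z", "gyro_x", "gyro_y", "gyro_z"]
WINDOW = 100                                 # samples per window (assumed)

def window_features(frame: pd.DataFrame) -> np.ndarray:
    """Summarize one window of raw sensor samples with simple statistics per axis."""
    stats = []
    for col in SENSORS:
        v = frame[col].to_numpy()
        stats += [v.mean(), v.std(), v.min(), v.max(), np.abs(np.diff(v)).mean()]
    return np.array(stats)

X, y = [], []
for start in range(0, len(df) - WINDOW, WINDOW):
    chunk = df.iloc[start:start + WINDOW]
    X.append(window_features(chunk))
    y.append(chunk["context"].mode()[0])     # majority context label for the window
X, y = np.array(X), np.array(y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

Swapping the Random Forest for an SVM, a small neural network, or a decision tree in the same loop reproduces the kind of classifier comparison reported in the abstract.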
Citations: 0
Calculating an Approximate Voronoi Diagram using QuadTrees and Triangles
Pub Date : 2023-05-18 DOI: 10.1109/eIT57321.2023.10187239
T. E. Dettling, Byron DeVries, C. Trefftz
Calculating Voronoi diagrams quickly is useful across a range of fields and application areas. However, existing divide-and-conquer methods decompose the space into squares, while boundaries between Voronoi diagram regions are often not perfectly horizontal or vertical. In this paper we introduce a novel method of dividing approximate Voronoi diagram spaces into triangles stored in quadtree data structures. Since our implementation stores the resulting Voronoi diagram in a data structure, rather than assigning each approximated point to its closest region, we provide a comparison of the decomposition time alone.
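For orientation, the sketch below builds an approximate Voronoi diagram with a plain square-cell quadtree, subdividing a cell whenever its corners disagree on the nearest site; the triangle-based refinement that is the paper's contribution is not reproduced here.

```python
# Sketch: approximate Voronoi diagram via quadtree decomposition. A cell becomes a
# leaf when all four corners share the same nearest site; otherwise it is split
# into four sub-cells, up to a maximum depth.
import numpy as np

def nearest_site(p, sites):
    """Index of the site closest to point p."""
    return int(np.argmin(np.sum((sites - p) ** 2, axis=1)))

def build(x, y, size, sites, max_depth, depth=0):
    """Recursively decompose the square [x, x+size] x [y, y+size]."""
    corners = [(x, y), (x + size, y), (x, y + size), (x + size, y + size)]
    owners = {nearest_site(np.array(c), sites) for c in corners}
    if len(owners) == 1:
        # Homogeneous cell: the whole square is approximated by one Voronoi region.
        return {"x": x, "y": y, "size": size, "site": owners.pop()}
    if depth == max_depth:
        # Depth limit reached: fall back to the region owning the cell center.
        center = np.array([x + size / 2, y + size / 2])
        return {"x": x, "y": y, "size": size, "site": nearest_site(center, sites)}
    half = size / 2
    return {"children": [build(x, y, half, sites, max_depth, depth + 1),
                         build(x + half, y, half, sites, max_depth, depth + 1),
                         build(x, y + half, half, sites, max_depth, depth + 1),
                         build(x + half, y + half, half, sites, max_depth, depth + 1)]}

sites = np.random.rand(10, 2)  # random generator points in the unit square
tree = build(0.0, 0.0, 1.0, sites, max_depth=7)
```

The square cells generated this way are exactly the case the paper improves on: splitting mixed-ownership cells into triangles lets the stored boundaries follow the diagonal edges between Voronoi regions more closely.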
Citations: 0