Pub Date: 2023-05-18 | DOI: 10.1109/eIT57321.2023.10187234
J. Fetzer, Renisha Redij, Joshika Agarwal, Anjali Rajagopal, K. Gopalakrishnan, A. Cherukuri, John B. League, D. Vinsard, C. Leggett, Coelho-Prabhu Nayantara, S. P. Arunachalam
Gastrointestinal endoscopy is a commonly used diagnostic procedure for surveillance colonoscopies in patients with Inflammatory Bowel Disease (IBD). Patients with IBD can have benign, inflammatory, or malignant polyps that require further testing and evaluation. Endoscopic image acquisition often suffers from poor image quality, so preprocessing steps are frequently essential when developing artificial intelligence (AI)-assisted models for IBD polyp detection. AI-based detection and differentiation of these polyps can be made more efficient by reducing human error. The purpose of this work was to evaluate the utility of several digital filters, namely the average filter (AF), median filter (MF), Gaussian filter (GF), and Savitzky-Golay filter (SG), for enhancing images to improve deep learning detection of IBD polyps, and to compare performance against unenhanced images. IBD polyp images from high-definition white light endoscopy (HDWLE) from the Mayo Clinic GIH Division were used to develop a You-Only-Look-Once (YOLO) model, which employs convolutional neural networks (CNN), to detect IBD polyps. Kernel sizes of 3x3, 5x5, 7x7, 9x9, 11x11, and 13x13 were employed for all four filter types, and the YOLO model was deployed for each case. Performance was measured using the precision-recall curve and the area under the curve (AUC). 80% of the data was used for training and validation and 20% for testing. A moderate 5-10% improvement in deep learning model performance was observed. Further testing with different model parameters and filter settings is required to validate these findings.
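As a rough illustration of the preprocessing stage, the sketch below applies the four filter families named in the abstract to a synthetic grayscale image using SciPy. The kernel-to-sigma mapping for the Gaussian filter and the row-then-column application of the 1-D Savitzky-Golay filter are assumptions made for the sketch, not details taken from the study.

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter, gaussian_filter
from scipy.signal import savgol_filter

def enhance(img, kernel=5):
    """Apply the four smoothing filters compared in the study to one
    grayscale image; kernel is the window size (3, 5, ..., 13)."""
    return {
        "AF": uniform_filter(img, size=kernel),          # average filter
        "MF": median_filter(img, size=kernel),           # median filter
        "GF": gaussian_filter(img, sigma=kernel / 6.0),  # sigma from window (assumed mapping)
        # Savitzky-Golay is 1-D; applying it along rows then columns is one
        # common 2-D extension (an assumption, not necessarily the authors' choice).
        "SG": savgol_filter(savgol_filter(img, kernel, 2, axis=0), kernel, 2, axis=1),
    }

img = np.random.rand(64, 64)          # stand-in for one HDWLE frame
filtered = enhance(img, kernel=5)
```

Each variant would then be fed to the detector separately, so the six kernel sizes times four filters give 24 training configurations to compare against the unenhanced baseline.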
Title: Endoscopic Image Enhanced Deep Learning Algorithm for Inflammatory Bowel Disease (IBD) Polyp Detection: Feasibility Study
Venue: 2023 IEEE International Conference on Electro Information Technology (eIT)
Pub Date: 2023-05-18 | DOI: 10.1109/eIT57321.2023.10187241
Solene Bechelli, D. Apostal, A. Bergstrom
With the increase in computational power and techniques over the past decades, the use of Deep Learning (DL) algorithms in the biomedical field has grown significantly. One of the remaining challenges in using deep neural networks is properly tuning a model's performance beyond its raw accuracy. In this work, we therefore combine the NVIDIA DALI API, for high-speed storage access, with the TensorFlow framework, applied to the image classification task of skin cancer. To that end, we use the VGG16 model, known to perform accurately on skin cancer classification. We compare training on CPU, GPU, and multi-GPU devices in terms of both accuracy and runtime performance, and evaluate these results on additional models as a means of comparison. Our work shows the importance of model choice and of fine-tuning tailored to a particular application. Moreover, we show that the use of high-speed storage considerably increases the performance of DL models, particularly when handling images and large databases.
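DALI itself targets GPU pipelines, but the principle it exploits, overlapping storage reads with compute through a bounded prefetch buffer, can be sketched with the standard library alone. The batch count and sleep times below are illustrative stand-ins for storage and training latency.

```python
import queue
import threading
import time

def loader(batches, q):
    # Simulates storage reads; a faster storage tier shrinks this sleep,
    # which is the effect a prefetching pipeline lets training feel directly.
    for b in range(batches):
        time.sleep(0.01)          # pretend disk/NVMe read
        q.put(b)
    q.put(None)                   # sentinel: no more data

def train(q, results):
    # Consumes batches while the loader keeps reading ahead in parallel.
    while (batch := q.get()) is not None:
        time.sleep(0.01)          # pretend forward/backward pass
        results.append(batch)

q = queue.Queue(maxsize=4)        # bounded prefetch buffer
results = []
t = threading.Thread(target=loader, args=(8, q))
t.start()
train(q, results)
t.join()
```

With the reads overlapped, total wall time approaches max(read, compute) per batch rather than their sum, which is why storage speed shows up so clearly in end-to-end training throughput.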
Title: The Importance of High Speed Storage in Deep Learning Training
Venue: 2023 IEEE International Conference on Electro Information Technology (eIT)
Pub Date: 2023-05-18 | DOI: 10.1109/eIT57321.2023.10187218
Cristinel Ababei, S. Schneider
We present a complete prototype implementation of the classic Fly-n-Shoot game on an FPGA. This well-known game has previously been described, using UML statecharts, as an event-driven embedded system. Because of its rather complex functionality, describing it in a hardware description language (HDL) such as VHDL or Verilog, with the goal of deploying it on a real FPGA, is challenging: brute-force attempts to write HDL descriptions are error-prone and suffer long design times. Hence, in this paper, we describe a practical approach for translating UML statecharts that specify event-driven embedded systems into VHDL code written in the popular two-process coding style. The approach consists of a set of mapping rules from statechart concepts to VHDL constructs. Its efficacy and correct-by-design characteristics stem from using two-process VHDL coding to describe the hierarchical finite state machine (FSM) corresponding to the UML statecharts. This gives the designer better control over the current- and next-state signals of the FSMs, is more modular and object-oriented, and makes development and debugging much easier. We apply the proposed approach to implement a prototype of the classic Fly-n-Shoot game. The implementation is verified successfully on real hardware, the DE1-SoC FPGA development board, which uses a Cyclone IV FPGA chip.
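The two-process style the paper targets separates a pure next-state function (the combinational process) from a clocked state update (the registered process). A minimal software sketch of that separation, with invented states and events rather than the actual Fly-n-Shoot statechart, might look like:

```python
# Combinational process: a pure function from (state, event) to next state.
# States and events here are illustrative, not taken from the paper.
def next_state(state, event):
    transitions = {
        ("IDLE", "start"): "FLYING",
        ("FLYING", "fire"): "SHOOTING",
        ("SHOOTING", "done"): "FLYING",
        ("FLYING", "quit"): "IDLE",
    }
    return transitions.get((state, event), state)  # default: hold current state

# Registered process: commits the next state once per "clock edge",
# here modeled as one event per tick.
def clock(state, events):
    for ev in events:
        state = next_state(state, ev)
    return state

final = clock("IDLE", ["start", "fire", "done", "quit"])
```

Keeping the transition logic pure and the state commit separate is exactly what gives the designer explicit handles on the current- and next-state signals; in VHDL the dictionary becomes a case statement in the combinational process and the loop becomes the clocked process.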
Title: Hardware Description of Event-driven Systems by Translation of UML Statecharts to VHDL
Venue: 2023 IEEE International Conference on Electro Information Technology (eIT)
Pub Date: 2023-05-18 | DOI: 10.1109/eIT57321.2023.10187304
Zhao-Ying Zhou, Jun Chen, Mingyuan Tao, P. Zhang, Meng Xu
This paper presents an experimental validation of event-triggered model predictive control (MPC) for autonomous vehicle (AV) path tracking using real-world testing. Path tracking is a critical aspect of AV control, and MPC is a popular control method for this task. However, traditional MPC requires extensive computational resources to solve real-time optimization problems, which can be challenging to implement in the real world. To address this issue, event-triggered MPC, which solves the optimization problem only when a triggering event occurs, has been proposed in the literature to reduce computational requirements. This paper compares event-triggered MPC to traditional time-triggered MPC through real-world testing, and the results demonstrate that the event-triggered method not only offers a significant reduction in computation compared to time-triggered MPC but also improves control performance.
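The triggering idea can be sketched on a toy one-dimensional tracking problem, where a proportional correction stands in for the MPC solve; the threshold, gain, and plant model below are illustrative, not the paper's setup.

```python
def simulate(trigger_threshold=0.05, steps=50):
    """Track a constant reference; re-solve only when the error drifts
    past the threshold, otherwise hold the previous control input."""
    x, u, reference = 0.0, 0.0, 1.0
    solves = 0
    for _ in range(steps):
        error = reference - x
        if abs(error) > trigger_threshold:   # triggering condition
            u = 0.5 * error                  # stand-in for an MPC solve
            solves += 1
        x += u * 0.5                         # simple integrator plant
    return x, solves

x, solves = simulate()
```

In this sketch the controller converges close to the reference while invoking the "solver" on only a fraction of the time steps, which is the computational saving the paper quantifies with a real optimizer and vehicle.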
Title: Experimental Validation of Event-Triggered Model Predictive Control for Autonomous Vehicle Path Tracking
Venue: 2023 IEEE International Conference on Electro Information Technology (eIT)
Pub Date: 2023-05-18 | DOI: 10.1109/eIT57321.2023.10187230
Manav Tailor, J. Ali, Xinrui Yu, Won-Jae Yi, J. Saniie
This paper presents a real-time prototype system for monitoring a driver's distraction level. Under the heavy traffic conditions common today, accidents become more likely because drivers cannot always recognize their own level of exhaustion. We use a single low-cost camera facing the driver, connected to a single-board computer; a series of frames captured from the camera is fed to a neural network, and a pattern-detection algorithm predicts the driver's distraction level. All training is conducted on personalized training sets to increase accuracy and to match an individual's driving patterns as closely as possible. The system is designed to serve as a baseline for further development; many vital sub-components, such as the input data type and the choice of machine learning algorithms, can be exchanged.
Title: Application of Machine Learning and Image Recognition for Driver Attention Monitoring
Venue: 2023 IEEE International Conference on Electro Information Technology (eIT)
Pub Date: 2023-05-18 | DOI: 10.1109/eIT57321.2023.10187390
Christian Weber, P. Czerner, M. Fathi
Manufacturing as an industry is under continuous pressure to deliver the right product, at the right quality and quantity, and on time. To do so, it becomes increasingly important to find the source of manufacturing problems quickly, and also to prevent the recurrence of known problems. Data mining focuses on identifying problem patterns and inferring the right interpretation to trace and resolve the root cause in time. However, lessons learned are rarely carried over into digital solutions that would fully automate the detection and resolution of incidents: data mining models exist, but there is no structured approach for transforming the solutions they find into digital form and sustaining them there. We introduce Digit-DM, a structured and strategic process for digitizing analytical results. Digit-DM builds on existing data mining models but defines a strategic process for continuous digitization, enabling sustainable digital manufacturing support that draws on analytical lessons learned.
Title: Digit-DM: A Sustainable Data Mining Modell for Continuous Digitization in Manufacturing
Venue: 2023 IEEE International Conference on Electro Information Technology (eIT)
Malaria continues to be a significant burden on global health, with 247 million clinical episodes and 619,000 deaths. Alongside biomedical science, technology and informatics have begun to participate in the quest against malaria. Microscopy techniques are frequently used to detect malaria parasites in infected red blood cells. Giemsa stain has been used to stain blood parasites for over a century; the stain is applied after fixing blood smears in methyl alcohol for 25 to 30 minutes [1]. When stained slides are examined under a microscope, the parasites are easily discernible by morphology and color. We observed that detection on these slides can be automated with deep learning at high accuracy, and a comparison between deep learning models reveals that ResNets provide better performance.
Pub Date: 2023-05-18 | DOI: 10.1109/eIT57321.2023.10187342
Title: Deep Learning Application for Detection of Malaria
Venue: 2023 IEEE International Conference on Electro Information Technology (eIT)
Pub Date: 2023-05-18 | DOI: 10.1109/eIT57321.2023.10187360
Y. Al-Khassawneh
Over the last few years, Intrusion Detection Systems (IDS) have proven to be an effective means of achieving higher levels of security by detecting potentially harmful actions. Current anomaly detection methods are frequently associated with high false alarm rates and low accuracy and detection rates, because they cannot accurately identify all types of attack. For establishing reliable and all-encompassing security, intrusion detection systems are invaluable tools for managed service providers (MSPs); with so many IDS options available, it can be hard to determine which one is best for a given business and its customers. For training and testing an IDS, access to a dataset with a large amount of high-quality data representative of real-world conditions is invaluable. In this work, the NSL-KDD dataset is analyzed and used to assess the effectiveness of various classification algorithms in detecting anomalies in network traffic patterns. In addition, we investigate the relationship between attacks and the protocols of the commonly used network protocol stack, to determine how attackers generate abnormal network traffic. The investigation has yielded a wealth of information about the relationship between protocols and network attacks. Furthermore, the proposed model not only improves IDS precision but also opens up a new research avenue in this field.
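The false-alarm and detection rates the abstract refers to are simple ratios over a binary confusion matrix (attack vs. normal traffic). The counts below are hypothetical, not NSL-KDD results.

```python
def ids_metrics(tp, fp, tn, fn):
    """Standard IDS evaluation ratios from binary confusion counts:
    tp = attacks flagged, fp = normal traffic flagged, tn = normal
    traffic passed, fn = attacks missed."""
    detection_rate = tp / (tp + fn)        # recall on the attack class
    false_alarm_rate = fp / (fp + tn)      # fraction of normal traffic flagged
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return detection_rate, false_alarm_rate, accuracy

# Hypothetical counts for a classifier evaluated on a 2,000-record split.
dr, far, acc = ids_metrics(tp=900, fp=50, tn=950, fn=100)
```

Reporting the pair (detection rate, false alarm rate) is more informative than accuracy alone, since a classifier that flags nothing can still look accurate on traffic that is mostly normal.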
Title: An investigation of the Intrusion detection system for the NSL-KDD dataset using machine-learning algorithms
Venue: 2023 IEEE International Conference on Electro Information Technology (eIT)
Pub Date: 2023-05-18 | DOI: 10.1109/eIT57321.2023.10187382
Nabonita Mitra, B. Morshed
Early detection and continuous monitoring can help reduce the complexity of treatment and recovery. For this purpose, many modern technologies are being used, such as smart wearable devices for diagnosing different types of human disease, and automated tutoring systems. Artificial intelligence (AI) has brought vast improvement to human healthcare and education delivery, but AI algorithms can have high error rates if situational context is ignored, and currently there is no automated approach to detecting it. In this work, we propose a novel approach to automatically detect situational context with a smartphone context-detection app, using AI on a minimal sensor modality. We begin by converting a small set of sensor data from the smartphone app into a multitude of axes, then determine situational context from these axes using a machine learning algorithm. We first evaluated the $k$-means algorithm on the converted data, grouping it into clusters according to context. However, $k$-means has many weaknesses that negatively affect its clustering performance. To detect situational contexts more accurately, we therefore applied several machine learning (ML) algorithms and compared their characteristic parameters and attributes. To train and test the ML models, 145 features were extracted from the dataset; we used a dataset with 53,679 distinct values to evaluate the performance of the different algorithms in detecting five situational contexts of the users. Experimental results show that the accuracies of the Support Vector Machine, Random Forest, Artificial Neural Network, and Decision Tree classifiers are 95%, 99%, 97%, and 98%, respectively; the most effective classifier overall is Random Forest. This preliminary work shows the feasibility of automatically detecting situational context from a few sensor readings collected by a smartphone app, by converting the sensor data to multiple axes and applying a machine learning algorithm.
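As a reminder of what the first, unsupervised stage computes, here is a minimal k-means sketch on two synthetic 2-D blobs standing in for two situational contexts; the data, the seeding, and the dimensionality are illustrative, not the paper's 145-feature dataset.

```python
import numpy as np

def kmeans(X, centers, iters=10):
    """Plain Lloyd iteration: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(dists, axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels, centers

# Two well-separated blobs stand in for two situational contexts.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
               rng.normal(5.0, 0.3, (50, 2))])
# Seed with one point from each blob to keep the sketch deterministic.
labels, centers = kmeans(X, centers=X[[0, 50]].copy())
```

On real sensor data the clusters are rarely this clean, which is the weakness that motivates the paper's move to supervised classifiers.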
Title: Detection of Situational Context From Minimal Sensor Modality of A Smartphone Using Machine Learning Algorithm
Venue: 2023 IEEE International Conference on Electro Information Technology (eIT)
Pub Date: 2023-05-18 | DOI: 10.1109/eIT57321.2023.10187239
T. E. Dettling, Byron DeVries, C. Trefftz
Calculating Voronoi diagrams quickly is useful across a range of fields and application areas. However, existing divide-and-conquer methods decompose the space into squares, while the boundaries between Voronoi regions are often not perfectly horizontal or vertical. In this paper we introduce a novel method of dividing approximate Voronoi diagram spaces into triangles stored in quadtree data structures. Because our implementation stores the resulting Voronoi diagram in a data structure, rather than assigning each approximated point to its closest region, we compare decomposition time alone.
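For contrast with the triangle-based method, the square-cell baseline the paper improves on can be sketched as a recursive quadtree decomposition: a cell is kept whole when all four of its corners agree on the nearest site, and split otherwise. The sites, extent, and minimum cell size below are illustrative.

```python
def nearest(p, sites):
    # Index of the site closest to point p (squared distance suffices).
    return min(range(len(sites)),
               key=lambda i: (p[0] - sites[i][0]) ** 2 + (p[1] - sites[i][1]) ** 2)

def decompose(x, y, size, sites, min_size=1):
    """Square-cell approximate Voronoi decomposition: recurse until a
    cell's corners all agree on their nearest site or the cell is minimal."""
    corners = [(x, y), (x + size, y), (x, y + size), (x + size, y + size)]
    owners = {nearest(c, sites) for c in corners}
    if len(owners) == 1 or size <= min_size:
        return [(x, y, size, owners)]        # homogeneous (or smallest) cell
    half = size / 2
    return (decompose(x, y, half, sites, min_size)
            + decompose(x + half, y, half, sites, min_size)
            + decompose(x, y + half, half, sites, min_size)
            + decompose(x + half, y + half, half, sites, min_size))

cells = decompose(0, 0, 16, sites=[(4, 4), (12, 12)])
```

With these two sites the region boundary is the diagonal x + y = 16, so the square decomposition must split all the way down to minimum-size cells along it; a triangle decomposition can align cell edges with such diagonal boundaries and terminate earlier.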
Title: Calculating an Approximate Voronoi Diagram using QuadTrees and Triangles
Venue: 2023 IEEE International Conference on Electro Information Technology (eIT)