
2023 International Conference on Bio Signals, Images, and Instrumentation (ICBSII): Latest Publications

Statistical and Deep Convolutional Feature Fusion for Emotion Detection from Audio Signal
Pub Date : 2023-03-16 DOI: 10.1109/ICBSII58188.2023.10181060
Durgesh Ameta, Vinay Gupta, Rohit Pilakkottil Sathian, Laxmidhar Behera, Tushar Sandhan
Speech is a crucial mode of expression through which individuals articulate their thoughts, and it can offer valuable insight into their emotional state. Various studies have sought metrics for determining the emotional sentiment hidden in an audio signal. This paper presents an exploratory analysis of several audio features, including Chroma features, MFCCs, spectral features, and flattened spectrogram features (obtained using the VGG-19 convolutional neural network), for sentiment analysis of audio signals. The study evaluates the effectiveness of combining these features for determining emotional states expressed in speech, using the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS). Baseline techniques such as Random Forest, Multi-Layer Perceptron (MLP), Logistic Regression, XGBoost, and Support Vector Machine (SVM) are used to compare the performance of the features. The results provide insight into the potential of these audio features for determining emotional states expressed in speech.
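As a rough illustration of the statistical-feature branch described above, the sketch below extracts Chroma, MFCC and spectral descriptors with librosa, averages them over time, and compares several of the listed baseline classifiers with cross-validation. The file layout, label parsing and hyperparameters are assumptions, and the VGG-19 spectrogram branch and XGBoost are omitted; this is not the authors' exact pipeline.

```python
# Minimal sketch of a statistical audio-feature pipeline on RAVDESS-style .wav files.
# Assumes the emotion label is encoded as the third dash-separated field of the filename.
import os
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def statistical_features(path, n_mfcc=13):
    """Frame-level Chroma, MFCC and spectral descriptors, averaged over time."""
    y, sr = librosa.load(path, sr=None)
    feats = [
        librosa.feature.chroma_stft(y=y, sr=sr),          # 12 chroma bins
        librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc),  # MFCCs
        librosa.feature.spectral_centroid(y=y, sr=sr),
        librosa.feature.spectral_bandwidth(y=y, sr=sr),
        librosa.feature.spectral_rolloff(y=y, sr=sr),
        librosa.feature.zero_crossing_rate(y),
    ]
    return np.concatenate([f.mean(axis=1) for f in feats])

def ravdess_label(path):
    """RAVDESS filenames encode the emotion as the third dash-separated field."""
    return int(os.path.basename(path).split("-")[2])

def compare_baselines(wav_paths):
    X = np.stack([statistical_features(p) for p in wav_paths])
    y = np.array([ravdess_label(p) for p in wav_paths])
    models = {
        "RandomForest": RandomForestClassifier(n_estimators=200),
        "MLP": MLPClassifier(hidden_layer_sizes=(128,), max_iter=1000),
        "LogisticRegression": LogisticRegression(max_iter=1000),
        "SVM": SVC(kernel="rbf"),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```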
Citations: 0
Design of an IP core for motion blur detection in fundus images using an FPGA-based accelerator
Pub Date : 2023-03-16 DOI: 10.1109/ICBSII58188.2023.10181073
Rohit Jacob George, S. Charaan, R. Swathi, S. Rani
This paper focuses on applying an algorithm for real-time blur detection in fundus images via hardware acceleration. Blur in fundus images arises from many factors, but in most cases it can, with a reasonable degree of accuracy, be classified as motion blur. Motion blur can be modelled as an image convolved with a blur transfer function. Blur metrics are identified via techniques such as the Haar DWT, which gives reasonable accuracy for most types of linear blur. First, a hardware architecture that computes the edge maps of images is created using Verilog HDL. This architecture is based on a novel algorithm built around a series of Haar DWT units. The simplicity and flexibility of the proposed architecture allow any software or hardware platform to integrate the proposed model with little to no modification. Subsequently, the IP core for the proposed architecture is developed; it can be further extended into an SoC and programmed onto a suitable FPGA system, to which images are uploaded and classified as blurred or clear. The on-chip processing system of the FPGA-SoC reads the image data and sends it to the Blur Detector IP via the DMA IP in the SoC. The whole process uses a double-buffered design to reduce IP stall time and increase efficiency.
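The edge-map computation that the hardware Haar DWT units implement can be modelled in software roughly as follows. This NumPy sketch is only an illustrative reference of the algorithmic idea (the actual design is written in Verilog HDL), and the simple weak-edge blur score at the end is an assumption, not the paper's metric.

```python
# Software reference model of a one-level 2-D Haar DWT edge map (averaging convention).
import numpy as np

def haar_dwt2_level1(img):
    """One-level 2-D Haar DWT on a grayscale image with even dimensions."""
    img = img.astype(np.float64)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0          # approximation sub-band
    lh = (a + b - c - d) / 4.0          # horizontal detail
    hl = (a - b + c - d) / 4.0          # vertical detail
    hh = (a - b - c + d) / 4.0          # diagonal detail
    return ll, lh, hl, hh

def edge_map(img):
    """Combine the three detail sub-bands into a single edge-strength map."""
    _, lh, hl, hh = haar_dwt2_level1(img)
    return np.sqrt(lh ** 2 + hl ** 2 + hh ** 2)

def blur_score(img, edge_thresh=8.0):
    """Fraction of pixels with weak edges; higher values suggest more blur (assumed metric)."""
    e = edge_map(img)
    return float(np.mean(e < edge_thresh))
```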
Citations: 0
Variability of E-field in Dorsolateral Prefrontal Cortex Upon a Change in Electrode Parameters in tDCS.
Pub Date : 2023-03-16 DOI: 10.1109/ICBSII58188.2023.10180905
Utkarsh Pancholi, Vijay Dave
Transcranial direct current stimulation (tDCS) is a form of transcranial electrical stimulation widely used for treating patients with neurological and psychological abnormalities, as well as for cognitive improvement. With a simple design and operating procedure, tDCS is considered a safe and effective therapy choice. With predefined treatment protocols, it is possible to achieve the required electric field within the inner structures of the brain to excite or inhibit neuronal activity. The generated electric field varies among individuals due to anatomical and functional differences in brain tissues. In-situ modeling of the therapeutic procedure can help assure the probabilistic outcome of tDCS. In this study, we obtained results for electric field strength variability in a cognitively normal subject. We simulated the subject with variations in stimulating electrode size and shape, using electrode-gel and electrode-sponge combinations in SimNIBS (v3.2.6), and measured electric field strength and focality. Simulated results show that the E-field strength and focality depend less on gel or sponge thickness and more on electrode size and shape. Increasing the electrode size reduces electric field strength and focality and yields an asymmetrical E-field distribution, whereas decreasing it generates a more symmetrical and focused E-field with higher strength.
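For readers unfamiliar with the two quantities being compared, the sketch below computes a robust peak field strength and a focality volume from an exported field-magnitude array. The 99.9th-percentile peak and the 75%-of-peak focality threshold are common conventions assumed here for illustration; they are not taken from the paper, and loading of actual SimNIBS output is left out.

```python
# Post-processing sketch: strength and focality metrics from an |E| array (assumed conventions).
import numpy as np

def efield_metrics(e_mag, voxel_volume_mm3=1.0, focality_fraction=0.75):
    """Return (robust peak |E|, focality volume in mm^3) for a field-magnitude array.

    e_mag            : array of |E| values (e.g. V/m) over grey-matter voxels
    voxel_volume_mm3 : volume represented by each sample
    focality_fraction: fraction of the peak used as the focality threshold
    """
    peak = np.percentile(e_mag, 99.9)                      # robust peak, ignores outliers
    focal_volume = np.sum(e_mag >= focality_fraction * peak) * voxel_volume_mm3
    return peak, focal_volume

# Example with a synthetic field (placeholder for a field exported from a simulation):
rng = np.random.default_rng(0)
e_sample = rng.gamma(shape=2.0, scale=0.12, size=100_000)  # synthetic |E| values in V/m
print(efield_metrics(e_sample, voxel_volume_mm3=1.0))
```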
Citations: 0
Segmentation and Severity Classification of Dementia in Magnetic Resonance Imaging using Deep Learning Networks
Pub Date : 2023-03-16 DOI: 10.1109/ICBSII58188.2023.10181083
A. Sarath Vignesh, H. Denicke Solomon, P. Dheepan, G. Kavitha
Magnetic resonance imaging is the accepted standard for analyzing any deformation in the brain. Many biomarkers can be considered for analyzing the effect of Alzheimer's disease on the brain. One such biomarker is the ventricle, which expands as Alzheimer's disease progresses. Ventricle segmentation therefore plays a vital role in diagnosis, and automated approaches are preferred since manual segmentation takes much longer. In this work, the magnetic resonance images are skull-stripped using a combination of Fuzzy C-means clustering and the Chan-Vese contouring technique. Segmentation of the ventricle is performed by the deep learning architectures U-Net and SegUnet on 1164 transverse MR images acquired from the ADNI (Alzheimer's Disease Neuroimaging Initiative) database, an open-source database for research on dementia. Features are extracted from the segmented images using ResNet-101 and classified with a classifier-merger approach consisting of three classifiers. The final class label is obtained by majority voting on the individual classifier predictions. The results were compared and analyzed.
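The classifier-merger stage with majority voting can be sketched with scikit-learn's hard-voting ensemble over ResNet-101 feature vectors, as below. The choice of SVM, random forest and logistic regression as the three members is an assumption for illustration, since the abstract does not name the individual classifiers.

```python
# Hard (majority) voting over three base classifiers trained on ResNet-101 features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def build_merger():
    return VotingClassifier(
        estimators=[
            ("svm", SVC(kernel="rbf")),
            ("rf", RandomForestClassifier(n_estimators=300)),
            ("lr", LogisticRegression(max_iter=2000)),
        ],
        voting="hard",               # final label = majority vote of the three members
    )

# X: (n_samples, 2048) ResNet-101 feature vectors from the segmented MR slices,
# y: severity labels. Both are random placeholders here.
X = np.random.rand(200, 2048).astype(np.float32)
y = np.random.randint(0, 3, size=200)
print(cross_val_score(build_merger(), X, y, cv=5).mean())
```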
Citations: 0
Infrared Thermograms for Diagnosis of Dry Eye: A Review
Pub Date : 2023-03-16 DOI: 10.1109/ICBSII58188.2023.10181092
J. Persiya., A. Sasithradevi, S. Roomi
Infrared thermography is a non-intrusive, contactless temperature measurement technique that provides real-time surface temperature distributions. Ocular Surface Temperature (OST) can be measured using thermography without harming the subjects. The technique is used in a wide variety of applications, especially medical ones. Recently, infrared thermal images of the eye have been used to diagnose and detect many diseases and features of the human eye. This paper examines the methods currently in use for diagnosing dry eye, with the main focus on thermal images. The thermographic technique has proved to be a highly sensitive and accurate method for detecting eye disorders. Various machine learning and deep learning algorithms are discussed. Finally, it is concluded that deep learning combined with thermography is most likely to be used for detecting dry eye disease.
Citations: 0
Smart Crop Protection System From Animals Using AI
Pub Date : 2023-03-16 DOI: 10.1109/ICBSII58188.2023.10181093
K. Dharanipriya, S. Sathyageetha, K. Sowmia, J. Srinidhi
Animal attacks on crops are one of the main causes of reduced crop production. Crop raiding has become one of the most acrimonious conflicts as farmed land encroaches on formerly uninhabited areas. Pests, natural disasters, and animal damage pose severe risks to Indian farmers, lowering productivity. Farmers' traditional tactics are ineffective, and it is not practical to hire guards to watch over crops and keep animals away. Since animal and human safety are equally important, it is crucial to safeguard the crops from damage by animals without harming them, and instead to divert them away. To overcome these issues and achieve this goal, we employ deep learning, using a deep neural network (a branch of computer vision) to recognize animals that visit the farm. In this project, the entire farm is monitored by a camera that continuously records its surroundings. A deep learning model recognizes when animals are entering, and an SD card and speaker are then used to play appropriate sounds to scare them away. The convolutional neural network libraries and principles used to build the model are described in this research.
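An illustrative sketch of such a monitoring loop is given below: frames are read from the camera, classified with a pretrained ImageNet CNN, and a deterrent sound is triggered when an animal class is detected with sufficient confidence. The keyword list, confidence threshold, MobileNetV2 backbone and the play_deterrent_sound() helper are assumptions for illustration rather than the system described in the paper.

```python
# Camera monitoring loop: classify frames and trigger a deterrent when an animal is seen.
import cv2
import numpy as np
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input, decode_predictions

ANIMAL_KEYWORDS = {"elephant", "wild_boar", "ox", "hog", "monkey"}  # assumed label keywords
model = MobileNetV2(weights="imagenet")

def play_deterrent_sound():
    # Placeholder: on the real device this would drive the speaker from stored audio clips.
    print("Playing deterrent sound...")

def looks_like_animal(frame, threshold=0.5):
    """Return True if the top ImageNet prediction matches an animal keyword with high confidence."""
    img = cv2.resize(frame, (224, 224))
    x = preprocess_input(np.expand_dims(img[:, :, ::-1].astype(np.float32), 0))  # BGR -> RGB
    _, label, score = decode_predictions(model.predict(x, verbose=0), top=1)[0][0]
    return score >= threshold and any(k in label for k in ANIMAL_KEYWORDS)

cap = cv2.VideoCapture(0)            # the farm surveillance camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if looks_like_animal(frame):
        play_deterrent_sound()
cap.release()
```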
Citations: 0
Keynote Speakers’ Profile
Pub Date : 2023-03-16 DOI: 10.1109/icbsii58188.2023.10181081
Citations: 0
A Deep Learning Framework for Semantic Segmentation of Nucleus for Acute Lymphoblastic Leukemia Detection
Pub Date : 2023-03-16 DOI: 10.1109/ICBSII58188.2023.10181067
A. Prasanna., S. Saran, N. Manoj, S. Alagu
Acute lymphoblastic leukemia is a form of blood cancer in which the bone marrow overproduces immature white blood cells. A novel semantic segmentation of the nucleus for detecting acute lymphoblastic leukemia is proposed here. The input images are obtained from the public database "ALLIDB2". Resizing, SMOTE, and augmentation are carried out as preprocessing. After preprocessing, segmentation of the nucleus is performed by SegNet and ResUNet, and their performance is compared. The segmented images are then given as input to the classification models. Using the Xception, Inception-v3, and ResNet50 models, the segmented images are classified as healthy or blast cells. Inception-v3 is found to perform better than Xception and ResNet50, with an accuracy of 93.74%. This will be helpful for detecting acute lymphoblastic leukemia at the earliest stage.
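A transfer-learning sketch of the best-performing model, an ImageNet-pretrained Inception-v3 backbone with a small binary head for the healthy-versus-blast decision, is shown below. The head layers, optimiser and input size follow common Keras practice and are assumptions, not the paper's exact configuration.

```python
# Inception-v3 backbone with a small binary classification head for healthy vs. blast cells.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

def build_all_classifier(input_shape=(299, 299, 3)):
    base = InceptionV3(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False                       # start by training only the new head
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # healthy (0) vs. blast (1)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# The training/validation sets would hold the segmented, SMOTE-balanced and augmented
# nucleus images resized to 299x299 (not shown here).
model = build_all_classifier()
model.summary()
```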
Citations: 0
Resampling-free fast particle filtering with application to tracking rhythmic biomedical signals
Pub Date : 2023-03-16 DOI: 10.1109/ICBSII58188.2023.10181079
Mohammed Ashik, Ramesh Patnaik Manapuram, P. Choppala
The particle filter is known to be a powerful tool for estimating time-varying latent states guided by nonlinear dynamics and sensor measurements. The particle filter's traditional resampling step is essential because it avoids degeneracy by stochastically eliminating small-weight particles that do not contribute to the posterior probability density function and replacing them with copies of those having large weights. Nevertheless, resampling is computationally costly since it requires extensive and sequential communication among the particles. This work proposes a novel particle filtering method that eliminates the need for resampling and prevents degeneracy by substituting low-weight particles through a simple cutoff decision strategy based on the cumulative sum of weights. The proposed scheme limits replacement to only a few important particles and hence substantially accelerates the filtering process. We show the merits of the proposed method via simulations on a nonlinear example and also apply the method to tracking the harmonics of real biomedical signals.
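The sketch below shows a bootstrap particle filter for a noisy sinusoid with the resampling step swapped for a cumulative-weight cutoff replacement. The specific rule used here, copying the highest-weight particle over the low tail of the sorted cumulative weights, is only one plausible reading of the proposed strategy and is written purely for illustration; the sinusoidal model is likewise an assumed example.

```python
# Bootstrap particle filter with a cutoff-based replacement step instead of resampling.
import numpy as np

rng = np.random.default_rng(1)

def cutoff_replace(particles, weights, cutoff=0.1):
    """Replace the low tail (cumulative weight < cutoff) without stochastic resampling."""
    order = np.argsort(weights)
    csum = np.cumsum(weights[order])
    tail = order[csum < cutoff]                   # particles contributing the least mass
    particles[tail] = particles[order[-1]]        # overwrite them with the best particle
    weights[:] = 1.0 / len(weights)               # reset to uniform, as after resampling
    return particles, weights

def run_filter(y, n_particles=500, q=0.05, r=0.2):
    """Track the phase of a unit-amplitude sinusoid observed in noise."""
    phase = rng.uniform(0, 2 * np.pi, n_particles)         # particle states
    w = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for yk in y:
        phase = (phase + 0.1 + q * rng.standard_normal(n_particles)) % (2 * np.pi)
        w *= np.exp(-0.5 * ((yk - np.sin(phase)) / r) ** 2)  # measurement likelihood
        w /= w.sum()
        estimates.append(np.sum(w * np.sin(phase)))          # weighted posterior mean
        phase, w = cutoff_replace(phase, w)                  # resampling-free step
    return np.array(estimates)

# Synthetic rhythmic signal observed in noise.
true_phase = 0.1 * np.arange(200)
y = np.sin(true_phase) + 0.2 * rng.standard_normal(200)
print(np.mean((run_filter(y) - np.sin(true_phase)) ** 2))    # rough tracking error
```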
Citations: 0
Facial Expression Recognition using Convolutional Neural Network
Pub Date : 2023-03-16 DOI: 10.1109/ICBSII58188.2023.10181041
Mj Alben Richards, E. Kaaviya Varshini, N. Diviya, P. Prakash, Kasthuri P
Facial expression is a form of non-verbal communication using the eyes, lips, nose, and facial muscles; smiling and rolling the eyes are two examples. Facial expression recognition is the process of extracting facial features from a person. Facial expressions include anger, happiness, disgust, sadness, neutrality, fear, and surprise. Using machine learning, an expression recognition model is built with a Convolutional Neural Network. The input data is fed to the system to produce the expected results, and the model is trained on the Facial Expression Recognition (FER) dataset. The Convolutional Neural Network (CNN) gives good and accurate results. A Haar cascade classifier separates the face and non-face regions in the input image, which helps the convolutional network classify the images. Good image classification can be achieved with such classifiers, which can be implemented using the OpenCV library.
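A minimal sketch of this pipeline is given below: OpenCV's Haar cascade isolates face regions and a small Keras CNN classifies them into the seven expression classes. The 48x48 grayscale input and the layer sizes follow the common FER-2013 convention and are assumptions here, not the paper's exact architecture.

```python
# Haar cascade face detection followed by a small 7-class expression CNN.
import cv2
from tensorflow.keras import layers, models

EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def build_fer_cnn(input_shape=(48, 48, 1), n_classes=7):
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

def detect_and_crop_faces(bgr_frame):
    """Return 48x48 grayscale face crops found by the Haar cascade."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    boxes = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [cv2.resize(gray[y:y + h, x:x + w], (48, 48)) for (x, y, w, h) in boxes]

model = build_fer_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(...) would be run on the FER training images before inference.
```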
Citations: 0