
Latest Publications: 2022 15th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)

Facial Attribute Editing based on Independent Selective Transfer Unit and Self-attention Mechanism
Xiaoning Liu, Peiyao Guo, Jinhong Liu, Dongcheng Tuo, Shiyu Lei, Yuejin Wang
Facial attribute editing aims to change facial attributes and can be regarded as an image translation problem. It is usually realized by combining an encoder-decoder with Generative Adversarial Networks, but the generated images are not realistic enough, and such models offer only weak fine-grained control over the facial attributes of the generated images. In this work, we propose ISTSA-GAN, a Generative Adversarial Network based on an Independent Selective Transfer Unit (ISTU) and a self-attention mechanism. Building on STGAN, we replace the Selective Transfer Unit (STU) with the ISTU, combined with the encoder-decoder, to selectively transfer encoder features. In addition, a self-attention mechanism is introduced into the transposed convolution layers of the decoder to establish long-range dependencies across image regions. Finally, an attribute interpolation loss and a source-domain adversarial loss are added to constrain training. Experimental results show that this method improves attribute editing while preserving detail, and strengthens fine-grained control over the edited attributes. It outperforms classical methods in attribute editing accuracy and image quality.
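To make the decoder-side attention concrete, here is a minimal PyTorch sketch of a SAGAN-style self-attention block placed after a transposed convolution, the arrangement the abstract describes; channel counts and layer sizes are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention over the spatial positions of a feature map."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                     # (b, c//8, hw)
        attn = F.softmax(q @ k, dim=-1)                # (b, hw, hw) long-range weights
        v = self.value(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual connection

# Illustrative decoder stage: transposed convolution followed by self-attention.
decoder_stage = nn.Sequential(
    nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1),
    nn.BatchNorm2d(128),
    nn.ReLU(inplace=True),
    SelfAttention2d(128),
)
```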
Citations: 0
A Novel Motion Compensation Method for High Resolution Terahertz SAR Imaging
Zhaoxin Hao, J. Sun, D. Gu
Airborne terahertz synthetic aperture radar (THz-SAR) is sensitive to tiny vibrations of the platform because of its short wavelength. Therefore, the phase errors caused by high-frequency platform vibration need to be considered in motion compensation (MOCO) for THz-SAR imaging. Many MOCO methods exist to compensate the phase errors caused by high-frequency vibration; in some cases, however, low-frequency motion errors also need to be considered. Unlike these methods, this paper proposes a novel MOCO method that compensates both the high-frequency vibration and the low-frequency motion errors. First, the instantaneous chirp rate (ICR) and the instantaneous frequency are both estimated using chirplet decomposition. After filtering out the low-frequency component of the ICR, we estimate the high-frequency component with least squares (LS) sequential estimators. Then, the high-frequency component of the instantaneous frequency is removed, and the parameters of the low-frequency motion are estimated with an LS estimator. Finally, the errors are compensated according to the estimated parameters, and the residual phase errors are compensated by the phase gradient autofocus (PGA) algorithm. Simulation results validate the effectiveness of the proposed method.
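To make the error-splitting step concrete, here is a schematic NumPy sketch: it assumes the instantaneous-frequency error track has already been estimated (in the paper, via chirplet decomposition), separates it into low- and high-frequency parts with a Butterworth filter, fits the low-frequency part by least squares, and converts both parts to phase errors. Filter order, cutoff, and polynomial order are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def motion_phase_errors(f_inst, fs, cutoff_hz=5.0, poly_order=2):
    """Split an estimated instantaneous-frequency error track into low- and
    high-frequency parts and convert each to a phase-error term (radians).

    f_inst : estimated instantaneous frequency error in Hz (1-D array)
    fs     : azimuth sampling rate (pulse repetition frequency) in Hz
    """
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    f_low = filtfilt(b, a, f_inst)          # low-frequency motion component
    f_high = f_inst - f_low                 # high-frequency vibration component
    t = np.arange(f_inst.size) / fs
    # Model the low-frequency motion parametrically (least-squares polynomial fit).
    f_low_fit = np.polyval(np.polyfit(t, f_low, poly_order), t)
    phase_low = 2 * np.pi * np.cumsum(f_low_fit) / fs
    phase_high = 2 * np.pi * np.cumsum(f_high) / fs
    return phase_low, phase_high

# Compensation: multiply the azimuth signal by exp(-1j * (phase_low + phase_high));
# residual phase errors would then go to a PGA step.
```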
Citations: 0
An Optical Guidance Method for Robotic Intramuscular Injection System
Yunlong Zhu, Wenlong Zhang, Biao Yan, Rongqian Yang
Intramuscular (IM) injection is currently performed mainly by hand. Large-scale COVID-19 vaccination has exposed various problems of manual IM injection, and its clinical success rate is also unsatisfactory. A robotic intramuscular injection system (RIMIS) is expected to enable automated vaccination and improve the success rate of IM injection. Existing image-guided robotic needle insertion systems are not a practical option for IM injection because the medical imaging process is time-consuming. This paper proposes an optical guidance method for a RIMIS that uses a near-infrared optical tracking system and a retro-reflective patch to rapidly acquire the surface normal vector. A closed loop formed by six coordinate systems realizes accurate control of the injection angle and depth. Experimental results show that a RIMIS based on the proposed method can complete a simulated IM injection without image guidance while accurately controlling the injection angle and depth.
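The closed loop of coordinate systems amounts to composing calibrated homogeneous transforms so that the patch-frame normal vector and injection point can be expressed in the robot base frame. Below is a minimal NumPy sketch with placeholder transforms; the actual system chains six frames, and every value here is purely illustrative.

```python
import numpy as np

def htm(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Illustrative chain: the injection pose measured in the optical tracker frame is
# mapped into the robot base frame by composing calibrated transforms.
# T_a_b denotes "frame b expressed in frame a"; all values are placeholders.
T_base_tracker = htm(np.eye(3), np.array([0.5, 0.0, 0.3]))   # from hand-eye calibration
T_tracker_patch = htm(np.eye(3), np.array([0.1, 0.2, 0.6]))  # live tracker measurement

normal_patch = np.array([0.0, 0.0, 1.0, 0.0])   # surface normal in patch frame (direction)
point_patch = np.array([0.0, 0.0, 0.0, 1.0])    # injection point in patch frame

T_base_patch = T_base_tracker @ T_tracker_patch
needle_dir_base = (T_base_patch @ normal_patch)[:3]   # desired needle axis in base frame
target_base = (T_base_patch @ point_patch)[:3]        # desired needle tip position
```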
Citations: 0
Research on an Effective Human Action Recognition Model Based on 3D CNN
Yupeng Wang, Shuqing He, Xiaowei Wei, Samuel Akolade George
Most human action recognition systems based on the 3-Dimensional Convolutional Neural Network (3D CNN) architecture recognize human actions frame by frame in video streams and must be deployed on high-performance platforms such as cloud servers. By specifically optimizing how each video frame is processed during human action recognition, both the computing power requirements and the total processing time are reduced. The optimized pipeline is tested and verified on the Kinetics-700 dataset: recognition accuracy is similar to that before optimization, while the total recognition time is only 14.1% of the original. This effectively reduces the performance requirements of the deployment platform, improves the real-time performance of action recognition, and makes deep-learning-based human action recognition practical on low-compute platforms.
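The abstract does not detail the per-frame optimization, so the sketch below shows one common strategy in this spirit: keep a rolling buffer of already-preprocessed frames so each incoming frame is processed once, instead of rebuilding the whole input clip for every prediction. The 16-frame clip length and (C, T, H, W) layout are assumptions, not the paper's configuration.

```python
from collections import deque
import numpy as np

class RollingClipBuffer:
    """Keep the last `clip_len` preprocessed frames so each incoming frame
    is preprocessed exactly once, instead of rebuilding the whole clip."""
    def __init__(self, clip_len=16):
        self.frames = deque(maxlen=clip_len)

    def push(self, frame_rgb):
        # Placeholder preprocessing: resizing/cropping would also go here.
        self.frames.append(frame_rgb.astype(np.float32) / 255.0)

    def clip(self):
        if len(self.frames) < self.frames.maxlen:
            return None  # not enough frames buffered yet
        # (C, T, H, W) layout expected by most 3D CNNs.
        return np.stack(self.frames, axis=0).transpose(3, 0, 1, 2)
```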
Citations: 0
A Deep Learning Based Method For COVID-19 Classification Using Chest CT Images
Guang Li, Chengwei Sun, Zeyu Sun
At the beginning of 2020, coronavirus disease 2019 (COVID-19) spread from Wuhan, China across the world; by April it had affected millions of people. Computed tomography (CT) imaging is one of the confirmed assessment methods for COVID-19 patients. However, identifying COVID-19 in CT images is extremely challenging: it is time-consuming, and experienced radiologists are scarce. Deep learning based approaches have therefore been proposed to triage COVID-19 images from normal or other pneumonia images. Here, we propose a novel global average pooling (GAP) method for the deep neural network to improve COVID-19 classification performance. The method uses the lung mask region as a weighting factor for GAP, which reduces the influence of the background region and highlights the classification features of the tissue region of interest. Our method achieved COVID-19 triage with 96.4% sensitivity and 93.3% specificity on an independent validation dataset of 2062 CT scans.
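The weighted pooling is simple enough to show directly: a global average pool with the lung mask as the weighting factor, so background regions do not dilute the pooled features. A minimal PyTorch sketch; the tensor shapes and the assumption that the mask is resampled to the feature resolution are ours, not from the paper.

```python
import torch

def masked_gap(features, lung_mask):
    """Global average pooling weighted by a lung mask.

    features  : (B, C, H, W) feature maps from the backbone
    lung_mask : (B, 1, H, W) mask in [0, 1], resampled to the feature resolution
    """
    weighted = features * lung_mask
    # Normalize by the mask area so background pixels don't dilute the average.
    return weighted.sum(dim=(2, 3)) / lung_mask.sum(dim=(2, 3)).clamp(min=1e-6)
```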
Citations: 0
A P300 BCI calibration-free algorithm based on intersubject transfer and reinforcement learning
Xuewei Chen, Zhihua Huang
The P300 brain-computer interface (BCI) is an important field of brain science, but the calibration P300 requires limits its application. To solve this problem, we propose an algorithm that combines transfer learning and reinforcement learning. For the reinforcement learning component we start from the P300 linear upper confidence bound (PLUCB) algorithm; given its particularities, we modify it and integrate the idea of online transfer learning. The new algorithm is applied to calibration-free P300 BCI classification, using the classifier matrices of source-domain subjects without collecting additional session data from target subjects for calibration. We test classifier performance at different stages of the algorithm: for each subject, the agent continually updates on the first part of the data, and the second part is used for testing. The results show that our algorithm, P300 Homogeneous Online Transfer Learning (PHomOTL), outperforms PLUCB, transfer PLUCB (TPLUCB), and Stepwise Linear Discriminant Analysis (SWLDA). With 10000 trials used for training and the remaining 5120 for testing, PHomOTL achieves an average P300 classification accuracy of 73.15% and an average character classification accuracy of 79.46%.
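PLUCB belongs to the linear upper confidence bound family of contextual bandits. For orientation, here is a minimal disjoint LinUCB arm: a generic sketch of that family, not the authors' PLUCB or PHomOTL update rules, with the exploration strength alpha as an illustrative parameter.

```python
import numpy as np

class LinUCBArm:
    """Disjoint LinUCB arm: ridge-regression weights plus an exploration bonus.
    A minimal illustration of the linear-UCB family that PLUCB extends."""
    def __init__(self, dim, alpha=1.0):
        self.A = np.eye(dim)        # d x d design matrix (ridge prior)
        self.b = np.zeros(dim)      # accumulated reward-weighted features
        self.alpha = alpha          # exploration strength

    def ucb(self, x):
        """Upper confidence bound of the expected reward for feature vector x."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        return theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)

    def update(self, x, reward):
        """Rank-one update after observing the reward for x."""
        self.A += np.outer(x, x)
        self.b += reward * x
```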
Citations: 0
Real-time analysis of Intra-pulse characteristics based on instantaneous frequency
Tianhao Wang, Haiqing Jiang
The analysis of intra-pulse characteristics of radar signals is an important part of radar reconnaissance. Real-time analysis of intra-pulse features based on instantaneous frequency can efficiently recognize signals of various modulation types and extract their parameters. The method achieves a high recognition rate at a given signal-to-noise ratio, and the algorithm is simple, so it can be implemented at high speed on a radar reconnaissance digital receiver.
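A minimal sketch of instantaneous-frequency-based intra-pulse analysis: estimate the IF track from the unwrapped phase of the analytic signal, then read the modulation type off simple features of that track. Only a chirp-rate slope test is shown here, with an illustrative threshold; the paper's actual feature set is not specified in the abstract.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(pulse, fs):
    """Instantaneous frequency (Hz) from the unwrapped phase of the analytic signal."""
    phase = np.unwrap(np.angle(hilbert(np.real(pulse))))
    return np.diff(phase) * fs / (2 * np.pi)

def classify_pulse(pulse, fs, slope_thresh=1e9):
    """Crude modulation test: the slope of the IF track separates CW from LFM.
    slope_thresh (Hz/s) is an illustrative chirp-rate threshold."""
    f = instantaneous_frequency(pulse, fs)
    t = np.arange(f.size) / fs
    slope = np.polyfit(t, f, 1)[0]          # Hz per second (chirp-rate estimate)
    return ("LFM" if abs(slope) > slope_thresh else "CW/other"), slope
```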
Citations: 0
Face Recognition with Robust Matrix Factorization
Qing Li
In face recognition, we may encounter face images corrupted by shadows and illumination, which degrade recognition. In this scenario, a low-rank matrix and a sparse matrix can be obtained by low-rank decomposition of the collected original face image, where the low-rank matrix corresponds to the face image free of shadow and illumination artifacts. To obtain the low-rank matrix, this paper uses the subgradient method and the AIRLS method, and compares their performance experimentally on the Yale face database.
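The decomposition model is D = L + S, with L low-rank (the clean face) and S sparse (shadows and illumination artifacts). The sketch below solves it with generic alternating singular-value and soft-thresholding steps, purely to illustrate the model; the paper's actual solvers are the subgradient method and AIRLS, which are not reproduced here.

```python
import numpy as np

def shrink(M, tau):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca(D, lam=None, mu=None, iters=200):
    """Generic low-rank + sparse split D = L + S via an inexact augmented
    Lagrangian scheme: singular-value thresholding for L, elementwise
    soft-thresholding for S. Illustrative, not the paper's solvers."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or m * n / (4.0 * np.abs(D).sum())
    S = np.zeros_like(D)
    Y = np.zeros_like(D)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * shrink(s, 1.0 / mu)) @ Vt      # singular-value thresholding
        S = shrink(D - L + Y / mu, lam / mu)    # sparse shadows/illumination
        Y += mu * (D - L - S)                   # dual variable update
    return L, S
```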
Citations: 0
SMOTE-LASSO-DeepNet Framework for Cancer Subtyping from Gene Expression Data
Yashpal Singh, Seba Susan
Cancer subtyping from gene expression data is a trending research topic in bioinformatics. Classifying gene expression data is challenging because of the small number of samples and large number of features involved, and the problem is further complicated by the strong class imbalance prevalent in gene expression datasets. The challenge is to find an end-to-end machine learning solution that classifies cancer subtypes from small-sample, high-dimensional, imbalanced gene expression datasets. In this study, we propose a SMOTE-LASSO-DeepNet framework for identifying cancer subtypes from gene expression data. The framework balances the training set using SMOTE and then finds the most informative genes using LASSO. The balanced and pruned training set is fed to a deep neural network (DeepNet) with four hidden layers of 512, 256, 128 and 64 neurons, respectively. We tested the framework on four cancer gene expression datasets: leukemia, lung cancer, brain cancer and breast cancer. The results show that the proposed SMOTE-LASSO-DeepNet framework consistently performs best compared with existing methods.
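The pipeline is concrete enough to sketch end to end with scikit-learn and imbalanced-learn: SMOTE to balance, an L1-penalized selector standing in for the LASSO gene-selection step, and an MLP with the stated 512/256/128/64 hidden layers. The hyperparameters (C, max_iter, scaling) are assumptions, and the paper may implement the LASSO and network differently.

```python
from imblearn.over_sampling import SMOTE
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def build_and_fit(X_train, y_train):
    """X_train: (n_samples, n_genes) expression matrix; y_train: subtype labels."""
    # Step 1: balance the training set with SMOTE.
    X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)
    # Step 2: L1-penalized selector keeps the most informative genes
    # (LASSO-style sparsity; an assumed stand-in for the paper's LASSO step).
    selector = SelectFromModel(
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=5000)
    )
    # Step 3: deep network with the four stated hidden layers.
    clf = make_pipeline(
        StandardScaler(),
        selector,
        MLPClassifier(hidden_layer_sizes=(512, 256, 128, 64),
                      max_iter=500, random_state=0),
    )
    clf.fit(X_res, y_res)
    return clf
```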
Citations: 1
Hurst Exponent Analysis Of Schizophrenia Electroencephalogram Based On Multi-point Fractional Brownian Bridge
Congzhou Zhong, Wenpo Yao, Wanyi Yi, Jui-Pin Wang, Dengxuan Bai, Qiong Wang
In this paper, a Hurst exponent calculation method based on the multi-point fractional Brownian bridge is used to analyze the electroencephalogram (EEG) of schizophrenia patients and healthy subjects under the same auditory-paradigm experiment. Applying this method to short EEG segments around the time point 100 ms after stimulation, we find that it can effectively estimate the Hurst exponent of short time series; in the frontal lobe and central area there are significant differences between the groups, with the Hurst exponent lower in healthy subjects than in patients. The results show that, in this experiment, the long-term correlation of post-stimulation EEG signals is higher in schizophrenia patients and the complexity of their EEG signals is lower, which can better aid the clinical diagnosis of schizophrenia. We also compare the Hurst exponent method based on the multi-point fractional Brownian bridge with the traditional rescaled range analysis method; the Hurst exponent computed this way can distinguish the healthy group from the patient group at a smaller scale.
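For reference, the traditional baseline the paper compares against, rescaled range (R/S) analysis, can be sketched in a few lines; the dyadic window sizes and log-log fit are standard choices, not taken from the paper, and the multi-point fractional Brownian bridge estimator itself is not reproduced here.

```python
import numpy as np

def hurst_rs(x, min_window=8):
    """Classical rescaled-range (R/S) estimate of the Hurst exponent,
    the baseline the bridge-based method is compared against."""
    x = np.asarray(x, dtype=float)
    n = x.size
    windows = [w for w in 2 ** np.arange(3, int(np.log2(n)) + 1) if w >= min_window]
    log_w, log_rs = [], []
    for w in windows:
        rs_vals = []
        for start in range(0, n - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())       # cumulative deviation profile
            r = dev.max() - dev.min()               # range of the profile
            s = seg.std()
            if s > 0:
                rs_vals.append(r / s)               # rescaled range for this segment
        if rs_vals:
            log_w.append(np.log(w))
            log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_w, log_rs, 1)[0]          # log-log slope ~ Hurst exponent
```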
Citations: 0