
2022 15th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI): Latest Publications

Dual-Enhanced-CNN for Few-Shot Object Detection in Remote Sensing Images
Wanyue Jiang, C. Wang, Liyue Li, Sheng Wang
Existing convolutional neural networks (CNNs) still perform well at object detection in remote sensing images, even though such images are more complex than natural images. However, these approaches depend heavily on the quality and quantity of data: when the number of annotated samples shrinks, the performance of existing CNNs degrades sharply. Few-shot object detection (FSOD) can alleviate this problem but still leaves considerable room for improvement. In this work, we propose a Dual-Enhanced-CNN model to further improve detection performance. The main contributions are as follows: 1) we design a weighted cross-image attention that learns interaction information across both images and channels, improving detection on the query image; 2) we design a new adaptive weight loss that focuses more on targets from novel classes and on targets with poor detection performance. We conducted extensive experiments on the large-scale remote sensing dataset DIOR; the higher detection accuracy and relatively stable performance demonstrate the superiority of our method.
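For illustration, the sketch below shows a simplified cross-image attention between a query feature map and a support feature map, the general pattern the abstract describes; the module name, projection layers, and tensor shapes are assumptions, not the authors' Dual-Enhanced-CNN implementation.
```python
# Illustrative sketch only: a simplified cross-image attention between a query
# feature map and a support (class) feature map. Names and shapes are
# assumptions, not the paper's actual Dual-Enhanced-CNN design.
import torch
import torch.nn as nn


class CrossImageAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.to_q = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_k = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_v = nn.Conv2d(channels, channels, kernel_size=1)
        self.scale = channels ** -0.5

    def forward(self, query_feat: torch.Tensor, support_feat: torch.Tensor) -> torch.Tensor:
        # query_feat, support_feat: (B, C, H, W)
        b, c, h, w = query_feat.shape
        q = self.to_q(query_feat).flatten(2).transpose(1, 2)    # (B, HW, C)
        k = self.to_k(support_feat).flatten(2)                  # (B, C, HW)
        v = self.to_v(support_feat).flatten(2).transpose(1, 2)  # (B, HW, C)
        attn = torch.softmax(q @ k * self.scale, dim=-1)        # (B, HW, HW)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return query_feat + out  # residual enhancement of the query features


if __name__ == "__main__":
    m = CrossImageAttention(channels=64)
    q = torch.randn(2, 64, 32, 32)
    s = torch.randn(2, 64, 32, 32)
    print(m(q, s).shape)  # torch.Size([2, 64, 32, 32])
```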
{"title":"Dual-Enhanced-CNN for Few-Shot Object Detection in Remote Sensing Images","authors":"Wanyue Jiang, C. Wang, Liyue Li, Sheng Wang","doi":"10.1109/CISP-BMEI56279.2022.9979831","DOIUrl":"https://doi.org/10.1109/CISP-BMEI56279.2022.9979831","url":null,"abstract":"Existing convolutional neural networks (CNNs) still perform excellently in object detection over remote sensing images, although remote sensing images are more complicated than natural images. But these approaches are very dependent on the quality and quantity of data. when the number of annotated samples gets smaller, the performance of existing CNNs sharply gets worse. Few-shot object detection (FSOD) can alleviate this problem but still has a lot of improvement space. In this work, to further improve the detection performance, We propose our Dual-Enhanced-CNN model. And the main improvements are as follows: 1) We design a weighted cross image attention to learn the interaction information across both images and channels and then improve the detection capabilities of the query image. 2) We design a new adaptive weight loss to focus more on the targets from novel classes and the targets with poor detection performance. We have conducted multiple experiments on the large-scale remote sensing dataset named DIOR. And the higher detection accuracy and relatively stable experimental performance prove the superiority of our method.","PeriodicalId":198522,"journal":{"name":"2022 15th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130880763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Detection of Heart Failure Using a Convolutional Neural Network via ECG Signals
Jad Botros, F. Mourad-Chehade, D. Laplanche
Heart failure (HF) is a chronic heart condition that increases mortality, morbidity, and healthcare costs. The electrocardiogram (ECG) is a noninvasive and straightforward diagnostic tool that can reveal detectable changes in HF. Because of their small amplitude and short duration, these changes can be subtle and potentially misclassified during manual interpretation or when analyzed by clinicians. This paper reports a 7-layer deep convolutional neural network (CNN) model for automatic HF detection. The proposed CNN model requires only minimal pre-processing of ECG signals and does not require any engineered features. The model is trained and tested on both an unbalanced and a balanced dataset extracted from the MIT-BIH and BIDMC databases, achieving an accuracy of 99.73%, a sensitivity of 99.58%, and a specificity of 99.83% on the unbalanced dataset, and an accuracy of 99.26%, a sensitivity of 99.37%, and a specificity of 99.12% on the balanced dataset.
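As a rough illustration of this kind of model, the following is a minimal 1D CNN for binary ECG-segment classification in PyTorch; the layer count, kernel sizes, and segment length are assumptions and do not reproduce the paper's seven-layer architecture.
```python
# Illustrative sketch only: a small 1D CNN for binary ECG-segment classification
# (HF vs. normal). The layer count and sizes are assumptions for illustration,
# not the paper's exact seven-layer architecture.
import torch
import torch.nn as nn


class EcgCnn(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, samples), e.g. a 2-second ECG segment sampled at 250 Hz
        return self.classifier(self.features(x).squeeze(-1))


if __name__ == "__main__":
    model = EcgCnn()
    segment = torch.randn(4, 1, 500)
    print(model(segment).shape)  # torch.Size([4, 2])
```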
{"title":"Detection of Heart Failure Using a Convolutional Neural Network via ECG Signals","authors":"Jad Botros, F. Mourad-Chehade, D. Laplanche","doi":"10.1109/CISP-BMEI56279.2022.9980118","DOIUrl":"https://doi.org/10.1109/CISP-BMEI56279.2022.9980118","url":null,"abstract":"Heart failure (HF) is a chronic heart condition that increases mortality, morbidity, and healthcare costs. The electrocardiogram (ECG) is a noninvasive and straightforward diagnostic tool that can reveal detectable changes in HF. Because of their small amplitude and duration, these changes can be subtle and potentially misclassified during manual interpretation or when analyzed by clinicians. This paper reports a 7 -layer deep convolutional neural network (CNN) model for HF automatic detection. The proposed CNN model requires only minimal pre-processing of ECG signals and does not require any engineered features. The model is trained and tested using an unbalanced and a balanced datasets extracted from the MIT-BIH and the BIDMC databases, achieving an accuracy of 99.73%, a sensitivity of 99.58%, and a specificity of 99.83% when the dataset is unbalanced and an accuracy of 99.26%, a sensitivity of 99.37%, and a specificity of 99.12% when the dataset is balanced.","PeriodicalId":198522,"journal":{"name":"2022 15th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131169899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Design of a High-Precision Simulation System for Multi-mode Spaceborne SAR Echo Generation
Tianyu Lu, Zhan Xu, Xiaoying Chen, Yanqing Huang, Cheng Wang
High-precision spaceborne SAR echo simulation is crucial for spaceborne SAR system design and for improving signal processing algorithms. In this paper, an equivalent-scatterer method is proposed to generate SAR echo data. To meet the requirements of high-precision modeling and simulation of spaceborne SAR systems, a novel system model and related simulation methods are proposed. Various types of errors existing in the system are analyzed, and methods for simulating these errors are presented. Finally, spaceborne SAR echoes in stripmap mode and spotlight mode are simulated, and the echo data are processed with the Chirp Scaling algorithm, verifying the high-precision spaceborne SAR echo generation simulation.
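To illustrate the basic building block of such a simulator, the sketch below generates the delayed echo of a single point target for one linear-FM (chirp) pulse; all parameter values (sampling rate, bandwidth, wavelength, slant range) are assumptions, and the paper's equivalent-scatterer, multi-mode system is far more elaborate.
```python
# Illustrative sketch only: the delayed, phase-rotated echo of a single point
# target for one linear-FM (chirp) pulse. All parameter values are assumptions.
import numpy as np

c = 3.0e8              # speed of light, m/s
fs = 120e6             # fast-time sampling rate, Hz
pulse_width = 10e-6    # chirp duration, s
bandwidth = 100e6      # chirp bandwidth, Hz
wavelength = 0.03      # assumed radar wavelength, m
slant_range = 850e3    # assumed slant range to the point target, m

kr = bandwidth / pulse_width        # chirp rate, Hz/s
delay = 2.0 * slant_range / c       # two-way propagation delay, s

# Receive window opened just before the echo arrives.
t = delay - 2e-6 + np.arange(int(fs * (pulse_width + 4e-6))) / fs
tau = t - delay                     # fast time relative to echo arrival

inside_pulse = (tau >= 0) & (tau <= pulse_width)
echo = inside_pulse * np.exp(1j * np.pi * kr * (tau - pulse_width / 2) ** 2) \
                    * np.exp(-1j * 4 * np.pi * slant_range / wavelength)

print(echo.shape, int(inside_pulse.sum()))
```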
{"title":"Design of a High-Precision Simulation System for Multi-mode Spaceborne SAR Echo Generation","authors":"Tianyu Lu, Zhan Xu, Xiaoying Chen, Yanqing Huang, Cheng Wang","doi":"10.1109/CISP-BMEI56279.2022.9979924","DOIUrl":"https://doi.org/10.1109/CISP-BMEI56279.2022.9979924","url":null,"abstract":"High precision spaceborne SAR radar echo simulation is crucial for spaceborne SAR system design and signal processing algorithm improvement. In this paper, an equivalent scatters SAR echo generation method is proposed to generate SAR echo data. According to the requirements of high-precision modeling and simulation of spaceborne SAR system, a novel system modeling is proposed as well as some related simulation methods. Various types of errors existed in the system are analyzed. Methods for error simulation are presented as well. Finally, spaceborne SAR echoes of stripmap mode and spotlight mode are simulated and the echo data are processed with Chirp-Scaling algorithm, which verifies the high precision spaceborne SAR echo generation simulation.","PeriodicalId":198522,"journal":{"name":"2022 15th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131258970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Gated Spatial-Channel Transformer Network for the Prediction of Molecular Properties
Jiahao Shen, Qiang Tong, Zhanqi Cui, Zanqiang Dong, Xiulei Liu
Although the spatial features of molecules have been widely used for molecular property prediction, interactive features have recently been attracting attention. In a voxel-based molecular representation, each voxel contains the distribution of atoms, while each channel corresponds to one atomic type; the interaction between multiple atoms is therefore contained in the channel information. In this work, we propose a gated spatial-channel transformer (GatedSCT) network for molecular property prediction. We design a channel transformer to capture interactive features from the channels, which reflect the relationships between multiple atoms, and a spatial transformer to extract the spatial features of molecules. A gated mechanism merges the two branches efficiently. Because the proposed network exploits channel information, experiments show that it predicts molecular properties more accurately than other networks.
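The gated merging step can be pictured with the minimal sketch below, which mixes a spatial branch and a channel branch through a learned sigmoid gate; the fusion rule and feature dimensions are assumptions rather than the GatedSCT design itself.
```python
# Illustrative sketch only: a learned gate that merges a "spatial" branch and a
# "channel" branch of features. The fusion rule and dimensions are assumptions.
import torch
import torch.nn as nn


class GatedFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, spatial_feat: torch.Tensor, channel_feat: torch.Tensor) -> torch.Tensor:
        # spatial_feat, channel_feat: (batch, dim)
        g = self.gate(torch.cat([spatial_feat, channel_feat], dim=-1))
        return g * spatial_feat + (1.0 - g) * channel_feat  # convex, element-wise mix


if __name__ == "__main__":
    fuse = GatedFusion(dim=128)
    out = fuse(torch.randn(8, 128), torch.randn(8, 128))
    print(out.shape)  # torch.Size([8, 128])
```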
{"title":"A Gated Spatial-Channel Transformer Network for the Prediction of Molecular Properties","authors":"Jiahao Shen, Qiang Tong, Zhanqi Cui, Zanqiang Dong, Xiulei Liu","doi":"10.1109/CISP-BMEI56279.2022.9980166","DOIUrl":"https://doi.org/10.1109/CISP-BMEI56279.2022.9980166","url":null,"abstract":"Although spatial features of molecules have been widely used for molecular property prediction, the importance of interactive features is coming to the surface these days. By using molecular voxel-based representation, each voxel contains the distribution of atoms, while each channel corresponds to one atomic type. Thus the interaction between multiple atoms is actually contained in the channel information. In this work, we propose a gated spatial-channel transformer (GatedSCT) network for molecular property prediction. We design a channel transformer to capture interactive features from channels, which indicates the relationship between multiple atoms. Also, a spatial transformer is used to extract spatial features of molecules. We apply a gated mechanism to merge these two parts efficiently. Since our proposed network takes advantage of channel information, the experiments show that it can predict molecular properties more accurately than other networks.","PeriodicalId":198522,"journal":{"name":"2022 15th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133033610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FAD-Net: Fake Images Detection and Generalization Based on Frequency Domain Transformation
Xiaoning Liu, Jinhong Liu, Peiyao Guo, Dongcheng Tuo, Shaotong Tian, Yi Jiang
With the continuous development of neural network technology, methods for generating fake images keep improving. More and more faked photos and face-swapping videos appear on major social media platforms, raising concerns about reputation security, information security, and the guidance of public opinion. Current spatial-domain detection models achieve excellent results, but most require large training sets; frequency-domain detection models, meanwhile, rely mainly on complex feature extraction operations, and most models of either kind detect only a single fake image generation method. Motivated by these two points, this paper designs FAD-Net (Frequency-domain Attention Detection Network), which builds on the artifacts shared by fake image generation methods and an attention mechanism, and is applicable to most generation methods. Frequency-domain images are used as the network input to train the detector. Good detection results are obtained on 11 fake image generation methods, including Deepfakes and GAN-based methods. Compared with the best spatial detection model, FAD-Net achieves better detection generalization with a smaller training set and shorter training time, demonstrating the advantage of frequency information for generalized fake image detection.
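A common way to obtain such a frequency-domain input is sketched below using a log-magnitude 2D FFT; the exact transform FAD-Net uses is not specified here, so this is only a generic preprocessing example.
```python
# Illustrative sketch only: converting an image to a log-magnitude frequency
# spectrum as a possible detector input. The exact transform used by FAD-Net is
# not reproduced here.
import numpy as np


def to_log_spectrum(gray_image: np.ndarray) -> np.ndarray:
    """gray_image: 2D float array in [0, 1]; returns a normalized log spectrum."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray_image))   # center the DC component
    log_mag = np.log1p(np.abs(spectrum))                  # compress the dynamic range
    return (log_mag - log_mag.min()) / (log_mag.max() - log_mag.min() + 1e-8)


if __name__ == "__main__":
    img = np.random.rand(256, 256)          # stand-in for a face crop
    feat = to_log_spectrum(img)
    print(feat.shape, feat.min(), feat.max())
```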
{"title":"FAD-Net: Fake Images Detection and Generalization Based on Frequency Domain Transformation","authors":"Xiaoning Liu, Jinhong Liu, Peiyao Guo, Dongcheng Tuo, Shaotong Tian, Yi Jiang","doi":"10.1109/CISP-BMEI56279.2022.9980271","DOIUrl":"https://doi.org/10.1109/CISP-BMEI56279.2022.9980271","url":null,"abstract":"With the continuous development of neural network technology, the generation methods of fake images are gradually improved. More and more faked photos and face changing videos appear on major social media platforms, causing people to pay attention to their reputation security, information security, and public opinion guidance. At present, the spatial detection model has excellent results. But most of them need a large number of training sets as support. Simultaneously, the frequency detection model primarily uses complex feature extraction operations, and most of the two detection models only detect a single fake image generation method. Given the above two points, this paper makes the following work: Based on the common issues and attention mechanism of fake image generation methods on the network, FAD-Net (Frequency-domain Attention Detection Network) is designed, which is suitable for most fake image generation methods. We use the frequency domain image as the network input to train the detector. Good detection results are obtained on 11 fake image generation methods such as Deepfakes and Gan series. Compared with the best spatial detection model, FAD-Net uses a smaller training set and shorter training time to get better detection generalization, which shows the superiority of frequency information in fake image detection generalization.","PeriodicalId":198522,"journal":{"name":"2022 15th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"148 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132163125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Power Separation Fast Response Brown-Out Detection Structure
Shiming Liang, Shengxi Diao
This work presents a brown-out detection (BOD) circuit architecture that offers more selectable reference levels than previously reported designs and operates over a wide temperature range. Compared with reported BOD circuits, the proposed circuit separates the power supply of the brown-out detection and functional circuitry from the external power supply, which gives higher robustness. The architecture is implemented in a 12 nm CMOS process and occupies an area of 0.01 mm². Post-layout simulation shows that the circuit achieves a detection delay within 1 µs over the temperature range of −40 °C to 125 °C.
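As a loose software analogy to the circuit's behavior, the sketch below models brown-out detection with a selectable reference level and simple hysteresis; the voltage values are assumptions, and the model says nothing about the actual 12 nm implementation.
```python
# Illustrative sketch only: a behavioral model of brown-out detection with a
# selectable reference level and simple hysteresis. This is a software analogy,
# not the paper's CMOS circuit.
from dataclasses import dataclass


@dataclass
class BrownOutDetector:
    reference: float = 0.72      # selectable trip level, volts (assumed value)
    hysteresis: float = 0.02     # re-arm margin above the trip level, volts
    tripped: bool = False

    def sample(self, vdd: float) -> bool:
        """Return True while a brown-out condition is flagged."""
        if not self.tripped and vdd < self.reference:
            self.tripped = True                      # supply dropped below the level
        elif self.tripped and vdd > self.reference + self.hysteresis:
            self.tripped = False                     # supply recovered with margin
        return self.tripped


if __name__ == "__main__":
    bod = BrownOutDetector(reference=0.72)
    for v in (0.80, 0.75, 0.71, 0.70, 0.73, 0.75):
        print(f"vdd={v:.2f} V -> brown-out={bod.sample(v)}")
```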
{"title":"A Power Separation Fast Response Brown-Out Detection Structure","authors":"Shiming Liang, Shengxi Diao","doi":"10.1109/CISP-BMEI56279.2022.9980313","DOIUrl":"https://doi.org/10.1109/CISP-BMEI56279.2022.9980313","url":null,"abstract":"This work presents a brown-out detection circuit architecture with the most reported selectable reference levels, which can work in a wide temperature range. Compared with reported BOD circuits, the proposed BOD circuit separates the power supply of brown-out detection and functional circuit from the external power supply, which gives higher robustness. The proposed architecture is implemented in a 12 nm CMOS process, occupying a 0.01 mm- Area. The post layout simulation shows that the circuit can realize a detection delay within 1us in the temperature range of −40-125°C","PeriodicalId":198522,"journal":{"name":"2022 15th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124372017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Denoising Low-Dose CT Images Using a Multi-Layer Convolutional Analysis-Based Sparse Encoder Network
Yanqin Kang, Jin Liu, Tao Liu, Jun Qiang
Images from low-dose computed tomography (LDCT) tend to be noisy and artifact-laden, yet they remain diagnostically useful. One approach to improving the quality of LDCT images is to use deep learning (DL) techniques. DL-based methods deliver state-of-the-art performance in low-level medical image restoration tasks but remain difficult to interpret because of their black-box construction. In this paper, we present a simple yet effective LDCT image denoising model that combines the advantages of a residual strategy and a multilayer convolutional analysis-based sparse encoder (CASE). Inspired by convolutional sparse coding (CSC), we construct a multilayer CASE to capture and represent hierarchical image features and design CASE-net to improve the suppression of LDCT noise and artifacts. Moreover, a hybrid loss function combining mean absolute error (MAE) loss, edge loss, and perceptual loss is used to achieve better denoising. Experiments on the MAYO and UIH datasets demonstrate the performance of our framework; the results show that the proposed approach suppresses noise and artifacts while preserving tissue structure in LDCT imaging.
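The hybrid loss can be sketched as below, combining MAE with a Sobel-gradient edge term in PyTorch; the weighting is an assumption, and the perceptual term (which requires a pretrained feature extractor) is omitted.
```python
# Illustrative sketch only: a hybrid restoration loss combining MAE with a
# Sobel-gradient edge term. Weights are assumptions; the perceptual term is omitted.
import torch
import torch.nn.functional as F

SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
SOBEL_Y = SOBEL_X.transpose(2, 3)


def edge_map(x: torch.Tensor) -> torch.Tensor:
    # x: (B, 1, H, W) single-channel CT slices
    gx = F.conv2d(x, SOBEL_X.to(x.device), padding=1)
    gy = F.conv2d(x, SOBEL_Y.to(x.device), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)


def hybrid_loss(pred: torch.Tensor, target: torch.Tensor,
                edge_weight: float = 0.1) -> torch.Tensor:
    mae = F.l1_loss(pred, target)
    edge = F.l1_loss(edge_map(pred), edge_map(target))
    return mae + edge_weight * edge


if __name__ == "__main__":
    pred, target = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
    print(hybrid_loss(pred, target).item())
```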
{"title":"Denoising Low-Dose CT Images Using a Multi-Layer Convolutional Analysis-Based Sparse Encoder Network","authors":"Yanqin Kang, Jin Liu, Tao Liu, Jun Qiang","doi":"10.1109/CISP-BMEI56279.2022.9980070","DOIUrl":"https://doi.org/10.1109/CISP-BMEI56279.2022.9980070","url":null,"abstract":"Imaging in the field of low-dose computed tomography (LDCT) tend to be rather noisy and artificial but is diagnostically useful. One approach to improve the quality of LDCT images is to use deep learning (DL) techniques. DL-based methods produce state-of-the-art performance in low-level medical image restoration tasks but remain defect to interpret due to their black-box constructions. In this paper, we present a simple yet effective LDCT image denoising model by combining the advantages of a residual strategy and a multilayer convolutional analysis-based sparse encoder (CASE). Inspired by convolutional sparse coding (CSC), we constructed a multilayer CASE to sufficiently capture and represent hierarchical image features and designed CASE-net to achieve improved LDCT noise artifact suppression. Moreover, a hybrid loss function, e.g. mean absolute error (MAE) loss, edge loss and perceptual loss, was used to achieve better denoising effects. Experiments on the MAYO and UIH datasets demonstrated the performance of our framework. The results prove that the proposed approach can restrain noise and artifacts and maintain tissue structure during the LDCT imaging.","PeriodicalId":198522,"journal":{"name":"2022 15th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124591489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Analysis and Simulation of Interference Effects on CSK Modulation Systems
Lele Guan, Zhan Xu, Lu Tian, Chenrui Shi
As the space environment for information transmission becomes increasingly complex, communication accuracy faces new challenges. To study the anti-jamming performance of code shift keying (CSK), this paper simulates the performance of CSK and binary phase shift keying (BPSK) under five types of signal interference and compares them using the bit error rates (BER) obtained from simulation. Across the different forms of interference, CSK shows better anti-jamming performance than BPSK. CSK is most resistant to multi-tone interference, and its resistance decreases successively for pulse interference, 50% narrowband interference, single-tone interference, and channel noise.
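The simulation pattern behind such BER comparisons can be illustrated with a Monte Carlo estimate for BPSK over an AWGN channel alone; the paper additionally covers CSK and five interference types, which this sketch does not reproduce.
```python
# Illustrative sketch only: a Monte Carlo BER estimate for BPSK over AWGN,
# showing the simulation pattern such comparisons are built on.
import numpy as np

rng = np.random.default_rng(0)
num_bits = 200_000

for ebn0_db in (0, 2, 4, 6, 8):
    bits = rng.integers(0, 2, num_bits)
    symbols = 1.0 - 2.0 * bits                       # BPSK mapping: 0 -> +1, 1 -> -1
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    noise_std = np.sqrt(1.0 / (2.0 * ebn0))          # unit symbol energy assumed
    received = symbols + noise_std * rng.standard_normal(num_bits)
    decided = (received < 0).astype(int)             # threshold detector
    ber = np.mean(decided != bits)
    print(f"Eb/N0 = {ebn0_db} dB -> BER ~ {ber:.5f}")
```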
{"title":"Analysis and Simulation of Interference Effects on CSK Modulation Systems","authors":"Lele Guan, Zhan Xu, Lu Tian, Chenrui Shi","doi":"10.1109/CISP-BMEI56279.2022.9979910","DOIUrl":"https://doi.org/10.1109/CISP-BMEI56279.2022.9979910","url":null,"abstract":"As the space environment of information transmission becomes more and more complex, the accuracy of communication becomes a new challenge. In order to study the anti-jamming performance of code shift keying (CSK), this paper mainly simulates the performance of CSK and binary phase shift keying (BPSK) under 5 signal interferences, and discusses it according to the bit error rates (BER) obtained by simulation. In the face of different forms of signal interference, CSK has better anti-jamming performance than BPSK. Under various forms of signal interference, CSK has the strongest anti-interference ability to multi-tone interference, and its anti-interference ability to pulse interference, 50% narrowband interference, single-tone interference and channel noise decreases successively.","PeriodicalId":198522,"journal":{"name":"2022 15th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114344925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-Modal Magnetic Resonance Images Segmentation Based on An Improved 3DUNet
Yitong Luo, Chenxi Li, Yiman Sun, Hong Fan
The 3D U-Net model is trained end to end and requires no pre-training, but the limited receptive field of its convolutional kernels makes it difficult to establish explicit long-range dependencies, which limits segmentation accuracy on magnetic resonance (MR) images. This paper presents an improved 3D U-Net architecture that incorporates a Transformer into 3D U-Net (Trans3DUNet) to segment multi-modal MR images, called MMTrans3DUNet. First, tokenized image blocks from a convolutional neural network (CNN) feature map are encoded by the Transformer as the input sequence to extract global context. The decoder then upsamples the encoded features and fuses them with high-resolution CNN feature maps to achieve precise localization. Moreover, exploiting the multiple imaging modes of MR, the four modality images (T1, T1ce, T2, FLAIR) are fused and fed into the Trans3DUNet model for training, overcoming the problem that a single-modality MR image cannot sufficiently delineate the lesion in the relevant area. Experimental results on the BraTS2018 and BraTS2019 datasets show that the MMTrans3DUNet model further improves segmentation efficiency and precision because the image information of the multiple modalities is complementary.
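The early-fusion step for the four modalities can be sketched as below, where the volumes are stacked as input channels of a 3D convolutional stem; the stem itself is an assumption and not the full MMTrans3DUNet.
```python
# Illustrative sketch only: fusing the four BraTS modalities (T1, T1ce, T2,
# FLAIR) by stacking them as input channels of a 3D convolutional stem. This is
# the generic early-fusion pattern, not the full MMTrans3DUNet model.
import torch
import torch.nn as nn

# Four single-modality volumes of shape (batch, 1, D, H, W).
t1, t1ce, t2, flair = (torch.randn(1, 1, 32, 128, 128) for _ in range(4))

# Early fusion: concatenate along the channel axis -> (batch, 4, D, H, W).
multi_modal = torch.cat([t1, t1ce, t2, flair], dim=1)

stem = nn.Sequential(
    nn.Conv3d(4, 16, kernel_size=3, padding=1),
    nn.InstanceNorm3d(16),
    nn.ReLU(inplace=True),
)
features = stem(multi_modal)
print(features.shape)  # torch.Size([1, 16, 32, 128, 128])
```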
{"title":"Multi-Modal Magnetic Resonance Images Segmentation Based on An Improved 3DUNet","authors":"Yitong Luo, Chenxi Li, Yiman Sun, Hong Fan","doi":"10.1109/CISP-BMEI56279.2022.9980185","DOIUrl":"https://doi.org/10.1109/CISP-BMEI56279.2022.9980185","url":null,"abstract":"The 3D U-Net model employs end-to-end training ways and does not demand pre-training process, but the limited acceptance range of convolutional kernel makes it difficult to establish an explicit long-range dependency, resulting in poor segmentation accuracy in magnetic resonance (MR) image. This paper presents an promoted 3D U-Net architecture that incorporates the Transformer in 3D U-Net (Trans3DUNet) to segment multi-modal MR images, called MMTrans3DUNet. Firstly, the tokenized image blocks from a convolutional neural network (CNN) feature mapping are encoded by Transformer as the input sequence to extract the global context. Then, the decoder up-sampling the encoded features and coalesce them in CNN feature mapping with high resolution to achieve exact positioning. Moreover, according to the characteristics of MR images with multiple imaging modes, the four modalities images (t l, t lce, t2, flair) are fused and put into the Trans3DUNet model for training, which can overcome the problem that the single-modal MR image cannot sufficiently subdivide the lesion in the relevant area. The experimental results on the BraTS2018 and BraTS2019 dataset show that MMTrans3DUNet model can further promote the efficiency and precision of segmentation due to the image information of multiple modes which can complement each other.","PeriodicalId":198522,"journal":{"name":"2022 15th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114819143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MA2-FPN for Tiny Object Detection from Remote Sensing Images
Saiwei Li, Qiang Tong, Xuhong Liu, Zhanqi Cui, Xiulei Liu
Tiny object detection has been a challenging topic in computer vision in recent years. Moreover, in the remote sensing field, objects are smaller and more densely clustered than in ground-based images, which makes detection even harder. As a result, general-purpose detectors fail to achieve good performance on tiny objects in remote sensing images. In this paper, we propose a Mask Augmented Attention Feature Pyramid Network (MA2-FPN) to detect tiny objects in remote sensing images, consisting of two modules: an Attention Enhancement Module (AEM) and a Mask Supervision Module (MSM). Specifically, AEM aggregates tiny-target context and spatial feature information through a large-kernel separable convolutional attention mechanism, and MSM supervises AEM through a segmentation attention loss so that attention information is aggregated more accurately while the influence of irrelevant background is suppressed. Experiments on the AI-TOD benchmark show that our MA2-FPN achieves state-of-the-art (SOTA) performance.
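A large-kernel separable convolutional attention block of the kind AEM builds on might look like the sketch below; the kernel size and gating structure are assumptions for illustration, not the published module.
```python
# Illustrative sketch only: a large-kernel, depthwise-separable convolutional
# attention block that reweights a feature map. Kernel size and structure are
# assumptions, not the AEM module as published.
import torch
import torch.nn as nn


class LargeKernelAttention(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 13):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size,
                                   padding=kernel_size // 2, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = torch.sigmoid(self.pointwise(self.depthwise(x)))  # (B, C, H, W) in [0, 1]
        return x * attn  # reweight features, emphasizing tiny-object context


if __name__ == "__main__":
    block = LargeKernelAttention(channels=64)
    fpn_level = torch.randn(2, 64, 100, 100)
    print(block(fpn_level).shape)  # torch.Size([2, 64, 100, 100])
```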
{"title":"MA2-FPN for Tiny Object Detection from Remote Sensing Images","authors":"Saiwei Li, Qiang Tong, Xuhong Liu, Zhanqi Cui, Xiulei Liu","doi":"10.1109/CISP-BMEI56279.2022.9980328","DOIUrl":"https://doi.org/10.1109/CISP-BMEI56279.2022.9980328","url":null,"abstract":"Tiny object detection has been a challenging topic in computer vision recent years. Moreover, in remote sensing field, smaller and clustered tiny objects make its detection more difficult compared to ground-based images. This makes general detectors fail to achieve good performance when facing tiny objects in remote sensing images. In this paper, we propose a Mask Augmented Attention Feature Pyramid Network(MA2-FPN) to detect tiny objects in remote sensing images, which consists of two modules, Attention Enhancement Module(AEM) and Mask Supervision Module(MSM). Specifically, AEM aggregates tiny target context and spatial feature information by large kernel separable convolutional attention mechanism, and MSM supervises AEM through a segmentation attention loss to aggregate attention information more accurately while suppressing the influence of irrelevant background. Experiments based on the AI-TOD benchmark show that our MA2-FPN achieves state-of-the-art(SOTA) level.","PeriodicalId":198522,"journal":{"name":"2022 15th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"146 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116436173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0