
Latest publications — 2022 30th Signal Processing and Communications Applications Conference (SIU)

Unsupervised Similarity Based Convolutions for Handwritten Digit Classification
Pub Date : 2022-05-15 DOI: 10.1109/SIU55565.2022.9864689
Tuğba Erkoç, M. T. Eskil
Effective training of its filters is key to the success of a Convolutional Neural Network (CNN). To achieve good classification results, the filters must be carefully initialized, trained and fine-tuned. We propose an unsupervised method that discovers filters from the given dataset in a single epoch, without specifying the number-of-filters hyper-parameter for the convolutional layers. The proposed method gradually builds the convolutional layers through a discovery routine that extracts as many features as are needed to adequately represent the complexity of the input domain. Because the discovered filters directly represent patterns in the domain, they require neither an initialization scheme nor backpropagation-based fine-tuning. Our method achieves 99.03% accuracy on the MNIST dataset without any data augmentation.
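The abstract does not detail the discovery routine; the following is a minimal sketch of one plausible similarity-based rule, assuming a patch is promoted to a new filter only when it is sufficiently dissimilar from every filter found so far (the cosine-similarity measure and threshold are illustrative assumptions, not the authors' exact procedure):

```python
import numpy as np

def discover_filters(patches, threshold=0.8):
    """Single-pass filter discovery: a patch becomes a new filter when its
    absolute cosine similarity to every filter found so far stays below
    `threshold`. No filter count is fixed in advance."""
    filters = []
    for p in patches:
        v = p.ravel().astype(float)
        n = np.linalg.norm(v)
        if n == 0:
            continue
        v /= n  # unit-normalize so the dot product is a cosine similarity
        if all(abs(np.dot(v, f)) < threshold for f in filters):
            filters.append(v)
    return np.array(filters)

rng = np.random.default_rng(0)
patches = rng.standard_normal((200, 3, 3))   # stand-in for image patches
bank = discover_filters(patches)             # filter bank, rows are unit vectors
```

The number of rows in `bank` adapts to the diversity of the input patches, which mirrors the paper's point that no number-of-filters hyper-parameter is needed.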
Citations: 2
Attention Modeling with Temporal Shift in Sign Language Recognition
Pub Date : 2022-05-15 DOI: 10.1109/SIU55565.2022.9864987
Ahmet Faruk Celimli, Ogulcan Özdemir, L. Akarun
Sign languages are visual languages expressed through multiple cues, including facial expressions and upper-body and hand gestures. These visual cues can be used together or at different instants to convey the message, so recognizing sign language requires modeling what, where and when to attend. In this study, we developed a model that uses different visual cues at the same time by combining Temporal Shift Modules (TSMs) with attention modeling. Our experiments were conducted on the BosphorusSign22k dataset. The system achieves 92.46% recognition accuracy, improving on the 78.85% baseline by approximately 14 percentage points.
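A Temporal Shift Module mixes information across neighbouring frames at essentially zero extra cost by shifting a fraction of the channels along the time axis. A minimal NumPy sketch on a (T, C) tensor (real TSMs operate on 5-D video tensors inside a CNN; this is an illustration of the shift itself):

```python
import numpy as np

def temporal_shift(x, shift_div=4):
    """Shift 1/shift_div of the channels one step forward in time and
    another 1/shift_div one step backward; leave the rest untouched.
    x has shape (T, C): T frames, C channels."""
    T, C = x.shape
    fold = C // shift_div
    out = np.zeros_like(x)
    out[1:, :fold] = x[:-1, :fold]                   # shifted forward in time
    out[:-1, fold:2 * fold] = x[1:, fold:2 * fold]   # shifted backward in time
    out[:, 2 * fold:] = x[:, 2 * fold:]              # remaining channels unchanged
    return out

x = np.arange(12, dtype=float).reshape(3, 4)  # 3 frames, 4 channels
y = temporal_shift(x, shift_div=4)
```

After the shift, each frame's first channels carry information from the previous frame and the next group from the following frame, which is what lets a per-frame network reason about temporal context.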
Citations: 0
Classification of Egyptian Fruit Bat Calls with Deep Learning Methods
Pub Date : 2022-05-15 DOI: 10.1109/SIU55565.2022.9864713
Dogukan Mesci, Anil Koluacik, B. Yılmaz, Melih Sen, E. Masazade, V. Beskardes
Bats are of great importance for biodiversity and for the survival of all living beings. This study classifies the collective calls of the Egyptian fruit bat, whose northernmost distribution is in Turkey, using the deep learning methods CNN and LSTM with MFCC (Mel Frequency Cepstral Coefficient) features. Classifying species-specific calls makes it possible to observe the species' habitat preference, social relations, foraging, reproduction, mobility and migration. The classification results obtained in this study improve significantly on the previous study, especially in distinguishing certain call types.
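The MFCC pipeline mentioned above (framing, power spectrum, triangular mel filterbank, log compression, then a DCT) can be sketched with NumPy alone. This is a simplified, illustrative implementation; library versions such as librosa add pre-emphasis, padding and other refinements, and the paper's exact parameter choices are not given in the abstract:

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, n_mels=26, n_ceps=13):
    """Minimal MFCC sketch: Hann-windowed frames -> power spectrum ->
    triangular mel filterbank -> log -> DCT-II."""
    hop = n_fft // 2
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hanning(n_fft)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft

    # Triangular filters spaced evenly on the mel scale
    mel = lambda f: 2595 * np.log10(1 + f / 700)
    inv = lambda m: 700 * (10 ** (m / 2595) - 1)
    pts = inv(np.linspace(mel(0), mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    logmel = np.log(power @ fb.T + 1e-10)

    # DCT-II decorrelates the log-mel energies into cepstral coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return logmel @ dct.T

t = np.linspace(0, 1, 16000, endpoint=False)
feats = mfcc(np.sin(2 * np.pi * 440 * t))  # one second of a 440 Hz tone
```

The resulting (frames x coefficients) matrix is the kind of fixed-size feature sequence that a CNN or LSTM classifier consumes.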
Citations: 1
Hyperspectral Anomaly Detection with Multivariate Skewed t Background Model
Pub Date : 2022-05-15 DOI: 10.1109/SIU55565.2022.9864954
K. Kayabol, Ensar Burak Aytekin, Sertaç Arisoy, E. Kuruoğlu
In this paper, an autoencoder combined with a multivariate skewed t background model is proposed for hyperspectral anomaly detection. The reconstruction error between the hyperspectral image reconstructed by the autoencoder and the original image is computed and modeled with a multivariate skewed t-distribution. The parameters of the distribution are estimated with the variational Bayes approach, and a distribution-based decision rule is derived for anomaly detection. Experimental results show that the proposed method outperforms the RX, LRASR and DAEAD anomaly detection methods.
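The decision rule, scoring each pixel's reconstruction error under a background distribution and flagging high scores, can be illustrated with a Gaussian stand-in: the paper's variational-Bayes skewed t fit is replaced here by a simple Mahalanobis rule, purely for illustration:

```python
import numpy as np

def anomaly_scores(X, X_rec):
    """Score pixels by the Mahalanobis distance of their reconstruction
    error under a Gaussian background model (a simplified stand-in for
    the paper's multivariate skewed t-distribution)."""
    E = X - X_rec                         # per-pixel reconstruction error vectors
    mu = E.mean(axis=0)
    cov = np.cov(E, rowvar=False) + 1e-6 * np.eye(E.shape[1])
    d = E - mu
    return np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))                      # 500 pixels, 5 bands
X_rec = X + rng.normal(scale=0.1, size=X.shape)    # autoencoder output: small errors
X[0] += 5.0            # inject one anomaly the "autoencoder" failed to reconstruct
scores = anomaly_scores(X, X_rec)
```

Background pixels, which the autoencoder reconstructs well, get small scores; the injected anomaly's large reconstruction error dominates the ranking.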
Citations: 0
Çok Dilli Sesten Metne Çeviri Modelinin İnce Ayar Yapılarak Türkçe Dilindeki Başarısının Arttırılması (Increasing Performance in Turkish by Finetuning of Multilingual Speech-to-Text Model)
Pub Date : 2022-05-15 DOI: 10.1109/SIU55565.2022.9864728
Ö. Mercan, Umut Özdil, Sükrü Ozan
This study was carried out with the aim of automatically transcribing phone calls between customers and a company's customer representatives. The dataset was created from audio files taken from open-source platforms and from recordings of company personnel reading short texts on various topics. In addition to the labelled data, approximately 28 thousand unlabelled recordings were labelled, yielding a total of 37,534 audio samples for training the speech-to-text model. The Wav2Vec2-XLSR-53 model, pre-trained on 53 languages, was fine-tuned on our Turkish dataset. It produces successful transcriptions on data not used in training or validation. The model was shared as open source on HuggingFace so that it can be used and tested on similar speech-to-text problems.
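Wav2Vec2 models are fine-tuned with a CTC objective, so transcription ends with CTC decoding: take the argmax label per frame, collapse repeats, and drop blanks. A toy greedy decoder (the vocabulary and frame scores below are invented for illustration, not from the paper):

```python
import numpy as np

def ctc_greedy_decode(logits, vocab, blank=0):
    """Greedy CTC decoding: per-frame argmax, collapse consecutive
    repeats, remove blank symbols."""
    ids = logits.argmax(axis=-1)
    out, prev = [], blank
    for i in ids:
        if i != prev and i != blank:
            out.append(vocab[i])
        prev = i
    return ''.join(out)

vocab = ['_', 'a', 'b', 'c']   # index 0 is the CTC blank
# frame-level one-hot scores: a a _ b b b c
logits = np.eye(4)[[1, 1, 0, 2, 2, 2, 3]]
decoded = ctc_greedy_decode(logits, vocab)  # -> "abc"
```

The blank symbol is what lets CTC distinguish a doubled letter ("aa" emitted as a, blank, a) from a single letter held over several frames.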
Citations: 2
Investigation of Appropriate Classification Method for EOG Based Human Computer Interface
Pub Date : 2022-05-15 DOI: 10.1109/SIU55565.2022.9864953
Muna Layth Abdulateef Al-Zubaidi, Selim Aras
Real feelings and mood changes can be read from our eyes because, of all human communication signals, the eyes provide the most revealing and accurate information. A human-computer interface can be controlled by voluntary eye movements, which play an important role in communication. In this study, appropriate feature and classification methods were investigated for using the Electrooculography (EOG) signals of seven different voluntary eye movements in a human-computer interface. System accuracy is increased by determining, via the sequential forward feature selection method, the combination of features that gives the best result. The developed method reached 93.9% accuracy on the seven-class dataset. The results show that a human-computer interface can be controlled with high accuracy using voluntary eye movements, and the development of a real-time working model is encouraging for future work.
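Sequential forward feature selection greedily grows the feature set, at each step adding the feature that most improves a subset score. A runnable sketch, with a simple Fisher-style class-separation criterion standing in for the classifier accuracy the paper would use (the score function and synthetic data are illustrative assumptions):

```python
import numpy as np

def sequential_forward_selection(X, y, score, k):
    """Greedily add the feature that most improves `score` until k
    features are selected. `score` evaluates a candidate feature subset."""
    selected = []
    remaining = list(range(X.shape[1]))
    while len(selected) < k:
        best = max(remaining, key=lambda f: score(X[:, selected + [f]], y))
        selected.append(best)
        remaining.remove(best)
    return selected

def fisher_score(Xs, y):
    # Between-class mean separation relative to within-class spread
    m0, m1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    s = Xs[y == 0].var(axis=0).sum() + Xs[y == 1].var(axis=0).sum()
    return np.sum((m0 - m1) ** 2) / (s + 1e-9)

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 6))
X[:, 2] += 3.0 * y        # feature 2: strongly class-dependent
X[:, 4] += 2.0 * y        # feature 4: moderately class-dependent
sel = sequential_forward_selection(X, y, fisher_score, 2)
```

The selector first picks the strongly discriminative feature, then the moderate one, ignoring the pure-noise columns.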
Citations: 1
Face Frontalization for Image Set Based Face Recognition
Pub Date : 2022-05-15 DOI: 10.1109/SIU55565.2022.9864911
Golara Ghorban Dordinejad, Hakan Cevikalp
Image set based face recognition has recently become a popular topic because it performs better than single-image face recognition. However, preprocessing is needed to remove the effects of adverse conditions within a set, such as different pose angles, illumination and expression differences. One of the most effective preprocessing steps for improving the recognition rate is face frontalization: the artificial synthesis of a frontal face image from an image with a different pose angle, a process that has been observed to increase recognition performance. In this paper, image set based face recognition is performed by applying face frontalization to all images in the sets. First, the faces in the IJBA database are frontalized with the Rotate-and-Render hybrid method, which combines a three-dimensional face model with a Generative Adversarial Network. Then, a discriminative convex classifier is used for set based recognition. In the face recognition experiments, comparing the frontalized IJBA database with its non-frontalized version shows that frontalized face images increase recognition accuracy.
Citations: 3
Classification of Breast Cancer Histopathological Images with Deep Transfer Learning Methods
Pub Date : 2022-05-15 DOI: 10.1109/SIU55565.2022.9864846
Cemal Efe Tezcan, Berk Kiras, G. Bilgin
A high accuracy rate in detecting cancerous cells in histopathological images is very important: the more sensitively cancerous cells are detected, the better the chance of an accurate and early diagnosis, which is a crucial preliminary step in treatment. In this study, classification performance was comparatively analyzed by applying various methods to four different tissue types (benign, normal, carcinoma in situ and invasive carcinoma). Using BACH and Bioimaging as datasets, the regions of interest are first extracted with several image processing methods (pyramid mean shift, line detection, spreading). After obtaining images of different sizes, performance is evaluated with the CNN deep transfer learning models VGG16, DenseNet121, ResNet50, MobileNetV2 and InceptionResNetV2.
Citations: 1
Coplanar-Waveguide Fed Microstrip Dual-Band Bandstop Filters with Inductively Coupled Dual-Mode Ring Resonators
Pub Date : 2022-05-15 DOI: 10.1109/SIU55565.2022.9864825
E. G. Sahin
This paper presents a novel microstrip dual-band bandstop filter design using inductively coupled, Coplanar Waveguide (CPW) fed dual-mode resonators. Two nested dual-mode square-loop resonators with perturbation elements on the microstrip layer produce the first and second stopbands. In the proposed design method, inductively coupled CPW rectangular rings excite the degenerate modes of the resonators to obtain the dual-band bandstop filtering structure. Two perturbation elements control the reflection zeros of the first and second bands, so two different filter responses are obtained by repositioning the reflection zeros. Charge distributions at the reflection zeros and poles are investigated for each stopband to exhibit the mode characteristics. Two filters were simulated with a full-wave EM simulator, then fabricated and measured. Although fabrication losses are high as a result of the CPW-fed structure, the measurement results agree well with the simulations.
Citations: 0
Color Image Enhancement Using A New Anisotropic Metric
Pub Date : 2022-05-15 DOI: 10.1109/SIU55565.2022.9864950
Haydar Kiliç, S. Ceyhan
In this study, a new anisotropic metric for color images is defined and its filtering results on a noisy image are examined. Unlike previous (Riemannian) approaches, the metric used for filtering is of Finsler type, and the mathematical derivation is carried through to the filter construction stage. The scale parameter beta and the step size dt were tried on different images, and the parameter values giving the best results for the new metric were identified. The new filter was compared with several known filters and the results were examined; the new filter provided the best image enhancement among them.
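The abstract does not specify the Finsler-metric filter itself; the classical Perona-Malik anisotropic diffusion scheme that such metric-based filters build on can be sketched as follows (the `kappa` and `dt` values here are illustrative choices, not the paper's beta and dt):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, dt=0.15):
    """Perona-Malik anisotropic diffusion: smooth within regions while an
    edge-stopping function suppresses diffusion across strong gradients."""
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four compass neighbours (periodic boundaries)
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u,  1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u,  1, 1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(0)
step = np.zeros((32, 32)); step[:, 16:] = 1.0             # sharp vertical edge
noisy = step + rng.normal(scale=0.05, size=step.shape)    # add noise
out = anisotropic_diffusion(noisy)
```

Noise inside the flat regions is smoothed away while the step edge survives, which is the behaviour an anisotropic (rather than isotropic) metric is meant to guarantee.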
本文定义了一种新的彩色图像各向异性度量,并对噪声图像的滤波结果进行了检验。与其他(Riemann)不同,在过滤中创建的度量被选择为Finsler类型,直到过滤器创建阶段才进行数学推理。对不同的图像尝试了尺度参数beta和步长dt,并对新度量给出最佳结果的参数进行了研究。将该滤波器与一些已知的滤波器进行了比较,并对结果进行了检验。因此,新的过滤器提供了最好的图像增强。
Citations: 0