
Latest publications from Eurasip Journal on Image and Video Processing

Weakly supervised spatial–temporal attention network driven by tracking and consistency loss for action detection
IF 2.4, Computer Science (CAS Tier 4), Pub Date: 2022-07-18, DOI: 10.1186/s13640-022-00588-4
Jinlei Zhu, Houjin Chen, Pan Pan, Jia Sun

This study proposes a novel network model for video action tube detection. The model is based on a location-interactive, weakly supervised spatial–temporal attention mechanism driven by multiple loss functions. Annotating every target location in video frames is especially costly and time-consuming, so we propose a cross-domain weakly supervised learning method with a spatial–temporal attention mechanism for action tube detection. In the source domain, we train a newly designed multi-loss spatial–temporal attention–convolution network on a source data set that has both object location and classification annotations. In the target domain, we introduce an internal tracking loss and a neighbor-consistency loss, and train the network from the pre-trained model on a target data set that has only inaccurate temporal action positions. Although the method is location-unsupervised, it outperforms typical weakly supervised methods and even shows results comparable to some recent fully supervised methods. We also visualize the activation maps, which reveal the intrinsic reason behind the higher performance of the proposed method.
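
The abstract names two target-domain losses but does not spell out their form. Below is a minimal numpy sketch of how a neighbor-consistency loss over per-frame attention maps and an internal tracking loss against tracker-propagated positions could look; the function names, squared-error forms, and the 0.1 weighting are illustrative assumptions, not the authors' definitions.

```python
import numpy as np

def neighbor_consistency_loss(attn):
    """Penalize attention maps that differ between adjacent frames.

    attn: (T, H, W) array of per-frame spatial attention maps.
    Assumed form: mean squared difference between neighboring frames.
    """
    diffs = attn[1:] - attn[:-1]  # (T-1, H, W)
    return float(np.mean(diffs ** 2))

def internal_tracking_loss(attn_centers, track_centers):
    """Penalize attention peaks that drift away from tracked positions.

    attn_centers:  (T, 2) peak (row, col) of each frame's attention map.
    track_centers: (T, 2) positions propagated by an off-the-shelf tracker.
    """
    return float(np.mean(np.sum((attn_centers - track_centers) ** 2, axis=1)))

# Toy example: 8 frames of 16x16 attention maps.
rng = np.random.default_rng(0)
attn = rng.random((8, 16, 16))
centers = np.array([np.unravel_index(a.argmax(), a.shape) for a in attn], dtype=float)
track = centers + rng.normal(scale=0.5, size=centers.shape)
total = neighbor_consistency_loss(attn) + 0.1 * internal_tracking_loss(centers, track)
print(f"combined target-domain loss: {total:.4f}")
```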

Citations: 2
Performance analysis of different DCNN models in remote sensing image object detection
IF 2.4, Computer Science (CAS Tier 4), Pub Date: 2022-06-07, DOI: 10.1186/s13640-022-00586-6
Hua Liu, Jixiang Du, Yong Zhang, Hongbo Zhang
{"title":"Performance analysis of different DCNN models in remote sensing image object detection","authors":"Hua Liu, Jixiang Du, Yong Zhang, Hongbo Zhang","doi":"10.1186/s13640-022-00586-6","DOIUrl":"https://doi.org/10.1186/s13640-022-00586-6","url":null,"abstract":"","PeriodicalId":49322,"journal":{"name":"Eurasip Journal on Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2022-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45878916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Multi-orientation local ternary pattern-based feature extraction for forensic dentistry
IF 2.4, Computer Science (CAS Tier 4), Pub Date: 2022-05-13, DOI: 10.1186/s13640-022-00584-8
Karunya Rajmohan, Askarunisa Abdul Khader

Accurate and automated identification of deceased victims from dental radiographs plays a significant role in forensic dentistry. Image processing techniques such as segmentation and feature extraction are crucial for retrieving the matching image. The raw image undergoes segmentation, feature extraction, and distance-based image retrieval. The ultimate goal of the proposed work is automated quality enhancement of the image through advanced enhancement, segmentation, feature extraction, and matching techniques. In this paper, multi-orientation local ternary pattern-based feature extraction is proposed. The grey level difference method (GLDM) is adopted to extract the texture and shape features, which yields better results. Image retrieval is performed by computing similarity scores using Manhattan, Euclidean, vector cosine angle, and histogram intersection distances to obtain the optimal match from the database. A manually picked dataset of 200 images is used for performance analysis. By extracting both shape and texture features, the proposed approach achieves maximum accuracy, precision, recall, F-measure, sensitivity, and specificity, with lower false-positive and false-negative rates.
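
The retrieval step is concrete enough to sketch. Assuming each radiograph has already been reduced to a feature vector, the following numpy snippet computes similarity scores with the four listed distances and returns the best database match; the function names are illustrative, and negating the histogram intersection so that smaller is always better is our convention, not the paper's.

```python
import numpy as np

def manhattan(a, b):
    return np.sum(np.abs(a - b))

def euclidean(a, b):
    return np.sqrt(np.sum((a - b) ** 2))

def cosine_angle(a, b):
    # 1 - cosine similarity, so smaller means more similar
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def hist_intersection(a, b):
    # Intersection is a similarity; negate it so argmin still picks the best match.
    return -np.sum(np.minimum(a, b))

def retrieve(query, database, metric=euclidean):
    """Index of the database feature vector with the best similarity score."""
    scores = [metric(query, feat) for feat in database]
    return int(np.argmin(scores))

# Toy example: a 200-entry database of hypothetical LTP/GLDM feature vectors.
rng = np.random.default_rng(42)
db = rng.random((200, 64))
query = db[17] + rng.normal(scale=0.01, size=64)  # noisy copy of entry 17
for metric in (manhattan, euclidean, cosine_angle, hist_intersection):
    print(metric.__name__, "->", retrieve(query, db, metric))  # each should print 17
```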

Citations: 0
Face image synthesis from facial parts
IF 2.4, Computer Science (CAS Tier 4), Pub Date: 2022-05-10, DOI: 10.1186/s13640-022-00585-7
Qiushi Sun, Jingtao Guo, Yi Liu

Recently, inspired by the growing power of deep convolutional neural networks (CNNs) and generative adversarial networks (GANs), facial image editing has received increasing attention and has produced a series of wide-ranging applications. In this paper, we propose a new and effective approach to a challenging task: synthesizing face images based on key facial parts. The proposed approach is a novel deep generative network that automatically aligns facial parts with their precise positions in a face image and then outputs an entire facial image conditioned on the well-aligned parts. Specifically, three loss functions are introduced, which are key to synthesizing realistic facial images: a reconstruction loss to generate image content in the unknown region, a perceptual loss to enhance the network's ability to model high-level semantic structures, and an adversarial loss to ensure that the synthesized images are visually realistic. These three components cooperate to form an effective framework for parts-based, high-quality facial image synthesis. Finally, extensive experiments demonstrate the superior performance of this method over existing solutions.
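
To make the three-loss objective concrete, here is a minimal numpy sketch of one plausible instantiation: L1 for reconstruction, feature-space MSE for the perceptual term, and a non-saturating generator loss for the adversarial term. These particular forms and weights are common choices assumed for illustration; the paper's exact definitions are not reproduced here.

```python
import numpy as np

def reconstruction_loss(fake, real):
    """L1 pixel loss over the generated region (assumed form)."""
    return float(np.mean(np.abs(fake - real)))

def perceptual_loss(feat_fake, feat_real):
    """MSE between features from a fixed pretrained network (assumed form)."""
    return float(np.mean((feat_fake - feat_real) ** 2))

def adversarial_loss(d_fake):
    """Non-saturating generator loss; d_fake holds discriminator scores in (0, 1)."""
    return float(-np.mean(np.log(d_fake + 1e-8)))

def generator_objective(fake, real, feat_fake, feat_real, d_fake,
                        w_rec=1.0, w_per=0.1, w_adv=0.01):
    # The weights are illustrative; the paper's actual weighting is not given here.
    return (w_rec * reconstruction_loss(fake, real)
            + w_per * perceptual_loss(feat_fake, feat_real)
            + w_adv * adversarial_loss(d_fake))

rng = np.random.default_rng(1)
fake = rng.random((64, 64, 3))
real = rng.random((64, 64, 3))
feat_fake, feat_real = rng.random(512), rng.random(512)
d_fake = rng.uniform(0.1, 0.9, size=8)  # discriminator outputs on a fake batch
print(f"generator loss: {generator_objective(fake, real, feat_fake, feat_real, d_fake):.4f}")
```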

Citations: 1
An image-guided network for depth edge enhancement
IF 2.4, Computer Science (CAS Tier 4), Pub Date: 2022-04-15, DOI: 10.1186/s13640-022-00583-9
Kuan-Ting Lee, Enyu Liu, J. Yang, Li Hong
{"title":"An image-guided network for depth edge enhancement","authors":"Kuan-Ting Lee, Enyu Liu, J. Yang, Li Hong","doi":"10.1186/s13640-022-00583-9","DOIUrl":"https://doi.org/10.1186/s13640-022-00583-9","url":null,"abstract":"","PeriodicalId":49322,"journal":{"name":"Eurasip Journal on Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2022-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49354185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automatic kidney segmentation using 2.5D ResUNet and 2.5D DenseUNet for malignant potential analysis in complex renal cyst based on CT images
IF 2.4, Computer Science (CAS Tier 4), Pub Date: 2022-03-22, DOI: 10.1186/s13640-022-00581-x
Parin Kittipongdaja, Thitirat Siriborvornratanakul

Bosniak renal cyst classification has been widely used to determine the complexity of a renal cyst. However, about half of the patients undergoing surgery for Bosniak category III cysts take surgical risks that yield no clinical benefit at all, because their pathological results reveal that the cysts are actually benign, not malignant. This problem inspires us to use recently popular deep learning techniques and to study alternative analytics methods for precise binary classification (benign or malignant tumor) on computerized tomography (CT) images. Two consecutive steps are required to achieve this goal: segmenting kidney organs or lesions from CT images, then classifying the segmented kidneys. In this paper, we propose a study of kidney segmentation using 2.5D ResUNet and 2.5D DenseUNet to efficiently extract intra-slice and inter-slice features. Our models are trained and validated on the public data set from the Kidney Tumor Segmentation (KiTS19) challenge in two different training environments. All experimental models achieve high mean kidney Dice scores of at least 95% on the KiTS19 validation set of 60 patients. Apart from the KiTS19 data set, we also conduct separate experiments on abdominal CT images of four Thai patients, where our experimental models show a drop in performance, with a best mean kidney Dice score of 87.60%.
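
The headline numbers are mean kidney Dice scores. For reference, a standard Dice computation for a binary segmentation mask (a common formulation, not the authors' code) looks like this:

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice coefficient between two binary masks (1 = kidney, 0 = background)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: a 50x50 prediction shifted 5 pixels from the ground truth.
gt = np.zeros((128, 128), dtype=np.uint8)
gt[40:90, 40:90] = 1
pred = np.zeros_like(gt)
pred[45:95, 45:95] = 1
print(f"Dice: {dice_score(pred, gt):.4f}")  # 2*45*45 / (2500 + 2500) = 0.81
```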

Citations: 9
Adaptive response maps fusion of correlation filters with anti-occlusion mechanism for visual object tracking
IF 2.4, Computer Science (CAS Tier 4), Pub Date: 2022-03-18, DOI: 10.1186/s13640-022-00582-w
Jianming Zhang, Hehua Liu, Yaoqi He, Li-Dan Kuang, Xi Chen

Despite the impressive robustness and accuracy of correlation filter-based trackers, there is still room for improvement. Most existing trackers use a single feature or fixed fusion weights, which can cause tracking to fail under deformation or severe occlusion. In this paper, we propose a multi-feature response map adaptive fusion strategy based on the consistency between individual features and the fused feature; it improves robustness and accuracy by building a better object appearance model. Moreover, since the response map has multiple local peaks when the target is occluded, we propose an anti-occlusion mechanism. Specifically, if a nonmaximal local peak satisfies our proposed conditions, we generate a new response map by moving the center of the region of interest to that peak position and re-extracting features; we then select the response map with the largest response value as the final one. This anti-occlusion mechanism effectively copes with tracking failure caused by occlusion. Finally, by adjusting the learning rate in different scenes, we design a high-confidence model update strategy to deal with model pollution. We conducted experiments on the OTB2013, OTB2015, TC128, and UAV123 datasets and compared against current state-of-the-art algorithms; the proposed algorithm shows impressive advantages in accuracy and robustness.
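
The anti-occlusion check lends itself to a short sketch. The following numpy/scipy code finds local peaks of a correlation response map and flags a possible occlusion when a secondary peak is close in height to the global one; the 0.6 ratio condition and the neighborhood size are illustrative assumptions, not the paper's actual criteria.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def local_peaks(resp, size=5):
    """Coordinates of local maxima of a response map, strongest first."""
    is_peak = resp == maximum_filter(resp, size=size)
    ys, xs = np.nonzero(is_peak)
    order = np.argsort(resp[ys, xs])[::-1]
    return list(zip(ys[order], xs[order]))

def needs_redetection(resp, ratio=0.6, size=5):
    """Flag possible occlusion: a secondary peak nearly as high as the main one.

    Returns (flag, position); position is where the region of interest would be
    re-centered for feature re-extraction, as the abstract describes.
    """
    peaks = local_peaks(resp, size)
    if len(peaks) < 2:
        return False, None
    (y0, x0), (y1, x1) = peaks[0], peaks[1]
    if resp[y1, x1] >= ratio * resp[y0, x0]:
        return True, (int(y1), int(x1))
    return False, None

# Toy example: low-amplitude clutter plus a main peak and a strong secondary peak.
rng = np.random.default_rng(7)
resp = rng.random((31, 31)) * 0.2
resp[10, 10], resp[25, 5] = 1.0, 0.8
print(needs_redetection(resp))  # flags the secondary peak at (25, 5)
```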

Citations: 3
Random CNN structure: tool to increase generalization ability in deep learning
IF 2.4, Computer Science (CAS Tier 4), Pub Date: 2022-02-08, DOI: 10.1186/s13640-022-00580-y
B. Świderski, S. Osowski, Grzegorz Gwardys, J. Kurek, M. Słowińska, I. Lugowska
{"title":"Random CNN structure: tool to increase generalization ability in deep learning","authors":"B. Świderski, S. Osowski, Grzegorz Gwardys, J. Kurek, M. Słowińska, I. Lugowska","doi":"10.1186/s13640-022-00580-y","DOIUrl":"https://doi.org/10.1186/s13640-022-00580-y","url":null,"abstract":"","PeriodicalId":49322,"journal":{"name":"Eurasip Journal on Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2022-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49608624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Printing and scanning investigation for image counter forensics
IF 2.4, Computer Science (CAS Tier 4), Pub Date: 2022-02-07, DOI: 10.1186/s13640-023-00610-3
Hailey James, O. Gupta, D. Raviv
{"title":"Printing and scanning investigation for image counter forensics","authors":"Hailey James, O. Gupta, D. Raviv","doi":"10.1186/s13640-023-00610-3","DOIUrl":"https://doi.org/10.1186/s13640-023-00610-3","url":null,"abstract":"","PeriodicalId":49322,"journal":{"name":"Eurasip Journal on Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2022-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42053190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Printing and scanning investigation for image counter forensics
IF 2.4, Computer Science (CAS Tier 4), Pub Date: 2022-02-07, DOI: 10.1186/s13640-022-00579-5
Hailey Joren, O. Gupta, D. Raviv
{"title":"Printing and scanning investigation for image counter forensics","authors":"Hailey Joren, O. Gupta, D. Raviv","doi":"10.1186/s13640-022-00579-5","DOIUrl":"https://doi.org/10.1186/s13640-022-00579-5","url":null,"abstract":"","PeriodicalId":49322,"journal":{"name":"Eurasip Journal on Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2022-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45050831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2