
Latest Publications — Journal of New Media (新媒体杂志(英文))

A Survey on Digital Image Copy-Move Forgery Localization Using Passive Techniques
Pub Date: 2019-01-01 DOI: 10.32604/JNM.2019.06219
W. Tan, Wu Yunqing, Wu Peng, Chen Beijing
Digital images can be tampered with easily using simple image-editing software. Therefore, forensic investigation of the authenticity of digital image content is increasingly important. Copy-move is one of the most common types of image forgery. This paper presents an overview of traditional and recent copy-move forgery localization methods based on passive techniques. The methods are classified into three types: block-based, keypoint-based, and deep learning-based. In addition, their strengths and weaknesses are compared and analyzed in terms of robustness and computational cost. Finally, further research directions are discussed.
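To make the survey's taxonomy more concrete, here is a minimal Python sketch of the keypoint-based family, assuming OpenCV with SIFT support and NumPy are available. It matches SIFT keypoints of an image against the image itself and flags near-identical descriptor pairs that lie far apart as copy-move candidates. The ratio and shift thresholds are illustrative assumptions, not values from any surveyed method.

```python
# Minimal keypoint-based copy-move detection sketch (illustrative only).
# Assumes OpenCV with SIFT support (opencv-python >= 4.4) and NumPy.
import cv2
import numpy as np

def detect_copy_move(image_path, ratio=0.6, min_shift=40):
    """Return pairs of keypoints whose descriptors match but lie far apart."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    if descriptors is None or len(keypoints) < 3:
        return []

    # Match each descriptor against all others in the same image.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(descriptors, descriptors, k=3)

    suspicious = []
    for m in matches:
        # m[0] is the trivial self-match; apply Lowe's ratio test to the next two.
        if len(m) >= 3 and m[1].distance < ratio * m[2].distance:
            p1 = np.array(keypoints[m[1].queryIdx].pt)
            p2 = np.array(keypoints[m[1].trainIdx].pt)
            # Ignore near-coincident points; a genuine copy-move is spatially shifted.
            if np.linalg.norm(p1 - p2) > min_shift:
                suspicious.append((tuple(p1), tuple(p2)))
    return suspicious

# Usage (sketch): pairs = detect_copy_move("suspect.jpg"); many clustered pairs
# sharing a consistent shift vector suggest a duplicated region.
```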
Citations: 5
Low-Dose CT Image Denoising Based on Improved WGAN-gp
Pub Date: 2019-01-01 DOI: 10.32604/JNM.2019.06259
Zhenlong Du, Ye Chao, Yujia Yan, Xiaoli Li
To improve the quality of low-dose computed tomography (CT) images, this paper proposes an improved image denoising approach based on WGAN-gp, which uses the Wasserstein distance. To improve training and convergence efficiency, the method adds a gradient penalty term to the WGAN network. A novel perceptual loss is introduced so that the texture information of low-dose images remains perceptible to the diagnostician's eye. Experimental results show that, compared with state-of-the-art methods, the time complexity is reduced and the visual quality of low-dose CT images is significantly improved.
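As a concrete reference for the gradient penalty term mentioned above, the following PyTorch sketch shows the standard WGAN-gp penalty computed on samples interpolated between real and generated images. It is a generic illustration of the usual WGAN-gp formulation, not the authors' implementation, and `critic` and `lambda_gp` are placeholder names.

```python
# Generic WGAN-gp gradient penalty sketch (PyTorch); not the paper's exact code.
import torch

def gradient_penalty(critic, real, fake):
    """Penalize the critic's gradient norm on points interpolated between real and fake."""
    real, fake = real.detach(), fake.detach()
    batch_size = real.size(0)
    # One random interpolation coefficient per sample, broadcast over image dims.
    eps = torch.rand(batch_size, 1, 1, 1, device=real.device)
    interpolated = (eps * real + (1.0 - eps) * fake).requires_grad_(True)

    scores = critic(interpolated)
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interpolated,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
        retain_graph=True,
    )[0]

    # Flatten per-sample gradients and push their L2 norm toward 1.
    grads = grads.view(batch_size, -1)
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

# Critic loss (sketch): loss_d = fake_scores.mean() - real_scores.mean() \
#     + lambda_gp * gradient_penalty(critic, real_imgs, fake_imgs)
```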
Citations: 3
Instance Retrieval Using Region of Interest Based CNN Features
Pub Date: 2019-01-01 DOI: 10.32604/JNM.2019.06582
Jingcheng Chen, Zhili Zhou, Zhaoqing Pan, Ching-Nung Yang
Recently, image representations derived from convolutional neural networks (CNNs) have achieved promising performance for instance retrieval, outperforming traditional hand-crafted image features. However, most existing CNN-based features describe the entire image and are therefore less robust to background clutter. This paper proposes a region-of-interest (RoI)-based deep convolutional representation for instance retrieval. It first detects the regions of interest (RoIs) in an image, and then extracts a set of RoI-based CNN features from the fully-connected layer of a CNN. The proposed RoI-based CNN feature describes the patterns of the detected RoIs, so that visual matching can be performed at the image-region level to effectively identify target objects against cluttered backgrounds. Moreover, we evaluate the performance of the proposed RoI-based CNN feature when it is extracted from different convolutional or fully-connected layers, and compare it with state-of-the-art CNN features on two instance retrieval benchmarks. Experimental results show that the proposed RoI-based CNN feature outperforms state-of-the-art CNN features for instance retrieval.
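To illustrate the detect-then-describe pipeline outlined above, here is a rough Python sketch that obtains region proposals from a torchvision Faster R-CNN detector and extracts a fully-connected-layer descriptor for each cropped RoI with a VGG-16 backbone. The choice of detector, backbone, layer, and score threshold is an assumption made for the example, not necessarily the configuration used in the paper.

```python
# Sketch: extract per-RoI CNN descriptors for instance retrieval (illustrative).
# Assumes torchvision >= 0.13 for the `weights=` API.
import torch
import torchvision
from torchvision.transforms.functional import resize

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
backbone = torchvision.models.vgg16(weights="DEFAULT").eval()
fc1 = backbone.classifier[0]  # first fully-connected layer (4096-d output)

@torch.no_grad()
def roi_features(image, score_thresh=0.7):
    """Detect RoIs, crop them, and return one fc-layer descriptor per region."""
    detections = detector([image])[0]
    descriptors = []
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score < score_thresh:
            continue
        x1, y1, x2, y2 = box.int().tolist()
        # Crop the RoI and resize it to the backbone's input size
        # (ImageNet normalization omitted for brevity).
        crop = resize(image[:, y1:y2, x1:x2], [224, 224])
        feat = backbone.features(crop.unsqueeze(0))
        feat = backbone.avgpool(feat).flatten(1)
        descriptors.append(fc1(feat).squeeze(0))
    return descriptors  # compare across images with cosine similarity

# Usage (sketch): feats = roi_features(torchvision.io.read_image("query.jpg") / 255.0)
```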
Citations: 5