Digital images can be easily tampered with using simple image editing software. Therefore, forensic investigation of the authenticity of digital image content is increasingly important. Copy-move is one of the most common types of image forgery. This paper presents an overview of traditional and recent copy-move forgery localization methods that use passive techniques. These methods are classified into three types: block-based, keypoint-based, and deep learning-based methods. In addition, the strengths and weaknesses of these methods are compared and analyzed in terms of robustness and computational cost. Finally, directions for further research are discussed.
W. Tan, Wu Yunqing, Wu Peng, Chen Beijing, "A Survey on Digital Image Copy-Move Forgery Localization Using Passive Techniques," Journal of New Media, 2019. DOI: 10.32604/JNM.2019.06219
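The block-based pipeline the survey classifies can be sketched in a few lines: slide a window over the image, compute a feature per block, group matching features, and let pairs of identical blocks vote for a common shift vector. This is a minimal illustration only, not any surveyed method's implementation; real block-based detectors use robust features (e.g., DCT coefficients or Zernike moments) rather than the raw-pixel keys below, which only catch exact copy-move.

```python
# Minimal block-based copy-move sketch: identical blocks vote for a
# common shift vector; a dominant shift suggests a copied region.
from collections import defaultdict

def find_duplicate_blocks(img, b=2):
    """Return {shift: votes} over pairs of identical b x b blocks."""
    h, w = len(img), len(img[0])
    seen = defaultdict(list)               # block feature -> positions
    for r in range(h - b + 1):
        for c in range(w - b + 1):
            feat = tuple(img[r + i][c + j] for i in range(b) for j in range(b))
            seen[feat].append((r, c))
    votes = defaultdict(int)               # shift vector -> vote count
    for positions in seen.values():
        for i in range(len(positions)):
            for j in range(i + 1, len(positions)):
                dr = positions[j][0] - positions[i][0]
                dc = positions[j][1] - positions[i][1]
                votes[(dr, dc)] += 1
    return dict(votes)

# Toy grayscale image: the patch at column 0 is copied 3 columns right.
img = [
    [9, 8, 0, 9, 8, 0],
    [7, 6, 0, 7, 6, 0],
    [0, 0, 0, 0, 0, 0],
]
votes = find_duplicate_blocks(img, b=2)
print(votes)  # {(0, 3): 4} -- all duplicate pairs agree on shift (0, 3)
```

The agreement of many block pairs on one shift is what separates a genuine copy-move from coincidental texture repeats.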
To improve the quality of low-dose computed tomography (CT) images, this paper proposes an improved image denoising approach based on WGAN-gp, a Wasserstein GAN with gradient penalty. To improve training stability and convergence efficiency, the method adds a gradient penalty term to the WGAN loss. A perceptual loss is also introduced so that the texture information of low-dose images remains perceptible to the diagnostician's eye. Experimental results show that, compared with state-of-the-art methods, the time complexity is reduced and the visual quality of low-dose CT images is significantly improved.
Zhenlong Du, Ye Chao, Yujia Yan, Xiaoli Li, "Low-Dose CT Image Denoising Based on Improved WGAN-gp," Journal of New Media, 2019. DOI: 10.32604/JNM.2019.06259
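The gradient penalty this abstract refers to is the standard WGAN-gp term L_gp = λ·E[(‖∇x D(x̂)‖₂ − 1)²], evaluated at a random interpolation x̂ between a real and a generated sample. The sketch below is a toy illustration, not the paper's network: it uses a hypothetical linear critic D(x) = w·x, whose gradient with respect to x is simply w, so the penalty has a closed form; a real implementation differentiates through the critic network.

```python
# WGAN-gp gradient penalty on a toy linear critic D(x) = w . x.
# For this critic, grad_x D(x) = w everywhere, so the penalty is
# lam * (||w|| - 1)^2 regardless of the interpolation point.
import math
import random

def gradient_penalty(w, x_real, x_fake, lam=10.0):
    eps = random.random()                          # interpolation weight
    x_hat = [eps * r + (1 - eps) * f for r, f in zip(x_real, x_fake)]
    grad = w                                       # grad of w.x w.r.t. x
    grad_norm = math.sqrt(sum(g * g for g in grad))
    return lam * (grad_norm - 1.0) ** 2

print(gradient_penalty([0.6, 0.8], [1.0, 2.0], [0.5, 0.3]))  # 0.0 (||w|| = 1)
print(gradient_penalty([3.0, 4.0], [1.0, 2.0], [0.5, 0.3]))  # 160.0 (||w|| = 5)
```

Driving the gradient norm toward 1 enforces the 1-Lipschitz constraint that Wasserstein critics require, which is why WGAN-gp trains more stably than weight clipping.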
Jingcheng Chen, Zhili Zhou, Zhaoqing Pan, Ching-Nung Yang
Recently, image representations derived from convolutional neural networks (CNNs) have achieved promising performance for instance retrieval, outperforming traditional hand-crafted image features. However, most existing CNN-based features describe entire images and are therefore less robust to background clutter. This paper proposes a region-of-interest (RoI)-based deep convolutional representation for instance retrieval. It first detects regions of interest (RoIs) in an image, and then extracts a set of RoI-based CNN features from the fully-connected layer of a CNN. The proposed RoI-based CNN feature describes the patterns of the detected RoIs, so visual matching can be performed at the image-region level to effectively identify target objects against cluttered backgrounds. Moreover, we test the performance of the proposed RoI-based CNN feature when it is extracted from different convolutional or fully-connected layers, and compare it with state-of-the-art CNN features on two instance retrieval benchmarks. Experimental results show that the proposed RoI-based CNN feature outperforms the state-of-the-art CNN features for instance retrieval.
Jingcheng Chen, Zhili Zhou, Zhaoqing Pan, Ching-Nung Yang, "Instance Retrieval Using Region of Interest Based CNN Features," Journal of New Media, 2019. DOI: 10.32604/JNM.2019.06582
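The region-level matching idea can be illustrated without a CNN: each image carries a set of per-RoI feature vectors, and image similarity is the best cosine match over region pairs, so a small target object can still match even when most of the image is clutter. This is a hedged sketch under toy assumptions; the short vectors below stand in for the fully-connected-layer CNN activations the paper actually extracts.

```python
# Region-level matching: compare sets of RoI features instead of one
# global descriptor, taking the best-matching region pair.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def region_level_similarity(query_rois, image_rois):
    """Max cosine similarity over all (query RoI, image RoI) pairs."""
    return max(cosine(q, r) for q in query_rois for r in image_rois)

query = [[1.0, 0.0, 0.0]]                   # one RoI: the target object
img_a = [[0.9, 0.1, 0.0], [0.0, 0.0, 1.0]]  # object present + clutter RoI
img_b = [[0.0, 1.0, 0.0], [0.0, 0.7, 0.7]]  # clutter only
sim_a = region_level_similarity(query, img_a)
sim_b = region_level_similarity(query, img_b)
print(sim_a > sim_b)  # True: the object is found in img_a despite clutter
```

A whole-image descriptor would average the clutter region into img_a's representation; matching per region is what keeps the target object's signal intact.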