
2013 IEEE International Conference on Signal and Image Processing Applications: Latest Publications

A video steganography attack using multi-dimensional Discrete Spring Transform
Pub Date : 2013-10-01 DOI: 10.1109/ICSIPA.2013.6708000
Aaron T. Sharp, Qilin Qi, Yaoqing Yang, D. Peng, H. Sharif
Video steganography is fast emerging as a next-generation steganographic medium that offers many advantages over traditional steganographic cover media such as audio and images. Various schemes have recently emerged which take advantage of video-specific properties for information hiding, most notably through the use of motion vectors. Although many steganographic schemes have been proposed which exploit several possible steganographic domains within video sequences, few attacks have been proposed to combat such schemes, and no current attacks have been shown to be capable of defeating multiple schemes at once. In this paper, we will further expand upon our proposed Discrete Spring Transform (DST) steganographic attack. We will explore further applications of the transform and how it may be used to defeat multiple steganographic schemes, specifically current video steganography schemes. The effectiveness of the proposed algorithm will be shown by attacking a multi-dimensional steganographic algorithm embedded in video sequences, where the scheme operates in two different dimensions of the video. The attack successfully defeats multiple steganographic schemes, as verified by the bit error rate (BER) after the DST attack, which always remains at approximately 0.5. Furthermore, the attack preserves the integrity of the video sequence, as verified by the PSNR, which always remains above approximately 30 dB.
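The two success criteria quoted in the abstract are standard measurements: a bit error rate (BER) near 0.5 means the extracted payload is no better than random guessing, and a PSNR above 30 dB means the attacked video is still visually close to the original. A minimal NumPy sketch of how such checks can be computed follows; the function names and the random placeholder arrays are illustrative and stand in for a real extractor's output rather than the paper's data.

```python
import numpy as np

def bit_error_rate(embedded_bits: np.ndarray, extracted_bits: np.ndarray) -> float:
    """Fraction of payload bits that differ; ~0.5 means the hidden message is destroyed."""
    return float(np.mean(embedded_bits != extracted_bits))

def psnr(reference: np.ndarray, attacked: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a reference frame and its attacked version, in dB."""
    mse = np.mean((reference.astype(np.float64) - attacked.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Placeholder payloads standing in for the extractor's output after an attack:
rng = np.random.default_rng(0)
payload = rng.integers(0, 2, 1024)
recovered = rng.integers(0, 2, 1024)       # chance-level extraction, as after a successful attack
print(bit_error_rate(payload, recovered))  # close to 0.5
```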
Citations: 19
A multi-agent mobile robot system with environment perception and HMI capabilities
Pub Date : 2013-10-01 DOI: 10.1109/ICSIPA.2013.6708013
M. Tornow, A. Al-Hamadi, Vinzenz Borrmann
A multi-agent robot system can speed up exploration or search-and-rescue operations in dangerous environments by working as a distributed sensor network. Each robot (e.g. an Eddi Robot) equipped with a combined 2D/3D sensor (MS Kinect) and additional sensors needs to efficiently exchange its collected data with the other group members for task planning. For environment perception, a 2D/3D panorama is generated from a sequence of images obtained while the robot is rotating. Furthermore, the 2D/3D sensor data is used for human-machine interaction based on hand postures and gestures. The hand posture classification is realized by an Artificial Neural Network (ANN) that processes a feature vector composed of Cosine Descriptors (COD), Hu moments, and geometric features extracted from the hand shape. The system achieves an overall classification rate of more than 93%. It is used within the hand-posture- and gesture-based human-machine interface to control the robot team.
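The abstract names Cosine Descriptors, Hu moments, and geometric features as the inputs to the ANN but does not list the geometric features. The sketch below is a hedged illustration of how the Hu moments and a couple of simple shape features of a segmented hand silhouette might be gathered with OpenCV; the COD computation and the ANN itself are omitted, and the specific geometric features chosen here are assumptions.

```python
import cv2
import numpy as np

def hand_shape_features(binary_mask: np.ndarray) -> np.ndarray:
    """Collect Hu moments plus a few simple geometric features of a hand silhouette.

    binary_mask: 8-bit single-channel image with hand pixels > 0 (OpenCV >= 4 assumed).
    """
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)          # largest blob = the hand

    hu = cv2.HuMoments(cv2.moments(contour)).flatten()    # 7 Hu moments
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)      # log-scale them to comparable magnitudes

    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    hull_area = cv2.contourArea(cv2.convexHull(contour))
    geometric = np.array([
        4.0 * np.pi * area / (perimeter ** 2 + 1e-9),     # circularity / compactness
        area / (hull_area + 1e-9),                        # solidity
    ])
    return np.concatenate([hu, geometric])                # feature vector for the classifier
```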
Citations: 6
Robust reversible watermarking scheme based on wavelet-like transform
Pub Date : 2013-10-01 DOI: 10.1109/ICSIPA.2013.6708032
R. T. Mohammed, B. Khoo
Watermarking reversibility is one of the basic requirements for medical imaging, military imaging, and remote sensing applications. In these fields, a slight change in the original image can lead to a significant difference in the final decision-making process. However, reversibility alone is not enough for practical applications because the hidden data must be extractable even after unintentional attacks (e.g., noise addition, JPEG compression), so robust (i.e., semi-fragile) reversible watermarking methods are required. In this paper, we present a new robust reversible watermarking method that utilizes the Slantlet transform (SLT) to transform image blocks and modifies the SLT coefficients to embed the watermark bits. If the watermarked image is not attacked, the method is completely reversible (i.e., the watermark and the original image will be recovered correctly). After JPEG compression, the hidden data can still be extracted without error. Experimental results prove that the presented scheme achieves high visual quality, complete reversibility, and better robustness in comparison with previous methods.
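The Slantlet transform is not available in common Python signal-processing libraries, and the abstract does not give the exact coefficient modification rule, so the snippet below only illustrates the general principle behind robust coefficient embedding, using quantization index modulation (QIM) on a single transform coefficient; it is a generic stand-in, not the paper's SLT-based scheme.

```python
import numpy as np

def qim_embed(coeff: float, bit: int, step: float = 12.0) -> float:
    """Embed one bit into one transform coefficient by quantization index modulation."""
    return float(np.round((coeff - bit * step / 2) / step) * step + bit * step / 2)

def qim_extract(coeff: float, step: float = 12.0) -> int:
    """Recover the bit by finding which of the two quantizer lattices is nearer."""
    d0 = abs(coeff - np.round(coeff / step) * step)
    d1 = abs(coeff - (np.round((coeff - step / 2) / step) * step + step / 2))
    return 0 if d0 <= d1 else 1

c = 37.3                          # a hypothetical transform coefficient
w = qim_embed(c, 1)               # -> 42.0
assert qim_extract(w + 2.0) == 1  # the bit survives a small perturbation, e.g. mild compression noise
```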
Citations: 10
Depth error concealment based on decision making
Pub Date : 2013-10-01 DOI: 10.1109/ICSIPA.2013.6708002
M. Ranjbari, A. Sali, H. A. Karim, F. Hashim
One common form of representing stereoscopic video is the combination of a 2D video with its corresponding depth map, captured by a laser camera to convey depth in the video. When this type of video is transmitted over error-prone channels, packet loss leads to frame loss, and this loss mostly occurs in the depth frames. This paper therefore proposes a depth error concealment method based on decision making, termed DM-PV, which exploits the high correlation between a 2D image and its corresponding depth map. The 2D image provides information about the missing frame in the depth sequence to assist the decision-making process in concealing the lost frames. The process involves inserting a suitable blank frame or duplicating previous frames in place of missing frames in the depth sequence. PSNR performance improves over the frame-copy method, which involves no decision making. Furthermore, the subjective quality of the stereoscopic video is better with DM-PV.
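The abstract does not spell out the decision criterion, so the sketch below assumes a simple similarity test between consecutive 2D frames to choose between duplicating the previous depth frame and inserting a blank one; both the metric and the threshold are hypothetical placeholders, not the DM-PV rule itself.

```python
import numpy as np

def conceal_lost_depth_frame(prev_depth: np.ndarray,
                             prev_rgb: np.ndarray,
                             cur_rgb: np.ndarray,
                             change_threshold: float = 200.0) -> np.ndarray:
    """Replace a lost depth frame using information from the received 2D frames.

    If the current 2D frame is close to the previous one, the scene is nearly static
    and the previous depth frame is duplicated; otherwise a blank depth frame is inserted.
    """
    mse = np.mean((cur_rgb.astype(np.float64) - prev_rgb.astype(np.float64)) ** 2)
    if mse < change_threshold:
        return prev_depth.copy()        # duplicate the previous depth frame
    return np.zeros_like(prev_depth)    # insert a blank depth frame
```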
Citations: 2
Beamspace bearing estimation based on wavelet transform
Pub Date : 2013-10-01 DOI: 10.1109/ICSIPA.2013.6708022
Jinxiang Du, Yan Ma
In this paper, we propose a new wideband bearing estimation method based on the wavelet transform. By analyzing the relationship between the wavelet transform of the frequency-invariant beam's output and the array's beampattern, we derive a spatial power spectrum based on the wavelet transform (SPS-WT). The method achieves good noise suppression by exploiting the statistical uncorrelatedness of signals and noise, and also offers high resolution in bearing estimation. The performance of the proposed method is illustrated by simulation results.
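The SPS-WT derivation is not reproduced in the abstract. For orientation only, the snippet below computes the conventional delay-and-sum spatial power spectrum of a narrowband uniform linear array, which is the kind of baseline spectrum a beamspace method of this sort refines; it is not the proposed wavelet-based estimator.

```python
import numpy as np

def conventional_spatial_spectrum(snapshots, spacing_wl=0.5, angles_deg=None):
    """Delay-and-sum spatial power spectrum for a narrowband uniform linear array.

    snapshots: complex array of shape (n_sensors, n_snapshots).
    spacing_wl: element spacing in wavelengths (half-wavelength by default).
    Returns (angles_deg, spectrum).
    """
    if angles_deg is None:
        angles_deg = np.arange(-90.0, 90.5, 0.5)
    n_sensors, n_snapshots = snapshots.shape
    R = snapshots @ snapshots.conj().T / n_snapshots       # sample covariance matrix
    spectrum = np.empty(angles_deg.size)
    for i, theta in enumerate(np.deg2rad(angles_deg)):
        steering = np.exp(-2j * np.pi * spacing_wl * np.arange(n_sensors) * np.sin(theta))
        spectrum[i] = np.real(steering.conj() @ R @ steering) / n_sensors
    return angles_deg, spectrum
```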
Citations: 1
On the effects of pre- and post-processing in video cartoonization with bilateral filters
Pub Date : 2013-10-01 DOI: 10.1109/ICSIPA.2013.6707974
Zoya Shahcheraghi, John See
In recent years, advances in image-based artistic rendering have grown steadily with the additional leverage of image and video processing techniques. Video cartoonization, or stylization, is the process of artificially incorporating cartoon-like effects into photorealistic input videos. This paper investigates the effects of integrating relevant pre- and post-processing tasks to significantly improve the quality of cartoonized videos processed with bilateral filters (BF). Our video cartoonization framework extends Winnemöller's original real-time video abstraction framework, which applies the edge-preserving BF with additional use of edge maps, luminance quantization, and frame temporal coherency. In our work, we propose a contrast enhancement option using intensity stretching and Laplacian filtering to fine-tune the contrast levels of the pre-BF frames. For the post-BF recombined frames, an unsharp masking procedure is proposed to accentuate feature details in the final output video. Results from extensive experiments based on qualitative user evaluation underline the importance of pre- and post-processing tasks for improved video cartoonization.
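A hedged single-frame sketch of this kind of pipeline with OpenCV is shown below: intensity stretching as pre-processing, iterated bilateral filtering with luminance quantization and an edge overlay as the abstraction step, and unsharp masking as post-processing. All parameter values are illustrative assumptions rather than the authors' tuned settings, and temporal coherency between frames is not handled here.

```python
import cv2
import numpy as np

def cartoonize_frame(frame_bgr: np.ndarray) -> np.ndarray:
    """Single-frame cartoonization sketch: contrast stretch -> iterated bilateral
    filtering -> color quantization with edge overlay -> unsharp masking."""
    # Pre-processing: stretch intensities to the full 8-bit range.
    stretched = cv2.normalize(frame_bgr, None, 0, 255, cv2.NORM_MINMAX)

    # Edge-preserving abstraction: a few passes of the bilateral filter (d, sigmaColor, sigmaSpace).
    smooth = stretched
    for _ in range(3):
        smooth = cv2.bilateralFilter(smooth, 9, 75, 75)

    # Cartoon look: coarse color quantization combined with a binary edge map.
    quantized = (smooth // 24) * 24
    gray = cv2.medianBlur(cv2.cvtColor(stretched, cv2.COLOR_BGR2GRAY), 5)
    edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY, 9, 2)
    cartoon = cv2.bitwise_and(quantized, quantized, mask=edges)

    # Post-processing: unsharp masking to accentuate feature details.
    blurred = cv2.GaussianBlur(cartoon, (0, 0), 3)
    return cv2.addWeighted(cartoon, 1.5, blurred, -0.5, 0)
```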
Citations: 6
A non-destructive technique using 3D X-ray Computed Tomography to reveal semiconductor internal physical defects
Pub Date : 2013-10-01 DOI: 10.1109/ICSIPA.2013.6707977
C. H. Tan, C. Lau
This paper focuses on the application of 3D X-ray Computed Tomography (CT) to precisely detect and confirm semiconductor internal physical defects without the need to decapsulate the sample. Equipped with advanced technologies and innovations, today's X-ray machines are capable of reconstructing two-dimensional (2D) slice images to form 3D images and videos in a much shorter time. With the introduction of 3D X-ray CT designed for the electronics field, failure mechanisms once only visible after destructive analysis can now be revealed in a non-destructive way. The technique not only saves cost but also shortens the turnaround time tremendously and allows the customer's response and relevant improvement actions to be taken more efficiently.
Citations: 0
Use an efficient neural network to improve the Arabic handwriting recognition
Pub Date : 2013-10-01 DOI: 10.1109/ICSIPA.2013.6708016
H. Hamad
Using an efficient neural network for recognition and segmentation will definitely improve the performance and accuracy of the results, in addition to reducing effort and cost. This paper investigates and compares the results of four different artificial neural network models. The same algorithm is applied to all of them using two major techniques: first, a neural segmentation technique; second, a new fusion equation. The neural techniques calculate the confidence values for each Prospective Segmentation Point (PSP) using the proposed classifiers in order to identify the better model, which enhances the overall recognition results for the handwritten scripts. The fusion equation evaluates each PSP by obtaining a fused value from three neural confidence values. CPU times and accuracies are also reported. The classifier experiments are compared with each other and with the literature.
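The fusion equation itself is not given in the abstract; the placeholder below only shows the shape of the computation, fusing the three neural confidence values of one Prospective Segmentation Point (PSP) with a weighted average and a hypothetical acceptance threshold.

```python
import numpy as np

def fuse_confidences(conf_values, weights=(1/3, 1/3, 1/3)) -> float:
    """Fuse the three neural confidence values computed for one PSP.
    A weighted average is used purely as a placeholder for the paper's fusion equation."""
    return float(np.dot(weights, conf_values))

def keep_psp(conf_values, threshold: float = 0.5) -> bool:
    """Accept the PSP only if its fused confidence clears a (hypothetical) threshold."""
    return fuse_confidences(conf_values) >= threshold

# Example: three classifiers rate the same candidate segmentation point.
print(keep_psp([0.9, 0.4, 0.7]))   # True, fused value is about 0.67
```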
Citations: 21
Watermarking schemes to secure the face database and test images in a biometric system
Pub Date : 2013-10-01 DOI: 10.1109/ICSIPA.2013.6707990
Himanshu Agarwal, B. Raman, P. Atrey
This paper attempts to solve the integrity issues of a compromised face biometric system using two watermarking schemes. Two new blind watermarking schemes, namely S1 and S2, are proposed to ensure the integrity of the training face database and of the test images, respectively. Scheme S1 is a fragile spatial-domain scheme, while scheme S2 works in the discrete cosine transform (DCT) domain and is robust to channel noise. The novelty of S1 lies in the fact that it is lossless and the ratio of watermark bits to the size of the host image is 2.67, while S2 has better robustness than existing blind watermarking schemes. The performance of both schemes is evaluated on a subset of the Indian face database, and the results show that both schemes verify integrity with very high accuracy without affecting the performance of the biometric system.
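The embedding rules of S1 and S2 are not detailed in the abstract. As a generic illustration of the DCT-domain embedding that S2 operates in, the sketch below hides one bit per 8x8 block by enforcing an order relation between two mid-frequency coefficients; the coefficient pair and the margin are assumptions, and this is not the authors' scheme.

```python
import numpy as np
from scipy.fft import dct, idct

def dct2(block: np.ndarray) -> np.ndarray:
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs: np.ndarray) -> np.ndarray:
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

# A mid-frequency coefficient pair; the choice is an assumption for illustration.
P1, P2 = (3, 2), (2, 3)

def embed_bit(block8x8: np.ndarray, bit: int, margin: float = 8.0) -> np.ndarray:
    """Hide one bit in an 8x8 block by ordering two mid-frequency DCT coefficients."""
    c = dct2(block8x8.astype(np.float64))
    lo, hi = sorted((c[P1], c[P2]))
    if bit == 1:
        c[P1], c[P2] = hi + margin / 2, lo - margin / 2   # force c[P1] > c[P2]
    else:
        c[P1], c[P2] = lo - margin / 2, hi + margin / 2   # force c[P1] < c[P2]
    return idct2(c)

def extract_bit(block8x8: np.ndarray) -> int:
    c = dct2(block8x8.astype(np.float64))
    return int(c[P1] > c[P2])

block = np.random.default_rng(2).integers(0, 256, (8, 8)).astype(np.float64)
assert extract_bit(embed_bit(block, 1)) == 1 and extract_bit(embed_bit(block, 0)) == 0
```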
Citations: 2
High security image steganography using IWT and graph theory
Pub Date : 2013-10-01 DOI: 10.1109/ICSIPA.2013.6708029
V. Thanikaiselvan, P. Arulmozhivarman
Steganography conceals secret information inside a cover medium. In practice, there are two types of steganography techniques: spatial-domain steganography and transform-domain steganography. The objectives to be considered in steganography methods are high capacity, imperceptibility, and robustness. In this paper, a color image steganography scheme in the transform domain is proposed. A reversible integer Haar wavelet transform is applied to the R, G, and B planes separately, and the data is embedded in a random manner. The random selection of wavelet coefficients is based on graph theory. The proposed system uses three different keys for embedding and extraction of the secret data: key1 (Subband Selection, SB) is used to select the wavelet subband for embedding, key2 (Selection of Coefficients, SC) is used to select the coefficients randomly, and key3 (Selection of Bit Length, SB) is used to select the number of bits to be embedded in the selected coefficients. This method shows good imperceptibility, high capacity, and robustness.
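The reversible integer Haar transform (the S-transform) can be written as a lifting scheme that maps integers to integers with perfect reconstruction, which is what makes lossless recovery of the cover image possible. A minimal one-level, one-dimensional sketch follows; the key-driven subband and coefficient selection and the embedding step itself are not shown.

```python
import numpy as np

def integer_haar_forward(signal: np.ndarray):
    """One level of the reversible integer Haar (S) transform on a 1-D signal of even length."""
    a = signal[0::2].astype(np.int64)
    b = signal[1::2].astype(np.int64)
    detail = a - b
    approx = b + detail // 2          # equals floor((a + b) / 2)
    return approx, detail

def integer_haar_inverse(approx: np.ndarray, detail: np.ndarray) -> np.ndarray:
    """Exact inverse: integer lifting gives perfect reconstruction, hence reversibility."""
    b = approx - detail // 2
    a = b + detail
    out = np.empty(a.size + b.size, dtype=np.int64)
    out[0::2], out[1::2] = a, b
    return out

# Round-trip check on one row of an 8-bit colour plane (random values as a stand-in):
row = np.random.default_rng(1).integers(0, 256, 16)
lo, hi = integer_haar_forward(row)
assert np.array_equal(integer_haar_inverse(lo, hi), row)
```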
Citations: 21