
Latest publications from the 2013 IEEE Recent Advances in Intelligent Computational Systems (RAICS)

A fragile watermarking technique for fingerprint protection
Pub Date : 2013-12-01 DOI: 10.1109/RAICS.2013.6745495
V. Subashini, S. Poornachandra, M. Ramakrishnan
This paper presents a fragile watermarking algorithm for biometric fingerprint images based on the run-length pattern of pixels. The algorithm works on the binary form of fingerprint images. The carrier image is converted into a one-dimensional vector and the run lengths of its pixels are computed. The run-length vector is then split into overlapping vector patterns of length three, and the middle element of each pattern is used for watermark embedding. The watermark may be text data or a binary image. The new scheme has a good data embedding capacity. The paper also discusses how to extract the embedded watermark.
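The run-length mechanics described above can be sketched as follows; the parity-based embedding rule and the function names are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def run_lengths(binary_vec):
    """Compute run lengths of a 1-D binary vector (lengths of consecutive equal pixels)."""
    change = np.flatnonzero(np.diff(binary_vec)) + 1
    bounds = np.concatenate(([0], change, [len(binary_vec)]))
    return np.diff(bounds)

def embed_bits(runs, bits):
    """Embed watermark bits in the middle run of overlapping length-3 patterns.

    For each overlapping triple (runs[i], runs[i+1], runs[i+2]) the parity
    of the middle run is forced to match one watermark bit by moving one
    pixel into the neighbouring run. This is a hypothetical embedding rule
    consistent with the abstract, not the paper's exact one; note it keeps
    the total pixel count unchanged.
    """
    runs = runs.copy()
    for i, bit in enumerate(bits):
        mid = i + 1                      # middle element of the i-th triple
        if mid + 1 >= len(runs):
            break                        # capacity exhausted
        if runs[mid] % 2 != bit and runs[mid] > 1:
            runs[mid] -= 1
            runs[mid + 1] += 1
    return runs
```

Extraction would then read back the parities of the middle runs in the same overlapping order.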
Citations: 2
Sparsity-based representation for categorical data
Pub Date : 2013-12-01 DOI: 10.1109/RAICS.2013.6745450
R. Menon, Shruthi S. Nair, K. Srindhya, M. D. Kaimal
Over the past few decades, many algorithms have evolved continuously in the area of machine learning. This is an era of big data, generated by applications in fields such as medicine, the World Wide Web, and e-learning networks. We therefore still need more efficient algorithms that are computationally cost-effective and thereby produce faster results. Sparse representation of data is a giant leap toward a solution for big-data analysis. The focus of our paper is on algorithms for sparsity-based representation of categorical data. For this, we adopt a concept from the image and signal processing domain called dictionary learning. We have implemented its sparse coding stage, which produces the sparse representation of data using Orthogonal Matching Pursuit (OMP) algorithms (both Batch and Cholesky based), and its dictionary update stage using the Singular Value Decomposition (SVD). We also use a preprocessing stage in which the categorical dataset is represented in a vector space model based on the TF-IDF weighting scheme. Our paper demonstrates how input data can be decomposed and approximated as a linear combination of a minimum number of elementary columns of a dictionary, which then forms a compact representation of the data. Classification or clustering algorithms can then easily be performed on the generated sparse coefficient matrix or on the dictionary. We also compare the dictionary learning algorithm under different OMP algorithms. The algorithms are analysed and the results are demonstrated by synthetic tests and on real data.
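The sparse coding stage the abstract describes can be illustrated with a minimal OMP implementation (a plain least-squares variant, not the Batch or Cholesky-optimised versions the paper uses):

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: greedily select k dictionary atoms.

    D : (m, n) dictionary with unit-norm columns
    x : (m,) signal to approximate
    k : target sparsity (k >= 1)
    Returns the (n,) sparse coefficient vector.
    """
    residual = x.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        # atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-solve least squares over the selected support (the "orthogonal" step)
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef
```

Stacking the coefficient vectors of all samples column-wise gives the sparse coded coefficient matrix on which classification or clustering can operate.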
Citations: 7
A proposed system for colorization of a gray scale facial image using patch matching technique
Pub Date : 2013-12-01 DOI: 10.1109/RAICS.2013.6745454
Santanu Halder, A. Hasnat, D. Bhattacharjee, M. Nasipuri
This paper proposes a novel approach to colorize a gray-scale facial image from a selected reference face using a patch matching technique. The colorization methodology is applied to facial images because of their extensive use in important fields such as archaeology, entertainment, and law enforcement. The experiment was conducted with 150 male and female facial images collected from different face databases, and the results were found satisfactory. The proposed methodology was implemented in Matlab 7.
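A brute-force sketch of the patch matching idea, assuming a sum-of-squared-differences match on luminance patches and direct copying of the best-matching reference colour (the paper's actual matching and colour-transfer rules may differ):

```python
import numpy as np

def colorize(gray, ref_gray, ref_color, patch=3):
    """Assign each target pixel the colour of the reference patch whose
    luminance neighbourhood matches best (brute force, illustrative only).

    gray, ref_gray : (H, W) float arrays; ref_color : (H, W, 3) floats.
    """
    h, w = gray.shape
    rh, rw = ref_gray.shape
    r = patch // 2
    out = np.zeros((h, w, 3))
    # collect interior reference patches and their centre colours
    ref_patches, ref_cols = [], []
    for i in range(r, rh - r):
        for j in range(r, rw - r):
            ref_patches.append(ref_gray[i - r:i + r + 1, j - r:j + r + 1].ravel())
            ref_cols.append(ref_color[i, j])
    ref_patches = np.array(ref_patches)
    ref_cols = np.array(ref_cols)
    gp = np.pad(gray, r, mode='edge')
    for i in range(h):
        for j in range(w):
            p = gp[i:i + patch, j:j + patch].ravel()
            best = np.argmin(((ref_patches - p) ** 2).sum(axis=1))
            out[i, j] = ref_cols[best]
    return out
```

A practical system would restrict the search to corresponding facial regions rather than scanning every reference patch.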
Citations: 2
Wireless powering of utility equipments in a smart home using magnetic resonance
Pub Date : 2013-12-01 DOI: 10.1109/RAICS.2013.6745477
H. Ramachandran, G. R. Bindu
Wireless power transfer systems are in vogue today and are widely used in diverse fields such as mobile charging systems, medical implants, and powering utility devices in smart homes. Magnetic resonance between source and load resonators has been demonstrated as a potential means of non-radiative power transfer. In this paper, the basic circuit for a single source and receiver geometry is discussed and extended to describe a system with a source resonator pair and a receiver resonator pair. Wireless power transfer to a light load is experimentally demonstrated using a source coil and a receiving coil wound from 21 SWG copper wire. The system is extended to a source coil powering a source resonator and a receiver resonator powering a load coil. The near electromagnetic field of the wireless power transfer system is used to ionize mercury vapour and light up a fluorescent tube without the aid of wires.
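The resonator circuits behind such a setup are governed by the standard LC relations; a small helper, purely illustrative (the component values in the example are made up, not taken from the paper):

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency of an LC resonator: f0 = 1 / (2*pi*sqrt(L*C)).

    L in henries, C in farads; returns hertz.
    """
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def quality_factor(L, C, R):
    """Unloaded Q of a series resonator: Q = sqrt(L/C) / R."""
    return math.sqrt(L / C) / R
```

Matching the source and receiver resonators to the same f0 with high Q is what makes efficient mid-range transfer possible.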
Citations: 13
Joint estimation of carrier frequency offsets and doubly selective channel for OFDMA uplink using line search
Pub Date : 2013-12-01 DOI: 10.1109/RAICS.2013.6745466
P. Muneer, C. P. Najlah, S. Sameer
In this paper, we propose a novel joint estimation technique for the carrier frequency offsets (CFOs) and time-varying frequency-selective (doubly selective) channels of highly mobile users in an orthogonal frequency division multiple access (OFDMA) uplink system. To avoid the identifiability problem in tracking doubly selective channels (DSCs), we incorporate a basis expansion model (BEM) that uses complex exponential (CE) basis functions. With the aid of the CE-BEM, we need to estimate only the basis expansion coefficients instead of the actual impulse responses of the channels. Our proposed scheme makes use of a line search method based on the minimum mean square error (MMSE) criterion. The complete set of parameters, which includes both the CFOs and the basis coefficients of all users, is updated in each iteration by minimizing the error between successive iterations. Simulation studies demonstrate that the proposed method has a faster convergence rate and achieves better estimation performance even at high mobile speeds.
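The iterative per-parameter update can be sketched with a generic golden-section line search driving coordinate-wise minimisation; the quadratic objective in the example is a toy stand-in for the paper's MMSE cost, not its actual signal model:

```python
import numpy as np

def golden_section(f, a, b, tol=1e-6):
    """1-D line search: minimise a unimodal f on [a, b] by golden-section."""
    phi = (np.sqrt(5) - 1) / 2
    c, d = b - phi * (b - a), a + phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

def coordinate_line_search(err, theta, bounds, sweeps=5):
    """Update each parameter in turn via a line search on the error surface,
    sweeping the full parameter set repeatedly (as the abstract's
    iteration over all users' CFOs and basis coefficients does)."""
    theta = list(theta)
    for _ in range(sweeps):
        for i, (lo, hi) in enumerate(bounds):
            theta[i] = golden_section(
                lambda t: err(theta[:i] + [t] + theta[i + 1:]), lo, hi)
    return theta
```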
Citations: 0
A combined approach to speaker authentication using claimant-specific acoustic universal structures
Pub Date : 2013-12-01 DOI: 10.1109/RAICS.2013.6745455
Waquar Ahmad, S. Satyavolu, R. Hegde, H. Karnick
In this paper, a novel approach to online cohort selection is proposed which combines the cohort sets obtained using acoustic universal structures (AUS) and speaker-specific cohort selection (SSCS). To obtain the cohort set using AUS, a confusion matrix is first generated from the distance between the structure of the test utterance and the AUS of each speaker. The confusion matrix is normalized using the iterative proportional fitting (IPF) method. The normalized confusion matrix, together with a simple distance metric, is used to select a similarity-based cohort set for each client speaker. A similar procedure is followed to obtain the cohort set using the SSCS method. The two cohort sets are combined into a single cohort set. Normalization statistics are then computed from this cohort set and used in the final scoring to authenticate the claimed speaker identity. Speaker verification experiments conducted on the NIST 2002 SRE, NIST 2004 SRE and YORO databases show reasonable improvement over conventional speaker verification techniques in terms of equal error rate and decision cost function values.
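The IPF normalisation step can be sketched as the classic alternating row/column rescaling; targeting uniform (doubly stochastic) marginals here is an assumption, since the abstract does not state the target marginals:

```python
import numpy as np

def ipf_normalize(M, iters=50, tol=1e-9):
    """Iterative proportional fitting: rescale a strictly positive matrix
    so every row and column sums to 1, by alternately normalising rows
    and columns (the Sinkhorn/IPF iteration)."""
    M = M.astype(float).copy()
    for _ in range(iters):
        M /= M.sum(axis=1, keepdims=True)   # fit row marginals
        M /= M.sum(axis=0, keepdims=True)   # fit column marginals
        if np.allclose(M.sum(axis=1), 1, atol=tol):
            break
    return M
```

Rows of the normalised confusion matrix can then be compared with a simple distance metric to rank candidate cohort speakers.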
Citations: 0
Automated control of webserver performance in a cloud environment
Pub Date : 2013-12-01 DOI: 10.1109/RAICS.2013.6745480
P. S. Saikrishna, R. Pasumarthy
Cloud computing is emerging as a new technology. From a distributed computing perspective, the cloud is similar to client-server services such as web-based services, and it uses virtualized resources for execution. The widespread use of internet technology has focused attention on quality of service, especially the response time experienced by the end user. We demonstrate the performance degradation of traditional web hosting, where time-varying user requests directly affect the response time. We also show how this issue can be addressed by a web server hosted on a cloud, using control algorithms for load balancing and elasticity control developed to keep the response time within an acceptable limit. Our experimental setup hosts a web server on an open-source Eucalyptus cloud platform. To evaluate the control system performance, we use the web-server benchmarking tool httperf, together with autobench to automate the benchmarking process.
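A minimal sketch of one elasticity control step, assuming a simple threshold rule with a deadband around the target response time; this is a hypothetical control law, as the abstract does not specify the authors' algorithm:

```python
def elasticity_step(instances, measured_rt, target_rt,
                    min_inst=1, max_inst=10, deadband=0.1):
    """One control step: scale the web-server pool out when the measured
    response time exceeds the target, and in when it is comfortably
    below it; the deadband avoids oscillating around the target."""
    if measured_rt > target_rt * (1 + deadband):
        instances += 1          # scale out
    elif measured_rt < target_rt * (1 - deadband) and instances > min_inst:
        instances -= 1          # scale in
    return max(min_inst, min(max_inst, instances))
```

In practice the measured response time would come from a monitoring probe (e.g. periodic httperf runs) and the instance count would be applied through the cloud platform's API.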
Citations: 1
PWLCM based image encryption through compressive sensing
Pub Date : 2013-12-01 DOI: 10.1109/RAICS.2013.6745445
Omkar Abhishek, S. N. George, P. Deepthi
In this paper, compressive sensing is combined with chaotic-key-based generation of the measurement matrix to provide an effective encryption algorithm for multimedia security. Block-based compressive sensing offers a better approach to image and video transmission by reducing memory requirements and complexity, whereas multiple-hypothesis prediction offers a competent way to improve the PSNR when reconstructing block-based compressively sensed images and videos. The measurement matrix Φ plays a crucial role both in compressive sensing and in the reconstruction process. Generating a secure measurement matrix using a piecewise linear chaotic map (PWLCM) as the seed, and then hiding the initial condition, system parameter and number of iterations of the PWLCM as the key, enables the sender to incorporate encryption along with compression in a single step. The scheme provides a high level of data security, reduced complexity, and compression with good reconstruction quality; in addition, it removes the burden of sending the measurement matrix along with the data, which further reduces the complexity of the overall compressive sensing framework.
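The chaotic-key construction can be sketched as follows; the bipolar quantisation and the default key values are illustrative assumptions layered on the standard PWLCM definition, not the paper's exact scheme:

```python
import numpy as np

def pwlcm(x, p):
    """One iteration of the piecewise linear chaotic map, with 0 < p < 0.5."""
    if x < p:
        return x / p
    if x <= 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)      # upper half mirrors the lower half

def measurement_matrix(m, n, x0=0.23, p=0.31, skip=100):
    """Fill an m-by-n measurement matrix with a bipolar (+1/-1) sequence
    driven by the PWLCM; (x0, p, skip) play the role of the secret key,
    so only holders of the key can regenerate the matrix."""
    x = x0
    for _ in range(skip):               # discard the transient
        x = pwlcm(x, p)
    vals = np.empty(m * n)
    for i in range(m * n):
        x = pwlcm(x, p)
        vals[i] = 1.0 if x >= 0.5 else -1.0
    return vals.reshape(m, n) / np.sqrt(m)
```

Because the matrix is regenerated from the key at the receiver, it never needs to be transmitted with the measurements.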
Citations: 11
Object detection and classification in surveillance system
Pub Date : 2013-12-01 DOI: 10.1109/RAICS.2013.6745491
S. Varma, M. Sreeraj
Object detection and tracking are indispensable in a surveillance system in the present scenario, as it is not possible for a person to continuously monitor video clips in real time. We propose an efficient and novel system for detecting moving objects in a surveillance video and predicting whether each object is human. To achieve faster object detection, we use an established background subtraction algorithm known as Mixture of Gaussians. A set of simple and efficient features is extracted and provided to a Support Vector Machine. The performance of the system is evaluated with different SVM kernels and with a K-Nearest-Neighbour classifier under its various distance metrics. The system is evaluated using statistical measurements, and the experiments resulted in an average F-measure of 86.925%, demonstrating the efficiency of the novel system.
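The background subtraction idea can be sketched with a single Gaussian per pixel; this is a deliberate simplification of the Mixture of Gaussians model the paper uses, with hypothetical parameter defaults:

```python
import numpy as np

def update_background(mean, var, frame, alpha=0.05, k=2.5):
    """One step of a simplified (single-Gaussian) background model.

    A pixel is flagged as foreground when it lies more than k standard
    deviations from the background mean; the background statistics are
    updated with an exponential running average, and only at background
    pixels so that foreground objects do not pollute the model.
    """
    dist = np.abs(frame - mean)
    foreground = dist > k * np.sqrt(var)
    bg = ~foreground
    mean = np.where(bg, (1 - alpha) * mean + alpha * frame, mean)
    var = np.where(bg, (1 - alpha) * var + alpha * (frame - mean) ** 2, var)
    return mean, var, foreground
```

The full Mixture of Gaussians model keeps several (mean, variance, weight) triples per pixel and classifies against the most probable ones, which handles multimodal backgrounds such as swaying trees.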
Citations: 15
NSB-TREE for an efficient multidimensional indexing in non-spatial databases
Pub Date : 2013-12-01 DOI: 10.1109/RAICS.2013.6745451
Sandhya Harikumar, A. Vinay
Query processing of high-dimensional data with huge volumes of records, especially in the non-spatial domain, requires an efficient multidimensional index. Present versions of DBMSs use single-dimension indexing at multiple levels, or indexing based on compound keys formed by concatenating the key values of the required attributes. The underlying structures, data models and query languages are not sufficient for retrieving information from data that is more complex in dimensions and size. This paper aims to design an efficient indexing structure for multidimensional data access in the non-spatial domain. The new indexing structure evolves from the R-tree, with certain preprocessing steps applied to the non-spatial data. The proposed indexing model, the NSB-Tree (Non-Spatial Block tree), is balanced, performs better than traditional B-trees, and has less complicated algorithms than the UB-tree. It has linear space complexity and logarithmic time complexity. The main motivation of the NSB-tree is multidimensional indexing that eliminates the need for multiple secondary indexes and the concatenation of multiple keys. Non-spatial data cannot be indexed with an R-tree in the available DBMSs. Our index structure replaces an arbitrary number of secondary indexes with a single multicolumn index structure. This is implemented, and a feasibility check is done, using the PostgreSQL database.
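For contrast with the UB-tree mentioned above, the classic way to linearise a multidimensional key is Morton (Z-order) bit interleaving; the sketch below illustrates that baseline idea, not the NSB-Tree itself:

```python
def z_order_key(coords, bits=8):
    """Interleave the bits of several non-negative integer attributes into
    one Z-order (Morton) key, so a one-dimensional index structure can
    serve multidimensional lookups with some locality preserved."""
    key = 0
    d = len(coords)
    for b in range(bits):
        for i, c in enumerate(coords):
            key |= ((c >> b) & 1) << (b * d + i)
    return key
```

Non-spatial attributes (e.g. categorical values) must first be mapped to integers in a preprocessing step before any such spatial-style index can be applied, which mirrors the preprocessing the paper performs before building its R-tree-derived structure.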
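The abstract contrasts concatenated-key indexes with a genuinely multidimensional structure. As a rough illustration of the query workload involved — not the paper's NSB-Tree algorithm; `KDNode`, `build`, and `range_query` are hypothetical names for a textbook k-d tree sketch — a balanced multidimensional tree answers axis-aligned range queries over non-spatial attributes without concatenating keys:

```python
# Minimal k-d tree sketch for multidimensional point indexing over
# non-spatial attributes. Illustrative only; this is NOT the NSB-Tree.

class KDNode:
    def __init__(self, point, record, left=None, right=None):
        self.point = point    # tuple of indexed attribute values
        self.record = record  # payload, e.g. a row identifier
        self.left = left
        self.right = right

def build(points, depth=0):
    """Build a balanced k-d tree by recursive median split."""
    if not points:
        return None
    k = len(points[0][0])
    axis = depth % k
    points = sorted(points, key=lambda pr: pr[0][axis])
    mid = len(points) // 2
    return KDNode(points[mid][0], points[mid][1],
                  build(points[:mid], depth + 1),
                  build(points[mid + 1:], depth + 1))

def range_query(node, lo, hi, depth=0, out=None):
    """Collect records whose point lies in the axis-aligned box [lo, hi]."""
    if out is None:
        out = []
    if node is None:
        return out
    k = len(lo)
    axis = depth % k
    if all(lo[i] <= node.point[i] <= hi[i] for i in range(k)):
        out.append(node.record)
    if node.point[axis] >= lo[axis]:   # left subtree may intersect the box
        range_query(node.left, lo, hi, depth + 1, out)
    if node.point[axis] <= hi[axis]:   # right subtree may intersect the box
        range_query(node.right, lo, hi, depth + 1, out)
    return out

# Example: rows keyed on two non-spatial attributes (age, salary).
rows = [((35, 60000), "r1"), ((42, 85000), "r2"),
        ((29, 48000), "r3"), ((51, 92000), "r4")]
tree = build(rows)
print(sorted(range_query(tree, (30, 50000), (45, 90000))))  # ['r1', 'r2']
```

A compound B-tree key on (age, salary) serves such a box query efficiently only when the leading attribute is constrained; a multidimensional split lets either attribute prune the search, which is the gap the NSB-Tree targets.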
Citations: 0
Journal:
2013 IEEE Recent Advances in Intelligent Computational Systems (RAICS)