
TENCON 2008 - 2008 IEEE Region 10 Conference: latest publications

Fingerprint matching using transform features
Pub Date : 2008-11-01 DOI: 10.1109/TENCON.2008.4766494
M. Dale, M. Joshi
In fingerprint recognition, utilizing information beyond minutiae is helpful. We present a fingerprint matching scheme based on transform features and a comparison of the transforms used. The technique obviates the need to extract minutiae points to match fingerprint images. The proposed scheme uses the Discrete Cosine Transform (DCT), Fast Fourier Transform (FFT) and Discrete Wavelet Transform (DWT) to create a feature vector for each fingerprint. After the core point is located, a 64×64 region of the fingerprint image is cropped around it. The transform is applied to the cropped image without any pre-processing. The transform coefficients are arranged in a specific manner and used to obtain the feature vector in terms of standard deviations. Matching is based on the minimum Euclidean distance between two feature vectors. The database is formed by capturing 8 images per person with a 500 dpi optical scanner. The training images used to form the feature vector are 2, 4 or 6 per person. In the matching phase, either all or the remaining images are checked in identification mode to find the percentage recognition rate. A comparison of all three transforms is presented, and it is observed that the DCT and DFT give better results than the DWT.
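The DCT branch of this pipeline can be sketched as follows. This is a minimal illustration, not the authors' implementation: the abstract only says the coefficients are "arranged in a specific manner", so the diagonal banding and the number of bands below are assumptions, and core-point detection is taken as already done.

```python
import numpy as np

def dct2(block):
    """2-D DCT-II of a square patch via the orthonormal DCT matrix."""
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = 1.0 / np.sqrt(n)          # DC row of the orthonormal matrix
    return m @ block @ m.T

def feature_vector(patch, n_bands=8):
    """Standard deviation of DCT coefficients grouped into diagonal bands
    (an illustrative stand-in for the paper's coefficient arrangement)."""
    c = dct2(patch.astype(float))
    n = c.shape[0]
    diag = np.add.outer(np.arange(n), np.arange(n))   # zig-zag depth index
    edges = np.linspace(0, diag.max() + 1, n_bands + 1)
    return np.array([c[(diag >= lo) & (diag < hi)].std()
                     for lo, hi in zip(edges[:-1], edges[1:])])

def match(query, templates):
    """Identify by minimum Euclidean distance to the enrolled vectors."""
    d = [np.linalg.norm(query - t) for t in templates]
    return int(np.argmin(d))
```

In use, each enrolled person contributes 2, 4 or 6 `feature_vector` outputs from 64×64 crops, and `match` returns the closest enrolled template for a probe.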
Citations: 16
Model reference linear adaptive control of DC motor using fuzzy controller
Pub Date : 2008-11-01 DOI: 10.1109/TENCON.2008.4766484
A. Suresh kumar, M. Subba Rao, Y. Babu
This paper deals with conventional model reference adaptive control (MRAC) and replaces a conventional control technique such as PI control with an MRAC scheme using fuzzy linear adaptation. Conventional MRAC speed control systems do not achieve consistently satisfactory performance over a wide range of speed demand, especially at low speed, and there is no defined rule to guide designers in choosing the adaptation gains. The fuzzy-logic model reference adaptive control maintains a satisfactory response irrespective of the magnitude of the inputs, enhancing the performance of the DC drive compared with conventional MRAC. The drive system is evaluated over a set of test conditions with model reference fuzzy adaptive control, and its performance is tested for load disturbances along with the reference model. This work also compares the performance of the model reference fuzzy adaptive scheme against conventional MRAC. The work is carried out using MATLAB-SIMULINK.
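The core MRAC idea can be shown on a toy first-order plant with the classic MIT adaptation rule. This is a generic sketch, not the paper's DC-motor model: the plant, gains and command signal below are illustrative, and the fuzzy adaptation layer the paper adds on top of this rule is omitted.

```python
import numpy as np

# Plant G(s) = k_p/(s+1) with unknown gain k_p; reference model k_m/(s+1).
# The MIT rule adapts a feedforward gain theta so the closed loop tracks
# the model; theta should converge towards k_m/k_p.
dt, T = 0.01, 40.0
k_p, k_m, gamma = 2.0, 1.0, 0.5     # illustrative values
y = ym = theta = 0.0
err = []
for t in np.arange(0.0, T, dt):
    uc = np.sign(np.sin(0.2 * np.pi * t))   # square-wave speed command
    u = theta * uc                          # adjustable feedforward gain
    y  += dt * (-y  + k_p * u)              # plant state (Euler step)
    ym += dt * (-ym + k_m * uc)             # reference-model state
    e = y - ym
    theta += dt * (-gamma * e * ym)         # MIT adaptation rule
    err.append(e)
```

A fuzzy variant would replace the fixed `gamma` with a gain scheduled on `e` and its rate, which is where the paper's improvement over plain MRAC comes from.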
Citations: 19
Improving multi-objective clustering through support vector machine: Application to gene expression data
Pub Date : 2008-11-01 DOI: 10.1109/TENCON.2008.4766630
A. Mukhopadhyay, U. Maulik, S. Bandyopadhyay
Microarray technology facilitates monitoring the expression profiles of a large number of genes across different experimental conditions simultaneously. This article proposes a novel approach that combines a recently proposed multiobjective fuzzy clustering scheme with a support vector machine (SVM) to yield improved solutions. The multiobjective technique is first used to produce a set of non-dominated solutions. The non-dominated set is then used to find high-confidence points using a fuzzy voting technique. The SVM classifier is trained on these high-confidence points, and the remaining points are classified using the trained classifier. Results demonstrating the effectiveness of the proposed technique are provided for three real-life gene expression data sets. Moreover, a statistical significance test has been conducted to establish the significant superiority of the proposed technique.
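The voting-then-train step can be sketched as below. Assumptions: the label vectors from the non-dominated solutions are taken as already aligned (the paper handles this through fuzzy memberships), and a 1-nearest-neighbour rule stands in for the SVM so the snippet stays dependency-free; the agreement threshold `beta` is illustrative.

```python
import numpy as np

def high_confidence(label_sets, beta=0.8):
    """Majority vote across the non-dominated clustering solutions.

    label_sets : (S, N) array of aligned cluster labels, one row per
    solution.  Returns the voted labels and a mask of points where at
    least a fraction `beta` of the solutions agree."""
    labels = np.asarray(label_sets)
    voted = np.array([np.bincount(col).argmax() for col in labels.T])
    agree = (labels == voted).mean(axis=0)
    return voted, agree >= beta

def classify_rest(X, voted, mask):
    """1-NN stand-in for the trained SVM: label each low-confidence
    point by its nearest high-confidence neighbour."""
    y = voted.copy()
    hi = np.where(mask)[0]
    for i in np.where(~mask)[0]:
        d = np.linalg.norm(X[hi] - X[i], axis=1)
        y[i] = voted[hi[d.argmin()]]
    return y
```

With a real SVM, `classify_rest` would be replaced by fitting the classifier on `X[mask]` and predicting on `X[~mask]`.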
Citations: 1
Validation of PCA and LDA for SAR ATR
Pub Date : 2008-11-01 DOI: 10.1109/TENCON.2008.4766807
A. Mishra
Both principal component analysis (PCA) and linear discriminant analysis (LDA) have long been recognized as tools for feature extraction and data analysis. There have been reports in the open literature on the performance of both LDA and PCA as feature extractors in various classification and recognition problems. Many of these reports claim better performance with LDA than with PCA; however, the grounds of comparison have mostly been quite narrow. In the current paper, PCA- and LDA-based classifiers are evaluated on the problem of synthetic aperture radar (SAR) based automatic target recognition. The results show that, in terms of absolute performance, PCA outperforms LDA. Results of the PCA-based classifier are also found to be of higher confidence than those of the LDA-based classifier, as observed from an error-bar analysis of the classifiers. With a decreased amount of training data, the degradation in performance of the two classifiers is almost similar in nature. The current work concludes that LDA is not suitable for the radar-image-based target recognition task. This is in line with reports from some works in the open literature which claim that the success of LDA will depend on the type of data and whether exhaustive data is available during the training phase.
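A minimal PCA-subspace classifier of the kind evaluated here can be sketched as follows. The data, subspace dimension and nearest-neighbour decision rule are illustrative assumptions; the paper's SAR chips and classifier details are not given in the abstract.

```python
import numpy as np

def pca_fit(X, k):
    """Top-k principal directions of row-vector data X, via SVD."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k]

def pca_project(X, mu, W):
    """Project (centred) data onto the principal subspace."""
    return (X - mu) @ W.T

def nn_classify(train_z, train_y, test_z):
    """Nearest-neighbour decision in the PCA feature space."""
    out = []
    for z in test_z:
        d = np.linalg.norm(train_z - z, axis=1)
        out.append(train_y[d.argmin()])
    return np.array(out)
```

An LDA-based counterpart would replace `pca_fit` with directions maximizing between-class over within-class scatter, which is exactly where the training-data dependence noted above enters.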
Citations: 110
Improved performance irregular quasi-cyclic LDPC code design from BIBD’s using threshold minimization
Pub Date : 2008-11-01 DOI: 10.1109/TENCON.2008.4766569
S.R. Patil, S. Pathak
In this paper we propose a novel method for designing irregular quasi-cyclic low-density parity-check (LDPC) codes from Balanced Incomplete Block Designs (BIBDs). The design method hinges on finding the optimum degree profile by minimizing the decoding threshold using density evolution. The approach for designing short-block-length codes is robust for practical implementation and has been found to exhibit considerable performance gain, which may be attributed to a good degree profile for the codes. The design takes the code rate and code length as variable parameters. Simulation results for the designed codes are compared with regular and irregular quasi-cyclic BIBD-based LDPC codes, and a relatively higher performance is found in terms of BER.
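Density-evolution thresholds are easiest to see on the binary erasure channel, where the evolution collapses to a scalar recursion. The sketch below computes the threshold of a regular (3,6) ensemble; the paper's irregular case replaces the fixed exponents with the degree-profile polynomials λ(x) and ρ(x), and searches over profiles to minimize the threshold.

```python
def bec_converges(eps, dv, dc, iters=5000, tol=1e-9):
    """Density evolution for a regular (dv, dc) LDPC ensemble on the
    binary erasure channel:  x <- eps * (1 - (1-x)^(dc-1))^(dv-1).
    Returns True if the erasure probability is driven to zero."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True
    return False

def bec_threshold(dv, dc, steps=30):
    """Bisect for the largest channel erasure rate that still decodes."""
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if bec_converges(mid, dv, dc):
            lo = mid
        else:
            hi = mid
    return lo
```

For the (3,6) ensemble this bisection lands near the known threshold of about 0.4294.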
Citations: 1
Fuzzy techniques and hierarchical aggregation functions decision trees for the classification of epilepsy risk levels from EEG signals
Pub Date : 2008-11-01 DOI: 10.1109/TENCON.2008.4766545
R. Sukanesh, R. Harikumar
The purpose of this paper is to identify the practicability of hierarchical soft (max-min) decision trees in optimizing fuzzy outputs for the classification of epilepsy risk levels from EEG (electroencephalogram) signals. A fuzzy pre-classifier is used to classify the risk level of epilepsy based on parameters extracted from the patient's EEG signals, such as energy, variance, peaks, sharp and spike waves, duration, events and covariance. Four types of hierarchical soft decision trees (post-classifiers with max-min criteria) are applied to the classified data to identify the optimized risk level (singleton) that characterizes the patient's risk level. The efficacy of the above methods is compared using benchmark parameters such as the performance index (PI) and quality value (QV). A group of ten patients with known epilepsy findings is used for this study. A high PI of 95.88% at a QV of 22.43 was obtained with the hierarchical decision tree optimization, compared with a PI of 40% and a QV of 6.25 for the fuzzy classifier alone. The hierarchical soft decision tree (Hier & h min-max) method is identified as a good post-classifier.
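The max-min aggregation at the heart of such a post-classifier reduces to a simple rule: a risk level is supported only as strongly as its weakest feature, and the output singleton is the best-supported level. A minimal sketch, with membership values invented for illustration (the paper's hierarchical tree applies this pairwise over several stages):

```python
import numpy as np

# Fuzzy memberships of one EEG epoch in five risk levels (normal ...
# very high), one row per extracted parameter.  Values are made up.
mu = np.array([
    [0.1, 0.2, 0.7, 0.4, 0.0],   # energy
    [0.0, 0.3, 0.8, 0.5, 0.1],   # variance
    [0.2, 0.1, 0.6, 0.9, 0.2],   # peaks / sharp waves
])

# Max-min aggregation: min across features, then max across levels.
support = mu.min(axis=0)
risk_level = int(support.argmax())   # optimized singleton risk level
```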
Citations: 7
A low complexity demodulator for coordinate interleaved modulation schemes
Pub Date : 2008-11-01 DOI: 10.1109/TENCON.2008.4766432
G.A. Srinivas, D.N. Rohith, M. Z. Ali Khan
This paper discusses techniques to reduce the demodulation complexity at the receiver of systems that use coordinate interleaving (CI) with N²-QAM constellations. CI is a method for increasing the diversity order of any modulation scheme on fading channels, but it has a high demodulation complexity. The techniques described reduce the complexity required to calculate the maximum likelihood and log-likelihood ratio metrics for such systems from O(N²) and O(N² log N) respectively to O(N log N), without any loss in performance.
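Coordinate interleaving itself is a small operation; the sketch below shows the pairwise form, where the Q coordinates of consecutive symbols are swapped so the two coordinates of each symbol experience independent fades. This illustrates CI only, not the paper's low-complexity metric computation, and the pairing of adjacent symbols is an assumption (practical schemes may interleave over longer spans and use rotated constellations).

```python
import numpy as np

def interleave(symbols):
    """Coordinate interleaving over consecutive symbol pairs: swap the
    imaginary (Q) coordinates of each pair before transmission.
    Expects an even number of symbols."""
    s = np.asarray(symbols, dtype=complex).copy()
    a, b = s[0::2].copy(), s[1::2].copy()
    s[0::2] = a.real + 1j * b.imag
    s[1::2] = b.real + 1j * a.imag
    return s

def deinterleave(symbols):
    return interleave(symbols)   # the pairwise swap is its own inverse
```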
Citations: 0
An approach to reversible information hiding for images
Pub Date : 2008-11-01 DOI: 10.1109/TENCON.2008.4766672
S. Arjun, N. Rao
Information hiding (also known as data hiding) is the process of embedding a secret message into a cover medium for the purpose of covert communication, identification, security or copyright. It is used to conceal the content of messages by hiding the secret message in other digital media such as audio, images or video. For hiding secret message information in images, a large variety of techniques exists, most of which cause distortion in the original image even after secret message recovery, due either to bit replacement, quantization error or truncation. It is especially important for medical and military applications that the image obtained after data extraction be distortion-free. Reversible data hiding is a technique that embeds data into digital media such that the original media can be recovered without any distortion after the hidden message has been extracted. In this paper we propose a lossless method that embeds and extracts the data in the spatial domain, using only one statistical parameter to control the embedding and extraction. Two methods with different block orientations have been proposed. The novel method allows an increase of 41.57% in average embedding efficiency compared to existing methods, while maintaining the cover image degradation (PSNR, peak signal-to-noise ratio) at a comparable level. We also suggest multi-layered embedding to increase the embedding capacity further.
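To make "reversible" concrete, here is a classic histogram-shifting scheme of the kind this family of methods builds on. It is a representative sketch, not necessarily the paper's exact one-parameter method: pixels between the histogram's peak bin and an empty bin are shifted by one to free a slot, bits are written at the peak, and both steps are exactly undone at extraction.

```python
import numpy as np

def hs_embed(img, bits):
    """Histogram-shifting reversible embedding into an 8-bit image."""
    h = np.bincount(img.ravel(), minlength=256)
    p = int(h.argmax())                        # peak bin: capacity h[p]
    zeros = np.where(h == 0)[0]
    z = int(zeros[zeros > p][0])               # first empty bin above p
    assert len(bits) <= h[p], "payload exceeds capacity"
    flat = img.ravel().astype(np.int16)        # copy; avoids overflow
    flat[(flat > p) & (flat < z)] += 1         # shift (p, z) up by one
    idx = np.where(flat == p)[0][:len(bits)]
    flat[idx] += np.asarray(bits, dtype=np.int16)  # p -> p+1 encodes a 1
    return flat.reshape(img.shape), p, z

def hs_extract(marked, p, z, n):
    """Recover the n payload bits and the distortion-free image."""
    flat = marked.ravel().astype(np.int16)
    idx = np.where((flat == p) | (flat == p + 1))[0][:n]
    bits = (flat[idx] == p + 1).astype(int).tolist()
    flat[idx] = p                              # undo the embedding
    flat[(flat > p + 1) & (flat <= z)] -= 1    # undo the shift
    return bits, flat.reshape(marked.shape)
```

The peak value `p` plays the role of the single controlling statistic here; a real system must also transmit `p`, `z` and the payload length as side information.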
Citations: 5
Analysis of center of pressure signals using Empirical Mode Decomposition and Fourier-Bessel expansion
Pub Date : 2008-11-01 DOI: 10.1109/TENCON.2008.4766596
R. B. Pachori, D. Hewson, H. Snoussi, J. Duchêne
Center of pressure (COP) measurements are often used to identify balance problems. A new method for the analysis of COP signals using empirical mode decomposition (EMD) and Fourier-Bessel (FB) expansion is proposed in this paper. The EMD decomposes a COP signal into a finite set of band-limited signals termed intrinsic mode functions (IMFs), after which FB expansion is applied to each IMF to compute its mean frequency. The FB-expansion-based representation is suitable for non-stationary and very short duration signals. Seventeen subjects were tested under eyes-open (EO) and eyes-closed (EC) conditions, with different vibration frequencies applied in the EC condition to further perturb sensory information. The mean frequency calculated by FB expansion for the first three IMFs was able to distinguish between the EO and EC conditions (p < 0.05), while only the first IMF was able to detect a vibration effect.
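The FB mean-frequency step (applied to each IMF after EMD, which is omitted here as it is usually delegated to a library) can be sketched as below. The zero-order FB coefficient formula and the energy-weighted mean are one common convention and are assumptions in the details; a rectangular-rule integral stands in for the exact inner product.

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros

def fb_mean_frequency(x, fs, n_orders=100):
    """Mean frequency of a short signal from its zero-order
    Fourier-Bessel expansion.  The m-th basis J0(lam_m * t / a) is
    associated with frequency lam_m / (2*pi*a) Hz over a record of
    length a seconds; the mean is weighted by squared coefficients."""
    x = np.asarray(x, dtype=float)
    a = len(x) / fs                      # record length in seconds
    t = np.arange(len(x)) / fs
    lam = jn_zeros(0, n_orders)          # roots of J0
    f = lam / (2.0 * np.pi * a)          # per-order frequencies (Hz)
    c = np.array([
        (2.0 / (a**2 * j1(l)**2)) * np.sum(t * x * j0(l * t / a)) / fs
        for l in lam
    ])
    w = c**2
    return float((w * f).sum() / w.sum())
```

For a pure 10 Hz sinusoid over one second, the FB energy concentrates at orders whose associated frequency is near 10 Hz, so the weighted mean lands close to 10.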
Citations: 25
Improved representation of power system load dynamics using heuristic models
Pub Date : 2008-11-01 DOI: 10.1109/TENCON.2008.4766550
S. Halwi, D. G. Holmes, T. Czaszejko, M.B. Khoorasani
The importance of accurate load models in power system stability studies is well established in the literature as essential for precise investigation of power system transient events. The power industry presently uses composite load models in typical stability programs (e.g. LOADSYN and PSS/E). However, the parameters of composite load models need to be tuned for each type of disturbance based on an assumed load composition, and are often inadequate for matching the modeled dynamics of a power system disturbance event to actual measured results. A stochastic time-series technique in the form of an ARMAX mathematical model is presented in this paper as a novel alternative for dynamic load modeling. The model parameters are estimated using on-line measurement data for a number of disturbance events, collected from five substations in the Victorian electricity network in Australia. The performance of the model is then evaluated for other transient events and compared against the recorded responses for these events. The results achieved show that this heuristic-based model is robust and effective in predicting the dynamic response of a power system load across a range of events spanning various seasons and locations.
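The parameter-estimation idea can be sketched on the ARX special case, which admits a direct least-squares solution. This is a simplification under stated assumptions: the paper's ARMAX model also filters the noise through a moving-average polynomial, which requires iterative estimation, and the first-order system and coefficients below are synthetic.

```python
import numpy as np

# Simulate y[t] = a*y[t-1] + b*u[t-1] + e[t]  (first-order ARX system).
rng = np.random.default_rng(0)
a_true, b_true, n = 0.8, 0.5, 2000
u = rng.standard_normal(n)                 # measured input (e.g. voltage)
y = np.zeros(n)                            # measured output (e.g. load)
for t in range(1, n):
    y[t] = a_true * y[t - 1] + b_true * u[t - 1] \
           + 0.05 * rng.standard_normal()

# Least-squares fit from the regressor matrix [y[t-1], u[t-1]].
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
```

Once fitted to disturbance records, such a model is validated by simulating its response to other transient events and comparing against the measurements, as the paper does.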
{"title":"Improved representation of power system load dynamics using heuristic models","authors":"S. Halwi, D. G. Holmes, T. Czaszejko, M.B. Khoorasani","doi":"10.1109/TENCON.2008.4766550","DOIUrl":"https://doi.org/10.1109/TENCON.2008.4766550","url":null,"abstract":"The importance of having accurate load models in power system stability studies has been well established in the literature as being essential for precise power system transient event investigations. The power industry presently uses composite load models in typical stability programs (e.g. LOADSYN and PSS/E). However, the parameters of the composite load models need to be tuned for each type of disturbance based on an assumed load composition, and are often inadequate for matching the modeled dynamics of the power system disturbance event to actual measured results. A stochastic time series technique in the form of an ARMAX mathematical model is presented in this paper as a novel alternative for dynamic load modeling. The model parameters are estimated using on-line measurement data for a number of disturbance events, collected from five substations in the Victorian electricity network in Australia. The performance of the model is then evaluated for other transient events, and compared against the recorded response for these events. The results achieved show that this heuristic-based model is robust and effective in predicting the dynamic response of a power system load across a range of events spanning various seasons and locations.","PeriodicalId":22230,"journal":{"name":"TENCON 2008 - 2008 IEEE Region 10 Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2008-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76462983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
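The ARMAX identification described in the abstract above can be illustrated with its simplest relative: an ARX model fitted by ordinary least squares (the moving-average error term drops out; a full ARMAX estimator, e.g. statsmodels' `SARIMAX` with an `exog` regressor, would estimate it iteratively). This is a hypothetical sketch, not the paper's implementation; `fit_arx` and the variable names are illustrative. In the paper's setting, `y` would be a measured load quantity and `u` the disturbance input.

```python
import numpy as np

def fit_arx(y, u, na, nb):
    """Least-squares fit of an ARX model (ARMAX without the MA term):
        y[t] = a1*y[t-1] + ... + a_na*y[t-na]
             + b1*u[t-1] + ... + b_nb*u[t-nb] + e[t]
    Returns (a, b) coefficient arrays."""
    y, u = np.asarray(y, float), np.asarray(u, float)
    n0 = max(na, nb)
    # regressor matrix: past outputs then past inputs, newest first
    phi = np.array([np.r_[y[t - na:t][::-1], u[t - nb:t][::-1]]
                    for t in range(n0, len(y))])
    theta, *_ = np.linalg.lstsq(phi, y[n0:], rcond=None)
    return theta[:na], theta[na:]

# demo: simulate y[t] = 0.5*y[t-1] + 0.3*u[t-1] and recover the parameters
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.3 * u[t - 1]
a, b = fit_arx(y, u, na=1, nb=1)
```

With noiseless simulated data the least-squares solution recovers the true coefficients to machine precision; on measured disturbance records the residual `e[t]` carries the noise, which is where the MA part of a full ARMAX model earns its keep.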