Pub Date: 2013-03-01 | DOI: 10.1109/ISSP.2013.6526896
A. Gupta, H. Gupta
In speaker recognition, most of the computation originates from the likelihood computations between the feature vectors of the unknown speaker and the models in the database. In this paper, we concentrate on optimizing Mel-Frequency Cepstral Coefficients (MFCC) for feature extraction and Vector Quantization (VQ) for feature modeling. We reduce the number of feature vectors by pre-quantizing the test sequence prior to matching, and the number of speakers by ruling out unlikely speakers during the recognition process. The two important parameters, recognition rate and minimized average distance between the samples, depend on the codebook size and the number of cepstral coefficients. We find that this approach yields significant performance gains as the number of MFCCs and the codebook size are varied. The recognition rate reaches up to 89% and the distortion is reduced by up to 69%.
{"title":"Applications of MFCC and Vector Quantization in speaker recognition","authors":"A. Gupta, H. Gupta","doi":"10.1109/ISSP.2013.6526896","DOIUrl":"https://doi.org/10.1109/ISSP.2013.6526896","url":null,"abstract":"In speaker recognition, most of the computation originates from the likelihood computations between feature vectors of the unknown speaker and the models in the database. In this paper, we concentrate on optimizing Mel Frequency Cepstral Coefficient (MFCC) for feature extraction and Vector Quantization (VQ) for feature modeling. We reduce the number of feature vectors by pre-quantizing the test sequence prior to matching, and number of speakers by ruling out unlikely speakers during recognition process. The two important parameters, Recognition rate and minimized Average Distance between the samples, depends on the codebook size and the number of cepstral coefficients. We find, that this approach yields significant performance when the changes are made in the number of mfcc's and the codebook size. Recognition rate is found to reach upto 89% and the distortion reduced upto 69%.","PeriodicalId":354719,"journal":{"name":"2013 International Conference on Intelligent Systems and Signal Processing (ISSP)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125653133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
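The VQ matching stage described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: MFCC extraction is omitted, and the toy 2-D feature vectors, codebooks, and the `recognize` helper are hypothetical. Pre-quantization collapses the test sequence onto a small codebook so that only distinct representatives are matched against each speaker model.

```python
def dist2(a, b):
    # squared Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def quantize(vectors, codebook):
    # map each vector to the index of its nearest codeword
    return [min(range(len(codebook)), key=lambda i: dist2(v, codebook[i]))
            for v in vectors]

def avg_distortion(vectors, codebook):
    # average distance from each vector to its nearest codeword
    return sum(min(dist2(v, c) for c in codebook) for v in vectors) / len(vectors)

def recognize(test_vectors, speaker_codebooks, pre_codebook):
    # Pre-quantization: keep only the distinct codebook representatives
    # of the test sequence, shrinking the number of likelihood computations.
    reps = [pre_codebook[i] for i in set(quantize(test_vectors, pre_codebook))]
    scores = {spk: avg_distortion(reps, cb)
              for spk, cb in speaker_codebooks.items()}
    # the recognized speaker is the one with minimum average distortion
    return min(scores, key=scores.get), scores
```

A full system would also prune unlikely speakers early, as the paper proposes, by dropping codebooks whose partial distortion already exceeds the current best.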
Pub Date: 2013-03-01 | DOI: 10.1109/ISSP.2013.6526932
S. K. Patel, V. R. Rathod, J. Prajapati
The Internet has become a priceless tool that enables the corporate world to showcase its capabilities. As Web applications have gained importance on the Internet, security has become a major worry. Business data floating on the cloud is critical, which is why Web application security is rapidly becoming a growing concern for all enterprises. In this paper we describe what hacking is and its symptoms. In web development, the Content Management System (CMS) is gaining popularity because it makes editing and publishing easy even for a novice who does not know web programming. There are over a thousand open-source CMSs available in the market. When we talk about content management, two or three names such as Joomla, Drupal and WordPress come to mind. Although these are among the best CMSs in the market and their communities provide good baseline security, we still want to compare these CMSs and find out which one provides the best security. For the comparison we conducted two case studies. In case 1 we developed one common page in each CMS, hosted it, applied different web attacks such as SQLi, XSS and CSRF, and recorded the results. In case 2 we used Acunetix WVS Reporter v6.0 to measure the strength of security in each CMS. In addition, we also checked for broken links in all the listed CMSs.
{"title":"Comparative analysis of web security in open source content management system","authors":"S. K. Patel, V. R. Rathod, J. Prajapati","doi":"10.1109/ISSP.2013.6526932","DOIUrl":"https://doi.org/10.1109/ISSP.2013.6526932","url":null,"abstract":"Internet has become priceless tool that enables corporate world to show their capabilities. While Web applications have gained importance on the Internet, security is the only thing to worry. Business data is very critical that floats on the cloud and that's why Web Application Security is rapidly becoming a growing concern for all enterprises. In this paper we tried to show what is Hacking and its symptoms. In web development Content Management System (CMS) is gaining so much popularity as it uses to make easy editing and publishing process for novice even if he doesn't know web programming. There are over thousand of open source CMS available in the market. When we just talk about content management concept two or three names like Joomla, Drupal and WordPress strike in mind. As these are the one of the best CMSs in the market and their community provides nice basic security still we want to compare these CMS and want to know which CMS provides best security. To do the comparison we have done two case studies. In case 1 we have developed one common page in all CMS and host it then after we have applied different web attacks like SQLi, XSS, CSRF etc. and derived their hacking results. In case 2 we used Acunetix WVS Reporter v6.0 to find out the strength of security in different CMS. Apart from this we also try to find out Broken links in all listed CMSs.","PeriodicalId":354719,"journal":{"name":"2013 International Conference on Intelligent Systems and Signal Processing (ISSP)","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133500839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
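The SQLi attack category from case 1 can be illustrated with the classic tautology payload against an in-memory SQLite database. This sketch is not the authors' test harness; the table, payload, and `lookup_user` helper are hypothetical, and it simply contrasts string-spliced SQL with a parameterized query.

```python
import sqlite3

def lookup_user(conn, username, safe=True):
    """Look up a user row; `safe` toggles parameterized vs. concatenated SQL."""
    if safe:
        # Parameterized query: the driver treats the input as a literal value.
        return conn.execute(
            "SELECT name FROM users WHERE name = ?", (username,)).fetchall()
    # Vulnerable pattern: attacker-controlled text is spliced into the SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = '%s'" % username).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

payload = "nobody' OR '1'='1"                       # classic tautology payload
leaked = lookup_user(conn, payload, safe=False)     # dumps every row
blocked = lookup_user(conn, payload, safe=True)     # matches nothing
```

Against the vulnerable branch the payload rewrites the WHERE clause into a tautology and leaks the whole table, which is exactly the class of flaw a scanner such as Acunetix WVS probes for.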
Pub Date: 2013-03-01 | DOI: 10.1109/ISSP.2013.6526871
V. Prajapati, Apurva Shah, Prem Balani
The most challenging part of scheduling in real-time systems is to complete every job before its deadline. Two main categories of algorithms, static and dynamic, have tried to achieve this, but each fails in either the under-loaded or the over-loaded condition. Dynamic algorithms achieve optimal results when the system is under-loaded but fail to do so when it is over-loaded; static algorithms, conversely, do not achieve optimal performance in the under-loaded condition but perform well in the over-loaded condition. The idea behind our new scheduling algorithm is therefore to achieve optimal performance in the under-loaded condition and high performance in the over-loaded condition. To do so, we schedule jobs with the dynamic algorithm LLF (Least Laxity First) while the system is under-loaded, and switch to the static algorithm DM (Deadline Monotonic) when it becomes over-loaded. In this paper we propose this LLF_DM algorithm, which achieves optimal performance in the under-loaded condition and very high performance in the over-loaded condition.
{"title":"Design of new scheduling algorithm LLF_DM and its comparison with existing EDF, LLF, and DM algorithms for periodic tasks","authors":"V. Prajapati, Apurva Shah, Prem Balani","doi":"10.1109/ISSP.2013.6526871","DOIUrl":"https://doi.org/10.1109/ISSP.2013.6526871","url":null,"abstract":"The most challenging part of scheduling in real time systems is to achieve successful completion of a job before its deadline. Mainly two categories of algorithms i.e. static and dynamic tried to achieve this but both categories failed either in under-loaded condition or in over-loaded condition. Dynamic algorithms achieve optimum results in under-loaded condition but fail to achieve the same in over-loaded condition. On the other side static algorithms do not achieve optimum performance in underloaded condition but perform well in over-loaded condition. So our idea behind designing new scheduling algorithm is to achieve optimum performance in under-loaded condition and to achieve high performance in over-loaded condition. To achieve this we schedule jobs according to dynamic scheduling algorithm LLF (Least Laxity First) when system is under-loaded and when system becomes overloaded we schedule jobs according to static algorithm DM (Deadline Monotonic). In this paper we have proposed a LLF_DM algorithm which achieves optimum performance in under-loaded condition and achieves very high performance in over loaded condition.","PeriodicalId":354719,"journal":{"name":"2013 International Conference on Intelligent Systems and Signal Processing (ISSP)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114834714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
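The mode switch at the heart of LLF_DM can be sketched as a dispatch rule. The task dictionaries, the simple demand-based overload test, and the use of relative deadlines for DM priority are illustrative assumptions, not the paper's exact task model.

```python
def laxity(job, now):
    # slack left before the job can no longer meet its absolute deadline
    return job["deadline"] - now - job["remaining"]

def is_overloaded(ready, now):
    # crude overload test: total remaining demand vs. time to the latest deadline
    horizon = max(j["deadline"] for j in ready) - now
    return sum(j["remaining"] for j in ready) > horizon

def pick_job(ready, now, overloaded):
    """LLF_DM dispatch: LLF while under-loaded, Deadline Monotonic when overloaded."""
    if not ready:
        return None
    if overloaded:
        # DM: fixed priority, shortest relative deadline first
        return min(ready, key=lambda j: j["rel_deadline"])
    # LLF: dynamic priority, least laxity first
    return min(ready, key=lambda j: laxity(j, now))
```

Note how the two policies can disagree: a job with a long relative deadline but little slack wins under LLF, while DM would pick the job with the shortest relative deadline regardless of its current slack.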
Pub Date: 2013-03-01 | DOI: 10.1109/ISSP.2013.6526917
K. Lata, M. Kumar
This paper presents the design of an all-digital phase-locked loop (ADPLL) in Verilog HDL and its implementation on an FPGA. The Xilinx ISE 10.1 simulator is used to simulate the Verilog code. The paper details the basic blocks of an ADPLL, describes its implementation, and discusses the simulation results obtained with the Xilinx tools. It also presents the FPGA implementation of the ADPLL design on a Xilinx Virtex-5 XC5VLX110T chip and its results. The ADPLL is designed for a center frequency of 200 kHz; its operational frequency range, which is the lock range of the design, is 189 Hz to 215 kHz.
{"title":"ADPLL design and implementation on FPGA","authors":"K. Lata, M. Kumar","doi":"10.1109/ISSP.2013.6526917","DOIUrl":"https://doi.org/10.1109/ISSP.2013.6526917","url":null,"abstract":"This paper presents the ADPLL design using Verilog and its implementation on FPGA. ADPLL is designed using Verilog HDL. Xilinx ISE 10.1 Simulator is used for simulating Verilog Code. This paper gives details of the basic blocks of an ADPLL. In this paper, implementation of ADPLL is described in detail. Its simulation results using Xilinx are also discussed. It also presents the FPGA implementation of ADPLL design on Xilinx vertex5 xc5vlx110t chip and its results. The ADPLL is designed of 200 kHz central frequency. The operational frequency range of ADPLL is 189 Hz to 215 kHz, which is lock range of the design.","PeriodicalId":354719,"journal":{"name":"2013 International Conference on Intelligent Systems and Signal Processing (ISSP)","volume":"145 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125419055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
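As a behavioral illustration of the locking behavior such a design targets, the loop can be modeled in software. This is a generic second-order digital PLL sketch, not the paper's Verilog architecture: the sample rate, loop gains `kp`/`ki`, and the wrap-around phase detector are all illustrative assumptions.

```python
def adpll(f_ref, f0, fs=1_000_000, steps=20_000, kp=0.1, ki=2000.0):
    """Model a digital PLL pulling its DCO from f0 (Hz) onto f_ref (Hz).

    Phases are kept in cycles; kp nudges the DCO phase, ki its frequency.
    Returns the DCO frequency after `steps` update cycles at rate fs.
    """
    ref_phase = dco_phase = 0.0
    f_dco = float(f0)
    for _ in range(steps):
        ref_phase = (ref_phase + f_ref / fs) % 1.0
        # phase detector with wrap-around to [-0.5, 0.5) cycles
        err = ref_phase - dco_phase
        if err > 0.5:
            err -= 1.0
        elif err < -0.5:
            err += 1.0
        # proportional branch corrects the phase, integral branch the frequency
        dco_phase = (dco_phase + f_dco / fs + kp * err) % 1.0
        f_dco += ki * err
    return f_dco
```

Started 10 kHz below the 200 kHz center frequency, the model converges onto the reference, mimicking acquisition inside the lock range.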
Pub Date: 2013-03-01 | DOI: 10.1109/ISSP.2013.6526929
Manish Patel, A. Aggarwal
The Wireless Sensor Network (WSN) is emerging as a prevailing technology owing to its wide range of applications in military and civilian domains. These networks are prone to security attacks, since once deployed they are unattended and unprotected. Inherent constraints such as limited battery power and low memory make conventional security solutions infeasible for sensor networks. The many attacks on these networks can be classified into routing attacks and data-traffic attacks. Among the attacks on sensor nodes are the wormhole, jamming, selective-forwarding, sinkhole and Sybil attacks. In this paper, we discuss these attacks and some of the mitigation schemes that defend against them.
{"title":"Security attacks in wireless sensor networks: A survey","authors":"Manish Patel, A. Aggarwal","doi":"10.1109/ISSP.2013.6526929","DOIUrl":"https://doi.org/10.1109/ISSP.2013.6526929","url":null,"abstract":"Wireless Sensor Network (WSN) is being emerged as a prevailing technology in future due to its wide range of applications in military and civilian domains. These networks are easily prone to security attacks, since once deployed these networks are unattended and unprotected. Some of the inherent features like limited battery and low memory make sensor networks infeasible to use conventional security solutions. There are lot of attacks on these networks which can be classified as routing attacks and data traffic attacks. Some of the data attacks in sensor nodes are wormhole, jamming, selective forwarding, sinkhole and Sybil attack. In this paper, we discussed about all these attacks and some of the mitigation schemes to defend these attacks.","PeriodicalId":354719,"journal":{"name":"2013 International Conference on Intelligent Systems and Signal Processing (ISSP)","volume":"249 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124005274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-03-01 | DOI: 10.1109/ISSP.2013.6526927
Mr. D. S. Jadhav
There is a common perception that Internet crime is a sophisticated category of crime that has not yet broken into a developed Indian state like Maharashtra. The open environment of the Internet, in which anybody can distribute anything at any time, poses a serious security hazard for every state in India. Unfortunately, there are no official reports about this category of crime for Maharashtra. Could this indicate that it does not exist there? Here we carry out an independent investigation to find out whether cyber crimes have affected the public in Maharashtra and, if so, where they are reported.
{"title":"Virtual offense in Maharashtra (India): Legend and truth?","authors":"Mr. D. S. Jadhav","doi":"10.1109/ISSP.2013.6526927","DOIUrl":"https://doi.org/10.1109/ISSP.2013.6526927","url":null,"abstract":"There is a common reaction that Internet crime is an highly developed category of crime that has not yet break into developed state in India like Maharashtra. The cheerful environment of the Internet in which everybody distributes whatever thing at anytime poses a serious defense hazard for every state in India. Alas, there are no official reports about this category of crime for Maharashtra. Possibly will this indicate that it does not exist there? Here we carry out an independent investigate to find out whether cyber crimes have affected public in Maharashtra and if so, to find out where they are reported.","PeriodicalId":354719,"journal":{"name":"2013 International Conference on Intelligent Systems and Signal Processing (ISSP)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122492370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-03-01 | DOI: 10.1109/ISSP.2013.6526930
Smita Naval, Y. Meena, V. Laxmi, P. Vinod
Malware poses a big threat to computer systems nowadays. Malware authors often use encryption or compression to conceal the data and code of their malicious executables. These methods, which transform some or all of the original bytes into a series of random-looking bytes, appear in 80 to 90% of malware samples. This fact creates special challenges for anti-virus scanners that use static and dynamic methods to analyze large malware collections. In this paper we propose a method to identify malware executables by reading only the initial 2500-byte pattern of each sample. Our method reduces overall scanner execution time by considering 2500 bytes instead of the whole file. Experimental results are evaluated using different classification algorithms (Random Forest, AdaBoost, IBk, J48, Naïve Bayes) combined with a feature selection method.
{"title":"Relevant hex patterns for malcode detection","authors":"Smita Naval, Y. Meena, V. Laxmi, P. Vinod","doi":"10.1109/ISSP.2013.6526930","DOIUrl":"https://doi.org/10.1109/ISSP.2013.6526930","url":null,"abstract":"Malware poses a big threat to computer systems now a days. Malware authors often use encryption/compression methods to conceal their malicious executables data and code. These methods that transform some or all of the original bytes into a series of random looking data bytes appear in 80 to 90% of malware samples. This fact creates special challenges for anti-virus scanners who use static and dynamic methods to analyze large malware collections. In this paper we propose a method to identify malware executables by reading initial 2500 byte patterns of the sample. Our method reduces overall scanner execution time by considering 2500 bytes instead of whole file. Experimental results are evaluated using different classification algorithms (Random Forest, Ada-Boost, IBK, J48, Naïve-Bayes) followed by a feature selection method.","PeriodicalId":354719,"journal":{"name":"2013 International Conference on Intelligent Systems and Signal Processing (ISSP)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131162931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
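The first-2500-bytes idea can be illustrated by turning a file's head into a byte-frequency feature vector and classifying it against reference profiles. The nearest-centroid rule and the "benign"/"packed" labels here are toy stand-ins for the trained classifiers (Random Forest, J48, etc.) the paper actually evaluates.

```python
def byte_histogram(data, n=2500):
    """Normalized frequency of each byte value over the first n bytes."""
    head = data[:n]
    counts = [0] * 256
    for b in head:
        counts[b] += 1
    total = len(head) or 1
    return [c / total for c in counts]

def nearest_centroid(sample, centroids):
    """Return the label whose centroid histogram is closest to the sample's."""
    feats = byte_histogram(sample)

    def d2(c):
        return sum((x - y) ** 2 for x, y in zip(feats, c))

    return min(centroids, key=lambda label: d2(centroids[label]))
```

Encrypted or packed payloads tend toward a near-uniform byte distribution in exactly these leading bytes, which is why a short prefix can already separate such samples from plain executables.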
Pub Date: 2013-03-01 | DOI: 10.1109/ISSP.2013.6526933
Ashish Christian, H. Soni
The rapid advance of hardware technology has enabled the development of small, inexpensive, static and dynamic sensor nodes capable of sensing, computation and wireless communication over a wide frequency range with various modulation techniques. This revolutionizes the deployment of wireless sensor networks at considerable scale, for tasks such as monitoring a geographic area and collecting parameters. However, the limited energy budget presents a major challenge to making this vision a reality. In this paper we explain how wireless sensor networks are formed and how the nodes within them act as interdependent communicating nodes. LEACH (Low-Energy Adaptive Clustering Hierarchy) [2] is one of the popular cluster-based structures widely proposed for wireless sensor networks. We propose the iLeach protocol (Improved Low-Energy Adaptive Clustering Hierarchy) and compare it with LEACH. Sensor lifetime is evaluated in terms of FND (First Node Dies) and HND (Half of the Nodes Die) [11], which reflect the reliability and power efficiency of a wireless sensor network.
{"title":"Lifetime prolonging in LEACH protocol for wireless sensor networks","authors":"Ashish Christian, H. Soni","doi":"10.1109/ISSP.2013.6526933","DOIUrl":"https://doi.org/10.1109/ISSP.2013.6526933","url":null,"abstract":"The expeditious transformation of hardware technology has empowered the reinforcement of small, inexpensive, static and dynamic powerful sensor nodes, which are capable of sensing, computation and wireless communication on wide range of frequency with various modulation technique. This revolutionizes the implementation of wireless sensor network for considerable dimensions like overseeing some geographic area and parameter collection task. However, a limited energy constraint presents a major challenge such vision to become reality. In this paper we have attempted to explain how the wireless sensor networks are formed and how the various nodes present in those networks act as interdependent communicating nodes. LEACH (Low Energy Adaptive Clustering Hierarchy) [2] is one of the popular cluster-based structures, which has been widely proposed in wireless sensor networks. We are proposing the iLeach protocol (Improved Low-Energy Adaptive Clustering Hierarchy) and comparing to LEACH protocol. Lifetime of sensors is evaluated in terms of FND (First Node Dies) and HND (Half of the Nodes Die) [11] which will take care for the reliability and power efficiency of a wireless sensor network.","PeriodicalId":354719,"journal":{"name":"2013 International Conference on Intelligent Systems and Signal Processing (ISSP)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130755657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
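LEACH's round-based cluster-head election, which iLeach builds on, uses the standard threshold T(n) = p / (1 - p * (r mod 1/p)) from the LEACH literature. The sketch below implements only this baseline rotation (iLeach's specific modification is not reproduced), and the node dictionaries are an illustrative model that ignores energy and distance.

```python
import random

def leach_threshold(p, rnd):
    """LEACH cluster-head threshold T(n) for round `rnd`, CH fraction p."""
    return p / (1 - p * (rnd % int(1 / p)))

def elect_cluster_heads(nodes, p, rnd, rng=random.random):
    """Nodes not yet CH in the current epoch volunteer with probability T(n)."""
    if rnd % int(1 / p) == 0:
        for n in nodes:
            n["was_ch"] = False          # new epoch: everyone eligible again
    t = leach_threshold(p, rnd)
    heads = [n for n in nodes if not n["was_ch"] and rng() < t]
    for n in heads:
        n["was_ch"] = True               # barred until the epoch resets
    return heads
```

The threshold rises to 1.0 in the last round of each 1/p-round epoch, guaranteeing that every node that has not yet served becomes a cluster head, which spreads the energy-hungry CH role evenly and delays FND/HND.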
Pub Date: 2013-03-01 | DOI: 10.1109/ISSP.2013.6526875
P. Nagarju, R. Naskar, R. Chakraborty
Reversible watermarking constitutes a class of fragile digital watermarking techniques that find application in authentication of medical and military imagery. Reversible watermarking techniques ensure that after watermark extraction, the original cover image can be recovered from the watermarked image pixel-by-pixel. In this paper, we propose a novel reversible watermarking technique as an improved modification of the existing histogram bin shifting technique. We develop an optimal selection scheme for the “embedding point” (grayscale value of the pixels hosting the watermark), and take advantage of multiple zero frequency pixel values (if available) in the given image to embed the watermark. Experimental results for a set of images show that the adoption of these techniques improves the peak signal-to-noise ratio (PSNR) of the watermarked image compared to previously proposed histogram bin shifting techniques.
{"title":"Improved histogram bin shifting based reversible watermarking","authors":"P. Nagarju, R. Naskar, R. Chakraborty","doi":"10.1109/ISSP.2013.6526875","DOIUrl":"https://doi.org/10.1109/ISSP.2013.6526875","url":null,"abstract":"Reversible w atermarking constitutes a class of fragile digital watermarking techniques that find application in authentication of medical and military imagery. Reversible watermarking techniques ensure that after watermark extraction, the original cover image can be recovered from the watermarked image pixel-by-pixel. In this paper, we propose a novel reversible watermarking technique as an improved modification of the existing histogram bin shifting technique. We develop an optimal selection scheme for the “embedding point” (grayscale value of the pixels hosting the watermark), and take advantage of multiple zero frequency pixel values (if available) in the given image to embed the watermark. Experimental results for a set of images show that the adoption of these techniques improves the peak signal-to-noise ratio (PSNR) of the watermarked image compared to previously proposed histogram bin shifting techniques.","PeriodicalId":354719,"journal":{"name":"2013 International Conference on Intelligent Systems and Signal Processing (ISSP)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125063678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
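The baseline histogram bin shifting scheme that the paper improves on can be sketched on a flat pixel list. This is a minimal illustration, not the paper's optimized method: it assumes the peak bin and an empty zero bin (with zero > peak) are supplied, and that the payload length equals the peak-bin count.

```python
def embed(pixels, bits, peak, zero):
    """Histogram bin shifting embed (assumes zero > peak and hist[zero] == 0)."""
    out, i = [], 0
    for v in pixels:
        if peak < v < zero:
            out.append(v + 1)            # shift to free the bin next to the peak
        elif v == peak and i < len(bits):
            out.append(peak + bits[i])   # bit 0 -> stays at peak, bit 1 -> peak + 1
            i += 1
        else:
            out.append(v)
    return out

def extract(marked, peak, zero):
    """Recover the payload bits and the original pixel values exactly."""
    bits, orig = [], []
    for v in marked:
        if v == peak or v == peak + 1:
            bits.append(v - peak)        # payload pixel
            orig.append(peak)
        elif peak + 1 < v <= zero:
            orig.append(v - 1)           # undo the histogram shift
        else:
            orig.append(v)
    return bits, orig
```

Since every shifted pixel moves by exactly one gray level and every payload pixel is distinguishable from shifted ones, extraction restores the cover pixel-by-pixel; the paper's contribution is choosing the embedding point optimally and exploiting multiple zero-frequency bins.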
Pub Date: 2013-03-01 | DOI: 10.1109/ISSP.2013.6526878
P. M. Pradnya, D. Sachin
Image fusion is the process of combining two or more images into a single image that retains the important features of each. Fusion is an important technique in many disparate fields such as remote sensing, robotics and medical applications. The result of image fusion is a single image that is more suitable for human and machine perception or for further image-processing tasks. An image fusion algorithm based on the wavelet transform is proposed to improve the geometric resolution of the images: the two input images are first decomposed into sub-images, fusion is performed on these sub-images under certain criteria, and the sub-images are finally reconstructed into a result image with plentiful information. In this paper three different wavelet-transform-based image fusion methods are implemented, their results are compared, and the best method is identified.
{"title":"Wavelet based image fusion techniques","authors":"P. M. Pradnya, D. Sachin","doi":"10.1109/ISSP.2013.6526878","DOIUrl":"https://doi.org/10.1109/ISSP.2013.6526878","url":null,"abstract":"The fusion of images is the process of combining two or more images into a single image retaining important features from each. Fusion is an important technique within many disparate fields such as remote sensing, robotics and medical applications. The result of image fusion is a single image which is more suitable for human and machine perception or further image-processing tasks. The image fusion algorithm based on wavelet transform is proposed to prove the geometric resolution of the images, in which two images to be processed are firstly decomposed into sub images and then the information is performed using these images under the certain criteria and finally these sub images are reconstructed into result image with plentiful information. In this paper three different image fusion methods based wavelet transform are implemented. And the results are compared and best method is found.","PeriodicalId":354719,"journal":{"name":"2013 International Conference on Intelligent Systems and Signal Processing (ISSP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127117335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
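The decompose–fuse–reconstruct pipeline described above can be sketched with a single-level 1-D Haar transform; 2-D image fusion applies the same idea separably over rows and columns. The averaging-of-approximations and maximum-absolute-detail rules below are one common choice of fusion criterion, not necessarily the ones the paper compares.

```python
def haar(signal):
    """Single-level Haar DWT: (approximation, detail) coefficient lists."""
    a = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    d = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    return a, d

def ihaar(a, d):
    """Inverse single-level Haar DWT."""
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def fuse(x, y):
    """Fuse two signals: average the approximations, keep the stronger detail."""
    ax, dx = haar(x)
    ay, dy = haar(y)
    a = [(p + q) / 2 for p, q in zip(ax, ay)]
    d = [p if abs(p) >= abs(q) else q for p, q in zip(dx, dy)]
    return ihaar(a, d)
```

Keeping the larger-magnitude detail coefficient preserves edges and texture from whichever input is sharper at that location, which is why wavelet-domain fusion outperforms simple pixel averaging.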