Title: Adaptive expert system for audiologists
Authors: S. Rajkumar, S. Muttan, Balaji Pillai
Pub Date: 2011-03-24 | DOI: 10.1109/ICCSP.2011.5739325 | 2011 International Conference on Communications and Signal Processing

Presently, audiological investigations are carried out in specialty hospitals, where the test results are analyzed and diagnosed by audiologists. However, most people do not undergo regular hearing checks because of inconvenient timing and limited accessibility. The objective of this work is to design and develop a computerized audiometer that can be used effectively for mass screening of hearing impairment in place of the conventional audiometer. Such a system is user-friendly, cost-effective, and efficient in terms of analysis, data storage, and maintenance. The present work enables subjects to conduct hearing screening tests entirely on a multimedia computer, without any additional accessories. The audiological tests can be conducted regularly, facilitating early detection of hearing loss at home or at any place and time convenient to the user. First, the design requirements for a digital hearing aid are derived using the standard Real Ear Insertion Gain (REIG) formulae followed in Australia and European countries. Subsequently, based on the minimum hearing threshold estimated from the proposed setup, together with inputs from expert audiologists, the REIG formula could be made distinct for every language.
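The abstract cites the standard Australian REIG prescription formulae without reproducing them; the best-known example is the NAL-R rule. The sketch below is a hedged illustration only: the constants are quoted from standard audiology texts rather than from this paper, and `nal_r_reig` is a hypothetical helper name.

```python
# Hedged sketch of one standard REIG prescription, the NAL-R rule
# (constants from standard audiology references, not from this paper).
K = {250: -17.0, 500: -8.0, 1000: 1.0, 2000: -1.0, 4000: -2.0}  # dB corrections

def nal_r_reig(thresholds, freq):
    """Prescribed insertion gain (dB) at `freq`, given hearing
    thresholds as a dict {frequency_Hz: HTL_dB}."""
    h3fa = (thresholds[500] + thresholds[1000] + thresholds[2000]) / 3.0
    x = 0.15 * h3fa                          # overall gain term
    return x + 0.31 * thresholds[freq] + K[freq]

# Flat 60 dB HL loss: prescribed gain at 1 kHz
loss = {250: 60, 500: 60, 1000: 60, 2000: 60, 4000: 60}
print(round(nal_r_reig(loss, 1000), 1))      # 28.6
```

The paper's proposal is to tune such a formula per language using screening data and audiologist input; the rule above is only the generic starting point.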
Title: Improving bitrate in detail coefficient based audio watermarking using wavelet transformation
Authors: K. Datta, I. Sengupta
Pub Date: 2011-03-24 | DOI: 10.1109/ICCSP.2011.5739291 | 2011 International Conference on Communications and Signal Processing

With the development of communication technology over the past few decades, the use of multimedia content has increased steadily, and multimedia data protection has become an important issue that needs to be addressed. In this paper we propose a multimedia data protection technique for audio files. Various audio watermarking techniques exist in the literature; here we propose a wavelet-based watermarking technique in which embedding is performed on the third-level detail wavelet coefficients. The robustness of the scheme is found to be at an acceptable level with respect to some existing wavelet-domain techniques. The proposed method is essentially an improvement of the works reported in [1], [2], in which the bit rate of the watermark data is increased with only modest degradation in robustness. Subjective tests have been performed to evaluate the performance of the proposed method.
Title: Design of phase-differentiated dual-beam concentric ring array antenna using differential evolution algorithm
Authors: A. Chatterjee, G. K. Mahanti, P. Mahapatra
Pub Date: 2011-03-24 | DOI: 10.1109/ICCSP.2011.5739319 | 2011 International Conference on Communications and Signal Processing

In this paper, the authors propose a pattern synthesis method for generating a dual radiation pattern, a pencil/sector beam pair, from a concentric ring array of isotropic elements with desired sidelobe levels, by switching the radial phase distribution among elements that share a common amplitude distribution. The optimum set of radial amplitude and radial phase distributions for generating dual radiation patterns with low sidelobe levels is computed using the Differential Evolution algorithm. The array with the optimum radial amplitude and zero radial phase distribution generates a pencil beam, while the array with the same amplitude but the optimum radial phase generates a sector beam in the vertical plane.
Title: On the performance of Time Varying Step-size Least Mean Squares (TVS-LMS) adaptive filter for MA reduction from PPG signals
Authors: M. R. Ram, K. V. Madhav, E. Krishna, K. Nagarjuna Reddy, K. Reddy
Pub Date: 2011-03-24 | DOI: 10.1109/ICCSP.2011.5739353 | 2011 International Conference on Communications and Signal Processing

Clinical investigation of the hypoxic status of patients requires accurate information about heart rate and arterial blood oxygen saturation. Pulse oximeters are widely used for monitoring these parameters by recording the raw pulse oximeter signal, the photoplethysmogram (PPG). PPG signals acquired using PPG sensors are usually corrupted by motion artifacts (MA) due to voluntary or involuntary patient movement, and MA reduction has received much attention in the literature in recent years. In this paper, we present an efficient adaptive filtering technique based on the Time-Varying Step-size Least Mean Squares (TVS-LMS) algorithm for MA reduction. The novelty of the method is that the noise reference signal for adaptive filtering, representing the MA noise, is generated internally from the MA-corrupted PPG signal itself, instead of using additional hardware such as an accelerometer or a second source-detector pair. Convergence analysis, SNR calculations, and statistical analysis reveal that the proposed TVS-LMS method has a clear edge over constant step-size LMS (CS-LMS) adaptive filtering. Test results on PPG data recorded with different MAs demonstrate the efficacy of the proposed TVS-LMS algorithm in MA reduction, making it well suited for real-time pulse oximetry applications.
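The abstract does not give the paper's step-size update rule. As a hedged stand-in, the sketch below uses a Kwong-Johnston style variable step size (step grows with the squared error, decays otherwise) in an adaptive noise canceller; the signals and the reference are synthetic, not the paper's internally generated reference.

```python
import numpy as np

def vss_lms(d, x, order=8, mu0=0.02, alpha=0.97, gamma=1e-4,
            mu_min=1e-4, mu_max=0.05):
    """LMS noise canceller with a time-varying step size
    (Kwong-Johnston style rule; a stand-in for the paper's TVS-LMS).
    d: primary input (signal + artifact), x: artifact reference."""
    w = np.zeros(order)
    mu = mu0
    y = np.zeros(len(d))
    for n in range(order, len(d)):
        u = x[n - order + 1:n + 1][::-1]   # reference tap vector
        y[n] = w @ u                       # artifact estimate
        e = d[n] - y[n]                    # cleaned output sample
        mu = np.clip(alpha * mu + gamma * e * e, mu_min, mu_max)
        w += 2 * mu * e * u
    return d - y                           # artifact-reduced signal

rng = np.random.default_rng(0)
t = np.arange(4000)
clean = np.sin(2 * np.pi * t / 100)        # stand-in for the PPG pulse
ref = rng.standard_normal(4000)            # motion-artifact reference
noise = np.convolve(ref, [0.6, 0.3, 0.1])[:4000]   # artifact in the primary
out = vss_lms(clean + noise, ref)
err_after = np.mean((out[2000:] - clean[2000:]) ** 2)
print(err_after < np.mean(noise[2000:] ** 2))      # True: artifact suppressed
```

The intuition matches the paper's claim: a large step while the error is big gives fast convergence, and a small step afterwards gives low misadjustment, which a constant-step LMS cannot do simultaneously.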
Title: CPW-fed dual dipole antenna for WLAN communication
Authors: Hyeonjin Lee, Jinwoo Jang, Yeongseog Lim
Pub Date: 2011-03-24 | DOI: 10.1109/ICCSP.2011.5739360 | 2011 International Conference on Communications and Signal Processing

A compact printed dual dipole structure with a CPW feed for WLAN operation is proposed in this paper. The proposed antenna consists of dual dipole strips, with a modified monopole and strips modified by the ground plane. The antenna exhibits good radiation characteristics and effectively covers the 5 GHz (5.15–5.825 GHz) band. The measured peak gain is 2.8 dBi at 5.32 GHz. The effects of varying the monopole dimensions and the ground-plane size on the antenna performance have also been studied.
Title: Low complexity turbo equalization for mobile MIMO OFDM systems
Authors: V. Namboodiri, Hong Liu, P. Spasojevic
Pub Date: 2011-03-24 | DOI: 10.1109/ICCSP.2011.5739314 | 2011 International Conference on Communications and Signal Processing

Turbo equalization schemes based on the minimum mean square error (MMSE) criterion available in the literature for multiple-input multiple-output (MIMO) systems are computationally expensive, as they require a relatively large matrix inversion. In this paper, we propose a low-complexity turbo equalization scheme with successive interference cancellation for equalizing rapidly time-varying multipath channels in orthogonal frequency division multiplexing (OFDM) based MIMO receivers (TE-SIC-MIMO). TE-SIC-MIMO leverages soft feedback symbol estimates to remove the inter-carrier interference (ICI) and co-antenna interference (CAI) from the received data, turning the system matrix into an easily invertible form. Numerical simulation results show that TE-SIC-MIMO performs better than other schemes of comparable computational complexity at signal-to-noise ratios (SNR) of practical interest.
Title: Memory based architecture to implement simplified block LMS algorithm on FPGA
Authors: Jayashri R, Chitra H, Kusuma S, Pavithra A, Chandrakanthv
Pub Date: 2011-03-24 | DOI: 10.1109/ICCSP.2011.5739296 | 2011 International Conference on Communications and Signal Processing

The Least Mean Square (LMS) algorithm is undoubtedly the most widely used adaptive algorithm in diverse fields of engineering. Owing to its simplicity, it has been applied to numerous problems, including sidelobe reduction in matched filters, adaptive equalization, system identification, and adaptive noise cancellation. In this paper we present a simple architecture for implementing a variant of the block LMS algorithm in which both the weight update and the error calculation are performed block-wise. The algorithm performs considerably well, with a slight trade-off in learning-curve time and misadjustment, both of which can be tuned by varying the step size according to the requirement. The architecture can be further modified to implement variants of the LMS algorithm such as the sign-sign, sign-error, and sign-data algorithms. The performance of the simplified BLMS and LMS algorithms is compared in MATLAB simulations, and the hardware outputs from the FPGA are verified against the simulations.
Title: Recovering secret image in Visual Cryptography
Authors: John Blesswin, V Rema, J. Joselin
Pub Date: 2011-03-24 | DOI: 10.1109/ICCSP.2011.5739381 | 2011 International Conference on Communications and Signal Processing

Security has become an inseparable issue even in the field of space technology. Visual cryptography is the study of mathematical techniques in information security that allow visual information to be encrypted in such a way that decryption can be performed by the human visual system, without any complex cryptographic algorithms. The technique represents a secret image by several different shares of binary images; it is hard to perceive any clue about the secret image from an individual share, and the secret message is revealed only when some or all of the shares are aligned and stacked together. In this paper we provide an overview of emerging Visual Cryptography (VC) techniques used in the secure transfer of the thousands of satellite images that are stored in an image library and sent to Google for use on Google Earth and Google Maps. The related work is based on recovering the secret image using a binary logo that represents ownership of the host image and generates shadows via visual cryptography algorithms. An error-correction coding scheme is also used to create the appropriate shadow. The logo extracted from the half-toned host image identifies the type of cheating. Furthermore, when a shadow has been cheated, the logo recovers the reconstructed image using an image self-verification scheme based on the Rehash technique, which rehashes the halftone logo for effective self-verification of the reconstructed secret image without the need for a trusted third party (TTP).
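The share-and-stack mechanism described above can be illustrated with the classic (2, 2) visual cryptography scheme: each secret pixel expands to two subpixels per share, white pixels get identical patterns in both shares, black pixels get complementary ones, and physically stacking the transparencies corresponds to a logical OR. This toy sketch is the textbook scheme, not the paper's logo/rehash construction.

```python
import numpy as np

rng = np.random.default_rng(7)

def make_shares(secret):
    """(2, 2) visual cryptography. secret: binary array, 1 = black.
    Returns two noise-like shares, each 2x wider than the secret."""
    h, w = secret.shape
    s1 = np.zeros((h, 2 * w), dtype=int)
    s2 = np.zeros((h, 2 * w), dtype=int)
    for i in range(h):
        for j in range(w):
            pat = rng.integers(2)               # random subpixel pattern
            s1[i, 2 * j:2 * j + 2] = [pat, 1 - pat]
            if secret[i, j]:                    # black: complementary pattern
                s2[i, 2 * j:2 * j + 2] = [1 - pat, pat]
            else:                               # white: identical pattern
                s2[i, 2 * j:2 * j + 2] = [pat, 1 - pat]
    return s1, s2

secret = np.array([[1, 0], [0, 1]])
s1, s2 = make_shares(secret)
stacked = s1 | s2                               # physical stacking = OR
# black secret pixels: both subpixels black; white: one black, one white
recovered = (stacked.reshape(2, 2, 2).sum(axis=2) == 2).astype(int)
print((recovered == secret).all())              # True
```

Each share on its own is a uniformly random pattern, which is why no clue about the secret leaks from an individual share.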
Title: VLSI realization of a secure cryptosystem for image encryption and decryption
Authors: K. Deergha Rao, C. Gangadhar
Pub Date: 2011-03-24 | DOI: 10.1109/ICCSP.2011.5739382 | 2011 International Conference on Communications and Signal Processing

Chaotic maps have been widely used in data encryption; however, a number of chaos-based algorithms have been shown to be insecure. The application of the BB equation to encryption was reported in a recent article. In this paper, new algorithms based on chaos and the BB equation are reported for image encryption and decryption, and are illustrated through an example. For practical use, VLSI architectures of the proposed algorithms are designed and realized using Xilinx ISE software for hardware implementation. Further, the hardware complexity of the proposed algorithms is compared with that of the algorithm reported in [6].
Title: Dispersion compensation fiber using square hole PCF
Authors: N. Janrao, V. Janyani
Pub Date: 2011-03-24 | DOI: 10.1109/ICCSP.2011.5739354 | 2011 International Conference on Communications and Signal Processing

In recent years, photonic crystal fibers (PCFs) made of silica with air holes have provided a new approach to dispersion compensation. Dispersion-compensating fibers require a large negative dispersion D (ps/km-nm). The refractive index profile of a conventional step-index dispersion-compensation fiber can be changed to obtain high waveguide dispersion; however, this requires heavy doping, which gives rise to higher losses. PCFs are therefore increasingly used as an alternative approach to dispersion compensation. This paper proposes a new PCF geometry that uses square holes instead of circular holes, giving large negative dispersion without heavy doping. The dispersion-compensating photonic crystal fiber is designed with square holes of equivalent width w, related by d = 1.128w, where w is the width of the square holes in the PCF and d is the diameter of the circular holes in a more conventional PCF. This relation between the circular diameter and the square width of the air holes is newly introduced.