Title: Multiple face detection in color images
Authors: T. Archana, T. Venugopal, M. P. Kumar
Pub Date: 2015-03-12 | DOI: 10.1109/SPACES.2015.7058220
Published in: 2015 International Conference on Signal Processing and Communication Engineering Systems
The face is the primary index of identity, and automated face detection is an active field of research. Face detection in digital images has attracted much interest over the last two decades and has applications in many fields. Automating the process requires several image-processing methods. This paper presents a new face detection approach based on color segmentation and morphological operations. The algorithm uses color-plane extraction, background subtraction, thresholding, morphological operations (erosion and dilation), and filtering to avoid false detections. Particle analysis is then performed so that only the face region, and not other parts of the body, is detected. The color planes are extracted using a vision module, and the RGB color space is converted into a more suitable color space such as HSV or YCbCr. The algorithm can detect both single and multiple persons in an image. Experimental results show that it detects human faces with an accuracy of up to 93%.
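The color-based segmentation and clean-up stages described in this abstract can be sketched roughly as follows. The YCbCr skin ranges (common textbook values) and all function names are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: threshold the Cb/Cr chroma planes into a binary skin mask,
# then clean it with binary erosion and dilation on a 3x3 neighbourhood.

def skin_mask(cb, cr, cb_range=(77, 127), cr_range=(133, 173)):
    """Per-pixel skin test in the YCbCr chroma planes (illustrative ranges)."""
    h, w = len(cb), len(cb[0])
    return [[1 if cb_range[0] <= cb[y][x] <= cb_range[1]
                  and cr_range[0] <= cr[y][x] <= cr_range[1] else 0
             for x in range(w)] for y in range(h)]

def _morph(mask, hit):
    """Apply a 3x3 neighbourhood rule `hit` at every pixel (border-clipped)."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [mask[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if 0 <= y + dy < h and 0 <= x + dx < w]
            out[y][x] = hit(neigh)
    return out

def erode(mask):   # pixel survives only if its whole neighbourhood is set
    return _morph(mask, lambda n: 1 if all(n) else 0)

def dilate(mask):  # pixel is set if any neighbour is set
    return _morph(mask, lambda n: 1 if any(n) else 0)
```

Erosion removes isolated false-positive pixels; dilation then restores the bulk of the surviving skin regions before particle analysis.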
Title: Design of full adder and subtractor based on MZI — SOA
Authors: Satyasai Sribhashyam, M. Ramachandran, S. Prince, B. R. Ravi
Pub Date: 2015-03-12 | DOI: 10.1109/SPACES.2015.7058255
A systematic model of an all-optical full adder and full subtractor is proposed, based on the principle of the Mach-Zehnder Interferometer (MZI) in a Semiconductor Optical Amplifier (MZI-SOA) configuration. The MZI enables ultrafast all-optical signal processing; here the nonlinear properties of the SOA are exploited to design both the full adder and the full subtractor. In this model, the adder and the subtractor are realized by properly selecting the output terminals of the MZI-SOA components. The design is implemented in OptiSystem, a powerful software package for analyzing optical components. The proposed model demonstrates full adder and full subtractor operation in the optical domain and appears promising for future communication technology.
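For reference, the Boolean behaviour that the selected MZI-SOA output terminals must reproduce is the standard full adder and full subtractor logic. This sketch models only that logic, not the photonic implementation the paper simulates in OptiSystem.

```python
# Truth-table logic of a 1-bit full adder and full subtractor (a, b, carry/
# borrow inputs are 0/1 integers).

def full_adder(a, b, cin):
    s = a ^ b ^ cin                      # sum bit
    cout = (a & b) | (cin & (a ^ b))     # carry out
    return s, cout

def full_subtractor(a, b, bin_):
    d = a ^ b ^ bin_                             # difference bit
    bout = ((1 - a) & b) | ((1 - (a ^ b)) & bin_)  # borrow out
    return d, bout
```

Exhaustively, the outputs satisfy a + b + cin = 2*cout + s and a - b - bin = d - 2*bout, which is the arithmetic the optical terminals are chosen to realize.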
Title: Multicore parallel processing concepts for effective sorting and searching
Authors: K. Sujatha, P. V. Nageswara Rao, A. Rao, V. G. Sastry, V. Praneeta, R. Bharat
Pub Date: 2015-03-12 | DOI: 10.1109/SPACES.2015.7058238
Many applications today require more computing power than a traditional sequential computer offers. High-performance computing requires parallel processing, in which executions physically run at the same time with the goal of solving a complex problem faster. Multicore processing means code running on more than one core of a single CPU chip; it is a subset of parallel processing, and multicore utilization means efficient usage of the CPU. In parallel processing, program instructions are divided among multiple processors with the goal of executing the same program in less time than sequential processing. Application programs for sorting and searching were developed using parallel processing and tested on a large database. Experimental results based on bubble sort and linear search show that work completes faster when the load is shared across cores than with sequential processing.
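A minimal sketch of the chunked decomposition the abstract describes, applied to linear search; the worker count and names are our own, not the paper's. In CPython a thread pool mainly illustrates the decomposition; a process pool would be needed for true multicore speedup on CPU-bound work.

```python
# Split a linear search across workers, one contiguous chunk each, and
# return the smallest matching index (or -1).
from concurrent.futures import ThreadPoolExecutor

def parallel_linear_search(data, target, workers=4):
    if not data:
        return -1
    chunk = (len(data) + workers - 1) // workers  # ceil division

    def search(start):
        for i in range(start, min(start + chunk, len(data))):
            if data[i] == target:
                return i
        return -1

    with ThreadPoolExecutor(max_workers=workers) as pool:
        hits = [i for i in pool.map(search, range(0, len(data), chunk)) if i >= 0]
    return min(hits) if hits else -1
```

The same chunk-and-merge pattern applies to sorting: sort each chunk in parallel, then merge the sorted runs.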
Title: Optimised fuzzy controller for improved comfort level during transitions in Cruise and Adaptive Cruise Control Vehicles
Authors: S. Sathiyan, S. Kumar, A. Selvakumar
Pub Date: 2015-03-12 | DOI: 10.1109/SPACES.2015.7058221
Conventional controllers such as Proportional-Derivative (PD), Proportional-Integral (PI), and Proportional-Integral-Derivative (PID) controllers have been used to implement the Velocity Control Mode (VCM) in Cruise Control (CC) and Adaptive Cruise Control (ACC) vehicles. The transitions that occur in the "resume mode" of CC, and those that occur when an ACC switches from Distance Control Mode (DCM) to VCM, are the primary consideration in the controller design. These transitions disturb the comfort of the vehicle occupants, and disturbance above a predefined level may lead users to reject such driver-assistance systems. The proposed fuzzy controller, optimized using a Genetic Algorithm (GA), outperforms the conventional CC system in minimizing jerk by keeping acceleration within a comfortable range.
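The GA-tuned fuzzy controller itself cannot be reconstructed from the abstract; the sketch below only illustrates the comfort criterion it optimizes, namely bounding jerk (the rate of change of acceleration) during a mode transition. The limiter and its parameters are our own illustration.

```python
# Rate-limit a stream of acceleration commands so that the jerk |da/dt|
# never exceeds max_jerk; a crude stand-in for what a comfort-aware
# controller must guarantee at a DCM-to-VCM transition.

def jerk_limited(accel_cmds, max_jerk, dt):
    out, a = [], 0.0
    for cmd in accel_cmds:
        step = max(-max_jerk * dt, min(max_jerk * dt, cmd - a))
        a += step
        out.append(a)
    return out
```

A step command of 3 m/s^2 is thus reached as a ramp rather than instantaneously, which is what occupants perceive as a smooth transition.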
Title: Design and realization of CMOS circuits using dual integrated technique to reduce power dissipation
Authors: M. Kamaraju, Veerendra Satyavolu, K. Kishore
Pub Date: 2015-03-12 | DOI: 10.1109/SPACES.2015.7058227
Many changes have been taking place in very-large-scale integration (VLSI) technologies and trends. The main factors in VLSI design are area, speed, and power. Low-power circuits are needed in many real-time applications, such as consumer electronics, medical devices, and mobile devices, which motivates low-power design. This paper introduces a method to reduce power dissipation in digital CMOS circuits using a power-gated dual sub-threshold (PGDST) supply voltage. The dual supply voltage targets ultra-low-power applications and circuits with low supply voltages, which do not give satisfactory results with a single supply voltage. The secondary supply voltage is assigned to gates and components depending on the critical path and path density of the circuit. Power gating is then applied to the circuit at the supply-voltage level to further reduce power dissipation. The work is implemented in the Mentor Graphics back-end tool with Pyxis Schematic version 10.3 on a Linux operating system. The technique substantially reduces power dissipation in the designed circuits and improves their performance.
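As a back-of-the-envelope illustration of why a secondary low-voltage rail saves power (the split below is hypothetical, not the paper's circuit): dynamic CMOS power scales as P = a * C * V^2 * f, so gates moved to a lower supply dissipate quadratically less.

```python
# Dynamic switching power P = a * C * V^2 * f (activity factor, switched
# capacitance in farads, supply in volts, clock in hertz) -> watts.

def dynamic_power(activity, cap_f, vdd, freq_hz):
    return activity * cap_f * vdd ** 2 * freq_hz

# Hypothetical split: 40% of the switched capacitance stays on the critical
# path at 1.2 V; the other 60% moves to a 0.8 V secondary rail.
single_rail = dynamic_power(0.1, 100e-12, 1.2, 1e9)
dual_rail = (dynamic_power(0.1, 40e-12, 1.2, 1e9)
             + dynamic_power(0.1, 60e-12, 0.8, 1e9))
```

Under this illustrative split the dual-rail design dissipates about two-thirds of the single-rail dynamic power; power gating then removes leakage in idle blocks on top of this.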
Title: A novel method to achieve optimization in facial expression recognition using HMM
Authors: D. Devi, M. Rao
Pub Date: 2015-03-12 | DOI: 10.1109/SPACES.2015.7058210
Human-Computer Interaction is an emerging field of computer science in which computer vision, and especially facial expression recognition, plays an essential role. Of the many approaches to this problem, the Hidden Markov Model (HMM) is a notable one. This paper aims to optimize both the number of HMM states used and the runtime complexity of the HMM. It also enables parallel processing, so that more than one image can be processed simultaneously.
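The paper's specific state-count and runtime optimizations are not given in the abstract; for context, this is the plain HMM forward pass, the O(T * N^2) routine whose cost such optimizations target.

```python
# Forward algorithm: probability of an observation sequence under an HMM
# with initial distribution pi, transition matrix A, and emission matrix B
# (B[i][o] = probability that state i emits symbol o).

def forward(obs, pi, A, B):
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for t in range(1, len(obs)):
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][obs[t]]
                 for i in range(n)]
    return sum(alpha)
```

Reducing the number of states N shrinks the N^2 inner sum directly, which is one reason state-count optimization pays off at runtime.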
Title: 4-Camera model for sign language recognition using elliptical fourier descriptors and ANN
Authors: P. Kishore, M. Prasad, C. Prasad, R. Rahul
Pub Date: 2015-03-12 | DOI: 10.1109/SPACES.2015.7058288
Sign language recognition (SLR) is a multidisciplinary research area spanning image processing, pattern recognition, and artificial intelligence. A major hurdle for SLR is the occlusion of one hand by the other, which leads to poor segmentation; the resulting feature vectors then cause erroneous sign classifications and a degraded recognition rate. To overcome this difficulty, this paper proposes a four-camera model for recognizing gestures of Indian Sign Language. The pipeline consists of segmentation for hand extraction, shape feature extraction with elliptical Fourier descriptors, and pattern classification using artificial neural networks trained with the backpropagation algorithm. The computed classification rate provides experimental evidence that the four-camera model outperforms the single-camera model.
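As a simplified stand-in for the elliptical Fourier descriptors used in the paper (true EFDs fit separate sine/cosine series to x(t) and y(t)), contour points can be treated as complex numbers and the low-order DFT magnitudes kept as a translation-, scale-, and rotation-invariant shape signature:

```python
# Complex Fourier shape descriptor of a closed contour given as (x, y)
# points; normalization makes it invariant to translation, uniform scale,
# and rotation, the invariances shape descriptors need for classification.
import cmath

def fourier_shape_descriptor(contour, n_coeffs=8):
    z = [complex(x, y) for x, y in contour]
    n = len(z)
    centroid = sum(z) / n
    z = [p - centroid for p in z]              # translation invariance
    coeffs = [sum(z[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                  for t in range(n)) / n
              for k in range(1, n_coeffs + 1)]
    scale = abs(coeffs[0]) or 1.0
    return [abs(c) / scale for c in coeffs]    # scale/rotation invariance
```

The resulting fixed-length vector is the kind of feature that would be fed to a backpropagation-trained neural network classifier.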
Title: A novel technique to improve the radiation pattern of TSAs by modifying the substrate
Authors: G. D. Suryachand, M. S. Rao, S. Sankar, S. A. Kumar
Pub Date: 2015-03-12 | DOI: 10.1109/SPACES.2015.7058247
Microwave antennas with wide bandwidths have many applications in satellite links, sensor systems, and radio astronomy. Tapered slot antennas (TSAs) are often preferred in this frequency range because they are easy to integrate with circuits. The Vivaldi antenna is one such structure, offering wide bandwidth and an end-fire radiation pattern. However, not every antenna design yields a perfectly end-fire pattern free of side lobes. This paper therefore discusses a technique to suppress side lobes in the radiation pattern, emphasizing the use of the dielectric properties of the substrate on which the antenna is built to improve its radiation pattern.
Title: Methodology for designing and creating Hindi speech corpus
Authors: D. Magdum, Manisha Shukla Dubey, T. Patil, Ronak Shah, S. Belhe, Mahesh Kulkarni
Pub Date: 2015-03-12 | DOI: 10.1109/SPACES.2015.7058279
This paper describes the methodologies used for data collection and recording for our Hindi text-to-speech (TTS) system. The design of the speech corpus plays a very important role in the overall quality of a TTS system. A large text corpus of one million words was created for the existing TTS system. Text was crawled from many domains, such as finance, government, and current news, and combined with pre-built dictionaries. For the first time, text from Hindi Short Message Service (SMS) messages was also generated and incorporated, with the aim of making the speech corpus generic for Hindi. The crawled text was first filtered for correctness (spelling mistakes, validity as Hindi, word lengths, etc.). The filtered words were then carefully analyzed to ensure that phonetically balanced text was prepared. This curated text was recorded by a professional recordist in a studio environment, and the recorded speech was processed and annotated to produce the final speech corpus. The paper explains the corpus creation process through its crawling, filtering, recording, and annotation phases. The resulting speech corpus is used in the Hindi TTS system, which achieves a mean opinion score (MOS) of 2.8.
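The paper does not publish its exact balancing procedure; one common approach, sketched here with illustrative token sets standing in for phonemes, is to greedily select sentences that cover the most not-yet-seen phonemes.

```python
# Greedy set-cover style selection of a phonetically balanced subset:
# repeatedly pick the sentence contributing the most unseen phonemes,
# stopping when no sentence adds anything new.

def select_balanced(sentences, phonemes_of):
    covered, chosen = set(), []
    remaining = list(sentences)
    while remaining:
        best = max(remaining, key=lambda s: len(phonemes_of(s) - covered))
        gain = phonemes_of(best) - covered
        if not gain:
            break
        covered |= gain
        chosen.append(best)
        remaining.remove(best)
    return chosen, covered
```

In a real pipeline `phonemes_of` would run a Hindi grapheme-to-phoneme converter; here any tokenizer demonstrates the selection logic.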
Title: Performance and analysis of PAPR reduction schemes based on improved low complexity four partial transmit sequences and constellation methods
Authors: N. Renuka, M. SathyaSaiRam, P. Naganjaneyulu
Pub Date: 2015-03-12 | DOI: 10.1109/SPACES.2015.7058302
The major drawback of OFDM is its high peak-to-average power ratio (PAPR), which increases the complexity of the analog-to-digital (A/D) and digital-to-analog (D/A) converters and reduces the efficiency of the RF high-power amplifier (HPA). High PAPR can also cause serious problems such as a severe power penalty at the transmitter, which is not affordable in portable wireless systems where terminals are battery-powered. In this letter we analyze PAPR reduction using an improved low-complexity four-partial-transmit-sequence (ILCF-PTS) scheme and compare it with existing techniques: original OFDM, selective level mapping (SLM), active constellation extension (ACE), and adaptive active constellation extension (AACE). Experimental analysis shows that the ILCF-PTS scheme is superior to the previous PAPR reduction techniques.
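The ILCF-PTS variant itself is not specified in the abstract; the sketch below shows generic four-block PTS with +/-1 phase factors, the baseline that such low-complexity improvements build on. Block count, phase alphabet, and signal are illustrative.

```python
# Partial Transmit Sequences: split the N subcarriers into disjoint
# subblocks, transform each to the time domain, then search per-block phase
# factors for the combination with the lowest PAPR.
import cmath
import itertools
import math

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def papr_db(x):
    p = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(p) / (sum(p) / len(p)))

def pts(X, blocks=4):
    n = len(X)
    size = n // blocks
    subs = [[X[i] if b * size <= i < (b + 1) * size else 0 for i in range(n)]
            for b in range(blocks)]
    time_subs = [idft(s) for s in subs]     # IDFT is linear, so precompute
    best = None
    for phases in itertools.product((1, -1), repeat=blocks):
        x = [sum(p * ts[t] for p, ts in zip(phases, time_subs))
             for t in range(n)]
        val = papr_db(x)
        if best is None or val < best[0]:
            best = (val, phases)
    return best
```

Because the all-ones phase vector is among the candidates, the selected PAPR can never exceed that of the unmodified OFDM symbol; the chosen phases must be signaled to the receiver as side information.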