Analysis of ground effect on multi-rotors
Pub Date: 2014-11-01 | DOI: 10.1109/ICECCE.2014.7086619
S. Aich, Chahat Ahuja, Tushar Gupta, P. Arulmozhivarman
The motion of a UAV is greatly influenced by its interaction with nearby surfaces. In this paper, we address the aerodynamic challenge known as the ground effect by mathematically modelling the UAV's dynamic response. An AR Drone 2.0 was used to record the variation of parameters such as roll and pitch at different heights above a smooth surface. The data were analyzed in MATLAB R2013a, and mathematical models were created to correct these parameters for more stable take-off, landing and near-ground flight. The controller in a UAV constantly adjusts to stabilize it; results show that the model obtained can be used to counter the ground effect. The model would also reduce stress on the controller and lower power consumption, since the controller would no longer have to expend extra power attempting to stabilize the UAV.
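The paper's fitted model is not reproduced in the abstract, so as a hedged illustration of the kind of correction it describes, the sketch below applies the classical Cheeseman-Bennett ground-effect model to derive a throttle correction versus height. The rotor radius and validity limit are assumptions for the example, not the paper's values.

```python
# A minimal sketch of a ground-effect thrust correction, using the classical
# Cheeseman-Bennett model as a stand-in for the paper's fitted model (the
# paper's actual coefficients are not given in the abstract).

def thrust_ratio_in_ground_effect(z: float, rotor_radius: float) -> float:
    """Ratio T_IGE / T_OGE for a single rotor hovering at height z (metres).

    Valid for z > rotor_radius / 4; closer to the ground the model diverges.
    """
    if z <= rotor_radius / 4:
        raise ValueError("model is not valid this close to the ground")
    return 1.0 / (1.0 - (rotor_radius / (4.0 * z)) ** 2)

def throttle_correction(z: float, rotor_radius: float) -> float:
    """Scale factor for the hover throttle so net thrust stays constant."""
    return 1.0 / thrust_ratio_in_ground_effect(z, rotor_radius)

if __name__ == "__main__":
    R = 0.10  # approximate AR Drone 2.0 rotor radius in metres (assumed)
    for i in range(1, 9):
        z = 0.05 * i
        gain = thrust_ratio_in_ground_effect(z, R)
        print(f"z = {z:.2f} m  thrust gain = {gain:.3f}  "
              f"throttle correction = {throttle_correction(z, R):.3f}")
```

Near the surface the thrust gain rises above 1, so the correction scales the commanded throttle down, which is the direction of adjustment the paper's controller-relief argument relies on.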
Nonlinearity effect of Power Amplifiers in wireless communication systems
Pub Date: 2014-11-01 | DOI: 10.1109/ICECCE.2014.7086613
Shatrughna Prasad Yadav, S. Bera
High power amplifiers in wireless communication systems exhibit nonlinear behavior that degrades the signals passing through them. The degradation appears as harmonic distortion, gain compression, intermodulation distortion, phase distortion, adjacent-channel interference, and so on. Orthogonal frequency-division multiplexing (OFDM) is considered one of the better techniques for multicarrier wireless communication: it tolerates inter-symbol interference and has good spectral efficiency, but its high peak-to-average power ratio (PAPR) exposes it to power-amplifier nonlinearities. In this paper, some of the basic PAPR reduction techniques, namely peak clipping, random phase shifting, selected mapping (SLM) and dummy sequence insertion (DSI), are discussed. The results are compared with the original PAPR and suggest improved linearity at the cost of a modest reduction in efficiency and additional hardware. Among the techniques considered, SLM gives lower PAPR than the other methods.
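To make the PAPR problem and the simplest of the surveyed remedies concrete, here is a minimal sketch of measuring the PAPR of one OFDM symbol and reducing it by peak clipping. The subcarrier count, QPSK mapping and clipping ratio are illustrative assumptions, not the paper's parameters.

```python
# Illustrative sketch: PAPR of a random QPSK-OFDM symbol, before and after
# peak clipping. N = 64 subcarriers and CR = 1.4 are assumed, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
N = 64  # subcarriers (assumed)

def papr_db(x: np.ndarray) -> float:
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Random QPSK symbols -> time-domain OFDM symbol via IFFT (unit average power)
bits = rng.integers(0, 2, (N, 2))
X = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
x = np.fft.ifft(X) * np.sqrt(N)

# Clip magnitudes above CR times the RMS level, preserving phase
cr = 1.4
a_max = cr * np.sqrt(np.mean(np.abs(x) ** 2))
mag = np.abs(x)
x_clipped = x.copy()
mask = mag > a_max
x_clipped[mask] = a_max * x[mask] / mag[mask]

print(f"PAPR before clipping: {papr_db(x):.2f} dB")
print(f"PAPR after  clipping: {papr_db(x_clipped):.2f} dB")
```

Clipping trades PAPR for in-band distortion and spectral regrowth, which is why the abstract reports the improvement coming with an efficiency and hardware cost.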
Comparative analysis of scan compression techniques
Pub Date: 2014-11-01 | DOI: 10.1109/ICECCE.2014.7086632
Praveen Sakrappanavar, S. Yellampalli, Ashish Kothari
Design for Testability (DFT) based on scan and ATPG has been adopted as a reliable and broadly accepted methodology that provides very high test coverage, but for large circuits the growing test data volume significantly increases test cost through longer test times and elevated tester memory requirements. Test compression (scan compression) greatly reduces the required test data volume and test time by adding an on-chip decompressor and compactor. In this paper, a comparative analysis is made of Broadcast and XOR decompressors combined with XOR, MISR and Hybrid compactors, with respect to test coverage, test cycles required and test data volume, using a Flash Interface as the circuit under test (CUT). The experiments show that the XOR decompressor with MISR compactor architecture provides a 17.31% to 49.76% reduction in test data volume compared to the other architectures, with 99.76% fault coverage, 16694 test cycles and 2104 μm² of area overhead.
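As a rough illustration of the compactor that wins the comparison, the sketch below models a MISR (multiple-input signature register) that folds per-cycle scan-out words into a single signature. The register width, feedback taps and response words are invented for the example and are not the paper's configuration.

```python
# A minimal sketch of a MISR output compactor. Each cycle the register shifts
# once with XOR feedback, then XORs in the new scan-out word; a single flipped
# response bit changes the final signature. Width/taps/data are assumptions.

def misr_signature(responses, width=8, taps=(7, 5, 4, 3)):
    """Compact per-cycle scan-out words (ints < 2**width) into one signature."""
    state = 0
    mask = (1 << width) - 1
    for word in responses:
        feedback = 0
        for t in taps:                      # XOR of tapped register bits
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & mask
        state ^= word                       # inject this cycle's response bits
    return state

good = [0x3A, 0x1F, 0x80, 0x42, 0x0C]       # fault-free responses (made up)
faulty = [0x3A, 0x1F, 0x81, 0x42, 0x0C]     # same stream with one flipped bit
print(hex(misr_signature(good)), hex(misr_signature(faulty)))  # differ
```

The tester then compares one signature per pattern set instead of every scan-out bit, which is where the test-data-volume savings reported above come from.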
Space complexity analysis of various sparse matrix storage formats used in rectangular segmentation image compression technique
Pub Date: 2014-11-01 | DOI: 10.1109/ICECCE.2014.7086618
Sumithra Sriram, B. J. Saira, Rajasekhara Babu
With the increase in image resolution comes the need to compress images effectively, without much loss, for easy storage and transmission. Sparse matrices, matrices in which the majority of elements are zero, can be stored space-efficiently by keeping only the non-zero elements in one of several formats. Images, which are essentially matrices, can be stored the same way if they can be expressed as sparse matrices; rectangular segmentation is one method of doing so. In this paper, we analyze the space complexity of various storage formats on benchmark matrices and the suitability of these formats for compressing images with the rectangular segmentation method.
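For a concrete sense of the storage-format comparison, the following sketch tallies the bytes needed by dense, COO and CSR representations. It assumes SciPy's sparse module and a synthetic 5%-dense binary image as stand-ins for the paper's benchmark matrices.

```python
# Illustrative storage-cost comparison of dense vs. COO vs. CSR for a sparse
# 0/1 image-like matrix. The 512x512 size and 5% density are assumptions.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(1)
img = (rng.random((512, 512)) < 0.05).astype(np.uint8)  # ~5% non-zeros

coo = sparse.coo_matrix(img)   # stores (row, col, value) triples
csr = sparse.csr_matrix(img)   # stores values, column indices, row pointers

dense_bytes = img.nbytes
coo_bytes = coo.data.nbytes + coo.row.nbytes + coo.col.nbytes
csr_bytes = csr.data.nbytes + csr.indices.nbytes + csr.indptr.nbytes

print(f"dense: {dense_bytes} B,  COO: {coo_bytes} B,  CSR: {csr_bytes} B")
```

CSR replaces one full index array with a row-pointer array of length rows+1, which is why it usually beats COO once there are many non-zeros per row; this kind of trade-off is what the paper's space-complexity analysis quantifies.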
Optimization of test time and fault grading of functional test vectors using fault simulation flow
Pub Date: 2014-11-01 | DOI: 10.1109/ICECCE.2014.7086633
S. Praveen, S. Yellampalli, Ashish Kothari
Structural test is the most efficient way to detect manufacturing defects, but with the ever-increasing complexity of digital designs, structural test vectors alone are not sufficient to achieve the desired fault coverage. Functional test vectors are programs written with the design specifications in mind rather than manufacturing defects, and they can help test some of the critical portions of a design; they are supplied by the functional verification team. Structural and functional tests together can increase test quality very significantly. Unlike structural test vectors, however, functional test vectors do not offer a test coverage metric on their own. In this paper, a comparative analysis between the conventional ATPG method and fault grading using a fault simulation flow is carried out on an I2C design. The fault grading technique uses ATPG and fault simulation to grade the functional test vectors, which greatly reduces the number of test vectors and, in turn, test time and test effort.
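The following is a minimal sketch of the fault-grading idea on a toy two-gate circuit: simulate every single stuck-at fault against each functional vector and keep only the vectors that detect new faults. The circuit, fault list and vectors are illustrative, not the paper's I2C design or flow.

```python
# Toy fault grading by fault simulation: y = (a AND b) OR c, with single
# stuck-at faults on every net. Vectors that add no new detected faults are
# dropped, shrinking the vector set as the paper describes.

NETS = ["a", "b", "c", "n1", "out"]

def evaluate(a, b, c, fault=None):
    """Evaluate the circuit, optionally with a (net, stuck_value) fault."""
    def f(net, val):
        return fault[1] if fault and fault[0] == net else val
    a, b, c = f("a", a), f("b", b), f("c", c)
    n1 = f("n1", a & b)
    return f("out", n1 | c)

faults = [(n, s) for n in NETS for s in (0, 1)]
vectors = [(1, 1, 0), (0, 1, 0), (1, 0, 0), (0, 0, 1), (0, 0, 0), (1, 1, 1)]

detected, kept = set(), []
for vec in vectors:
    new = {fl for fl in faults
           if fl not in detected and evaluate(*vec, fault=fl) != evaluate(*vec)}
    if new:                      # keep only vectors that add coverage
        kept.append(vec)
        detected |= new

print(f"fault coverage: {len(detected)}/{len(faults)}, vectors kept: {kept}")
```

Real flows do the same bookkeeping at scale with a commercial fault simulator, which is how functional vectors acquire the coverage metric they lack on their own.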
Data Labeling method based on Rough Entropy for categorical data clustering
Pub Date: 2014-11-01 | DOI: 10.1109/ICECCE.2014.7086654
G. Sreenivasulu, S. Raju, N. Rao
Clustering is one of the most important methods in data mining, but clustering a huge data set is a difficult and time-consuming process. This paper proposes a new method, based on Rough Entropy, for improving clustering efficiency and labeling the unlabeled data points in clusters. Data labeling is simple in the numerical domain but not in the categorical domain, because distance is a meaningful parameter for numerical attributes but not for categorical ones. We therefore propose a Rough Entropy based data-labeling method for clustering categorical attributes. The method has two phases: Phase I finds the partition with respect to each attribute, and Phase II computes the Rough Entropy to determine node importance for data labeling.
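As a hedged illustration of the two phases, the sketch below partitions a toy categorical data set by each attribute (Phase I) and scores the induced partitions with a Shannon-style entropy (Phase II). The paper's exact Rough Entropy definition is not given in the abstract, so treat this as an analogy rather than the authors' formula.

```python
# Illustrative two-phase flow: equivalence classes per categorical attribute,
# then an entropy score per partition. Data set and entropy variant are
# assumptions; the paper's Rough Entropy measure may be defined differently.
import math
from collections import defaultdict

data = [  # toy categorical records: (colour, shape)
    ("red", "circle"), ("red", "square"), ("blue", "circle"),
    ("blue", "circle"), ("red", "circle"),
]

def partition(records, attr_idx):
    """Phase I: equivalence classes induced by one attribute."""
    classes = defaultdict(list)
    for i, rec in enumerate(records):
        classes[rec[attr_idx]].append(i)
    return dict(classes)

def partition_entropy(classes, n):
    """Phase II: entropy of the induced partition."""
    return -sum(len(c) / n * math.log2(len(c) / n) for c in classes.values())

n = len(data)
for attr_idx, name in enumerate(("colour", "shape")):
    p = partition(data, attr_idx)
    print(name, p, f"entropy = {partition_entropy(p, n):.3f}")
```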
Prolonging the network lifetime of heterogeneous WSNS using MECRSEP
Pub Date: 2014-11-01 | DOI: 10.1109/ICECCE.2014.7086614
S. Saini, Meenakshi Sharma
Wireless Sensor Networks (WSNs) contain a large number of sensor nodes equipped to handle complex functionality, and network processing may require the sensors to stretch their constrained energy budget to extend the network lifetime. Many protocols have been proposed to achieve energy efficiency in heterogeneous networks. In this work, the performance of SEP, ECRSEP and ESEP is analyzed for different WSN scenarios, and the outcomes are evaluated for stability, network lifetime and throughput. The survey of these protocols shows that SEP, ESEP and ECRSEP continue to penalize advanced and intermediate nodes and neglect the use of hard and soft thresholds to decrease energy consumption; the neighbours of nodes about to become cluster heads (CHs) are also neglected in the existing work. Two modifications are proposed, in which radius-based grouping is applied before CH selection to decrease computation time. The new algorithm is a modification of ECRSEP and is therefore named MECRSEP (Modified ECRSEP). Owing to the unavailability of an actual sensor environment, a simulation environment was designed and implemented in MATLAB.
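For context on what the compared protocols share, the sketch below implements the SEP-style weighted cluster-head election from the literature, with separate election probabilities for normal and advanced nodes. The parameter values are illustrative, and MECRSEP's radius-based grouping step is not modelled here.

```python
# SEP-style weighted CH election (from the SEP literature, not this paper's
# code): advanced nodes (fraction m, extra energy factor a) get a higher
# election probability so the overall CH rate per round stays at p_opt.
p_opt = 0.1      # desired overall CH fraction per round (assumed)
m, a = 0.2, 1.0  # 20% advanced nodes with 100% extra initial energy (assumed)

p_normal = p_opt / (1 + a * m)
p_advanced = p_opt * (1 + a) / (1 + a * m)

def threshold(p: float, r: int) -> float:
    """Election threshold for an eligible node in round r (LEACH/SEP form)."""
    return p / (1 - p * (r % round(1 / p)))

print(f"p_normal = {p_normal:.4f}, p_advanced = {p_advanced:.4f}")
print(f"round-3 thresholds: normal {threshold(p_normal, 3):.4f}, "
      f"advanced {threshold(p_advanced, 3):.4f}")
```

A node becomes CH in a round when a uniform random draw falls below its threshold; the abstract's criticism is that this weighting keeps taxing advanced and intermediate nodes, which MECRSEP's grouping step is designed to mitigate.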
A comparative analysis of jitter estimation techniques
Pub Date: 2014-11-01 | DOI: 10.1109/ICECCE.2014.7086645
V. Sharma, J. N. Tripathi, R. Nagpal, Sujay Deb, Rakesh Malik
With the advancement of VLSI technology, the effect of jitter on high-speed signals is becoming more critical. To negate the effect of jitter on these signals, the sources of jitter in a circuit must be identified by decomposing it. In this paper, a comparative analysis of various jitter estimation techniques is presented. The statistical-domain methods are based on fitting techniques, while the frequency-domain methods are based on frequency-spectrum analysis; this work describes both families and discusses their strengths and limitations. The algorithms are implemented in MATLAB and the results are extensively verified with Agilent ADS.
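As a minimal illustration of the frequency-domain family of methods, the sketch below decomposes a synthetic time-interval-error (TIE) record into a periodic-jitter tone and a random-jitter residual via the FFT. The signal parameters are assumptions, and the paper's own algorithms are not reproduced here.

```python
# Frequency-domain jitter decomposition sketch: FFT the TIE record, take the
# dominant non-DC bin as periodic jitter (PJ), treat the broadband residual
# as random jitter (RJ). Sample rate, PJ tone and RJ sigma are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, fs = 4096, 1.0e6                 # TIE samples and sample rate (assumed)
f_pj = 150 * fs / n                 # PJ placed on an FFT bin to avoid leakage
t = np.arange(n) / fs
tie = 2e-12 * np.sin(2 * np.pi * f_pj * t) + rng.normal(0.0, 1e-12, n)

spec = np.fft.rfft(tie) / n         # one-sided, normalized spectrum
power = np.abs(spec) ** 2
k = int(np.argmax(power[1:])) + 1   # dominant non-DC bin -> PJ candidate

pj_pp = 4 * np.abs(spec[k])         # sine amplitude = 2*|spec_k|, pk-pk = 2x that
residual = power[1:].sum() - power[k]
rj_rms = np.sqrt(2 * residual)      # Parseval: variance of the remaining bins

print(f"PJ at {k * fs / n / 1e3:.2f} kHz, pk-pk ~= {pj_pp * 1e12:.2f} ps (true 4.00)")
print(f"RJ rms ~= {rj_rms * 1e12:.2f} ps (true 1.00)")
```

Statistical-domain methods instead fit the TIE histogram (for instance, Gaussian tails for RJ), which is the other family the paper compares.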
Modified SLM and PTS approach to reduce PAPR in MIMO OFDM
Pub Date: 2014-11-01 | DOI: 10.1109/ICECCE.2014.7086621
B. Somasekhar, A. Mallikarjunaprasad
In current communication systems, the growing demand for multimedia services and the growth of Internet-related content are driving interest in high-speed communications. Multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) is one of the most attractive candidates for fourth-generation (4G) mobile radio communication: it effectively combats multipath fading and improves bandwidth efficiency. However, its main drawback is a high peak-to-average power ratio (PAPR) for large numbers of subcarriers, which restricts many practical applications. Space-time block codes (STBC) have recently gained much attention as an effective transmit-diversity technique for providing reliable transmission at high peak data rates and increasing the capacity of wireless communication systems. In this paper, the BER vs. SNR performance of plain STBC-OFDM is compared, by simulation, with that of STBC-OFDM systems using variants of selective mapping (SLM), which is taken as the existing system. In SLM, different representations of the OFDM symbol are generated with different phase sequences and the minimum-PAPR signal is selected for transmission. A new concurrent PAPR reduction algorithm based on a property of orthogonal STBC is then proposed. We prove that the conjugate symbols transmitted on the two antennas have the same PAPR, which significantly reduces the computational cost of the proposed algorithm compared with conventional concurrent PAPR reduction algorithms such as concurrent partial transmit sequences (PTS). Furthermore, a minimum-maximum (minmax) criterion is proposed, which shows better PAPR performance than the minimum-average (minaverage) criterion of conventional concurrent algorithms. Simulation results demonstrate that the proposed algorithm outperforms the conventional concurrent algorithms.
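Here is a minimal sketch of the SLM baseline the paper compares against: generate several phase-rotated candidates of one OFDM symbol and transmit the one with the lowest PAPR (the chosen index must also reach the receiver as side information). The subcarrier count, candidate count and binary phase alphabet are illustrative assumptions.

```python
# SLM baseline sketch: U phase-rotated candidates of one QPSK-OFDM symbol,
# pick the minimum-PAPR one. N = 64, U = 8 and the +/-1 phase alphabet are
# assumptions, not the paper's settings; STBC pairing is not modelled here.
import numpy as np

rng = np.random.default_rng(3)
N, U = 64, 8

def papr_db(x: np.ndarray) -> float:
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

bits = rng.integers(0, 2, (N, 2))
X = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)  # QPSK

phases = rng.choice([1.0, -1.0], size=(U, N))
phases[0] = 1.0                          # candidate 0 = unmodified symbol
candidates = np.fft.ifft(X * phases, axis=1)

paprs = [papr_db(c) for c in candidates]
best = int(np.argmin(paprs))             # index sent as side information
print(f"original PAPR {paprs[0]:.2f} dB -> SLM best {paprs[best]:.2f} dB (u={best})")
```

The paper's contribution sits on top of this: because the conjugate STBC symbols on the two antennas share the same PAPR, only one antenna's candidates need to be evaluated, roughly halving the search cost relative to running SLM/PTS per antenna.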
Labview implementation of identification of early signs of breast cancer
Pub Date: 2014-11-01 | DOI: 10.1109/ICECCE.2014.7086624
S. P. Meharunnisa, K. Suresh
Breast cancer is the second leading cause of cancer death among women. It has become a major health issue worldwide over the past 50 years, and its incidence has increased in recent years. Early detection is an effective way to diagnose and manage breast cancer, and mammography is an efficient imaging technique for detecting and diagnosing breast pathological disorders at an early stage. This paper presents algorithms that combine image processing techniques to remove noise and enhance mammography images for the identification of microcalcifications. Efficient methods such as wavelets and adaptive histogram equalization, along with fusion techniques, are used for image enhancement to detect microcalcifications in LabVIEW.
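The paper implements its pipeline in LabVIEW; as a language-neutral illustration of the same two steps, the sketch below runs wavelet shrinkage followed by contrast-limited adaptive histogram equalization on a stand-in image. The wavelet family, threshold and clip limit are chosen arbitrarily for the example rather than taken from the paper.

```python
# Illustrative denoise-then-enhance pipeline: wavelet soft-thresholding plus
# CLAHE. Uses a generic test image as a stand-in for a mammogram; db4, the
# threshold value and the clip limit are assumptions, not the paper's settings.
import numpy as np
import pywt
from skimage import data, exposure

img = data.camera() / 255.0                  # stand-in for a mammogram

# Wavelet shrinkage: soft-threshold the detail coefficients, keep approximation
coeffs = pywt.wavedec2(img, "db4", level=2)
thr = 0.04
denoised_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(d, thr, mode="soft") for d in level)
    for level in coeffs[1:]
]
denoised = pywt.waverec2(denoised_coeffs, "db4")

# Contrast-limited adaptive histogram equalization to bring out fine detail
enhanced = exposure.equalize_adapthist(np.clip(denoised, 0.0, 1.0),
                                       clip_limit=0.02)
print(enhanced.shape, float(enhanced.min()), float(enhanced.max()))
```

Shrinking small detail coefficients suppresses noise while the adaptive equalization boosts local contrast, the combination the abstract credits with making microcalcifications easier to pick out.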