Pub Date: 2010-05-20 | DOI: 10.1109/EIT.2010.5612133
R. Vishnubhotla, P. S. Rao, A. Ladha, S. Kadiyala, A. Narmada, B. Ronanki, S. Illapakurthi
Searching for an empty space in a congested parking facility can be painstaking and time consuming: the average time spent cruising a parking bay for a vacant space varies from roughly 3.5 to 12 minutes, and the cruising cars add to both the traffic and the pollution inside the bay. The parking management systems currently deployed in the urban cities of growing economies such as India lack efficiency, often leaving drivers frustrated. We are developing an automated parking management system employing Wireless Sensor Network (WSN) technology. The system detects the presence or absence of a vehicle in each parking space and automatically provides the locations of the identified available spaces to prospective users in real time. This paper describes the ultrasonic vehicle detection system and the ZigBee networks, and presents preliminary results.
Title: ZigBee based multi-level parking vacancy monitoring system
Published in: 2010 IEEE International Conference on Electro/Information Technology
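The detection principle can be sketched in a few lines. This is an illustrative sketch only, with an invented threshold (`VEHICLE_THRESHOLD_CM`), not the authors' firmware: a ceiling-mounted ultrasonic sensor reports a distance, and a reading well short of the floor implies a parked vehicle.

```python
# Hypothetical sketch: an ultrasonic sensor mounted above each space
# measures the distance to the nearest surface; a reading well below the
# sensor-to-floor distance suggests a parked vehicle. Threshold is invented.

VEHICLE_THRESHOLD_CM = 180       # readings below this imply a vehicle (assumed)

def space_is_occupied(distance_cm: float) -> bool:
    """Classify one ultrasonic reading as occupied/vacant."""
    return distance_cm < VEHICLE_THRESHOLD_CM

def vacant_spaces(readings: dict) -> list:
    """Return IDs of spaces whose latest reading indicates no vehicle."""
    return sorted(sid for sid, d in readings.items()
                  if not space_is_occupied(d))

readings = {"A1": 120.0, "A2": 245.0, "A3": 90.5, "B1": 240.0}
print(vacant_spaces(readings))   # ['A2', 'B1'] read near floor distance
```

In the system described by the abstract, each sensor node would report its reading over the ZigBee network and the coordinator would run this kind of vacancy aggregation.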
Pub Date: 2010-05-20 | DOI: 10.1109/EIT.2010.5612134
Rajeev J. Thapa, C. Trefftz, G. Wolffe
Numerous approaches have been proposed for detecting clusters, i.e., groups of related data, in spatial databases. Of these, Density-Based Spatial Clustering of Applications with Noise (DBSCAN) has proven efficient for larger databases. Graphics Processing Units (GPUs), originally designed to accelerate graphics rendering, have been found to be highly effective as general-purpose parallel computing platforms. In this project, a GPU-based DBSCAN program has been implemented; its enhancement over earlier work is better memory scalability for use with very large databases. Algorithm performance, compared to the original sequential program and to an initial GPU implementation, is investigated and analyzed.
Title: Memory-efficient implementation of a graphics processor-based cluster detection algorithm for large spatial databases
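For readers unfamiliar with DBSCAN, a minimal sequential version conveys the logic the paper parallelizes. This sketch is for illustration only, with a naive O(n²) neighbor search; the paper's contribution is a memory-efficient CUDA implementation of this same algorithm, which this is not.

```python
# Minimal sequential DBSCAN, for reference only.
NOISE = -1
UNVISITED = 0

def region_query(points, i, eps):
    """Indices of all points within eps of points[i] (Euclidean)."""
    px, py = points[i]
    return [j for j, (qx, qy) in enumerate(points)
            if (px - qx) ** 2 + (py - qy) ** 2 <= eps ** 2]

def dbscan(points, eps, min_pts):
    """Return a cluster label per point; -1 marks noise."""
    labels = [UNVISITED] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] != UNVISITED:
            continue
        neighbors = region_query(points, i, eps)
        if len(neighbors) < min_pts:
            labels[i] = NOISE            # may later become a border point
            continue
        cluster += 1                     # i is a core point: grow a cluster
        labels[i] = cluster
        seeds = list(neighbors)
        while seeds:
            j = seeds.pop()
            if labels[j] == NOISE:
                labels[j] = cluster      # border point reclaimed from noise
            if labels[j] != UNVISITED:
                continue
            labels[j] = cluster
            j_neighbors = region_query(points, j, eps)
            if len(j_neighbors) >= min_pts:   # j is also a core point
                seeds.extend(j_neighbors)
    return labels

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (50, 50)]
print(dbscan(pts, eps=2.0, min_pts=2))   # [1, 1, 1, 2, 2, -1]
```

The GPU version distributes the `region_query` work across threads; the memory-scalability enhancement concerns how the neighbor lists are stored on the device.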
Pub Date: 2010-05-20 | DOI: 10.1109/EIT.2010.5612101
R. Mercado, Zhongbo Cao, D. Rover
System-level design (SLD) offers a way to cope with increasing design complexity and time-to-market pressure in modern embedded systems. In this paper, we propose a novel system-level approach to communication architecture modeling, an aspect not yet well addressed in existing SLD methodologies. In particular, we show how to develop statistical models for communication architectures. These new models capture communication details at higher abstraction levels than previously possible. We demonstrate how to use the mean-square error as a tool for developing these models, and show where to integrate them in the design process.
Title: Mixture models for system-level communication analysis at higher levels of abstraction
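The mean-square-error model-selection idea can be illustrated with a toy example. Everything below, including the sample values and mixture parameters, is invented for illustration and is not the authors' model; it only shows how MSE against an empirical histogram can rank candidate statistical models of, say, communication latency.

```python
# Illustrative sketch: score candidate Gaussian mixtures against an
# empirical histogram of latency samples using mean-square-error.
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, components):
    """components: list of (weight, mu, sigma); weights sum to 1."""
    return sum(w * gauss_pdf(x, mu, s) for w, mu, s in components)

def histogram_density(samples, edges):
    """Normalized histogram heights (density) for uniform bin edges."""
    n, width = len(samples), edges[1] - edges[0]
    return [sum(1 for x in samples if lo <= x < hi) / (n * width)
            for lo, hi in zip(edges[:-1], edges[1:])]

def model_mse(samples, edges, components):
    """MSE between histogram density and the mixture pdf at bin centers."""
    heights = histogram_density(samples, edges)
    centers = [(lo + hi) / 2 for lo, hi in zip(edges[:-1], edges[1:])]
    errs = [(h - mixture_pdf(c, components)) ** 2
            for h, c in zip(heights, centers)]
    return sum(errs) / len(errs)

# Synthetic "latency" samples near two modes (values are made up).
samples = [10.1, 9.8, 10.3, 9.9, 10.0, 20.2, 19.7, 20.1]
edges = list(range(8, 25))

two_mode = [(0.625, 10.0, 0.5), (0.375, 20.0, 0.5)]
one_mode = [(1.0, 14.0, 3.0)]
# The bimodal candidate matches the data much better:
print(model_mse(samples, edges, two_mode) < model_mse(samples, edges, one_mode))
```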
Pub Date: 2010-05-20 | DOI: 10.1109/EIT.2010.5612087
Kenneth P. Hunt, J. Niemeier, A. Kruger
We investigate the propagation of radio waves underwater and between water and air to facilitate setting up hybrid wireless sensor networks with both surface and subsurface nodes. Our investigation includes signal attenuation, antenna radiation patterns, multipath due to reflections from the surface and substrate, noise, and reflection losses transmitting from one medium to another.
Title: RF communications in underwater wireless sensor networks
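As a rough illustration of why attenuation dominates such designs, the standard good-conductor approximation alpha = sqrt(pi * f * mu * sigma), valid when sigma >> omega * epsilon, can be evaluated at low frequencies. The conductivity values are textbook figures, not measurements from the paper.

```python
# Back-of-the-envelope plane-wave attenuation in water using the
# good-conductor approximation (valid only when sigma >> omega*epsilon,
# i.e. at low frequencies). Conductivities are typical textbook values.
import math

MU0 = 4 * math.pi * 1e-7          # permeability of free space (H/m)

def attenuation_db_per_m(freq_hz, sigma):
    """Attenuation of a plane wave in a good conductor, in dB per meter."""
    alpha = math.sqrt(math.pi * freq_hz * MU0 * sigma)  # Nepers per meter
    return 8.686 * alpha                                # 1 Np = 8.686 dB

# Typical conductivities (S/m): seawater ~ 4, fresh water ~ 0.01.
for name, sigma in [("seawater", 4.0), ("fresh water", 0.01)]:
    for f in (10e3, 100e3):
        print(f"{name} @ {f/1e3:.0f} kHz: "
              f"{attenuation_db_per_m(f, sigma):.2f} dB/m")
```

The steep growth with both frequency and conductivity is why underwater RF links favor low carrier frequencies and why fresh water is far friendlier than seawater.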
Pub Date: 2010-05-20 | DOI: 10.1109/EIT.2010.5612119
E. Salari, G. Bao
The detection of cracks and other degradations on pavement surfaces has traditionally been done by human experts performing visual inspection while driving along the surveyed road. To overcome the limitations of this manual scheme, an automatic crack detection and classification system is proposed in this paper to both speed up the process and reduce its subjectivity. After the pavement images are captured by a digital camera, regions corresponding to cracks are detected in the acquired images by local segmentation and then represented by a matrix of square tiles. Since the crack pattern can be characterized by the distribution of the crack tiles, standard deviations of both the vertical and horizontal histograms are calculated to map the cracks onto a 2D feature space, where four crack types, namely longitudinal, transversal, block, and alligator cracks, can be identified. Experimental results, obtained by testing real pavement images of local asphalt roads, demonstrate the effectiveness of our algorithm for automating the identification of road distresses from images.
Title: Pavement distress detection and classification using feature mapping
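The tile-histogram feature mapping can be sketched as follows. The threshold and the decision rule are invented for illustration and are not the classifier boundaries used in the paper; the sketch only shows how row/column histogram spread separates crack orientations.

```python
# Illustrative feature mapping: represent the crack as a binary matrix of
# tiles, take row/column histograms, and use their standard deviations as
# a 2D feature. The threshold and rules are invented for this example.
import statistics

def crack_features(tiles):
    """tiles: 2D list of 0/1 crack-tile flags -> (row_std, col_std)."""
    row_hist = [sum(row) for row in tiles]
    col_hist = [sum(col) for col in zip(*tiles)]
    return statistics.pstdev(row_hist), statistics.pstdev(col_hist)

def classify(tiles, threshold=1.0):
    """Toy classifier on the (row_std, col_std) feature space."""
    row_std, col_std = crack_features(tiles)
    if col_std > threshold >= row_std:
        return "longitudinal"   # crack concentrated in a few columns
    if row_std > threshold >= col_std:
        return "transversal"    # crack concentrated in a few rows
    if row_std > threshold and col_std > threshold:
        return "block"
    return "alligator"          # crack tiles spread over the whole area

vertical_crack = [[0, 1, 0, 0],
                  [0, 1, 0, 0],
                  [0, 1, 0, 0],
                  [0, 1, 0, 0]]
print(classify(vertical_crack))   # longitudinal
```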
Pub Date: 2010-05-20 | DOI: 10.1109/EIT.2010.5612106
Daniel J. Monroe, I. S. Ahn, Yufeng Lu
In radar, sonar, medical ultrasound, and ultrasonic nondestructive evaluation, environmental noise makes target detection challenging, so clutter rejection and noise cancellation are necessary for the system to correctly identify targets. In this study, an adaptive filtering algorithm is used to reject clutter and detect small targets in noisy ultrasonic backscattered signals. Simulation and experimental results show that the adaptive filter can effectively reduce clutter and improve detection capability.
Title: Adaptive filtering and target detection for ultrasonic backscattered signal
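The abstract does not state which adaptive algorithm is used, so as an illustration here is the common least-mean-squares (LMS) filter, a typical choice for adaptive clutter rejection: the filter learns whatever part of the desired signal is correlated with a reference input, and the residual error is the "cleaned" signal.

```python
# Generic LMS adaptive filter sketch (a common choice for adaptive clutter
# rejection; not necessarily the algorithm used in the paper).
import math

def lms_filter(x, d, taps=4, mu=0.05):
    """Adapt FIR weights so the filtered x tracks the reference d.

    x: input (e.g. clutter reference), d: desired signal.
    Returns the error sequence e = d - y, i.e. d with the component
    correlated with x removed.
    """
    w = [0.0] * taps
    errors = []
    for n in range(len(x)):
        # Most recent `taps` input samples (zero-padded at the start).
        window = [x[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wk * xk for wk, xk in zip(w, window))
        e = d[n] - y
        w = [wk + 2 * mu * e * xk for wk, xk in zip(w, window)]
        errors.append(e)
    return errors

# Toy demo: d is a scaled copy of x (pure "clutter"); after adaptation
# the residual error should shrink toward zero.
x = [math.sin(0.3 * n) for n in range(200)]
d = [0.8 * xn for xn in x]
e = lms_filter(x, d)
early = sum(abs(v) for v in e[:20]) / 20
late = sum(abs(v) for v in e[-20:]) / 20
print(late < early)   # True once the filter has adapted
```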
Pub Date: 2010-05-20 | DOI: 10.1109/EIT.2010.5612132
G. Swaroop, P. R. Reddy, M. L. R. Teja
Water is a ubiquitous chemical substance composed of hydrogen and oxygen, and it is vital for all known forms of life; clean, fresh drinking water is essential for humans and other life forms. Access to safe and hygienic drinking water has improved steadily and substantially over the last decades in almost every part of the world. Water pollution, the contamination of water bodies by pollutants discharged directly or indirectly into them, remains a major global problem that leads to dangerous water-borne diseases and physical deformities. In this paper, a new methodology is proposed to continuously monitor the impurities present in water. The impurity content is monitored and indicated by a microprocessor-controlled instrument.
Title: Hygieia: domestic online monitor of water pollution
Pub Date: 2010-05-20 | DOI: 10.1109/EIT.2010.5612105
Phares A. Noel, S. Ganesan
The purpose of this research is to determine the viability and theoretical performance of a task distribution scheme utilizing Multi-Installment Load Distribution with Results Collection based on Divisible Load Theory (DLT), along with the challenges that would be encountered and the criteria to consider when attempting to employ such a scheme. This is done by providing, for the first time, the theoretical performance details of a heterogeneous computational platform utilizing Divisible Load Scheduling (DLS) with various sizes of result load fractions, under different conditions of network communication and participating processor performance. The system under consideration schedules jobs with a divisible load scheme that distributes arbitrarily divisible computational loads among eligible processors in a bus-based distributed computing environment, incorporating both the multi-installment scheme of DLT and the results-collection phase. The primary contribution of this research is insight into the impact of the size of the result load fraction on the overall task execution time, together with identification of other system characteristics that influence the performance of a system using Divisible Load Scheduling. The assertion is that under certain system configurations and performance criteria, care must be taken in adopting this scheme: the size of the result fraction must be taken into account when estimating the performance of such a system.
Title: Performance analysis of Divisible Load Scheduling utilizing Multi-Installment Load Distribution with varying sizes of result load fractions
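For context, the classical single-installment case of Divisible Load Theory on a bus network, ignoring results collection, has a simple closed form: the load fractions are chosen so that every processor finishes computing at the same instant. The sketch below illustrates only that baseline principle, not the paper's multi-installment analysis with a results-collection phase, and the speed parameters are invented.

```python
# Single-installment DLT on a bus network (no results collection):
# sequential distribution from the master, fractions chosen so that all
# processors finish simultaneously. Parameters below are illustrative.

def load_fractions(w, z):
    """w[i]: computation time per unit load on processor i;
    z: communication time per unit load on the shared bus.
    Returns fractions alpha[i] summing to 1."""
    alpha = [1.0]
    for i in range(1, len(w)):
        # Equal-finish-time condition: alpha[i-1]*w[i-1] = alpha[i]*(z + w[i])
        alpha.append(alpha[i - 1] * w[i - 1] / (z + w[i]))
    total = sum(alpha)
    return [a / total for a in alpha]

def finish_times(alpha, w, z):
    """Completion time of each processor under sequential bus distribution."""
    times, sent = [], 0.0
    for a, wi in zip(alpha, w):
        sent += a * z                 # bus is busy until this chunk arrives
        times.append(sent + a * wi)   # then the processor computes its chunk
    return times

alpha = load_fractions(w=[2.0, 3.0, 4.0], z=0.5)
t = finish_times(alpha, [2.0, 3.0, 4.0], 0.5)
print([round(x, 6) for x in t])   # all three finish at the same time
```

Adding a results-collection phase, as the paper does, changes the timing equations: each processor must also ship its result fraction back over the bus, which is exactly why the size of the result fraction affects the optimal split.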
Pub Date: 2010-05-20 | DOI: 10.1109/EIT.2010.5612191
S. Ababneh, M. Gurcan
The segmentation of bones in the knee region is one of the first essential steps toward further analysis, classification, and discovery of osteoarthritis imaging biomarkers. In this paper, an efficient graph-cut-based segmentation algorithm is proposed. One of the challenges in current graph-cut schemes is properly distinguishing between regions of interest (ROI) and background regions whose features are very similar to those of the ROI. Since obtaining a highly discriminative cost function is not always feasible, many algorithms require user interaction to provide an extensive number of seed points. In this paper, a new approach is proposed that uses efficient content-based features to achieve segmentation without any user interaction. Experimental results on actual knee MR images demonstrate the effectiveness of the proposed scheme, with an average accuracy of 95% using the Zijdenbos similarity index.
Title: An efficient graph-cut segmentation for knee bone osteoarthritis medical images
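A toy version of binary graph-cut segmentation shows the mechanics the paper builds on: unary terms from intensity models, pairwise smoothness between neighbors, and a min-cut computed by max-flow. The intensity values and smoothness weight below are invented for illustration; the paper's cost function uses content-based features instead of simple intensity distances.

```python
# Toy binary graph-cut segmentation on a 1D chain of pixels:
# s-links/t-links carry unary costs, neighbor links carry smoothness,
# and the min cut (via Edmonds-Karp max-flow) yields the labeling.
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp on a capacity dict {u: {v: c}}; mutates residuals."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                 # reconstruct the augmenting path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= push
            cap[v][u] += push
        flow += push

def segment(intensities, obj_mean, bg_mean, smooth=5.0):
    """Label each pixel 'obj' or 'bg' via min-cut on a chain graph."""
    cap = defaultdict(lambda: defaultdict(float))
    s, t = "s", "t"
    for i, x in enumerate(intensities):
        cap[s][i] += abs(x - bg_mean)   # paid if i ends up labeled bg
        cap[i][t] += abs(x - obj_mean)  # paid if i ends up labeled obj
        if i + 1 < len(intensities):    # pairwise smoothness term
            cap[i][i + 1] += smooth
            cap[i + 1][i] += smooth
    max_flow(cap, s, t)
    # Pixels still reachable from s in the residual graph are "obj".
    reach, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in reach:
                reach.add(v)
                q.append(v)
    return ["obj" if i in reach else "bg" for i in range(len(intensities))]

print(segment([10, 12, 11, 200, 210, 205], obj_mean=205, bg_mean=11))
```

The smoothness weight trades off boundary length against data fidelity; the seed-free aspect of the paper corresponds to deriving `obj_mean`/`bg_mean`-like models automatically from content features rather than from user-clicked seed points.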
Pub Date: 2010-05-20 | DOI: 10.1109/EIT.2010.5612183
Liljana Aleksovska-Stojkovska, S. Loskovska
Constructing the Knowledge Base (KB) of a Clinical Decision Support System (CDSS) is a crucial task that largely determines the success of the CDSS as a whole. The goal is to collect medical knowledge from the relevant sources, systematize it, and represent it in a formal manner that is both human-understandable and computer-interpretable. There are many different methodologies for the acquisition and representation of medical knowledge. This paper reviews and compares several of these methodologies to identify what has been achieved in the past and to provide directions for future research and improvements.
Title: Clinical Decision Support Systems: Medical knowledge acquisition and representation methods
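As one concrete example of the representations such surveys compare, production rules evaluated by forward chaining can be sketched in a few lines. The rules below are invented placeholders to show the mechanism, not clinical knowledge from the paper.

```python
# Minimal production-rule knowledge base with forward chaining:
# each rule is (conditions, conclusion); rules fire until no new fact
# can be derived. Rule content is invented for illustration only.
RULES = [
    ({"fever": True, "cough": True}, "suspect_respiratory_infection"),
    ({"suspect_respiratory_infection": True, "spo2_low": True},
     "recommend_chest_xray"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions hold until a fixed point."""
    facts = dict(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if all(facts.get(k) == v for k, v in conditions.items()):
                if not facts.get(conclusion):
                    facts[conclusion] = True
                    changed = True
    return facts

result = forward_chain({"fever": True, "cough": True, "spo2_low": True}, RULES)
print(sorted(k for k, v in result.items() if v))
```

Rule-based KBs are only one of the representations reviewed; the same clinical logic can also be expressed in frames, ontologies, or guideline formalisms, each with different trade-offs between human readability and machine interpretability.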