Link Lifetime-Based Segment-by-Segment Routing Protocol in MANETs
Yujie Chen, Guojun Wang, Sancheng Peng
DOI: 10.1109/ISPA.2008.39

Node mobility is one of the most important factors that may degrade network performance and restrict network scalability in mobile ad hoc networks. An effective way to reduce the impact of node mobility is to select long-lifetime routing paths in the network. We propose a link lifetime-based segment-by-segment routing protocol (LL-SSR) in mobile ad hoc networks, where each node maintains a routing table for its k-hop region. Simulation studies show that LL-SSR has better scalability and a higher packet delivery ratio than GPSR.

Virtual and Pervasive Smart Home Services Using Tangible Mixed Reality
Jae Yeol Lee, Dongwoo Seo, G. Rhee, S. H. Hong, Ji-Seung Nam
DOI: 10.1109/ISPA.2008.92

This paper proposes a new way of providing virtual smart home services using tangible mixed reality (MR), which provides more cost-effective and reliable visualization and simulation of the existing pervasive environment. One of the main characteristics of the proposed approach is its ability to embed virtual objects into the real environment, making it easy to test the feasibility of an existing pervasive smart home. The paper also presents how to support these services by linking contexts to virtual objects and their tangible behaviors. We show the effectiveness and advantages of the proposed approach by demonstrating a tangible smart home testbed. The approach has also been applied to tangible virtual studio authoring and interaction.

Experiments in Adaptive Power Control for Truly Wearable Biomedical Sensor Devices
Ashay Dhamdhere, V. Sivaraman, A. Burdett
DOI: 10.1109/ISPA.2008.96

Emerging body-wearable devices for continuous health monitoring are severely energy constrained, yet they are required to offer high communication reliability under fluctuating channel conditions. Such devices require very careful management of their energy resources in order to prolong their lifetime. In earlier work we proposed dynamic power control as a means of saving precious energy in off-the-shelf sensor devices. In this work we experiment with a real body-wearable device to assess the power savings possible in a realistic setting. We quantify power consumption against packet loss and establish the feasibility of dynamic power control for saving energy in a truly body-wearable setting.

Evaluating Interpolation-Based Power Management
R. Tynan, G. O'Hare
DOI: 10.1109/ISPA.2008.71

Power management for WSNs can take many forms, from adaptively tuning the power consumption of some of a node's components to hibernating the node completely. In the latter case, the competence of the WSN must not be compromised. In general, the competence of a WSN is its ability to perform its function in an accurate and timely fashion. These two related Quality of Service (QoS) metrics are primarily affected by the density of nodes and the latency of data from the environment, respectively. Without adequate density, interesting events may be inadequately observed or missed completely by the application, while stale data could result in event detection occurring too late. Set against this is the fact that the energy consumed by the network is related to the number of active nodes in the deployment. Therefore, given that the nodes have finite power resources, a trade-off exists between the longevity of the network and the QoS it provides, and it is crucial that both aspects are considered when evaluating a power management protocol. In this paper, we present an evaluation of a novel node hibernation technique based on interpolated sensor readings according to four metrics: energy consumption, density, message latency, and the accuracy of an application utilising the data from the WSN. A comparison with a standard WSN that does not engage in power management is also presented, in order to show the overhead of the protocol's operation.

Image Feature Vector Construction Using Interest Point Based Regions
Nishat Ahmad, Gwangwon Kang, Hyunsook Chung, Suchoi Ik, Jong-An Park
DOI: 10.1109/ISPA.2008.27

The paper presents a new approach to content-based image retrieval. The algorithm uses information sampled from around detected corner points in the image. A corner detection approach based on line intersections is employed, using the Hough transform for line detection and then finding intersecting, near-intersecting, or complex-shaped corners. Because affine transformations preserve the collinearity of points on a line and their intersection properties, the corner points obtained in this way retain the much-desired property of repeatability; they therefore yield similar pixel samples under various transformations and are robust to noise. The k-means clustering algorithm is used to assign class labels to the sample mean and variance extracted from the corner regions of a random selection of training images, and these labels are then used to train a Gaussian Bayes classifier that classifies the whole training image database. The histogram of class members in an image is used as its feature vector. The retrieval performance and behavior of the algorithm have been tested using four different similarity measures.

Towards Practical Virtual Server-Based Load Balancing for Distributed Hash Tables
Chyouhwa Chen, Ching-Bang Yao, Sonjie Liang
DOI: 10.1109/ISPA.2008.42

Current virtual server-based load balancing schemes for DHTs have been shown to achieve excellent load balancing effectiveness. However, they face two important issues: they incur extremely high overheads, and they induce severe inconsistency in DHT routing state. We present two fundamental components, virtual server management and active stabilization, whose inclusion into these schemes essentially eliminates these problems. As a result, these schemes not only incur overheads comparable to non-virtual-server-based systems, but also achieve better query performance.

Integrating Security Solutions to Support nanoCMOS Electronics Research
R. Sinnott, Christopher Bayliss, T. Doherty, David B. Martin, C. Millar, G. Stewart, J. Watt, A. Asenov, G. Roy, Scott Roy, C. Davenhall, B. Harbulot, M. Jones
DOI: 10.1109/ISPA.2008.132

The UK Engineering and Physical Sciences Research Council (EPSRC) funded project "Meeting the Design Challenges of nanoCMOS Electronics" (nanoCMOS) is developing a research infrastructure for collaborative electronics research across multiple institutions in the UK, with especially strong industrial and commercial involvement. Unlike other domains, the electronics industry is driven by the necessity of protecting the intellectual property of the data, designs, and software associated with next-generation electronics devices, and it therefore requires fine-grained security. The project also demands seamless access to large-scale high-performance compute resources for atomic-scale device simulations, together with the capability to manage the hundreds of thousands of files, and the associated metadata, that these simulations produce. Within this context, the project has explored a wide range of authentication and authorization infrastructures facilitating compute resource access and providing fine-grained security over numerous distributed file stores and files. We conclude that no single security solution meets the needs of the project. This paper describes our experiences applying X.509-based certificates and public key infrastructures, VOMS, PERMIS, Kerberos, and the Internet2 Shibboleth technologies to nanoCMOS security, and outlines how we are integrating these solutions to provide a complete end-to-end security framework that meets the demands of the nanoCMOS electronics domain.

Architecture for an Offline Parallel Debugger
Karl Lindekugel, A. DiGirolamo, D. Stanzione
DOI: 10.1109/ISPA.2008.125

This paper provides an overview of the GDBase framework for offline parallel debuggers. The framework was designed to become the basis of debugging tools that scale successfully on systems with tens to hundreds of thousands of cores. With several systems coming online at more than 50,000 cores in the past year, debuggers that can run at these scales are now required. The proposed framework offers two features not found in current-generation debugging tools: the ability to debug "offline", and a central database that acts as a repository of debugging information. These two features give the GDBase debugger several advantages. The debugger can be used in conjunction with modern batch systems with low overhead, with user interaction taking place after the parallel system's resources are freed. The use of a database and a simple API allows multiple interfaces and data mining tools to be implemented, providing novel ways of viewing and analyzing debugging data. The database also enables cross-run analysis and the combination of debugging, performance, and system health information. Evidence is provided of the scalability of the framework, along with output from several simple analysis tools that have been implemented.

Fixed Point Decimal Multiplication Using RPS Algorithm
R. K. James, S. Kassim, K. Jacob, S. Sasi
DOI: 10.1109/ISPA.2008.89

Decimal multiplication is an integral part of financial, commercial, and Internet-based computations. This research proposes a novel design for single-digit decimal multiplication that reduces the critical path delay and area of an iterative multiplier. The partial products are generated using single-digit multipliers and are accumulated based on a novel RPS algorithm. The design uses n single-digit multipliers for an n × n multiplication. The latency for the multiplication of two n-digit Binary Coded Decimal (BCD) operands is (n + 1) cycles, and a new multiplication can begin every n cycles. The accumulation of the final partial products and the first iteration of partial product generation for the next set of inputs are performed simultaneously. This iterative decimal multiplier offers low latency and high throughput, and can be extended to decimal floating-point multiplication.

An Innovative Replica Consistency Model in Data Grids
Jih-Sheng Chang, R. Chang
DOI: 10.1109/ISPA.2008.14

In data grids, replication is a critical technology that lowers access delay and saves network bandwidth by duplicating original data in a distributed manner. For data-intensive applications with read/write files, replica consistency is an important issue. In this paper, we propose a novel replica consistency decision model with high adaptability and flexibility that uses a naive Bayesian classifier to improve system performance in data grids. A system prototype has been implemented, and the experimental results demonstrate the effectiveness of the proposed model.
