Multiresolution signal processing by Fourier transform time-frequency correlation analysis
Julian U. Anugom, A. Grigoryan
Pub Date: 2006-04-07 | DOI: 10.1109/TPSD.2006.5507413
In this paper, multiresolution signal processing is described using the continuous Fourier transform rather than the short-time Fourier transform. The inverse Fourier transform is defined by the integral Fourier formula, which is interpreted as the correlation of the function (signal) with cosine waveforms of various frequencies. This is a direct way to perform time-frequency analysis of signals. The Fourier transform is described as the sum of wavelet-like transforms whose analyzing function is a cosine of one period. Properties of these transforms are described, including the inverse formula for reconstructing the signal from the wavelet-like transforms. Examples of applying the transforms to detect the exact location of a high-frequency signal are given.
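The correlation-with-cosines idea can be illustrated numerically. The sketch below, a minimal stand-in for the paper's transform, correlates a signal with a single-period cosine analyzing function at a chosen frequency to localize a high-frequency burst; the sampling rate, frequencies, and burst placement are illustrative, not taken from the paper.

```python
import numpy as np

fs = 1000                                   # sampling rate (Hz), illustrative
t = np.arange(0, 1, 1 / fs)                 # 1 s of samples
signal = np.sin(2 * np.pi * 5 * t)          # low-frequency background
burst = (t > 0.6) & (t < 0.64)              # 40 ms high-frequency burst
signal[burst] += np.sin(2 * np.pi * 100 * t[burst])

def one_period_cosine_correlation(x, freq, fs):
    """Correlate x with a cosine of exactly one period at `freq`."""
    n = int(round(fs / freq))               # samples in one period
    kernel = np.cos(2 * np.pi * freq * np.arange(n) / fs)
    return np.abs(np.correlate(x, kernel, mode="same"))

resp = one_period_cosine_correlation(signal, 100.0, fs)
peak_time = t[np.argmax(resp)]
print(f"burst localized near t = {peak_time:.3f} s")
```

Because the one-period kernel has (near-)zero mean, the slowly varying background correlates weakly and the response peaks inside the burst, which is the localization property the abstract highlights.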
3D non-linear magnetostatic study of MEMS employed for pumping biological fluids
H. Bensaidane, H. Mohellebi, S. H. Ould Ouali, M. Feliachi
Pub Date: 2006-04-07 | DOI: 10.1109/TPSD.2006.5507439
In this paper, a finite element model of a permanent magnet micro-actuator is presented. The latter is coupled with a micro screw pump, which can be used in biomedical applications for pumping biological fluids such as blood (heart surgery, blood treatment machines). First, the electromagnetic problem is solved under a 3D linear magnetostatic assumption. The characteristic evaluated is the magnetic coupling torque. The results obtained under the linear hypothesis agree well with those given in the literature. Then, a nonlinear case study of the permanent magnet magnetization is considered.
Architecture for interoperability and reuse in data mining systems
Aniruddha Kulkarni, R. Hewett
Pub Date: 2006-04-07 | DOI: 10.1109/TPSD.2006.5507468
Data mining systems are mainly built to help users automatically extract useful information from large data sets. As a result, they often lack support for other important practical considerations common in software development (e.g., ease of software modification and maintenance, and portability of the resulting models). This paper studies principles for developing data mining systems from a software engineering perspective. In particular, we propose a framework architecture that provides four desirable characteristics: extensibility, modularity, flexibility, and interoperability. The architecture uses the Pipes and Filters design pattern together with data replication to give the systems a loosely coupled structure. It also facilitates interoperability and reusability of the predictive models obtained from the mining process by means of appropriate interface mechanisms. The proposed architecture promises important advantages that can enhance the usability of data mining systems.
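The Pipes and Filters pattern the abstract names can be sketched in a few lines: each filter is an independent stage consuming the output of the previous one, so stages can be swapped or extended without touching the rest of the pipeline. The filter names and the toy "model" below are illustrative, not the paper's framework.

```python
class Pipeline:
    """Pipes-and-Filters: run data through a chain of independent stages."""
    def __init__(self, *filters):
        self.filters = filters

    def run(self, data):
        for f in self.filters:              # each filter is loosely coupled
            data = f(data)
        return data

def clean(rows):                            # filter 1: drop incomplete rows
    return [r for r in rows if None not in r]

def normalize(rows):                        # filter 2: scale feature to [0, 1]
    lo = min(r[0] for r in rows)
    hi = max(r[0] for r in rows)
    return [((r[0] - lo) / (hi - lo), r[1]) for r in rows]

def fit_threshold_model(rows):              # filter 3: emit a portable model
    pos = [x for x, y in rows if y == 1]
    cut = sum(pos) / len(pos)               # toy "model": mean of positives
    return {"type": "threshold", "cut": cut}

pipeline = Pipeline(clean, normalize, fit_threshold_model)
model = pipeline.run([(2.0, 0), (None, 1), (4.0, 1), (6.0, 1), (8.0, 0)])
print(model)
```

Returning the model as plain data (here a dict) rather than an opaque object is one simple way to get the interoperability and reuse of predictive models that the architecture aims for.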
View-based approach to constructing reliable Home Appliance Control System
V. Chunduru, N. Subramanian
Pub Date: 2006-04-07 | DOI: 10.1109/TPSD.2006.5507461
Oftentimes we find ourselves miles away from home when we recall that we have not closed the garage door or switched off the stove, and using an internet-enabled wireless mobile device we send a command to our Home Appliance Control System (HACS) to close the garage door or switch off the stove. However, how can we be sure that the command was executed and that the desired situation, for example the closed garage door or the switched-off stove, was reached? This paper proposes a technique for constructing a reliable HACS (RHACS) using the concepts of a forward view and a reverse view, where a view includes not only the physical path but also the control intelligence for that path. RHACS not only helps people remotely control devices at home but also increases their confidence that commands took effect with no unanticipated side effects. A brief definition of reliability is the probability that a system works correctly; however, our survey of the literature indicated that there is no consensus on this definition. Our analysis of a typical HACS indicated that its reliability depends on three major factors: the reliability of the software, of the hardware, and of the network. As a case study we considered a HACS configuration that included a washer, dryer, garage door opener, stove, and camera, and explored how the reliability of this system could be improved using the view-based approach. Concentrating on network reliability, we explored three techniques to improve overall system reliability: the standard protocol (X10) for both the forward and reverse views; X10 for the forward view and wired Ethernet for the reverse view; and X10 for the forward view and wireless Bluetooth for the reverse view. We used the NFR Framework to systematically analyze and evaluate the reliability of HACS while accommodating the varying definitions of reliability, and we validated these evaluations using simulations. While further work is needed to determine the effectiveness of this approach for other reliability factors, we believe this study demonstrates the practicality of the view-based approach for methodically analyzing and constructing reliable HACS with almost negligible overhead.
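The forward-view / reverse-view idea can be sketched as a small control loop: the forward view carries the command to the device, and an independent reverse view confirms the device actually reached the desired state. The Device class and retry policy below are illustrative stand-ins, not the paper's X10 / Ethernet / Bluetooth configurations.

```python
class Device:
    def __init__(self, name, flaky=False):
        self.name = name
        self.state = "open"
        self._drops = 2 if flaky else 0     # simulate two lost commands

    def forward(self, command):             # forward view: deliver the command
        if self._drops:
            self._drops -= 1                # command lost in transit
            return
        self.state = command

    def reverse(self):                      # reverse view: report actual state
        return self.state

def reliable_command(device, desired, max_tries=5):
    """Send over the forward view until the reverse view confirms."""
    for attempt in range(1, max_tries + 1):
        device.forward(desired)
        if device.reverse() == desired:     # confirmation, not assumption
            return attempt
    raise RuntimeError(f"{device.name}: command not confirmed")

garage = Device("garage door", flaky=True)
tries = reliable_command(garage, "closed")
print(f"confirmed after {tries} tries")
```

The key design point mirrored here is that the reverse view is a separate path: success is declared only when the observed state matches the commanded state, not when the command is sent.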
Fingerprint verification in multimodal biometrics
Naveena Marupudi, E. John, F. Hudson
Pub Date: 2006-04-07 | DOI: 10.1109/TPSD.2006.5507443
The need to ensure security has spurred the growth of biometric verification. Biometric systems that use a single biometric trait for authentication have some limitations. To overcome them, we are investigating multiple sensors that capture different biometric traits. Multimodal biometrics has the potential to overcome these limitations by improving system security levels and increasing accuracy. This paper focuses on fingerprint verification as part of a fusion of biometric modalities such as voice and fingerprint. Fingerprint verification is one of the most reliable biometric techniques for personal identification. We determined that, to merge the fingerprint system with voice verification, we need to develop our own algorithms for fingerprint verification. We describe the design and implementation of a fingerprint verification system that operates in two stages: minutia extraction and minutia matching. An improved version of a minutiae extraction algorithm identified in the literature is implemented for extracting features from fingerprint images.
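The two stages the abstract names can be sketched with the standard crossing-number test for extraction and a naive aligned-minutiae count for matching. This is a generic baseline on a tiny synthetic image, not the improved algorithm the paper implements.

```python
import numpy as np

def extract_minutiae(skel):
    """Ridge endings (cn == 1) and bifurcations (cn == 3) on a thinned image."""
    minutiae = []
    for r in range(1, skel.shape[0] - 1):
        for c in range(1, skel.shape[1] - 1):
            if not skel[r, c]:
                continue
            # 8 neighbours in circular order for the crossing-number test
            nb = [skel[r-1, c], skel[r-1, c+1], skel[r, c+1], skel[r+1, c+1],
                  skel[r+1, c], skel[r+1, c-1], skel[r, c-1], skel[r-1, c-1]]
            cn = sum(abs(int(nb[i]) - int(nb[(i + 1) % 8])) for i in range(8)) // 2
            if cn == 1:
                minutiae.append((r, c, "ending"))
            elif cn == 3:
                minutiae.append((r, c, "bifurcation"))
    return minutiae

def match_score(a, b, tol=1):
    """Fraction of minutiae in `a` with a same-type partner in `b` within `tol`."""
    hits = sum(any(t == t2 and abs(r - r2) <= tol and abs(c - c2) <= tol
                   for r2, c2, t2 in b) for r, c, t in a)
    return hits / max(len(a), 1)

# A thinned horizontal ridge: exactly two ridge endings, one at each end.
skel = np.zeros((5, 7), dtype=bool)
skel[2, 1:6] = True
m = extract_minutiae(skel)
print(m, match_score(m, m))
```

Real systems add alignment (rotation/translation) before matching; the tolerance-window count above only shows the shape of the matching stage.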
Dual Stage Optical Label Switch using out-of-band wavelength and code optical properties
J. Medrano, V. Gonzalez, A. Musa, M. Shadaram
Pub Date: 2006-04-07 | DOI: 10.1109/TPSD.2006.5507431
One method of establishing circuits in an optical network is Optical Multiple Protocol Lambda Switching (OMPλS). In OMPλS, the wavelength of the optical carrier is used as an out-of-band label to route data through the network without requiring periodic conversion to the electrical domain. This paper presents a method of adding a second dimension of optical encoding, as an out-of-band label, for routing data through a label-switched network. Combining wavelength and code multiplexing techniques yields an O3 circuit-switched network with a data capacity of up to 5.7 Tb/s on each point-to-point link in the network. The proposed architecture uses two stages to process and map labels for each bit transmitted through the network. The architecture of the Dual Stage Optical Label Switch (DSOLS) is presented.
Natural language order - a streamlined approach to modeling
B. Dinan, F. Hudson
Pub Date: 2006-04-07 | DOI: 10.1109/TPSD.2006.5507457
Computer models of biological systems need to manage both form and function. Transforming scientific observation into computer code requires a means of describing form and function that fits both the biological and the computational approach. We propose the use of formal natural language order. Our hypothesis is that by designing such a process we can provide a tool that both "feels" natural to the researcher and is easy for the designer to convert directly into accurate and realistic models of complex biological functions. We modeled skeletal muscle to demonstrate the approach. Supported by the UTSA Summer Mentor Program.
On effective use of reliability models and defect data in software development
R. Hewett, Aniruddha Kulkarni, R. Seker, C. Stringfellow
Pub Date: 2006-04-07 | DOI: 10.1109/TPSD.2006.5507460
In software technology today, development methodologies such as extreme programming and open source development increasingly use feedback from customer testing, making customer defect data more readily available. This paper proposes an effective use of reliability models and defect data to help managers make software release decisions by applying a strategy that selects the reliability model best fitting the customer defect data as testing progresses. We validate the proposed approach in an empirical study using a dataset of defect reports obtained from testing three releases of a large medical system. The paper describes detailed results of our experiments and concludes with suggested guidelines on the use of reliability models and defect data.
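The model-fitting step behind such release decisions can be sketched with a classic software-reliability growth model. Below, the Goel-Okumoto mean-value function mu(t) = a * (1 - exp(-b * t)) is fitted to cumulative defect counts by a coarse grid search; the data, parameter grid, and release threshold are all illustrative, not the paper's medical-system dataset or its model-selection strategy.

```python
import math

weeks = list(range(1, 11))                       # illustrative test weeks
cumulative_defects = [12, 21, 28, 33, 37, 40, 42, 43, 44, 45]

def sse(a, b):
    """Sum of squared errors of the Goel-Okumoto curve against the data."""
    return sum((a * (1 - math.exp(-b * t)) - d) ** 2
               for t, d in zip(weeks, cumulative_defects))

# Coarse grid search over total-defect count a and detection rate b.
best = min(((sse(a, b / 100), a, b / 100)
            for a in range(40, 81)
            for b in range(5, 60)), key=lambda triple: triple[0])
_, a_hat, b_hat = best

residual = a_hat - cumulative_defects[-1]        # defects predicted still latent
release_ok = residual <= 5                       # illustrative release criterion
print(f"a={a_hat}, b={b_hat:.2f}, predicted residual defects={residual}")
```

The release decision then reads directly off the fitted model: a_hat estimates the eventual total defect count, so a_hat minus the defects already found approximates what testing has not yet caught.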
Method of reconstruction of 3-D PET images from projections
Srikrishna Alla, A. Grigoryan
Pub Date: 2006-04-07 | DOI: 10.1109/TPSD.2006.5507438
In this paper, a novel method of reconstructing 3-D Positron Emission Tomography (PET) images is proposed. The method is based on the concept of non-traditional tensor and paired forms of representation of the 3-D image with respect to the 3-D discrete Fourier transform (DFT). Such representations use a minimal number of projections. The proposed algorithm is described in detail for an N × N × N image, where N is a power of two. A multi-ring scanner with contiguous rings of detectors stacked on top of each other is considered. The measurement data set containing the specified projections of the 3-D image is generated according to the paired representation, and the proposed algorithm is tested on the data. The algorithm for selecting the required number of projections is described and illustrated for a 32 × 32 × 32 image.
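The link between projections and the DFT that such methods build on can be checked numerically: the 1-D DFT of an image summed along one axis equals one line of the image's 2-D DFT (the discrete projection-slice fact). The 2-D demo below only illustrates why a set of projections can determine the Fourier data; it is not the paper's tensor/paired representation, which selects a minimal set of such projections in 3-D.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))                    # stand-in for one image slice

projection = img.sum(axis=0)                # project along rows
slice_from_projection = np.fft.fft(projection)

full_dft = np.fft.fft2(img)
# Row 0 of the 2-D DFT is exactly the 1-D DFT of that projection:
match = np.allclose(slice_from_projection, full_dft[0])
print(match)
```

Each projection direction fills in one line of Fourier data, so reconstruction amounts to choosing enough directions to cover the transform and then inverting it.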
Comparative study of RSS-based collaborative localization methods in sensor networks
Avanthi Koneru, Xinrong Li, M. Varanasi
Pub Date: 2006-04-07 | DOI: 10.1109/TPSD.2006.5507424
Reliable localization is an essential building block of sensor networks. Many techniques take advantage of received signal strength (RSS) measurements for location estimation in wireless sensor networks, since almost all wireless systems can measure RSS without special hardware. In this paper, two such techniques recently proposed for collaborative location estimation, the MDS method and the MLE, are studied in detail. From the theoretical formulation of the RSS-based location estimation problem, the MLE appears more appropriate than the MDS method. However, simulation studies of both algorithms, which are iterative in nature, show that the MLE is more sensitive to the initial estimate than the MDS method. Therefore, we propose to combine the two techniques in series: an estimate is first obtained with the MDS method, taking advantage of its better convergence, and the MLE is then employed to fine-tune the MDS solution and remove the modeling errors inherent in the MDS method. Extensive simulations demonstrate that the new integrated method, named MDS-MLE, consistently outperforms both the MDS method and the MLE across various simulation scenarios. We also address many important issues in collaborative localization, including the effects of sensor node density, reference node density, and different deployment strategies for reference nodes.
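The coarse-then-refine structure of such a two-stage estimator can be sketched on a simplified problem: one unknown node, distances inverted from a log-distance path-loss RSS model, a crude initial estimate, then iterative refinement of the residuals. The anchors, path-loss parameters, and noiseless measurements are illustrative stand-ins for the paper's full MDS + MLE formulation.

```python
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])

p0, n_exp = -40.0, 2.0                          # log-distance path-loss model
d_true = np.linalg.norm(anchors - true_pos, axis=1)
rss = p0 - 10 * n_exp * np.log10(d_true)        # noiseless RSS readings
d_est = 10 ** ((p0 - rss) / (10 * n_exp))       # invert the model to distances

# Stage 1 (coarse initial estimate, stand-in for the MDS step):
x = anchors.mean(axis=0)

# Stage 2 (refinement, stand-in for the MLE step): gradient descent
# on 0.5 * sum((||x - a_i|| - d_est_i)^2).
for _ in range(500):
    diff = x - anchors                          # shape (4, 2)
    d = np.linalg.norm(diff, axis=1)
    grad = ((d - d_est) / d) @ diff             # gradient of the cost
    x = x - 0.05 * grad

err = np.linalg.norm(x - true_pos)
print(f"estimate {x}, error {err:.4f}")
```

The sensitivity the abstract reports shows up here too: the refinement objective is non-convex, so a reasonable stage-1 estimate is what keeps stage 2 from settling in a poor local minimum.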