Pub Date: 2002-07-08 · DOI: 10.1109/ICIF.2002.1021139
L. Hubert‐Moy, S. Corgne, G. Mercier, B. Solaiman
In intensive agricultural regions, accurate assessment of the spatial and temporal variation of winter vegetation cover is a key indicator of water transfer processes, essential for controlling land management and supporting local decision making. Spatial prediction modeling of winter bare soils is complex, and uncertainty must be introduced into models of land use and cover change, especially where high spatial and temporal variability is encountered. Dempster's fusion rule is used in the present study to spatially predict the location of winter bare fields for the next season on a watershed located in an intensive agricultural region. The model is expressed as a function of past-observed bare soils, field size, distance from farm buildings, agro-environmental actions, and production quotas per hectare. The model correctly predicted the presence of bare soils over four-fifths of the total area. The spatial distribution of misrepresented fields is a good indicator for identifying change factors.
Title: Land use and land cover change prediction with the theory of evidence: a case study in an intensive agricultural region of France (Proceedings of the Fifth International Conference on Information Fusion, FUSION 2002)
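Dempster's rule of combination, on which the prediction model above rests, can be sketched as follows. The two-element frame {bare, covered} matches the study's bare-soil question, but the mass values and the two example sources are hypothetical illustrations, not the paper's actual assignments.

```python
def dempster_combine(m1, m2):
    """Combine two basic mass assignments keyed by frozenset focal elements."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # Dempster normalization by 1 - K, where K is the conflict mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

BARE, COV = frozenset({"bare"}), frozenset({"covered"})
THETA = BARE | COV  # the full frame (ignorance)

# Hypothetical sources: past-observed bare soils, and field size
m1 = {BARE: 0.6, COV: 0.1, THETA: 0.3}
m2 = {BARE: 0.5, COV: 0.2, THETA: 0.3}
fused = dempster_combine(m1, m2)
```

Note how mass left on the full frame THETA models the source's own uncertainty, which is exactly what the abstract means by "introducing uncertainty" into the change model.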
Pub Date: 2002-07-08 · DOI: 10.1109/ICIF.2002.1020904
E. Jones, N. Denis, D. Hunter
The efficient management of large collections of fusion hypotheses presents a critical challenge for scaling high-level information fusion systems to solve large problems. We motivate this challenge in the context of two ALPHATECH research projects, and discuss several partial solutions. A recurring theme is the exploitation of space-efficient, factored representations of multiple hypotheses to enable efficient search for good hypotheses.
Title: Hypothesis management for information fusion
Pub Date: 2002-07-08 · DOI: 10.1109/ICIF.2002.1020953
S. Challa, B. Vo, Xuezhi Wang
Most target tracking algorithms implicitly assume that the target exists. Only a few techniques address the target existence problem along with target tracking. For example, the Integrated Probabilistic Data Association (IPDA) filter addresses the target tracking and target existence problems simultaneously, and it does so under an at-most-one-target assumption. In recent times, random sets have been proposed as a general framework for the multiple target tracking problem. However, their relationship to well-understood existing tracking algorithms such as IPDA has not been explored. In this paper, we show that under appropriate conditions random sets provide an appropriate mathematical framework for solving the joint target existence and state estimation problem, and we subsequently show that this framework reduces to IPDA under appropriate simplifying assumptions.
Title: Bayesian approaches to track existence - IPDA and random sets
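The existence recursion at the heart of IPDA can be sketched under strong simplifying assumptions: a two-state Markov model for target existence and a single scalar likelihood ratio per scan standing in for the full measurement-association terms. The transition probabilities below are hypothetical, and a real IPDA filter performs this update jointly with the state estimate.

```python
def predict_existence(p_prev, p_stay=0.98, p_birth=0.02):
    """Markov-chain prediction of the existence probability: an existing
    target persists with p_stay; a non-existent one appears with p_birth.
    (Hypothetical transition probabilities for illustration.)"""
    return p_stay * p_prev + p_birth * (1.0 - p_prev)

def update_existence(p_pred, likelihood_ratio):
    """Bayes update given the likelihood ratio of the scan's data under
    'target exists' vs 'target does not exist' (simplified single-scan
    form of the existence update IPDA performs alongside tracking)."""
    num = likelihood_ratio * p_pred
    return num / (num + (1.0 - p_pred))

# Data three times likelier under existence raises the existence belief
p_post = update_existence(predict_existence(0.5), 3.0)
```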
Pub Date: 2002-07-08 · DOI: 10.1109/ICIF.2002.1020918
N. Singpurwalla
It is fairly easy to calculate the reliability of a network with independent nodes by using a technique called pivoting. However, when there is dependence, this calculation will prove inaccurate and a model for the dependence is required. This paper considers the issues involved in developing a suitable model. Particular emphasis is placed on ensuring the calculations and distributions involved do not become intractable for large networks. Various potential distributions are discussed. Strategies are suggested for simplifying the dependence. A model for cascading failures is proposed.
Title: Dependence in network reliability
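The pivoting (factoring) technique the abstract takes as its starting point conditions on one edge at a time: R = p_e · R(e works) + (1 − p_e) · R(e fails). A minimal sketch for two-terminal reliability with independent edges follows; it uses the bare recursion without the series-parallel reductions a practical implementation would add, and it is exactly the independence assumption that the paper argues breaks down.

```python
def st_reliability(edges, probs, s, t):
    """Two-terminal (s-t) reliability by pivoting on edges, assuming
    independent edge failures. edges: list of (u, v); probs: parallel
    list of per-edge working probabilities."""
    def connected(up):
        # Is t reachable from s using only the working ('up') edges?
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            if u == t:
                return True
            for (a, b), ok in zip(edges, up):
                if ok:
                    if a == u and b not in seen:
                        seen.add(b); stack.append(b)
                    elif b == u and a not in seen:
                        seen.add(a); stack.append(a)
        return False

    def recurse(state):
        # Pivot on the first undecided edge; leaves are full edge states.
        i = next((k for k, v in enumerate(state) if v is None), None)
        if i is None:
            return 1.0 if connected(state) else 0.0
        up, dn = list(state), list(state)
        up[i], dn[i] = True, False
        return probs[i] * recurse(up) + (1 - probs[i]) * recurse(dn)

    return recurse([None] * len(edges))

series = st_reliability([(0, 1), (1, 2)], [0.9, 0.9], 0, 2)    # 0.9 * 0.9
parallel = st_reliability([(0, 1), (0, 1)], [0.9, 0.8], 0, 1)  # 1 - 0.1 * 0.2
```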
Pub Date: 2002-07-08 · DOI: 10.1109/ICIF.2002.1020965
C. Anken, N. Gemelli, P. LaMonica, R. Mineo, J. Spina
The premise of this paper is that a combination of information extraction techniques, knowledge bases, and natural language processing technology can assist the intelligence analyst by providing higher-level fusion capabilities to support the decision-making process. The paper examines programs being researched by the Air Force Research Laboratory's Information Directorate and the tools that have evolved from them. These programs include the DARPA-sponsored High Performance Knowledge Bases (HPKB), Rapid Knowledge Formation (RKF), and Evidence Extraction and Link Discovery (EELD) efforts. The tools include the CYC knowledge base, the Intelligent Mining Platform for the Analysis of Counter Terrorism (IMPACT), and the START natural language query system. By exploiting and leveraging the strengths of each system, we believe that a high level of information fusion is possible.
Title: Intelligent systems technology for higher level fusion
Pub Date: 2002-07-08 · DOI: 10.1109/ICIF.2002.1020988
Md. Khayrul Bashar, N. Ohnishi
This paper proposes a new scheme for fusing cortex transform and brightness-based features obtained by a local windowing operation. Energy features are obtained by applying the popular cortex transform within a sliding window rather than in the conventional way, while three features, namely directional surface density (DSD), normalized sharpness index (NSI), and normalized frequency index (NFI), are defined as measures of pixel brightness variation. Fusion by simple vector tagging as well as by correlation is performed in the feature space, and classification is then done using a minimum-distance classifier on the fused vectors. Interestingly, the brightness features, though inferior on some natural images, often produce smoother texture boundaries in mosaic images, whereas the energy features show the opposite behavior. This complementary property is exploited through vector fusion for robust classification of multi-texture images obtained from the Brodatz album and the VisTex database. Classification outcomes with confusion matrix analysis show the robustness of the scheme.
Title: Fusing cortex transform and intensity based features for image texture classification
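The final step above, a minimum-distance classifier applied to the fused feature vectors, can be sketched as follows. The four-component layout (one cortex energy value plus DSD, NSI, NFI) and the class means are hypothetical stand-ins for the paper's actual fused vectors.

```python
def min_distance_classify(x, class_means):
    """Assign x to the class whose mean vector is nearest in squared
    Euclidean distance, after per-pixel features have been fused
    (concatenated) into a single vector."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(class_means, key=lambda c: dist2(x, class_means[c]))

# Hypothetical fused vectors: [cortex energy, DSD, NSI, NFI]
means = {"grass": [0.8, 0.2, 0.1, 0.3], "brick": [0.3, 0.7, 0.6, 0.5]}
label = min_distance_classify([0.75, 0.25, 0.15, 0.35], means)
```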
Pub Date: 2002-07-08 · DOI: 10.1109/ICIF.2002.1021219
F. D'Agostino, A. Farinelli, G. Grisetti, L. Iocchi, D. Nardi
The goal of the project, which is currently under development, is to design tools to monitor the situation after a large-scale disaster, with a particular focus on the tasks of situation assessment and high-level information fusion, as well as on the issues that arise in coordinating agent actions based on the acquired information. The development environment is based on the RoboCup-Rescue simulator: a simulation environment used for the RoboCup-Rescue competition that allows the design both of agents operating in the scenario and of simulators modeling various aspects of the situation, including a graphical interface for monitoring the disaster site. Our project is focused on three aspects: modeling in the simulator a scenario devised from the analysis of a real case study; extending the simulator to enable experimentation with various communication and information fusion schemes; and a framework for developing agents that are capable of constructing a global view of the situation and of distributing specific information to other agents in order to drive their actions.
Title: Monitoring and information fusion for search and rescue operations in large-scale disasters
Pub Date: 2002-07-08 · DOI: 10.1109/ICIF.2002.1021003
In this work, various linear predictive feature vectors were used to train three different neural-network classifiers for the task of isolated vowel recognition. The features used included linear prediction filter coefficients, reflection coefficients, log area ratios, and the linear predictive cepstrum. The three neural network classifiers used are the multilayer perceptron, the radial basis function network, and the probabilistic neural network. The linear predictive cepstrum of dimension 12 is the best feature, especially when training is done on clean speech and testing is done on noisy speech. Three different classifier fusion strategies (linear fusion, majority voting, and weighted majority voting) were found to improve performance. Linear fusion with varying weights is the best method and is the most robust to noise.
Title: Isolated vowel recognition using linear predictive features and neural network classifier fusion
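Two of the fusion strategies compared above, majority voting and weighted linear fusion of posteriors, can be sketched as follows. The per-classifier outputs and the weights are hypothetical; the paper's weighted majority voting variant would apply the weights to the vote counts instead of the posteriors.

```python
from collections import Counter

def majority_vote(labels):
    """Plain majority voting over per-classifier label decisions."""
    return Counter(labels).most_common(1)[0][0]

def linear_fusion(posteriors, weights):
    """Weighted linear fusion of per-class posterior dicts; the class
    with the highest fused score wins."""
    fused = {}
    for w, post in zip(weights, posteriors):
        for cls, p in post.items():
            fused[cls] = fused.get(cls, 0.0) + w * p
    return max(fused, key=fused.get)

# Hypothetical outputs of the MLP, RBF, and PNN for one vowel frame
posts = [{"a": 0.6, "i": 0.4}, {"a": 0.3, "i": 0.7}, {"a": 0.55, "i": 0.45}]
votes = [max(p, key=p.get) for p in posts]
vote_label = majority_vote(votes)
fused_label = linear_fusion(posts, [0.5, 0.2, 0.3])  # weights favor the MLP
```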
Pub Date: 2002-07-08 · DOI: 10.1109/ICIF.2002.1021179
A. L. Magnus, M. Oxley
Given a finite collection of classifiers trained on n-class data, one wishes to fuse the classifiers to form a new classifier with improved performance. Typically, the fusion is performed at the output level using logical ANDs and ORs. Sometimes classifiers are arrogant and will classify a feature vector without any prior experience (data) to justify their decision. The proposed fusion is based on the arrogance of each classifier and the location of the feature vector with respect to the training data. Given a feature vector x, if any one of the classifiers is an expert on x, then that classifier should dominate the fusion. If the classifiers are confused at x, then the fusion rule should be defined in such a way as to reflect this confusion. If a classifier is arrogant, then its results should not be considered and should thus be filtered out of the fusion process. We give this fusion rule based upon the metrics of veracity and experience.
Title: Fusing and filtering arrogant classifiers
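The filtering idea can be sketched as a gate on classifier experience: a classifier votes on x only if x is close to data it was actually trained on. The distance test and its radius threshold are hypothetical stand-ins for the paper's veracity and experience metrics, and the toy classifiers are illustrations only.

```python
from collections import Counter

def fuse_with_arrogance_filter(x, classifiers, train_sets, radius):
    """Fuse classifier decisions on x, discarding 'arrogant' votes: a
    classifier may vote only if x lies within `radius` of some vector
    in its own training set (its experience)."""
    def experienced(train):
        return any(
            sum((a - b) ** 2 for a, b in zip(x, p)) ** 0.5 <= radius
            for p in train
        )
    votes = [clf(x) for clf, tr in zip(classifiers, train_sets) if experienced(tr)]
    if not votes:
        return None  # every classifier is arrogant at x: abstain
    return Counter(votes).most_common(1)[0][0]

# Two toy classifiers with disjoint experience regions
clf_a = lambda x: "A"
clf_b = lambda x: "B"
label = fuse_with_arrogance_filter(
    (0.1, 0.0), [clf_a, clf_b], [[(0.0, 0.0)], [(10.0, 10.0)]], radius=1.0
)
```

Only clf_a has experience near (0.1, 0.0), so it dominates the fusion, matching the abstract's "expert should dominate" rule.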
Pub Date: 2002-07-08 · DOI: 10.1109/ICIF.2002.1020902
Subhash Challa, M. Palaniswami, A. Shilton
The basic quantity to be estimated in the Bayesian approach to data fusion is the conditional probability density function (CPDF). Computationally efficient particle filtering approaches are becoming more important in estimating these CPDFs. In this approach, IID samples are used to represent the conditional probability densities. However, their application in data fusion is severely limited by the fact that the information is stored in the form of a large set of samples. In practical data fusion systems with limited communication bandwidth, broadcasting this probabilistic information, available as a set of samples, to the fusion center is impractical. Support vector machines, through statistical learning theory, provide a way of compressing information by generating optimal kernel-based representations. In this paper we use SVMs to compress the probabilistic information available in the form of IID samples and apply this to solve the Bayesian data fusion problem. We demonstrate the technique on a multi-sensor tracking example.
Title: Distributed data fusion using support vector machines
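The compression idea, many IID samples replaced by a few weighted kernels that can be broadcast cheaply, can be illustrated as follows. The center-selection rule below (an evenly spaced subset of the sorted samples with uniform weights) is a naive stand-in for the SVM fit the paper describes; it only shows that a small kernel representation can approximate a density carried by many samples.

```python
import math
import random

def gaussian_kernel(x, c, h):
    return math.exp(-((x - c) ** 2) / (2 * h * h)) / (h * math.sqrt(2 * math.pi))

def kde(x, centers, weights, h):
    """Weighted kernel density estimate at point x."""
    return sum(w * gaussian_kernel(x, c, h) for c, w in zip(centers, weights))

def compress_samples(samples, n_centers, h):
    """Toy 'reduced-set' compression: keep a small, evenly spaced subset of
    the sorted samples as kernel centers with uniform weights. A real
    system would fit centers and weights with an SVM as in the paper."""
    s = sorted(samples)
    step = max(1, len(s) // n_centers)
    centers = s[step // 2::step][:n_centers]
    weights = [1.0 / len(centers)] * len(centers)
    return centers, weights

random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(200)]
centers, weights = compress_samples(samples, n_centers=10, h=0.5)
```

Transmitting 10 centers and weights instead of 200 samples is the bandwidth saving the abstract targets; the estimate remains highest near the true mode.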