Information sharing requirements and framework needed for community cyber incident detection and response
K. Harrison, G. White
Pub Date: 2012-12-01 | DOI: 10.1109/THS.2012.6459893
Communities, and the critical infrastructure that they rely upon, are becoming ever more tightly integrated into cyberspace. At the same time, communities are experiencing increasing activity and sophistication from a variety of threat agents. The effect of cyber attacks on communities has been observed, and the frequency and devastation of these attacks can only increase in the foreseeable future. Early detection of these attacks is critical for a fast and effective response. We propose detecting community cyber incidents by comparing indicators from community members across space and time. Performing spatiotemporal differentiation on these indicators requires that community members, such as private and governmental organizations, share information about them. However, community members are, for good reasons, reluctant to share sensitive, security-related information. Additionally, sharing large amounts of information with a trusted, centralized location introduces scalability and reliability problems. In this paper, we define the information sharing requirements necessary for fast, effective community cyber incident detection and response while addressing both privacy and scalability concerns. Furthermore, we introduce a framework to meet these requirements and analyze a proof-of-concept implementation.
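The abstract does not spell out how the spatiotemporal comparison works, so the following is only a minimal sketch of the general idea, not the authors' framework: an indicator that spikes at several community members in the same time window is treated as a community-level event. The member names, indicator, baseline, and thresholds are invented for illustration.

```python
from collections import defaultdict

# Toy indicator reports: (member, time_window, indicator, count).
# Members, indicator names, and numbers are illustrative only.
reports = [
    ("city_gov", 0, "failed_logins", 12),
    ("water_utility", 0, "failed_logins", 3),
    ("city_gov", 1, "failed_logins", 480),
    ("water_utility", 1, "failed_logins", 350),
    ("hospital", 1, "failed_logins", 410),
]

BASELINE = 50      # assumed per-member baseline for this indicator
MIN_MEMBERS = 2    # spatial criterion: spike seen at two or more members

def community_incidents(reports):
    """Group reports by (time_window, indicator) and flag windows in which
    multiple members exceed their baseline at the same time."""
    spikes = defaultdict(set)
    for member, window, indicator, count in reports:
        if count > BASELINE:
            spikes[(window, indicator)].add(member)
    return {key: members for key, members in spikes.items()
            if len(members) >= MIN_MEMBERS}

print(community_incidents(reports))
# window 1 is flagged because three members spike together
```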
{"title":"Information sharing requirements and framework needed for community cyber incident detection and response","authors":"K. Harrison, G. White","doi":"10.1109/THS.2012.6459893","DOIUrl":"https://doi.org/10.1109/THS.2012.6459893","url":null,"abstract":"Communities, and the critical infrastructure that they rely upon, are becoming ever increasingly integrated into cyberspace. At the same time, communities are experiencing increasing activity and sophistication from a variety of threat agents. The effect of cyber attacks on communities has been observed, and the frequency and devastation of these attacks can only increase in the foreseeable future. Early detection of these attacks is critical for a fast and effective response. We propose detecting community cyber incidents by comparing indicators from community members across space and time. Performing spatiotemporal differentiation on these indicators requires that community members, such as private and governmental organizations, share information about these indicators. However, community members are, for good reasons, reluctant to share sensitive security related information. Additionally, sharing large amounts of information with a trusted, centralized location introduces scalability and reliability problems. In this paper we define the information sharing requirements necessary for fast, effective community cyber incident detection and response, while addressing both privacy and scalability concerns. Furthermore, we introduce a framework to meet these requirements, and analyze a proof of concept implementation.","PeriodicalId":355549,"journal":{"name":"2012 IEEE Conference on Technologies for Homeland Security (HST)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131871917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Maritime threat detection using plan recognition
B. Auslander, K. Gupta, D. Aha
Pub Date: 2012-11-01 | DOI: 10.1109/THS.2012.6459857
Existing algorithms for maritime threat detection employ a variety of normalcy models that are probabilistic and/or rule-based. Unfortunately, they can be limited in their ability to model the subtlety and complexity of multiple vessel types and their spatio-temporal events, yet their representation is needed to accurately detect anomalies in maritime scenarios. To address these limitations, we apply plan recognition algorithms to maritime anomaly detection. In particular, we examine hierarchical task network (HTN) and case-based algorithms for plan recognition, which detect anomalies by generating expected behaviors for use as a basis for threat detection. We compare their performance with a behavior recognition algorithm on simulated riverine maritime traffic. On a set of simulated maritime scenarios, the plan recognition algorithms outperformed the behavior recognition algorithm, except for one reactive behavior task in which the inverse occurred. Furthermore, our case-based plan recognizer outperformed our HTN algorithm. On the short-term reactive planning scenarios, the plan recognition algorithms outperformed the behavior recognition algorithm on routine plan following; however, they were significantly outperformed on the anomalous scenarios.
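As a rough illustration of the case-based flavor of plan recognition described above (not the authors' algorithms), the sketch below matches an observed vessel action sequence against a small library of normal plans and flags the track as anomalous when even the best match is poor. The plans, action names, and threshold are assumptions made for the example.

```python
from difflib import SequenceMatcher

# Library of "normal" plans as action sequences (illustrative only).
plan_library = {
    "ferry_crossing": ["depart_dock", "cross_channel", "arrive_dock"],
    "fishing_run": ["depart_dock", "transit_upriver", "loiter",
                    "transit_downriver", "arrive_dock"],
}

def best_plan_match(observed):
    """Return (plan_name, similarity) for the closest normal plan."""
    scored = ((name, SequenceMatcher(None, plan, observed).ratio())
              for name, plan in plan_library.items())
    return max(scored, key=lambda item: item[1])

def is_anomalous(observed, threshold=0.6):
    """Flag the track if no normal plan explains the observed actions well."""
    _, score = best_plan_match(observed)
    return score < threshold

print(is_anomalous(["depart_dock", "cross_channel", "arrive_dock"]))        # False
print(is_anomalous(["depart_dock", "loiter", "approach_restricted_zone"]))  # True
```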
{"title":"Maritime threat detection using plan recognition","authors":"B. Auslander, K. Gupta, D. Aha","doi":"10.1109/THS.2012.6459857","DOIUrl":"https://doi.org/10.1109/THS.2012.6459857","url":null,"abstract":"Existing algorithms for maritime threat detection employ a variety of normalcy models that are probabilistic and/or rule-based. Unfortunately, they can be limited in their ability to model the subtlety and complexity of multiple vessel types and their spatio-temporal events, yet their representation is needed to accurately detect anomalies in maritime scenarios. To address these limitations, we apply plan recognition algorithms for maritime anomaly detection. In particular, we examine hierarchical task network (HTN) and case-based algorithms for plan recognition, which detect anomalies by generating expected behaviors for use as a basis for threat detection. We compare their performance with a behavior recognition algorithm on simulated riverine maritime traffic. On a set of simulated maritime scenarios, these plan recognition algorithms outperformed the behavior recognition algorithm, except for one reactive behavior task in which the inverse occurred. Furthermore, our case-based plan recognizer outperformed our HTN algorithm. On the short-term reactive planning scenarios, the plan recognition algorithms outperformed the behavior recognition algorithm on routine plan following. However, they are significantly outperformed on the anomalous scenarios.","PeriodicalId":355549,"journal":{"name":"2012 IEEE Conference on Technologies for Homeland Security (HST)","volume":"62 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121806511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling of a Regional Hub Reception Center to improve the speed of an urban area evacuation
A. Kirby, J. E. Dietz, C. Wojtalewicz
Pub Date: 2012-11-01 | DOI: 10.1109/THS.2012.6459895
The city of Chicago, Illinois, is making strides toward becoming better prepared for large-scale disasters. One idea is the Regional Hub Reception Center (RHRC), which converts an existing facility into a temporary shelter for evacuees in the event that a 10-kiloton nuclear device is detonated in the center of downtown. The RHRC would provide evacuees with basic needs and register them for assignment to a more permanent shelter. The Regional Catastrophic Planning Team needs to know whether its estimates for time, personnel, and resources are accurate. The best and most reliable way to test what will be needed is to perform simulations; however, large full-scale exercises are time-consuming and expensive. A computer-generated model, by contrast, can simulate many variables and scenarios to test the RHRC quickly, cheaply, and repeatedly, making it more effective if it is ever used. AnyLogic, a multi-paradigm modeling tool, allows users to build agent-based, discrete event, and system dynamics models. The paradigm that best suits simulation of an RHRC is discrete event modeling, because a discrete event model represents a chronological sequence of events, each of which changes the state of the entire system. An RHRC is such a sequence and a system of systems that is constantly changing: as evacuees move through the RHRC, they flow through a predefined set of points, from registration, to care, to shelter assignment, and many others. The data provided are supported by research or, where research has not yet been performed, by personal field experience. A model is a simulation of the real world; though it cannot represent 100% of the variables that could occur in an actual event, it takes as many as possible into consideration to provide the most accurate results. The RHRC AnyLogic model estimates the resource needs and processes of an RHRC. The model created to support this paper was developed using data collected by all students in Dr. J. Eric Dietz's graduate-level Homeland Security Seminar at Purdue University in the spring semester of 2012. The purpose of this study is to determine whether the goals of the Regional Catastrophic Planning Team are attainable based on the data collected.
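The paper's model was built in AnyLogic; the sketch below (using the third-party simpy package) only illustrates the discrete event idea it describes, pushing evacuees through registration, care, and shelter assignment stations of limited capacity. The station capacities, arrival rate, and service times are invented, not the values collected for the RHRC model.

```python
import random
import statistics
import simpy  # pip install simpy

def evacuee(env, stations, times):
    """One evacuee flowing through registration -> care -> shelter assignment."""
    start = env.now
    for resource, mean_service in stations:
        with resource.request() as req:
            yield req                                        # wait for a free station
            yield env.timeout(random.expovariate(1.0 / mean_service))
    times.append(env.now - start)

def run(n_evacuees=200, mean_interarrival=1.0, seed=1):
    random.seed(seed)
    env = simpy.Environment()
    stations = [(simpy.Resource(env, capacity=4), 2.0),      # registration desks
                (simpy.Resource(env, capacity=6), 5.0),      # care stations
                (simpy.Resource(env, capacity=3), 3.0)]      # shelter assignment
    times = []

    def arrivals():
        for _ in range(n_evacuees):
            yield env.timeout(random.expovariate(1.0 / mean_interarrival))
            env.process(evacuee(env, stations, times))

    env.process(arrivals())
    env.run()
    return statistics.mean(times)

print(f"mean minutes per evacuee in the RHRC (toy numbers): {run():.1f}")
```

Re-running with different capacities is the kind of what-if question such a model is meant to answer cheaply.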
{"title":"Modeling of a Regional Hub Reception Center to improve the speed of an urban area evacuation","authors":"A. Kirby, J. E. Dietz, C. Wojtalewicz","doi":"10.1109/THS.2012.6459895","DOIUrl":"https://doi.org/10.1109/THS.2012.6459895","url":null,"abstract":"The city of Chicago, Illinois is making strides to become more prepared for large-scale disasters. One idea is called a Regional Hub Reception Center (RHRC), which converts an existing facility into a temporary shelter for evacuees in the event of a 10-kiloton nuclear blast being detonated in the center of downtown. The RHRC will provide the evacuees with basic needs and register them for assignment at a more permanent shelter. The Regional Catastrophic Planning Team needs to know if its estimates for time, personnel, and resources are accurate. The best and most reliable way to test what will be needed is to perform simulations. However, large full-scale simulations are time consuming and expensive. A computer-generated model, however, can accurately simulate many variables and scenarios to test the RHRC quickly, cheaply, and repetitively to make it more effective if used. A computer modeling software tool, called AnyLogic, is a multi-paradigm modeling program that allows users to build agent-based, discrete event, and system dynamics models. The modeling paradigm that best suits the simulation of an RHRC is discrete event modeling. This is because a discrete event model represents a chronological sequence of events. When an event occurs in a discrete event model, it represents a change to the entire system. An RHRC is a chronological sequence of events and a system of systems that are constantly changing. As evacuees move through the RHRC, they flow through a predefined set of points, ranging from registration, to care, to shelter assignment, and many others. The data provided is supported by research or by personal field experience where research has not yet been performed. A model is a simulation of the real world. Though it does not represent the 100% of the variables that could occur in an actual simulation, it takes into consideration as many as possible to provide the most accurate results. The RHRC AnyLogic model is a simulation that estimates resource needs and processes of an RHRC. The RHRC model created to support this paper was developed using data collected by all students in Dr. J. Eric Dietz's Homeland Security Seminar graduate level class at Purdue University in the spring semester of 2012. The purpose of this study is to determine if the goals of the Regional Catastrophic Planning Team are attainable based upon the data collected.","PeriodicalId":355549,"journal":{"name":"2012 IEEE Conference on Technologies for Homeland Security (HST)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124270360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Biometric Interagency Testing & Evaluation Schema (BITES)
R. Lazarick
Pub Date: 2012-11-01 | DOI: 10.1109/THS.2012.6459907
This paper addresses the concepts of reusable biometric testing in a general sense and then describes the US Government initiative to establish a mechanism to facilitate sharing of biometric testing information both within the government and with stakeholders. The fundamental motivation for promoting reuse of biometric testing information is cost avoidance: if a well-defined test has been successfully completed and documented by a trusted party, then the results of that testing should be sufficient to allow other consumers of that product to rely on that test and thereby avoid the cost of repeating it. The extent of reusability depends on the type of testing being conducted. The most straightforward type of testing suited for reuse is Conformance testing, such as conformance to American National Standards Institute/National Institute of Standards and Technology (ANSI/NIST) or International Organization for Standardization (ISO) standards; these tests are typically automated and fully repeatable. Biometric Performance testing using the Technology Testing approach is similarly repeatable and easily reused given a fixed set of biometric samples. Biometric Performance testing using the Scenario Testing approach is quite different: because it relies on human test subjects, it is inherently not repeatable and not easily reusable, and these tests are also typically expensive. There are several notable examples of testing programs whose results have demonstrated reusability; one of the first and most visible may be the Federal Bureau of Investigation (FBI) Appendix F Certification of fingerprint image quality, supported by the FBI for procurement of livescan fingerprint devices. There are fundamental prerequisites for reusable testing. First, there must be agreement on the method/procedure for conducting the testing and reporting the results. Second, the methods must be "Open," and the product must be tested by a trusted party. For reusable testing to work, the participants in a test must have the willingness and the authority to share the results and must establish a common level of integration. For reusability to succeed, there must also be a capability to disseminate the information. The United States Government (USG) has established an effort to develop a repository for biometric test methods and successfully completed test results, the Biometric Interagency Testing and Evaluation Schema (BITES), to promote efficient and effective reuse of biometric testing information.
{"title":"Biometric Interagency Testing & Evaluation Schema (BITES)","authors":"R. Lazarick","doi":"10.1109/THS.2012.6459907","DOIUrl":"https://doi.org/10.1109/THS.2012.6459907","url":null,"abstract":"This paper addresses the concepts of reusable biometric testing in a general sense, and then describes the US Government initiative to establish a mechanism to facilitate sharing of biometric testing information both within the government and with stakeholders. The fundamental motivation for promoting reuse of biometric testing information is to achieve cost avoidance. If a well defined test has been successfully completed and documented by a trusted party, then the results of that testing should be sufficient to allow other consumers of that product to rely on that test, and thereby avoid the cost of repeating that testing. The extent of reusability depends on the type of testing being conducted. The most straightforward type of testing suited for reuse are Conformance tests, such as conformance to American National Standards Institute/National Institute of Standards and Technology (ANSI/NIST) or International Organization for Standardization (ISO) standards. These tests are typically automated and are fully repeatable. Biometric Performance testing using the Technology Testing approach, is similarly repeatable and easily reused given a fixed set of biometric samples. Biometric Performance testing using the Scenario Testing approach is quite different in that it is inherently not repeatable due to the use of human test subjects, and not easily reusable. These tests are also typically expensive. There are several notable examples of testing programs for which the results have demonstrated reusability. One of the first and most visible may be the Federal Bureau of Investigation (FBI) Appendix F Certification of fingerprint image quality supported by the FBI for procurement of livescan fingerprint devices. There are fundamental prerequisites for reusable testing. First, there is a need for agreement on the method/procedure for conducting the testing and reporting the results. Secondly, the methods must be “Open”, and additionally, the product must be tested by a trusted party . In order for reusable testing to work, the participants in a test must have a willingness and the authority to share the results, and establish a common level of integration. In order for reusability to succeed, there must be a capability to disseminate the information. The United States Government (USG) has established an effort to develop a repository for biometrics test methods and successfully completed test results - “BITES” - Biometric Interagency Testing and Evaluation Schema, to promote efficient and effective reuse of biometric testing information.","PeriodicalId":355549,"journal":{"name":"2012 IEEE Conference on Technologies for Homeland Security (HST)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127258690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Security in the cloud: Understanding the risks of cloud-as-a-service
Chris Peake
Pub Date: 2012-11-01 | DOI: 10.1109/THS.2012.6459871
Cloud services are susceptible to faults, failures, and attacks just like enterprise IT architectures; the difference is that when a cloud suffers an outage, it can affect numerous customers. But cloud security is not just about accessibility and availability; it must also provide information integrity and confidentiality to assure effective business operations. Therefore, cloud-based services (i.e., SaaS, PaaS, and IaaS) will also have to provide resilient and fault-tolerant resources at the application, platform, and infrastructure levels in order to assure that cloud consumers' mission objectives can be met. This will require the development of a new breed of security technologies that provide not only Information Assurance but also Mission Assurance.
{"title":"Security in the cloud: Understanding the risks of cloud-as-a-service","authors":"Chris Peake","doi":"10.1109/THS.2012.6459871","DOIUrl":"https://doi.org/10.1109/THS.2012.6459871","url":null,"abstract":"Cloud services are susceptible to faults, failures, and attacks just like enterprise IT architectures, the difference is that when a cloud suffers an outage it can affect numerous customers. But cloud security is not just about accessibility and availability; it must also provide information integrity and confidentiality to assure effective business operations. Therefore, cloud-based services (i.e. SaaS, PaaS, and IaaS) will also have to provide resilient and fault tolerant resources at the application, platform, and infrastructure levels in order to assure cloud consumer mission objectives can be met. This will require the development of a new breed of security technologies that not only provide Information Assurance but also Mission Assurance.","PeriodicalId":355549,"journal":{"name":"2012 IEEE Conference on Technologies for Homeland Security (HST)","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126120995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bio-inspired Evolutionary Sensory system for Cyber-Physical System defense
M. Azab, M. Eltoweissy
Pub Date: 2012-11-01 | DOI: 10.1007/978-3-662-43616-5_2
{"title":"Bio-inspired Evolutionary Sensory system for Cyber-Physical System defense","authors":"M. Azab, M. Eltoweissy","doi":"10.1007/978-3-662-43616-5_2","DOIUrl":"https://doi.org/10.1007/978-3-662-43616-5_2","url":null,"abstract":"","PeriodicalId":355549,"journal":{"name":"2012 IEEE Conference on Technologies for Homeland Security (HST)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126156031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of commercial and next generation quantum key distribution: Technologies for secure communication of information
Lee Oesterling, Don Hayford, Georgeanne Friend
Pub Date: 2012-11-01 | DOI: 10.1109/THS.2012.6459842
Battelle has been actively exploring emerging quantum key distribution (QKD) cryptographic technologies for secure communication of information, with the goal of expanding the use of this technology by commercial enterprises in the United States. In QKD systems, the principles of quantum physics are applied to generate a secret data-encryption key, which is distributed between two users. The security of this key is guaranteed by the laws of quantum physics, and the distributed key can be used to encrypt data and enable secure communication over insecure channels. To date, Battelle has studied commercially available and custom-built QKD systems in controlled laboratory environments and is actively working to establish a QKD Test Bed network to characterize performance in real-world metropolitan (10-100 km) and long-distance (>100 km) environments. All QKD systems that we have tested to date use a discrete-variable (DV) binary approach, in which discrete information is encoded onto the quantum state of a single photon and binary data are measured using single-photon detectors. Recently, continuous-variable (CV) QKD systems have been developed and are expected to be commercially available shortly. In CV-QKD systems, randomly generated continuous variables are encoded on coherent states of weak pulses of light, and continuous data values are measured with homodyne detection methods. In certain cyber security applications, CV-QKD systems may offer advantages over traditional DV-QKD systems, such as a higher secret-key exchange rate at short distances, lower cost, and compatibility with telecommunication technologies. In this paper, current CV- and DV-QKD approaches are described, and security issues and technical challenges in fielding these quantum-based systems are discussed. Published experimental and theoretical data on quantum key exchange rates and distances relevant to metropolitan and long-distance network applications are presented. From an analysis of these data, the relative performance of the two approaches is compared as a function of distance and environment (free space and optical fiber). Additionally, current research activities are described for both technologies, including network integration and methods to increase secret-key distribution rates and distances.
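To make the DV-QKD mechanism concrete, here is a generic BB84-style toy (not Battelle's systems, and with channel loss, noise, eavesdropping, and error correction all omitted): sender and receiver choose random bits and bases, and only positions where the bases agree survive sifting into the shared key.

```python
import random

def bb84_sift(n_pulses=20, seed=7):
    """Toy BB84 sifting: keep only positions where Alice's and Bob's bases match."""
    random.seed(seed)
    alice_bits = [random.randint(0, 1) for _ in range(n_pulses)]
    alice_bases = [random.choice("+x") for _ in range(n_pulses)]
    bob_bases = [random.choice("+x") for _ in range(n_pulses)]
    # With matching bases Bob recovers Alice's bit; otherwise his result is random.
    bob_bits = [bit if ab == bb else random.randint(0, 1)
                for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    alice_key = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    bob_key = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
    return alice_key, bob_key

alice_key, bob_key = bb84_sift()
print(alice_key == bob_key, len(alice_key))  # keys agree; roughly half the pulses survive
```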
{"title":"Comparison of commercial and next generation quantum key distribution: Technologies for secure communication of information","authors":"Lee Oesterling, Don Hayford, Georgeanne Friend","doi":"10.1109/THS.2012.6459842","DOIUrl":"https://doi.org/10.1109/THS.2012.6459842","url":null,"abstract":"Battelle has been actively exploring emerging quantum key distribution (QKD) cryptographic technologies for secure communication of information with a goal of expanding the use of this technology by commercial enterprises in the United States. In QKD systems, the principles of quantum physics are applied to generate a secret data encryption key, which is distributed between two users. The security of this key is guaranteed by the laws of quantum physics, and this distributed key can be used to encrypt data to enable secure communication on insecure channels. To date, Battelle has studied commercially available and custom-built QKD systems in controlled laboratory environments and is actively working to establish a QKD Test Bed network to characterize performance in real world metropolitan (10-100 km) and long distance (>; 100 km) environments. All QKD systems that we have tested to date utilize a discrete variable (DV) binary approach. In this approach, discrete information is encoded onto a quantum state of a single photon, and binary data are measured using single photon detectors. Recently, continuous variable (CV) QKD systems have been developed and are expected to be commercially available shortly. In CV-QKD systems, randomly generated continuous variables are encoded on coherent states of weak pulses of light, and continuous data values are measured with homodyne detection methods. In certain applications for cyber security, the CV-QKD systems may offer advantages over traditional DV-QKD systems, such as a higher secret key exchange rate for short distances, lower cost, and compatibility with telecommunication technologies. In this paper, current CV- and DV-QKD approaches are described, and security issues and technical challenges fielding these quantum-based systems are discussed. Experimental and theoretical data that have been published on quantum key exchange rates and distances that are relevant to metropolitan and long distance network applications are presented. From an analysis of these data, the relative performance of the two approaches is compared as a function of distance and environment (free space and optical fiber). Additionally, current research activities are described for both technologies, which include network integration and methods to increase secret key distribution rates and distances.","PeriodicalId":355549,"journal":{"name":"2012 IEEE Conference on Technologies for Homeland Security (HST)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123740034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A compressed sensing approach for detection of explosive threats at standoff distances using a Passive Array of Scatters
J. Martinez-Lorenzo, Y. Rodriguez-Vaqueiro, C. Rappaport, O. R. Lopez, A. Pino
Pub Date: 2012-11-01 | DOI: 10.1109/THS.2012.6459838
This work presents a new radar system concept, operating at millimeter-wave frequencies, capable of detecting explosive-related threats at standoff distances. The system consists of a two-dimensional aperture of randomly distributed transmitting/receiving antenna elements and a Passive Array of Scatters (PAS) positioned in the vicinity of the target. In addition, a novel norm-one minimization imaging algorithm has been implemented that is capable of producing super-resolution images. The paper also includes a numerical example in which 7.5 mm resolution is achieved at a standoff range of 40 m for a working frequency of 60 GHz.
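The norm-one minimization step is a standard compressed-sensing ingredient; the following is a generic iterative soft-thresholding (ISTA) sketch of that step on synthetic data, not the authors' imaging algorithm or radar geometry. Matrix sizes, sparsity, and the regularization weight are arbitrary.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Minimize 0.5*||A x - y||_2^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                       # unknowns, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true                             # fewer measurements than unknowns
x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```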
{"title":"A compressed sensing approach for detection of explosive threats at standoff distances using a Passive Array of Scatters","authors":"J. Martinez-Lorenzo, Y. Rodriguez-Vaqueiro, C. Rappaport, O. R. Lopez, A. Pino","doi":"10.1109/THS.2012.6459838","DOIUrl":"https://doi.org/10.1109/THS.2012.6459838","url":null,"abstract":"This work presents a new radar system concept, working at millimeter wave frequencies, capable of detecting explosive related threats at standoff distances. The system consists of a two dimensional aperture of randomly distributed transmitting/receiving antenna elements, and a Passive Array of Scatters (PAS) positioned in the vicinity of the target. In addition, a novel norm one minimization imaging algorithm has been implemented that is capable of producing super-resolution images. This paper also includes a numerical example in which 7.5 mm resolution is achieved at the standoff range of 40 m for a working frequency of 60 GHz.","PeriodicalId":355549,"journal":{"name":"2012 IEEE Conference on Technologies for Homeland Security (HST)","volume":"192 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122389635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Face recognition and learning via adaptive dictionaries
Katia Estabridis
Pub Date: 2012-11-01 | DOI: 10.1109/THS.2012.6459862
This paper proposes an adaptive face recognition algorithm that jointly classifies and learns from unlabeled data. It presents an efficient design that specifically addresses the case in which only a single sample per person is available for training. A dictionary composed of regional descriptors serves as the basis for the recognition system while providing a flexible framework to augment or update dictionary atoms. The algorithm is based on l1-minimization techniques, and the decision to update the dictionary is made in an unsupervised mode via non-parametric Bayes. Dictionary learning is done via reverse-OMP to select atoms that are orthogonal or nearly orthogonal to the current dictionary elements. The proposed algorithm was tested on two face databases, showing the capability to handle illumination, scale, and some moderate pose and expression variations. Correct classification rates as high as 96% were obtained on the Georgia Tech database and 94% on the Multi-PIE database for the frontal-view scenarios.
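The paper's pipeline is not reproduced here, but the sparse-coding building block such a system relies on can be sketched with plain orthogonal matching pursuit over a labeled dictionary, followed by the common sparse-representation rule of assigning the identity whose atoms capture the most coefficient energy. The names, atom budget, and toy classifier are assumptions, and this is standard OMP rather than the paper's reverse-OMP.

```python
import numpy as np

def omp(D, y, n_nonzero=5):
    """Orthogonal matching pursuit: greedily select columns of D (assumed unit-norm)
    that best explain y, re-fitting coefficients by least squares at each step."""
    residual, support = y.copy(), []
    coeffs = np.zeros(0)
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

def classify(D, labels, y):
    """Assign the identity whose dictionary atoms receive the most sparse energy."""
    x = omp(D, y)
    labels = np.asarray(labels)
    scores = {lab: np.sum(np.abs(x[labels == lab])) for lab in np.unique(labels)}
    return max(scores, key=scores.get)

# Tiny demo: two identities, three unit-norm atoms each, query close to identity "B".
rng = np.random.default_rng(1)
D = rng.standard_normal((64, 6)); D /= np.linalg.norm(D, axis=0)
labels = ["A", "A", "A", "B", "B", "B"]
print(classify(D, labels, D[:, 4] + 0.05 * rng.standard_normal(64)))  # likely "B"
```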
{"title":"Face recognition and learning via adaptive dictionaries","authors":"Katia Estabridis","doi":"10.1109/THS.2012.6459862","DOIUrl":"https://doi.org/10.1109/THS.2012.6459862","url":null,"abstract":"This paper proposes an adaptive face recognition algorithm to jointly classify and learn from unlabeled data. It presents an efficient design that specifically addresses the case when only a single sample per person is available for training. A dictionary composed of regional descriptors serves as the basis for the recognition system while providing a flexible framework to augment or update dictionary atoms. The algorithm is based on l1 minimization techniques and the decision to update the dictionary is made in an unsupervised mode via non-parametric Bayes. The dictionary learning is done via reverse-OMP to select atoms that are orthogonal or near orthogonal to the current dictionary elements. The proposed algorithm was tested with two face databases showing the capability to handle illumination, scale, and some moderate pose and expression variations. Classification results as high as 96% were obtained with the Georgia Tech database and 94% correct classification rates for the Multi-PIE database for the frontal-view scenarios.","PeriodicalId":355549,"journal":{"name":"2012 IEEE Conference on Technologies for Homeland Security (HST)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122634943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multimodal biometric collection and evaluation architecture
J. Lacirignola, P. Pomianowski, D. Ricke, D. Strom, E. Wack
Pub Date: 2012-11-01 | DOI: 10.1109/THS.2012.6459826
The size and scope of standoff multimodal biometric datasets can be increased through the adoption of a common architecture to collect, describe, archive, and analyze subject traits. The Extendable Multimodal Biometric Evaluation Range (EMBER) system developed by MIT Lincoln Laboratory is a field-ready, easily adaptable architecture to streamline collections requiring multiple biometric devices in environments of interest. Its data architecture includes a fully featured metadata-rich relational database that supports the aggregation of biometric data collected with proliferated systems into a single corpus for analytical use.
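As a guess at what a metadata-rich relational database for multimodal collections might look like (table and column names are invented, not EMBER's actual schema), here is a minimal sqlite sketch linking subjects, devices, and collection events to samples so that data from many devices can be aggregated into one corpus.

```python
import sqlite3

# Illustrative schema only; foreign keys tie samples from many devices together.
schema = """
CREATE TABLE subject    (subject_id INTEGER PRIMARY KEY, consent_date TEXT);
CREATE TABLE device     (device_id INTEGER PRIMARY KEY, modality TEXT, model TEXT);
CREATE TABLE collection (collection_id INTEGER PRIMARY KEY, site TEXT, started_at TEXT);
CREATE TABLE sample (
    sample_id     INTEGER PRIMARY KEY,
    subject_id    INTEGER REFERENCES subject(subject_id),
    device_id     INTEGER REFERENCES device(device_id),
    collection_id INTEGER REFERENCES collection(collection_id),
    standoff_m    REAL,      -- capture distance in meters
    quality_score REAL,
    file_path     TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(schema)

# Example analytical query: every face sample for one subject across all collections.
rows = conn.execute(
    """SELECT s.sample_id, c.site, s.standoff_m
       FROM sample s
       JOIN device d USING (device_id)
       JOIN collection c USING (collection_id)
       WHERE s.subject_id = ? AND d.modality = 'face'""",
    (1,),
).fetchall()
print(rows)  # empty here; no rows have been inserted
```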
{"title":"Multimodal biometric collection and evaluation architecture","authors":"J. Lacirignola, P. Pomianowski, D. Ricke, D. Strom, E. Wack","doi":"10.1109/THS.2012.6459826","DOIUrl":"https://doi.org/10.1109/THS.2012.6459826","url":null,"abstract":"The size and scope of standoff multimodal biometric datasets can be increased through the adoption of a common architecture to collect, describe, archive, and analyze subject traits. The Extendable Multimodal Biometric Evaluation Range (EMBER) system developed by MIT Lincoln Laboratory is a field-ready, easily adaptable architecture to streamline collections requiring multiple biometric devices in environments of interest. Its data architecture includes a fully featured metadata-rich relational database that supports the aggregation of biometric data collected with proliferated systems into a single corpus for analytical use.","PeriodicalId":355549,"journal":{"name":"2012 IEEE Conference on Technologies for Homeland Security (HST)","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128258921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}