Nadia El Bekri, Susanne Angele, M. Ruckhäberle, E. Peinsipp-Byma, Bruno Haelke
This paper introduces an interactive recognition assistance system for imaging reconnaissance. The system supports aerial image analysts on missions in two main tasks: object recognition and infrastructure analysis. Object recognition concentrates on the classification of a single object. Infrastructure analysis deals with the description of the components of an infrastructure and the recognition of the infrastructure type (e.g. military airfield). Based on satellite or aerial images, aerial image analysts are able to extract single object features and thereby recognize different object types; this is one of the most challenging tasks in imaging reconnaissance. Currently, no sufficiently capable ATR (automatic target recognition) applications are available, so the human observer cannot be replaced entirely. State-of-the-art ATR applications cannot match human perception and interpretation. Why is this still such a critical issue? First, cluttered and noisy images make it difficult to automatically extract, classify and identify object types. Second, due to changed warfare and the rise of asymmetric threats, it is nearly impossible to create an underlying data set containing all features, objects or infrastructure types. Many other factors, such as environmental parameters or aspect angles, further complicate the application of ATR. Due to the lack of suitable ATR procedures, the human factor remains important and so far irreplaceable. In order to use the potential benefits of human perception and computational methods in a synergistic way, both are unified in an interactive assistance system. RecceMan® (Reconnaissance Manual) offers two different modes for aerial image analysts on missions: the object recognition mode and the infrastructure analysis mode. The aim of the object recognition mode is to recognize a certain object type based on the object features derived from the image signatures.
The infrastructure analysis mode pursues the goal of analyzing the function of the infrastructure. The image analyst visually extracts certain target object signatures, assigns them to corresponding object features and is finally able to recognize the object type. The system offers the analyst the possibility to assign the image signatures to features given by sample images. The underlying data set contains a wide range of object features and object types for different domains such as ships or land vehicles. Each domain has its own feature tree developed by expert aerial image analysts. By selecting the corresponding features, the possible solution set of objects is automatically reduced to only those objects that contain the selected features. Moreover, we give an outlook on current research in the field of ground target analysis, in which we deal with partly automated methods to extract image signatures and assign them to the corresponding features. This research includes methods for automatically determining …
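The solution-set reduction described above (selected features filter the catalog down to matching object types) can be sketched as a simple set-containment check. The catalog entries and feature names below are illustrative placeholders, not RecceMan's actual data or feature trees:

```python
# Hypothetical sketch of feature-based solution-set reduction: an object type
# remains a candidate only if it carries every feature the analyst selected.
CATALOG = {
    "frigate":     {"superstructure", "gun_turret", "helipad"},
    "tanker":      {"long_hull", "pipelines"},
    "patrol_boat": {"superstructure", "gun_turret"},
}

def reduce_solution_set(selected_features, catalog=CATALOG):
    """Return the object types that contain ALL selected features."""
    return sorted(
        name for name, feats in catalog.items()
        if selected_features <= feats        # subset test: all features present
    )

print(reduce_solution_set({"gun_turret"}))   # ['frigate', 'patrol_boat']
print(reduce_solution_set({"helipad"}))      # ['frigate']
```

Each additional selected feature can only shrink (never grow) the candidate set, which is the monotonic narrowing behavior the abstract describes.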
{"title":"RecceMan: an interactive recognition assistance for image-based reconnaissance: synergistic effects of human perception and computational methods for object recognition, identification, and infrastructure analysis","authors":"Nadia El Bekri, Susanne Angele, M. Ruckhäberle, E. Peinsipp-Byma, Bruno Haelke","doi":"10.1117/12.2196300","DOIUrl":"https://doi.org/10.1117/12.2196300","url":null,"abstract":"This paper introduces an interactive recognition assistance system for imaging reconnaissance. This system supports aerial image analysts on missions during two main tasks: Object recognition and infrastructure analysis. Object recognition concentrates on the classification of one single object. Infrastructure analysis deals with the description of the components of an infrastructure and the recognition of the infrastructure type (e.g. military airfield). Based on satellite or aerial images, aerial image analysts are able to extract single object features and thereby recognize different object types. It is one of the most challenging tasks in the imaging reconnaissance. Currently, there are no high potential ATR (automatic target recognition) applications available, as consequence the human observer cannot be replaced entirely. State-of-the-art ATR applications cannot assume in equal measure human perception and interpretation. Why is this still such a critical issue? First, cluttered and noisy images make it difficult to automatically extract, classify and identify object types. Second, due to the changed warfare and the rise of asymmetric threats it is nearly impossible to create an underlying data set containing all features, objects or infrastructure types. Many other reasons like environmental parameters or aspect angles compound the application of ATR supplementary. Due to the lack of suitable ATR procedures, the human factor is still important and so far irreplaceable. 
In order to use the potential benefits of the human perception and computational methods in a synergistic way, both are unified in an interactive assistance system. RecceMan® (Reconnaissance Manual) offers two different modes for aerial image analysts on missions: the object recognition mode and the infrastructure analysis mode. The aim of the object recognition mode is to recognize a certain object type based on the object features that originated from the image signatures. The infrastructure analysis mode pursues the goal to analyze the function of the infrastructure. The image analyst extracts visually certain target object signatures, assigns them to corresponding object features and is finally able to recognize the object type. The system offers him the possibility to assign the image signatures to features given by sample images. The underlying data set contains a wide range of objects features and object types for different domains like ships or land vehicles. Each domain has its own feature tree developed by aerial image analyst experts. By selecting the corresponding features, the possible solution set of objects is automatically reduced and matches only the objects that contain the selected features. Moreover, we give an outlook of current research in the field of ground target analysis in which we deal with partly automated methods to extract image signatures and assign them to the corresponding features. 
This research includes methods for automatically determinin","PeriodicalId":348143,"journal":{"name":"SPIE Security + Defence","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121018882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this study, we propose an all-optical sensor based on the nonlinear effects on modal propagation and output intensity in an ultra-compact nonlinear multimode interference (NLMMI) coupler. The sensor can be tuned to its highest sensitivity in wavelength and refractive index ranges sufficient to detect water-soluble chemicals, air pollution, and heart activity. The results indicate high output sensitivity to the input wavelength. This sensitivity leads us to propose a sensor for both transverse and longitudinal waves, such as acoustic and light waves, in which an external wave interacts with the input waveguide. For instance, the sensor could be implemented with a long input waveguide inserted into the ground, so that waves propagating through the earth could be detected. The visible changes of intensity at the output facet for various refractive indices of the surrounding layer show high sensitivity to the surrounding layer's refractive index, which is the foundation for introducing a sensor. The results also show clearly distinguishable changes in the modal expansion and output intensity distribution for various refractive indices of the surrounding layer.
{"title":"Proposal of all-optical sensor based on nonlinear MMI coupler for multi-purpose usage","authors":"M. Tajaldini, M. Z. Matjafri","doi":"10.1117/12.2195606","DOIUrl":"https://doi.org/10.1117/12.2195606","url":null,"abstract":"In this study, we propose an all-optical sensor based on consideration the nonlinear effects on modal propagation and output intensity based on ultra-compact nonlinear multimode interference (NLMMI) coupler. The sensor can be tuned to highest sensitivity in the wavelength and refractive index ranges sufficient to detect water- soluble chemical, air pollutions, and heart operation. The results indicate high output sensitivity to input wavelength. This sensitivity guides us to propose a wave sensor both transverse and longitudinal waves such as acoustic and light wave, when an external wave interacts with input waveguide. For instance, this sensor can be implemented by long input that inserted in the land, then any wave could detected from earth. The visible changes of intensity at output facet in various surrounding layer refractive index show the high sensitivity to the refractive index of surrounding layer that is foundation of introducing a sensor. Also, the results show the high distinguished changes on modal expansion and output throat distribution in various refractive indices of surrounding layer.","PeriodicalId":348143,"journal":{"name":"SPIE Security + Defence","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128357826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Fraunhofer thermal object model (FTOM) predicts the temperature of an object as a function of the environmental conditions. The model has an outer layer exchanging radiation and heat with the environment and a stack of layers below it modifying the thermal behavior. The innermost layer is at a constant or variable temperature called the core temperature. The properties of the model (6 parameters) are fitted to minimize the difference between the prediction and a time series of measured temperatures. The model can be used for very different objects, such as backgrounds (e.g. meadow, forest, stone, or sand) or objects like vehicles. The two-dimensional enhancement was developed to model more complex objects with non-planar surfaces and heat conduction between adjacent regions. In this model we call the small, thermally homogeneous, interacting regions thermal pixels. For each thermal pixel the orientation and the identities of the adjacent pixels are stored in an array. In this version 7 parameters have to be fitted. The model is limited to convex geometry to reduce the complexity of the heat exchange and allow for a higher number of thermal pixels. To test the model, time series of thermal images of a test object (CUBI) were analyzed. The square sides of the cubes were modeled as 25 thermal pixels (5 × 5). In the time series of thermal images, small areas of the size of the thermal pixels were analyzed to generate data files that can easily be read by the model. The program was developed in MATLAB, and the final version in C++ using the OpenMP multiprocessor library. The differential equation for the heat transfer is the time-consuming part of the computation and was programmed in C. The comparison shows good agreement of both fitted and non-fitted thermal pixels with the measured temperatures. This indicates the ability of the model to predict the temperatures of the whole object.
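The layered structure described above (an outer layer coupled to the environment, conducting inner layers, and a fixed core temperature) can be illustrated with a minimal one-dimensional sketch. This is not the FTOM implementation; the layer count, heat-transfer coefficient, conductance, and capacity values are illustrative placeholders, and radiation exchange is omitted:

```python
import numpy as np

# Toy 1-D layered thermal model in the spirit of FTOM: explicit-Euler
# integration of heat exchange with the air plus conduction between layers,
# with the innermost node clamped to the core temperature.
def simulate(t_air, t_core=290.0, n_layers=4, h=10.0, k=2.0, cap=5e4, dt=60.0):
    """t_air: sequence of air temperatures [K], one per time step.
    Returns the surface-layer temperature over time."""
    temps = np.full(n_layers, t_core)           # start in equilibrium with core
    surface = []
    for ta in t_air:
        flux = np.zeros(n_layers)
        flux[0] += h * (ta - temps[0])          # exchange with the environment
        cond = k * np.diff(temps)               # conduction between neighbors
        flux[:-1] += cond
        flux[1:] -= cond
        temps = temps + dt * flux / cap         # explicit Euler step
        temps[-1] = t_core                      # core boundary condition
        surface.append(temps[0])
    return np.array(surface)

# Step change: air suddenly 10 K warmer than the core.
surface = simulate(t_air=300.0 * np.ones(1000))
```

In FTOM the free parameters of such a stack are fitted against measured temperature series; here the surface temperature simply relaxes from the core temperature toward a steady state between core and air values.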
{"title":"FTOM-2D: a two-dimensional approach to model the detailed thermal behavior of nonplanar surfaces","authors":"B. Bartos, K. Stein","doi":"10.1117/12.2197177","DOIUrl":"https://doi.org/10.1117/12.2197177","url":null,"abstract":"The Fraunhofer thermal object model (FTOM) predicts the temperature of an object as a function of the environmental conditions. The model has an outer layer exchanging radiation and heat with the environment and a stack of layers beyond modifying the thermal behavior. The innermost layer is at a constant or variable temperature called core temperature. The properties of the model (6 parameters) are fitted to minimize the difference between the prediction and a time series of measured temperatures. The model can be used for very different objects like backgrounds (e.g. meadow, forest, stone, or sand) or objects like vehicles. The two dimensional enhancement was developed to model more complex objects with non-planar surfaces and heat conduction between adjacent regions. In this model we call the small thermal homogenous interacting regions thermal pixels. For each thermal pixel the orientation and the identities of the adjacent pixels are stored in an array. In this version 7 parameters have to be fitted. The model is limited to a convex geometry to reduce the complexity of the heat exchange and allow for a higher number of thermal pixels. For the test of the model time series of thermal images of a test object (CUBI) were analyzed. The square sides of the cubes were modeled as 25 thermal pixels (5 × 5). In the time series of thermal images small areas in the size of the thermal pixels were analyzed to generate data files that can easily be read by the model. The program was developed with MATLAB and the final version in C++ using the OpenMP multiprocessor library. The differential equation for the heat transfer is the time consuming part in the computation and was programmed in C. 
The comparison show a good agreement of the fitted and not fitted thermal pixels with the measured temperatures. This indicates the ability of the model to predict the temperatures of the whole object.","PeriodicalId":348143,"journal":{"name":"SPIE Security + Defence","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130579619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
H. Bouma, J. Baan, Frank ter Haar, P. Eendebak, R. D. den Hollander, G. Burghouts, R. Wijn, S. P. van den Broek, J. V. van Rest
In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras, we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications and effects, such as the reduction of violence between police and citizens. However, the increased use of bodycams also creates potential challenges. For example: how can end-users extract information from the abundance of video, how can the information be presented, and how can an officer retrieve information efficiently? At the same time, such video offers the opportunity to stimulate the professionals' memory and to support complete and accurate reporting. In this paper, we show how video content analysis (VCA) can address these challenges and seize these opportunities. To this end, we focus on methods for creating a complete summary of the video, which allows quick retrieval of relevant fragments. The content analysis for summarization consists of several components, such as stabilization, scene selection, motion estimation, localization, pedestrian tracking and action recognition in the video from a bodycam. The different components and visual representations of summaries are presented for retrospective investigation.
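The retrieval goal above (jumping quickly from an event type to the relevant video fragments) can be sketched as an index built over frames that VCA components have already tagged. The event labels, frame rate, and gap-merging heuristic below are illustrative assumptions, not the paper's actual summarization method:

```python
from collections import defaultdict

# Hypothetical sketch: given (frame_number, event_label) pairs produced by
# upstream VCA components, merge nearby frames with the same label into
# time fragments (start_s, end_s) for quick retrieval.
def build_summary_index(tagged_frames, fps=30, gap=15):
    by_label = defaultdict(list)
    for frame, label in sorted(tagged_frames):
        by_label[label].append(frame)
    index = {}
    for label, frames in by_label.items():
        fragments, start, prev = [], frames[0], frames[0]
        for f in frames[1:]:
            if f - prev > gap:                   # gap too large: close fragment
                fragments.append((start / fps, prev / fps))
                start = f
            prev = f
        fragments.append((start / fps, prev / fps))
        index[label] = fragments
    return index

idx = build_summary_index([(0, "pedestrian"), (10, "pedestrian"), (300, "pedestrian")])
# frames 0 and 10 merge into one fragment; frame 300 starts a new one
```

An officer querying `idx["pedestrian"]` would get a short list of time ranges instead of scrubbing through the full recording.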
{"title":"Video content analysis on body-worn cameras for retrospective investigation","authors":"H. Bouma, J. Baan, Frank ter Haar, P. Eendebak, R. D. den Hollander, G. Burghouts, R. Wijn, S. P. van den Broek, J. V. van Rest","doi":"10.1117/12.2194436","DOIUrl":"https://doi.org/10.1117/12.2194436","url":null,"abstract":"In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications and effects, such as the reduction of violence between police and citizens. However, the increased use of bodycams also creates potential challenges. For example: how can end-users extract information from the abundance of video, how can the information be presented, and how can an officer retrieve information efficiently? Nevertheless, such video gives the opportunity to stimulate the professionals’ memory, and support complete and accurate reporting. In this paper, we show how video content analysis (VCA) can address these challenges and seize these opportunities. To this end, we focus on methods for creating a complete summary of the video, which allows quick retrieval of relevant fragments. The content analysis for summarization consists of several components, such as stabilization, scene selection, motion estimation, localization, pedestrian tracking and action recognition in the video from a bodycam. 
The different components and visual representations of summaries are presented for retrospective investigation.","PeriodicalId":348143,"journal":{"name":"SPIE Security + Defence","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120818527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this study, common clothing and a variety of textile materials were used to investigate their influence on the remote identification of materials. An experimental setup was designed for terahertz reflection spectroscopy of different materials located at distances up to 5 m. The source of the radiation is a tunable solid-state optical parametric oscillator (OPO), which generates narrow-band nanosecond pulses in the range of 0.7-2.7 THz. The signal is detected with a hot-electron bolometer (HEB). Investigations were carried out for distances of 1 m, 3 m and 5 m between the examined sample and the system. The experiment was conducted in the 0.7-2.5 THz range. The fabrics subjected to testing varied in the kind of fibers they were made from, and the weights of the test materials ranged from 53 g/m2 up to 420 g/m2. Textiles composed of several fibers, with differing fiber percentages in each sample, were also measured. Information about the transmission of the textiles was obtained in a separate set of experiments. The fabrics studied were made of viscose, polyester, cotton, spandex, wool, nylon, leather, and flax.
{"title":"Textile influence on remote identification of explosives in the THz range","authors":"M. Walczakowski, N. Pałka, M. Szustakowski","doi":"10.1117/12.2194512","DOIUrl":"https://doi.org/10.1117/12.2194512","url":null,"abstract":"In this study common clothing and variety of textile materials were used in research on its influence on remote materials identification. Experimental setup was designed for the terahertz reflection spectroscopy of different materials located at a distance up to 5 m. The source of the radiation is a tunable solid-state optical parametric oscillator (OPO), which generates a narrow-band nanosecond pulses in the range of 0.7-2.7 THz. The signal is detected with hot electron bolometer (HEB). Investigations were carried out for 1 m, 3 m and 5 m distance between the examined sample and the system. Experiment was conducted in the 0.7 – 2.5 THz range. Fabrics subjected to testing were varied in terms of the fibers kind which they were made from and weights of test materials ranged from 53 g/m2 up to 420 g/m2. Also textiles with a composition consisting of several fibers with differing percentage of the fibers composition of each sample were measured. Information about textiles transmission was obtained in separate set of experiments. The study fabrics were made of viscose, polyester, cotton, spandex, wool, nylon, leather, flax.","PeriodicalId":348143,"journal":{"name":"SPIE Security + Defence","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130849313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents part of a feasibility study into the use of the aperture synthesis passive imaging technique to screen vehicles for persons. The aperture synthesis technique is introduced, and it is shown how, in the near-field regime of a vehicle screening scenario, a three-dimensional imaging capability is possible. A suggested antenna receiver array is presented, and the three-dimensional point spread function it enables is calculated by simulation. This shows that over the majority of the inside of the vehicle the spatial resolution in all three spatial dimensions is equal to or less than the radiation wavelength, which at the suggested operational radiation frequency of 20 GHz is 1.5 cm. A radiation transport model that estimates the radiation temperatures of persons and backgrounds when viewing the vehicle either from the side or from the top is presented; such a model is useful in the design of vehicle screening systems and as a basis for interpretation codes that assist operators in recognising persons in vehicles.
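The wavelength figure quoted above follows directly from the operating frequency and can be checked in one line:

```python
# Verify the quoted resolution scale: wavelength at the suggested 20 GHz
# operating frequency, lambda = c / f.
c = 299_792_458.0        # speed of light, m/s
f = 20e9                 # 20 GHz
wavelength = c / f       # in meters
print(f"{wavelength * 100:.2f} cm")   # 1.50 cm, matching the text
```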
{"title":"Screening vehicles for stowaways using aperture synthesis passive millimetre wave imaging","authors":"N. Salmon, N. Bowring","doi":"10.1117/12.2197687","DOIUrl":"https://doi.org/10.1117/12.2197687","url":null,"abstract":"This paper presents part of a feasibility study into the use of the aperture synthesis passive imaging technique to screen vehicles for persons. The aperture synthesis technique is introduced and shown how in the near-field regime of a vehicle screening scenario that a three-dimensional imaging capability is possible. A suggested antenna receiver array is presented and the three-dimensional point spread function which this enables is calculated by simulation. This shows that over the majority of the inside of the vehicle the spatial resolution in all three spatial dimensions is of or less than the radiation wavelength, which at the suggested operational radiation frequency of 20 GHz is 1.5 cm. A radiation transport model that estimates the radiation temperatures of persons and backgrounds when viewing the vehicle either from the side or the top is presented, such a model being useful in the design of vehicle screening systems and as a basis for interpretation codes to assist operators in recognising persons in vehicles.","PeriodicalId":348143,"journal":{"name":"SPIE Security + Defence","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114957179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Florian Depraz, Vladan Popovic, B. Ott, P. Wellig, Y. Leblebici
Recent technological advancements in hardware systems have produced higher-quality cameras. State-of-the-art panoramic systems use them to produce videos with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps) [1]. Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. The detection requires detailed pixel analysis between two frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time. Graphics Processing Units (GPUs) are powerful devices with substantial processing capability for parallel jobs. The detection of objects in a scene requires a large number of independent pixel operations on the video frames that can be done in parallel, making the GPU a good choice of processing platform. This paper concentrates on background subtraction techniques [2] to detect the objects present in the scene. The foreground pixels are extracted from the processed frame and compared to the corresponding pixels of the model. Using a connected-component detector, neighboring pixels are gathered to form blobs which correspond to the detected foreground objects. The new blobs are compared to the blobs formed in the previous frame to determine whether the corresponding object moved.
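The background-subtraction and blob-forming steps described above can be sketched on the CPU with NumPy and SciPy (the paper's implementation runs on the GPU; the threshold value and synthetic frames here are illustrative):

```python
import numpy as np
from scipy import ndimage

# Toy version of the pipeline: subtract the background model from the frame,
# threshold to get a foreground mask, then group neighboring foreground
# pixels into labeled blobs with a connected-component pass.
def detect_blobs(frame, background, threshold=25):
    foreground = np.abs(frame.astype(int) - background.astype(int)) > threshold
    labels, n_blobs = ndimage.label(foreground)   # connected components
    return labels, n_blobs

bg = np.zeros((100, 100), dtype=np.uint8)         # static background model
frame = bg.copy()
frame[10:20, 10:20] = 200                         # one bright moving object
frame[50:55, 60:70] = 180                         # a second object
labels, n = detect_blobs(frame, bg)
print(n)   # 2
```

The per-pixel subtraction and threshold are independent operations, which is exactly the property that makes this stage map well onto a GPU.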
{"title":"Real-time object detection and tracking in omni-directional surveillance using GPU","authors":"Florian Depraz, Vladan Popovic, B. Ott, P. Wellig, Y. Leblebici","doi":"10.1117/12.2194810","DOIUrl":"https://doi.org/10.1117/12.2194810","url":null,"abstract":"Recent technological advancements in hardware systems have made higher quality cameras. State of the art panoramic systems use them to produce videos with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps) [1]. Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. The detection requires detailed pixel analysis between two frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time. Graphics Processing Units (GPUs) are powerful devices with lots of processing capabilities for parallel jobs. The detection of objects in a scene requires large amount of independent pixel operations on the video frames that can be done in parallel, making GPU a good choice for the processing platform. This paper only concentrates on Background Subtraction Techniques [2] to detect the objects present in the scene. The foreground pixels are extracted from the processed frame and compared to the corresponding ones of the model. Using a connected- component detector, neighboring pixels are gathered in order to form blobs which correspond to the detected foreground objects. 
The new blobs are compared to the blobs formed in the previous frame to see if the corresponding object moved.","PeriodicalId":348143,"journal":{"name":"SPIE Security + Defence","volume":"22 12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116553192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A bottle scanner to detect liquid explosives has been developed using near-infrared technology. Its detection rate for liquid explosives is quite high and its false-alarm rate for safe liquids quite low. It uses a light source with a wide spectrum, such as a halogen lamp. Recently a variety of LEDs have been developed, and some of them emit in the near infrared. Here a near-infrared LED is tested as a light source for the liquid explosive detector. Three infrared LEDs with main spectral peaks at 901 nm, 936 nm, and 1028 nm were used as light sources to scan liquids. The spectral widths of these LEDs are quite narrow, typically less than 100 nm. Ten typical liquids were evaluated with these LEDs, and the correlation coefficients between spectra obtained with an LED and with a tungsten lamp were greater than 0.98. This experiment shows that an infrared LED can be used as a light source for the liquid scanner. An LED has further merits, such as a long lifetime of several tens of thousands of hours and low power consumption of less than 0.2 W. When an LED is used as the light source, the liquid scanner also becomes more compact and handy.
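The comparison quoted above is a Pearson correlation between a spectrum measured under LED illumination and one under a tungsten lamp. The sketch below reproduces that computation on synthetic placeholder spectra (the shapes, wavelength range, and noise level are assumptions, not the paper's measurements):

```python
import numpy as np

# Synthetic stand-in spectra around the 901 nm LED peak: the "LED" spectrum
# is a scaled, slightly noisy copy of the "lamp" spectrum.
wavelengths = np.linspace(880, 920, 50)
lamp_spectrum = np.exp(-((wavelengths - 905) / 12) ** 2)
led_spectrum = 0.9 * lamp_spectrum + np.random.default_rng(0).normal(0, 0.01, 50)

# Pearson correlation coefficient, the statistic the abstract reports (>0.98).
r = np.corrcoef(led_spectrum, lamp_spectrum)[0, 1]
```

A coefficient near 1 indicates the LED preserves the spectral shape seen under the broadband lamp, which is the paper's argument for substituting the LED as the light source.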
{"title":"Liquid explosive detection using near infrared LED","authors":"H. Itozaki, S. Ito, H. Sato-Akaba, Y. Miyato","doi":"10.1117/12.2194658","DOIUrl":"https://doi.org/10.1117/12.2194658","url":null,"abstract":"A bottle scanner to detect liquid explosive has been developed using technologies of near infrared. Its detection rate of liquid explosive is quite high and its false alarm rate of safe liquids quite low. It uses a light source with wide spectrum such as a halogen lamp. Recently a variety of LEDs have been developed and some of them have near infrared spectrum. Here a near infrared LED is tested as a light source of the liquid explosive detector. Three infrared LEDs that have a main peak of spectrum at 901nm, 936nm, and 1028 nm have been used as a light source to scan liquids. Spectrum widths of these LEDs are quite narrow typically less than 100 nm. Ten typical liquids have been evaluated by these LEDs and the correlation coefficients of a spectrum by an LED and a tungsten lamp were more than 0.98. This experiment shows that the infrared LED can be used as a light source for the liquid scanner. An LED has some merits, such as long life of more than some ten thousand hours and small consumption electric power of less than 0.2 W. When the LED is used as a light source for the liquid scanner, it is also more compact and handy.","PeriodicalId":348143,"journal":{"name":"SPIE Security + Defence","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116661602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This new security development is expected to increase interest from Northern European states in supporting the development of conceptually new stealthy ground platforms, incorporating a decade of advances in technology and experience from stealth platforms at sea and in the air. The scope of this case study is to draw experience from where we left off. At the end of the 1990s there was growing interest in stealth for combat vehicles in Sweden, and an ambitious technology demonstrator project was launched. One of the outcomes was a proposed systems engineering process tailored for signature management, presented to SPIE in 2002 (Olsson et al., A systems approach…, Proc. SPIE 4718). The process was used in the Swedish/BAE Systems Hägglunds AB development of a multirole armored platform (Swedish acronym: SEP). Before development was completed, Swedish procurement policy changed from domestic development towards governmental off-the-shelf acquisition, preceded by a Swedish Armed Forces change of focus from national defense only towards expeditionary missions. Lessons learned, of value for future development, are presented. They are deduced from interviews of key personnel on the procurer and industry sides, respectively, and from document reviews.
{"title":"A systems approach to stealth on the ground revisited","authors":"K. Andersson, H. Kariis, G. Hult","doi":"10.1117/12.2194844","DOIUrl":"https://doi.org/10.1117/12.2194844","url":null,"abstract":"This new security development is expected to increase interest from Northern European states in supporting the development of conceptually new stealthy ground platforms, incorporating a decade of advances in technology and experiences from stealth platforms at sea and in the air. The scope of this case study is to draw experience from where we left off. At the end of the 1990s there was growing interest in stealth for combat vehicles in Sweden. An ambitious technology demonstrator project was launched. One of the outcomes was a proposed Systems Engineering process tailored for signature management, presented to SPIE in 2002 (Olsson et al., A systems approach…, Proc. SPIE 4718). The process was used for the Swedish/BAE Systems Hägglunds AB development of a multirole armored platform (the Swedish acronym is SEP). Before development was completed, there was a change of procurement policy in Sweden from domestic development towards Governmental Off-The-Shelf, preceded by a Swedish Armed Forces change of focus from national defense only towards expeditionary missions. Lessons learned, of value for future development, are presented. They are deduced from interviews of key personnel, on the procurer and industry sides respectively, and from document reviews.","PeriodicalId":348143,"journal":{"name":"SPIE Security + Defence","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122038844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Carestia, R. Pizzoferrato, M. Lungaroni, J. Gabriele, G. Ludovici, O. Cenciarelli, M. Gelfusa, A. Murari, A. Malizia, P. Gaudio
With the aim of identifying an approach that exploits the differences in the fluorescence signatures of biological agents (BAs), we have investigated the response of some BA simulants to a set of different excitation wavelengths in the UV spectral range (i.e. 266, 273, 280, 300, 340, and 355 nm). Our preliminary results on bacterial spores and vegetative forms, dispersed in water, showed that the differences in the fluorescence spectra can be enhanced, and more easily revealed, by using different excitation wavelengths. Specifically, the photoluminescence (PL) spectra from different species of Bacillus, in the form of spores (used as simulants of Bacillus anthracis), show significant differences under excitation at all the wavelengths, with slightly larger differences at 300, 340, and 355 nm. On the other hand, the vegetative forms of two Bacillus species did not show any appreciable difference, i.e. the PL spectra are virtually identical, for the excitation wavelengths of 266, 273, and 280 nm. Conversely, small yet appreciable differences appear at 300, 340, and 355 nm. Finally, large differences appear between the spore and the vegetative form of each species at all the wavelengths, with slightly larger variations at 300, 340, and 355 nm. Together, these preliminary results support the hypothesis that a multi-wavelength approach could be used to improve the sensitivity and specificity of UV-LIF based BA detection systems. The second step of this work concerns the application of a Support Vector Regression (SVR) method, as evaluated in our previous work, to define a methodology for the setup of a multispectral database for the stand-off detection of BAs.
{"title":"Multispectral analysis of biological agents to implement a quick tool for stand-off biological detection","authors":"M. Carestia, R. Pizzoferrato, M. Lungaroni, J. Gabriele, G. Ludovici, O. Cenciarelli, M. Gelfusa, A. Murari, A. Malizia, P. Gaudio","doi":"10.1117/12.2194988","DOIUrl":"https://doi.org/10.1117/12.2194988","url":null,"abstract":"With the aim of identifying an approach that exploits the differences in the fluorescence signatures of biological agents (BAs), we have investigated the response of some BA simulants to a set of different excitation wavelengths in the UV spectral range (i.e. 266, 273, 280, 300, 340, and 355 nm). Our preliminary results on bacterial spores and vegetative forms, dispersed in water, showed that the differences in the fluorescence spectra can be enhanced, and more easily revealed, by using different excitation wavelengths. Specifically, the photoluminescence (PL) spectra from different species of Bacillus, in the form of spores (used as simulants of Bacillus anthracis), show significant differences under excitation at all the wavelengths, with slightly larger differences at 300, 340, and 355 nm. On the other hand, the vegetative forms of two Bacillus species did not show any appreciable difference, i.e. the PL spectra are virtually identical, for the excitation wavelengths of 266, 273, and 280 nm. Conversely, small yet appreciable differences appear at 300, 340, and 355 nm. Finally, large differences appear between the spore and the vegetative form of each species at all the wavelengths, with slightly larger variations at 300, 340, and 355 nm. Together, these preliminary results support the hypothesis that a multi-wavelength approach could be used to improve the sensitivity and specificity of UV-LIF based BA detection systems. The second step of this work concerns the application of a Support Vector Regression (SVR) method, as evaluated in our previous work, to define a methodology for the setup of a multispectral database for the stand-off detection of BAs.","PeriodicalId":348143,"journal":{"name":"SPIE Security + Defence","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125080108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
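The SVR step the abstract mentions is, at its core, a regression over responses measured at the six excitation wavelengths. The authors' actual method is not given here; as a rough sketch of the same regression idea, a small NumPy kernel ridge regressor (a stand-in for SVR, not the paper's implementation) can interpolate a spectral response curve. All intensity values below are hypothetical, and 320 nm is an arbitrary query wavelength, not one used in the paper.

```python
import numpy as np

def rbf_kernel(x, y, gamma):
    """Gaussian RBF kernel matrix between 1-D sample arrays x and y."""
    return np.exp(-gamma * (x[:, None] - y[None, :]) ** 2)

def fit_kernel_ridge(x_train, y_train, gamma=1e-3, alpha=1e-2):
    """Kernel ridge regression: a simple stand-in for the SVR step."""
    K = rbf_kernel(x_train, x_train, gamma)
    coef = np.linalg.solve(K + alpha * np.eye(len(x_train)), y_train)
    return lambda x_new: rbf_kernel(x_new, x_train, gamma) @ coef

# The six excitation wavelengths used in the paper, paired with
# hypothetical PL intensities (illustrative, not measured data).
excitation_nm = np.array([266.0, 273.0, 280.0, 300.0, 340.0, 355.0])
intensity = np.array([0.40, 0.45, 0.52, 0.70, 0.88, 0.95])

predict = fit_kernel_ridge(excitation_nm, intensity)
estimate = predict(np.array([320.0]))  # smoothed response at an unmeasured wavelength
```

A database built this way would hold one fitted response curve per agent or simulant, so that a measured multi-wavelength signature can be matched against the curves rather than against raw samples.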