Evolving spiking neural networks: A novel growth algorithm corrects the teacher
J. Schaffer
Pub Date: 2015-05-26 | DOI: 10.1109/CISDA.2015.7208630 | pp. 1-8
Spiking neural networks (SNNs) have generated considerable excitement because of their computational properties, which are believed to be superior to those of conventional von Neumann machines and which they share with living brains. Yet progress in building these systems has been limited because we lack a design methodology. We present a gene-driven network growth algorithm that enables a genetic algorithm (evolutionary computation) to generate and test SNNs. The genome length for this algorithm grows as O(n), where n is the number of neurons; n itself is also evolved. The genome specifies not only the network topology but all of its parameters as well. In experiments, the algorithm discovered SNNs that produce robust spike-bursting behavior given tonic inputs, an application suitable for central pattern generators. Even though evolution did not include perturbations of the input spike trains, the evolved networks showed remarkable robustness to such perturbations. On a second task, a sequence detector, several related discriminating designs were found; all made "errors" in that they fired when input spikes were simultaneous (i.e., not strictly in sequence), but not when they were out of sequence. They also fired when the spikes arrived too close together for the teacher to have declared them in sequence. That is, evolution produced these behaviors even though it was not explicitly rewarded for doing so. We are optimistic that this technology might be scaled up to produce robust SNN designs that humans would be hard pressed to produce.
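The abstract does not give the encoding itself, but the idea of a linear, gene-per-neuron genome can be sketched. The Python below is a hypothetical illustration only: the NeuronGene fields, the wiring rule in grow_network, and the mutation rates are assumptions, not the authors' representation; it merely shows how genome length can stay O(n) while n itself is mutated.

```python
# Hypothetical sketch of an O(n) genome for growing an SNN: one gene per neuron,
# holding that neuron's parameters and an outgoing-connection rule. Names, fields,
# and rates are illustrative assumptions, not the paper's actual encoding.
import random
from dataclasses import dataclass, field

@dataclass
class NeuronGene:
    threshold: float      # firing threshold of the unit
    decay: float          # membrane leak per time step
    fanout: int           # how many outgoing synapses this neuron grows
    weight_scale: float   # scale applied to the weights of those synapses

def random_gene():
    return NeuronGene(threshold=random.uniform(0.5, 1.5),
                      decay=random.uniform(0.8, 0.99),
                      fanout=random.randint(1, 3),
                      weight_scale=random.uniform(-1.0, 1.0))

@dataclass
class Genome:
    genes: list = field(default_factory=list)   # length == number of neurons (O(n))

    def mutate(self, rate=0.1):
        """Point-mutate parameters and occasionally grow or prune a neuron."""
        for g in self.genes:
            if random.random() < rate:
                g.threshold += random.gauss(0, 0.05)
            if random.random() < rate:
                g.weight_scale += random.gauss(0, 0.05)
        if random.random() < rate:                               # n itself is evolved:
            self.genes.append(random_gene())                     # add a neuron, or
        elif len(self.genes) > 2 and random.random() < rate:
            self.genes.pop(random.randrange(len(self.genes)))    # remove one

def grow_network(genome):
    """Deterministically expand a genome into (neurons, synapses)."""
    n = len(genome.genes)
    synapses = []
    for i, g in enumerate(genome.genes):
        for k in range(g.fanout):
            j = (i + k + 1) % n                      # simple deterministic wiring rule
            synapses.append((i, j, g.weight_scale / (k + 1)))
    return genome.genes, synapses

genome = Genome([random_gene() for _ in range(5)])
genome.mutate()
neurons, synapses = grow_network(genome)
print(len(neurons), "neurons,", len(synapses), "synapses")
```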
{"title":"Evolving spiking neural networks: A novel growth algorithm corrects the teacher","authors":"J. Schaffer","doi":"10.1109/CISDA.2015.7208630","DOIUrl":"https://doi.org/10.1109/CISDA.2015.7208630","url":null,"abstract":"Spiking neural networks (SNNs) have generated considerable excitement because of their computational properties, believed to be superior to conventional von Neumann machines, and sharing properties with living brains. Yet progress building these systems has been limited because we lack a design methodology. We present a gene-driven network growth algorithm that enables a genetic algorithm (evolutionary computation) to generate and test SNNs. The genome length for this algorithm grows O(n) where n is the number of neurons; n is also evolved. The genome not only specifies the network topology, but all its parameters as well. In experiments, the algorithm discovered SNNs that effectively produce a robust spike bursting behavior given tonic inputs, an application suitable for central pattern generators. Even though evolution did not include perturbations of the input spike trains, the evolved networks showed remarkable robustness to such perturbations. On a second task, a sequence detector, several related discriminating designs were found, all made “errors” in that they fired when input spikes were simultaneous (i.e. not strictly in sequence), but not when they were out of sequence. They also fired when the sequence was too close for the teacher to have declared they were in sequence. That is, evolution produced these behaviors even though it was not explicitly rewarded for doing so. We are optimistic that this technology might be scaled up to produce robust SNN designs that humans would be hard pressed to produce.","PeriodicalId":6407,"journal":{"name":"2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications","volume":"4 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2015-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90701231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Local feature analysis for robust face recognition
E. F. Ersi, John K. Tsotsos
Pub Date: 2009-07-08 | DOI: 10.1109/CISDA.2009.5356524 | pp. 1-6
In this paper, a novel technique for face recognition is proposed. Using the statistical Local Feature Analysis (LFA) technique, a set of feature points is extracted from each face image at the locations with the highest deviations from the statistically expected face. Each feature point is described by a set of Gabor wavelet responses at different frequencies and orientations. A triangle-inequality-based pruning algorithm is developed for fast matching: it automatically chooses a set of key features from the database of model features and uses the pre-computed distances from the keys to the database, along with the triangle inequality, to quickly compute lower bounds on the distances from a query feature to the database and eliminate unnecessary direct comparisons. Our proposed technique achieves perfect results on the ORL face set and an accuracy of 99.1% on the FERET face set, demonstrating its superiority over all considered state-of-the-art face recognition methods.
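The triangle-inequality pruning step lends itself to a short sketch. The Python below is a minimal illustration, not the paper's implementation: keys are chosen by random sampling rather than the automatic selection described above, and plain Euclidean distance stands in for the Gabor-jet comparison. It shows how precomputed key-to-database distances give the lower bound |d(q,k) - d(k,x)| <= d(q,x), which lets most direct comparisons be skipped.

```python
# Minimal sketch of triangle-inequality pruning for nearest-feature search.
# Key selection and the distance function are illustrative assumptions.
import numpy as np

def build_index(database, keys):
    """Precompute distances from every database feature to every key feature."""
    return np.linalg.norm(database[:, None, :] - keys[None, :, :], axis=2)  # (N, K)

def nearest(query, database, keys, key_to_db):
    d_q_keys = np.linalg.norm(keys - query, axis=1)           # query -> key distances
    # Triangle inequality: |d(q,k) - d(k,x)| <= d(q,x), so the max over keys
    # is a valid lower bound on the true distance to each database feature.
    lower = np.abs(key_to_db - d_q_keys[None, :]).max(axis=1)
    best_d, best_i = np.inf, -1
    for i in np.argsort(lower):                                # most promising first
        if lower[i] >= best_d:
            break                                              # bound prunes the rest
        d = np.linalg.norm(database[i] - query)                # direct comparison
        if d < best_d:
            best_d, best_i = d, i
    return best_i, best_d

rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 40))                               # e.g. Gabor-jet vectors
keys = db[rng.choice(len(db), 8, replace=False)]
index = build_index(db, keys)
print(nearest(rng.normal(size=40), db, keys, index))
```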
{"title":"Local feature analysis for robust face recognition","authors":"E. F. Ersi, John K. Tsotsos","doi":"10.1109/CISDA.2009.5356524","DOIUrl":"https://doi.org/10.1109/CISDA.2009.5356524","url":null,"abstract":"In this paper a novel technique for face recognition is proposed. Using the statistical Local Feature Analysis (LFA) technique, a set of feature points is extracted from each face image, at locations with highest deviations from the statistical expected face. Each feature point is described by a set of Gabor wavelet responses at different frequencies and orientations. A triangle-inequality-based pruning algorithm is developed for fast matching, which automatically chooses a set of key features from the database of model features and uses the pre-computed distances of the keys to the database, along with the triangle inequality, in order to speedily compute lower bounds on the distances from a query feature to the database, and eliminate the unnecessary direct comparisons. Our proposed technique achieves perfect results on the ORL face set and an accuracy rate of 99.1% on the FERET face set, which shows the superiority of the proposed technique over all considered state-of-the-art face recognition methods.","PeriodicalId":6407,"journal":{"name":"2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications","volume":"102 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2009-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74811574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BlastSim — Multi agent simulation of suicide bombing
Zeeshan-ul-hassan Usmani, F. Alghamdi, D. Kirk
Pub Date: 2009-07-08 | DOI: 10.1109/CISDA.2009.5356529 | pp. 1-8
This paper introduces BlastSim, a physics-based, stationary multi-agent simulation of blast waves and their impact on the human body. The agents are constrained by the physical characteristics and mechanics of the blast wave. The simulation is capable of assessing the impact of crowd formation patterns on the magnitude of injury and the number of casualties during a suicide bombing attack. It also examines variables such as the number and arrangement of people within a crowd for typical layouts, the number of suicide bombers, and the nature of the explosion, including the equivalent weight of TNT and the duration of the resulting blast wave pulse. The paper also explains the physics, explosive models, mathematics, and assumptions needed to create such a simulation. Furthermore, it describes human shields available in the crowd, with partial and full coverage, in both two-dimensional and three-dimensional environments. The goal of this paper is to determine optimal crowd formations that reduce the deaths and/or injuries of individuals in the crowd. The findings, although preliminary, may have implications for forensic investigations, emergency response, and counterterrorism.
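As a rough indication of the kind of physics involved, the sketch below uses the standard Friedlander idealization of blast overpressure together with a crude line-of-sight shielding rule. It is an assumption-laden toy, not BlastSim's explosive or injury model; the peak pressure, pulse duration, decay with distance, shielding factor, and crowd layout are all illustrative.

```python
# Toy sketch: Friedlander overpressure pulse plus a crude shielding heuristic.
# All parameters and the injury proxy are illustrative assumptions.
import math

def friedlander(t, p_peak, t_dur, b=1.0):
    """Idealized overpressure (same units as p_peak) at time t after shock arrival."""
    if t < 0 or t > t_dur:
        return 0.0
    return p_peak * (1.0 - t / t_dur) * math.exp(-b * t / t_dur)

def exposure(agent_xy, bomber_xy, others, p_peak, t_dur):
    """Very rough peak exposure: inverse-square decay with distance, halved for every
    other agent standing (approximately) on the line between bomber and agent."""
    ax, ay = agent_xy
    bx, by = bomber_xy
    r = math.hypot(ax - bx, ay - by) or 0.1
    peak = friedlander(0.0, p_peak, t_dur) / r**2
    shields = sum(1 for (ox, oy) in others
                  if abs((ax - bx) * (oy - by) - (ay - by) * (ox - bx)) / r < 0.3
                  and min(bx, ax) <= ox <= max(bx, ax))
    return peak * (0.5 ** shields)

# Example: grid-shaped crowd, bomber at the origin.
crowd = [(x, y) for x in range(1, 5) for y in range(-2, 3)]
for person in crowd[:3]:
    print(person, round(exposure(person, (0, 0), [p for p in crowd if p != person],
                                 p_peak=1000.0, t_dur=0.005), 2))
```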
{"title":"BlastSim — Multi agent simulation of suicide bombing","authors":"Zeeshan-ul-hassan Usmani, F. Alghamdi, D. Kirk","doi":"10.1109/CISDA.2009.5356529","DOIUrl":"https://doi.org/10.1109/CISDA.2009.5356529","url":null,"abstract":"This paper introduces BlastSim — physics based stationary multi-agent simulation of blast waves and its impact on human body. The agents are constrained by physical characteristics and mechanics of blast wave. The simulation is capable of assessing the impact of crowd formation patterns on the magnitude of injury and number of casualties during a suicide bombing attack. It also examines variables such as the number and arrangement of people within a crowd for typical layouts, the number of suicide bombers, and the nature of the explosion including equivalent weight of TNT, and the duration of the resulting blast wave pulse. The paper also explains the physics, explosive models, mathematics and the assumptions we need to create such a simulation. Furthermore, it also describes human shields available in the crowd with partial and full coverage in both two dimensional and three dimensional environments. The goals of this paper are to determine optimal crowd formations to reduce the deaths and/or injuries of individuals in the crowd. The findings, although preliminary, may have implications for forensics investigations, emergency response and counterterrorism.","PeriodicalId":6407,"journal":{"name":"2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications","volume":"19 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2009-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76925118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Collaborative architecture for distributed intrusion detection system
Safaa Zaman, F. Karray
Pub Date: 2009-07-08 | DOI: 10.1109/CISDA.2009.5356567 | pp. 1-7
Due to the rapid growth of network technologies and substantial improvements in attack tools and techniques, a distributed Intrusion Detection System (dIDS) is required, allocating multiple IDSs across a network to monitor security events and collect data. However, dIDS architectures suffer from many limitations, such as the lack of a central analyzer and a heavy network load. In this paper, we propose a new architecture for dIDS, called the Collaborative architecture for dIDS (C-dIDS), to overcome these limitations. C-dIDS is a one-level-hierarchy dIDS with a non-central analyzer. To make the detection decision for a specific IDS module in the system, that module collaborates with the IDS at the lower level of the hierarchy. Cooperating with the lower-level IDS module improves the system's accuracy with little added network load (just one bit of information). Moreover, because there is only one hierarchy level, there is no central management or processing of data, and hence no single point of failure. We have examined the feasibility of our dIDS architecture by conducting several experiments using the DARPA dataset. The experimental results indicate that the proposed architecture delivers satisfactory system performance with a low network load.
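The one-bit collaboration can be illustrated with a small sketch. The detector, thresholds, and combination rule below are illustrative assumptions, not the paper's trained modules; the point is only that a single bit from the lower-level IDS can tip a borderline local decision into an alert.

```python
# Sketch of one-bit collaborative detection between two IDS modules.
# Scores, thresholds, and the combination rule are illustrative assumptions.
def local_alert(features, threshold=0.7):
    """Stand-in for a local anomaly detector: returns (alert_bit, score)."""
    score = sum(features) / len(features)
    return int(score >= threshold), score

def collaborative_decision(own_features, lower_level_bit, threshold=0.7):
    own_bit, own_score = local_alert(own_features, threshold)
    # One-bit collaboration: a confirmation from the lower-level IDS lowers the
    # effective threshold, catching attacks a single sensor would miss.
    if lower_level_bit:
        return int(own_score >= threshold - 0.2)
    return own_bit

# Toy run: the lower-level module's single bit tips a borderline case into an alert.
lower_bit, _ = local_alert([0.9, 0.8, 0.6])
print(collaborative_decision([0.6, 0.55, 0.6], lower_bit))
```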
{"title":"Collaborative architecture for distributed intrusion detection system","authors":"Safaa Zaman, F. Karray","doi":"10.1109/CISDA.2009.5356567","DOIUrl":"https://doi.org/10.1109/CISDA.2009.5356567","url":null,"abstract":"Due to the rapid growth of network technologies and substantial improvement in attack tools and techniques, a distributed Intrusion Detection System (dIDS) is required to allocate multiple IDSs across a network to monitor security events and to collect data. However, dIDS architectures suffer from many limitations such as the lack of a central analyzer and a heavy network load. In this paper, we propose a new architecture for dIDS, called a Collaborative architecture for dIDS (C-dIDS), to overcome these limitations. The C-dIDS contains one-level hierarchy dIDS with a non-central analyzer. To make the detection decision for a specific IDS module in the system, this IDS module needs to collaborate with the IDS in the lower level of the hierarchy. Cooperating with lower level IDS module improves the system accuracy with less network load (just one bit of information). Moreover, by using one hierarchy level, there is no central management and processing of data so there is no chance for a single point of failure. We have examined the feasibility of our dIDS architecture by conducting several experiments using the DARPA dataset. The experimental results indicate that the proposed architecture can deliver satisfactory system performance with less network load.","PeriodicalId":6407,"journal":{"name":"2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications","volume":"26 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2009-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78155713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust line extraction based on repeated segment directions on image contours
Andrés Solís Montero, A. Nayak, M. Stojmenovic, N. Zaguia
Pub Date: 2009-07-08 | DOI: 10.1109/CISDA.2009.5356556 | pp. 1-7
This paper describes a new line segment detection and extraction algorithm for computer vision, image segmentation, and shape recognition applications. This is an important pre-processing step in detecting, recognizing, and classifying military hardware in images. The algorithm combines several image processing steps, such as normalization, Gaussian smoothing, thresholding, and Laplace edge detection, to extract edge contours from colour input images. The contours of each connected component are divided into short segments, which are classified by their orientation into nine discrete categories. Straight lines are recognized as the minimal number of consecutive short segments with the same direction. This solution gives a more accurate, faster, and simpler answer, with fewer parameters, than the widely used Hough Transform algorithm for detecting line segments at any orientation and location in an image. Its easy implementation, simplicity, speed, ability to divide an edge into straight line segments using the actual morphology of objects, inclusion of endpoint information, and use of the OpenCV library are key features and advantages of this solution. The algorithm was tested on several simple shape images as well as real pictures, giving greater accuracy than existing procedures based on the Hough Transform. The line detection algorithm is robust to image transformations such as rotation, scaling, and translation, and to the selection of parameter values.
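The segment-merging idea can be sketched compactly. The Python below is an illustration under assumed parameters (step length, nine orientation bins over 180 degrees, minimum run length); the paper's preprocessing chain and its exact orientation categories are not reproduced.

```python
# Sketch: walk a contour in short steps, bucket each step's direction into a small
# number of orientation bins, and merge runs of same-bin steps into straight
# segments with explicit endpoints. Parameters are illustrative assumptions.
import math

def direction_bin(p, q, bins=9):
    angle = math.atan2(q[1] - p[1], q[0] - p[0]) % math.pi   # orientation, not heading
    return int(angle / (math.pi / bins)) % bins

def extract_lines(contour, step=3, min_run=3):
    """contour: ordered list of (x, y) points. Returns [(start, end, bin), ...]."""
    samples = contour[::step]
    lines, run_start, run_bin = [], 0, None
    for i in range(1, len(samples)):
        b = direction_bin(samples[i - 1], samples[i])
        if b != run_bin:
            # Close the previous run if it was long enough to count as a line.
            if run_bin is not None and i - run_start >= min_run:
                lines.append((samples[run_start], samples[i - 1], run_bin))
            run_start, run_bin = i - 1, b
    if run_bin is not None and len(samples) - run_start > min_run:
        lines.append((samples[run_start], samples[-1], run_bin))
    return lines

# Toy contour: an L-shape traced point by point; two segments should come out.
corner = [(x, 0) for x in range(0, 30)] + [(29, y) for y in range(1, 30)]
print(extract_lines(corner))
```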
{"title":"Robust line extraction based on repeated segment directions on image contours","authors":"Andrés Solís Montero, A. Nayak, M. Stojmenovic, N. Zaguia","doi":"10.1109/CISDA.2009.5356556","DOIUrl":"https://doi.org/10.1109/CISDA.2009.5356556","url":null,"abstract":"This paper describes a new line segment detection and extraction algorithm for computer vision, image segmentation, and shape recognition applications. This is an important pre processing step in detecting, recognizing and classifying military hardware in images. This algorithm uses a compilation of different image processing steps such as normalization, Gaussian smooth, thresholding, and Laplace edge detection to extract edge contours from colour input images. Contours of each connected component are divided into short segments, which are classified by their orientation into nine discrete categories. Straight lines are recognized as the minimal number of such consecutive short segments with the same direction. This solution gives us a surprisingly more accurate, faster and simpler answer with fewer parameters than the widely used Hough Transform algorithm for detecting lines segments among any orientation and location inside images. Its easy implementation, simplicity, speed, the ability to divide an edge into straight line segments using the actual morphology of objects, inclusion of endpoint information, and the use of the OpenCV library are key features and advantages of this solution procedure. The algorithm was tested on several simple shape images as well as real pictures giving more accuracy than the actual procedures based in Hough Transform. This line detection algorithm is robust to image transformations such as rotation, scaling and translation, and to the selection of parameter values.","PeriodicalId":6407,"journal":{"name":"2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications","volume":"135 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2009-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80182994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating reasoning with personality effects in simulated operators
R. Guo, B. Cain
Pub Date: 2009-07-08 | DOI: 10.1109/CISDA.2009.5356538 | pp. 1-8
Human behaviour representation (HBR) plays a significant role in Computer-Generated Forces (CGFs). However, current HBR lacks realistic, human-like characteristics, and many domain experts believe that integrating personality and individual differences into existing CGF simulation tools can improve HBR. This article describes an approach to representing reasoning with personality effects in variable cognitive processes in CGFs. In combination with the Integrated Performance Modelling Environment (IPME), the proposed representation provides a high-level reasoning tool for human behaviour modelling. The approach handles both deterministic reasoning and reasoning under uncertainty, subject to the influence of personality, and supports user-defined personality models in simulated military operators. This research shows that it is possible to integrate reasoning with personality effects into performance simulation engines in CGFs to improve current HBR.
{"title":"Integrating reasoning with personality effects in simulated operators","authors":"R. Guo, B. Cain","doi":"10.1109/CISDA.2009.5356538","DOIUrl":"https://doi.org/10.1109/CISDA.2009.5356538","url":null,"abstract":"Human behaviour representation (HBR) is playing a significant role in Computer-Generated Forces (CGFs). However, current HBR lacks realistic and human-like characteristics, and many domain experts think that integrating personality and individual differences into existing simulation tools in CGFs is able to improve HBR. This article describes an approach to represent reasoning with personality effects in variable cognitive processes in CGFs. By combining the Integrated Performance Modelling Environment (IPME), the proposed representation provides a high level reasoning tool for human behaviour modelling. This approach deals with deterministic and uncertainty reasoning with influence of personality and supports user-defined personality models in simulated military operators. This research shows the possibility to integrate the reasoning with personality effects into performance simulation engines in CGFs for improving current HBR.","PeriodicalId":6407,"journal":{"name":"2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications","volume":"175 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2009-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76931186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of the 1999 DARPA/Lincoln Laboratory IDS evaluation data with NetADHICT
Carson D. Brown, Alex Cowperthwaite, Abdulrahman Hijazi, Anil Somayaji
Pub Date: 2009-07-08 | DOI: 10.1109/CISDA.2009.5356522 | pp. 1-7
The 1999 DARPA/Lincoln Laboratory IDS Evaluation Data has been widely used in the intrusion detection and networking community, even though it is known to have a number of artifacts. Here we show that many of these artifacts, including the lack of damaged or unusual background packets and uniform host distribution, can be easily extracted using NetADHICT, a tool we developed for understanding networks. In addition, using NetADHICT we were able to identify extreme temporal variation in the data, a characteristic that was not identified in past analyses. These results illustrate the utility of NetADHICT in characterizing network traces for experimental purposes.
{"title":"Analysis of the 1999 DARPA/Lincoln Laboratory IDS evaluation data with NetADHICT","authors":"Carson D. Brown, Alex Cowperthwaite, Abdulrahman Hijazi, Anil Somayaji","doi":"10.1109/CISDA.2009.5356522","DOIUrl":"https://doi.org/10.1109/CISDA.2009.5356522","url":null,"abstract":"The 1999 DARPA/Lincoln Laboratory IDS Evaluation Data has been widely used in the intrusion detection and networking community, even though it is known to have a number of artifacts. Here we show that many of these artifacts, including the lack of damaged or unusual background packets and uniform host distribution, can be easily extracted using NetADHICT, a tool we developed for understanding networks. In addition, using NetADHICT we were able to identify extreme temporal variation in the data, a characteristic that was not identified in past analyses. These results illustrate the utility of NetADHICT in characterizing network traces for experimental purposes.","PeriodicalId":6407,"journal":{"name":"2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications","volume":"13 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2009-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86332874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Goal-driven semi-automated generation of semantic models
A. Stirtzinger, C. Anken, B. McQueary
Pub Date: 2009-07-08 | DOI: 10.1109/CISDA.2009.5356518 | pp. 1-7
The approach taken with OGEP is to parse relevant domain data in the form of unstructured content (a corpus) and use that knowledge to generate and/or evolve an existing ontology. OGEP maintains a continual conversation between the corpus parser and a reasoning mechanism (the corpus reasoner) that formulates potential ontology modifications in the form of hypotheses. These hypotheses are weighted by contextual relevance and further reasoned over to provide a confidence measure, which is used to decide which new assertions to add to the ontology. New assertions generated by the corpus reasoner can either be asserted automatically, based on the confidence measure, or asserted after OGEP interacts with a user for final approval. This paper describes the OGEP technology in terms of its architectural components and identifies a potential technology transition path to Scott AFB's Tanker Airlift Control Center (TACC), which serves as the Air Operations Center (AOC) for the Air Mobility Command (AMC).
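The generate-weigh-assert loop can be sketched schematically. The class names, toy confidence measure, and threshold below are illustrative assumptions, not OGEP's components; the sketch only shows how hypotheses above a confidence threshold could be committed automatically while the rest are queued for user approval.

```python
# Schematic sketch of a hypothesis -> confidence -> assert-or-review loop.
# Classes, fields, and the confidence formula are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    subject: str
    relation: str
    object: str
    relevance: float      # contextual-relevance weight from the corpus reasoner

def confidence(h: Hypothesis, supporting_mentions: int) -> float:
    """Toy confidence measure: relevance tempered by corpus support."""
    return h.relevance * min(1.0, supporting_mentions / 3.0)

def process(hypotheses, mentions, auto_threshold=0.8):
    asserted, for_review = [], []
    for h in hypotheses:
        c = confidence(h, mentions.get((h.subject, h.object), 0))
        (asserted if c >= auto_threshold else for_review).append((h, round(c, 2)))
    return asserted, for_review

hyps = [Hypothesis("TACC", "is_a", "AirOperationsCenter", 0.9),
        Hypothesis("TACC", "located_at", "ScottAFB", 0.7)]
mentions = {("TACC", "AirOperationsCenter"): 5, ("TACC", "ScottAFB"): 1}
auto, review = process(hyps, mentions)
print("auto-asserted:", auto)
print("needs user approval:", review)
```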
{"title":"Goal-driven semi-automated generation of semantic models","authors":"A. Stirtzinger, C. Anken, B. McQueary","doi":"10.1109/CISDA.2009.5356518","DOIUrl":"https://doi.org/10.1109/CISDA.2009.5356518","url":null,"abstract":"The approach taken with OGEP is to parse relevant domain data in the form of unstructured content (or corpus) and use that knowledge to generate and/or evolve an existing ontology. OGEP creates a constant conversation between the corpus parser and a reasoning mechanism (corpus reasoner) that continually formulates potential ontology modifications in the form of hypotheses. These hypotheses are weighted towards contextual relevancy and further reasoned over to provide a confidence measure for use in deciding new assertions to the ontology. The new assertions generated from the corpus reasoner can either be automatically asserted based on confidence measure, or can be asserted by OGEP interacting with a user for final approval. This paper describes the OGEP technology in the context of the architectural components and identifies a potential technology transition path to Scott AFB's Tanker Airlift Control Center (TACC), which serves as the Air Operations Center (AOC) for the Air Mobility Command (AMC).","PeriodicalId":6407,"journal":{"name":"2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications","volume":"87 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2009-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80778734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Localization of door handle using a single camera on a door opening mobile manipulator
Dmitri Ignakov, G. Okouneva, Guangjun Liu
Pub Date: 2009-07-08 | DOI: 10.1109/CISDA.2009.5356543 | pp. 1-7
This paper presents a novel approach to localizing a door handle of unknown geometry to assist in autonomous door opening. The localization is performed using data from a single CCD camera mounted at the end-effector of a mobile manipulator. The proposed algorithm extracts a 3D point cloud using optical flow and the known camera motion provided by the manipulator. Segmentation of the point cloud is then performed, separating the door points from the handle points, followed by fitting a bounding box to the door handle data. The fitted box can then be used to guide robotic grasping. The proposed algorithm has been validated using a 3D virtual scene, and the results demonstrate its effectiveness in localizing a door handle in an unknown environment.
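The segmentation-and-fitting stage can be illustrated with a short sketch. The least-squares plane model, protrusion threshold, and synthetic point cloud below are assumptions for illustration; the optical-flow reconstruction step and the paper's actual segmentation method are not reproduced.

```python
# Sketch: fit the dominant plane (the door) by least squares, call points that
# protrude beyond a threshold the handle, and fit an axis-aligned bounding box.
# Threshold, plane model, and synthetic data are illustrative assumptions.
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through Nx3 points."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return a, b, c

def segment_handle(points, protrusion=0.02):
    a, b, c = fit_plane(points)                       # door plane (dominant surface)
    residual = points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c)
    handle = points[residual > protrusion]            # points sticking out of the door
    if len(handle) == 0:
        return None
    return handle.min(axis=0), handle.max(axis=0)     # bounding box corners

# Synthetic cloud: a flat door at z ~ 0 plus a small handle protruding ~4 cm.
rng = np.random.default_rng(1)
door = np.c_[rng.uniform(0, 0.9, 2000), rng.uniform(0, 2.0, 2000),
             rng.normal(0, 0.003, 2000)]
handle = np.c_[rng.uniform(0.75, 0.85, 200), rng.uniform(0.95, 1.05, 200),
               rng.uniform(0.03, 0.04, 200)]
print(segment_handle(np.vstack([door, handle])))
```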
{"title":"Localization of door handle using a single camera on a door opening mobile manipulator","authors":"Dmitri Ignakov, G. Okouneva, Guangjun Liu","doi":"10.1109/CISDA.2009.5356543","DOIUrl":"https://doi.org/10.1109/CISDA.2009.5356543","url":null,"abstract":"This paper presents a novel approach to localizing a door handle of unknown geometry to assist in autonomous door opening. The localization is performed using data from a single CCD camera that is mounted at the end-effector of a mobile manipulator. The proposed algorithm extracts a 3D point cloud using optical flow and known camera motion provided by the manipulator. Segmentation of the point cloud is then performed, enabling the separation of the door and the handle points, which is then followed by fitting a boundary box to the door handle data. The fitted box can then be used to guide robotic grasping. The proposed algorithm has been validated using a 3D virtual scene, and the results have demonstrated the effectiveness of the proposed method to localize a door handle in an unknown environment.","PeriodicalId":6407,"journal":{"name":"2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications","volume":"13 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2009-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81989001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulation analysis on destabilization of complex adaptive organizations
Il-Chul Moon
Pub Date: 2009-07-08 | DOI: 10.1109/CISDA.2009.5356520 | pp. 1-8
Many adversarial organizations, such as organized crime groups and terrorist networks, are complex adaptive organizations. Strategies against them should therefore account for this complexity and adaptivity. However, these properties create nonlinear effects that are difficult to predict. To mitigate those difficulties, I utilize agent-based simulations that can capture unexpected responses to interventions in such organizations. This paper presents an example simulation analysis of an action against a terrorist group. In particular, the example highlights three critical aspects of simulation analysis. First, it shows how to set up a simulation analysis to anticipate an intervention's results. Second, it illustrates the variety of results that make the analysis useful. Third, it describes the statistical processing of the results. I expect these three points to advance current practices of simulation analysis on complex adaptive organizations.
{"title":"Simulation analysis on destabilization of complex adaptive organizations","authors":"Il-Chul Moon","doi":"10.1109/CISDA.2009.5356520","DOIUrl":"https://doi.org/10.1109/CISDA.2009.5356520","url":null,"abstract":"Many adversarial organizations, such as organized crime groups, terrorist networks, and the like, are complex adaptive organizations. Therefore, strategies against them should consider the natures of complexity and adaptivity. However, such natures create nonlinear effects that are difficult to predict. To mitigate those difficulties, I utilize agent-based simulations that could possibly capture unexpected responses coming from our interventions into their organizations. This paper presents a simulation analysis example of an action against a terrorist group. Particularly, this example points out three critical aspects of simulation analysis. First, this simulation example shows how to setup a simulation analysis to anticipate an intervention's results. Second, this example illustrates various results making analysis useful. Third, this example describes statistical processing of the results. I expect that the three points will advance the current practices of simulation analysis on complex adaptive organizations.","PeriodicalId":6407,"journal":{"name":"2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications","volume":"19 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2009-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88649117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}