Pub Date: 2013-04-15. DOI: 10.1109/SysCon.2013.6549984
M. Weir, Ross Kulak, Ankur Agarwal
System test development, automation, and execution are key stages of overall product development for both the New Product Introduction (NPI) and Production Release processes. For NPI, companies must create test systems to support product validation and verification. For manufacturing companies, ongoing process metrics ensure the product meets quality specifications and can be sold to customers. This entire test process is time consuming and resource intensive, and therefore negatively impacts overall product net revenue, both in time to market and in development resources. Large and successful companies invest hundreds of thousands of dollars in automated test systems to support product development. Such infrastructures provide a competitive advantage by enabling a systematic methodology to generate test plans and then have each plan flow automatically through the test software and hardware development, test and data collection, and results analysis phases. The Automatic Testing Equipment (ATE) industry has pushed to develop a framework that supports the sharing of test information, data, and analysis results across various enterprise platforms. An IEEE standard known as Automatic Test Markup Language (ATML), comprising an XML schema, was proposed and developed to allow interoperability of test cases, data, equipment information, and results. Our methodology offers a Service-Oriented Architecture that delivers an interoperable solution. Users can begin with a test plan, deploy a scalable data monitoring and analysis capability, and follow the process from NPI through production. Additional capabilities such as advanced analysis, customer data sharing resources, test software generation and deployment, access to closed- and open-source software libraries, test station monitoring and equipment tracking, and automated reporting schedules, among others, can be added to the overall process. The proposed architecture is entirely scalable, can be deployed in single-site or global applications, and may be installed behind corporate firewalls or in the cloud.
Title: Service Oriented Architecture for agile automated testing environment. Published in: 2013 IEEE International Systems Conference (SysCon).
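The interoperability idea above, where test results are expressed in a shared XML schema so any tool in the flow can consume them, can be sketched minimally as below. The element names are invented for illustration and are not the actual ATML schema.

```python
import xml.etree.ElementTree as ET

def serialize_result(test_name, measured, low, high):
    """Build a small XML fragment describing one test outcome.
    Element names are illustrative only, not the real ATML schema."""
    root = ET.Element("TestResult", name=test_name)
    ET.SubElement(root, "Measured").text = str(measured)
    ET.SubElement(root, "Limits", low=str(low), high=str(high))
    outcome = "Passed" if low <= measured <= high else "Failed"
    ET.SubElement(root, "Outcome").text = outcome
    return ET.tostring(root, encoding="unicode")

def parse_outcome(xml_text):
    """Any consumer that understands the shared schema can read the result."""
    return ET.fromstring(xml_text).findtext("Outcome")

xml_text = serialize_result("VoltageCheck", measured=3.31, low=3.15, high=3.45)
print(parse_outcome(xml_text))  # Passed
```

The point is that the producer (test station) and consumer (analysis service) share only the schema, not code.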
Pub Date: 2013-04-15. DOI: 10.1109/SysCon.2013.6549930
Manju Nanda, J. Jayanthi, C. S. Jamadagni, V. Madhan
Tools have become critical in the design, development, and testing of critical systems, and the selection of a proper tool contributes to the success of the project. This paper derives and discusses metrics for selecting an appropriate tool for the intended application. The work discusses tool metrics analysis in the various phases of the engineering process and proposes an integrated framework combining the capabilities of the tools required for a safety-critical application.
Title: Quantitative metrics for improving software performance for an integrated tool platform. Published in: 2013 IEEE International Systems Conference (SysCon).
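A weighted-scoring scheme of the kind such selection metrics typically feed can be sketched as follows; the metric names, weights, and ratings here are illustrative assumptions, not the paper's actual metrics.

```python
def score_tool(ratings, weights):
    """Weighted-sum score for one tool; ratings and weights keyed by metric.
    Metric names below are hypothetical examples."""
    total_w = sum(weights.values())
    return sum(ratings[m] * w for m, w in weights.items()) / total_w

# Illustrative weights and 1-5 ratings for two candidate tools.
weights = {"requirements_traceability": 0.4,
           "code_generation": 0.3,
           "certification_support": 0.3}
tools = {
    "ToolA": {"requirements_traceability": 4, "code_generation": 3,
              "certification_support": 5},
    "ToolB": {"requirements_traceability": 5, "code_generation": 2,
              "certification_support": 3},
}
best = max(tools, key=lambda t: score_tool(tools[t], weights))
print(best)  # ToolA
```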
Pub Date: 2013-04-15. DOI: 10.1109/SysCon.2013.6549894
N. Zhang, P. Behera, Charles Williams
Over the last decade, emphasis on reducing dependency on fossil fuels has driven the growth of renewable energy industries. These industries have been significant economic drivers in many parts of the United States, supported by both government and private sectors. Within renewable energy, solar power generation is growing strongly and often requires prediction of solar energy to develop highly efficient stand-alone photovoltaic systems as well as hybrid power systems. Solar radiation prediction, specifically, is an important component of solar energy production. However, some computational intelligence methods with the most successful applications in time series prediction have not yet been investigated for solar radiation prediction, and only a limited number of neural network models have been applied to solar radiation monitoring. We therefore propose an Elman-style recurrent neural network to predict solar radiation from past solar radiation and solar energy. A hybrid learning algorithm incorporating particle swarm optimization and an evolutionary algorithm is presented, which takes the complementary advantages of the two global optimization algorithms. The neural network model was trained by particle swarm optimization and the evolutionary algorithm to forecast solar radiation. Experimental results demonstrate that the proposed hybrid learning algorithm can be successfully used in a recurrent neural network based prediction model for solar radiation monitoring.
Title: Solar radiation prediction based on particle swarm optimization and evolutionary algorithm using recurrent neural networks. Published in: 2013 IEEE International Systems Conference (SysCon).
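The training scheme can be illustrated with a toy sketch: a minimal Elman-style recurrent network whose flat weight vector is optimized by plain global-best PSO on a synthetic series. The network size, the synthetic series, and all hyperparameters are illustrative assumptions, and the evolutionary half of the paper's hybrid algorithm is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a solar radiation series: a noisy daily cycle.
t = np.arange(200)
series = 0.5 + 0.4 * np.sin(2 * np.pi * t / 24) + 0.02 * rng.standard_normal(t.size)

H = 4                          # hidden units in the tiny Elman net
DIM = H + H * H + H + H + 1    # total number of trainable weights

def unpack(w):
    """Slice a flat particle vector into the Elman net's weights."""
    i = 0
    Wx = w[i:i + H]; i += H                         # input -> hidden
    Wh = w[i:i + H * H].reshape(H, H); i += H * H   # context -> hidden
    bh = w[i:i + H]; i += H                         # hidden bias
    Wo = w[i:i + H]; i += H                         # hidden -> output
    bo = w[i]                                       # output bias
    return Wx, Wh, bh, Wo, bo

def mse(w):
    """One-step-ahead prediction error of the network encoded by w."""
    Wx, Wh, bh, Wo, bo = unpack(w)
    h = np.zeros(H)            # context layer (previous hidden state)
    err = 0.0
    for k in range(series.size - 1):
        h = np.tanh(Wx * series[k] + Wh @ h + bh)   # recurrent step
        err += (Wo @ h + bo - series[k + 1]) ** 2
    return err / (series.size - 1)

# Plain global-best PSO over the flat weight vector.
P = 20
pos = rng.uniform(-1, 1, (P, DIM))
vel = np.zeros((P, DIM))
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
init_best = pbest_f.min()

for _ in range(30):
    r1, r2 = rng.random((P, DIM)), rng.random((P, DIM))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
# pbest_f.min() is now no worse than init_best: personal bests only improve.
```

Treating all network weights as one particle position is what lets a gradient-free optimizer like PSO train a recurrent model without backpropagation through time.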
Pub Date: 2013-04-15. DOI: 10.1109/SysCon.2013.6549863
M. Talamo, M. Galinium, C. Schunck, F. Arcieri
Despite the increased use of smartcards in many areas of everyday life, the secure interoperability of these devices remains a significant challenge. Common Criteria certification ensures the secure operation of a particular smartcard in a specific, closed environment; it does not explicitly consider potential problems in more open environments where different types of smartcards and their corresponding applications are present at the same time. Since both the range of smartcard applications and the number of issuing manufacturers continue to grow, smartcard interoperability cannot be satisfactorily addressed in an isolated testing and certification environment. Ideally, one should be able to certify that adding a new type of smartcard and a new smartcard application to such an environment is safe and free of interoperability problems. In this research we focus on digital signature applications on Common Criteria certified smartcards. We investigated the vulnerabilities of smartcards in such open environments and possible ways to identify and eliminate them using model checking approaches. Here we simulate the interaction of many smartcards with their applications via a common middleware. Each smartcard is assumed to execute a Straight Line Program, which consists of a series of states, or nodes, connected by transitions (no loops). We discuss how these results can be taken into account in the design of new types of middleware that can identify and suppress anomalous transitions. These results will help in designing systems that support multiple smartcard types and applications simultaneously and securely.
Title: Simulation based verification of concurrent processing on security devices. Published in: 2013 IEEE International Systems Conference (SysCon).
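The Straight Line Program model described above lends itself to a small sketch: each card's allowed transitions form an ordered chain, and a middleware check flags any observed transition outside that chain as anomalous. The state names and the two toy cards are illustrative assumptions, not from the paper.

```python
# Each smartcard application is modeled as a straight-line program: an
# ordered list of states with transitions only from one state to the next.
# A middleware check flags any transition outside a card's allowed set.
# All state names below are illustrative assumptions.

ALLOWED = {
    "cardA": [("idle", "select"), ("select", "sign"), ("sign", "done")],
    "cardB": [("idle", "select"), ("select", "verify"), ("verify", "done")],
}

def check_trace(card, trace):
    """Return the first transition in `trace` that the card's
    straight-line program does not allow, or None if the trace is safe."""
    allowed = set(ALLOWED[card])
    for step in zip(trace, trace[1:]):
        if step not in allowed:
            return step
    return None

assert check_trace("cardA", ["idle", "select", "sign", "done"]) is None
# A concurrency error: cardA receives cardB's "verify" command.
assert check_trace("cardA", ["idle", "select", "verify"]) == ("select", "verify")
```

Suppressing the returned transition, rather than forwarding it to the card, is the middleware behavior the abstract argues for.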
Pub Date: 2013-04-15. DOI: 10.1109/SysCon.2013.6549917
Kaushik Sinha, O. Weck
The complexity of today's highly engineered products is rooted in the interwoven architecture defined by their components and interactions. Such structures can be viewed through the adjacency matrix of the associated dependency network representing the product architecture. To evaluate a complex system or compare it to other systems, a numerical assessment of its structural complexity is necessary. In this paper, we develop a quantitative measure of structural complexity and apply it to real-world engineered systems such as a gas turbine engine. We observe that low topological complexity implies a centralized architecture and that complexity increases as one moves toward highly distributed architectures. We posit that development cost varies non-linearly with structural complexity. Empirical evidence of such behavior is presented from the literature, and preliminary results from simple experiments involving the assembly of simple structures strengthen our hypothesis.
Title: A network-based structural complexity metric for engineered complex systems. Published in: 2013 IEEE International Systems Conference (SysCon).
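One plausible instantiation of such a network-based metric can be sketched as follows: a score built from component complexities, interface complexities, and the graph energy of the adjacency matrix. The functional form and all numbers below are illustrative assumptions, not taken verbatim from the paper.

```python
import numpy as np

def structural_complexity(alpha, beta, A):
    """C = C1 + C2 * C3, with
    C1 = sum of component complexities alpha_i,
    C2 = sum of interface complexities beta_ij over existing interfaces,
    C3 = graph energy of the adjacency matrix A divided by n.
    One plausible form of a network-based metric; numbers are made up."""
    n = len(alpha)
    C1 = float(np.sum(alpha))
    C2 = float(np.sum(beta * A))
    # Graph energy = sum of singular values of A.
    C3 = np.linalg.svd(A, compute_uv=False).sum() / n
    return C1 + C2 * C3

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])           # hub-and-spoke (centralized) architecture
alpha = np.ones(3)                  # component complexities (illustrative)
beta = 0.5 * np.ones((3, 3))        # interface complexities (illustrative)
print(round(structural_complexity(alpha, beta, A), 3))  # 4.886
```

Adding edges to make the dependency network more distributed raises both the interface term and the graph energy, matching the abstract's observation that complexity grows toward distributed architectures.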
Pub Date: 2013-04-15. DOI: 10.1109/SysCon.2013.6549988
M. Simons, S. Stalnaker, C. Morgan
This paper presents a preliminary high-level architecture framework for Air Traffic Management (ATM) Operations in the context of the aviation transportation enterprise. The analysis examines ATM Operations as a specialized case of a logistics process and utilizes key concepts of transportation theory. The analysis used to develop the high-level architecture treats ATM Operations as a socio-technical System of Systems (SoS) in which the human is examined as an integral part of the system, together with other physical components such as automation and infrastructure. The framework is developed using a systems methodology and presents a high-level architecture of functions and data, along with a set of notional operational and service-level requirements. We use the example of ATM Operations in the United States (U.S.) to illustrate the application of the high-level framework. As more research is conducted in both breadth and depth, this high-level architecture is expected to evolve. The framework is intended to help aviation stakeholders develop new capabilities while meeting the mission needs of the organization, to support planning and coordination of systems research and development, and to serve as a basis for defining expectations between organizations.
Title: A high-level architecture framework for Air Traffic Management (ATM) Operations. Published in: 2013 IEEE International Systems Conference (SysCon).
Pub Date: 2013-04-15. DOI: 10.1109/SysCon.2013.6549854
A. Ejnioui, C. Otero, A. Qureshi
Information technology organizations increasingly have difficulty completing software projects with protected content due to a lack of qualified engineers with the proper security credentials. These organizations are turning to advanced software tools that allow them to develop software systems while protecting proprietary or classified content. Many of these systems require that a graphic user interface (GUI) be developed without accessing the protected content; properly credentialed engineers can later embed the protected content in the GUI. This paper presents a software tool, called GUI Miner, which allows users to edit the contents of GUIs without accessing the source code of the target application. The tool extracts the entire set of GUI widgets in an existing Java application and makes them available for editing. Changes made to these widgets are automatically reflected on the screen and saved to the appropriate class files by modifying their bytecode. Testing on a set of small Java applications shows that the tool works as expected without consuming excessive memory or processor resources.
Title: Engineering graphic user interfaces with protected content. Published in: 2013 IEEE International Systems Conference (SysCon).
Pub Date: 2013-04-15. DOI: 10.1109/SysCon.2013.6549953
S. Mittal, Margery J. Doyle, Eric Watz
Intelligence can be defined as an emergent property in some types of complex systems; it may arise from an agent's interactions with the environment or with other agents, either directly or indirectly through changes in the environment. Within this perspective, intelligence takes the form of an 'observer' phenomenon, externally observed at a level higher than that of the agents situated in their environment. Such emergent behavior may sometimes be reduced to the fundamental components within the system and its interacting agents, and sometimes it is a completely novel behavior requiring a new nomenclature. When emergent behavior is reducible to its parts it is considered a 'weak' form of emergence; when it cannot be reduced to its constituent parts, it is considered a 'strong' form of emergence. A differentiating factor between these two forms is the agents' use of the emergent outcomes: in weak emergence there is no causality, while in strong emergence there is causation resulting from actions based on the affordances the emergent phenomena support. Modeling a complex air combat system involves modeling agent behavior in a dynamic environment, and because humans tend to display strong emergence, the observation of emergent phenomena has to remain within the knowledge boundaries of the domain of interest so as not to warrant any new nomenclature for the computational model at the semantic level. The observed emergent phenomenon has to be semantically tagged as 'intelligent', and such knowledge resides within the bounds of the semantic domain. Therefore, observation and recognition of emergent intelligent behavior is undertaken through the development and use of an Environment Abstraction (EA) layer that semantically ensures strong emergence can be modeled within an agent-platform-system, such as Live, Virtual and Constructive (LVC) training in a Distributed Mission Operations (DMO) testbed.
In the present study, various modeling architectures capable of modeling or mimicking human-type behavior, or of eliciting an expected response from a human pilot in a training environment, are brought to bear at the semantic interoperability level using the EA layer. This article presents a high-level description of the agent-platform-system and shows how formal modeling and simulation approaches such as the Discrete Event Systems (DEVS) formalism can be used to model complex dynamical systems, capturing emergent behavior at various levels of interoperability. The ideas presented in this paper achieve integration at the syntactic level using Distributed Interactive Simulation (DIS) protocol data units and semantic interoperability through the EA layer.
Title: Detecting intelligent agent behavior with environment abstraction in complex air combat systems. Published in: 2013 IEEE International Systems Conference (SysCon).
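The DEVS mechanics mentioned above can be sketched minimally: an atomic model with an internal transition, an output function, and a time advance, driven by a toy root coordinator. The model and its 'ping' events are illustrative, not part of the paper's testbed.

```python
# A minimal DEVS-style atomic model and simulation loop. This is a toy
# illustration of the formalism's structure (internal transition, output
# function, time advance), not the paper's LVC/DMO testbed.

class Generator:
    """Atomic model that emits a 'ping' event every `period` time units."""
    def __init__(self, period):
        self.period = period
        self.count = 0
    def time_advance(self):
        return self.period
    def output(self):
        return f"ping-{self.count}"
    def internal_transition(self):
        self.count += 1

def simulate(model, until):
    """Root coordinator: advance simulated time event by event."""
    t, events = 0.0, []
    while t + model.time_advance() <= until:
        t += model.time_advance()
        events.append((t, model.output()))   # output fires, then transition
        model.internal_transition()
    return events

events = simulate(Generator(period=2.0), until=7.0)
print(events)  # [(2.0, 'ping-0'), (4.0, 'ping-1'), (6.0, 'ping-2')]
```

Event-driven time advance, rather than fixed time stepping, is what lets DEVS compose models running at very different rates, which is the interoperability property the abstract leans on.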
Pub Date : 2013-04-15DOI: 10.1109/SysCon.2013.6549916
Juan C. Avendano, L. D. Otero, P. Cosentino
Inspections of structures such as bridges, high mast lighting (HML) poles, and support poles are crucial to the maintenance and safety of transportation infrastructure. Government agencies rely on inspections to estimate the health of structures and to make decisions, such as the allocation of human resources and funds to maintain and repair the structures, that significantly affect public safety and costs. This paper describes work in progress towards the development of a highly complex system capable of assisting structural inspectors during the inspection process. The authors present the conceptual design of a complex system capable of acquiring and processing image data of structures in near real time, efficiently and cost-effectively. The completion of this highly complex system requires a robust systems engineering approach that integrates the disciplines of software engineering, mobile technology, small-scale aerial vehicles, and transportation engineering.
{"title":"Towards the development of a complex structural inspection system using small-scale aerial vehicles and image processing","authors":"Juan C. Avendano, L. D. Otero, P. Cosentino","doi":"10.1109/SysCon.2013.6549916","DOIUrl":"https://doi.org/10.1109/SysCon.2013.6549916","url":null,"abstract":"Inspections of structures such as bridges, high mast lighting (HML) poles, and support poles are crucial to the maintenance and safety of transportation infrastructure. Government agencies rely on inspections to estimate the health of structures and to make decisions, such as the allocation of human resources and funds to maintain and repair the structures, that significantly affect public safety and costs. This paper describes work in progress towards the development of a highly complex system capable of assisting structural inspectors during the inspection process. The authors present the conceptual design of a complex system capable of acquiring and processing image data of structures in near real time, efficiently and cost-effectively. The completion of this highly complex system requires a robust systems engineering approach that integrates the disciplines of software engineering, mobile technology, small-scale aerial vehicles, and transportation engineering.","PeriodicalId":218073,"journal":{"name":"2013 IEEE International Systems Conference (SysCon)","volume":"105 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128136010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
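To make the image-processing side of such an inspection system concrete, the sketch below shows one kind of step it might perform: flagging high-gradient pixels as candidate surface defects (e.g. cracks) in a grayscale image. This is a pure-Python, hypothetical illustration only; the paper does not specify its algorithms, and a real system would process aerial imagery with a vision library rather than a toy 8x8 array.

```python
def gradient_magnitude(img):
    """Approximate per-pixel gradient magnitude with central differences."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]  # horizontal difference
            gy = img[y + 1][x] - img[y - 1][x]  # vertical difference
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

def crack_mask(img, threshold=50.0):
    """Binary mask of pixels whose gradient magnitude exceeds the threshold."""
    mag = gradient_magnitude(img)
    return [[1 if v > threshold else 0 for v in row] for row in mag]

# Synthetic 8x8 "image": uniform bright background with one dark vertical
# streak in column 4 standing in for a crack.
image = [[200] * 8 for _ in range(8)]
for y in range(8):
    image[y][4] = 40

mask = crack_mask(image)
```

With this data, the pixels on either side of the dark streak show a large horizontal gradient and end up flagged in the mask, while the uniform background does not; a production pipeline would replace the threshold rule with a trained detector.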
Pub Date : 2013-04-15DOI: 10.1109/SysCon.2013.6549970
E. Iáñez, A. Úbeda, E. Hortal, J. Azorín
This work presents a study of the best combinations of mental tasks in a Brain-Computer Interface (BCI) using a classifier based on a Support Vector Machine (SVM). To that end, twelve mental tasks of different natures are analyzed, and classification results are obtained for combinations of two, three, and four tasks. Four volunteers recorded sessions for the twelve tasks. The main goal is to find the combination of more than three mental tasks that yields the highest reliability, for use in future complex applications that require more than three mental control commands. After a selection procedure, the results show high success percentages and important differences according to the nature of the mental tasks, which suggests that the proposed methodology can differentiate with sufficient reliability between more than three mental tasks.
{"title":"Mental tasks selection method for a SVM-based BCI system","authors":"E. Iáñez, A. Úbeda, E. Hortal, J. Azorín","doi":"10.1109/SysCon.2013.6549970","DOIUrl":"https://doi.org/10.1109/SysCon.2013.6549970","url":null,"abstract":"This work presents a study of the best combinations of mental tasks in a Brain-Computer Interface (BCI) using a classifier based on a Support Vector Machine (SVM). To that end, twelve mental tasks of different natures are analyzed, and classification results are obtained for combinations of two, three, and four tasks. Four volunteers recorded sessions for the twelve tasks. The main goal is to find the combination of more than three mental tasks that yields the highest reliability, for use in future complex applications that require more than three mental control commands. After a selection procedure, the results show high success percentages and important differences according to the nature of the mental tasks, which suggests that the proposed methodology can differentiate with sufficient reliability between more than three mental tasks.","PeriodicalId":218073,"journal":{"name":"2013 IEEE International Systems Conference (SysCon)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126825546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
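The combination-selection idea above can be sketched as an exhaustive search: score every k-task subset with a classifier and keep the most separable one. The sketch below is illustrative only: it uses synthetic feature vectors and a simple nearest-centroid rule as a standard-library stand-in for the authors' SVM, since the paper's EEG features and classifier settings are not available here.

```python
import itertools
import random
import statistics

random.seed(0)

# Synthetic stand-in data: 12 "mental tasks", each yielding feature vectors
# clustered around a task-specific mean.
TASKS = list(range(12))

def make_samples(task, n=20, dim=4):
    centre = [task * 1.0 + d * 0.1 for d in range(dim)]
    return [[c + random.gauss(0, 0.3) for c in centre] for _ in range(n)]

data = {t: make_samples(t) for t in TASKS}

def classify(sample, centroids):
    # Nearest-centroid decision rule (placeholder for the SVM).
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(sample, c))
    return min(centroids, key=lambda t: dist(centroids[t]))

def accuracy(combo):
    # Hold out half of each task's samples for testing.
    train = {t: data[t][:10] for t in combo}
    test = [(t, s) for t in combo for s in data[t][10:]]
    centroids = {t: [statistics.mean(col) for col in zip(*train[t])]
                 for t in combo}
    hits = sum(1 for t, s in test if classify(s, centroids) == t)
    return hits / len(test)

# Exhaustively score every 4-task combination and keep the best-separated one.
best = max(itertools.combinations(TASKS, 4), key=accuracy)
print(best, round(accuracy(best), 2))
```

In a real BCI study the inner classifier would be a trained SVM evaluated with cross-validation on recorded EEG features, and the search would compare combinations of two, three, and four tasks as the paper describes.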