Model-based approach for change propagation analysis in requirements
Pub Date: 2013-04-15 | DOI: 10.1109/SysCon.2013.6549928
Sarayut Nonsiri, E. Coatanéa, M. Bakhouya, Faisal Mokammel
Support for managing the complexity of systems engineering problems, specifically for requirements management and change, is especially necessary during the early stages of the systems engineering process. Indeed, these stages have a tremendous impact on the overall outcome of a project. If not anticipated early, changes in requirements lead to changes in the design and in the later implementation stages, resulting in unexpected increases in cost (monetary, time, etc.). The framework proposed in this article for requirements change prediction consists of a three-step process. First, requirements are modeled in SysML with predefined relationships. Second, all relationships between requirements in the SysML model are transformed into an adjacency matrix, also known as a Dependency Structure Matrix (DSM). A higher-order DSM is then computed; this matrix-based methodology supports predicting which requirements will be affected by a change to a specific requirement. Third, the change propagation path is visualized. Using this framework, it is possible to predict how changes may propagate through the requirements, and also to identify requirements that can be reused. This can help save time and cost when developing a new system.
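To make the matrix step concrete, the sketch below computes a higher-order DSM as a Boolean reachability matrix over a toy requirement graph. The requirement names and links are hypothetical; the paper's actual SysML-to-DSM transformation and visualization are not reproduced here.

```python
import numpy as np

# Hypothetical requirements and directed "change propagates to" links
# (an edge i -> j means a change in i may propagate to j).
reqs = ["R1", "R2", "R3", "R4"]
links = [("R1", "R2"), ("R2", "R3"), ("R1", "R4")]

n = len(reqs)
idx = {r: k for k, r in enumerate(reqs)}
dsm = np.zeros((n, n), dtype=int)          # adjacency matrix / DSM
for src, dst in links:
    dsm[idx[src], idx[dst]] = 1

# Higher-order DSM: the union of matrix powers up to n-1 steps gives
# the transitive reach of a change (a Boolean reachability matrix).
reach = np.zeros_like(dsm)
power = np.eye(n, dtype=int)
for _ in range(n - 1):
    power = (power @ dsm > 0).astype(int)
    reach |= power

def affected(req):
    """Requirements potentially impacted by a change in `req`."""
    return [reqs[j] for j in range(n) if reach[idx[req], j]]

print(affected("R1"))   # ['R2', 'R3', 'R4'] -> R3 reached via R2
```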
{"title":"Model-based approach for change propagation analysis in requirements","authors":"Sarayut Nonsiri, E. Coatanéa, M. Bakhouya, Faisal Mokammel","doi":"10.1109/SysCon.2013.6549928","DOIUrl":"https://doi.org/10.1109/SysCon.2013.6549928","url":null,"abstract":"The need for support related to the complexity management of systems engineering problems, specifically for requirements management and changes is especially necessary during the early stages of the systems engineering process. Indeed, these stages have a tremendous impact on the overall outcome of a project. If not anticipated at early stages, changes in requirements are leading to changes in the design and in the later implementation stages, resulting in an unexpected increase in costs (monetary, time, etc.). The framework proposed in this article for requirements change prediction consists of a three steps process. First, requirements are modeled using SysML with predefined relationships. Second, all the relationships between requirements in the SysML model are transformed into an adjacency matrix also named DSM. A higher order Dependency Structure Matrix is applied; this matrix-based methodology allows support in the prediction of which requirements will be affected after a change in a specific requirement. Third, the change propagation path is visualized. Using this framework, it is possible to predict the possible propagation of changes in requirements. In addition, it is also possible to identify the requirements that can be reused. This can help to save the time and cost for developing a new system.","PeriodicalId":218073,"journal":{"name":"2013 IEEE International Systems Conference (SysCon)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133306026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling communication and estimation processes of automated crash avoidance systems
Pub Date: 2013-04-15 | DOI: 10.1109/SysCon.2013.6549956
Ehsan Moradi-Pari, Amin Tahmasbi-Sarvestani, Y. P. Fallah
We present a novel approach to modeling the combined estimation and networking processes of automated crash/collision avoidance systems (ACAS). The estimation and networking processes are two necessary components of the system's real-time situation awareness. Existing models for these two components are mostly based on stochastic methods, describing each component separately and in abstract probabilistic terms; such models lose useful detail. In our recent work we presented extended stochastic models using discrete-time Markov chains for the networking component and empirical statistical models for the estimation process. Although these models led to significantly improved designs for the situation awareness component of ACAS, the extent of the improvement was limited, because stochastic models cannot fully describe a system that inherently has many deterministic features. In this paper we advance the modeling of ACAS (and similar systems) by developing a method to model the communication component with probabilistic timed automata, together with a hybrid automaton that combines and models the entire system (both the estimation and communication/networking processes). This paper presents the new model and verifies it using simulations.
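The kind of discrete-time Markov chain mentioned for the networking component can be illustrated with a generic two-state (Gilbert-style) packet channel; the transition probabilities below are invented for illustration, not the paper's fitted model.

```python
import numpy as np

# Two-state Gilbert channel as a discrete-time Markov chain:
# state 0 = "good" (packet received), state 1 = "bad" (packet lost).
P = np.array([[0.95, 0.05],    # good -> good, good -> bad
              [0.30, 0.70]])   # bad  -> good, bad  -> bad

rng = np.random.default_rng(0)

def simulate(steps, state=0):
    """Simulate packet reception over `steps` broadcast intervals."""
    received = []
    for _ in range(steps):
        received.append(state == 0)
        state = rng.choice(2, p=P[state])
    return received

trace = simulate(1000)
# The delivery ratio (and the gaps between received updates) would feed
# the estimation component's view of situation-awareness quality.
print("delivery ratio:", sum(trace) / len(trace))
```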
{"title":"Modeling communication and estimation processes of automated crash avoidance systems","authors":"Ehsan Moradi-Pari, Amin Tahmasbi-Sarvestani, Y. P. Fallah","doi":"10.1109/SysCon.2013.6549956","DOIUrl":"https://doi.org/10.1109/SysCon.2013.6549956","url":null,"abstract":"We present a novel approach to modeling the combined estimation and networking processes of automated crash/collision avoidance systems (ACAS). The estimation and networking processes are two necessary components of the real-time situation awareness component of the system. The existing models for these two components are mostly based on stochastic modeling methods, describing each component separately and in abstract probabilistic terms. Such modeling methods lead to the loss of useful details. In our recent work we presented extended stochastic models using discrete-time Markov chains for the networking component and empirical statistical models for the estimation process. Although these models led to significantly improved designs for the situation awareness component of ACAS, it was observed that the extent of the improvement was limited. The limitation is due the fact that stochastic models are limited in describing the system which inherently has many deterministic features. In this paper we attempt to advance the approach to modeling the ACAS systems (and other similar systems) through developing a method to model the communication component based on Probabilistic Timed automata and also a Hybrid automata to combine and model the entire system (both estimation and communication/networking processes). This paper presents the new model and verifies it using simulations.","PeriodicalId":218073,"journal":{"name":"2013 IEEE International Systems Conference (SysCon)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131986816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scenario-driven architecture assessment methodology for large data analysis systems
Pub Date: 2013-04-15 | DOI: 10.1109/SysCon.2013.6549857
Edmon Begoli, Theodore F. Chila, W. Inmon
The methodology we present in this paper emerged from a technical and organizational assessment we conducted for a large data analytics system and for its expansion to support a significant new mission in the healthcare domain. We developed a 4+1-dimensional approach for examining the different characteristics of a system: four traditional dimensions plus a fifth, scenario-based dimension introduced as a device for exploring the entire system in its business context. We present the principles, guidelines, and structure of the methodology, as well as the results of applying this process, leading to a credible evaluation that assesses current large data analysis systems better than the previous, purely static assessment.
{"title":"Scenario-driven architecture assessment methodology for large data analysis systems","authors":"Edmon Begoli, Theodore F. Chila, W. Inmon","doi":"10.1109/SysCon.2013.6549857","DOIUrl":"https://doi.org/10.1109/SysCon.2013.6549857","url":null,"abstract":"The methodology we present in this paper emerged as a result of the technical and organizational assessment we conducted for a large data analytic system and for its expansion to support a significant new mission in healthcare domain. We developed a 4+1 dimensional approach for examining the different characteristics of a system with four traditional dimensions and a fifth, scenarios-based, dimension, introduced as an exploration device of the entire system in its business context. We present the principles, guidelines, and structure of the methodology as well as the results of the application of this process leading to a credible evaluation that better assesses current large data analysis systems than the previous, purely static assessment.","PeriodicalId":218073,"journal":{"name":"2013 IEEE International Systems Conference (SysCon)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125549409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-criteria simulation of program outcomes
Pub Date: 2013-04-15 | DOI: 10.1109/SysCon.2013.6549885
D. Bodner, J. Wade
Programs that develop and deploy complex systems typically have multiple criteria by which they are judged to be successful. Categories of such criteria include schedule, cost, technical system performance, quality, and customer expectations. Criteria are operationalized via particular metrics, and there are often complex relationships between metrics, e.g., correlations or trade-offs. In an acquisition program, it is critical that systems engineers understand the implications of their actions and decisions with respect to these metrics, since the metrics are used to report the performance and eventual outcome of the program. However, such understanding usually takes many years of on-the-job experience. This paper describes an approach to simulation modeling of program behavior and performance whereby program outputs, expressed in these metrics, can be studied by systems engineers. An example program simulation model is presented that is currently used in an educational technology system for training systems engineers. The decisions and actions that a systems engineer can take are described, and the impacts of various actions and decisions on program metrics and metric relationships are illustrated. The model is validated by subject matter experts with extensive experience in the program domain.
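As a hedged illustration of such a program model, the toy Monte Carlo below samples two coupled metrics (schedule, and cost via defects) under a single decision. The distributions and coupling constants are invented and far simpler than the paper's model.

```python
import random

random.seed(42)

def simulate_program(extra_testing=False, runs=10_000):
    """Toy Monte Carlo of program outcomes under one decision.

    Distributions and coupling constants are illustrative only;
    the paper's model is far richer (quality, expectations, ...).
    """
    results = []
    for _ in range(runs):
        schedule = random.gauss(24, 3)            # months
        defects = max(0.0, random.gauss(50, 10))  # latent defects
        if extra_testing:                         # decision under study
            schedule += 2                         # schedule slips...
            defects *= 0.6                        # ...but quality improves
        cost = 1.0 * schedule + 0.05 * defects    # $M, coupled metrics
        results.append((schedule, cost, defects))
    n = len(results)
    return tuple(sum(x) / n for x in zip(*results))

print("baseline      :", simulate_program(False))  # (schedule, cost, defects)
print("extra testing :", simulate_program(True))
```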
{"title":"Multi-criteria simulation of program outcomes","authors":"D. Bodner, J. Wade","doi":"10.1109/SysCon.2013.6549885","DOIUrl":"https://doi.org/10.1109/SysCon.2013.6549885","url":null,"abstract":"Programs that develop and deploy complex systems typically have multiple criteria by which they are judged to be successful. Categories of such criteria include schedule, cost, technical system performance, quality and customer expectations. Criteria are operationalized via particular metrics, and often there are complex relationships between metrics, e.g., correlations or trade-offs. In an acquisition program, it is critical that systems engineers understand the implications of their actions and decisions with respect to these metrics, since the metrics are used to report the performance and eventual outcome of the program. However, such understanding usually takes many years of on-the-job experience. This paper describes an approach to simulation modeling of program behavior and performance whereby program outputs expressed in these metrics can be studied by systems engineers. An example program simulation model is presented that currently is used in an educational technology system for training systems engineers. The decisions and actions that can be taken by a systems engineer are described, and the impacts of various actions and decisions on program metrics and metric relationships are illustrated. The model is validated via subject matter experts with extensive experience in the program domain.","PeriodicalId":218073,"journal":{"name":"2013 IEEE International Systems Conference (SysCon)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125296001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A lightweight integration of mutation analysis with the model checker for system safety verification
Pub Date: 2013-04-15 | DOI: 10.1109/SysCon.2013.6549862
J. Jayanthi, Manju Nanda, Sameer Nayak
Evaluation of sophisticated safety-critical systems is a tedious and time-consuming process. It has traditionally been undertaken manually, by experts and individuals, which increases the risk of human error and of ambiguities in understanding. We introduce an approach that integrates mutation analysis with model checking: mutation analysis cuts down the possibility of human error, while the model checker analyses and verifies the semantics of the safety-critical system, thereby reducing both human and semantic errors to a considerable extent.
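A minimal sketch of the combined idea, on a toy guard function: mutants are generated by simple operator replacement, and an exhaustive property check stands in for the model checker. The guard, the property, and the mutation operators are all hypothetical.

```python
import itertools

# Toy safety-critical guard: an interlock fires only when both sensors agree.
def guard(a, b):
    return a and b

# Mutants produced by simple operator-replacement rules.
mutants = {
    "and->or":  lambda a, b: a or b,
    "negate-a": lambda a, b: (not a) and b,
    "const-T":  lambda a, b: True,
}

# Stand-in for a model checker: exhaustively verify the safety property
# "the guard never fires while sensor `a` is off" over the input space.
def holds(f):
    return all(not f(a, b)
               for a, b in itertools.product([False, True], repeat=2)
               if not a)

print("original satisfies property:", holds(guard))
for name, m in mutants.items():
    # A surviving (un-killed) mutant points at a weak property or an
    # ambiguous requirement, which is where human review should focus.
    print(f"mutant {name:9s} killed: {not holds(m)}")
```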
{"title":"A lightweight integration of mutation analysis with the model checker for system safety verification","authors":"J. Jayanthi, Manju Nanda, Sameer Nayak","doi":"10.1109/SysCon.2013.6549862","DOIUrl":"https://doi.org/10.1109/SysCon.2013.6549862","url":null,"abstract":"Evaluation of the sophisticated safety critical systems is a tedious and time consuming process. This process has been undertaken manually, by experts and individuals, which increase the risk of human errors and ambiguities in understanding. We introduce an approach by integrating the mutation analysis and model checking. The mutation analysis cuts down the possibility of human errors, whereas model checker analyses and verifies the semantics of the safety critical system, thereby reducing both human and semantic errors to a considerable extent.","PeriodicalId":218073,"journal":{"name":"2013 IEEE International Systems Conference (SysCon)","volume":"2 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131898442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cyber-physical framework for early integration of autonomous maritime capabilities
Pub Date: 2013-04-15 | DOI: 10.1109/SysCon.2013.6549937
C. Insaurralde, Y. Pétillot
The increasing demand for more self-governed assistance in maritime activities is leading ocean engineering research projects to combine diverse autonomous capabilities developed at geographically dispersed locations. The availability of such capabilities, as well as of experiments in water, is a critical issue that can seriously impact project milestones. The ability to verify and validate maritime capabilities at an initial integration stage, while they are still physically located at the partner's site, can significantly reduce costs and risks. This paper proposes an early integration framework for autonomous capabilities of maritime vehicles by means of system and context simulation (including emulation of maritime vehicles and the operational environment). This makes the interaction between computational and physical processes crucial, as in cyber-physical systems. The proposed framework allows project partners to pre-verify and pre-validate requirements before the system is physically integrated. This paper presents a review of the research context and of the autonomous maritime capabilities to be integrated. An illustrative case study of simulation and trials carried out on cooperative maritime navigation is also presented.
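One way to read the early-integration idea is as interface substitution: the test bench exercises a common vehicle interface, with an emulation standing in for remote hardware. The sketch below is a hypothetical illustration of that pattern, not the paper's framework; all names are invented.

```python
from abc import ABC, abstractmethod

class MaritimeVehicle(ABC):
    """Common interface: the integration test bench talks to this, whether
    the capability is real hardware at a partner site or a local emulation."""
    @abstractmethod
    def position(self) -> tuple[float, float]: ...
    @abstractmethod
    def goto(self, lat: float, lon: float) -> None: ...

class EmulatedVehicle(MaritimeVehicle):
    """Simplified stand-in for a remote autonomous vehicle."""
    def __init__(self, lat=0.0, lon=0.0):
        self._pos = (lat, lon)
    def position(self):
        return self._pos
    def goto(self, lat, lon):
        self._pos = (lat, lon)   # teleport: enough to pre-verify mission
                                 # logic, deliberately not hydrodynamics

def pre_verify(vehicle: MaritimeVehicle):
    """A requirement check runnable before physical integration."""
    vehicle.goto(56.0, -3.0)
    assert vehicle.position() == (56.0, -3.0), "waypoint requirement failed"

pre_verify(EmulatedVehicle())
print("pre-verification passed with emulated vehicle")
```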
{"title":"Cyber-physical framework for early integration of autonomous maritime capabilities","authors":"C. Insaurralde, Y. Pétillot","doi":"10.1109/SysCon.2013.6549937","DOIUrl":"https://doi.org/10.1109/SysCon.2013.6549937","url":null,"abstract":"The increasing demand for more self-governed assistance in maritime activities is leading ocean engineering research projects to combine diverse autonomous capabilities developed at geographically-dispersed locations. Availability of such capabilities as well as experiments in water is a critical issue that can seriously impact on the project milestones. The ability to perform verification and validation at an initial integration stage of maritime capabilities while they are still physically located at the partner's site can reduce significantly costs and risks. This paper proposes an early integration framework for autonomous capabilities of maritime vehicles by means of system and context simulation (including emulation of maritime vehicles and operational environment). This makes interaction between the computational and physical process become crucial as in cyber-physical systems. The framework proposed allows project patterns to pre-verify and pre-validate requirements before the system is physically integrated. This paper presents a review of the research context, and the autonomous maritime capabilities to be integrated. An illustrative case study of simulation and trials carried out on cooperative maritime navigation is also presented.","PeriodicalId":218073,"journal":{"name":"2013 IEEE International Systems Conference (SysCon)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115738495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling the impact of maintenance on naval fleet total ownership cost
Pub Date: 2013-04-15 | DOI: 10.1109/SysCon.2013.6549975
Karen B. Marais, Jessica Rivas, Isaac J. Tetzloff, W. Crossley
The US Navy is making a concerted effort to use total ownership cost (TOC) as a metric for decision-making about the various systems needed to perform the Navy's missions. System total ownership cost seeks to combine acquisition costs, operating costs, maintenance costs, and manpower costs (both staffing and training) over the lifecycle of the system. This paper presents initial efforts to consider deferred maintenance and its impact on TOC for long-lived systems, such as the DDG-51 class destroyers. Near-term cost pressures often result in decisions that defer maintenance to a later time than scheduled, or well after first notice of a maintenance need. Deferring maintenance postpones the cost of performing it, saving money in the short term, but the choice to defer may also move the system to a state of further degradation. If so, the later maintenance tasks needed to restore the ship's capability or reliability may become more costly. While these trade-offs are conceptually well understood, they have not been adequately quantified to allow decision makers to make the best decisions when funds are constrained. One reason such quantification has been lacking is that the necessary data is often not available. This paper presents initial work aimed at using data recorded by the Navy to construct a model that could allow for quantitative decision support. The principal challenge is that most of the recorded data is at the system level, implying that the ship must be modeled as a single unit. This assumption results in an underestimation of the reliability impact of deferring corrective maintenance. Our results show that, given the available data, a stochastic renewal process can model the Arleigh Burke (DDG-51) class guided-missile destroyers, implying that the ship returns to a “like new” condition following successful maintenance. The stochastic renewal process model provides a first step toward using reported data to develop a model of delayed maintenance and its effect on TOC.
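The renewal assumption can be illustrated with a toy Monte Carlo in which failures arrive exponentially, each repair renews the system to "like new", and deferral inflates repair cost; every parameter below is invented, not fitted to DDG-51 data.

```python
import random

random.seed(1)

def lifecycle_cost(defer_months=0, horizon=360, mtbf=48,
                   base_cost=1.0, degradation=0.03, runs=5000):
    """Renewal-process toy model over a 30-year (360-month) horizon.

    Failures arrive with exponential inter-arrival times (mean `mtbf`);
    each completed repair renews the ship to 'like new'.  Deferring a
    repair by `defer_months` inflates its cost by a degradation factor.
    """
    total = 0.0
    for _ in range(runs):
        t, cost = 0.0, 0.0
        while True:
            t += random.expovariate(1 / mtbf)   # time to next failure
            t += defer_months                   # maintenance is postponed
            if t > horizon:
                break
            cost += base_cost * (1 + degradation) ** defer_months
        total += cost
    return total / runs

# Deferral means fewer, but costlier, repairs inside the horizon.
print("repair on notice :", round(lifecycle_cost(0), 2))
print("defer 12 months  :", round(lifecycle_cost(12), 2))
```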
{"title":"Modeling the impact of maintenance on naval fleet total ownership cost","authors":"Karen B. Marais, Jessica Rivas, Isaac J. Tetzloff, W. Crossley","doi":"10.1109/SysCon.2013.6549975","DOIUrl":"https://doi.org/10.1109/SysCon.2013.6549975","url":null,"abstract":"The US Navy is making a concerted effort to use total ownership cost (TOC) as a metric for decision-making about the various systems needed to perform the Navy's missions. System Total Ownership Cost seeks to combine aspects related to acquisition costs, operating costs, maintenance costs, and manpower costs (both staffing and training) over the lifecycle of the system. Here, this paper presents initial efforts to consider deferred maintenance and its impact on TOC for long-lived systems, like the DDG-51 class destroyers. Near-term cost pressures often result in decisions that defer maintenance to a later time than scheduled or well after first notice of a maintenance need. Deferring maintenance allows the costs of performing maintenance to be postponed, saving short term costs, but the choice to defer maintenance may also result in the system moving to a state of further degradation. If this is true, later maintenance tasks needed to restore the ship's capability or reliability may become more costly. While these trade-offs are conceptually well understood, they have not been adequately quantified to allow decision makers to make the best decisions when funds are constrained. One reason such quantification has been lacking is that the necessary data is often not available. This paper presents initial work aimed at using data recorded by the Navy to construct a model that could allow for quantitative decision support. The principal challenge is that most of the recorded data is at the system level, implying that the ship must be modeled as a single unit. This assumption results in an underestimation of the impact on reliability of deferring corrective maintenance. Our results show that given the data available, a stochastic renewal process can model the Arleigh Burke (DDG-51) class guided-missile destroyers, implying that the ship returns to a “like new” condition following successful maintenance. The stochastic renewal process model provides a first step in using reported data to develop a model of delayed maintenance and its effect on TOC.","PeriodicalId":218073,"journal":{"name":"2013 IEEE International Systems Conference (SysCon)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116244153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using data envelopment analysis for supplier evaluation with environmental considerations
Pub Date: 2013-04-15 | DOI: 10.1109/SysCon.2013.6549852
Sun Zhe, T. Wong, L. Lee
With the proliferation of outsourcing in the global marketplace, supplier selection has become a key strategic consideration in forming a competitive supply chain. Supplier selection is recognized as a multi-criteria decision-making problem in which suppliers are evaluated simultaneously against multiple criteria such as price, quality, delivery, and service. Facing excessive pressure from governments and customers, an increasing number of companies are beginning to consider environmental issues in the procurement and supplier selection process in pursuit of sustainable development. It is therefore necessary to measure a supplier's environmental performance. This paper aims to identify which environmental criteria can be applied to assess suppliers' overall performance. The multi-criteria decision-making approach data envelopment analysis (DEA) is applied to help companies evaluate suppliers' various environmental performances and other capabilities simultaneously.
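For reference, the standard input-oriented CCR multiplier model behind such a DEA evaluation reduces to a small linear program per supplier; the data below (cost and CO2 as inputs, quality and on-time delivery as outputs) are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative supplier data: one row per supplier (DMU).
X = np.array([[100., 30.], [ 80., 45.], [120., 20.]])   # inputs: cost, CO2
Y = np.array([[ 90., 95.], [ 85., 90.], [ 95., 98.]])   # outputs: quality, on-time %

def ccr_efficiency(o):
    """Input-oriented CCR model (multiplier form) for supplier `o`:
    max u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0 for all j,  u, v >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])          # minimize -u.y_o
    A_ub = np.hstack([Y, -X])                         # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]  # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
    return -res.fun                                   # efficiency in (0, 1]

for o in range(len(X)):
    print(f"supplier {o}: efficiency = {ccr_efficiency(o):.3f}")
```

DEA's appeal here is that each supplier is allowed its own most favorable weights, so no a priori weighting of environmental versus conventional criteria is needed.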
{"title":"Using data envelopment analysis for supplier evaluation with environmental considerations","authors":"Sun Zhe, T. Wong, L. Lee","doi":"10.1109/SysCon.2013.6549852","DOIUrl":"https://doi.org/10.1109/SysCon.2013.6549852","url":null,"abstract":"With the proliferation of outsourcing in global market place, supplier selection has become a key strategic consideration in forming a competitive supply chain. Supplier selection has been recognized as a multi-criteria decision making problem in which suppliers are evaluated according to multiple criteria such as price, quality, delivery and service simultaneously. Facing with excessive pressures from government and customers, increasing number of companies are beginning to consider environmental issues in the procurement and supplier selection process to practice the sustainable development. It is therefore necessary to measure a supplier's environmental performance. This paper aims to find out what kind of environmental criteria can be applied to assess suppliers overall performances. The multi-criteria decision making approach data envelopment analysis (DEA) is applied to help companies to evaluate suppliers' various environmental performance and other capabilities simultaneously.","PeriodicalId":218073,"journal":{"name":"2013 IEEE International Systems Conference (SysCon)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122254113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Composable, distributed system to derive actionable mission information from intelligence, surveillance, and reconnaissance (ISR) data
Pub Date: 2013-04-15 | DOI: 10.1109/SysCon.2013.6549923
P. Hershey, Mu-Cheng Wang
Advances in sensor observation and collection technology, coupled with the use of multiple sensors distributed amongst multiple transport platforms to support a single mission, create the potentially crippling problem of data overload for DoD operators and imagery and video analysts as they strive to provide processing, exploitation, and dissemination (PED) to the end-user (i.e., the warfighter). These operators and analysts depend on sensor data, specifically Intelligence, Surveillance, and Reconnaissance (ISR) data, to perform time-critical mission functions such as target engagement. However, in many cases the data they receive are not relevant to the mission, and they waste valuable time and resources sifting through thousands of still images and hours of video footage while gaining very little useful information that can be turned into actionable intelligence. The DoD seeks a solution that will reduce this vast amount of ISR data to actionable ISR information. The Multifactor Information Distributed Analytics Technology Aide (MiData) applies systems engineering and architectural principles to solve this challenge in a novel way. MiData is a composable system comprising independent factors integrated to perform critical processing functions that autonomously transform data into information. These factors may be distributed between mission segments located in the air, on the ground, or at sea, based on mission requirements and PED resource availability. This distribution enables resource reduction, productivity increases for operators and analysts across the ISR mission segments, and response-time improvements to meet the time-critical needs of end-users.
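The composable-factors idea can be sketched as a pipeline of independent processing functions that successively reduce raw records to actionable ones; the factor names and record layout below are hypothetical, since the abstract does not publish MiData's interfaces.

```python
from typing import Callable, Iterable

# Hypothetical ISR record: (sensor_id, region, confidence, label)
Record = tuple[str, str, float, str]
Factor = Callable[[Iterable[Record]], Iterable[Record]]

def mission_relevance(region: str) -> Factor:
    """Factor: drop data outside the tasked mission region."""
    return lambda recs: (r for r in recs if r[1] == region)

def confidence_floor(threshold: float) -> Factor:
    """Factor: drop low-confidence detections."""
    return lambda recs: (r for r in recs if r[2] >= threshold)

def compose(*factors: Factor) -> Factor:
    """Chain independent factors into one processing function; each
    factor could, in principle, run on a different mission segment."""
    def pipeline(recs):
        for f in factors:
            recs = f(recs)
        return recs
    return pipeline

raw = [("uav1", "A", 0.9, "vehicle"),
       ("uav2", "B", 0.8, "vehicle"),
       ("uav1", "A", 0.3, "unknown")]
ped = compose(mission_relevance("A"), confidence_floor(0.5))
print(list(ped(raw)))   # only the actionable record survives
```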
{"title":"Composable, distributed system to derive actionable mission information from intelligence, surveillance, and reconnaissance (ISR) data","authors":"P. Hershey, Mu-Cheng Wang","doi":"10.1109/SysCon.2013.6549923","DOIUrl":"https://doi.org/10.1109/SysCon.2013.6549923","url":null,"abstract":"Advances in sensor observation and collection technology, coupled with the use of multiple sensors distributed amongst multiple transport platforms to support a single mission; create the potentially crippling problem of data overload for DoD operators and imagery and video analysts as they strive to provide processing, exploitation, and dissemination (PED) to the end-user (i.e., warfighter). These operators and analysts depend on sensor data, specifically Intelligence, Surveillance and Reconnaissance (ISR) data, to perform time-critical mission functions such as target engagement. However, in many cases, the data they receive are not relevant to the mission, and they waste valuable time and resources sifting through thousands of still imagery and video footage gaining very little useful information that can be turned into actionable intelligence. The DoD seeks a solution to this problem that will reduce the vast amount of ISR data to actionable ISR information. The Multifactor Information Distributed Analytics Technology Aide (MiData) applies systems engineering and architectural principles to solve this challenge in a novel way. MiData is a composable system comprising independent factors integrated to perform critical processing functions that autonomously transform data to information. These factors may be distributed between mission segments located in the air, on the ground, or in the sea based on the mission requirements and PED resource availability. This distribution enables resource reduction, operator and analysts' productivity increases across the ISR mission segments, and response-time improvements to meet time-critical needs of end-users.","PeriodicalId":218073,"journal":{"name":"2013 IEEE International Systems Conference (SysCon)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130833707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Linking FMI-based components with discrete event systems
Pub Date: 2013-04-15 | DOI: 10.1109/SysCon.2013.6549955
W. Müller, E. Widl
The simulation of cyber-physical systems involves modular heterogeneous systems. When continuous subsystems are embedded in a discrete event system, the classic approach has the different subsystems share the same communication points and wait for each other. The approach presented in this paper instead uses predictions for every single continuous subsystem, so that the continuous subsystems can be used as discrete components in a discrete event system. As a proof of concept, this approach is implemented with FMUs (Functional Mock-up Units) generated with OpenModelica and the Discrete Event domain of Ptolemy II. A model is implemented in this environment and compared with another implementation that uses only Ptolemy II. The results show that the presented approach scales better and runs faster than the pure Ptolemy II approach.
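A minimal sketch of the prediction idea, assuming a stand-in "FMU" whose next threshold crossing is computable in closed form: instead of lock-stepping all subsystems at fixed communication points, the discrete event scheduler synchronizes only at each subsystem's predicted event time. This illustrates the concept, not the FMI/Ptolemy II implementation.

```python
import heapq

class ContinuousTank:
    """Stand-in for an FMU: a linearly draining tank, so the time of the
    next threshold crossing can be predicted in closed form."""
    def __init__(self, level=10.0, rate=-1.0, threshold=5.0):
        self.level, self.rate, self.threshold = level, rate, threshold
        self.t = 0.0
    def predict_next_event(self):
        """Predicted time at which the level crosses the threshold."""
        return self.t + (self.threshold - self.level) / self.rate
    def advance(self, t):
        """Integrate state forward to time t (exact for linear dynamics)."""
        self.level += self.rate * (t - self.t)
        self.t = t

# Discrete event loop: the continuous subsystem participates purely
# through its predicted event, like any other discrete component.
tank = ContinuousTank()
events = [(tank.predict_next_event(), "tank-low", tank)]
heapq.heapify(events)

while events:
    t, name, model = heapq.heappop(events)
    model.advance(t)
    print(f"t={t:.1f}: event '{name}', level={model.level:.1f}")
    break   # a full scheduler would fire reactions and re-predict here
```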
{"title":"Linking FMI-based components with discrete event systems","authors":"W. Müller, E. Widl","doi":"10.1109/SysCon.2013.6549955","DOIUrl":"https://doi.org/10.1109/SysCon.2013.6549955","url":null,"abstract":"The simulation of cyber-physical systems involves modular heterogeneous systems. When embedding continuous subsystems in a discrete event system, in a classic approach the different subsystems use the same communication points and wait for each other. The approach presented in this paper uses predictions for every single continuous subsystem. That way the continuous subsystems can be used as discrete components in a discrete event system. This concept is implemented with FMUs (Functional Mock-up Units) generated with OpenModelica and the Discrete Event domain of Ptolemy II as a proof of concept. A model is implemented using this environment and compared to another implementation that uses only Ptolemy II. The results show the better scalability and shorter runtime of the presented approach compared to the pure Ptolemy II approach.","PeriodicalId":218073,"journal":{"name":"2013 IEEE International Systems Conference (SysCon)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130183034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}