Model-based approach for change propagation analysis in requirements
Sarayut Nonsiri, E. Coatanéa, M. Bakhouya, Faisal Mokammel
2013 IEEE International Systems Conference (SysCon) | Pub Date: 2013-04-15 | DOI: 10.1109/SysCon.2013.6549928
Support for managing the complexity of systems engineering problems, specifically requirements management and change, is especially necessary during the early stages of the systems engineering process. Indeed, these stages have a tremendous impact on the overall outcome of a project. If not anticipated early, changes in requirements lead to changes in the design and in later implementation stages, resulting in unexpected increases in cost (monetary, time, etc.). The framework proposed in this article for requirements change prediction is a three-step process. First, requirements are modeled in SysML with predefined relationships. Second, all relationships between requirements in the SysML model are transformed into an adjacency matrix, also called a Dependency Structure Matrix (DSM). A higher-order DSM is then applied; this matrix-based methodology supports prediction of which requirements will be affected by a change to a specific requirement. Third, the change propagation path is visualized. Using this framework, it is possible to predict the likely propagation of requirement changes and to identify requirements that can be reused, which can save time and cost when developing a new system.
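The DSM step described above lends itself to a short sketch: treating the requirement relationships as an adjacency matrix, a change to one requirement propagates to every requirement reachable through dependency links (the transitive, higher-order effects that powers of the matrix capture). The matrix and requirement indices below are hypothetical, not taken from the paper.

```python
def affected_requirements(dsm, changed):
    """Return the indices of all requirements reachable from `changed`.

    dsm[i][j] == 1 means a change in requirement i propagates to
    requirement j; following links transitively gives the higher-order
    effects (the information carried by powers of the adjacency matrix).
    """
    affected, frontier = set(), [changed]
    while frontier:
        i = frontier.pop()
        for j, linked in enumerate(dsm[i]):
            if linked and j not in affected:
                affected.add(j)
                frontier.append(j)
    return affected

# Illustrative dependencies: R0 -> R1 -> R2 and R0 -> R3.
dsm = [
    [0, 1, 0, 1],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
print(sorted(affected_requirements(dsm, 0)))  # [1, 2, 3]
```

A change to R0 reaches R2 only through R1, which is exactly the higher-order effect a first-order DSM lookup would miss.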
Modeling communication and estimation processes of automated crash avoidance systems
Ehsan Moradi-Pari, Amin Tahmasbi-Sarvestani, Y. P. Fallah
2013 IEEE International Systems Conference (SysCon) | Pub Date: 2013-04-15 | DOI: 10.1109/SysCon.2013.6549956
We present a novel approach to modeling the combined estimation and networking processes of automated crash/collision avoidance systems (ACAS). The estimation and networking processes are two necessary parts of the system's real-time situation awareness component. Existing models for these two components are mostly based on stochastic modeling methods, describing each component separately and in abstract probabilistic terms; such methods lose useful detail. In our recent work we presented extended stochastic models using discrete-time Markov chains for the networking component and empirical statistical models for the estimation process. Although these models led to significantly improved designs for the situation awareness component of ACAS, the extent of the improvement was limited, because stochastic models are limited in describing a system that inherently has many deterministic features. In this paper we advance the modeling of ACAS (and similar systems) by developing a method to model the communication component with probabilistic timed automata, together with a hybrid automaton that combines and models the entire system (both the estimation and the communication/networking processes). The paper presents the new model and verifies it using simulations.
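For context on the discrete-time Markov chain models the abstract mentions, a chain's long-run behavior can be approximated by power iteration on its transition matrix. The two-state chain below (packet received vs. packet lost, with bursty losses) is purely illustrative; the states and probabilities are assumptions, not the paper's model.

```python
def steady_state(P, iters=2000):
    """Approximate the stationary distribution of a discrete-time Markov
    chain by repeatedly multiplying a row vector by transition matrix P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# State 0 = packet received, state 1 = packet lost (bursty-loss channel).
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = steady_state(P)
print([round(p, 4) for p in pi])  # [0.8333, 0.1667]
```

The stationary distribution gives the long-run fraction of time in each channel state, which is the kind of abstract probabilistic summary the paper argues loses the system's deterministic detail.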
Cyber-physical framework for early integration of autonomous maritime capabilities
C. Insaurralde, Y. Pétillot
2013 IEEE International Systems Conference (SysCon) | Pub Date: 2013-04-15 | DOI: 10.1109/SysCon.2013.6549937
The increasing demand for more self-governed assistance in maritime activities is leading ocean engineering research projects to combine diverse autonomous capabilities developed at geographically dispersed locations. The availability of such capabilities, as well as of experiments in water, is a critical issue that can seriously impact project milestones. The ability to verify and validate maritime capabilities at an initial integration stage, while they are still physically located at each partner's site, can significantly reduce costs and risks. This paper proposes an early integration framework for autonomous capabilities of maritime vehicles by means of system and context simulation (including emulation of maritime vehicles and the operational environment). The interaction between computational and physical processes thereby becomes crucial, as in cyber-physical systems. The proposed framework allows project partners to pre-verify and pre-validate requirements before the system is physically integrated. The paper reviews the research context and the autonomous maritime capabilities to be integrated, and presents an illustrative case study of simulation and trials carried out on cooperative maritime navigation.
Modeling the impact of maintenance on naval fleet total ownership cost
Karen B. Marais, Jessica Rivas, Isaac J. Tetzloff, W. Crossley
2013 IEEE International Systems Conference (SysCon) | Pub Date: 2013-04-15 | DOI: 10.1109/SysCon.2013.6549975
The US Navy is making a concerted effort to use total ownership cost (TOC) as a metric for decision-making about the various systems needed to perform the Navy's missions. System total ownership cost combines acquisition costs, operating costs, maintenance costs, and manpower costs (both staffing and training) over the lifecycle of the system. This paper presents initial efforts to consider deferred maintenance and its impact on TOC for long-lived systems such as the DDG-51 class destroyers. Near-term cost pressures often result in decisions that defer maintenance until later than scheduled, or well after first notice of a maintenance need. Deferring maintenance postpones the cost of performing it, saving money in the short term, but may also leave the system in a state of further degradation; if so, the later maintenance tasks needed to restore the ship's capability or reliability may become more costly. While these trade-offs are conceptually well understood, they have not been quantified well enough to let decision makers choose wisely when funds are constrained, partly because the necessary data is often unavailable. This paper presents initial work aimed at using data recorded by the Navy to construct a model that could support quantitative decisions. The principal challenge is that most of the recorded data is at the system level, implying that the ship must be modeled as a single unit; this assumption underestimates the reliability impact of deferring corrective maintenance. Our results show that, given the data available, a stochastic renewal process can model the Arleigh Burke (DDG-51) class guided-missile destroyers, implying that the ship returns to a "like new" condition following successful maintenance. The stochastic renewal process model is a first step toward using reported data to model delayed maintenance and its effect on TOC.
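The renewal-process assumption ("like new" after each successful maintenance, so times between maintenance events are independent and identically distributed) can be sketched with a short simulation. The exponential time-to-failure distribution and its parameters below are illustrative assumptions, not values fitted to Navy data.

```python
import random

def simulate_renewals(mean_ttf, horizon, seed=42):
    """Count maintenance events over `horizon` years when the time to
    failure after each restoration is exponential with mean `mean_ttf`
    (a renewal process: the ship is "like new" after every repair)."""
    rng = random.Random(seed)
    t, events = 0.0, 0
    while True:
        t += rng.expovariate(1.0 / mean_ttf)
        if t > horizon:
            return events
        events += 1

# A 2-year mean time to failure over a 30-year service life gives on the
# order of horizon / mean_ttf = 15 maintenance events.
print(simulate_renewals(mean_ttf=2.0, horizon=30.0))
```

Replacing the exponential with a distribution fitted to reported data, and attaching a cost to each event, is the kind of extension toward a TOC model the abstract describes.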
Scenario-driven architecture assessment methodology for large data analysis systems
Edmon Begoli, Theodore F. Chila, W. Inmon
2013 IEEE International Systems Conference (SysCon) | Pub Date: 2013-04-15 | DOI: 10.1109/SysCon.2013.6549857
The methodology presented in this paper emerged from a technical and organizational assessment we conducted for a large data analysis system and for its expansion to support a significant new mission in the healthcare domain. We developed a 4+1-dimensional approach for examining the different characteristics of a system: four traditional dimensions and a fifth, scenario-based dimension introduced as a device for exploring the entire system in its business context. We present the principles, guidelines, and structure of the methodology, as well as the results of applying it, leading to a credible evaluation that assesses current large data analysis systems better than the previous, purely static assessment.
Linking FMI-based components with discrete event systems
W. Müller, E. Widl
2013 IEEE International Systems Conference (SysCon) | Pub Date: 2013-04-15 | DOI: 10.1109/SysCon.2013.6549955
The simulation of cyber-physical systems involves modular heterogeneous systems. When continuous subsystems are embedded in a discrete event system, the classic approach has the different subsystems share the same communication points and wait for each other. The approach presented in this paper instead computes a prediction for every continuous subsystem, so that continuous subsystems can be used as discrete components in a discrete event system. As a proof of concept, the idea is implemented with FMUs (Functional Mock-up Units) generated with OpenModelica and the Discrete Event domain of Ptolemy II. A model implemented in this environment is compared with another implementation that uses only Ptolemy II; the results show the better scalability and shorter runtime of the presented approach compared to the pure Ptolemy II approach.
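The core idea, letting each continuous subsystem publish a prediction of its next event so the discrete-event scheduler need not advance all subsystems at shared lock-step communication points, can be sketched without any FMI machinery. The `Integrator` class below is a toy stand-in for an FMU; its names and dynamics are hypothetical.

```python
import heapq

class Integrator:
    """Toy stand-in for an FMU: the level rises at a constant rate, so the
    time of the next threshold crossing can be predicted exactly."""
    def __init__(self, name, rate, threshold):
        self.name, self.rate, self.threshold = name, rate, threshold
        self.level, self.t = 0.0, 0.0

    def predict_next_event(self):
        return self.t + (self.threshold - self.level) / self.rate

    def advance_to(self, t):
        # We only advance at predicted crossings, so the threshold is hit:
        # reset the level and report the event.
        self.level, self.t = 0.0, t
        return f"{self.name} fired at t={t:g}"

def run(subsystems, horizon):
    """Discrete-event loop: each subsystem is scheduled at its own
    predicted event time rather than at common synchronization points."""
    queue = [(s.predict_next_event(), i) for i, s in enumerate(subsystems)]
    heapq.heapify(queue)
    log = []
    while queue and queue[0][0] <= horizon:
        t, i = heapq.heappop(queue)
        log.append(subsystems[i].advance_to(t))
        heapq.heappush(queue, (subsystems[i].predict_next_event(), i))
    return log

log = run([Integrator("A", rate=1.0, threshold=2.0),
           Integrator("B", rate=1.0, threshold=3.0)], horizon=6.0)
print(log)
```

Each subsystem is touched only at its own event times, which is the source of the scalability advantage over advancing every subsystem at every shared communication point.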
Using data envelopment analysis for supplier evaluation with environmental considerations
Sun Zhe, T. Wong, L. Lee
2013 IEEE International Systems Conference (SysCon) | Pub Date: 2013-04-15 | DOI: 10.1109/SysCon.2013.6549852
With the proliferation of outsourcing in the global marketplace, supplier selection has become a key strategic consideration in forming a competitive supply chain. Supplier selection is a multi-criteria decision-making problem in which suppliers are evaluated simultaneously against criteria such as price, quality, delivery, and service. Facing growing pressure from governments and customers, an increasing number of companies are beginning to consider environmental issues in their procurement and supplier selection processes in pursuit of sustainable development, which makes it necessary to measure a supplier's environmental performance. This paper aims to identify the environmental criteria that can be applied to assess suppliers' overall performance. Data envelopment analysis (DEA), a multi-criteria decision-making approach, is applied to help companies evaluate suppliers' environmental performance and other capabilities simultaneously.
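As a minimal illustration of DEA-style supplier scoring: in the special case of a single input and a single output, the CCR efficiency of each decision-making unit reduces to its output/input ratio normalized by the best ratio in the comparison set. The supplier data below is invented for illustration; a full DEA model with multiple inputs and outputs requires solving a linear program per supplier.

```python
# Hypothetical supplier data: one input (cost) and one output (an
# environmental performance score).
suppliers = {
    "S1": {"cost": 100.0, "green_score": 80.0},
    "S2": {"cost": 120.0, "green_score": 90.0},
    "S3": {"cost": 90.0,  "green_score": 45.0},
}

def dea_efficiency(data, inp, out):
    """Single-input/single-output CCR efficiency: each unit's output/input
    ratio divided by the best ratio observed in the set."""
    ratios = {k: v[out] / v[inp] for k, v in data.items()}
    best = max(ratios.values())
    return {k: r / best for k, r in ratios.items()}

for name, eff in sorted(dea_efficiency(suppliers, "cost", "green_score").items()):
    print(f"{name}: {eff:.4f}")
# S1: 1.0000
# S2: 0.9375
# S3: 0.6250
```

Suppliers scoring 1.0 lie on the efficient frontier; scores below 1.0 quantify how far a supplier falls short of the best environmental return per unit of cost.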
Power-efficiency study using SPECjEnterprise2010
Hitoshi Oi, Sho Niboshi
2013 IEEE International Systems Conference (SysCon) | Pub Date: 2013-04-15 | DOI: 10.1109/SysCon.2013.6549977
In this paper, we present a case study of the power-performance trade-offs in Java application servers. We run the industry-standard benchmark for Java application servers, SPECjEnterprise2010, on two platforms with different CPUs, AMD Phenom II and Intel Atom, and investigate performance and power consumption as system size increases, as well as the relative performance of Phenom and Atom. Phenom is capable of dynamic frequency scaling (DFS), and we study the effects of clock frequency control parameters on performance and power consumption. In terms of the maximum system size with valid quality of service (QoS) metrics, Phenom could handle 9.7 times more transactions than Atom; in terms of dynamic power consumption normalized to system size, Atom was 2.5 times more power-efficient than Phenom. Increasing the sampling rate, one of the DFS parameters, was effective in reducing power consumption at low load levels: it reduced dynamic power by up to 7.7 W, about 40% lower than the default setting.
Multi-criteria simulation of program outcomes
D. Bodner, J. Wade
2013 IEEE International Systems Conference (SysCon) | Pub Date: 2013-04-15 | DOI: 10.1109/SysCon.2013.6549885
Programs that develop and deploy complex systems are typically judged successful by multiple criteria, in categories such as schedule, cost, technical system performance, quality, and customer expectations. Criteria are operationalized via particular metrics, and there are often complex relationships between metrics, e.g., correlations or trade-offs. In an acquisition program, it is critical that systems engineers understand the implications of their actions and decisions with respect to these metrics, since the metrics are used to report the performance and eventual outcome of the program; such understanding, however, usually takes many years of on-the-job experience. This paper describes an approach to simulation modeling of program behavior and performance whereby program outputs expressed in these metrics can be studied by systems engineers. An example program simulation model, currently used in an educational technology system for training systems engineers, is presented. The decisions and actions a systems engineer can take are described, and the impacts of various actions and decisions on program metrics and metric relationships are illustrated. The model is validated by subject matter experts with extensive experience in the program domain.
Embracing reusable systems architecture
Julie Peirson, R. Turner, B. Williams
2013 IEEE International Systems Conference (SysCon) | Pub Date: 2013-04-15 | DOI: 10.1109/SysCon.2013.6549934
The evolution of systems integration has significantly reduced proprietary interfaces. Aging federated systems contain proprietary interfaces that limit upgrade flexibility. As product lines evolve, their increase in value can be attributed to the progress from interface standards to open systems architecture. Systems integration evolution for aircraft design is perpetuated by technology development, and one way to harness that progress is reusable systems architecture (RSA), derived from libraries of systems functional definitions. RSA is an evolutionary concept because it captures existing systems definitions and addresses the gap of integrating state-of-the-art emergent technologies efficiently. The uniqueness of the library concept is attained through the system attributes: this research will identify the system attributes (e.g., communication protocols, bonding and grounding requirements, power requirements, etc.) needed in the functional library to efficiently incorporate state-of-the-art systems. This paper illustrates a high-level example of how such a library concept could be constructed. Leveraging prior systems functional definitions enables new combinations of systems with reduced integration cost, less risk, and improved first-time quality.
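A functional library of system attributes of the kind described above might be sketched as simple records queried for compatibility. The fields, entries, and compatibility rule below are hypothetical illustrations, not the paper's attribute set.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SystemFunction:
    """One functional-library entry; the fields are illustrative
    attributes (bus protocol, power draw, grounding scheme)."""
    name: str
    protocol: str
    power_w: float
    grounding: str

library = [
    SystemFunction("nav_radio",   "ARINC-429", 40.0, "chassis"),
    SystemFunction("ads_b",       "ARINC-429", 25.0, "chassis"),
    SystemFunction("ife_display", "Ethernet",  60.0, "isolated"),
]

def compatible(entry, protocol, power_budget_w):
    """A definition is reusable here if it speaks the target bus protocol
    and fits within the available power budget."""
    return entry.protocol == protocol and entry.power_w <= power_budget_w

reusable = [e.name for e in library if compatible(e, "ARINC-429", 50.0)]
print(reusable)  # ['nav_radio', 'ads_b']
```

Querying the library by attributes rather than by product name is what lets prior functional definitions be recombined into new system configurations.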