{"title":"Energy planning in sub‐Saharan African telecom networks: Decision support using a soft systems methodology","authors":"Mbiika Ceriano, J. Lalk, G. A. Thopil","doi":"10.1002/sys.21706","DOIUrl":"https://doi.org/10.1002/sys.21706","url":null,"abstract":"","PeriodicalId":54439,"journal":{"name":"Systems Engineering","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48154862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Developing a model that supports the evolution of legacy systems into an enterprise","authors":"Sian Terry, V. Chandrasekar","doi":"10.1002/sys.21700","DOIUrl":"https://doi.org/10.1002/sys.21700","url":null,"abstract":"This article details the first step in the system dynamics analysis of the systems engineering process for the evolution of legacy systems into an enterprise. This step develops a model that depicts the interaction of the various components associated with the system of interest. This model is a collection of causal loop diagrams that will foster the development of a novel framework known as the enterprise lifecycle model, which will support system dynamics analysis through three stages: planning, development, and execution. Peer‐reviewed academic and industry sources will be utilized to understand how accepted literature defines this analysis. Specifically, this model will be based on the Vee lifecycle model as well as the Agile and Iron Triangle frameworks. Supplemental elements will be added to these diagrams to incorporate the environment within which the system of interest is planned, developed, and executed. Additional factors, such as quality management, will be added to complete the super system and system of interest views of the enterprise lifecycle model—with the goal of creating a model depicting a reductive and holistic view that aids in the reduction of complexity surrounding the systems engineering process used to prepare legacy systems for evolution into an enterprise and support the definition of the desired target system.","PeriodicalId":54439,"journal":{"name":"Systems Engineering","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41411028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The National Airworthiness Council artificial intelligence working group (NACAIWG) summit proceedings 2022","authors":"Jonathon Parry, Donald H. Costello, J. Rupert, Gavin Taylor","doi":"10.1002/sys.21703","DOIUrl":"https://doi.org/10.1002/sys.21703","url":null,"abstract":"","PeriodicalId":54439,"journal":{"name":"Systems Engineering","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45223354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Facilitators and inhibitors of Agile methods adoption: Practitioners view","authors":"Deepti Mishra, A. Mishra, Samia Abdalhamid","doi":"10.1002/sys.21702","DOIUrl":"https://doi.org/10.1002/sys.21702","url":null,"abstract":"This study provides empirical evidence to the body of knowledge in Agile methods adoption in small, medium, and large organizations in the global context. This research explores facilitators and inhibitors of Agile methods adoption in software development organizations. A survey was conducted among Agile professionals to gather survey data from 52 software organizations in seven countries across the world. This study found many facilitators of Agile adoption to be significant such as customers’ dominant issues, encouragement, project champion, highly competent team, use of tools, etc. Similarly a correlation analysis revealed multiple inhibitors as significant: absence of a full set of right Agile practices, absence of customer presence, absence of tracking mechanisms during Agile progress, and failure to determine the role of the client. The present study identifies that an Agile team with high expertise and competence leads to higher quality in software, customer satisfaction along with return on investment (ROI) while a small Agile team increases ease in handling changing requirements, customer satisfaction, reduced delivery time, and increased ROI. Frequent delivery accelerates better control over work, adds to software quality, customer satisfaction, and in shortening delivery time along with increase ROI. It has also been observed that providing essential features early leads to increase in software quality and customer satisfaction. This study confirms that active customer focus leads to better control over work. Further, absence of customer decreases dealing with changing requirements, and customer satisfaction while absence of progress tracking lowers customer satisfaction.","PeriodicalId":54439,"journal":{"name":"Systems Engineering","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45606252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessment of expected production of a deep‐sea mining system: An integrated model‐based systems engineering and discrete event simulation approach","authors":"Astrid V. Solheim, A. Rauzy, P. O. Brett, S. Ellefmo, Tonje Hatling, R. Helmons, B. Asbjørnslett","doi":"10.1002/sys.21699","DOIUrl":"https://doi.org/10.1002/sys.21699","url":null,"abstract":"In this paper, model‐based systems engineering (MBSE) and discrete event simulation (DES) are combined to assess the performance of an offshore production system at an early stage. Various systems engineering tools are applied to an industrial case concerning the retrieval of deep‐sea minerals, and a simulation engine is developed to calculate the annual production output. A mean production of 1 Million tonnes of ore per year is estimated for an operation in the Norwegian Sea using Monte Carlo simulation. Depending on the limiting design wave height of the marine operations, the estimated production output ranges from 280,000 tonnes to 1.8 Million tonnes per year. The constrained parameter of the production system is particularly the wave height operational limit of the ship‐to‐ship transfer operation. We present the learning outcome from applying MBSE and DES to this case and discuss important aspects for improved performance.","PeriodicalId":54439,"journal":{"name":"Systems Engineering","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43076013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of individual strategies for resource access on collaboratively maintained irrigation infrastructure","authors":"Jordan L. Stern, A. Siddiqi, P. Grogan","doi":"10.1002/sys.21701","DOIUrl":"https://doi.org/10.1002/sys.21701","url":null,"abstract":"Built infrastructure for water and energy supply, transportation, and other such services underpins human well‐being and socioeconomic development. A fundamental understanding of how infrastructure design and user strategies interact can guide important design decisions as well as policy formulation for ensuring long‐term infrastructure viability in conjunction with improved individual user benefits. In this work, an agent based model (ABM) is developed to study this issue for the specific case of irrigation canals. Cooperatively maintained irrigation canals serve essential roles in sustaining agriculture‐based economies in many regions. Canal system design can strongly affect benefits derived by distributed users, regional agricultural output, and the long‐term viability of the shared infrastructure itself. Here, an ABM is used to investigate how an option to use an independent water source interacts with canal design to affect canal maintenance cooperation and farmer income. The independent water source is stylized as a well that provides access to groundwater and represents a strategically robust design option; a design option that reduces the implementer's utility vulnerability to unfavorable actions by other actors. Research in other systems has demonstrated that strategically robust designs can improve both implementer utility and the probability of collaboration. The results of this research, in contrast, demonstrate that the option of individual resource access, the strategically robust design option, as represented by a well, reduces cooperative maintenance in most cases. However, wells also improve farmer income, especially for downstream farmers that are most affected by water theft.","PeriodicalId":54439,"journal":{"name":"Systems Engineering","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47773437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deployment of EPIC framework for intelligence transportation system","authors":"Miri Sitton","doi":"10.1002/sys.21698","DOIUrl":"https://doi.org/10.1002/sys.21698","url":null,"abstract":"Enterprise system engineering is a new practice that has emerged over the last few decades, promising to achieve better enterprises by improving cross‐enterprise processes. However, enterprises have a unique property as a system of unsynchronized arrays of systems. This property can lead to severe problems and anomalies, such as cross‐enterprise failures. These issues become even more drastic in supporting cross‐enterprises processes like transportation. The transportation arena is a complex system in itself. It comprises a variety of enterprises and systems supported by different technologies and vendors. Moreover, it involves governmental, municipal, and private stakeholders. Therefore, planning and designing a coordinated and integrated architecture is difficult. A new enterprise system engineering framework called EPIC addresses these issues by enabling better coordination of unsynchronized arrays of systems across enterprises. This research explores the application of an architectural framework to the transportation arena, where existing methods have not adequately addressed its unique properties. Deploying it in the “real world” plants the seeds to improve the transportation processes, their performance, efficiency, and reliability.","PeriodicalId":54439,"journal":{"name":"Systems Engineering","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43453173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A cross‐sectorial review of industrial best practices and case histories on Industry 4.0 technologies","authors":"F. G. Galizia, M. Bortolini, F. Calabrese","doi":"10.1002/sys.21697","DOIUrl":"https://doi.org/10.1002/sys.21697","url":null,"abstract":"Industry 4.0 (I4.0) was introduced in 2011, and its advanced enablers strongly affect industrial practices. In the current literature, while several papers offer general reviews on the topic, contributions exploring the evidences coming from the implementation of I4.0 in multi‐sector Small and Medium Enterprises (SMEs) and large enterprises are few and expected. To address this gap, a comprehensive review of the main I4.0 enabling technologies is conducted, focusing on implementation experiences in companies belonging to different sectors. Forty (40) real case studies are analyzed and compared. The results show that 63% of the identified applications involve large enterprises in the transport sector, that is, automotive, aeronautics, and railway, adopting a structured set of enabling technologies. SMEs engaged in I4.0 projects primarily belong to the mechanical engineering sector, and 37% of such projects deals with the preliminary feasibility analysis of introducing a single enabling technology. Conclusions and trends guide researchers and practitioners in understanding the implementation level of I4.0 technologies.","PeriodicalId":54439,"journal":{"name":"Systems Engineering","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43625903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decision making for multi‐objective problems: Mean and median metrics","authors":"M. Efatmaneshnik, N. Chitsaz, Li Qiao","doi":"10.1002/sys.21690","DOIUrl":"https://doi.org/10.1002/sys.21690","url":null,"abstract":"When dealing with problems with more than two objectives, sophisticated multi‐objective optimization algorithms might be needed. Pareto optimization, which is based on the concept of dominated and non‐dominated solutions, is the most widely utilized method when comparing solutions within a multi‐objective setting. However, in the context of optimization, where three or more objectives are involved, the effectiveness of Pareto dominance approaches to drive the solutions to convergence is significantly compromised as more and more solutions tend to be non‐dominated by each other. This in turn reduces the selection pressure, especially for algorithms that rely on evolving a population of solutions such as evolutionary algorithms, particle swarm optimization, differential evolution, etc. The size of the non‐dominated set of trade‐off solutions can be quite large, rendering the decision‐making process difficult if not impossible. The size of the non‐dominated solution set increases exponentially with an increase in the number of objectives. This paper aims to expand a framework for coping with many/multi‐objective and multidisciplinary optimization problems through the introduction of a min‐max metric that behaves like a median measure that can locate the center of a data set. We compare this metric to the Chebyshev norm L_∞ metric that behaves like a mean measure in locating the center of a data set. The median metric is introduced in this paper for the first time, and unlike the mean metric is independent of the data normalization method. These metrics advocate balanced, natural, and minimum compromise solutions about all objectives. We also demonstrate and compare the behavior of the two metrics for a Tradespace case study involving more than 1200 CubeSat design alternatives identifying a manageable set of potential solutions for decision‐makers.","PeriodicalId":54439,"journal":{"name":"Systems Engineering","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42817017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pragmatic verification and validation of industrial executable SysML models","authors":"B. Horváth, V. Molnár, Bence Graics, Á. Hajdu, I. Ráth, Á. Horváth, R. Karban, G. Trancho, Zoltán Micskei","doi":"10.1002/sys.21679","DOIUrl":"https://doi.org/10.1002/sys.21679","url":null,"abstract":"In recent years, Model‐Based Systems Engineering (MBSE) practices have been applied in various industries to design, simulate and verify complex systems. The verification and validation (V&V) of such systems engineering models are crucial to develop high‐quality systems. However, this is a challenging problem due to the complexity of the models and semantic differences in how different tools interpret the models, which can undermine the validity of the obtained results if they go undiscovered. To address these issues, we propose (i) a subset of the SysML language for which the practical semantic integrity of tools can be achieved and (ii) a cloud‐based V&V framework for this subset, lifting verification to an industrial scale. We demonstrate the feasibility of our approach on an industrial‐scale model from the aerospace domain and summarize the lessons learned during transitioning formal verification tools to an industrial context.","PeriodicalId":54439,"journal":{"name":"Systems Engineering","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45602732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}