Jabier Martinez, D. Strüber, J. Horcas, Alexandru Burdusel, S. Zschaler
Configuring feature-oriented, variability-rich systems is complex because of the large number of features and, potentially, the lack of visibility into the implications for quality attributes when selecting certain features. We present Acapulco as an alternative to existing tools for automating the configuration process, with a focus on mono- and multi-criteria optimization. The soundness of the tool was demonstrated in a previous publication comparing it to SATIBEA and MODAGAME. Its main advantage stems from consistency-preserving configuration operators (CPCOs), which guarantee the validity of configurations throughout the evolution process of the IBEA genetic algorithm. We present a new version of Acapulco built on top of FeatureIDE that is extensible through the easy integration of objective functions, provides pre-defined reusable objectives, and handles complex feature model constraints.
"Acapulco: an extensible tool for identifying optimal and consistent feature model configurations". Proceedings of the 26th ACM International Systems and Software Product Line Conference - Volume B, 2022-09-12. DOI: 10.1145/3503229.3547067
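The core idea behind consistency-preserving operators, namely keeping every candidate configuration valid with respect to the feature model's constraints during evolution, can be illustrated with a toy validity check. The feature model and constraints below are hypothetical and unrelated to Acapulco's actual implementation; real tools avoid brute-force enumeration.

```python
from itertools import product

# Hypothetical toy feature model: four features and some cross-tree
# constraints, each constraint a predicate over a configuration
# (a dict mapping feature name -> selected?).
FEATURES = ["base", "gui", "cli", "logging"]
CONSTRAINTS = [
    lambda c: c["base"],                      # root feature is mandatory
    lambda c: c["gui"] or c["cli"],           # at least one front end
    lambda c: not (c["gui"] and c["cli"]),    # front ends are alternatives
    lambda c: not c["logging"] or c["base"],  # logging requires base
]

def is_valid(config):
    """A configuration is valid iff it satisfies every constraint."""
    return all(check(config) for check in CONSTRAINTS)

def valid_configurations():
    """Enumerate all valid configurations by brute force (fine for a toy
    model; a genetic algorithm with consistency-preserving operators
    instead mutates configurations without ever leaving this set)."""
    for bits in product([False, True], repeat=len(FEATURES)):
        config = dict(zip(FEATURES, bits))
        if is_valid(config):
            yield config
```

In this sketch, exactly one front end and an optional logging feature survive the constraints, so only four of the sixteen raw configurations are valid; a search operator that preserves consistency never has to discard or repair the other twelve.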
Self-adaptive systems can change their behavior in response to internal or external issues detected during operation. Such systems should be able to change their internal structure or functionality to cope with, for example, broken motors or changes in the infrastructure. Ensuring that adaptations performed during operation do not compromise the desired behavior or functionality of the system is of utmost importance. In this paper, we contribute to the corresponding quality assurance challenge. In particular, we focus on a specific class of self-adaptive systems that utilize health states for computing repair actions. We discuss the requirements of testing methodologies for such systems and raise relevant research questions.
"Challenges of testing self-adaptive systems" by Liliana Marie Prikler and F. Wotawa. Proceedings of the 26th ACM International Systems and Software Product Line Conference - Volume B, 2022-09-12. DOI: 10.1145/3503229.3547048
Dario Romano, Kevin Feichtinger, Danilo Beuche, U. Ryssel, Rick Rabiser
Over the last 30 years, many variability modeling approaches have been developed, and new ones still appear regularly. Most are described only in academic papers; few come with tool support. The sheer plethora of approaches, all differing in scope and expressiveness, makes it difficult to assess their properties, experiment with them, and find the right approach for a specific use case. Transformations between variability modeling approaches, or importers/exporters for tools, can help, but such transformations are hard to realize without information loss. In this paper, we describe how we derived and implemented transformations between the academically developed Universal Variability Language (UVL) and the commercially developed pure::variants tool with as little information loss as possible. Our approach can also be used to optimize constraints, e.g., to reduce their number without affecting the configuration space, using particular capabilities that pure::variants provides. Moreover, via an existing variability model transformation approach that uses UVL as a pivot language, we enable the transformation of FeatureIDE feature models, DOPLER decision models, and Orthogonal Variability Models to and from pure::variants. With our approach, we work towards bridging the gap between academic and industrial variability modeling tools and enable experimentation with the different capabilities these tools provide.
"Bridging the gap between academia and industry: transforming the universal variability language to pure::variants and back". Proceedings of the 26th ACM International Systems and Software Product Line Conference - Volume B, 2022-09-12. DOI: 10.1145/3503229.3547056
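The pivot-language idea can be illustrated with a minimal round-trip sketch. The two toy "dialects", the pivot representation, and all function names below are invented for illustration; they are not the UVL or pure::variants formats, but they show how translating through a shared intermediate form can preserve the information both sides can express.

```python
# Toy dialect A stores features as a list of (name, mandatory) tuples;
# toy dialect B stores mandatory and optional feature names separately.
# A shared pivot (name -> properties dict) mediates between the two,
# mimicking the role UVL plays as a pivot language between tools.

def dialect_a_to_pivot(model_a):
    """Translate dialect A into the pivot representation."""
    return {name: {"mandatory": mandatory} for name, mandatory in model_a}

def pivot_to_dialect_b(pivot):
    """Translate the pivot into dialect B."""
    return {
        "mandatory": sorted(n for n, p in pivot.items() if p["mandatory"]),
        "optional": sorted(n for n, p in pivot.items() if not p["mandatory"]),
    }

def dialect_b_to_pivot(model_b):
    """Translate dialect B back into the pivot representation."""
    pivot = {n: {"mandatory": True} for n in model_b["mandatory"]}
    pivot.update({n: {"mandatory": False} for n in model_b["optional"]})
    return pivot
```

A round trip A → pivot → B → pivot returning the same pivot model is the lossless case; real transformations must additionally decide what to do with constructs that only one side supports.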
The increasing size and complexity of feature models (FMs) require efficient testing and debugging techniques. Feature models can be tested, for example, with regard to their conformance with a pre-defined set of analysis operations. In this paper, we show how the number of consistency checks for FM testing can be reduced through test case aggregation. Using a divide-and-conquer approach, we show how to transform a feature model test suite into an aggregated representation in which individual test cases are combined whenever specific consistency criteria are fulfilled. Performance improvements are analyzed on the basis of a best- and worst-case runtime analysis.
"Test case aggregation for efficient feature model testing" by Viet-Man Le, A. Felfernig, and Thi Ngoc Trang Tran. Proceedings of the 26th ACM International Systems and Software Product Line Conference - Volume B, 2022-09-12. DOI: 10.1145/3503229.3547046
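The aggregation idea, merging test cases so that one consistency check covers several of them, can be sketched as follows. The representation of test cases as partial feature assignments, the toy feature model, and the "no conflicting values" merge criterion are simplified assumptions for illustration, not the paper's exact method.

```python
from itertools import product

# Toy stand-in for a feature model: three features and one
# cross-tree constraint ("a requires b").
FEATURES = ["a", "b", "c"]

def fm_satisfied(assign):
    return (not assign["a"]) or assign["b"]

def consistent(partial):
    """One consistency check: does some full configuration extend the
    partial assignment while satisfying the feature model?"""
    free = [f for f in FEATURES if f not in partial]
    for bits in product([False, True], repeat=len(free)):
        assign = {**partial, **dict(zip(free, bits))}
        if fm_satisfied(assign):
            return True
    return False

def aggregate(test_cases):
    """Greedily merge test cases (partial assignments) that assign no
    feature conflicting values, so one consistency check can validate a
    whole group instead of one check per test case."""
    merged = []
    for tc in test_cases:
        for group in merged:
            if all(group.get(f, v) == v for f, v in tc.items()):
                group.update(tc)  # compatible: fold into existing group
                break
        else:
            merged.append(dict(tc))
    return merged
```

Three compatible test cases collapse into a single group here, turning three consistency checks into one; conflicting test cases stay in separate groups.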
M. Soleymani, D. Ferreira, Vasil L. Tenev, Martin Becker
Adopting Product Line Engineering (PLE) approaches for software-intensive systems reduces overall development and maintenance costs, shortens time to market, and improves overall product quality. The Software and System Product Line (SPL) community has provided a large number of analysis approaches and tools, which were developed in different contexts, answer different questions, and contribute to different analysis goals. Pursuing these goals requires holistic approaches, i.e., integrated toolchains and a classification of analyses, documented as a centralized collection of knowledge. Previously, we proposed a classification system for describing existing analyses; this method also supports the search for possible combinations, i.e., toolchains that address complex industrial needs when adopting PLE approaches. In this paper, we present a prototype of a crowd-sourcing platform to collect and share the required information about existing analyses and toolchains. While overviews of PLE-aware analyses exist, we propose an interactive visualisation to identify and document the required input data and resulting information for each analysis method. With this platform, we hope to promote the usage of analysis approaches and encourage collaboration between researchers.
"A prototype of a crowd-sourcing platform for classification and integration of analysis tools in product line engineering". Proceedings of the 26th ACM International Systems and Software Product Line Conference - Volume B, 2022-09-12. DOI: 10.1145/3503229.3547054
Migrating a set of similar software products into a Software Product Line is a time-consuming and costly process which ultimately yields important gains in time and customization. Conducting this migration within an agile development process is complex and requires discipline and adaptation. We believe the migration can benefit from leveraging agile software specifications and the source code versioning platform. We are currently working on a method, whose design is explained in this paper, that exploits (1) Epics and User Stories to identify features and variability and (2) the source code associated with code merges related to User Stories and Epics to locate them. We plan to extract features and variability from Epics and User Stories using Natural Language Processing (NLP) techniques. We then plan to investigate how formal concept analysis (FCA) and relational concept analysis (RCA) can assist feature model synthesis and establish mappings between features and source code. These knowledge discovery methods were chosen for their ability to highlight and hierarchically organize groups of similar artefacts: FCA considers only artefact descriptions to establish groups of similar artefacts, whereas RCA groups similarly described artefacts that, in addition, share similar relationships to other artefact groups. We also plan to evaluate the method in the context of a company (ITK) with which we collaborate, using its code base and the associated project management artifacts, and we will assess how the method can be generalized to public projects on source code versioning platforms.
"Feature and variability extraction from Agile specifications and their related source code for software product line migration" by Thomas Georges. Proceedings of the 26th ACM International Systems and Software Product Line Conference - Volume B, 2022-09-12. DOI: 10.1145/3503229.3547065
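How FCA groups similarly described artefacts can be sketched with a tiny formal context. The user stories and descriptor terms below are hypothetical; a formal concept is a maximal group of objects together with the maximal set of attributes they all share.

```python
from itertools import combinations

# Hypothetical binary context: user stories (objects) described by
# candidate feature terms (attributes).
CONTEXT = {
    "story1": {"login", "password"},
    "story2": {"login", "sso"},
    "story3": {"export", "pdf"},
}

def common_attributes(objects):
    """Intent: the attributes shared by all given objects."""
    sets = [CONTEXT[o] for o in objects]
    return set.intersection(*sets) if sets else set.union(*CONTEXT.values())

def objects_having(attributes):
    """Extent: the objects that carry all given attributes."""
    return {o for o, attrs in CONTEXT.items() if attributes <= attrs}

def formal_concepts():
    """Enumerate all formal concepts (extent, intent) by closing every
    subset of objects; these concepts are the groups of similar
    artefacts that FCA organizes into a hierarchy."""
    concepts = set()
    objs = list(CONTEXT)
    for r in range(len(objs) + 1):
        for subset in combinations(objs, r):
            intent = common_attributes(list(subset))
            extent = objects_having(intent)
            concepts.add((frozenset(extent), frozenset(intent)))
    return concepts
```

In this context, story1 and story2 form a concept under the shared term "login", which is exactly the kind of grouping a feature model synthesis step could turn into a candidate feature.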
In this article, to help manufacturers better manage manufacturing resource reconfiguration in the context of reconfigurable manufacturing systems, we propose a generic knowledge-based model that supports resource reconfiguration decision-making while considering various manufacturing requirements and constraints. The model is based on the Constraint Satisfaction Problem (CSP) framework. Two scenarios demonstrate that applying a Knowledge-Based System (KBS) is a great opportunity to improve the responsiveness of manufacturing systems.
"A generic knowledge model for resource reconfiguration in the context of reconfigurable manufacturing systems" by Mathis Allibe, Abdourahim Sylla, and Gülgün Alpan-Gaujal. Proceedings of the 26th ACM International Systems and Software Product Line Conference - Volume B, 2022-09-12. DOI: 10.1145/3503229.3547040
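A resource reconfiguration problem expressed in the CSP framework can be sketched as follows. The variables, domains, and constraints below are invented for illustration (the paper's knowledge model is far richer), and the brute-force search stands in for the propagation-based solvers a real KBS would use.

```python
from itertools import product

# Hypothetical reconfiguration problem: assign a tool head to each of
# two machines and a speed level to the line, subject to constraints.
VARIABLES = {
    "m1_head": ["drill", "mill"],
    "m2_head": ["drill", "mill"],
    "line_speed": ["low", "high"],
}

CONSTRAINTS = [
    # the current order requires at least one milling head
    lambda a: "mill" in (a["m1_head"], a["m2_head"]),
    # high speed is only allowed when both machines carry the same head
    lambda a: a["line_speed"] == "low" or a["m1_head"] == a["m2_head"],
]

def solve():
    """Enumerate every assignment of values to variables that satisfies
    all constraints, i.e., every feasible reconfiguration."""
    names = list(VARIABLES)
    for values in product(*(VARIABLES[n] for n in names)):
        assignment = dict(zip(names, values))
        if all(check(assignment) for check in CONSTRAINTS):
            yield assignment
```

Enumerating the feasible reconfigurations up front is what lets a decision-maker compare them against further requirements (cost, changeover time) instead of validating candidates by hand.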
Robel Negussie Workalemahu, C. Forza, Nikola Suzić
The capability of realizing individually customized products with complex geometries makes additive manufacturing (AM) increasingly attractive to companies engaged in mass customization. To be exploited in the market, the geometric freedom that AM allows has to be transferred to customers for customer-specific customization. Notably, this is a new demand placed on product configurators (PCs). In this research we therefore ask: How is this demand being answered by pioneers who engage in this challenge? Are there other new demands that AM places on configurators? This paper aims to answer these exploratory questions by examining how these issues have been considered in the existing literature and by providing some examples. We hope that the considerations derived from this investigation will open a discussion in the product configuration research community, with the goal of identifying the particular PC capabilities needed to customize additively manufactured products using PCs.
"Product configurators for additively manufactured products: exploring their peculiar characteristics". Proceedings of the 26th ACM International Systems and Software Product Line Conference - Volume B, 2022-09-12. DOI: 10.1145/3503229.3547038
David Romero, J. Galindo, J. Horcas, David Benavides
Relational databases are widely used in the development of software applications; a typical example is the content management systems found on most websites. However, migrating database structure and content between different management systems is not trivial, and manually created scripts are difficult to reuse in other scenarios. This paper presents a tool for database migration based on modeling what we call a migration product line. The tool makes it possible to obtain different configurations resulting in final products in a semi-automatic way, i.e., products matching the software requirements, while considering the variability between any two relational databases. To study the feasibility of our proposal, we have implemented a proof of concept that performs a migration between two databases.
"Variability-aware data migration tool". Proceedings of the 26th ACM International Systems and Software Product Line Conference - Volume B, 2022-09-12. DOI: 10.1145/3503229.3547062
Nowadays, Cyber-Physical Systems (CPS) are among the core elements of Industry 4.0. It is common practice to run simulations on a model of the CPS using specific tools and approaches. Since such models are meant to represent real systems, it is appropriate to assume that several components may be affected by noises and disturbances (N&D), and that these may impact the system differently depending on the considered configuration and the simulation scenarios. Analyzing the signals of a CPS makes it possible to understand the relationships that govern the behavior of the whole system in the presence of N&D. Depending on the context and the considered scenarios, simulations that include N&D may produce very different numerical results from simulations that do not. However, simulations with additional N&D are non-trivial to compute and analyze, especially when the considered CPS are also highly variable and configurable. The adopted approach investigates the validation of possible cross-configurations, so that the solution includes sets of suitable configurations for both the CPS parameters and the N&D with respect to the scenarios.
"BEEHIVE" by Valeria Trombetta. Proceedings of the 26th ACM International Systems and Software Product Line Conference - Volume B, 2022-09-12. DOI: 10.1145/3503229.3547064