Fuzzy Optimization with Multi-Objective Evolutionary Algorithms: a Case Study
Gracia Sánchez, F. Jiménez
Pub Date: 2007-04-01 | DOI: 10.1109/MCDM.2007.369417
This paper outlines a real-world industrial product-mix selection problem involving 8 decision variables and 21 constraints with fuzzy coefficients. On one hand, a multi-objective optimization approach to the fuzzy problem is proposed, in which modified S-curve membership functions are considered. On the other hand, an ad hoc Pareto-based multi-objective evolutionary algorithm that captures multiple non-dominated solutions in a single run is described. Solutions on the Pareto front correspond to fuzzy solutions of the original fuzzy problem, expressed in terms of the triple (x, μ, α), i.e., optimal solution vector, level of satisfaction, and vagueness factor. The decision-maker can then choose, in an a posteriori decision environment, the most convenient optimal solution according to the desired level of satisfaction and vagueness factor. The proposed algorithm has been evaluated against existing methodologies in the field, and the results have been compared with the well-known multi-objective evolutionary algorithm NSGA-II.
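For concreteness, below is a minimal sketch of a logistic-type (modified) S-curve membership function of the kind commonly used in fuzzy product-mix models; the shape constants B and C and the piecewise handling at the interval bounds are assumptions, not details taken from this abstract.

```python
import math

def s_curve_membership(x, x_low, x_high, alpha, B=1.0, C=0.001):
    """Logistic-type S-curve membership function (sketch).

    x_low, x_high : bounds of the fuzzy interval for the coefficient
    alpha         : vagueness factor (larger alpha -> steeper transition)
    B, C          : shape constants (assumed values, not from the paper)
    """
    if x <= x_low:
        return 1.0
    if x >= x_high:
        return 0.0
    t = (x - x_low) / (x_high - x_low)   # rescale into (0, 1)
    return B / (1.0 + C * math.exp(alpha * t))
```

With alpha around 13, membership stays close to 1 just above x_low and drops close to 0 just below x_high, so alpha directly tunes how vague the coefficient is.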
{"title":"Fuzzy Optimization with Multi-Objective Evolutionary Algorithms: a Case Study","authors":"Gracia Sánchez, F. Jiménez","doi":"10.1109/MCDM.2007.369417","DOIUrl":"https://doi.org/10.1109/MCDM.2007.369417","url":null,"abstract":"This paper outlines a real-world industrial problem for product-mix selection involving 8 decision variables and 21 constraints with fuzzy coefficients. On one hand, a multi-objective optimization approach to solve the fuzzy problem is proposed. Modified S-curve membership functions are considered. On the other hand, an ad hoc Pareto-based multi-objective evolutionary algorithm to capture multiple non dominated solutions in a single run of the algorithm is described. Solutions in the Pareto front corresponds with the fuzzy solution of the former fuzzy problem expressed in terms of the group of three (xrarr, mu, alpha), i.e., optimal solution - level of satisfaction - vagueness factor. Decision-maker could choose, in a posteriori decision environment, the most convenient optimal solution according to his level of satisfaction and vagueness factor. The proposed algorithm has been evaluated with the existing methodologies in the field and the results have been compared with the well-known multi-objective evolutionary algorithm NSGA-II","PeriodicalId":306422,"journal":{"name":"2007 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128684085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Local Dominance Including Control of Dominance Area of Solutions in MOEAs
Hiroyuki Sato, H. Aguirre, Kiyoshi Tanaka
Pub Date: 2007-04-01 | DOI: 10.1109/MCDM.2007.369106
Local dominance has been shown to significantly improve the overall performance of multiobjective evolutionary algorithms (MOEAs) on combinatorial optimization problems. This work proposes controlling the dominance area of solutions in local dominance MOEAs to enhance Pareto selection, aiming to find solutions with both high convergence and high diversity. We control the expansion or contraction of the dominance area of solutions and analyze its effects on the search performance of a local dominance MOEA using 0/1 multiobjective knapsack problems. We show that convergence of the algorithm can be significantly improved, while keeping a good distribution of solutions along the whole true Pareto front, by using local dominance with expansion of the dominance area of solutions. We also show that, by controlling the dominance area of solutions, dominance can be applied within very small neighborhoods, which significantly reduces the computational cost of the local dominance MOEA.
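The abstract does not give the transformation itself, but a hedged sketch of the standard way of controlling the dominance area of solutions is shown below: each objective value is re-mapped before the usual dominance check, with a parameter s per objective (s = 0.5 reproduces conventional dominance; under the usual convention, s < 0.5 expands the dominance area and s > 0.5 contracts it). The exact formulation used in the paper may differ.

```python
import math

def expand_objectives(f, s=0.5):
    """Re-map an objective vector f (minimization, non-negative values) so
    that ordinary Pareto dominance on the re-mapped values corresponds to an
    expanded or contracted dominance area.  r is the length of f and w_i the
    angle between f and the i-th objective axis; s = 0.5 leaves the values
    unchanged.  Sketch only; parameter handling is an assumption."""
    r = math.sqrt(sum(v * v for v in f))
    if r == 0.0:
        return list(f)
    out = []
    for fi in f:
        w = math.acos(max(-1.0, min(1.0, fi / r)))   # angle to the i-th axis
        out.append(r * math.sin(w + s * math.pi) / math.sin(s * math.pi))
    return out

def dominates(fa, fb):
    """Conventional Pareto dominance for minimization."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))
```

Dominance within a local neighborhood is then just dominates(expand_objectives(fa, s), expand_objectives(fb, s)).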
{"title":"Local Dominance Including Control of Dominance Area of Solutions in MOEAs","authors":"Hiroyuki Sato, H. Aguirre, Kiyoshi Tanaka","doi":"10.1109/MCDM.2007.369106","DOIUrl":"https://doi.org/10.1109/MCDM.2007.369106","url":null,"abstract":"Local dominance has been shown to improve significantly the overall performance of multiobjective evolutionary algorithms (MOEAs) on combinatorial optimization problems. This work proposes the control of dominance area of solutions in local dominance MOEAs to enhance Pareto selection aiming to find solutions with high convergence and diversity properties. We control the expansion or contraction of the dominance area of solutions and analyze its effects on the search performance of a local dominance MOEA using 0/1 multiobjective knapsack problems. We show that convergence of the algorithm can be significantly improved while keeping a good distribution of solutions along the whole true Pareto front by using local dominance with expansion of dominance area of solutions. We also show that by controlling the dominance area of solutions dominance can be applied within very small neighborhoods, which reduces significantly the computational cost of the local dominance MOEA","PeriodicalId":306422,"journal":{"name":"2007 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124552304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Use of the WFG Toolkit and PISA for Comparison of MOEAs
L. Bradstreet, L. Barone, Lyndon While, S. Huband, P. Hingston
Pub Date: 2007-04-01 | DOI: 10.1109/MCDM.2007.369117
Understanding the behaviour of different optimisation algorithms is important in order to apply the best algorithm to a particular problem. The WFG toolkit was designed to aid this task for multi-objective evolutionary algorithms (MOEAs), offering an easily modifiable framework that allows practitioners to test different features by "plugging in" different forms of transformations. In doing so, the WFG toolkit provides a set of problems that exhibit a variety of characteristics. This paper presents a comparison between two state-of-the-art MOEAs (NSGA-II and SPEA2) that exemplifies the unique capabilities of the WFG toolkit. By altering the control parameters, or even the transformations that compose the WFG problems, we are able to explore the types of problems on which SPEA2 and NSGA-II each excel. Our results show that the relative performance of the two algorithms differs not only with the dimensionality of the problem, but also with properties such as the shape and size of the underlying Pareto surface. As such, the tunability of the WFG toolkit is key in allowing easy exploration of these different features.
{"title":"Use of the WFG Toolkit and PISA for Comparison of MOEAs","authors":"L. Bradstreet, L. Barone, Lyndon While, S. Huband, P. Hingston","doi":"10.1109/MCDM.2007.369117","DOIUrl":"https://doi.org/10.1109/MCDM.2007.369117","url":null,"abstract":"Understanding the behaviour of different optimisation algorithms is important in order to apply the best algorithm to a particular problem. The WFG toolkit was designed to aid this task for multi-objective evolutionary algorithms (MOEAs), offering an easily modifiable framework that allows practitioners the ability to test different features by \"plugging\" in different forms of transformations. In doing so, the WFG toolkit provides a set of problems that exhibit a variety of different characteristics. This paper presents a comparison between two state of the art MOEAs (NSGA-II and SPEA2) that exemplifies the unique capabilities of the WFG toolkit. By altering the control parameters or even the transformations that compose the WFG problems, we are able to explore the different types of problems where SPEA2 and NSGA-II each excel. Our results show that the performance of the two algorithms differ not only on the dimensionality of the problem, but also by properties such as the shape and size of the underlying Pareto surface. As such, the tunability of the WFG toolkit is key in allowing the easy exploration of these different features.","PeriodicalId":306422,"journal":{"name":"2007 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133590606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fuzzy multiple attribute decision making with eight types of preference information on alternatives
Quan Zhang, Yucai Wang, Yuxian Yang
Pub Date: 2007-04-01 | DOI: 10.1109/MCDM.2007.369103
A new approach is proposed for fuzzy multiple attribute decision making (MADM) problems with preference information on alternatives. In this approach, multiple decision makers give their preference information on the alternatives in different formats. The preference formats are made uniform and aggregated with the fuzzy majority method to obtain the social fuzzy preference relation on the alternatives. Accordingly, an optimization model is constructed to assess the ranking values of the alternatives.
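As one hedged illustration of aggregation under a fuzzy majority (the paper's exact operators and the eight preference formats are not specified in the abstract), the usual route is a quantifier-guided OWA operator over the individual preference degrees:

```python
def quantifier_most(r, a=0.3, b=0.8):
    """RIM linguistic quantifier 'most'; a and b are the textbook values,
    assumed here rather than taken from the paper."""
    if r < a:
        return 0.0
    if r > b:
        return 1.0
    return (r - a) / (b - a)

def owa_weights(n, quantifier=quantifier_most):
    """Quantifier-guided OWA weights (Yager)."""
    return [quantifier(i / n) - quantifier((i - 1) / n) for i in range(1, n + 1)]

def owa(values, weights):
    """Ordered weighted averaging: sort descending, then apply the weights."""
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Example: aggregate three decision makers' fuzzy preference degrees for
# alternative i over alternative j into a 'social' preference degree.
prefs = [0.7, 0.6, 0.9]            # hypothetical individual degrees
social = owa(prefs, owa_weights(len(prefs)))
```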
{"title":"Fuzzy multiple attribute decision making with eight types of preference information on alternatives","authors":"Quan Zhang, Yucai Wang, Yuxian Yang","doi":"10.1109/MCDM.2007.369103","DOIUrl":"https://doi.org/10.1109/MCDM.2007.369103","url":null,"abstract":"A new approach is proposed for the fuzzy multiple attribute decision making (MADM) problems with preference information on alternatives. In the approach, multiple decision makers give their preference information on alternatives in different formats. The uniformities and aggregation process with fuzzy majority method are employed to obtain the social fuzzy preference relation on the alternatives. Accordingly, an optimization model is constructed to assess the ranking values of the alternatives","PeriodicalId":306422,"journal":{"name":"2007 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134409729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring Robustness of Plans for Simulation-Based Course of Action Planning: A Framework and an Example
B. Chandrasekaran, Mark Goldman
Pub Date: 2007-04-01 | DOI: 10.1109/MCDM.2007.369435
Planning requires evaluating candidate plans multi-criterially, which in turn requires some kind of causal model of the operational environment, whether the model is used as part of evaluation by humans or simulation by computers. However, there is always a gap - consisting of missing or erroneous information - between any model and reality. One important source of gaps in models is built-in assumptions about the world, e.g., enemy capabilities or intent in military planning. Some of these gaps can be handled by standard approaches to uncertainty, such as optimizing expected values of the criteria of interest based on assumed probability distributions. However, there are many problems, such as military planning, where it is not appropriate to choose the best plan based on such expected values, or where meaningful probability distributions are not available. Such uncertainties, often called "deep uncertainties," require an approach to planning where the task is not so much choosing the optimal plan as choosing a robust one, a plan that would do well enough even in the presence of such uncertainties. Decision support systems should help the planner explore the robustness of candidate plans. In this paper, we illustrate this functionality, robustness exploration, in the domain of network disruption planning, an example of effects-based operations.
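To make the distinction between expected-value and robustness-oriented evaluation concrete, here is a minimal sketch with hypothetical plans and scenario scores (none of these numbers or names come from the paper):

```python
def expected_score(plan_scores, probabilities):
    """Expected value across scenarios; appropriate when the scenario
    probabilities are meaningful."""
    return sum(p * s for p, s in zip(probabilities, plan_scores))

def robust_score(plan_scores):
    """Worst case across scenarios: a simple maximin notion of robustness
    for deep uncertainty, where no credible probabilities exist."""
    return min(plan_scores)

# Hypothetical illustration: scores of two candidate plans under three
# scenarios (e.g. different assumptions about enemy intent).
plan_a = [0.9, 0.8, 0.2]    # strong on average, fragile in scenario 3
plan_b = [0.7, 0.7, 0.6]    # slightly weaker on average, but robust
# Ranking by expected value (uniform probabilities) favors plan_a;
# ranking by worst case favors plan_b.
```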
{"title":"Exploring Robustness of Plans for Simulation-Based Course of Action Planning: A Framework and an Example","authors":"B. Chandrasekaran, Mark Goldman","doi":"10.1109/MCDM.2007.369435","DOIUrl":"https://doi.org/10.1109/MCDM.2007.369435","url":null,"abstract":"Planning requires evaluating candidate plans multi-criterially, which in turn requires some kind of a causal model of the operational environment, whether the model is to be used as part of evaluation by humans or simulation by computers. However, there is always a gap - consisting of missing or erroneous information - between any model and the reality. One of the important sources of gaps in models is built-in assumptions about the world, e.g., enemy capabilities or intent in military planning. Some of the gaps can be handled by standard approaches to uncertainty, such as optimizing expected values of the criteria of interest based on assumed probability distributions. However, there are many problems, such as military planning, where it is not appropriate to choose the best plan based on such expected values, or where meaningful probability distributions are not available. Such uncertainties, often called \"deep uncertainties,\" require an approach to planning where the task is not choosing the optimal plan as much as a robust plan, one that would do well enough even in the presence of such uncertainties. Decision support systems should help the planner explore the robustness of candidate plans. In this paper, we illustrate this functionality, robustness exploration, in the domain of network disruption planning, an example of effect-based operations.","PeriodicalId":306422,"journal":{"name":"2007 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making","volume":"27 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114117508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Application of Agent-Based Co-Evolutionary System with Predator-Prey Interactions to Solving Multi-Objective Optimization Problems
Rafał Dreżewski, Leszek Siwik
Pub Date: 2007-04-01 | DOI: 10.1109/MCDM.2007.369104
The realization of co-evolutionary interactions in evolutionary algorithms results in increased population diversity and speciation. A general model of co-evolution in a multi-agent system allows for the modeling and realization of agent-based co-evolutionary systems in which many species and sexes may exist and interact. In this paper, one exemplary agent-based system with a predator-prey mechanism is presented. Results from experiments with various multi-objective test problems conclude the paper.
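The abstract does not describe the predator-prey mechanism in detail; the sketch below shows the classic predator-prey idea for multi-objective search, in which each predator is tied to one objective and removes the locally worst prey. It is purely an illustration of the interaction, not the paper's agent model, and the neighborhood size is an assumption.

```python
import random

def predator_step(prey, objective_index, neighborhood_size=5):
    """One predator 'hunt' (sketch): the predator, associated with a single
    objective, samples a small neighborhood of prey (candidate solutions)
    and removes the one that is worst on that objective (minimization).

    prey: list of (solution, objective_vector) pairs; modified in place.
    """
    if len(prey) <= neighborhood_size:
        return
    neighborhood = random.sample(range(len(prey)), neighborhood_size)
    worst = max(neighborhood, key=lambda i: prey[i][1][objective_index])
    del prey[worst]
```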
{"title":"The Application of Agent-Based Co-Evolutionary System with Predator-Prey Interactions to Solving Multi-Objective Optimization Problems","authors":"Rafał Dreżewski, Leszek Siwik","doi":"10.1109/MCDM.2007.369104","DOIUrl":"https://doi.org/10.1109/MCDM.2007.369104","url":null,"abstract":"The realization of co- evolutionary interactions in evolutionary algorithms results in increased population diversity and speciation. General model of co-evolution in multi-agent system allows for modeling and realization of agent-based co-evolutionary systems in which many species and sexes may exist and interact. In this paper one exemplary agent-based system with predator-prey mechanism is presented. The results from experiments with various multi-objective test problems conclude the paper","PeriodicalId":306422,"journal":{"name":"2007 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122030689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Spatial Classification Model for Multicriteria Analysis
A. D. Amo, L. Garmendia, D. Gómez, J. Montero
Pub Date: 2007-04-01 | DOI: 10.1109/MCDM.2007.369112
This paper stresses that standard multicriteria aggregation procedures either assume no structure in the data or assume that the structure is linear. Nevertheless, many decision-making problems are based upon a family of data with a well-defined spatial structure, which is simply not taken into account; hence, such aggregation procedures may be misleading. We therefore propose an alternative model in which the aggregation of criteria assumes a certain spatial structure, in accordance with remote sensing data.
{"title":"A Spatial Classification Model for Multicriteria Analysis","authors":"A. D. Amo, L. Garmendia, D. Gómez, J. Montero","doi":"10.1109/MCDM.2007.369112","DOIUrl":"https://doi.org/10.1109/MCDM.2007.369112","url":null,"abstract":"This paper stresses that standard multicriteria aggregation procedures either do not assume any structure in data or this structure is in fact assumed linear. Nevertheless, many decision making problems are based upon a family of data with a well denned spatial structure, which is simply not taken into account. Hence, such aggregation procedures may be misleading. Therefore, we propose an alternative model where the aggregation of criteria assumes a certain structure, according to remote sensing data","PeriodicalId":306422,"journal":{"name":"2007 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121053800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Solving the Molecular Sequence Alignment Problem with Generalized Differential Evolution 3 (GDE3)
S. Kukkonen, S. Jangam, N. Chakraborti
Pub Date: 2007-04-01 | DOI: 10.1109/MCDM.2007.369105
Molecular sequence alignment is one of the most essential tools of molecular biology: it permits tracking changes and similarities between molecular sequences. In this paper, the molecular sequence alignment problem is formulated in a form suitable for an evolutionary algorithm (EA), and two problem instances are solved using Generalized Differential Evolution 3 (GDE3), a general-purpose EA. Despite the relatively large number of decision variables, the instances were solvable, and the results were comparable to those of the sequence alignment solvers used for comparison.
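For readers unfamiliar with GDE3, its per-offspring selection rule is commonly described as below; this is a hedged sketch of that general rule (unconstrained case), not of the alignment-specific formulation in the paper.

```python
def weakly_dominates(fa, fb):
    """fa weakly dominates fb if it is at least as good in every objective
    (minimization)."""
    return all(a <= b for a, b in zip(fa, fb))

def gde3_selection(target_f, trial_f):
    """GDE3-style selection between a DE target vector and its trial vector,
    given their objective vectors.  Returns which action to take."""
    if weakly_dominates(trial_f, target_f):
        return "replace"    # trial is at least as good everywhere
    if weakly_dominates(target_f, trial_f):
        return "discard"    # target is at least as good; drop the trial
    return "keep_both"      # incomparable: the population temporarily grows
                            # and is pruned later with non-dominated sorting
                            # and crowding distance, as in NSGA-II
```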
{"title":"Solving the Molecular Sequence Alignment Problem with Generalized Differential Evolution 3 (GDE3)","authors":"S. Kukkonen, S. Jangam, N. Chakraborti","doi":"10.1109/MCDM.2007.369105","DOIUrl":"https://doi.org/10.1109/MCDM.2007.369105","url":null,"abstract":"Molecular sequence alignment is one of the most essential tools of the molecular biology. It permits to track changes and similarities between molecular sequences. In this paper the molecular sequence alignment problem is formulated suitable for an evolutionary algorithm (EA), and two problem instances are solved using generalized differential evolution 3 (GDE3), which is a general purpose EA. Regardless of relatively large number of decision variables, the instances were solvable and results were comparable to those by sequence alignment solvers in comparison","PeriodicalId":306422,"journal":{"name":"2007 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129592200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Crowding Population-based Ant Colony Optimisation for the Multi-objective Travelling Salesman Problem
Daniel Angus
Pub Date: 2007-04-01 | DOI: 10.1109/MCDM.2007.369110
Ant-inspired algorithms have gained popularity for use in multi-objective problem domains. One specific algorithm, Population-based ACO, which maintains a solution population in addition to the traditional pheromone matrix, has been shown to be effective at solving combinatorial multi-objective optimisation problems. This paper extends the Population-based ACO algorithm with a crowding population replacement scheme to increase search efficacy and efficiency. Results are shown for a suite of multi-objective travelling salesman problems of varying complexity.
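The abstract does not spell out the crowding scheme; a common form of crowding replacement, sketched below with an assumed sample size and an assumed tour-distance function, replaces the most similar population member only when the candidate dominates it.

```python
import random

def crowding_replace(population, candidate, distance, dominates, sample_size=5):
    """Crowding replacement (sketch): compare the candidate against its most
    similar member within a random sample of the population and replace that
    member only if the candidate dominates it.  Population entries and the
    candidate are assumed to carry their objective vectors so that
    dominates(a, b) can compare them; distance (e.g. a shared-edge distance
    between TSP tours) and sample_size are assumptions."""
    sample = random.sample(range(len(population)), min(sample_size, len(population)))
    closest = min(sample, key=lambda i: distance(population[i], candidate))
    if dominates(candidate, population[closest]):
        population[closest] = candidate
```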
{"title":"Crowding Population-based Ant Colony Optimisation for the Multi-objective Travelling Salesman Problem","authors":"Daniel Angus","doi":"10.1109/MCDM.2007.369110","DOIUrl":"https://doi.org/10.1109/MCDM.2007.369110","url":null,"abstract":"Ant inspired algorithms have gained popularity for use in multi-objective problem domains. One specific algorithm, Population-based ACO, which uses a population as well as the traditional pheromone matrix, has been shown to be effective at solving combinatorial multi-objective optimisation problems. This paper extends the population-based ACO algorithm with a crowding population replacement scheme to increase the search efficacy and efficiency. Results are shown for a suite of multi-objective travelling salesman problems of varying complexity","PeriodicalId":306422,"journal":{"name":"2007 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122060937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated Risk Classification and Outlier Detection
N. Iyer, P. Bonissone
Pub Date: 2007-04-01 | DOI: 10.1109/MCDM.2007.369101
Risk assessment is a common task in a variety of problem domains, ranging from the assignment of premium classes to insurance applications, to the evaluation of disease treatments in medical diagnostics, situation assessments in battlefield management, and state evaluations in planning activities. Risk assessment involves scoring alternatives based on their likelihood of producing better- or worse-than-expected returns in their application domain. Often, it is sufficient to evaluate the risk associated with an alternative at a predefined granularity derived from an ordered set of risk classes, so the process of risk assessment becomes one of classification. Traditionally, risk classifications are made by human experts who use their domain knowledge to perform such assignments, and these assignments drive further decisions about the alternatives. We address the automation of the risk classification process by exploiting risk structures present in sets of historical cases classified by human experts. We use such structures to pre-compile compact risk signatures that can be used to classify new alternatives. Specifically, we extract these signatures from dominance relationships, exploiting the partial ordering induced by the monotonic relationship between the individual features and the risk associated with a candidate alternative. Due to its underlying logical basis, this classifier produces highly accurate and defensible risk assignments; however, due to its strict applicability constraints, it covers only a small percentage of new cases. In response, we present a weaker version of the classifier, which incrementally improves coverage without any substantial drop in accuracy. Although these approaches could be used as risk classifiers on their own, we found their primary strength to be in validating the overall logical consistency of the risk assignments made by human experts and automated systems. We refer to potentially inconsistent risk assignments as outliers and present results obtained from applying our technique to the problem of insurance underwriting.
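A minimal sketch of the dominance-based idea follows, assuming features are oriented so that larger values mean higher risk and risk classes are ordered integers; the signature pre-compilation and the weaker classifier variant are omitted, and all names here are illustrative rather than the paper's.

```python
def riskier_or_equal(a, b):
    """Case a dominates case b if a is at least as risky as b on every
    feature (features assumed oriented so that larger = riskier)."""
    return all(x >= y for x, y in zip(a, b))

def risk_bounds(new_case, labeled_cases):
    """Bound the risk class of a new case: at least the class of any
    historical case it dominates, at most the class of any case that
    dominates it.  labeled_cases: list of (feature_vector, risk_class),
    with higher class meaning higher risk."""
    lower, upper = None, None
    for features, risk in labeled_cases:
        if riskier_or_equal(new_case, features):
            lower = risk if lower is None else max(lower, risk)
        if riskier_or_equal(features, new_case):
            upper = risk if upper is None else min(upper, risk)
    return lower, upper          # inconsistent evidence if lower > upper

def find_outliers(labeled_cases):
    """Flag pairs where dominance and the assigned risk classes disagree:
    case i is at least as risky as case j feature-wise, yet was assigned a
    lower risk class."""
    outliers = []
    for i, (fi, ri) in enumerate(labeled_cases):
        for j, (fj, rj) in enumerate(labeled_cases):
            if i != j and riskier_or_equal(fi, fj) and ri < rj:
                outliers.append((i, j))
    return outliers
```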
{"title":"Automated Risk Classification and Outlier Detection","authors":"N. Iyer, P. Bonissone","doi":"10.1109/MCDM.2007.369101","DOIUrl":"https://doi.org/10.1109/MCDM.2007.369101","url":null,"abstract":"Risk assessment is a common task present in a variety of problem domains, ranging from the assignment of premium classes to insurance applications, to the evaluation of disease treatments in medical diagnostics, situation assessments in battlefield management, state evaluations in planning activities, etc. Risk assessment involves scoring alternatives based on their likelihood to produce better or worse than expected returns in their application domain. Often, it is sufficient to evaluate the risk associated with an alternative by using a predefined granularity derived from an ordered set of risk-classes. Therefore, the process of risk assessment becomes one of classification. Traditionally, risk classifications are made by human experts using their domain knowledge to perform such assignments. These assignments will drive further decisions related to the alternatives. We address the automation of the risk classification process by exploiting risk structures present in sets of historical cases classified by human experts. We use such structures to pre-compile risk signatures that are compact and can be used to classify new alternatives. Specifically, we use dominance relationships, exploiting the partial ordering induced by the monotonic relationship between the individual features and the risk associated with a candidate alternative, to extract such signatures. Due to its underlying logical basis, this classifier produces highly accurate and defensible risk assignments. However, due to its strict applicability constraints, it covers only a small percentage of new cases. In response, we present a weaker version of the classifier, which incrementally improves its coverage without any substantial drop in accuracy. Although these approaches could be used as risk classifiers on their own, we found their primary strengths to be in validating the overall logical consistency of the risk assignments made by human experts and automated systems. We refer to potentially inconsistent risk assignments as outliers and present results obtained from implementing our technique in the problem of insurance underwriting","PeriodicalId":306422,"journal":{"name":"2007 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124267683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}