Pub Date : 2018-12-01 DOI: 10.1109/umso.2018.8637245
{"title":"[Copyright notice]","authors":"","doi":"10.1109/umso.2018.8637245","DOIUrl":"https://doi.org/10.1109/umso.2018.8637245","url":null,"abstract":"","PeriodicalId":433225,"journal":{"name":"2018 International Conference on Unconventional Modelling, Simulation and Optimization - Soft Computing and Meta Heuristics - UMSO","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121295763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2018-12-01 DOI: 10.1109/UMSO.2018.8637219
Yoshiyuki Matsumoto, J. Watada
Rough Set Theory was proposed in 1982 by Zdzislaw Pawlak. It supports data mining based on decision rules extracted from databases, web pages, big data, and other sources. Decision rules are employed both for data analysis and for classifying unknown objects. We have applied rough sets to time-series analysis, extracting predictive knowledge from time-series data in the form of decision rules and using those rules to forecast economic time series. However, the number of decision rules acquired from time-series data can be very large, and a large rule set makes knowledge acquisition difficult. We previously proposed a method that merges rules to reduce their number. Just as it is difficult to acquire knowledge from many rules, it is also difficult to acquire knowledge from rules with many condition attributes; our method reduces the number of condition attributes and thereby the number of rules. Reduction is not always possible, however, and in some cases the number of rules increases. In this paper, we examine the conditions under which rule reduction is possible: we vary the condition attributes and verify their effect on rule reduction. We acquire knowledge from the Nikkei Stock Average, obtain decision rules by the rough set method, and analyze the influence on rule reduction.
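As a rough illustration of the workflow this abstract describes (discretize a time series, keep only consistent decision rules, then merge rules by dropping a condition attribute), the following toy Python sketch uses made-up prices and thresholds; the function names and discretization scheme are illustrative assumptions, not the authors' implementation:

```python
from collections import defaultdict

def discretize(series, up=0.002, down=-0.002):
    """Map day-over-day changes to qualitative symbols (illustrative thresholds)."""
    moves = []
    for prev, cur in zip(series, series[1:]):
        r = (cur - prev) / prev
        moves.append("up" if r > up else "down" if r < down else "flat")
    return moves

def extract_rules(symbols, window=3):
    """Condition attributes = the last `window` moves; decision = the next move.
    Only deterministic rules are kept (a lower-approximation-style filter)."""
    seen = defaultdict(set)
    for i in range(len(symbols) - window):
        seen[tuple(symbols[i:i + window])].add(symbols[i + window])
    return {cond: decs.pop() for cond, decs in seen.items() if len(decs) == 1}

def merge_rules(rules, window=3):
    """Try to drop one condition attribute: if every rule matching the reduced
    condition agrees on the decision, keep the shorter (merged) rule."""
    merged = {}
    for pos in range(window):
        groups = defaultdict(set)
        for cond, dec in rules.items():
            groups[cond[:pos] + ("*",) + cond[pos + 1:]].add(dec)
        for reduced, decs in groups.items():
            if len(decs) == 1:
                merged[reduced] = decs.pop()
    return merged

prices = [100, 101, 100.5, 102, 103, 102, 101, 103, 104, 103.5, 105]
syms = discretize(prices)
rules = extract_rules(syms)
merged = merge_rules(rules)
print(len(rules), "consistent rules;", len(merged), "after attribute-dropping")
```

Note that on this toy data the "merged" set is actually larger than the original one, echoing the abstract's observation that merging does not always reduce the rule count.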
{"title":"Study of Knowledge Acquisition Using Rough Set Merging Rule from Time Series Data","authors":"Yoshiyuki Matsumoto, J. Watada","doi":"10.1109/UMSO.2018.8637219","DOIUrl":"https://doi.org/10.1109/UMSO.2018.8637219","url":null,"abstract":"Rough Set Theory proposed in 1982 by Zdzislaw Pawlak. This theory can be data mining based on decision rules from a database, a web page, a big data, and so on. The decision rule is employed for data analysis as well as calculating an unknown object. We used rough set to analyze time-series data. We obtained prediction knowledge from time series data using decision rules. Economic time-series data was predicted using decision rules. However, when acquiring a decision rule from time series data, there are cases where the number of decision rules is very large. If the number of decision rules is very large, it is difficult to acquire knowledge. We proposed a method of merging them to reduce the number of decision rules. Similar to how it is difficult to acquire knowledge from multiple rules, it is also difficult to acquire knowledge from rules with a large number of condition attributes. Our method reduces the number of conditions attributes and thereby reduces the number of rules. However, it is not always possible to reduce rules. There are cases where the number of rules increases. In this thesis, we examine under what conditions rule reduction is possible. Change the condition attribute and verify the effect on rule reduction. We acquire knowledge using the Nikkei Stock Average. We acquire decision rule by rough set method and consider the influence on rule reduction.","PeriodicalId":433225,"journal":{"name":"2018 International Conference on Unconventional Modelling, Simulation and Optimization - Soft Computing and Meta Heuristics - UMSO","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128096891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2018-12-01 DOI: 10.1109/UMSO.2018.8637234
Y. Yabuuchi
Fuzzy regression models are classified into two types: non-interval-type and interval-type. A non-interval-type fuzzy regression model can analyze errors in much the same way as a statistical least-squares model. In contrast, because an interval-type fuzzy regression model describes the possibility of the analyzed system by enclosing the data, the obtained regression does not support accuracy measures such as error analysis. It is therefore important that the regression outputs illustrate the amount of possibility of the analyzed system, using evaluation functions that are appropriate and easy to interpret. This paper proposes a new evaluation function and validates it with a numerical example, through which the function is explained and discussed.
{"title":"Evaluation of an Interval-Type Model on Fuzzy Regression","authors":"Y. Yabuuchi","doi":"10.1109/UMSO.2018.8637234","DOIUrl":"https://doi.org/10.1109/UMSO.2018.8637234","url":null,"abstract":"A fuzzy regression model is classified into two types: non-interval-type and interval-type fuzzy regressions. A non-interval-type fuzzy regression model can analyze errors similar to the manner in which a statistical least squares model can. In contrast, because an interval-type fuzzy regression model illustrates the possibility of an analyzed system by including data, the obtained regression does not analyze prediction accuracies such as in error analysis. In other words, it is important to illustrate the amount of possibilities of an analyzed system by regression outputs. The appropriate evaluation functions, which can be easily interpreted, are used for this purpose. This paper proposes a new evaluation function, which is validated using a numerical example. The evaluation function is explained and discussed herein using the numerical example.","PeriodicalId":433225,"journal":{"name":"2018 International Conference on Unconventional Modelling, Simulation and Optimization - Soft Computing and Meta Heuristics - UMSO","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115512281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2018-12-01 DOI: 10.1109/UMSO.2018.8637239
M. Kashim, H. Tsegab, S. A. Ayub, Zainol Affendi B Abu Bakar
Mineral carbonation is a process in which CO2 reacts chemically with calcium- and/or magnesium-bearing minerals to form stable carbonate minerals that require minimal long-term monitoring. The in-situ transformation mechanism involves injecting CO2 into geological formations where the temperature, pressure, and pH conditions for mineral carbonation prevail. However, the dissolution of CO2 into formation waters depends on temperature, pressure, salinity, and pH buffering through fluid-rock reactions, so numerical modeling is needed to capture the combined effect of these variables over time. This paper presents findings on the combined effect of salinity, temperature, and pressure in a local geological formation, the Kuantan Basalt. The models show that the amount of CO2 trapped in the selected formation under pure-water conditions at temperatures of 60-150 °C is much lower than the amount trapped under higher-salinity geological conditions. The models also show a general decrease in trapped CO2 with increasing pressure for salinities ranging from fresh water to 20,000 mg/l NaCl. However, an increase in trapped CO2 is observed at higher salinities, such as those of the Malaysian gas-field scenario. These findings may provide clues to what could happen if CO2 is sequestered into geological formations with similar mineralogical composition and similar temperature, salinity, and pressure conditions.
{"title":"Numerical modeling of CO2 sequestration into basalt at high pressure and temperature with variable brine solutions","authors":"M. Kashim, H. Tsegab, S. A. Ayub, Zainol Affendi B Abu Bakar","doi":"10.1109/UMSO.2018.8637239","DOIUrl":"https://doi.org/10.1109/UMSO.2018.8637239","url":null,"abstract":"Mineral carbonation is a process whereby CO2 is chemically reacted with calcium and/or magnesium containing minerals to form stable carbonate minerals which needs minimal long-term monitoring. The in situ transformation mechanism involves injection of CO2 into geological formations where the temperature, pressure, and pH parameters for mineral-carbonation prevails. However, the dissolution of CO2 into formation waters depend on temperature, pressure, salinity, and buffering of pH through fluid-rock reaction, which needs numerical modeling to see the combined effect of certain variables through time. This paper presents findings from combined effect of salinity, temperature and pressure in a local geological formation, Kuantan Basalt. The models show that the amount of trapped CO2 in the selected geological formation with pure water condition and at temperature ranges from 60-150 °C is much lower than that of CO2 trapped at higher salinity geological conditions. The models also show a general decreasing amount of trapped CO2 with increasing pressure for salinity range from freshwater to 20000mg/l of NaCl. However, an increased amount of trapped CO2 with higher salinity such as the gas field of Malaysian scenario is observed. These findings may provide clues as what could happen if CO2 is sequestered into geological formations with similar mineralogical composition, similar temperature, salinity and pressure conditions.","PeriodicalId":433225,"journal":{"name":"2018 International Conference on Unconventional Modelling, Simulation and Optimization - Soft Computing and Meta Heuristics - UMSO","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123127332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2018-12-01 DOI: 10.1109/UMSO.2018.8637246
Viacheslav Kalashnikov, N. Kalashnykova, J. G. Flores-Muñiz
We study a bilevel programming problem modeling optimal toll assignment on a network of toll and free highways. At the upper level, a public operator or a private company running the toll roads sets the tolls so as to maximize its profit. At the lower level, the highway users seek an equilibrium among themselves, arranging their transportation flows along the routes to minimize their total travel costs subject to satisfying the demand for their goods or passengers. Our model extends previous ones by adding quadratic terms to the lower-level costs, thus reflecting mutual traffic congestion on the roads. Moreover, as a new feature, the lower-level quadratic costs are no longer separable, i.e., they are functions of the total flow along an arc (highway). To solve this bilevel programming problem, we develop a heuristic algorithm that makes use of sensitivity analysis techniques for quadratic programs. As a remedy against getting stuck at a local maximum of the upper-level objective function, we modify the well-known "filled function" method, which moves the search into the neighborhood of another local maximum point. A series of numerical experiments conducted on test models of small and medium size shows that the new algorithm is competitive.
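The bilevel structure described above can be sketched on a toy two-arc network: the upper level chooses a toll to maximize revenue, while the lower level is a user equilibrium with congestion costs quadratic in the flows (linear per-unit costs). This is only a minimal sketch under assumed parameter values, using brute-force grid search rather than the paper's sensitivity-analysis and filled-function heuristic:

```python
def lower_level_flow(toll, demand=10.0, c=(1.0, 3.0), q=(0.4, 0.2)):
    """Wardrop-style equilibrium on two parallel arcs: users split the demand so
    that per-unit costs c1 + q1*x + toll and c2 + q2*(demand - x) coincide.
    Congestion is linear in flow, i.e. total arc cost is quadratic."""
    x = (c[1] - c[0] - toll + q[1] * demand) / (q[0] + q[1])
    return min(max(x, 0.0), demand)  # corner solutions when one arc dominates

def optimize_toll(tolls):
    """Upper level: pick the toll maximizing revenue toll * flow on the toll arc
    (brute-force grid search over candidate tolls)."""
    return max(tolls, key=lambda t: t * lower_level_flow(t))

best = optimize_toll([i * 0.1 for i in range(60)])
print("best toll:", best, "equilibrium flow:", lower_level_flow(best))
```

With these made-up numbers the equilibrium flow is x(t) = (4 - t)/0.6, so revenue t*x(t) peaks at t = 2; the grid search recovers that maximizer.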
{"title":"Toll Optimization Problems with Quadratic Costs","authors":"Viacheslav Kalashnikov, N. Kalashnykova, J. G. Flores-Muñiz","doi":"10.1109/UMSO.2018.8637246","DOIUrl":"https://doi.org/10.1109/UMSO.2018.8637246","url":null,"abstract":"We study a bilevel programming problem molding the optimal toll assignment related to a network of toll and free highways. A public operator or a private company running the toll roads make decisions at the upper level by alloting the tolls in order to maximize their profits. The lower level decision makers (highway users), instead, look for the equilibrium among them while attempting to arrange their transportation flows along the routes aiming at minimization of their total travel costs subject to the satisfied demand for their goods/passengers. Our model extends the previous ones by adding quadratic terms to the lower level costs thus reflecting the mutual traffic congestion on the roads. Moreover, as a new feature, the lower level quadratic costs aren’t separable anymore, i.e., they are functions of the total flow along the arc (highway). In order to solve this bilevel programming problem, a heuristic algorithm making use of the sensitivity analysis techniques for quadratic programs is developed. As a remedy against being stuck at a local maximum of the upper-level objective function, we modify the well-known \"filled function\" method which brings us to a neighborhood of another local maximum point. A series of numerical experiments conducted on test models of small and medium size shows that the new algorithm is competitive enough.","PeriodicalId":433225,"journal":{"name":"2018 International Conference on Unconventional Modelling, Simulation and Optimization - Soft Computing and Meta Heuristics - UMSO","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115931334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2018-12-01 DOI: 10.1109/umso.2018.8637221
{"title":"UMSO 2018 Committee","authors":"","doi":"10.1109/umso.2018.8637221","DOIUrl":"https://doi.org/10.1109/umso.2018.8637221","url":null,"abstract":"","PeriodicalId":433225,"journal":{"name":"2018 International Conference on Unconventional Modelling, Simulation and Optimization - Soft Computing and Meta Heuristics - UMSO","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126377348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2018-12-01 DOI: 10.1109/UMSO.2018.8637224
Viacheslav Kalashnikov, A. Talman, N. Kalashnykova, L. Alanís-López
The paper develops extensions of both the antipodal (Borsuk–Ulam) theorem and Browder's theorem to cases involving star-shaped domains of the studied mappings and a multi-valued nature of the latter. More specifically, by making use of a triangulation procedure, we extend the antipodal and fixed-point theorems to not-necessarily-convex (star-shaped) domains. In addition, similar extensions are obtained for multi-valued mappings defined over star-shaped sets. Moreover, a direct algorithm constructing the required connected path of zero points of the mapping is designed, and its convergence is demonstrated.
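For reference, the classical antipodal theorem that these results extend can be stated as follows (a standard textbook formulation, not text from the paper):

```latex
% Classical Borsuk--Ulam (antipodal) theorem: every continuous map from the
% n-sphere to R^n identifies some pair of antipodal points. The paper extends
% results of this type to star-shaped domains and multi-valued mappings.
\begin{theorem}[Borsuk--Ulam]
  For every continuous map $f\colon S^n \to \mathbb{R}^n$ there exists a point
  $x \in S^n$ such that $f(x) = f(-x)$.
\end{theorem}
```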
{"title":"Extensions of Antipodal-Type Theorems","authors":"Viacheslav Kalashnikov, A. Talman, N. Kalashnykova, L. Alanís-López","doi":"10.1109/UMSO.2018.8637224","DOIUrl":"https://doi.org/10.1109/UMSO.2018.8637224","url":null,"abstract":"The paper develops the extensions of both the antipodal (Borsuk–Ulam) theorem and Browder theorem to the cases embracing star-shaped domains of the studied mappings and a multi-valued nature of the latter.To be more specific, by making use of the triangulation procedure, we spread out the antipodal and fixed-point theorems to the case of not necessarily convex (star-shaped) domains. In addition, similar extensions are obtained for multi-valued mappings defined over star-shaped sets. Moreover, a directt algorithm shaping the required connected path of the zero points of the mapping has been designed, and its convergence demonstrated.","PeriodicalId":433225,"journal":{"name":"2018 International Conference on Unconventional Modelling, Simulation and Optimization - Soft Computing and Meta Heuristics - UMSO","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133938841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2018-12-01 DOI: 10.1109/UMSO.2018.8637242
Zhenyuan Xu, Chao Xu, J. Watada, Lihan Hu
Human tracking is one of the most important research topics in computer vision. It is useful in many applications, such as surveillance systems and smart vehicle systems, and it is an important basic step in content analysis for behavior recognition and target detection. Owing to variations in human position, complicated backgrounds, and environmental conditions, human tracking remains challenging; in particular, difficulties caused by the environment and background, such as occlusion and noise, must be addressed. Real-time human tracking is also a critical step in intelligent video surveillance systems because of its heavy computational workload. In this paper we propose a Particle Swarm Optimization based Support Vector Machine (PSO-SVM) to overcome these problems. First, we perform preliminary human tracking over several frames using filters such as the particle filter and the Kalman filter. Second, for each newly arrived frame to be processed, we use the proposed PSO-SVM on the previous frames as a regression framework, from which an estimated location of the target is computed. Third, we process the new frame with the particle filter and compute the target location as the basic target location. Finally, by comparing the basic and estimated target locations, we obtain the tracked target location. Experimental results on several videos show the effectiveness and robustness of the proposed method.
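The "meta-heuristic parameter selector" at the core of PSO-SVM can be sketched with a minimal particle swarm optimizer. In the paper's setting the fitness would be the SVM regressor's validation error as a function of its hyperparameters; here a simple quadratic stand-in keeps the sketch self-contained, and all constants (swarm size, inertia, acceleration coefficients) are illustrative assumptions:

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (minimization). Velocities blend
    inertia, attraction to each particle's best, and attraction to the
    swarm's global best."""
    rnd = random.Random(seed)  # seeded for reproducibility
    pos = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Stand-in fitness with a known optimum at (2, -1); in PSO-SVM the SVM's
# cross-validation error over its hyperparameters would take its place.
best, err = pso(lambda p: (p[0] - 2) ** 2 + (p[1] + 1) ** 2, dim=2)
print("best params:", best, "fitness:", err)
```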
{"title":"A Meta-Heuristic Parameter Selector Based Support Vector Machine for Human Tracking","authors":"Zhenyuan Xu, Chao Xu, J. Watada, Lihan Hu","doi":"10.1109/UMSO.2018.8637242","DOIUrl":"https://doi.org/10.1109/UMSO.2018.8637242","url":null,"abstract":"Human tracking is one of the most important researches in computer vision. It is quite useful for many applications, such as surveil- lance systems and smart vehicle systems. It is also an important basic step for content analysis for behavior recognition and target detection. Due to the variations in human positions, complicated backgrounds and environmental conditions, human tracking remains challenging work. In particular, difficulties caused by environment and background such as occlusion and noises should be solved. Also, real-time human tracking now seems a critical step in intelligent video surveillance systems because of its huge computational workload. In this paper we propose a Particle Swarm Optimization based Support Vector Machine (PSO-SVM) to overcome these problems. First, we finish the preliminary human tracking step in several frames based on some filters such as particle filter and Kalman filter. Second, for each newly come frame need to be processed, we use the proposed PSO-SVM to process the previous frames as a regression frame work, based on this regression frame work, an estimated location of the target will be calculated out. Third, we process the newly come frame based on the particle filter and calculate out the target location as the basic target location. Finally, based on comparison analysis between basic target location and estimated target location, we can get the tracked target location. Experiment results on several videos will show the effectiveness and robustness of the proposed method.","PeriodicalId":433225,"journal":{"name":"2018 International Conference on Unconventional Modelling, Simulation and Optimization - Soft Computing and Meta Heuristics - UMSO","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133368875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2018-12-01 DOI: 10.1109/UMSO.2018.8637237
T. Hossain, J. Watada, M. Hermana, Siti Rohkmah Bt M Shukri, H. Sakai
Characterization and evaluation of (oil and gas) reservoirs is typically achieved using a combination of seismic and well data. It is therefore critical that the two data types be well calibrated, to correct for and account for the fact that seismic data are measured at a scale of tens of meters while well data are measured at a scale of tens of centimeters. In addition, seismic data can be poorly processed, and some well logs can be damaged, affected by mud-filtrate invasion, or missing entirely. This research proposes an approach based on rough set theory for generating significant rules from an inconsistent information system consisting of preprocessed seismic and well-log data, collected from geological data using a stratified random sampling method. Geoscience research often encounters inexact, uncertain, or vague data. Rough Set Theory (RST), originally put forward by Zdzisław Pawlak, is a tool for dealing with uncertainty and vagueness. RST is effective for data mining tasks such as rule extraction, clustering, and classification; it performs its computations directly on the available data by exploiting their granularity structure. Applying RST to the data generates a set of significant rules. These rules should help geoscientists understand the behavior of the data, enabling them to determine the dependency between the petrophysical properties obtained from well logs and the elastic properties derivable from seismic attributes, and to improve the accuracy of the data.
{"title":"A Rough Set Based Rule Induction Approach to Geoscience Data","authors":"T. Hossain, J. Watada, M. Hermana, Siti Rohkmah Bt M Shukri, H. Sakai","doi":"10.1109/UMSO.2018.8637237","DOIUrl":"https://doi.org/10.1109/UMSO.2018.8637237","url":null,"abstract":"Characterization and evaluation of (oil and gas) reservoirs is typically achieved using a combination of seismic and well data. It is therefore critical that the two data types are well calibrated to correct and account for the fact that seismic data are measured at a scale of tens of meters while well data at a scale of tens of centimeters. In addition, seismic data can be poorly processed; some well logs can be damaged, affected by mud filtrate invasion or completely missing. This research proposes an approach based on rough set theory for generating significant rules from a not consistent information system that consists of the preprocessed seismic and well log data collected from geological data using stratified random sampling method. It is often that Geosciences’ researches encountering inexact, uncertain, or vague data. Rough Set Theory (RST), originally put forward by Zdzisław I. Pawlak, is a tool for dealing with uncertainty and vagueness. RST is very effective to address data mining tasks like rule extraction, clustering and classification. In RST the available data are used for performing the computations. RST works by utilizing the granularity structure of the data. Applying the RST on the data it generates a set of significant rules. These rules are likely to be supportive to the Geoscientists around the world to know the data behavior, which will enable them to know the dependency of the petro-physical properties obtained from well log and elastic properties which can be derived from seismic attributes and to improve the accuracy of the Data.","PeriodicalId":433225,"journal":{"name":"2018 International Conference on Unconventional Modelling, Simulation and Optimization - Soft Computing and Meta Heuristics - UMSO","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130289993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}