Title: Image enhancement based on adaptive demarcation between underexposure and overexposure
Pub Date: 2017-12-01
DOI: 10.1109/PIC.2017.8359541
Yanfang Wang, Qian Huang, Jing Hu
Images taken under non-uniform illumination usually suffer from degraded details because of underexposure and overexposure. To improve the visual quality of color images, underexposed regions need to be brightened and overexposed regions dimmed accordingly. Hence, an important step is discriminating between underexposure and overexposure in color images. Traditional methods apply a single discriminating threshold throughout an image, yet illumination variation is common in real scenes. To cope with this, we propose an adaptive discriminating principle based on local and global luminance. A nonlinear modification is then applied to the image luminance to brighten underexposed regions and dim overexposed ones. Further, based on the modified luminance and the original chromatic information, a natural color image is constructed via an exponential technique. Finally, a local, image-dependent exponential technique is applied to the RGB channels to improve contrast. Experimental results show that the proposed method produces clear and vivid details for both non-uniformly illuminated images and images with normal illumination.
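A minimal NumPy sketch of the adaptive demarcation idea described above: each pixel's threshold blends local and global mean luminance, and a nonlinear (gamma-style) correction brightens or dims accordingly. The blending weight `alpha`, the window size, and the exponents are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def enhance_luminance(Y, win=15, alpha=0.5):
    """Y: luminance in [0, 1]. Returns nonlinearly modified luminance."""
    local_mean = uniform_filter(Y, size=win)       # local luminance
    global_mean = Y.mean()                         # global luminance
    thresh = alpha * local_mean + (1 - alpha) * global_mean
    under = Y < thresh                             # adaptive demarcation
    out = np.empty_like(Y)
    # Brighten underexposed pixels (gamma < 1), dim overexposed ones (gamma > 1).
    out[under] = Y[under] ** 0.6
    out[~under] = Y[~under] ** 1.4
    return np.clip(out, 0.0, 1.0)

Y = np.random.default_rng(0).random((64, 64))
print(enhance_luminance(Y).mean())
```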
Title: The application of natural language processing in compiler principle system
Pub Date: 2017-12-01
DOI: 10.1109/PIC.2017.8359551
Yujia Zhai, Lizhen Liu, Wei Song, Chao Du, Xinlei Zhao
Compiling principles is an important course for computer science majors, introducing the general principles and basic methods for constructing compilers. Because of the high demands it places on logical analysis, the course feels abstract and unintelligible to many students, making it difficult to master its main points within the limited class time. To address this, this paper proposes a method that applies natural language processing to the study and teaching of the compiling process, using a maximum-probability word segmentation algorithm during lexical and syntax analysis to offer a more effective human-computer interface. The proposed method gives students an intuitive and deeper grasp of the concepts while learning how compilation works, making it easier and quicker to understand the principles of compiling.
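For reference, maximum-probability word segmentation is a standard dynamic program: among all ways to split a string into lexicon words, pick the split maximizing the product of unigram probabilities. A self-contained sketch follows; the toy lexicon and the 8-character word cap are assumptions for illustration, and out-of-vocabulary handling is omitted.

```python
import math

def max_prob_segment(text, probs):
    n = len(text)
    best = [-math.inf] * (n + 1)   # best log-probability of text[:i]
    back = [0] * (n + 1)           # backpointer to the previous cut
    best[0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(0, i - 8), i):          # words up to 8 chars
            word = text[j:i]
            if word in probs and best[j] + math.log(probs[word]) > best[i]:
                best[i] = best[j] + math.log(probs[word])
                back[i] = j
    words, i = [], n               # recover the best split backwards
    while i > 0:
        words.append(text[back[i]:i])
        i = back[i]
    return words[::-1]

lexicon = {"com": 0.1, "pile": 0.05, "compile": 0.2, "r": 0.01, "compiler": 0.15}
print(max_prob_segment("compiler", lexicon))  # -> ['compiler']
```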
Title: Capacities-based distant-water fishery cold chain network design considering yield uncertainty and demand dynamics
Pub Date: 2017-12-01
DOI: 10.1109/PIC.2017.8359571
Xiangqun Song, Q. Ma, Wenyuan Wang, Yun Peng
In the distant-water fishing industry, production, supply, and distribution take place in a dynamic environment, with uncertain resources in the fishing grounds and uncertain demand from global markets. Moreover, the capacity of port cold storage is critical to the cold chain network, as most of the catch is handled and temporarily stored at seaports. Here, a multi-product, three-echelon, multi-period network model is applied to a capacity-based distant-water fishery cold chain design problem. The network design determines optimal cold storage facility locations and optimal flow amounts, and a two-stage stochastic programming method is used to handle the uncertainties.
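To make the two-stage structure concrete, here is a toy scenario-based stochastic program in PuLP: cold-storage capacity is a first-stage decision fixed before yield and demand are known, while shipment flows adapt per scenario in the second stage. All numbers, costs, and variable names are illustrative assumptions, not the paper's model.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

scenarios = {              # scenario -> (yield at port, market demand, probability)
    "low":  (40.0, 60.0, 0.3),
    "mid":  (70.0, 70.0, 0.5),
    "high": (90.0, 80.0, 0.2),
}
build_cost, ship_cost, shortage_cost = 5.0, 1.0, 20.0

prob = LpProblem("cold_chain", LpMinimize)
cap = LpVariable("storage_capacity", lowBound=0)              # first-stage decision
flow = {s: LpVariable(f"flow_{s}", lowBound=0) for s in scenarios}
short = {s: LpVariable(f"short_{s}", lowBound=0) for s in scenarios}

# Objective: capacity cost plus the expected second-stage cost over scenarios.
prob += build_cost * cap + lpSum(
    p * (ship_cost * flow[s] + shortage_cost * short[s])
    for s, (_, _, p) in scenarios.items())

for s, (supply, demand, _) in scenarios.items():
    prob += flow[s] <= supply              # cannot ship more than the catch
    prob += flow[s] <= cap                 # flow limited by storage capacity
    prob += flow[s] + short[s] >= demand   # unmet demand incurs a penalty

prob.solve()
print("optimal capacity:", cap.value())
```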
Title: Particle swarm optimization based task scheduling for multi-core systems under aging effect
Pub Date: 2017-12-01
DOI: 10.1109/PIC.2017.8359556
Jinbin Tu, Tianhao Yang, Yi Zhang, Jin Sun
As transistor sizes continue to shrink, a number of reliability issues have emerged in network-on-chip (NoC) design. Taking into account the performance degradation induced by the Negative Bias Temperature Instability (NBTI) aging effect, this paper proposes an aging-aware task scheduling framework for NoC-based multi-core systems. The framework relies on an NBTI aging model to evaluate the degradation of each core's operating frequency and thereby establish a task scheduling model under aging. We then develop a particle swarm optimization (PSO)-based heuristic that minimizes total task completion time, obtaining schedules that are more efficient than those of traditional algorithms that ignore NBTI aging. Experiments show that the proposed aging-aware task-scheduling algorithm achieves not only shorter makespan and higher throughput but also better reliability than non-aging-aware ones.
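A sketch of how the two pieces fit together: an assumed power-law NBTI frequency-degradation model (the power-law form is common in the literature; the constants k and n here are illustrative) supplies aged core frequencies, and a small standard PSO searches task-to-core assignments to minimize makespan. The rounding-based continuous encoding is also an assumption, not the paper's exact heuristic.

```python
import numpy as np
rng = np.random.default_rng(0)

def aged_freq(f0, years, k=0.05, n=0.16):
    # Long-term NBTI degradation is often modeled as a power law in time.
    return f0 * (1.0 - k * years ** n)

task_work = rng.uniform(1.0, 5.0, size=12)                     # work per task
core_freq = aged_freq(np.array([2.0, 2.0, 1.8, 1.6]), years=3.0)

def makespan(assign):                      # assign[t] = core index for task t
    loads = np.zeros(len(core_freq))
    for t, c in enumerate(assign):
        loads[c] += task_work[t] / core_freq[c]
    return loads.max()

# Standard PSO over a continuous encoding, rounded down to core indices.
P, D, C = 30, len(task_work), len(core_freq)
x = rng.uniform(0, C, (P, D)); v = np.zeros((P, D))
pbest = x.copy()
pcost = np.array([makespan(xi.astype(int)) for xi in x])
gbest = pbest[pcost.argmin()].copy()
for _ in range(200):
    r1, r2 = rng.random((P, D)), rng.random((P, D))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0, C - 1e-9)
    cost = np.array([makespan(xi.astype(int)) for xi in x])
    better = cost < pcost
    pbest[better], pcost[better] = x[better], cost[better]
    gbest = pbest[pcost.argmin()].copy()
print("best makespan:", pcost.min())
```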
Title: A parameterized flattening control flow based obfuscation algorithm with opaque predicate for reduplicate obfuscation
Pub Date: 2017-12-01
DOI: 10.1109/PIC.2017.8359575
Zheheng Liang, Wenlin Li, Jing Guo, Deyu Qi, Jijun Zeng
To enhance the white-box security of software, we propose a reduplicate code obfuscation algorithm to protect source code. First, we apply a parameter decomposition tree to formalize the code; then we use control flow flattening to decompose the source code into a multi-branch WHILE-SWITCH loop structure. Finally, we apply opaque predicates to the flattened code as a second round of obfuscation. The paper presents an opaque predicate code representation and different methods for inserting opaque predicates into program branches and sequence blocks. Experiments were conducted to compare the time-space cost of the source code and the obfuscated code. The results demonstrate that the proposed algorithm improves the code's resistance to attack and increases the difficulty of reverse engineering.
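An illustrative before/after of the WHILE-SWITCH flattening plus an opaque predicate (shown in Python for brevity; the technique itself targets source code generally, and this toy is not the paper's algorithm). The predicate n*n + n is even for every integer n, since it equals n(n+1), so the decoy branch is never taken, but that is not obvious statically.

```python
def gcd_plain(a, b):                       # original control flow
    while b:
        a, b = b, a % b
    return a

def gcd_flattened(a, b):
    state = 0
    while state != 3:                      # dispatcher (WHILE of WHILE-SWITCH)
        if state == 0:                     # case 0: loop test
            state = 1 if b != 0 else 3
        elif state == 1:                   # case 1: loop body
            a, b = b, a % b
            # Opaque predicate: always true, so control always returns to
            # state 0; state 2 is an unreachable decoy branch.
            state = 0 if (a * a + a) % 2 == 0 else 2
        elif state == 2:                   # dead code inserted to mislead
            a, b = b, a
            state = 1
    return a

assert gcd_flattened(48, 18) == gcd_plain(48, 18) == 6
```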
Title: A rule-based method for detecting the missing common requirements in software product line
Pub Date: 2017-12-01
DOI: 10.1109/PIC.2017.8359557
Jianzhang Zhang, Yinglin Wang, Wentao Wang, Nan Niu
Incomplete requirements have long been one of the most critical issues in requirements engineering. Common requirements in a software product line lay the foundation for implementing the core functionality of a family of software products. In this paper, we propose a framework to detect missing common requirements in the context of a software product line. The framework consists of two main parts: a set of rules that extract the common requirements and the dependency relationships among them, and a procedure for detecting omissions in an incoming set of requirements. Preliminary experiments on a group of medical applications validate the approach, and the results indicate that the proposed framework is effective.
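A minimal sketch of the detection step only: given dependency relationships among common requirements (a hand-written dict stands in for the rule-extracted ones, and the requirement names are hypothetical), flag requirements that an incoming set depends on, transitively, but does not contain.

```python
# requirement -> requirements it depends on (illustrative example)
depends_on = {
    "store_patient_record": {"authenticate_user"},
    "export_report":        {"store_patient_record", "authenticate_user"},
    "authenticate_user":    set(),
}

def missing_common_requirements(incoming):
    incoming = set(incoming)
    missing = set()
    frontier = list(incoming)
    while frontier:                        # follow dependencies transitively
        req = frontier.pop()
        for dep in depends_on.get(req, ()):
            if dep not in incoming and dep not in missing:
                missing.add(dep)
                frontier.append(dep)
    return missing

print(missing_common_requirements({"export_report"}))
# -> {'store_patient_record', 'authenticate_user'}
```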
Title: Brain storm optimization with adaptive search radius for optimization
Pub Date: 2017-12-01
DOI: 10.1109/PIC.2017.8359579
Yang Yu, Lei Wu, Hang Yu, Sheng Li, Shi Wang, Shangce Gao
The brain storm optimization algorithm is a recently proposed algorithm based on the social behavior of human beings, drawing inspiration from the brainstorming process to generate new individuals. The properties of brainstorming help preserve the diversity of the whole population and can efficiently prevent premature convergence. In this paper, an adaptive search radius method is proposed to enhance the search ability of brain storm optimization. The proposed algorithm maintains a success and failure memory to choose the most suitable of three search strategies for improving individual quality in each iteration. The strategies are selected adaptively over the course of convergence and yield better solutions than traditional brain storm optimization.
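A sketch of the adaptive-radius idea on a BSO-style population: three candidate search radii compete, and success/failure memories bias the choice toward the radius that has recently produced better individuals. The radii, memory scheme, and sphere objective are illustrative assumptions rather than the paper's exact update rules.

```python
import numpy as np
rng = np.random.default_rng(1)

def sphere(x):                                 # toy objective to minimize
    return float(np.sum(x * x))

radii = [1.0, 0.3, 0.05]                       # three search strategies
success = np.ones(3); failure = np.ones(3)     # memories (smoothed counts)
pop = rng.uniform(-5, 5, (20, 10))
fit = np.array([sphere(p) for p in pop])

for _ in range(500):
    # Choose strategy k with probability proportional to its success rate.
    rate = success / (success + failure)
    k = rng.choice(3, p=rate / rate.sum())
    i = rng.integers(len(pop))                 # individual to perturb
    cand = pop[i] + radii[k] * rng.standard_normal(pop.shape[1])
    f = sphere(cand)
    if f < fit[i]:                             # record the outcome
        pop[i], fit[i] = cand, f
        success[k] += 1
    else:
        failure[k] += 1
print("best fitness:", fit.min())
```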
Title: Join order algorithm using predefined optimal join order
Pub Date: 2017-12-01
DOI: 10.1109/PIC.2017.8359577
Areerat Trongratsameethong
This paper proposes an optimization algorithm, the Join Order Algorithm Using Predefined Optimal Join Order (JAPO), to optimize join cost. Optimal join order solutions for all possible join patterns are precomputed and stored in a file using dynamic programming with memoization (DPM). JAPO then retrieves join order solutions from the predefined optimal join orders via a hash function instead of traversing the entire search space. Experiments compare the join costs obtained by JAPO with those of DPM and a greedy algorithm, GOO. The results show that JAPO, with polynomial time complexity, obtains almost 100 percent of the optimal join order solutions; DPM obtains 100 percent of the optimal solutions but with factorial time complexity; and GOO, also polynomial, obtains sub-optimal solutions, with the number of optimal solutions it finds decreasing as the number of relations to be joined grows.
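A sketch of the two pieces just described: a memoized dynamic program (the DPM step) that computes optimal left-deep join orders for relation subsets, and a JAPO-style table whose keys hash the subset so later queries are answered by lookup rather than search. The cardinalities, flat selectivity, and sum-of-intermediate-sizes cost model are illustrative assumptions.

```python
from functools import lru_cache

card = {"A": 1000, "B": 50, "C": 200, "D": 10}      # relation cardinalities
SEL = 0.01                                           # flat join selectivity

def join_size(size, rel):
    return size * card[rel] * SEL

@lru_cache(maxsize=None)                             # memoization (DPM)
def best_order(rels: frozenset):
    """Return (cost, order) of the cheapest left-deep join of `rels`."""
    if len(rels) == 1:
        (r,) = rels
        return 0.0, (r,)
    best = (float("inf"), ())
    for last in rels:                                # try each relation last
        sub_cost, sub_order = best_order(rels - {last})
        size = card[sub_order[0]]                    # size of the sub-join
        for r in sub_order[1:]:
            size = join_size(size, r)
        cost = sub_cost + join_size(size, last)
        if cost < best[0]:
            best = (cost, sub_order + (last,))
    return best

# Precompute once, then answer later queries by O(1) hash lookup (JAPO).
table = {s: best_order(s) for s in
         [frozenset("ABD"), frozenset("ABCD"), frozenset("BC")]}
print(table[frozenset("ABCD")])
```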
Title: Parallelizing convolutional neural network for the handwriting recognition problems with different architectures
Pub Date: 2017-12-01
DOI: 10.1109/PIC.2017.8359517
Junhao Zhou, Weibin Chen, Guishen Peng, Hong Xiao, Hao Wang, Zhigang Chen
Because the convolutional neural network (CNN) requires little image preprocessing and can be trained directly on raw images, it has become popular in image classification. Beyond image classification, CNNs have been widely used in many scientific areas, especially pattern classification. In this paper, we use a CNN for handwritten numeral recognition. The basic idea of our method is to use multiple processes to train on the samples in parallel, exchange the training results, and obtain the final weight parameters. Compared with the conventional algorithm, the training time is greatly reduced and results are obtained more quickly, while with sufficient training and testing samples the accuracy remains almost the same as that of the conventional algorithm. This significantly improves the efficiency of CNNs for handwritten numeral recognition. Finally, we implemented the proposed method with parallel acceleration optimization on both Intel's Many Integrated Core (MIC) architecture and Nvidia's GPU architecture.
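The train-in-parallel-then-exchange idea in miniature: data shards are trained in separate processes, the resulting weights are averaged, and the average is rebroadcast each round. A toy logistic-regression model stands in for the CNN so the sketch stays self-contained; the data, shard count, and learning rates are assumptions.

```python
import numpy as np
from multiprocessing import Pool

def train_shard(args):
    w, X, y = args
    for _ in range(10):                        # a few local SGD epochs
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - 0.1 * X.T @ (p - y) / len(y)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((4000, 20))
    true_w = rng.standard_normal(20)
    y = (X @ true_w > 0).astype(float)
    shards = np.array_split(np.arange(4000), 4)    # one shard per process
    w = np.zeros(20)
    with Pool(4) as pool:
        for _ in range(5):                         # weight-exchange rounds
            results = pool.map(
                train_shard, [(w, X[s], y[s]) for s in shards])
            w = np.mean(results, axis=0)           # average the weights
    print("train accuracy:",
          np.mean(((X @ w) > 0).astype(float) == y))
```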
Title: Model-based DDPG for motor control
Pub Date: 2017-12-01
DOI: 10.1109/PIC.2017.8359558
Haibo Shi, Yaoru Sun, Guangyuan Li
The deep deterministic policy gradient (DDPG) is a recently developed reinforcement learning method that learns a control policy with a deterministic representation. Policy learning directly follows the gradient of the action-value function with respect to the actions. The DDPG likewise readily provides the gradient of the action-value function with respect to the state, a mechanism that allows model information to be incorporated to improve the original DDPG. In this study, a model-based DDPG was implemented as an improvement to the original: an additional deep network was embedded into the framework of the conventional DDPG, and the gradient of the model dynamics with respect to the action is also exploited, via the maximization of the action value, to learn the control policy. The model-based DDPG showed an advantage over the original DDPG in an experiment on simulated arm-reaching movement control.
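A minimal NumPy illustration of the gradient pathways involved, not the paper's implementation: the standard DDPG actor update follows dQ/da, while the model-based variant adds a term that pulls dQ/ds' at the predicted next state back through the dynamics model's action Jacobian. Closed-form quadratic Q and linear dynamics replace the deep networks so every gradient is exact; all constants and shapes are assumptions.

```python
import numpy as np
rng = np.random.default_rng(0)

A = rng.standard_normal((4, 4)) * 0.1     # assumed dynamics: s' = A s + B a
B = rng.standard_normal((4, 2)) * 0.5

def dQ_da(s, a): return -0.2 * a          # gradients of Q = -|s|^2 - 0.1|a|^2
def dQ_ds(s, a): return -2.0 * s

theta = np.zeros((2, 4))                  # linear policy a = theta @ s
for step in range(2000):
    s = rng.standard_normal(4)
    a = theta @ s
    s_next = A @ s + B @ a                # one-step model prediction
    # Model-free pathway: dQ/da at (s, a).
    g = dQ_da(s, a)
    # Model-based pathway: dQ/ds' at the predicted next state, pulled back
    # through the model's action Jacobian ds'/da = B.
    g = g + B.T @ dQ_ds(s_next, theta @ s_next)
    theta += 0.01 * np.outer(g, s)        # chain rule: da/dtheta = outer(., s)
print("policy norm:", np.linalg.norm(theta))
```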