Pub Date: 2020-09-01 | DOI: 10.1109/CACRE50138.2020.9230318
Xuesong Zheng, Jiafu Ou, Tao Chen, Qiang Zhang, Liangyi Yang, Junfu Huang, H. Zhao
At present, cleaning robots have become increasingly intelligent, supporting map building, path planning, self-charging, and other functions. Taking the coverage rate of an intelligent cleaning robot as the research object, real-time images are used to detect and analyze the target. In this method, the captured real-time image is converted to grayscale and segmented. The target is identified by template matching, and the motion path is measured by sub-pixel edge localization, from which the coverage of the cleaning robot is calculated. Experiments demonstrate the effectiveness of the proposed method.
Title: Research of Coverage Evaluation of Cleaning Robots based on visual perception
Published in: 2020 5th International Conference on Automation, Control and Robotics Engineering (CACRE)
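The detection pipeline named in the abstract (grayscale conversion, then template matching to locate the robot) can be illustrated with a brute-force normalized cross-correlation search. The arrays and the square marker below are synthetic stand-ins, not the paper's data:

```python
import numpy as np

def match_template(frame, template):
    """Return (row, col) of the best normalized cross-correlation match."""
    fh, fw = frame.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            p = frame[r:r + th, c:c + tw]
            p = p - p.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Synthetic grayscale frame with a square marker placed at (10, 20)
template = np.zeros((5, 5))
template[1:4, 1:4] = 1.0
frame = np.zeros((40, 40))
frame[10:15, 20:25] = template
pos = match_template(frame, template)
```

In practice a library routine (e.g. an FFT-based correlation) would replace the double loop, but the scoring is the same idea.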
Pub Date: 2020-09-01 | DOI: 10.1109/CACRE50138.2020.9229932
Ronghua Li, Zhidong Wang, Jingshan Yang, Qi Lu
With the rapid development of rail transit in China, there is great demand for intelligent manufacturing equipment. To meet the need for efficient cleaning of work-pieces in rail transit, this paper proposes automatic cleaning of railway-vehicle assembly work-pieces by an intelligent robot. Intelligent industrial robots wash and deburr the work-pieces during processing. The main parts to be cleaned are fasteners, rods, and valve bodies, and the core task is cleaning and deburring work-pieces with holes, including deep holes. Binocular vision performs coarse and fine positioning of the work-pieces with holes, and different work-piece types are stored in memory so that the robot can clean and deburr quickly. Using the hole coordinates obtained by image processing, the robot automatically optimizes its path and completes the cleaning work.
Title: Research on automatic cleaning method for hole work-piece in rail transit by intelligent robot
Published in: 2020 5th International Conference on Automation, Control and Robotics Engineering (CACRE)
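The binocular positioning step rests on the standard pinhole-stereo relation Z = fB/d. A minimal sketch; the focal length, baseline, and disparity values are illustrative assumptions, not numbers from the paper:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo geometry: depth Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A hole seen with 35 px disparity by a 700 px focal-length, 12 cm baseline rig
z = depth_from_disparity(focal_px=700.0, baseline_m=0.12, disparity_px=35.0)
```

Coarse positioning would use such a depth from a wide match window; fine positioning refines the disparity on the hole edges before handing coordinates to the robot.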
Pub Date: 2020-09-01 | DOI: 10.1109/CACRE50138.2020.9230316
Huiqing Zhang, Kemei Jin
Traditional methods for predicting water-quality parameters usually consider only the temporal characteristics of those parameters and ignore the fact that water-quality changes are multivariate and correlated. To address this, a spatiotemporal water-quality prediction method based on autoencoder (AE) dimensionality reduction and a long short-term memory (LSTM) neural network is proposed. First, since water-quality parameters have clear temporal characteristics, a time-series prediction model is established with an LSTM. Second, because water-quality changes are correlated across sites, upstream water quality also affects downstream water quality; adding all upstream-station parameters to the prediction model would introduce redundant features and reduce prediction accuracy, so an autoencoder is used to reduce the dimensionality of the relevant parameters. Finally, a data set from the Langfang Water Quality Automatic Monitoring Station is used to verify the method. Predicting total phosphorus (TP) and total nitrogen (TN) concentrations shows that the method achieves better prediction accuracy and robustness.
Title: Research on water quality prediction method based on AE-LSTM
Published in: 2020 5th International Conference on Automation, Control and Robotics Engineering (CACRE)
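The recurrent unit behind the paper's predictor can be sketched as a single numpy LSTM step; the random weights and toy dimensions below are illustrative, not the trained AE-LSTM model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; W, U, b pack the input/forget/output/candidate gates."""
    z = W @ x + U @ h + b
    H = h.shape[0]
    i = sigmoid(z[:H])          # input gate
    f = sigmoid(z[H:2 * H])     # forget gate
    o = sigmoid(z[2 * H:3 * H]) # output gate
    g = np.tanh(z[3 * H:])      # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
H, D = 4, 3                      # hidden size, input size (toy values)
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(5, D)):  # run five time steps of a sequence
    h, c = lstm_step(x, h, c, W, U, b)
```

In the paper's setting, D would be the AE-reduced feature dimension and the final h would feed a regression head predicting TP/TN.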
Accurate heading references and familiar landmarks are the key information that guides migratory birds over long-distance migrations, and they provide an important reference for the navigation of unmanned platforms. This paper focuses on integrated inertial/visual navigation with heading constraints from a magnetic compass and position constraints from place recognition. The model of the integrated navigation system is constructed based on an extended Kalman filter (EKF), with the derivation of the state and observation equations. The observability of the system states under different constraints is analyzed with the singular value decomposition (SVD) method. It is shown that, once the heading and position constraints are introduced, all system states are either completely observable or have bounded errors, which guarantees the requirements of long-endurance, long-distance, high-precision navigation.
Title: Research on inertial/visual navigation with heading-position constraints and the observability analysis
Authors: Yujie Wang, Qing-yang Chen, Peng Wang, Yafei Lu, Gao-wei Jia
Pub Date: 2020-09-01 | DOI: 10.1109/CACRE50138.2020.9229993
Published in: 2020 5th International Conference on Automation, Control and Robotics Engineering (CACRE)
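The SVD-based observability check can be illustrated on a toy linear system: stack the observability matrix and count nonzero singular values. The constant-velocity model below is an assumed example, not the paper's inertial/visual error model:

```python
import numpy as np

def observability_rank(A, H):
    """Stack [H; HA; HA^2; ...] and count nonzero singular values."""
    n = A.shape[0]
    blocks, M = [H], H
    for _ in range(n - 1):
        M = M @ A
        blocks.append(M)
    O = np.vstack(blocks)
    s = np.linalg.svd(O, compute_uv=False)
    return int(np.sum(s > 1e-10 * s[0]))

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])               # discrete position-velocity model
H_pos = np.array([[1.0, 0.0]])           # measure position only
H_vel = np.array([[0.0, 1.0]])           # measure velocity only
rank_pos = observability_rank(A, H_pos)  # velocity is inferable from positions
rank_vel = observability_rank(A, H_vel)  # absolute position is never observed
```

Adding a constraint measurement corresponds to appending rows to H, which is exactly how extra heading/position constraints raise the rank (or bound the unobservable errors).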
Pub Date: 2020-09-01 | DOI: 10.1109/CACRE50138.2020.9230301
Shou Zhou, Shifeng Zhang, Shangwei Niu, Pan Wu
The impact angle control guidance problem considering the strapdown seeker's field-of-view has become a topic of interest and has been addressed by various techniques. However, most existing solutions suffer from undesirable fluctuations in their guidance commands due to disturbances in the nonlinear system. In this paper, we design a field-of-view-constrained impact angle control guidance law using an adaptive RBF neural network based sliding mode controller. In the controller design, a logarithmic barrier Lyapunov function and a quadratic Lyapunov function force the system to reach the sliding mode in finite time, and a hyperbolic tangent function handles the field-of-view limitation. An adaptive RBF neural network approximates the system's uncertain disturbance, and the approximation serves as compensation in the guidance command to mitigate the adverse fluctuations. Finally, the performance of the proposed solution is verified by numerical simulations of two engagement scenarios.
Title: Research on Neutral Network Based Impact Angle Control Considering the Field-of-view Limitation
Published in: 2020 5th International Conference on Automation, Control and Robotics Engineering (CACRE)
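The disturbance-approximation idea can be sketched offline: fit RBF weights by least squares to an "unknown" disturbance (here sin x), standing in for the adaptive online update used in the guidance law. Centers, width, and the disturbance itself are assumptions for illustration:

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian RBF feature matrix, one column per center."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

x = np.linspace(-np.pi, np.pi, 100)
d = np.sin(x)                                  # the "unknown" disturbance
centers = np.linspace(-np.pi, np.pi, 11)
Phi = rbf_features(x, centers, width=0.5)
w, *_ = np.linalg.lstsq(Phi, d, rcond=None)    # batch fit of the RBF weights
d_hat = Phi @ w                                # network's disturbance estimate
err = float(np.max(np.abs(d_hat - d)))
```

In the controller the weights w would instead evolve through an adaptive law driven by the sliding variable, and d_hat would be subtracted from the guidance command as compensation.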
Pub Date: 2020-09-01 | DOI: 10.1109/CACRE50138.2020.9230009
Min Huang, Xingbao Yang, C. Zhu
To provide a reference for evaluating flight control systems (FCSs) mathematically, this paper proposes a mathematical simulation and evaluation method for FCSs. First, a brief description of the FCS is given. Second, the dynamic and kinematic parameters of the aircraft are introduced. Third, a mathematical simulation structure for FCS evaluation is proposed to show how the aircraft and the FCS can be simulated mathematically. Then, based on these parameters and the simulation structure, mathematical simulation models are built, including the hardware model, the dynamic and kinematic models, and the aircraft disturbance model. Finally, an evaluation method is proposed to illustrate how FCS performance can be assessed with the built simulation models.
Title: A mathematical simulation and evaluation method for the flight control systems
Published in: 2020 5th International Conference on Automation, Control and Robotics Engineering (CACRE)
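In the same spirit, a toy version of such an evaluation loop: simulate a first-order plant under proportional control and score its step response. The model, gain, and metrics are illustrative assumptions, not the paper's FCS models:

```python
def simulate_step(kp, tau=0.5, dt=0.01, t_end=5.0, ref=1.0):
    """Euler-integrate tau*y' + y = u under u = kp*(ref - y); return trace."""
    y, t, ys = 0.0, 0.0, []
    while t < t_end:
        u = kp * (ref - y)        # proportional control law
        y += dt * (u - y) / tau   # first-order plant dynamics
        ys.append(y)
        t += dt
    return ys

ys = simulate_step(kp=5.0)
steady = ys[-1]                   # steady-state value, kp/(kp+1) for this loop
overshoot = max(ys) - steady      # evaluation metric: peak above steady state
```

A real evaluation would run the full dynamic, kinematic, hardware, and disturbance models against many such metrics (rise time, settling time, disturbance rejection), but the structure, simulate then score, is the same.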
In this paper, we propose a lightweight stereo visual odometry system (SuperPointVO) based on convolutional neural network (CNN) feature extraction. Compared with traditional indirect VO systems, our system replaces hand-engineered feature extraction with a CNN-based method. Building on the SuperPoint feature extraction network, we discard the redundant descriptor information it extracts and expand the descriptors' expressive ability through non-maximum suppression (NMS) and grid sampling, making them more suitable for VO tasks. We build a complete stereo VO system, without loop closing, around the modified feature extractor. In experiments on the KITTI dataset, the system's performance is close to that of other state-of-the-art stereo SLAM systems. This shows that the accuracy and robustness of deep-learning-based feature extraction are comparable to, or even better than, traditional methods in VO tasks.
Title: SuperPointVO: A Lightweight Visual Odometry based on CNN Feature Extraction
Authors: Xiao Han, Yulin Tao, Zhuyi Li, Ruping Cen, Fangzheng Xue
Pub Date: 2020-09-01 | DOI: 10.1109/CACRE50138.2020.9230348
Published in: 2020 5th International Conference on Automation, Control and Robotics Engineering (CACRE)
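The NMS step applied to the detected keypoints can be sketched as greedy radius-based suppression by score; the points, scores, and radius below are synthetic assumptions, not SuperPoint output:

```python
import numpy as np

def nms(points, scores, radius):
    """Keep keypoints in descending score order, dropping any within radius
    of an already-kept point; returns indices into the input arrays."""
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        if all(np.hypot(*(points[i] - points[j])) > radius for j in keep):
            keep.append(i)
    return [int(i) for i in keep]

points = np.array([[10.0, 10.0],   # strong corner
                   [11.0, 10.0],   # near-duplicate of the first
                   [50.0, 50.0]])  # isolated corner
scores = np.array([0.9, 0.8, 0.7])
kept = nms(points, scores, radius=4.0)
```

Grid sampling then enforces an even spatial spread of the survivors, which helps the pose estimation stay well-conditioned across the image.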
Pub Date: 2020-09-01 | DOI: 10.1109/CACRE50138.2020.9230015
Weidong Wu, Yabo Wang, Shuning Xu, Kaibo Yan
Detecting sentiment in online reviews is a key task: effective sentiment analysis of reviews underpins applications such as user preference modeling, consumer behavior monitoring, and public opinion analysis. Previous studies relied mainly on text content and ignored effective modeling of the visual information in reviews. This paper proposes SFNN, a neural network based on semantic feature fusion. The model first uses convolutional neural networks and an attention mechanism to obtain effective emotional feature expressions from the image, then maps those expressions to the semantic feature level. The semantic features of the visual modality are combined with those of the text modality, and the emotional polarity of the review is analyzed by also incorporating the image's low-level emotional features. Fusing features at the semantic level reduces the differences between heterogeneous data. Experimental results show that our model achieves better performance than existing methods on the benchmark dataset.
Title: SFNN: Semantic Features Fusion Neural Network for Multimodal Sentiment Analysis
Published in: 2020 5th International Conference on Automation, Control and Robotics Engineering (CACRE)
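The semantic-level fusion can be sketched as projecting both modalities into a shared space and mixing them with softmax attention weights; the dimensions and random weights are illustrative, not the SFNN architecture:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
img_feat = rng.normal(size=512)          # stand-in for a CNN image feature
txt_feat = rng.normal(size=300)          # stand-in for a text feature
W_img = rng.normal(size=(128, 512)) / np.sqrt(512)  # projection to shared space
W_txt = rng.normal(size=(128, 300)) / np.sqrt(300)
h_img = W_img @ img_feat                 # image feature in semantic space
h_txt = W_txt @ txt_feat                 # text feature in semantic space
q = rng.normal(size=128)                 # attention query
alpha = softmax(np.array([q @ h_img, q @ h_txt]))   # modality weights
fused = alpha[0] * h_img + alpha[1] * h_txt          # fused semantic feature
```

Because both modalities are compared in the same 128-dimensional space, the heterogeneity of raw image and text features no longer enters the fusion directly, which is the point the abstract makes about reducing cross-modal differences.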
Pub Date: 2020-09-01 | DOI: 10.1109/CACRE50138.2020.9230240
W. Tao, Y. Miao, Biqin Xiao, Hongrui Li, Guanfang Li
The research status of underwater unmanned swarm control technology is briefly analyzed, and a cooperative control system based on center transfer is constructed. A hybrid hierarchical information fusion structure and its joint optimization model are set up, followed by simulation. The results show that the constructed decentralized underwater unmanned swarm autonomous control system can track targets quickly and stably and satisfies the application requirements.
Title: Research on autonomous control technology of underwater unmanned swarm based on center transfer
Published in: 2020 5th International Conference on Automation, Control and Robotics Engineering (CACRE)
Pub Date: 2020-09-01 | DOI: 10.1109/CACRE50138.2020.9229904
Shuailei Wang, Shaolei Zhou, Xuanbing Liu, Feiyang Dai, Yahui Qi, S. Yan
Group attitude coordinated control of multiple spacecraft is investigated with a distributed event-triggered mechanism. The communication topology of the system is an undirected graph. An auxiliary variable is designed, and event-triggered functions are constructed for every spacecraft. The control input uses neighbors' information at the triggering instants. It is proved that the multi-spacecraft system reaches group attitude coordination and that, for every spacecraft, the triggering interval has a positive lower bound, so Zeno behavior is avoided. The effectiveness of the proposed control input is verified with simulation results.
Title: Distributed Event-triggered Group Attitude Coordinated Control of Multi-spacecraft with Undirected Topology
Published in: 2020 5th International Conference on Automation, Control and Robotics Engineering (CACRE)
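The event-triggered mechanism can be sketched with scalar agents on an undirected line graph: each agent integrates its neighbors' last broadcast states and rebroadcasts only when its own state drifts past a threshold. This toy consensus loop is an assumed illustration, not the paper's attitude dynamics or triggering functions:

```python
import numpy as np

# Laplacian of the undirected line graph 1 -- 2 -- 3
L = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])

x = np.array([1.0, 0.0, -1.0])     # true agent states
xb = x.copy()                      # last broadcast states
events, dt, eps = 0, 0.05, 0.02
for _ in range(400):
    x = x - dt * (L @ xb)          # control uses broadcast info only
    trig = np.abs(x - xb) > eps    # per-agent event-triggering condition
    xb[trig] = x[trig]             # triggered agents rebroadcast
    events += int(trig.sum())
spread = float(x.max() - x.min())  # residual disagreement
```

Communication happens only at the `events` counted triggering instants rather than every step, which is the saving the event-triggered scheme buys; the threshold eps sets the trade-off between traffic and the size of the residual disagreement.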