Position encoding for heterogeneous graph neural networks
Xi Zeng, Qingyun Dai, Fangyu Lei. DOI: 10.1117/12.2639209
Many real-world networks are naturally modeled as heterogeneous graphs, which contain multiple types of nodes and edges. When a heterogeneous graph has no node attributes, or some attributes are missing, previous models perform poorly. In this paper, we argue that useful position features can be generated under the guidance of the graph's topological information, and we present a generic framework for Heterogeneous Graph Neural Networks (HGNNs), termed Position Encoding (PE). First, PE leverages existing node embedding methods to capture the implicit semantics of the graph and generate low-dimensional node embeddings. Second, for each task-related target node, PE generates corresponding sampled subgraphs, in which the node embeddings are used to calculate relative positions that are then encoded into position features; these can be used directly or as additional features. The resulting set of subgraphs with position features can easily be combined with any desired Graph Neural Network (GNN) or HGNN to learn representations of the target nodes. We evaluated our method on graph classification tasks over three commonly used heterogeneous graph datasets with two processing schemes, and the experimental results show the superiority of PE over the baselines.
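As a rough illustration of the relative-position step, the sketch below encodes each subgraph node's position relative to the target as the embedding difference plus its norm. The function name, the toy embeddings, and this concrete encoding are assumptions for illustration; the abstract does not fix them.

```python
import numpy as np

def position_features(embeddings, target, subgraph_nodes):
    """For each node in a sampled subgraph, encode its position relative
    to the target node as the embedding difference and its distance.
    (Illustrative sketch; the paper's exact encoding may differ.)"""
    t = embeddings[target]
    feats = []
    for v in subgraph_nodes:
        diff = embeddings[v] - t        # relative direction in embedding space
        dist = np.linalg.norm(diff)     # relative distance
        feats.append(np.concatenate([diff, [dist]]))
    return np.stack(feats)

# toy example: 4 nodes with 3-dimensional embeddings
emb = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 2.0, 0.0],
                [1.0, 1.0, 1.0]])
pf = position_features(emb, target=0, subgraph_nodes=[1, 2, 3])
print(pf.shape)  # (3, 4): 3 subgraph nodes, 3 diff dims + 1 distance
```

The resulting matrix can be fed to a GNN either as the node feature matrix itself or concatenated onto existing features.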
Application of improved cuckoo algorithm to optimize generalized regression neural network in software quality prediction
Luyao Liu, Peisheng Han. DOI: 10.1117/12.2639204
Software quality prediction is the main method for early prediction and control of software quality. A generalized regression neural network (GRNN) can map the nonlinear relationship between software metrics and software quality factors, but the prediction accuracy of a GRNN-based software quality model is low. To improve accuracy, we use an improved cuckoo search (CS) algorithm to optimize the smoothing factor of the GRNN. By introducing a Gaussian disturbance function, the improved algorithm addresses the insufficient population diversity and slow late-stage convergence of standard cuckoo search, and we propose a software quality prediction model based on this improved CS-optimized GRNN to better predict the number of software defects. Finally, we run simulation experiments on the public PROMISE dataset and validate the model by comparing it with a GRNN optimized by the standard CS algorithm and with a plain GRNN.
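A minimal sketch of this optimization loop, assuming a Nadaraya-Watson style GRNN and a Gaussian disturbance step standing in for the paper's improvement; all function names, bounds, and parameters are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def grnn_predict(X_tr, y_tr, X, sigma):
    """GRNN prediction: a kernel-weighted average of training targets,
    controlled entirely by the smoothing factor sigma."""
    d2 = ((X[:, None] - X_tr[None, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_tr) / w.sum(axis=1)

def loo_error(sigma, X, y):
    # leave-one-out squared error, used as the fitness of a candidate sigma
    idx = np.arange(len(y))
    pred = np.array([grnn_predict(X[idx != i], y[idx != i], X[i:i + 1], sigma)[0]
                     for i in idx])
    return ((pred - y) ** 2).mean()

def cuckoo_grnn_sigma(X, y, n_nests=6, iters=15):
    """Minimal cuckoo-search sketch for the smoothing factor, using a
    Gaussian disturbance in place of the usual Levy flight."""
    nests = rng.uniform(0.05, 2.0, n_nests)
    for _ in range(iters):
        trial = np.clip(nests + rng.normal(0.0, 0.1, n_nests), 2e-2, None)
        for k in range(n_nests):                 # greedy replacement
            if loo_error(trial[k], X, y) < loo_error(nests[k], X, y):
                nests[k] = trial[k]
        worst = np.argmax([loo_error(s, X, y) for s in nests])
        nests[worst] = rng.uniform(0.05, 2.0)    # abandon the worst nest
    return min(nests, key=lambda s: loo_error(s, X, y))

# toy regression problem standing in for metric-to-defect-count data
X = np.linspace(0, 3, 25)[:, None]
y = np.sin(X[:, 0])
sigma = cuckoo_grnn_sigma(X, y)
```

The found `sigma` should give a lower leave-one-out error than an arbitrarily large smoothing factor, which over-smooths the response.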
BP neural network method for damage recognition of steel beams in corrosive environment
Duo Wu. DOI: 10.1117/12.2640335
The steel beam is a basic component widely used in the machinery and civil engineering industries, and its applications have been extensively studied at home and abroad. In this paper, the neural network toolbox in MATLAB was used to predict and analyze damage identification based on changes in the yield strength, elongation, and tensile strength of steel beams of different thicknesses in accelerated corrosion experiments. The results show that, provided appropriate training samples are selected, the BP neural network method identifies damage in steel beams well, with an average error of about 3%, which meets the requirements for damage identification of steel beams in adverse environments.
Spore detection algorithm of wheat powdery mildew based on weight adaptive feature fusion
Hao Niu, Botao Wang. DOI: 10.1117/12.2639187
To address the small targets, numerous interferents, and inconspicuous features of wheat powdery mildew spore images, a weight-adaptive feature fusion model based on the SSD network structure is proposed to improve the accuracy of spore detection. First, a feature fusion path is constructed to recursively fuse features at multiple scales from deep to shallow, and an additional feature layer is added to improve the network's utilization of deep and shallow features. Second, a hybrid attention module is proposed that adaptively redistributes feature weights to enhance the network's ability to extract context information. Finally, the k-means algorithm is used to set the shapes of the prior boxes, which alleviates the difficulty of manually tuning this hyperparameter. The method achieves an AP of 91.17% on powdery mildew spores, a substantial improvement over the classical SSD detector.
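Prior-box clustering of this kind is commonly done with 1 - IoU as the distance (as popularized by YOLOv2-style anchor selection); the sketch below shows that idea on toy box shapes. The initialization scheme and the toy data are assumptions for illustration, not the paper's.

```python
import numpy as np

def shape_iou(box, clusters):
    """IoU between one (w, h) box and each cluster, with all boxes anchored
    at the origin, so only shape matters."""
    inter = np.minimum(box[0], clusters[:, 0]) * np.minimum(box[1], clusters[:, 1])
    return inter / (box[0] * box[1] + clusters[:, 0] * clusters[:, 1] - inter)

def kmeans_anchors(boxes, k, iters=30):
    """Cluster ground-truth (w, h) pairs into k prior-box shapes, assigning
    each box to the cluster with the highest shape IoU."""
    # deterministic spread initialization over the sorted input
    clusters = boxes[np.linspace(0, len(boxes) - 1, k).astype(int)].copy()
    for _ in range(iters):
        assign = np.array([np.argmax(shape_iou(b, clusters)) for b in boxes])
        for j in range(k):
            if np.any(assign == j):
                clusters[j] = boxes[assign == j].mean(axis=0)
    return clusters

# toy boxes: small, roughly square spores vs. larger elongated interferents
boxes = np.array([[10.0, 11.0], [9.0, 10.0], [11.0, 10.0],
                  [40.0, 18.0], [42.0, 20.0], [38.0, 19.0]])
anchors = kmeans_anchors(boxes, k=2)
```

On this toy data the two recovered anchors are the per-group mean shapes, one small and near-square, one large and elongated.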
Economic forecasting based on neural network with weight learning and local connection
Z. Y. Zheng. DOI: 10.1117/12.2639194
Machine learning, the core of artificial intelligence technology, has developed rapidly in recent years and made breakthroughs in many fields, including economic management. Unlike data in other fields, economic data is often complex and disordered. This complexity and disorder limit the use of some machine learning methods but leave neural networks ample room, since a major advantage of neural networks is that they impose no requirements on the structure of the input data. However, previous work has applied neural networks directly, without improvements specific to the structure of economic problems. In practical economic forecasting and decision-making there are many influencing factors, and their weights are not equal. Previous neural networks fed all the data into the network and produced a result without considering the different weight of each factor. We propose a new neural network with weight learning and local connections, which applies a different weight to each factor to obtain more accurate and practical results. We use the proposed method to forecast the sales volume of the Haier company, and the results show that it is significantly better than the previous method.
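The core idea, a weight learned per input factor so that influential factors dominate, can be illustrated with a deliberately tiny example (plain gradient descent on a linear model; the paper's actual network is of course larger, and the data here is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the target depends strongly on factor 0 and barely on factor 1,
# mimicking economic factors of unequal importance.
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] + 0.1 * X[:, 1]

# Per-factor weight vector, learned by gradient descent on squared error.
w = np.zeros(2)
for _ in range(500):
    pred = X @ w
    grad = 2.0 * X.T @ (pred - y) / len(y)
    w -= 0.1 * grad

print(np.round(w, 2))  # factor 0 ends up with a much larger weight
```

The learned weights recover the unequal importance of the two factors, which is exactly what a uniform-weight scheme cannot express.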
Analysis and prediction of landslide subsidence characteristics of Dangchuan based on Sentinel-1A data
Hui Zhang, Xing-hai Dang, Liqi Jia, Jianyun Zhao, Xincheng Fan, Ming Lu. DOI: 10.1117/12.2639299
To study the spatial distribution characteristics and causes of the Heifangtai landslide in Gansu Province, Sentinel-1A images from September 2017 to November 2020 were used as the data source to extract surface subsidence information in the study area with SBAS technology. The high-coherence point D1 of the landslide in Dangchuan village was selected, and its subsidence was analyzed together with irrigation, rainfall, and temperature data; a BP neural network was then used to predict the subsidence at this point. The results show that: (1) the subsiding area identified by SBAS technology is mainly spread across Xinyuan, Fangtai, Zhuwang, and Chenjia villages and around the tableland. (2) In February and March, owing to the large temperature difference, the Dangchuan landslide begins to settle as rising temperatures melt the permafrost; irrigation and rainfall increase from June, when the loess tableland starts to sink and landslides occur frequently; after October, the landslide in Dangchuan village produces a frozen stagnant-water effect, and subsidence tends to increase. (3) The BP neural network predicts that the subsidence rate at point D1 will exceed 60 mm in 2022, which is important for early identification and prevention in the area.
Robust adaptive wideband constant beamwidth digital beamforming based on spatial response variation constraint
Yao Li, Wendong Li, Xingchen Lu. DOI: 10.1117/12.2639109
To solve the problem that the main lobe of a conventional adaptive beamforming algorithm cannot correctly point toward the desired signal direction under array errors, a robust adaptive wideband constant-beamwidth digital beamforming method based on a spatial response variation (SRV) constraint is proposed. First, a constant-beamwidth beamformer based on SRV constraints is designed. Second, for the array error, the relationship between the norm of the error vector (between the real and assumed steering vectors) and the array error matrix is derived, and an inequality optimization model is established. Finally, since the resulting problem is non-convex, it is transformed into a convex program through matrix decomposition and a change of variables, and solved with a convex optimization toolbox. Simulation results show that the proposed method is more robust than several other methods.
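For background only, since the SRV-constrained wideband design above is more involved: a minimal narrowband MVDR beamformer with diagonal loading, a standard robustness baseline in this literature. Array size, loading level, and look direction are arbitrary choices for the sketch.

```python
import numpy as np

def mvdr_weights(R, a, loading=1e-2):
    """Minimum-variance distortionless-response weights with diagonal
    loading: minimize w^H R w subject to w^H a = 1, with R regularized
    toward the identity for robustness against covariance errors."""
    Rl = R + loading * np.trace(R) / len(R) * np.eye(len(R))
    u = np.linalg.solve(Rl, a)
    return u / (a.conj() @ u)

# 8-element uniform linear array with half-wavelength spacing
n = 8
theta = np.deg2rad(20.0)                            # desired look direction
a = np.exp(1j * np.pi * np.arange(n) * np.sin(theta))
R = np.eye(n)                                       # noise-only covariance
w = mvdr_weights(R, a)
print(abs(w.conj() @ a))  # distortionless: unit gain toward the target
```

The distortionless constraint is what the SRV-constrained method preserves while additionally keeping the beamwidth constant across frequency.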
Research on willingness of Internet users to provide privacy information based on ELM model
Min Wang, Zhilong You. DOI: 10.1117/12.2639182
With the development of the Internet, users' personal information has become one of the key assets of Internet platforms. Owing to privacy concerns, however, users are often reluctant to provide personal information to these platforms. The ELM model is an important model for analyzing consumer behavior. Based on the ELM model, this paper studies users' willingness to provide private information. The results show that a website's privacy collection method, privacy protection statement, and privacy protection technology are negatively correlated with users' willingness to provide private information. This research helps enterprises understand user information, analyze user behavior, and identify the key factors affecting users' willingness to disclose private information.
CNN-based automated classification of SPECT bone scan images
Zhengxing Man, Qiang Lin, Yongchun Cao. DOI: 10.1117/12.2639123
Functional medical imaging has in recent years been successfully applied to capture functional changes in pathological tissues of the body. SPECT nuclear medicine functional imaging can acquire information about areas of concern (e.g., lesions and organs) non-invasively, enabling semi-automated or automated decision-making for disease diagnosis, treatment, evaluation, and prediction. To reliably identify whether at least one hotspot or lesion is present in a whole-body SPECT image, we develop a group of CNN-based classifiers. Specifically, we first propose a preprocessing method that transforms each original SPECT file into the form required by the deep learning model, including normalization, 3-channel construction, rotation and scaling, size standardization, and size adaptation. Second, six different classifiers are constructed by fine-tuning the parameters of the standard VGG-16 model. Last, a group of real-world SPECT whole-body bone scan files is used to evaluate the developed classifiers. Experimental results show that our classifiers are workable for two-class classification of SPECT images, achieving best values of 0.7641, 0.6678, 1.000, and 0.6574 on the evaluation metrics Acc, Pre, Rec, and AUC, respectively.
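Part of the preprocessing listed above (normalization, 3-channel construction, size standardization) might be sketched as follows; the function and parameter names, the nearest-neighbour resize, and the 224-pixel target size are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np

def preprocess_spect(frame, out_size=224):
    """Sketch: min-max normalize a single-channel SPECT frame, resize it
    by nearest-neighbour sampling, and stack it to 3 channels, the input
    shape a VGG-16 style network expects."""
    f = frame.astype(np.float64)
    f = (f - f.min()) / (f.max() - f.min() + 1e-8)    # normalize to [0, 1]
    rows = np.arange(out_size) * f.shape[0] // out_size
    cols = np.arange(out_size) * f.shape[1] // out_size
    resized = f[np.ix_(rows, cols)]                    # nearest-neighbour resize
    return np.stack([resized] * 3, axis=-1)            # 1 channel -> 3 channels

# toy 3x4 "frame" standing in for a real bone-scan matrix
x = preprocess_spect(np.arange(12).reshape(3, 4))
print(x.shape)  # (224, 224, 3)
```

Rotation and scaling augmentations would be applied on top of this before fine-tuning.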
Prediction of potential credit card users of bank based on deep learning
Yue Qiu, Jianan Fang. DOI: 10.1117/12.2639171
In the post-epidemic era, with the rapid development of fintech, bank credit card marketing has been greatly affected. This paper proposes a new deep learning model, DeepAFM (Deep Attentional Factorization Machine), to predict a bank's potential credit card users and thereby provide an effective basis for precision marketing. The model uses a factorization machine and an embedding layer to decompose the parameter matrix into low-dimensional matrices; an attention mechanism is introduced to learn the weights of cross features and extract the important ones; and a fully connected deep network is introduced to mine higher-order cross features. Finally, comparison with other algorithms shows that the DeepAFM model has stronger expressive ability and mines important data more accurately.
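The attention-over-cross-features component follows the general AFM design: each pairwise element-wise product of feature embeddings receives a learned attention weight before pooling. A minimal forward pass might look like this, with all shapes, names, and the ReLU attention network being illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def afm_forward(v, w_att, h_att, p):
    """Attention-weighted pooling of pairwise feature interactions:
    build v_i * v_j for every feature pair, score each pair with a small
    attention network, softmax the scores, pool, then project to a scalar."""
    n, _ = v.shape
    pairs = np.stack([v[i] * v[j]
                      for i in range(n) for j in range(i + 1, n)])  # (n_pairs, k)
    scores = np.maximum(pairs @ w_att, 0.0) @ h_att   # attention scores
    alpha = np.exp(scores) / np.exp(scores).sum()     # softmax weights
    pooled = (alpha[:, None] * pairs).sum(axis=0)     # weighted pooling
    return pooled @ p                                 # scalar prediction

k, t = 4, 8                         # embedding and attention sizes
v = rng.normal(size=(5, k))         # embeddings of 5 active user features
y = afm_forward(v,
                rng.normal(size=(k, t)),  # attention projection
                rng.normal(size=t),       # attention output vector
                rng.normal(size=k))       # final prediction vector
```

DeepAFM, as described, adds a fully connected deep network on top of this component to capture higher-order cross features.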