The Van der Waals equation is an equation of state that generalizes the ideal gas law. It involves two characteristic curves, the binodal and spinodal curves, which are usually reconstructed through standard polynomial fitting. However, the resulting fitting models are strongly limited in several ways. In this paper, we address this issue through least-squares approximation of a set of 2D points using free-form Bézier curves. This requires performing data parameterization in addition to computing the poles of the curves, which we achieve by applying a powerful swarm intelligence method called the firefly algorithm. Our method is applied to real data of a gas, and the results show that it can reconstruct the characteristic curves with good accuracy. Comparative work shows that our approach outperforms two state-of-the-art methods on this example.
{"title":"Applying Firefly Algorithm to Data Fitting for the Van der Waals Equation of State with Bézier Curves","authors":"Almudena Campuzano, A. Iglesias, A. Gálvez","doi":"10.1109/CW.2019.00042","DOIUrl":"https://doi.org/10.1109/CW.2019.00042","url":null,"abstract":"The Van der Waals equation is an equation of state that generalizes the ideal gas law. It involves two characteristic curves, called binodal and spinodal curves. They are usually reconstructed through standard polynomial fitting. However, the resulting fitting models are strongly limited in several ways. In this paper, we address this issue through least-squares approximation of the set of 2D points by using free-form Bezier curves. This requires to perform data parameterization in addition to computing the poles of the curves. This is achieved by applying a powerful swarm intelligence method called the firefly algorithm. Our method is applied to real data of a gas. Our results show that the method can reconstruct the characteristic curves with good accuracy. Comparative work shows that our approach outperforms two state-of-the-art methods for this example.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121361114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
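As a rough illustration of the least-squares step described in the abstract, the sketch below fits the control points (poles) of a Bézier curve to 2D samples for a *given* parameterization; in the paper the parameterization itself is searched by the firefly algorithm, which is omitted here. The chord-length fallback and all function names are our own assumptions, not the paper's.

```python
import numpy as np
from math import comb

def bernstein_matrix(t, degree):
    """Design matrix B[i, j] = C(n, j) * t_i**j * (1 - t_i)**(n - j)."""
    n = degree
    return np.array([[comb(n, j) * ti**j * (1 - ti)**(n - j)
                      for j in range(n + 1)] for ti in t])

def chord_length_params(points):
    """Fallback parameterization: normalized cumulative chord length."""
    d = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(d)])
    return t / t[-1]

def fit_bezier(points, degree=3, t=None):
    """Least-squares Bézier poles for data points at parameters t."""
    if t is None:
        t = chord_length_params(points)
    B = bernstein_matrix(t, degree)
    ctrl, *_ = np.linalg.lstsq(B, points, rcond=None)
    return ctrl, t

# Sanity check: sample a known cubic Bézier and recover its poles exactly
# when the true parameterization is supplied.
true_ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
ts = np.linspace(0.0, 1.0, 50)
data = bernstein_matrix(ts, 3) @ true_ctrl
ctrl, t = fit_bezier(data, degree=3, t=ts)
err = np.max(np.abs(bernstein_matrix(t, 3) @ ctrl - data))
```

With noisy data and an unknown parameterization, fit quality depends strongly on the choice of t, which is exactly the part the paper's firefly algorithm optimizes.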
Abdullah Al Shiam, M. Islam, Toshihisa Tanaka, M. I. Molla
The major challenge in a brain-computer interface (BCI) is to obtain reliable classification accuracy for motor imagery (MI) tasks. This paper focuses on unsupervised feature selection for electroencephalography (EEG) classification leading to BCI implementation. The multichannel EEG signal is decomposed into a number of subband signals, and features are extracted from each subband by applying a spatial filtering technique. The features are combined into a common feature space for effective MI classification. This space may inevitably include some irrelevant features, increasing the dimensionality and misleading the classification system. Unsupervised discriminative feature selection (UDFS) is therefore employed to select a subset of the extracted features; it effectively selects the dominant features to improve the classification accuracy of MI tasks recorded in EEG signals. The classification of MI tasks is performed by a support vector machine. The performance of the proposed method is evaluated on the publicly available dataset IVa from BCI Competition III. The experimental results show that the method outperforms recently developed algorithms.
{"title":"Electroencephalography Based Motor Imagery Classification Using Unsupervised Feature Selection","authors":"Abdullah Al Shiam, M. Islam, Toshihisa Tanaka, M. I. Molla","doi":"10.1109/CW.2019.00047","DOIUrl":"https://doi.org/10.1109/CW.2019.00047","url":null,"abstract":"The major challenge in Brain Computer Interface (BCI) is to obtain reliable classification accuracy of motor imagery (MI) task. This paper mainly focuses on unsupervised feature selection for electroencephalography (EEG) classification leading to BCI implementation. The multichannel EEG signal is decomposed into a number of subband signals. The features are extracted from each subband by applying spatial filtering technique. The features are combined into a common feature space to represent the effective event MI classification. It may inevitably include some irrelevant features yielding the increase of dimension and mislead the classification system. The unsupervised discriminative feature selection (UDFS) is employed here to select the subset of extracted features. It effectively selects the dominant features to improve classification accuracy of motor imagery task acquired by EEG signals. The classification of MI tasks is performed by support vector machine. The performance of the proposed method is evaluated using publicly available dataset obtained from BCI Competition III (IVA). 
The experimental results show that the performance of this method is better than that of the recently developed algorithms.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134094401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
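The subband/spatial-filtering/feature pipeline in the abstract can be illustrated with a minimal common-spatial-patterns (CSP) sketch on synthetic two-class data. CSP is a standard spatial filter for MI-EEG, but the paper does not name its filter, and both UDFS and the SVM are replaced here by a nearest-mean stand-in, so everything below is an assumption for illustration only.

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=1):
    """CSP spatial filters: whiten the composite covariance, then
    diagonalize class A's covariance in the whitened space."""
    def avg_cov(trials):
        return np.mean([X @ X.T / np.trace(X @ X.T) for X in trials], axis=0)
    Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
    d, E = np.linalg.eigh(Ca + Cb)
    P = np.diag(d ** -0.5) @ E.T              # whitening transform
    mu, U = np.linalg.eigh(P @ Ca @ P.T)
    W = U.T @ P                               # rows are spatial filters
    idx = list(range(n_pairs)) + list(range(len(mu) - n_pairs, len(mu)))
    return W[idx]                             # filters from both spectrum ends

def log_var_features(trials, W):
    """Log of normalized variance of each spatially filtered signal."""
    out = []
    for X in trials:
        v = np.var(W @ X, axis=1)
        out.append(np.log(v / v.sum()))
    return np.array(out)

# Synthetic 4-channel trials: class A is strong on channel 0, class B on 1.
rng = np.random.default_rng(0)
def make_trials(boost_ch, n=30, ch=4, T=200):
    trials = []
    for _ in range(n):
        X = rng.standard_normal((ch, T))
        X[boost_ch] *= 5.0
        trials.append(X)
    return trials

A, B = make_trials(0), make_trials(1)
W = csp_filters(A, B)
fa, fb = log_var_features(A, W), log_var_features(B, W)
ma, mb = fa.mean(axis=0), fb.mean(axis=0)
pred = lambda f: 0 if np.linalg.norm(f - ma) < np.linalg.norm(f - mb) else 1
acc = (np.mean([pred(f) == 0 for f in fa]) +
       np.mean([pred(f) == 1 for f in fb])) / 2
```

In the paper's setting, log-variance features from several subbands would be concatenated into the common feature space before UDFS prunes the irrelevant ones.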
Computer graphics is now widely used in movies, games, and other media, and modeling three-dimensional virtual objects is important for synthesizing realistic images. Since modeling realistic objects often requires special skills and takes a long time, many methods have been developed to help users generate models such as plants and buildings. However, little attention has been paid to modeling fish because of the complexity of their shapes. We propose an interactive system for modeling a realistic fish shape from a single image. We also introduce a method called Direct Manipulation Blendshapes to improve the usability of our system.
{"title":"An Interactive System for Modeling Fish Shapes","authors":"Masayuki Tamiya, Y. Dobashi","doi":"10.1109/CW.2019.00076","DOIUrl":"https://doi.org/10.1109/CW.2019.00076","url":null,"abstract":"Recently, computer graphics is widely used in movies and games, etc., and modeling three-dimensional virtual objects is important for synthesizing realistic images. Since modeling realistic objects often requires special skills and takes long time, many methods have been developed to help the user generate models such as plants and buildings. However, little attention has been paid to the modeling of fish shapes because of the complexity of their shapes. We propose an interactive system for modeling a realistic fish shape from a single image. We also introduce a method called Direct Manipulation Blendshapes for improving the usability of our system.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127112180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
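A minimal sketch of the direct-manipulation idea behind a blendshape rig: the user pins one vertex to a target position and the blendshape weights are recovered in closed form by regularized least squares. The paper's actual Direct Manipulation Blendshapes formulation is not given in the abstract; the function names and the Tikhonov regularizer here are our own.

```python
import numpy as np

def direct_manipulation_weights(base, deltas, vidx, target, reg=1e-3):
    """Solve for blendshape weights that move vertex `vidx` toward `target`,
    with Tikhonov regularization to keep the weights small."""
    A = np.stack([d[vidx] for d in deltas], axis=1)   # (3, n_shapes)
    b = target - base[vidx]
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + reg * np.eye(n), A.T @ b)

def apply_blendshapes(base, deltas, w):
    """Blendshape model: v(w) = v0 + sum_k w_k * delta_k."""
    return base + sum(wi * d for wi, d in zip(w, deltas))

# Toy mesh with 4 vertices and two shapes that displace vertex 0.
base = np.zeros((4, 3))
d0 = np.zeros((4, 3)); d0[0] = [1.0, 0.0, 0.0]
d1 = np.zeros((4, 3)); d1[0] = [0.0, 1.0, 0.0]
target = np.array([0.5, 0.2, 0.0])
w = direct_manipulation_weights(base, [d0, d1], 0, target)
mesh = apply_blendshapes(base, [d0, d1], w)
```

Dragging a vertex and re-solving for weights each frame is what makes the interaction "direct": the user never touches the weight sliders themselves.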
The last decade has witnessed an increase in online human interactions, covering all aspects of personal and professional activities. Identification of people based on their behavior rather than physical traits is a growing industry, spanning diverse spheres such as online education, e-commerce, and cybersecurity. One prominent behavior is the expression of opinions, commonly as a reaction to images posted online. Visual aesthetic is a soft, behavioral biometric that refers to a person's sense of fondness for a certain image. Identifying individuals using their visual aesthetics as discriminatory features is an emerging domain of research. This paper introduces a new method for aesthetic feature dimensionality reduction using gene expression programming. The advantage of this method is that the resulting system is capable of using a tree-based genetic approach for feature recombination. Reducing feature dimensionality improves classifier accuracy, reduces computation runtime, and minimizes required storage. The results obtained on a dataset of 200 Flickr users evaluating 40,000 images demonstrate 94% accuracy of identity recognition based solely on users' aesthetic preferences. This outperforms the best-known method by 13.5%.
{"title":"Person Identification from Visual Aesthetics Using Gene Expression Programming","authors":"Brandon Sieu, M. Gavrilova","doi":"10.1109/CW.2019.00053","DOIUrl":"https://doi.org/10.1109/CW.2019.00053","url":null,"abstract":"The last decade has witnessed an increase in online human interactions, covering all aspects of personal and professional activities. Identification of people based on their behavior rather than physical traits is a growing industry, spanning diverse spheres such as online education, e-commerce and cyber security. One prominent behavior is the expression of opinions, commonly as a reaction to images posted online. Visual aesthetic is a soft, behavioral biometric that refers to a person's sense of fondness to a certain image. Identifying individuals using their visual aesthetics as discriminatory features is an emerging domain of research. This paper introduces a new method for aesthetic feature dimensionality reduction using gene expression programming. The advantage of this method is that the resulting system is capable of using a tree-based genetic approach for feature recombination. Reducing feature dimensionality improves classifier accuracy, reduces computation runtime, and minimizes required storage. The results obtained on a dataset of 200 Flickr users evaluating 40000 images demonstrates a 94% accuracy of identity recognition based solely on users' aesthetic preferences. 
This outperforms the best-known method by 13.5%.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126084260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
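For readers unfamiliar with gene expression programming, the toy below decodes a Karva-notation chromosome breadth-first into an expression tree and evaluates it on a feature vector; this decoding is the mechanism behind GEP's tree-based feature recombination. The function set and encoding are generic assumptions, not the paper's actual configuration.

```python
import operator

FUNCS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def evaluate_karva(gene, features):
    """Decode a Karva-notation gene breadth-first into an expression tree
    and evaluate it. Terminal symbols are feature indices."""
    root = {'sym': gene[0]}
    frontier, pos = [root], 1
    while frontier:
        nxt = []
        for node in frontier:
            if node['sym'] in FUNCS:          # binary function: take 2 children
                node['kids'] = [{'sym': gene[pos]}, {'sym': gene[pos + 1]}]
                pos += 2
                nxt.extend(node['kids'])
        frontier = nxt
    def ev(node):
        if node['sym'] in FUNCS:
            return FUNCS[node['sym']](ev(node['kids'][0]), ev(node['kids'][1]))
        return features[int(node['sym'])]     # terminal: look up feature value
    return ev(root)

# Gene "+*-0123" decodes to (f0 * f1) + (f2 - f3).
val = evaluate_karva('+*-0123', [2.0, 3.0, 10.0, 4.0])
```

A GEP run evolves a population of such fixed-length strings with genetic operators; each evolved expression becomes one recombined, lower-dimensional feature.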
Yisi Liu, Fan Li, L. Tang, Zirui Lan, Jian Cui, O. Sourina, Chun-Hsien Chen
Currently, many humanoid robots have little appeal due to their simple designs and bland appearances. To provide recommendations for designers and improve humanoid robot designs, we conducted a study of human perception of humanoid robot designs using electroencephalography (EEG), eye tracking, and questionnaires. We designed and carried out an experiment with 20 subjects, collecting EEG and eye-tracking data to study their reactions to different robot designs and their preferences among these designs. The study offers insights into how people react to the aesthetic designs of different humanoid robot models and into the important traits of a humanoid robot design, such as the robot's perceived smartness and friendliness. Another point of interest is which features of a robot are most prominent, such as the head, facial features, and chest. The results show that the head and facial features attract the most attention, and that more attention is paid to robots that appear more appealing. Lastly, first impressions of the robots generally do not change over time, which may imply that a good humanoid robot design impresses observers at first sight.
{"title":"Detection of Humanoid Robot Design Preferences Using EEG and Eye Tracker","authors":"Yisi Liu, Fan Li, L. Tang, Zirui Lan, Jian Cui, O. Sourina, Chun-Hsien Chen","doi":"10.1109/CW.2019.00044","DOIUrl":"https://doi.org/10.1109/CW.2019.00044","url":null,"abstract":"Currently, many modern humanoid robots have little appeal due to their simple designs and bland appearances. To provide recommendations for designers and improve the designs of humanoid robots, a study of human's perception on humanoid robot designs is conducted using Electroencephalogram (EEG), eye tracking information and questionnaires. We proposed and carried out an experiment with 20 subjects to collect the EEG and eye tracking data to study their reaction to different robot designs and the corresponding preference towards these designs. This study can possibly give us some insights on how people react to the aesthetic designs of different humanoid robot models and the important traits in a humanoid robot design, such as the perceived smartness and friendliness of the robots. Another point of interest is to investigate the most prominent feature of the robot, such as the head, facial features and the chest. The result shows that the head and facial features are the focus. It is also discovered that more attention is paid to the robots that appear to be more appealing. 
Lastly, it is affirmed that the first impressions of the robots generally do not change over time, which may imply that a good humanoid robot design impress the observers at first sight.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114890485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Electromyography (EMG) signals can be used for human movement classification. However, due to their nonlinear and time-varying properties, EMG signals are difficult to classify, and it is critical to use appropriate algorithms for feature extraction and pattern classification. In the literature, various machine learning (ML) methods have been applied to this EMG classification problem. In this paper, we extract four time-domain features of the EMG signals and use a generative graphical model, the deep belief network (DBN), to classify them. A DBN is trained by a fast, greedy deep learning algorithm that can rapidly find a good set of weights for a deep network with many hidden layers. To evaluate the DBN model, we acquired EMG signals, extracted their time-domain features, and then used the DBN to classify human movements. Real data analysis results show the effectiveness of the proposed deep learning technique for both binary and four-class recognition of human movements from measured 8-channel EMG signals. The proposed DBN model may find applications in the design of EMG-based user interfaces.
{"title":"Human Movements Classification Using Multi-channel Surface EMG Signals and Deep Learning Technique","authors":"Jianhua Zhang, C. Ling, Sunan Li","doi":"10.1109/CW.2019.00051","DOIUrl":"https://doi.org/10.1109/CW.2019.00051","url":null,"abstract":"Electromyography (EMG) signals can be used for human movements classification. Nonetheless, due to their nonlinear and time-varying properties, it is difficult to classify the EMG signals and it is critical to use appropriate algorithms for EMG feature extraction and pattern classification. In literature various machine learning (ML) methods have been applied to the EMG signal classification problem in question. In this paper, we extracted four time-domain features of the EMG signals and use a generative graphical model, Deep Belief Network (DBN), to classify the EMG signals. A DBN is a fast, greedy deep learning algorithm that can rapidly find a set of optimal weights of a deep network with many hidden layers. To evaluate the DBN model, we acquired EMG signals, extracted their time-domain features, and then utilized the DBN model to classify human movements. The real data analysis results are presented to show the effectiveness of the proposed deep learning technique for both binary and 4-class recognition of human movements using the measured 8-channel EMG signals. 
The proposed DBN model may find applications in design of EMG-based user interfaces.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125494127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
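The abstract does not name its four time-domain features, so the sketch below computes the classic MAV / waveform-length / zero-crossing / slope-sign-change set as a plausible stand-in; treat the choice as our assumption.

```python
import numpy as np

def emg_time_features(x, eps=0.0):
    """Four common time-domain EMG features for one channel."""
    mav = np.mean(np.abs(x))                 # mean absolute value
    wl = np.sum(np.abs(np.diff(x)))          # waveform length
    # Zero crossings: sign change with amplitude jump above a noise threshold.
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(x[:-1] - x[1:]) > eps))
    d = np.diff(x)
    ssc = np.sum(d[:-1] * d[1:] < 0)         # slope sign changes
    return np.array([mav, wl, zc, ssc])

# Alternating test signal: 4 zero crossings, 3 slope sign changes.
x = np.array([0.5, -0.5, 0.5, -0.5, 0.5])
feats = emg_time_features(x)
```

For the 8-channel setting in the paper, these features would be computed per channel and concatenated into a 32-dimensional vector before being fed to the DBN.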
Techniques for rendering 3D models in the style of hand drawings are often required. In this paper, we propose an approach that generates line drawings in various styles using machine learning. We train two convolutional neural networks (CNNs): one extracts lines from the depth and normal images of a 3D object, and the other applies line thickness. A subsequent processing step interprets the line thickness as an intensity that controls the properties of a line style. Using the obtained intensities, line drawings with non-uniform styles are generated. The results show the effectiveness of combining the machine learning method and the interpreter.
{"title":"Stylized Line Drawing of 3D Models using CNN","authors":"Mitsuhiro Uchida, S. Saito","doi":"10.1109/CW.2019.00015","DOIUrl":"https://doi.org/10.1109/CW.2019.00015","url":null,"abstract":"Techniques to render 3D models like hand-drawings are often required. In this paper, we propose an approach that generates line-drawing with various styles by machine learning. We train two Convolutional neural networks (CNNs), of which one is a line extractor from the depth and normal images of a 3D object, and the other is a line thickness applicator. The following process to CNNs interprets the thickness of the lines as intensity to control properties of a line style. Using the obtained intensities, non-uniform line styled drawings are generated. The results show the efficiency of combining the machine learning method and the interpreter.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125388611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hazard maps are currently developed using computationally expensive techniques and must be revised after a major disaster. We therefore developed a low-cost tsunami evacuation simulation system using a game engine and open data, and conducted a simulation evaluation on a target area in Kamakura City, Japan. To clarify current issues with the location of evacuation sites, we developed agents that perform evacuation actions, autonomously searching for evacuation destinations at specified speeds. The simulation provides three walking speeds, three disaster conditions, and two evacuation behaviors, and the number and ratio of agents can be changed freely. The simulation results revealed locations with high concentrations of tsunami victims, even in inland areas.
{"title":"Tsunami Evacuation Simulation System for Disaster Prevention Plan","authors":"Yasuo Kawai, Yurie Kaizu","doi":"10.1109/CW.2019.00067","DOIUrl":"https://doi.org/10.1109/CW.2019.00067","url":null,"abstract":"Hazard maps are currently developed using computationally expensive techniques and must be revised after a major disaster. Therefore, we developed a low-cost tsunami evacuation simulation system using a game engine and open data. A simulation evaluation was conducted using the target area of Kamakura City, Japan. We also developed an agent that performs evacuation actions and autonomously searches for evacuation destinations and evacuation behaviors at specified speeds in order to clarify current issues with the location of evacuation sites. The agent prepared three walking speeds, three disaster conditions, and two evacuation behaviors and could freely change the number and ratio of agents. The simulation results revealed locations with high concentrations of tsunami victims, even in inland areas.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130934185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
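The agents' autonomous search for the nearest evacuation destination can be sketched as a breadth-first search over a walkability grid; a start cell from which no site is reachable marks a potential victim concentration. The grid encoding is our own simplification of the paper's game-engine scene.

```python
from collections import deque

def nearest_evacuation_path(grid, start):
    """BFS to the nearest evacuation site 'E'. '.' walkable, '#' blocked.
    Returns the shortest cell path, or None if no site is reachable."""
    rows, cols = len(grid), len(grid[0])
    q = deque([(start, [start])])
    seen = {start}
    while q:
        (r, c), path = q.popleft()
        if grid[r][c] == 'E':
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in seen and grid[nr][nc] != '#'):
                seen.add((nr, nc))
                q.append(((nr, nc), path + [(nr, nc)]))
    return None   # unreachable: a candidate victim-concentration location

grid = ["..#E",
        ".#..",
        "...."]
path = nearest_evacuation_path(grid, (2, 0))
steps = len(path) - 1
```

Evacuation time then follows from the path length, e.g. `steps * cell_size / walking_speed` for each of the three configured walking speeds.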
Nowadays, we can take pictures anytime and anywhere with mobile devices such as smartphones. After taking a picture, we often modify it using image enhancement tools so that its appearance matches our own preferences. However, since the enhancement functions have many parameters, it is not easy to find a parameter set that produces the desired result. Some tools can determine the parameters automatically, but they do not take the user's preferences into account. In this paper, we present a system that addresses this problem. Our system first estimates the user's preference using RankNet; the image enhancement parameters are then optimized to maximize the estimated preference. Experimental results demonstrate the usefulness of our system.
{"title":"Automatic Image Enhancement Taking into Account User Preference","authors":"Yuri Murata, Y. Dobashi","doi":"10.1109/CW.2019.00070","DOIUrl":"https://doi.org/10.1109/CW.2019.00070","url":null,"abstract":"In these days, we can take many pictures everyday and everywhere with mobile devices such as smartphones. After taking a picture, we often modify it by using some image enhancement tools so that the appearance of the picture becomes favorable to his/her own preference. However, since there are many parameters in the enhancement functions, it is not an easy task to find an appropriate parameter set to obtain the desired result. Some tools have a function that automatically determine the parameters but they do not take into account the user's preference. In this paper, we present a system to address this problem. Our system first estimates the user's preference by using RankNet. Next, the image enhancement parameters are optimized to maximize the estimated preference. We show some experimental results to demonstrate the usefulness of our system.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116534723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
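A minimal sketch of the optimize-for-preference loop: a hand-written scalar scorer stands in for the trained RankNet, and a grid search over brightness/contrast stands in for whatever optimizer the paper uses. Both the scorer and the two-parameter enhancement model are assumptions for illustration.

```python
import numpy as np

def enhance(img, brightness, contrast):
    """Simple two-parameter enhancement on images with values in [0, 1]."""
    return np.clip(contrast * (img - 0.5) + 0.5 + brightness, 0.0, 1.0)

def preference_score(img):
    """Stand-in for a learned RankNet scorer: prefers a mid-gray mean
    and moderate contrast (purely illustrative)."""
    return -(img.mean() - 0.5) ** 2 - (img.std() - 0.2) ** 2

def optimize_params(img):
    """Grid search for the parameter pair maximizing the preference score."""
    best = None
    for b in np.linspace(-0.5, 0.5, 21):
        for c in np.linspace(0.5, 2.0, 16):
            s = preference_score(enhance(img, b, c))
            if best is None or s > best[0]:
                best = (s, b, c)
    return best

img = np.full((8, 8), 0.3)        # flat, slightly dark image
score, b, c = optimize_params(img)
```

In the paper's system the scorer is trained from the user's pairwise rankings, so the same loop would yield different parameters for different users.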
The two distinctive visual experiences of a binocular display, with and without stereoscopic glasses, have recently been utilized for visual sharing between colorblind and normal-vision audiences. However, all existing methods work only for still images and lack the temporal consistency needed for video. In this paper, we propose the first synthesis method for colorblind-sharable videos that maintains temporal consistency for both the colorblind and normal-vision visual experiences while retaining all other characteristics crucial for visual sharing with the colorblind. We formulate this challenging multi-constraint problem as a global optimization, minimizing an objective function consisting of a temporal term, a color preservation term, a color distinguishability term, and a binocular fusibility term. Qualitative and quantitative experiments evaluate the effectiveness of the proposed method compared to existing methods.
{"title":"Colorblind-Shareable Videos","authors":"Xinghong Hu, Xueting Liu, Xiangyu Mao, T. Wong","doi":"10.1109/CW.2019.00065","DOIUrl":"https://doi.org/10.1109/CW.2019.00065","url":null,"abstract":"The two distinctive visual experiences of binocular display, with and without stereoscopic glasses, have been recently utilized for visual sharing between the colorblind and the normal-vision audiences. However, all existing methods only work for still images, and lack of temporal consistency for video application. In this paper, we propose the first synthesis method for colorblind-sharable videos that possess the temporal consistency for both visual experiences of colorblind and normal-vision, and retains all other crucial characteristics for visual sharing with colorblind. We formulate this challenging multi-constraint problem as a global optimization and minimize an objective function consisting of temporal term, color preservation term, color distinguishability term, and binocular fusibility term. Qualitative and quantitative experiments are conducted to evaluate the effectiveness of the proposed method comparing to existing methods.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127851583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
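The role of the temporal term can be illustrated on a toy problem: penalizing frame-to-frame changes of a scalar per-frame recolor shift alongside a fidelity term gives a quadratic objective whose zero-gradient condition is a tridiagonal linear system. The paper's objective has four terms over colors; this sketch keeps only two, on scalars, and the weights are ours.

```python
import numpy as np

def smooth_shifts(targets, w_temporal=4.0, w_fidelity=1.0):
    """Minimize  w_f * sum_t (x_t - p_t)^2 + w_t * sum_t (x_t - x_{t-1})^2
    over per-frame shifts x_t by solving the zero-gradient linear system."""
    n = len(targets)
    A = np.zeros((n, n))
    b = w_fidelity * np.asarray(targets, float)
    for t in range(n):
        A[t, t] = w_fidelity
        if t > 0:                          # coupling to the previous frame
            A[t, t] += w_temporal
            A[t, t - 1] -= w_temporal
        if t < n - 1:                      # coupling to the next frame
            A[t, t] += w_temporal
            A[t, t + 1] -= w_temporal
    return np.linalg.solve(A, b)

targets = [0.0, 1.0, 0.0, 1.0, 0.0]        # flickering per-frame estimates
x = smooth_shifts(targets)
```

The solution damps the flicker while preserving the mean shift, which is the qualitative behavior the video method needs and per-image methods lack.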