This paper investigates the current state of handheld augmented reality (AR) gaming apps available on the App Store (iOS) and the Play Store (Android). To allow a direct comparison between games played with and without AR, only games in which the AR mode can be switched on and off were investigated. Because the main scope of this paper is the evaluation of the experience provided by AR, parts of the game experience questionnaire (GEQ) were included in the empirical study. The study showed that AR has great potential to improve immersion and flow in gameplay. This paper also identifies differences in the implementation of AR features and investigates which GEQ parameters can be positively influenced, and how.
"How does Augmented Reality Improve the Play Experience in Current Augmented Reality Enhanced Smartphone Games?" — Matthias Wölfel, Melinda C. Braun, Sandra Beuck. 2019 International Conference on Cyberworlds (CW), October 2019. doi:10.1109/CW.2019.00079
Abdullah Al Shiam, M. Islam, Toshihisa Tanaka, M. I. Molla
The major challenge in brain-computer interfaces (BCI) is to obtain reliable classification accuracy for motor imagery (MI) tasks. This paper focuses on unsupervised feature selection for electroencephalography (EEG) classification leading to BCI implementation. The multichannel EEG signal is decomposed into a number of subband signals, and features are extracted from each subband by applying a spatial filtering technique. The features are then combined into a common feature space for MI classification. This space may inevitably include irrelevant features, which increase the dimensionality and mislead the classification system. Unsupervised discriminative feature selection (UDFS) is therefore employed to select a subset of the extracted features; it effectively selects the dominant features to improve the classification accuracy of motor imagery tasks recorded in EEG signals. The classification of MI tasks is performed by a support vector machine. The performance of the proposed method is evaluated on the publicly available dataset IVa from BCI Competition III, and the experimental results show that it outperforms recently developed algorithms.
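The pipeline described above (subband decomposition, per-subband feature extraction, unsupervised selection, classification) can be sketched on synthetic data. Everything below is an illustrative stand-in, not the paper's implementation: FFT band power replaces spatial filtering, a simple variance ranking replaces UDFS, and a nearest-centroid rule replaces the SVM.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100                                          # sampling rate (Hz)
bands = [(8, 12), (12, 16), (16, 20), (20, 24)]   # example subbands

def band_power(trial, lo, hi):
    """Mean spectral power of each channel inside [lo, hi) Hz."""
    spec = np.abs(np.fft.rfft(trial, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(trial.shape[-1], 1 / fs)
    mask = (freqs >= lo) & (freqs < hi)
    return spec[:, mask].mean(axis=-1)

def extract_features(trial):
    # concatenate subband features into one common feature space
    return np.concatenate([np.log(band_power(trial, lo, hi)) for lo, hi in bands])

# synthetic 2-class data: 40 trials, 8 channels, 1 s each;
# class 1 carries an extra 10 Hz "motor imagery" rhythm
X = rng.standard_normal((40, 8, fs))
y = np.repeat([0, 1], 20)
X[y == 1, :, :] += 0.8 * np.sin(2 * np.pi * 10 * np.arange(fs) / fs)

F = np.array([extract_features(t) for t in X])

# unsupervised selection stand-in: keep the k highest-variance features
k = 8
keep = np.argsort(F.var(axis=0))[-k:]
Fs = F[:, keep]

# nearest-centroid classifier (stand-in for the paper's SVM)
c0, c1 = Fs[y == 0].mean(0), Fs[y == 1].mean(0)
pred = (np.linalg.norm(Fs - c1, axis=1) < np.linalg.norm(Fs - c0, axis=1)).astype(int)
acc = (pred == y).mean()
```

Because the 10 Hz rhythm inflates both the mean and the variance of the 8–12 Hz features, the unsupervised ranking tends to keep exactly the informative dimensions.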
"Electroencephalography Based Motor Imagery Classification Using Unsupervised Feature Selection" — Abdullah Al Shiam, M. Islam, Toshihisa Tanaka, M. I. Molla. 2019 International Conference on Cyberworlds (CW), October 2019. doi:10.1109/CW.2019.00047
Computer graphics is now widely used in movies, games, and other media, and modeling three-dimensional virtual objects is important for synthesizing realistic images. Since modeling realistic objects often requires special skills and takes a long time, many methods have been developed to help users generate models such as plants and buildings. However, little attention has been paid to modeling fish, because of the complexity of their shapes. We propose an interactive system for modeling a realistic fish shape from a single image. We also introduce a method called Direct Manipulation Blendshapes to improve the usability of our system.
"An Interactive System for Modeling Fish Shapes" — Masayuki Tamiya, Y. Dobashi. 2019 International Conference on Cyberworlds (CW), October 2019. doi:10.1109/CW.2019.00076
The last decade has witnessed an increase in online human interactions, covering all aspects of personal and professional activities. Identification of people based on their behavior rather than their physical traits is a growing industry, spanning diverse spheres such as online education, e-commerce, and cyber security. One prominent behavior is the expression of opinions, commonly as a reaction to images posted online. Visual aesthetic is a soft, behavioral biometric that refers to a person's sense of fondness for certain images. Identifying individuals by using their visual aesthetics as discriminatory features is an emerging domain of research. This paper introduces a new method for aesthetic feature dimensionality reduction using gene expression programming (GEP). The advantage of this method is that the resulting system can use a tree-based genetic approach for feature recombination. Reducing feature dimensionality improves classifier accuracy, reduces computation time, and minimizes required storage. The results obtained on a dataset of 200 Flickr users evaluating 40,000 images demonstrate 94% accuracy in identity recognition based solely on users' aesthetic preferences, outperforming the best-known method by 13.5%.
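The tree-based recombination at the heart of gene expression programming can be illustrated with a minimal sketch (hypothetical, not the paper's implementation): a fixed-length chromosome in Karva notation is decoded breadth-first into an expression tree that combines raw aesthetic features into a new composite feature.

```python
import operator

# Minimal Karva-notation decoder: the core mechanism of gene expression
# programming. The function symbols and the terminals 'a', 'b' (two raw
# features) are illustrative choices, not the paper's actual alphabet.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
ARITY = {"+": 2, "-": 2, "*": 2, "a": 0, "b": 0}

def karva_tree(chrom):
    """Decode a chromosome string breadth-first into an expression tree."""
    idx = 1
    root = {"sym": chrom[0], "kids": []}
    frontier = [root]
    while frontier:
        nxt = []
        for node in frontier:
            # each symbol takes its children from the remaining symbols, in order
            for _ in range(ARITY[node["sym"]]):
                child = {"sym": chrom[idx], "kids": []}
                idx += 1
                node["kids"].append(child)
                nxt.append(child)
        frontier = nxt
    return root

def evaluate(node, env):
    """Evaluate the tree on a mapping of terminal names to feature values."""
    if ARITY[node["sym"]] == 0:
        return env[node["sym"]]
    return OPS[node["sym"]](*(evaluate(k, env) for k in node["kids"]))

# '+*aab' decodes level by level to (a * b) + a
combined = evaluate(karva_tree("+*aab"), {"a": 2, "b": 3})
```

Genetic operators in GEP act on the linear chromosome string, while fitness is always computed on the decoded tree; that separation is what makes the recombination step simple.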
"Person Identification from Visual Aesthetics Using Gene Expression Programming" — Brandon Sieu, M. Gavrilova. 2019 International Conference on Cyberworlds (CW), October 2019. doi:10.1109/CW.2019.00053
These days, we can take pictures anytime and anywhere with mobile devices such as smartphones. After taking a picture, we often modify it with image-enhancement tools so that its appearance matches our own preferences. However, since the enhancement functions have many parameters, finding an appropriate parameter set for the desired result is not easy. Some tools can determine the parameters automatically, but they do not take the user's preferences into account. In this paper, we present a system that addresses this problem: it first estimates the user's preference using RankNet, and then optimizes the image-enhancement parameters to maximize the estimated preference. We show experimental results that demonstrate the usefulness of our system.
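The two-stage idea (learn a preference model from pairwise judgments, then optimize the enhancement parameters against it) can be sketched with a toy one-parameter "enhancement". The RankNet scorer here is reduced to a linear model on hand-made features, and the synthetic user preference and all constants are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def features(brightness):
    # toy "image features" of an enhancement parameter: value and its square
    return np.array([brightness, brightness ** 2])

# synthetic ground truth: the user secretly prefers brightness near 0.6
def true_pref(b):
    return -(b - 0.6) ** 2

# pairwise training data: (a, b, label), label 1 if a is preferred over b
pairs = []
for _ in range(200):
    a, b = rng.uniform(0, 1, 2)
    pairs.append((a, b, 1.0 if true_pref(a) > true_pref(b) else 0.0))

# RankNet-style training: P(a > b) = sigmoid(score(a) - score(b)),
# minimized with stochastic gradient descent on the cross-entropy loss
w = np.zeros(2)
lr = 0.5
for _ in range(300):
    for a, b, label in pairs:
        s = np.clip(features(a) @ w - features(b) @ w, -60, 60)
        p = 1 / (1 + np.exp(-s))
        w -= lr * (p - label) * (features(a) - features(b))

# stage 2: pick the enhancement parameter the learned scorer rates highest
grid = np.linspace(0, 1, 101)
best = grid[np.argmax([features(b) @ w for b in grid])]
```

Because the learned scorer is differentiable, the real system can use gradient-based optimization over many parameters instead of the grid search used here.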
"Automatic Image Enhancement Taking into Account User Preference" — Yuri Murata, Y. Dobashi. 2019 International Conference on Cyberworlds (CW), October 2019. doi:10.1109/CW.2019.00070
Techniques to render 3D models in the style of hand drawings are often required. In this paper, we propose an approach that generates line drawings in various styles using machine learning. We train two convolutional neural networks (CNNs): one extracts lines from the depth and normal images of a 3D object, and the other applies line thickness. A subsequent processing step interprets the thickness of the lines as an intensity that controls the properties of a line style. Using the obtained intensities, drawings with non-uniform line styles are generated. The results show the effectiveness of combining the machine learning method with this interpreter.
"Stylized Line Drawing of 3D Models using CNN" — Mitsuhiro Uchida, S. Saito. 2019 International Conference on Cyberworlds (CW), October 2019. doi:10.1109/CW.2019.00015
Electromyography (EMG) signals can be used for human movement classification. However, due to their nonlinear and time-varying properties, EMG signals are difficult to classify, and it is critical to use appropriate algorithms for feature extraction and pattern classification. In the literature, various machine learning (ML) methods have been applied to this EMG classification problem. In this paper, we extract four time-domain features from the EMG signals and use a generative graphical model, the deep belief network (DBN), to classify them. A DBN can be trained with a fast, greedy deep learning algorithm that rapidly finds a good set of weights for a deep network with many hidden layers. To evaluate the DBN model, we acquired EMG signals, extracted their time-domain features, and then used the model to classify human movements. The results on real data show the effectiveness of the proposed deep learning technique for both binary and four-class recognition of human movements from the measured 8-channel EMG signals. The proposed DBN model may find applications in the design of EMG-based user interfaces.
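The abstract does not name the four time-domain features. A common choice in the EMG literature is the Hudgins set (mean absolute value, zero crossings, slope sign changes, waveform length); under that assumption, a per-channel extractor might look like this sketch:

```python
import numpy as np

def emg_td_features(x, eps=0.01):
    """x: 1-D EMG channel. Returns [MAV, ZC, SSC, WL] (Hudgins set)."""
    mav = np.mean(np.abs(x))                             # mean absolute value
    # zero crossings, with a small threshold eps to reject noise
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(x[:-1] - x[1:]) > eps))
    d = np.diff(x)
    # slope sign changes, again thresholded against noise
    ssc = np.sum((d[:-1] * d[1:] < 0) & ((np.abs(d[:-1]) > eps) | (np.abs(d[1:]) > eps)))
    wl = np.sum(np.abs(d))                               # waveform length
    return np.array([mav, zc, ssc, wl], dtype=float)

def feature_vector(trial):
    """Stack the four features over all channels; trial: (8, n_samples)."""
    return np.concatenate([emg_td_features(ch) for ch in trial])

rng = np.random.default_rng(0)
trial = rng.standard_normal((8, 256))    # one synthetic 8-channel window
fv = feature_vector(trial)               # 8 channels x 4 features = 32 values
```

The resulting 32-dimensional vector per window is the kind of input a DBN (or any other classifier) would consume.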
"Human Movements Classification Using Multi-channel Surface EMG Signals and Deep Learning Technique" — Jianhua Zhang, C. Ling, Sunan Li. 2019 International Conference on Cyberworlds (CW), October 2019. doi:10.1109/CW.2019.00051
Yisi Liu, Fan Li, L. Tang, Zirui Lan, Jian Cui, O. Sourina, Chun-Hsien Chen
Currently, many humanoid robots have little appeal because of their simple designs and bland appearances. To provide recommendations for designers and improve humanoid robot designs, we study human perception of humanoid robot designs using electroencephalography (EEG), eye-tracking data, and questionnaires. We designed and carried out an experiment with 20 subjects, collecting EEG and eye-tracking data to study their reactions to different robot designs and their preferences among them. This study offers insights into how people react to the aesthetic designs of different humanoid robot models and into the important traits of a humanoid robot design, such as its perceived smartness and friendliness. Another point of interest is identifying the most prominent features of a robot, such as the head, facial features, and chest. The results show that the head and facial features draw the most attention, and that more attention is paid to robots that appear more appealing. Lastly, first impressions of the robots generally do not change over time, which may imply that a good humanoid robot design impresses observers at first sight.
"Detection of Humanoid Robot Design Preferences Using EEG and Eye Tracker" — Yisi Liu, Fan Li, L. Tang, Zirui Lan, Jian Cui, O. Sourina, Chun-Hsien Chen. 2019 International Conference on Cyberworlds (CW), October 2019. doi:10.1109/CW.2019.00044
Few people know about the first electronic musical instrument, the theremin, or can play it. The idea behind this instrument is groundbreaking: it is played without physical contact, in the same way that we sing, but using the hands in place of the vocal cords. In this paper, we consider how to implement the theremin on a computer using the very different physical principle of optical hand tracking, while adding the advantages of visual interfaces. The goal of this research is to eventually fulfill the inventor's dream of making the theremin a musical instrument for everyone and to prove that anyone can play music.
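The theremin's control principle can be sketched as two continuous mappings from tracked hand position to sound parameters. The exponential pitch curve (so that equal hand movements give equal musical intervals) and all constants below are illustrative assumptions, not details from the paper; a Leap-Motion-style tracker would supply the hand coordinates in metres.

```python
def pitch_hz(dist_m, f_near=2000.0, f_far=65.0, reach_m=0.6):
    """Map hand-to-antenna distance to frequency on an exponential curve:
    closer hand -> higher pitch, as on a real theremin."""
    t = min(max(dist_m / reach_m, 0.0), 1.0)      # normalise and clamp to [0, 1]
    return f_near * (f_far / f_near) ** t

def volume(height_m, low=0.0, high=0.4):
    """Map the other hand's height to a 0..1 amplitude, clamped at the ends."""
    t = (height_m - low) / (high - low)
    return min(max(t, 0.0), 1.0)
```

Both mappings are continuous, which is exactly what makes the theremin hard to play: there are no frets or keys, so a visual interface showing the current pitch can help beginners.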
"Music in the Air with Leap Motion Controller" — A. Sourin. 2019 International Conference on Cyberworlds (CW), October 2019. doi:10.1109/cw.2019.00018
Detecting a vehicle's rear lamps at nighttime is an important technique for advanced driver-assistance systems. We present a detection method employing a variant of the genetic algorithm that uses bitwise genetic operations instead of classic crossover and mutation. The detection task is thus cast as a localization problem in an evolutionary optimization framework. Specifically, the geometric parameters of a rectangle pair form a model representing the detected rear-lamp pair. The fitness function for evaluating each candidate solution is combinatorial, consisting of multiple fitness functions designed from handcrafted, observation-based rules. In addition, the solution space is narrowed by extracting red-light sources, which yields more efficient exploration. Experiments on a publicly available dataset of images captured in various traffic situations show the effectiveness of our method both qualitatively and quantitatively.
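The "bitwise operation instead of crossover and mutation" idea can be illustrated with a toy localizer: each candidate encodes a position as bit strings, and the only variation operator is an independent, probabilistic per-bit flip. The fitness below is a deliberately simplified distance score toward a fixed "lamp" position, not the paper's combinatorial, rule-based fitness, and all constants are assumptions.

```python
import random

random.seed(0)
BITS = 8                      # each coordinate is an 8-bit integer (0..255)
TARGET = (200, 60)            # hypothetical lamp position in the image

def decode(genome):
    """genome: list of 2*BITS bits -> (x, y) coordinates."""
    x = int("".join(map(str, genome[:BITS])), 2)
    y = int("".join(map(str, genome[BITS:])), 2)
    return x, y

def fitness(genome):
    # toy score: negative Manhattan distance to the target (higher is better)
    x, y = decode(genome)
    return -abs(x - TARGET[0]) - abs(y - TARGET[1])

def mutate(genome, p=0.05):
    # the single variation operator: flip each bit independently with prob. p
    return [b ^ 1 if random.random() < p else b for b in genome]

# elitist evolutionary loop: keep the best 10, refill with mutated elites
pop = [[random.randint(0, 1) for _ in range(2 * BITS)] for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(random.choice(elite)) for _ in range(20)]

best = max(pop, key=fitness)
```

In the paper's setting the genome would encode the geometric parameters of a rectangle pair and the fitness would combine the handcrafted rules; the search mechanics stay the same.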
"Vehicle Rear-Lamp Detection at Nighttime via Probabilistic Bitwise Genetic Algorithm" — Takumi Nakane, Tatsuya Takeshita, Shogo Tokai, Chao Zhang. 2019 International Conference on Cyberworlds (CW), October 2019. doi:10.1109/CW.2019.00027