Pub Date: 2019-02-01 | DOI: 10.1109/IWW-BCI.2019.8737316
Brain-to-brain interface increases efficiency of human-human interaction
V. Maksimenko, A. Hramov, A. Runnova, A. Pisarchik
We propose a brain-brain interface (BBI) to enhance human-human interaction during collective tasks. The efficiency of the proposed interface is estimated in experimental sessions in which participants perform a prolonged task of classifying ambiguous visual stimuli with different degrees of ambiguity. The BBI increases the mean working performance of a group of operators through optimal real-time redistribution of the cognitive load among participants, so that the more difficult task is always given to the member exhibiting the highest cognitive performance. We show that human-human interaction is more efficient in the presence of a coupling delay determined by the brain rhythms of the participants.
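The load-redistribution rule described in the abstract can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation; the function name, the (stimulus, ambiguity) representation, and the performance scores are all assumptions.

```python
# Hypothetical sketch: at each step, the most ambiguous stimulus is routed
# to the operator whose current cognitive-performance estimate is highest.

def redistribute(stimuli, performance):
    """Pair stimuli with operators: hardest stimulus -> best operator.

    stimuli: list of (stimulus_id, ambiguity) tuples
    performance: dict operator_id -> current performance estimate
    Returns dict stimulus_id -> operator_id.
    """
    by_difficulty = sorted(stimuli, key=lambda s: s[1], reverse=True)
    by_performance = sorted(performance, key=performance.get, reverse=True)
    return {sid: op for (sid, _), op in zip(by_difficulty, by_performance)}

assignment = redistribute(
    [("s1", 0.9), ("s2", 0.2), ("s3", 0.5)],
    {"opA": 0.6, "opB": 0.8, "opC": 0.4},
)
print(assignment)  # hardest stimulus "s1" goes to best performer "opB"
```

In a real-time BBI the `performance` estimates would be updated continuously from each participant's EEG; here they are static toy values.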
Pub Date: 2019-02-01 | DOI: 10.1109/IWW-BCI.2019.8737342
Quantification of Motion Artifacts in fNIRS Data by Monitoring Sensor Attachment
Jinwoo Park, Sunghee Dong, Yuseong Hong, Jichai Jeong
Among today's brain imaging methodologies, functional near-infrared spectroscopy (fNIRS) is one of the most promising because its simple architecture affords great versatility. fNIRS opens unprecedented ways of studying the brain because it can be designed to be compact and portable. However, its signals are corrupted by subject motion, known as motion artifacts, which limits fNIRS performance. In this paper, we identify and quantify the correlation between motion artifacts and sensor displacement by implementing an fNIRS probe that incorporates a pressure sensor.
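The kind of correlation analysis the abstract describes can be sketched with a Pearson coefficient between a displacement trace and an artifact-amplitude trace. The traces below are made up for illustration; the paper's actual pressure and optical signals are not reproduced here.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy traces: artifact amplitude rises with sensor displacement.
displacement = [0.0, 0.1, 0.4, 0.9, 0.3, 0.05]
artifact     = [0.0, 0.2, 0.7, 1.8, 0.5, 0.10]
r = pearson(displacement, artifact)
print(round(r, 3))  # close to 1: strongly correlated
```

A high `r` on real data would support the paper's claim that pressure-sensed displacement predicts artifact severity.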
Pub Date: 2019-02-01 | DOI: 10.1109/IWW-BCI.2019.8737252
High engagement in BCI action observation game by relevant character's movement
Hyunmi Lim, J. Ku
In this study, we compared the engagement elicited by two BCI action observation (AO) games in which the character's movement was either relevant or irrelevant to the observed action. The relevant game elicited greater engagement than the irrelevant one. This result suggests that engagement in a BCI rehabilitation program can be affected by the relevance between the game content and the action video, and that exploiting this relevance could be a synergistic approach to recovery.
Pub Date: 2019-02-01 | DOI: 10.1109/IWW-BCI.2019.8737331
A SLAM Integrated Hybrid Brain-Computer Interface for Accurate and Concise Control
Junyong Park, Jin Woo Choi, Sungho Jo
In this paper we present a hybrid brain-computer interface (BCI) system that leverages simultaneous localization and mapping (SLAM) for convenient control of a robot. Because of the low accuracy of classifying multi-class neural signals, brain signals alone have been considered inadequate for the precise control of robotic systems. To overcome these limitations, we introduce a hybrid system in which BCI control of a robot is aided by SLAM. Subjects used electroencephalography (EEG) and electrooculography (EOG) to remotely control a turtle robot running SLAM in a maze environment. With the supplementary information about the surroundings provided by SLAM, the robot could compute candidate paths and rotate at precise angles while subjects gave only high-level commands. Subjects successfully navigated the robot to the destination, demonstrating the potential of combining SLAM with BCIs.
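The division of labor described above — coarse BCI commands resolved into precise motions via SLAM-derived geometry — can be sketched as below. The command set, the candidate paths, and the pose representation are all hypothetical, not the authors' code.

```python
import math

# Candidate paths the robot might derive from its SLAM occupancy map,
# keyed by the coarse direction they start in (hypothetical data).
candidate_paths = {
    "left":    [(0, 0), (-1, 0), (-1, 1)],
    "right":   [(0, 0), (1, 0), (1, 1)],
    "forward": [(0, 0), (0, 1), (0, 2)],
}

def resolve(command):
    """Map a high-level BCI command to a concrete waypoint list."""
    if command not in candidate_paths:
        raise ValueError(f"unknown command: {command}")
    return candidate_paths[command]

def heading_to(waypoint, pose=(0.0, 0.0, 0.0)):
    """Angle (deg) the robot must rotate to face the next waypoint.

    pose = (x, y, current_heading_rad). The robot, not the user,
    computes this precise rotation.
    """
    x, y, th = pose
    target = math.atan2(waypoint[1] - y, waypoint[0] - x)
    return math.degrees(target - th)

path = resolve("left")
print(path, heading_to(path[1]))  # robot rotates precisely, user said only "left"
```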
Pub Date: 2019-02-01 | DOI: 10.1109/IWW-BCI.2019.8737253
Development of Brain Computer Interface based Action Observation Program with Functional Electrical Stimulation device (FES)
J. Son, J. Ku
Patients with severe paralysis have difficulty performing rehabilitation training efficiently. In this paper, we developed a BCI-based action observation (AO) game program combined with functional electrical stimulation (FES). The program moves the patient's actual arm via FES while the patient watches video clips of the arm's action. We conducted a usability survey after experiments on 12 subjects. The subjects judged the system that delivered FES while they watched the exercise video to be more appropriate for actual exercise rehabilitation than FES paired with a checkerboard stimulus or no FES at all. This approach can help patients perform rehabilitation more efficiently.
Pub Date: 2019-02-01 | DOI: 10.1109/IWW-BCI.2019.8737320
P300-based deception detection of mock network fraud with modified genetic algorithm and combined classification
Xiaochen Liu, Ji-zhong Shen, Wufeng Zhao
To detect network fraud, a three-stimulus paradigm was used in a mock-crime P300-based concealed information test. A P300-based deception detection method, built on a modified genetic algorithm and a confidence-coefficient-based combined classifier, was created for mock network fraud detection. After multi-domain integrated signal preprocessing and feature extraction, a multi-population genetic algorithm based on a modified logistic equation was adopted for feature selection to obtain an optimal feature subset. A confidence coefficient was then proposed to determine the classification difficulty level of each sample, and a combined classifier based on this coefficient was used for classification. Compared with its component classifiers and other individual classifiers, the combined classifier requires 34% less computing time, and its mean classification accuracy is 0.2 to 2.23 percentage points higher across twelve subjects under leave-one-out cross-validation. Experimental results confirm that the proposed method is effective at detecting deception in the network fraud simulation.
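The confidence-coefficient routing idea — decide easy samples cheaply, reserve the full ensemble for hard ones — can be sketched as follows. The scoring functions, the 0.5 threshold, and the voting rule are illustrative assumptions, not the paper's exact classifier.

```python
# Hypothetical sketch: a base classifier's score magnitude serves as the
# confidence coefficient; only low-confidence (hard) samples are passed
# to the full ensemble, which is what saves computing time.

def combined_predict(sample, base, ensemble, threshold=0.5):
    """base/ensemble members are functions returning a score in [-1, 1];
    the sign gives the class, the magnitude the confidence."""
    score = base(sample)
    if abs(score) >= threshold:          # easy sample: decide directly
        return 1 if score > 0 else -1
    votes = sum(1 if clf(sample) > 0 else -1 for clf in ensemble)
    return 1 if votes > 0 else -1        # hard sample: ensemble vote

# Toy classifiers: thresholds on a single scalar feature.
base = lambda x: max(-1.0, min(1.0, x - 0.5))
ensemble = [lambda x: x - 0.4, lambda x: x - 0.6, lambda x: x - 0.45]

print(combined_predict(1.8, base, ensemble))   # confident: base alone decides
print(combined_predict(0.55, base, ensemble))  # near boundary: ensemble votes
```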
Pub Date: 2019-02-01 | DOI: 10.1109/IWW-BCI.2019.8737343
Classification of Working Memory Performance from EEG with Deep Artificial Neural Networks
Youngchul Kwak, Woo‑Jin Song, Seong-Eun Kim
Individuals differ in working memory performance, and several studies have investigated the relationship between working memory performance and electroencephalography (EEG) band power. In this paper, we study EEG features for classifying a low-performance group and a high-performance group, and find that the alpha-to-beta power ratio is more separable than the absolute power of either band. We test a deep artificial neural network (ANN) that uses the power ratio feature to classify the two groups. Experimental results on the working memory tasks show that some subjects have quite low accuracies (<20%), which drags the average classification accuracy down to 61%; nevertheless, the results indicate the feasibility of estimating working memory performance from EEG data.
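The alpha-to-beta power ratio feature can be computed as sketched below. The plain-FFT band-power estimate, the band edges (8-13 Hz alpha, 13-30 Hz beta), and the synthetic trace are assumptions for illustration; the paper's exact spectral estimator is not specified in the abstract.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` in the [lo, hi] Hz band (plain FFT)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

# Synthetic 2 s EEG-like trace: strong 10 Hz alpha, weak 20 Hz beta.
fs = 250
t = np.arange(0, 2, 1.0 / fs)
eeg = 3.0 * np.sin(2 * np.pi * 10 * t) + 1.0 * np.sin(2 * np.pi * 20 * t)

alpha = band_power(eeg, fs, 8, 13)
beta = band_power(eeg, fs, 13, 30)
ratio = alpha / beta   # the feature the authors found most separable
print(ratio > 1.0)     # alpha dominates in this toy trace
```

The scalar `ratio` (per channel, per trial) would then be fed to the deep ANN instead of the two absolute band powers.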
Pub Date: 2019-02-01 | DOI: 10.1109/IWW-BCI.2019.8737327
Mind Controlled Drone: An Innovative Multiclass SSVEP based Brain Computer Interface
Andrei Chiuzbaian, J. Jakobsen, S. Puthusserypady
A crucial ability lost in the context of a neurodegenerative disease is the possibility to freely explore and interact with the world around us. The work presented in this paper focuses on developing a brain-controlled Assistive Device (AD) that helps individuals explore their surroundings with a computer and their thoughts. Using a noninvasive Steady-State Visual Evoked Potential (SSVEP)-based Brain Computer Interface (BCI) system, users can control a flying robot (also known as a UAV or drone) in 3D physical space. Through a video stream from a camera mounted on the drone, users experience a degree of freedom while controlling the drone in 3D. The proposed system uses a consumer-oriented headset, the Emotiv EPOC, to record electroencephalogram (EEG) data. The system was tested on ten able-bodied subjects, where four distinct SSVEPs (5.3 Hz, 7 Hz, 9.4 Hz, and 13.5 Hz) were detected and used as control signals for actuating the drone. A highly customizable visual interface was developed to elicit each SSVEP. The recorded data were filtered with an 8th-order Butterworth bandpass filter, and fast Fourier transform (FFT) spectral analysis was applied in order to detect and classify each SSVEP. The proposed BCI system achieved an average Information Transfer Rate (ITR) of 10 bits/min and a Positive Predictive Value (PPV) of 92.5%. The final tests demonstrated that the proposed system can easily control a drone in 3D space.
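The FFT-based detection step can be sketched as below: pick whichever of the four stimulation frequencies carries the most spectral energy. This is a minimal stand-in for the paper's Butterworth-plus-FFT pipeline (the bandpass prefilter is omitted), and the sampling rate and synthetic trial are assumptions.

```python
import numpy as np

TARGETS = [5.3, 7.0, 9.4, 13.5]   # stimulation frequencies from the paper

def classify_ssvep(eeg, fs, targets=TARGETS):
    """Return the target frequency with the largest FFT magnitude."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    mags = np.abs(np.fft.rfft(eeg))
    # Magnitude at the FFT bin nearest each candidate frequency.
    powers = [mags[np.argmin(np.abs(freqs - f))] for f in targets]
    return targets[int(np.argmax(powers))]

# Synthetic trial: a 7 Hz SSVEP buried in broadband noise.
fs, dur = 128, 4.0
t = np.arange(0, dur, 1.0 / fs)
rng = np.random.default_rng(0)
trial = np.sin(2 * np.pi * 7.0 * t) + 0.5 * rng.standard_normal(t.size)

print(classify_ssvep(trial, fs))  # 7.0
```

The detected frequency would then be mapped to one of the four drone commands.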
Pub Date: 2019-02-01 | DOI: 10.1109/IWW-BCI.2019.8737330
Prediction of item familiarity based on ERPs
Tanja Krumpe, W. Rosenstiel, M. Spüler
A simple recognition task was used to investigate whether the item familiarity of pictures can be predicted from single-trial ERPs recorded during item presentation, to explore the possibility of using this property in a BCI application. Two experimental parts with identical learning phases but different ratios of old and new stimuli in a forced-choice memory recognition test were performed. We were able to predict item familiarity with accuracies above 70% from the ERPs elicited during item presentation in both parts of the experiment. In some cases, the classification accuracy even exceeded the behavioral accuracy of the subjects. Using this property in a BCI application, for example in an education-oriented scenario, appears feasible.
Pub Date: 2019-02-01 | DOI: 10.1109/IWW-BCI.2019.8737344
Novel Spatiospectral Features of ERPs Enhances Brain-Computer Interfaces
B. Abibullaev, Yerzhan Orazayev, A. Zollanvari
Constructing accurate predictive models for the detection of event-related potentials (ERPs) is a crucial step toward robust Brain-Computer Interface (BCI) systems. The majority of previous studies have used spatiotemporal features of ERPs for classification. Recently, we showed that spatiospectral features of ERP signals also carry significant discriminatory information for predicting users' mental intent. In this study, we compare the discriminatory effect of spatiospectral and spatiotemporal features of electroencephalographic signals. Spectral features are extracted by modeling ERP signals as a sum of sinusoids with unknown amplitudes, frequencies, and phases; temporal features are the magnitudes of the ERP waveforms across time. Logistic Regression with an L2 ridge penalty (LRR) is used as the classification rule; we chose this classifier because we recently showed it can achieve high performance with spatiospectral features. We observe that directly using temporal features generally yields even higher classification performance than the extracted spectral features.
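One common way to extract sinusoid amplitudes at known frequencies is linear least squares on a cosine/sine basis, sketched below. This is an illustrative assumption, not necessarily the authors' estimator (their model has unknown frequencies as well); the toy "ERP" and frequency grid are made up.

```python
import numpy as np

def sinusoid_features(x, fs, freqs):
    """Least-squares amplitudes of sinusoids at known frequencies.

    Models x(t) ~ sum_k a_k cos(2*pi*f_k*t) + b_k sin(2*pi*f_k*t)
    and returns sqrt(a_k^2 + b_k^2) per frequency, turning a waveform
    into a small set of spectral features.
    """
    t = np.arange(len(x)) / fs
    cols = []
    for f in freqs:
        cols.append(np.cos(2 * np.pi * f * t))
        cols.append(np.sin(2 * np.pi * f * t))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    a, b = coef[0::2], coef[1::2]
    return np.sqrt(a**2 + b**2)

# Recover the known component amplitudes from a toy noise-free "ERP".
fs = 100
t = np.arange(0, 1, 1.0 / fs)
erp = 2.0 * np.sin(2 * np.pi * 4 * t) + 0.5 * np.cos(2 * np.pi * 9 * t)
amps = sinusoid_features(erp, fs, [4, 9])
print(np.round(amps, 2))  # approximately [2.0, 0.5]
```

Stacking such amplitude vectors across channels would give spatiospectral features; the paper's finding is that the raw spatiotemporal samples fed to LRR generally classify at least as well.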