Title: Assembly instruction manual understanding by fusing natural language understanding and technical illustration understanding
Authors: N. Abe, K. Uemura
Venue: Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367745
Abstract: This research aims to understand assembly instruction manuals by fusing the results of technical illustration understanding (TIU) with those of instruction understanding (IU). In the first version of the system, we assume that TIU and IU are carried out independently.
Title: A multimedia teleteaching system using an electronic whiteboard for two-way communication of motion videos and chalkboards
Authors: A. Nakajima, T. Sakairi, F. Ando, M. Shinozaki, T. Kurosawa
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367678
Abstract: We present a multimedia teleteaching system that integrates an electronic whiteboard with our multimedia conferencing system, which supports motion video and a shared chalkboard. The teleteaching system uses personal computers containing a motion video CODEC and transmits data through the ISDN basic interface (128 kbps). An electronic whiteboard is attached to the personal computer for direct input by the teacher and display of that input. With this system, a teacher can use the electronic whiteboard like a real chalkboard: the teacher can directly write text and draw figures in the area of the shared chalkboard using the pen provided for the electronic whiteboard, which has a built-in digitizer. When the teacher draws something on the shared chalkboard, it is automatically transmitted to the students' site and displayed on their shared chalkboard using the functions of the multimedia conferencing system. The teacher's gestures and the contents of the whiteboard are captured by a video camera and transmitted to the students' computer. Conversely, motion video of the students is transmitted from their site to the teacher's site.
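The core sharing behavior described above is that each drawing action is serialized and replayed at the remote site. The sketch below is a minimal illustration of that idea, not the paper's actual protocol; the message fields and function names are hypothetical.

```python
import json

def encode_stroke(points, color="white", width=2):
    """Serialize one chalkboard stroke as a compact JSON message that
    could be broadcast to every student site (hypothetical format)."""
    return json.dumps({"type": "stroke", "color": color,
                       "width": width, "points": points})

def apply_stroke(board, message):
    """Replay a received stroke message onto a local board, modeled
    here as a simple list of (color, width, points) tuples."""
    event = json.loads(message)
    if event["type"] == "stroke":
        board.append((event["color"], event["width"], event["points"]))
    return board

# One stroke drawn at the teacher's site, replayed at a student's site.
board = apply_stroke([], encode_stroke([[0, 0], [10, 5]]))
```

A real system would push these messages over the conferencing link (here, the 128 kbps ISDN channel) rather than apply them locally.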
Title: A virtual pendulum manipulation system on a graphic workstation
Authors: T. Fujii, T. Yasuda, S. Yokoi, J. Toriwaki
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367704
Abstract: We developed a virtual handling system for a pendulum in three-dimensional virtual space. In this system we realized two kinds of interaction: batting interaction (hitting the pendulum with a racket) and KENDAMA interaction (manipulating a saucer connected to the pendulum by a string so as to position the pendulum on the saucer). To implement these physically based real-time interactions with the pendulum, we derived simplified models of rebounding motion, parabolic motion, the motion of a loose string, the swing of a pendulum, and so on; these good approximate models greatly reduce the computation time needed to generate the physical motions.
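The paper's point is that cheap approximate physics suffices for real-time interaction. As one illustrative sketch (not the authors' model — the damping constant and time step are assumptions), a damped pendulum swing can be stepped with semi-implicit Euler integration, which stays stable at interactive frame rates:

```python
import math

def simulate_pendulum(theta0, omega0, length=1.0, g=9.81, damping=0.1,
                      dt=0.01, steps=1000):
    """Integrate a damped pendulum swing with semi-implicit Euler:
    update angular velocity first, then angle, so the oscillation
    does not gain energy numerically."""
    theta, omega = theta0, omega0
    trajectory = [theta]
    for _ in range(steps):
        alpha = -(g / length) * math.sin(theta) - damping * omega
        omega += alpha * dt          # velocity update
        theta += omega * dt          # position update uses new velocity
        trajectory.append(theta)
    return trajectory

# Release from 30 degrees at rest; amplitude decays over 10 seconds.
traj = simulate_pendulum(theta0=math.pi / 6, omega0=0.0)
```

A per-frame step like this is far cheaper than solving the exact equations of motion, which matches the abstract's emphasis on approximate models for real-time use.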
Title: Recognition of and reasoning about facial expressions using fuzzy logic
Authors: A. Ralescu, H. Iwamoto
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367711
Abstract: In our study of the linguistic modeling of facial images, we have previously been concerned with deriving qualitative descriptions of face components, such as "big eyes, long hair". To enhance this system we extend our approach to deriving higher-level qualitative descriptions; in particular, we focus on describing facial expressions. Our approach is qualitative modeling based on fuzzy number modeling. The result of this modeling method is a collection of fuzzy if-then rules obtained from input-output data. The input data consists of measurements of the movement of facial parts associated with different facial expressions; the output data consists of scores for face images collected using a questionnaire. In this paper, we show the modeling result obtained from this method for the facial expression "happy". While the modeling results are satisfactory, the initial recognition results are limited, due in part to the absence of models for the remaining facial expressions.
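A fuzzy if-then rule base of the kind the abstract describes maps measured facial-part movements to an expression score. The sketch below is a toy illustration under assumed membership functions and rule weights — the paper's actual rules were learned from questionnaire data, not hand-written like these:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function: 0 outside [a, c],
    rising to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def happiness_score(mouth_raise, eye_openness):
    """Score the expression 'happy' from two normalized measurements
    (both in [0, 1]) using two hypothetical fuzzy rules."""
    # Rule 1 (weight 1.0): IF mouth corners raised high THEN happy
    r1 = tri(mouth_raise, 0.3, 0.7, 1.0)
    # Rule 2 (weight 0.5): IF eyes moderately open THEN happy
    r2 = tri(eye_openness, 0.2, 0.5, 0.8)
    # Weighted-average defuzzification over the rule firings
    return (1.0 * r1 + 0.5 * r2) / (1.0 + 0.5)
```

With learned rules for each expression, the same machinery scores "sad", "surprised", etc., which is exactly the missing piece the abstract says limited the initial recognition results.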
Title: Generation of technical illustration from description of machines
Authors: N. Abe, T. Shingoh
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367744
Abstract: Technical illustrations (TIs) appear in virtually every assembly manual, which suggests that a TI is more informative than textual instructions for assembly tasks. Without a TI we must rely on written instructions alone, and it is then difficult to infer a correct assembly procedure when the instructions describe several types of assembly relations. Even a novice can assemble or disassemble mechanical parts merely by looking at a TI. However, the order of assembly/disassembly is not uniquely determined by a TI; knowledge of assembly operations is needed to narrow the order down to plausible ones. An assembly task can be divided into several subtasks that are each easier than the original. This means that a TI is composed of a set of smaller TIs, each corresponding to a simple subassembly task.
Title: Control of hybrid FES system for restoration of paraplegic locomotion
Authors: T. Ohashi, G. Obinata, Y. Shimada, K. Ebata
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367741
Abstract: Functional electrical stimulation (FES) technology for restoration of upper limb function is today in the process of transfer from the research laboratory to the clinical environment. However, the restoration of locomotion in paraplegics remains difficult even at the research level. The main difficulty stems from the fact that the stabilizing torques generated by the lower limb muscles depend strongly on the posture of the upper half of the body. In this paper, a new technique, called the hybrid FES (HFES) system, is proposed. The hardware of the HFES system consists of a conventional FES system and an active orthosis. An actuator-sensor pair is attached at each joint of the orthosis; it can generate a required torque and measure the joint angle and angular velocity. Based on the dynamics of the orthosis, the posture of the upper body can be estimated from the sensor measurements. Moreover, the stabilizing torques for the actuators can be dynamically calculated from the estimated posture. Preliminary results are obtained in a simulation study, and the performance and realizability of the proposed HFES system are discussed.
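The joint-level loop the abstract describes — sensed angle and angular velocity in, stabilizing torque out — can be illustrated with a simple PD-style control law. This is a generic sketch, not the paper's controller; the gains and reference posture below are arbitrary assumptions:

```python
def stabilizing_torque(theta, theta_dot, theta_ref=0.0, kp=120.0, kd=15.0):
    """Map one joint's measured angle (rad) and angular velocity (rad/s)
    to an actuator torque (N*m) that pushes the joint back toward the
    reference posture. kp/kd are hypothetical stiffness/damping gains."""
    return kp * (theta_ref - theta) - kd * theta_dot
```

A lean forward (positive theta) yields a negative, restoring torque; in the HFES concept, commands like this for the orthosis actuators would be recomputed dynamically as the estimated upper-body posture changes.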
Title: Recognising hand-written Japanese sentences
Authors: D. Inman
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367736
Abstract: This paper makes a case for handwriting recognition compared with other input methods for communication with machines. A comparison is made with voice recognition and keyboard input systems for both western languages and Japanese, considering both single-word and whole-sentence recognition. A case is made for handwriting recognition for a language with a large character set and many homonyms, such as Japanese. For such a language, a fundamental problem exists for both keyboard input and voice recognition: both must convert a phonetic representation into Kanji, and this requires extensive knowledge of the meaning of the text if it is to be automatic. AI research has yet to deliver fast, competent text understanding systems. Consequently, both voice and keyboard input methods need to present the user with alternative choices during recognition, which makes them slow and unnatural. A system is described here that is designed for accurate, fast sentence recognition of both western scripts and Japanese. The system is designed for whole-sentence recognition, with the user allowed to write in a natural way; considerable flexibility is allowed in the size and shape of the writing. The distinguishing characteristic of the system is the use of a unified recognition technique applied to character, word and sentence recognition. This technique is an adaptation of chart parsing, used extensively in natural language processing in AI. Here the technique has been developed to allow weighted multiple hypotheses during recognition, which is important for a system that allows the user to write naturally. This approach to sentence recognition allows mistakes made during low-level processing to be corrected at higher levels. Knowledge of the vocabulary and allowable sentence structures is incorporated in the system in a unified way. A useful additional result of this approach is the ability to produce a syntactic parse of the recognised sentence. Provisional results are presented for recognition of Japanese Hiragana characters and English capital letters, with users given considerable freedom in writing style. The results show recognition rates of over 80% at present for a variety of users. Improvements in this performance are anticipated when lexical and syntactic modules are added, and further improvements are anticipated from incorporating learning into the system, so that the knowledge base will be tuned for each user.
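The key idea above — weighted multiple hypotheses at the character level, corrected by lexical knowledge at the word level — can be shown in miniature. The sketch below is an illustrative toy, not the paper's chart parser: it scores every combination of per-position character hypotheses and keeps the best one the lexicon licenses.

```python
from itertools import product

def best_word(char_hypotheses, lexicon):
    """Rank word hypotheses by the product of per-character confidences,
    keeping only words in the lexicon. This is how higher-level
    (lexical) knowledge can override a low-level recognition error."""
    best, best_score = None, 0.0
    for combo in product(*char_hypotheses):
        word = "".join(ch for ch, _ in combo)
        score = 1.0
        for _, p in combo:
            score *= p
        if word in lexicon and score > best_score:
            best, best_score = word, score
    return best, best_score

# Per-position candidates with confidences from a (hypothetical)
# character recogniser; position 3's top guess "+" is not a letter.
hyps = [[("c", 0.6), ("e", 0.4)],
        [("a", 0.9), ("o", 0.1)],
        [("t", 0.8), ("+", 0.2)]]
word, score = best_word(hyps, {"cat", "eat", "cot"})
```

A chart parser achieves the same effect without enumerating all combinations, by building and scoring partial edges bottom-up; the exhaustive product here is only tractable for short words.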
Title: Accuracy of selective saccades depends on the configuration of a target and a distracter
Authors: H. Imai
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367751
Abstract: This study examined the difference in the averaging effect between the case in which the target (T) and the distracter (D) lay in the same direction from the fixation point at different eccentricities (eccentricity-different condition) and the case in which T and D lay in different directions (direction-different condition). The averaging effect was observed in both conditions when subjects were forced to make saccades with very short latencies (Experiment 2). However, the results differed between the two conditions when the subjects made saccades with relatively long latencies (Experiment 1) or when they knew in advance the position at which T would appear (Experiment 3): the averaging effect was clearly decreased in the direction-different condition, while it was still observed in the eccentricity-different condition. This suggests that the process of selecting a saccade target works reflexively depending on the stimulus configuration and can be controlled by top-down processes.
Title: A human-oriented mechanism for building expertise
Authors: Y. Sakai
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367743
Abstract: Human expertise is key to the effective operation of highly advanced automated systems. Expertise, as the term suggests, must be gained through experience. The process of experiencing is analysed, and a method of acquiring the special knowledge required to operate a particular system is described, based on conceptualization performed in a manner similar to that of a human operator.
Title: Acquiring 3 dimensional models of mechanical object from technical illustrations
Authors: N. Abe, I. Ohno
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367746
Abstract: Technical illustrations (TIs) are one of the main methods of showing how to assemble or disassemble a mechanical assembly. Information including not only the shape of each constituent part but also the order of operations needed to assemble/disassemble it can be derived from a single TI. However, additional TIs are often required to supplement the insufficient information obtained from a single TI. This paper presents a solution to the problem that arises in augmenting a model description by unifying the results obtained from several TIs.