The Perception of Phonemes as a Function of Acoustic and Distributional Cues
A. Cohen, V. Katwijk (IPO Annual Progress Report; doi: 10.1159/000426948)

Do phonemes have any kind of existence to the extent that they can be perceived? One may well argue that phonemes are the outcome of a linguistic operation on language material and may be described in terms of their distinctive character. It is by no means certain that this distinctive character has a perceptual correlate as such in the listener. On the contrary, there is substantial evidence that in speech perception a number of perceptual cues are operative that need not coincide with the distinctive features as postulated by Jakobson and Halle [1]. In fact, some phonemes can be recognized in isolation on the strength of inherent perceptual cues, such as colour and duration in the case of vowels. That linguistic elements belonging to the same class of phenomena, in this case phonemes, should show a certain differentiation in their degree of autonomy need cause no surprise. On the morphemic level one generally distinguishes between free and bound forms. A similar observation may be made regarding the meaning of words. Some linguists hold that word meanings can be established only by …

Age-related learning effects in working with layered interfaces
M. D. Rama (IPO Annual Progress Report; doi: 10.1037/e493262004-001)

Many adults complain that domestic products of today are more difficult to use than earlier products. Learning problems may have arisen because of (1) the increasing complexity of interfaces over the past years or (2) age-related cognitive changes. The organization of objects on an interface plays a key role in the user's understanding and recall of executable sequences of actions. Up until the eighties, objects on domestic products were organized in breadth (single layer). Later, a large expansion of functionality on appliances made it infeasible to organize all objects on the same layer. Now objects are organized in depth (multi-layered) to hide less relevant functionality. Disadvantages of this solution are the reduction of status feedback on the device and the visual disconnection between a control and its function. Therefore, it can be assumed that it is easier to learn to use a device with a single-layer interface than one with a multi-layered interface. It is assumed that older adults encounter even more difficulties than younger adults with multi-layered interfaces, due to age-related inefficiency of information processing and encoding. The learning behaviour of three age groups was compared using a simulation of a single-layer and a two-layered videophone interface, with a counterbalanced block design. Younger people were found to encounter fewer interaction problems than older persons, and each of the three age groups showed different learning progress. In general, the single-layer interface was found to be easier to use than the two-layered interface.

Individual differences in colourfulness judgments of images of natural scenes
S. Yendrikhovskij, Huib de Ridder, E. Fedorovskaya (IPO Annual Progress Report; doi: 10.1037/e492902004-001)

The purpose of this study is to estimate individual differences in colourfulness judgments of four real-life scenes by means of direct scaling and difference scaling. Images were created by varying chroma in the CIELUV colour space while lightness and hue were kept constant. The results indicate that the strategy of colourfulness judgments varies among observers: some subjects use one unified scale and score colourfulness on the basis of the absolute values of average chroma and its standard deviation, while others use several scales and score colourfulness on the basis of the relative values, i.e., differences from a scene-dependent reference value, of average chroma and its standard deviation, separately per scene. The difference-scaling procedure corresponds more with using one unified scale, and the direct-scaling procedure with using separate scales. A model describing this subjective bias is presented.

Situated action and commitment in dialogue
Paul Piwek (IPO Annual Progress Report; doi: 10.1037/e490652004-001)

In this paper, a formal model of communication in dialogues is described. Recently, a shift seems to have taken place from the traditional perception of the computer as a mere tool for carrying out a task towards a view of the computer as an assistant with which one or more users work together - i.e., cooperate - on a task. Now, cooperation is rooted in coordination of actions, which in turn can be achieved through communication. The model for cooperative behaviour in dialogues that we propose is based on observational studies into human-human communication carried out at IPO and findings from Discourse Analysis and Conversation Analysis. Our main concern will be to explain the role of the basic constituents of conversation that are known as adjacency pairs. Central in our model are rules that describe how the commitments of the dialogue partners are updated during the course of a dialogue and how they constrain the possible moves of the dialogue participants. We take communication to be part of the overall activity in which the interlocutors are engaged. Such a model is needed if we want to account for the fact that information is often exchanged in dialogues in a sequence of alternating (combinations of) modalities (linguistic means, object manipulations and/or gestures). The model was used for the behaviour rules of an artificial assistant that is implemented as part of the DenK (Dialogue Modelling and Knowledge Acquisition) project.

Increment and decrement detection as a measure of auditory temporal resolution
A. Oxenham (IPO Annual Progress Report; doi: 10.1037/e493682004-001)

The temporal resolution of the auditory system, or its ability to follow rapid fluctuations, is fundamental to the processing of all acoustic stimuli, including speech. A popular method of determining temporal resolution is to measure thresholds for detecting a brief decrement in the level of an otherwise continuous sinusoid. Background noise is often added in such experiments in order to mask 'spectral splatter'. Current models assume that detection is achieved if the maximum 'dip' in a smoothed internal representation of the stimulus exceeds a certain criterion level. The experiment described here was designed to test these models. Thresholds were measured for several durations of increments as well as decrements using a wide range of background-noise levels. Two important aspects of the results are not consistent with current models. Firstly, at short durations, there was a large asymmetry between increments and decrements; decrements were less easily detected than increments. Secondly, results were highly dependent on the level of the background noise. The data show the need for a revision of current models of temporal resolution, and cast doubt on the suitability of decrement detection as a measure of temporal resolution.

Echoing in Japanese conversations
M. Swerts, H. Koiso, Atsushi Shimojima, Y. Katagiri (IPO Annual Progress Report; doi: 10.1037/e495562004-001)

The study reported in this paper focuses on different functions of echoing in Japanese dialogues. Echoing is defined as a speaker's lexical repeat of (parts of) an utterance spoken by a conversation partner in a previous turn. The phenomenon was investigated in three task-oriented, informal dialogues. Repeats in this corpus were labelled in terms of whether or not the speaker had integrated the other person's utterance into his/her own body of knowledge. The investigation brought to light that the level of integration is reflected in a number of lexical and prosodic correlates. These features are discussed regarding their information potential, i.e., their accuracy and comprehensiveness as signals.

On colour categorization of nature
S. Yendrikhovskij (IPO Annual Progress Report; doi: 10.1037/e492482004-001)

The following research elaborates on some of the 'semantic' and 'algorithmic' aspects of the categorization process for the colour domain. The structure of colour categories is argued to resemble the structure of the distribution of colours in the perceived world. This distribution can be represented as colour statistics in some perceptual and approximately uniform colour space (e.g., the CIELUV colour space). We propose that the process of colour categorization is determined by a trade-off between (1) accuracy in representation of perceived colours and (2) simplicity of the category system. Colour categorization can be represented through the grouping of colour statistics by clustering algorithms (e.g., K-means). These assumptions are analysed on the basis of colour statistics of 630 natural images in the CIELUV colour space.

Users' (mis)conceptions of a voice-operated train travel information service
M. Weegels (IPO Annual Progress Report; doi: 10.1037/e493802004-001)

When users interact with a voice-operated system, they bring along their expectations and habits from human-human dialogues as well as their experience with the domain and with other systems. The present study explores the extent to which problems in user-system interaction may be associated with users' expectations and (mis)conceptions of the system. In an exploratory study, twenty subjects queried two different train travel information systems. A semi-structured interview was held on subjects' dialogues with the systems, by replaying the recordings together with the subjects. The findings revealed in what ways users' misconceptions and misunderstandings of the system led to various problems in interaction, such as undesired travel suggestions and irritation. The implications for the design of voice-operated systems are discussed.

An alternative for the LF model
Raymond Veldhuis (IPO Annual Progress Report; doi: 10.1037/e496772004-001)

An alternative for the Liljencrants-Fant (LF) glottal-pulse model is presented. This alternative is derived from the Rosenberg model, so we have called it the Rosenberg++ model. It is described by the same set of T or R parameters as the LF model, but has the advantage over the LF model that it is computationally simple, which allows its use in real-time speech synthesizers. The Rosenberg++ model is compared with the LF model in a psycho-acoustic experiment, from which we conclude that in a practical situation it is capable of producing synthetic speech which is perceptually equivalent to speech generated with the LF model.

Music programming for your hands and ears only
S. Pauws, D. Bouwhuis, J. H. Eggen (IPO Annual Progress Report; doi: 10.1037/e493362004-001)

An experimental evaluation is presented of the usability properties of a multimodal interaction style for music programming. The experiment investigated task performance and learning of procedures while performing music programming tasks in the presence or absence of a visual display combined with a tactual and auditory interface. The participant's task was to compile a music programme as quickly as possible. Task performance was measured by compilation time and number of actions executed. Procedural knowledge was assessed by a post-task questionnaire. Participants performed equally efficiently, i.e., not significantly differently, with and without a visual display, except for the first programming task. In the first task, performing a task without a visual display required significantly more time (approximately one additional minute) and more, but not significantly more, actions, probably due to the explorative behaviour required to develop an internal representation of the interaction style. Earlier experience with a visual display did not improve task performance without a visual display. It also appeared that participants who had performed tasks nonvisually had learned more procedures. Nonvisual interaction requires the explicit discovery and memorization of procedures, which induces a higher degree of cognitive processing. It could therefore be demonstrated that tactual and auditory feedback can compensate for the visual modality in contexts of use in which visual display of information is impoverished or even absent, e.g., portable devices, remote controls, and car equipment.