Pub Date: 2005-07-06 | DOI: 10.1109/ICME.2005.1521597
Yue Yang, Baoxin Li
We propose a non-linear image enhancement method based on Gabor filters, which allows selective enhancement driven by the contrast sensitivity function of the human visual system. We also propose an evaluation method for measuring the performance of the algorithm and for comparing it with existing approaches. The selective enhancement of the proposed approach is especially suitable for digital television applications, improving the perceived visual quality of images whose source contains an unsatisfactory amount of high-frequency content for various reasons, including the interpolation used to convert standard-definition sources into high-definition images.
Title: Non-linear image enhancement for digital TV applications using Gabor filters
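The kind of Gabor-band boost the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the authors' algorithm: the kernel size, wavelength, and gain are assumed values, and the CSF-driven band selection is omitted.

```python
import numpy as np

def gabor_kernel(size=7, wavelength=4.0, theta=0.0, sigma=2.0):
    """Real-valued Gabor kernel: a cosine carrier under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def convolve_same(img, kernel):
    """Naive 'same'-size convolution; adequate for a small demo image."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2), mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def enhance(img, gain=0.5):
    """Add back a scaled copy of the Gabor-selected band, then clip."""
    band = convolve_same(img, gabor_kernel())
    return np.clip(img + gain * band, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((16, 16))   # stand-in image with values in [0, 1]
out = enhance(img)
```

A CSF-weighted version would sum several such bands (different wavelengths and orientations) with gains proportional to contrast sensitivity at each frequency.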
Pub Date: 2005-07-06 | DOI: 10.1109/ICME.2005.1521709
Cheng-Yao Chen, Yue Huang, P. Cook
Recognizing and understanding a person's emotion is known to be one of the most important issues in human-computer interaction. In this paper, we present a multimodal system that supports emotion recognition through both visual and acoustic feature analysis. Our main achievement is that this bimodal method effectively extends the set of recognized emotion categories compared to visual or acoustic feature analysis working alone. We also show that by carefully combining bimodal features, the recognition precision for each emotion category can exceed the limit set by either single modality, visual or acoustic. Moreover, we believe our system is closer to real human perception and experience, and hence will bring emotion recognition closer to practical application in the future.
Title: Visual/Acoustic Emotion Recognition
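One common way to combine per-modality posteriors is a weighted log-linear (product-rule) fusion. The abstract does not specify the fusion rule, so the sketch below, including the label set and probability values, is purely an assumption:

```python
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "surprised"]   # illustrative label set

def fuse(p_visual, p_acoustic, w=0.5):
    """Weighted log-linear fusion of two per-class posterior vectors."""
    p = np.asarray(p_visual) ** w * np.asarray(p_acoustic) ** (1.0 - w)
    return p / p.sum()   # renormalize to a distribution

p_v = [0.5, 0.2, 0.2, 0.1]   # visual classifier output (made-up numbers)
p_a = [0.4, 0.1, 0.4, 0.1]   # acoustic classifier output
fused = fuse(p_v, p_a)
```

At w = 0 or w = 1 this degenerates to a single modality; the fused decision can beat either one only when the two carry complementary evidence.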
Pub Date: 2005-07-06 | DOI: 10.1109/ICME.2005.1521591
R. Wong, M. Schaar, D. Turaga
Cross-layer protocol optimizations have recently been proposed for improving the performance of real-time video transmission over 802.11 WLANs. However, performing such cross-layer optimizations is difficult: the video data and channel characteristics are time-varying, and analytically deriving the relationships between quality and channel characteristics under delay and power constraints is hard. Furthermore, these relationships are often non-linear and non-deterministic (only worst- or average-case values can be determined), so complex Lagrangian or multi-objective optimization problems must often be faced. In this paper, we propose a novel framework for solving cross MAC-application layer optimization problems. More specifically, we employ classification techniques to find an optimized cross-layer strategy for wireless multimedia transmission. Our solution uses both content- and channel-related features to select a joint application-MAC strategy from the strategies available at the various layers. Preliminary results indicate that the proposed classification-based cross-layer techniques yield considerable improvements over ad-hoc solutions. The improvements are especially important at high packet-loss rates (5% and higher), where deploying a judicious mixture of strategies at the various layers becomes essential.
Title: Optimized wireless video transmission using classification
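As a toy illustration of "classification selects the cross-layer strategy", a nearest-neighbour rule over two features might look like the sketch below. The feature names, values, and strategy labels are all invented for the example; the paper's classifier and feature set are richer.

```python
# training examples: (packet-loss rate, motion activity) -> joint MAC/app strategy
EXAMPLES = [
    ((0.01, 0.2), "no-retry/low-protection"),
    ((0.05, 0.5), "mac-retry/medium-protection"),
    ((0.10, 0.8), "mac-retry/high-protection"),
]

def select_strategy(features):
    """Pick the strategy of the closest training example (1-NN)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(EXAMPLES, key=lambda ex: sq_dist(ex[0], features))[1]

strategy = select_strategy((0.09, 0.7))   # a high-loss, high-motion clip
```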
Pub Date: 2005-07-06 | DOI: 10.1109/ICME.2005.1521407
T. Chong, O. Au, Tai-Wai Chan, Wing-San Chau
In this paper, we propose a spatial-temporal de-interlacing algorithm for converting interlaced video to progressive video. The proposed algorithm estimates the motion trajectory across three consecutive fields and interpolates the missing field along that trajectory. In the motion estimator, the unidirectional and bidirectional motion estimation processes are combined by a multi-objective minimization technique. Unidirectional motion estimation estimates the motion trajectory by comparing blocks from opposite-parity fields, while bidirectional motion estimation compares blocks from same-parity fields; by combining the two, the motion trajectory can be accurately predicted. In addition, a quality analyzer is proposed to evaluate the visual quality of the reconstructed frame and choose the appropriate interpolation scheme for maximum de-interlacing performance. Simulation results show that the proposed algorithm outperforms existing de-interlacing algorithms.
Title: A spatial-temporal de-interlacing algorithm
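A zero-motion baseline for the spatial-temporal interpolation step can be sketched as below: the missing lines of the current top field are filled with an average of a spatial estimate (lines above/below) and a temporal copy from the previous opposite-parity field. The paper's algorithm replaces the zero-motion temporal copy with motion-compensated prediction along the estimated trajectory.

```python
import numpy as np

def deinterlace(curr_top_field, prev_bottom_field):
    """Blend a spatial estimate with a temporal estimate for the missing lines."""
    h, w = curr_top_field.shape
    frame = np.zeros((2 * h, w))
    frame[0::2] = curr_top_field                        # known (top-field) lines
    spatial = np.empty_like(curr_top_field, dtype=float)
    spatial[:-1] = 0.5 * (curr_top_field[:-1] + curr_top_field[1:])
    spatial[-1] = curr_top_field[-1]                    # repeat at the border
    frame[1::2] = 0.5 * (spatial + prev_bottom_field)   # spatial/temporal blend
    return frame

top = np.ones((4, 8))           # current top field (flat white, for the demo)
prev_bottom = np.zeros((4, 8))  # previous bottom field (flat black)
frame = deinterlace(top, prev_bottom)
```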
Pub Date: 2005-07-06 | DOI: 10.1109/ICME.2005.1521447
O. Pietquin
Speech-enabled interfaces and spoken dialog systems are mostly based on statistical speech and language processing modules. Their behavior is therefore non-deterministic and hard to predict, which makes simulating and optimizing the performance of such systems difficult, as does reusing previous work to build new systems. With the aim of partially automating the optimization of such systems, this paper presents an attempt at a formalism for describing man-machine spoken communication in the framework of spoken dialog systems. The formalization rests partly on a probabilistic description of the information processing occurring in each module of a spoken dialog system, and partly on stochastic user modeling. Finally, some possible applications of this theoretical framework are proposed.
Title: A Probabilistic Description of Man-Machine Spoken Communication
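As a minimal concrete instance of such a probabilistic description, a first-order Markov model over dialogue acts assigns a probability to a whole exchange. The states and transition values below are invented for illustration, not taken from the paper:

```python
import numpy as np

STATES = ["greet", "ask", "confirm", "close"]
# P(next act | current act); each row sums to 1 (illustrative numbers)
T = np.array([
    [0.0, 0.9, 0.0, 0.1],
    [0.0, 0.2, 0.7, 0.1],
    [0.0, 0.3, 0.0, 0.7],
    [0.0, 0.0, 0.0, 1.0],
])

def sequence_prob(seq):
    """Probability of an act sequence given its first act, under the chain."""
    idx = [STATES.index(s) for s in seq]
    p = 1.0
    for a, b in zip(idx, idx[1:]):
        p *= T[a, b]
    return p

p = sequence_prob(["greet", "ask", "confirm", "close"])   # 0.9 * 0.7 * 0.7
```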
Pub Date: 2005-07-06 | DOI: 10.1109/ICME.2005.1521678
Yoshitaka Nakamura, Guiquan Ren, Masatoshi Nakamura, T. Umedu, T. Higashino
With the spread of portable computing devices such as PDAs, cellular phones and small-sized PCs, many personal navigation systems have been developed that display routes guiding their users to given destinations. Those systems mainly focus on guidance for personal use. In this paper, we develop a group navigation system that provides (1) personally customizable route navigation to a given destination, (2) management of group movement and (3) a rehearsal mode for authoring the customized route navigation. In our system, a few leaders of a group use wireless ad-hoc communication to collect and distribute information about members' current positions and to give each member a suitable suggestion when he or she is losing the way. The personalized route-navigation scenario (program) running on the portable devices is generated automatically simply by clicking intersections sequentially on a given map and attaching pictures and comments; a rehearsal mode is also provided while authoring it.
Title: Personally Customizable Group Navigation System Using Cellular Phones and Wireless Ad-Hoc Communication
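The leader-side position management reduces to a few lines. The sketch below, in which the names, coordinates and the 100 m radius are all assumptions, flags members who have strayed from the leader and would receive a suggestion:

```python
from math import hypot

def stragglers(leader_pos, member_positions, radius=100.0):
    """Return members farther than `radius` metres from the leader."""
    lx, ly = leader_pos
    return [name for name, (x, y) in member_positions.items()
            if hypot(x - lx, y - ly) > radius]

members = {"alice": (10.0, 20.0), "bob": (300.0, 400.0)}
lost = stragglers((0.0, 0.0), members)   # bob is 500 m away
```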
Pub Date: 2005-07-06 | DOI: 10.1109/ICME.2005.1521613
Amith K Jain, Jeffrey R. Huang, S. Fang
Computer vision and pattern recognition systems play an important role in our lives through automated face detection, face and gesture recognition, and estimation of gender and age. This paper addresses the problem of gender classification using frontal facial images. We have developed gender classifiers whose performance is superior to that of existing ones. We experiment on 500 images (250 female and 250 male) randomly drawn from the FERET facial database. Independent component analysis (ICA) is used to represent each image as a feature vector in a low-dimensional subspace, and different classifiers are studied in this space. Our experimental results show the superior performance of our approach over existing gender classifiers: we achieve 96% accuracy using a support vector machine (SVM) in the ICA space.
Title: Gender identification using frontal facial images
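The subspace-projection pipeline can be sketched on synthetic data (FERET cannot be bundled here). PCA via SVD stands in for the paper's ICA step and a nearest-centroid rule stands in for the SVM, so the accuracy this toy produces says nothing about the reported 96%:

```python
import numpy as np

rng = np.random.default_rng(42)
# synthetic stand-in for face vectors: two classes with a small mean shift
X = np.vstack([rng.normal(0.0, 1.0, (40, 64)),     # class-0 samples
               rng.normal(0.5, 1.0, (40, 64))])    # class-1 samples
y = np.array([0] * 40 + [1] * 40)

# project onto a low-dimensional subspace (PCA here; the paper uses ICA)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:10].T                                 # first 10 components

# nearest-centroid classifier in the subspace (the paper uses an SVM)
c0 = Z[y == 0].mean(axis=0)
c1 = Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1)
        < np.linalg.norm(Z - c0, axis=1)).astype(int)
accuracy = float((pred == y).mean())
```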
Pub Date: 2005-07-06 | DOI: 10.1109/ICME.2005.1521529
Mohammed Ameer Ali, G. Karmakar, L. Dooley
The results of any clustering algorithm are highly sensitive to the chosen features, which limits generalization and provides strong motivation to integrate shape information into the algorithm. Existing fuzzy shape-based clustering algorithms consider only circular and elliptical shape information and consequently segment arbitrarily shaped objects poorly. To address this issue, this paper introduces a new shape-based algorithm, fuzzy image segmentation using shape information (FISS), which incorporates general shape information. Both qualitative and quantitative analysis demonstrates the superiority of the new FISS algorithm over other well-established shape-based fuzzy clustering algorithms, including Gustafson-Kessel, ring-shaped, circular-shell, c-ellipsoidal-shell and elliptic ring-shaped clustering.
Title: Fuzzy image segmentation using shape information
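For reference, the plain fuzzy c-means loop that shape-based variants such as FISS extend (by adding a shape term to the distance) looks like this; the initial centers and toy data are illustrative:

```python
import numpy as np

def fuzzy_cmeans(X, init_centers, m=2.0, iters=30):
    """Standard FCM: alternate membership and center updates."""
    centers = np.asarray(init_centers, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        U = d ** (-2.0 / (m - 1.0))          # u_ik proportional to d_ik^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)    # memberships sum to 1 per point
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, centers

# two tight 2-D clusters standing in for pixel feature vectors
X = np.vstack([np.zeros((10, 2)), np.full((10, 2), 5.0)])
U, centers = fuzzy_cmeans(X, init_centers=[X[0], X[-1]])
```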
Pub Date: 2005-07-06 | DOI: 10.1109/ICME.2005.1521554
Björn Schuller, Brüning J. B. Schmitt, D. Arsic, S. Reiter, M. Lang, G. Rigoll
In this work we strive to find an optimal set of acoustic features for discriminating speech, monophonic singing, and polyphonic music, in order to robustly segment acoustic media streams for annotation and interaction purposes. We also introduce ensemble-based classification approaches to this task. From a basis of 276 attributes we select the most efficient set by SVM-SFFS; the relevance of single features, calculated as information gain ratio, is also presented, and as a basis of comparison we reduce dimensionality by PCA. We present an extensive analysis of different classifiers for this task, among them kernel machines, decision trees, and Bayesian classifiers. Moreover, we improve single-classifier performance by bagging and boosting, and finally combine the strengths of the classifiers by StackingC. The database comprises 2,114 samples of speech and singing from 58 persons; 1,000 music clips were taken from the MTV Europe Top 20, 1980-2000. The outstanding discrimination results of a working real-time-capable implementation underline the practicability of the proposed ideas.
Title: Feature Selection and Stacking for Robust Discrimination of Speech, Monophonic Singing, and Polyphonic Music
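The per-feature relevance measure mentioned above, information gain ratio, is straightforward to compute. The toy attribute below is invented: a "voicing" flag that happens to separate speech from music perfectly, giving a ratio of 1.0.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a list of discrete values."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def gain_ratio(feature, labels):
    """Information gain of `feature` about `labels`, divided by the
    feature's own entropy (its split information)."""
    n = len(labels)
    conditional = sum(
        (feature.count(v) / n)
        * entropy([l for f, l in zip(feature, labels) if f == v])
        for v in set(feature)
    )
    gain = entropy(labels) - conditional
    split_info = entropy(feature)
    return gain / split_info if split_info > 0 else 0.0

feature = ["voiced", "voiced", "unvoiced", "unvoiced"]   # toy attribute
labels = ["speech", "speech", "music", "music"]
ratio = gain_ratio(feature, labels)
```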
Pub Date: 2005-07-06 | DOI: 10.1109/ICME.2005.1521537
Zhigang Hua, Chuang Wang, Xing Xie, Hanqing Lu, Wei-Ying Ma
Managing the large number of images on the Web is currently a crucial challenge. Exploiting the real synergy between an image and its location, we propose an automatic solution for annotating WWW images with contextual location information. We construct an image-importance model to identify the dominant images in a page that carry contextual surrounding text, and for each such image we develop an effective algorithm to compute a location from that text. We applied our approach to 1,000 pages from various websites. The experiments demonstrate that more than 30% of WWW images are associated with geographic location information, and that our solution achieves satisfactory results. Finally, we present some potential applications of image location information.
Title: Automatic Annotation of Location Information for WWW Images
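At its simplest, computing a location from an image's surrounding text is a gazetteer lookup over the context words. The tiny gazetteer below is a placeholder for the location database such a system would actually use:

```python
import re

GAZETTEER = {"paris", "london", "beijing", "new york"}   # placeholder entries

def extract_locations(context_text):
    """Match unigrams and bigrams of the surrounding text against the gazetteer."""
    tokens = re.findall(r"[a-z]+", context_text.lower())
    bigrams = [" ".join(pair) for pair in zip(tokens, tokens[1:])]
    return sorted(set(tokens + bigrams) & GAZETTEER)

locations = extract_locations("Skyline of New York at dusk")
```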