The WebSocket protocol enables full-duplex communication, simplifies data exchange, and reduces network overhead. This paper proposes the use of the WebSocket protocol for controlling and servicing devices over the web under real-time requirements. Tests carried out in a virtual environment and in an embedded experiment make it possible to validate an initial proposal for implementing the WebSocket protocol. The analysis of the results shows that the proposed approach considerably reduces the number of requests and the amount of transferred data compared with the traditional approach of sending data over HTTP-based communication. Consequently, it appears to be a very promising technique for this type of application.
{"title":"Proposal to Use of the Websocket Protocol for Web Device Control","authors":"Adriano H. O. Maia, D. Silva","doi":"10.1145/3126858.3126887","DOIUrl":"https://doi.org/10.1145/3126858.3126887","url":null,"abstract":"The Websocket protocol enables a full-duplex communication, besides it simplifies an exchange of data and reduces network overload. This paper proposes the use of the Websocket protocol in control and service devices through web within real-time requirements. Through tests made in a virtual environment and another one in embedded experiment, It is possible to validate an initial proposal of implementation the Websocket protocol. From the analysis of the results obtained, it can be seen the use of the proposal in question provides a considerable reduction in the quantity of requests and transferred data in relation to traditional approach to sending data in HTTP-based communications. Consequently it seems to be a very promising technique for this type of application.","PeriodicalId":338362,"journal":{"name":"Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web","volume":"2010 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133042936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lassion Laique Bomfim de Souza Santana, Alesson Bruno Santos Souza, Diego Lima Santana, Wendel Araújo Dourado, F. Durão
Recommender systems are information filtering tools that aspire to predict accurate ratings for users and items, with the ultimate goal of providing users with personalized and relevant recommendations. Recommender systems that rely on a combination of quality metadata, i.e., all the descriptive information about an item, are likely to be successful in finding what is relevant for a target user. The problem arises when data is sparse or important metadata is unavailable, making it hard for recommender systems to predict proper user-item ratings. In particular, this study investigates how our proposed collaborative-filtering recommender performs when important metadata is removed from a dataset. To evaluate our approach, we use the HetRec 2011 2k dataset with five different types of movie metadata (genres, tags, directors, actors, and countries). By applying our metadata-reduction approach, we provide a comprehensive analysis of how mean average precision is affected as important metadata becomes unavailable.
{"title":"Evaluating Ensemble Strategies for Recommender Systems under Metadata Reduction","authors":"Lassion Laique Bomfim de Souza Santana, Alesson Bruno Santos Souza, Diego Lima Santana, Wendel Araújo Dourado, F. Durão","doi":"10.1145/3126858.3126879","DOIUrl":"https://doi.org/10.1145/3126858.3126879","url":null,"abstract":"Recommender systems are information filtering tools that aspire to predict accurate ratings for users and items, with the ultimate goal of providing users with personalized and relevant recommendations. Recommender system that rely on the combination of quality metadata, i.e., all descriptive information about an item, are likely to be successful in the process of finding what is relevant or not for a target user. The problem arises when either data is sparse or important metadata is not available, making it hard for recommender systems to predict proper user-item ratings. In particular, this study investigates how our proposed collaborative-filtering recommender performs when important metadata is reduced from a dataset. To evaluate our approach use the HetRec 2011 2k dataset with five different movie metadata (genres, tags, directors, actors and countries). By applying our approach of metadata reduction, we provide a comprehensive analysis on how mean average precision is affected as important metadata become unavailable.","PeriodicalId":338362,"journal":{"name":"Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123475781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the delivery of hypermedia content over communication networks, the specified intermedia synchronization must be assured despite the inherent delay and jitter of most transmission media and networks. This kind of content typically offers users multiple interaction paths, each with a different set of media objects. In spite of that, when hypermedia content is transmitted in push mode, users receive all media objects regardless of the chosen interaction path. Transmission strategies that take into account the occurrence of both deterministic and non-deterministic hypermedia presentation events can reduce the waste of storage resources on the receiver side, as well as the need for network bandwidth. This work proposes a framework for the adaptable management of push-mode hypermedia content transmission. Adaptability is achieved by supporting multiple transmission strategies that may employ multiple transmission channels, built upon a content analysis that identifies deterministic and non-deterministic hypermedia presentation events. Methods for instantiating the framework in the context of Ginga-NCL application transmission are also discussed over multiple transmission scenarios, in comparison with existing, unmanaged content transmission.
{"title":"An Adaptable Transmission Management Framework for Push-mode Hypermedia Content","authors":"M. Josué, M. Moreno, R. Costa","doi":"10.1145/3126858.3126869","DOIUrl":"https://doi.org/10.1145/3126858.3126869","url":null,"abstract":"In the delivery of hypermedia content over communication networks, the specified intermedia synchronization must be assured, despite the inherent delay and jitter of most transmission media and networks. This kind of content typically provides users multiple interaction paths, with different sets of media objects each one. In spite of that, when the hypermedia content is transmitted in push mode, users receive all media objects, regardless of the chosen interaction path. Transmission strategies that take into account the occurrence of both deterministic and non-deterministic hypermedia presentation events can decrease the waste of storage resources in the receiver side, as well as the need for network bandwidth. This work proposes a framework for an adaptable management of push-mode hypermedia content transmission. Adaptability is achieved by supporting multiple transmission strategies that may employ multiple transmission channels, which are built upon a content analysis for the identification of deterministic and non-deterministic hypermedia presentation events. Methods for instantiating the framework in the context of Ginga-NCL application transmission are also discussed over multiple transmission scenarios, in comparison with the existing, unmanaged content transmission.","PeriodicalId":338362,"journal":{"name":"Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125112389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bruno Guilherme Gomes, Pedro Holanda, Ana Paula Couto da Silva, Olga Goussevskaia
Cyber terrorism is a real threat to modern society. Many terrorist organizations spread their ideas and recruit new supporters over online social networks. Among them, ISIS can be considered the largest, being responsible for inspiring terrorist actions in more than 20 countries. As expected, ISIS uses Twitter to spread its hatred, and an important issue is how to characterize its supporters in order to understand their motivation. Our work investigates and discusses the way ISIS organizes on Twitter. We base our analyses on two curated datasets. The first, "How ISIS Uses Twitter" (HIUT), is provided by the Fifth Tribe digital agency. The second, "Syria and ISIS Mentioners" (SIM), we collected and curated ourselves, without the participation of experts in the field. We made the SIM dataset publicly available to help new studies aimed at understanding the profiles of ISIS supporters on Twitter. The main contribution of this work is a characterization of both the HIUT and SIM datasets.
{"title":"Profiling ISIS Supporters on Twitter","authors":"Bruno Guilherme Gomes, Pedro Holanda, Ana Paula Couto da Silva, Olga Goussevskaia","doi":"10.1145/3126858.3131597","DOIUrl":"https://doi.org/10.1145/3126858.3131597","url":null,"abstract":"Cyber Terrorism is a real threat to the modern society. Many terrorist organizations spread their ideas and recruit new supporters over Online Social Networks. Among all terrorist organizations, ISIS can be considered as the biggest one, which is responsible for inspiring terrorist actions in more than 20 countries. As expected, ISIS uses Twitter for spreading its hatred, and an important issue is how to characterize these supporters in order to understand their motivation. Our work investigates and discusses the way ISIS organizes within Twitter. We base our analyses on two curated datasets. The first dataset, \"How ISIS Uses Twitter?\" (HIUT), is provided by the Fifth Tribe digital agency. The second dataset, \"Syria and ISIS Mentioners?\" (SIM), we collected ourselves and curated without participation of experts in the field. We made the SIM dataset publically available, helping new studies in the understanding of ISIS supporters' profiles on Twitter. The main contribution of this work is a characterization of both HIUT and SIM datasets.","PeriodicalId":338362,"journal":{"name":"Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127923285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Due to recent technological evolution, the provision of IPTV services has grown considerably. One of the services normally included in IPTV is Linear TV, where audiovisual content is made available in the form of program schedules. Another is Video on Demand (VoD), where the viewer can perform actions such as pause, play, and seek (trick mode). In this context, challenges emerge regarding the distribution of multimedia content over these services. One of them is the user's perception of the quality of the IPTV service, measured in terms of quality of experience (QoE). This article analyzes the problems that may occur when receiving Linear TV and VoD content and proposes solutions to them. Specifically, due to congestion of the transmission media or overload at the endpoints, a key issue is the statistical variation of the delivery delay of multimedia content (packet jitter). This paper proposes a novel dynamic management of buffers in IPTV terminal devices that takes into account the characteristics of both Linear TV and VoD services.
{"title":"Dynamic Buffer Management for IPTV Video Players","authors":"Marcos Paulo Mendes, M. Moreno","doi":"10.1145/3126858.3131580","DOIUrl":"https://doi.org/10.1145/3126858.3131580","url":null,"abstract":"Due to recent technological evolution, the provision of IPTV services has grown considerably. One of the services normally included in IPTV is Linear TV, where audiovisual contents are made available in the form of program schedules. Another service is Video on Demand, where the viewer can perform actions like pause, play and seek (trick mode). In this context, challenges emerge regarding the distribution of multimedia content over these services. One of these challenges is specifically the user's perception about the quality of the IPTV service, measured in terms of quality of experience (QoE). Thus, this article aims to analyze the problems that may occur when receiving Linear TV and VoD content and to propose solutions to these problems. Specifically, due to the congestion of transmission media or overload in the endpoints, a key issue is the statistical variation of time delay for multimedia content delivery (packet jitter). This paper's proposal comprises a novel dynamic management of buffers in IPTV terminal devices, taking into account the characteristics of both Linear TV and VoD services.","PeriodicalId":338362,"journal":{"name":"Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web","volume":"126 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131692280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
G. Lecomte, Vinícius Hipolito, B. Batista, B. Kuehne, Dionisio Machado Leite Filho, J. Martins, M. Peixoto
The growth in the number of video surveillance devices increases the rate of streaming data. However, even when operating in a Fog Computing environment, these smart devices may fail while collecting information, producing missing or invalid data. This issue can affect the user's quality of experience, because the PTZ controller may lose track of the target object. Therefore, this paper presents Singular Spectrum Analysis (SSA) as a method to replace missing values in this complex environment of intelligent surveillance cameras. SSA is a time-series technique that performs non-parametric spectral estimation with spatial-temporal correlations. The values that were not correctly monitored were estimated accurately by SSA, allowing the tracking of a suspect object.
{"title":"Gap Filling of Missing Streaming Data in a Network of Intelligent Surveillance Cameras","authors":"G. Lecomte, Vinícius Hipolito, B. Batista, B. Kuehne, Dionisio Machado Leite Filho, J. Martins, M. Peixoto","doi":"10.1145/3126858.3131585","DOIUrl":"https://doi.org/10.1145/3126858.3131585","url":null,"abstract":"The growth of video surveillance devices increases the rate of streaming data. However, even working in the Fog Computing environment, these smart devices may fail collecting information, producing missing or invalid data. This issue can affect the user quality of experience, because the PTZ-controller may lose the target object tracking. Therefore, this paper presents the Singular Spectrum Analysis - (SSA), as the method to replace missing values in this complex environment of intelligent surveillance cameras. SSA is characterized within time series field by performing a non-parametric spectral estimation with spatial-temporal correlations. The values not correctly monitored, were estimated by SSA with accuracy, allowing the tracking of a suspect object.","PeriodicalId":338362,"journal":{"name":"Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126808807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. N. Amorim, R. M. C. Segundo, Celso A. S. Santos, O. L. Tavares
This paper presents a general approach to crowdsourcing video annotation that requires neither trained workers nor experts. It consists of dividing complex annotation tasks into small, simple microtasks and cascading them to generate a final result. Moreover, this approach allows using simple annotation tools rather than complex and expensive annotation systems, and it tends to avoid activities that may be tedious and time-consuming for workers. The cascading-microtasks strategy is included in a workflow of three steps: Preparation, Annotation, and Presentation. To evaluate the proposed approach, a crowdsourcing video annotation process was developed in which four different microtasks were cascaded. In this process, extra content such as images, text, hyperlinks, and other elements is used to enrich the video. To support the experiment, a toolkit was developed that includes web-based annotation tools and aggregation methods, in addition to a presentation system for the annotated videos. This toolkit is open source and can be downloaded and used to replicate this experiment, as well as to build different crowdsourcing video annotation systems.
{"title":"Video Annotation by Cascading Microtasks: a Crowdsourcing Approach","authors":"M. N. Amorim, R. M. C. Segundo, Celso A. S. Santos, O. L. Tavares","doi":"10.1145/3126858.3126897","DOIUrl":"https://doi.org/10.1145/3126858.3126897","url":null,"abstract":"This paper presents a general approach to perform crowdsourcing video annotation without requiring trained workers nor experts. It consists of dividing complex annotation tasks into simple and small microtasks and cascading them to generate a final result. Moreover, this approach allows using simple annotation tools rather than complex and expensive annotation systems. Also, it tends to avoid activities that may be tedious and time-consuming for workers. The cascade microtasks strategy is included in a workflow of three steps: Preparation, Annotation, and Presentation. A crowdsourcing video annotation process in which four different microtasks were cascaded was developed to evaluate the proposed approach. In the process, extra content such as images, text, hyperlinks and other elements are applied in the video enrichment. To support the experiment was developed a toolkit that includes Web-based annotation tools and aggregation methods, besides a presentation system for the annotated videos. This toolkit is open source and can be downloaded and used to replicate this experiment, as so to construct different crowdsourcing video annotation systems.","PeriodicalId":338362,"journal":{"name":"Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116925260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rodolfo Valiente, José C. Gutiérrez, M. T. Sadaike, G. Bressan
Web images play an important role in delivering multimedia content on the Web. The text embedded in web images carries semantic information related to the layout and content of the pages. Statistics show that there is a significant need to detect and recognize text in web images. This paper presents an architecture that efficiently integrates localization, extraction, and recognition algorithms for text recognition in web images. In the recognition step, a procedure based on super-resolution and an iterative method is proposed to improve performance. The approach is implemented and evaluated using Matlab and cloud computing, making the system flexible, scalable, and robust in detecting text in complex web images with different orientations, dimensions, and colors. Competitive results are presented, both in precision and in recognition rate, when compared with other systems in the existing literature.
{"title":"Automatic Text Recognition in Web Images","authors":"Rodolfo Valiente, José C. Gutiérrez, M. T. Sadaike, G. Bressan","doi":"10.1145/3126858.3131570","DOIUrl":"https://doi.org/10.1145/3126858.3131570","url":null,"abstract":"Web images play an important role in delivering multimedia content on the Web. The text embedded in web images carry semantic information related to layout and content of the pages. Statistics show that there is a significant need to detect and recognize text from web images. This paper presents an architecture that efficiently integrates localization, extraction and recognition algorithms applied to text recognition in web images. In the recognition step is proposed a procedure based on super-resolution and an iterative method for improving the performance. The approach is implemented and evaluated using Matlab and cloud computing, making the system flexible, scalable and robust in detecting texts from complex web images with different orientations, dimensions and colors. Competitive results are presented, both in precision and recognition rate, when compared with other systems in the existing literature.","PeriodicalId":338362,"journal":{"name":"Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114608853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, educational videos have become more and more popular. Given this increase in the amount of didactic content in video format on the web, it is useful to be able to relate a search term to a specific segment of a video. Better navigability allows users to reach the topics that interest them more quickly, avoiding irrelevant content. This article proposes a method for the automatic segmentation of scenes in educational videos through the use of automatic audio transcription and semantic annotation. With this segmentation, content search over these videos can be improved, enhancing the user experience on e-learning platforms and in educational video repositories.
{"title":"An Approach for Automatic Segmentation of Scenes in Educational Videos through the use of Audio Transcription and Semantic Annotation","authors":"Eduardo R. Soares, E. Barrére","doi":"10.1145/3126858.3126870","DOIUrl":"https://doi.org/10.1145/3126858.3126870","url":null,"abstract":"In recent years, educational videos are becoming more and more popular. Due to this increase in the amount of didactic content in the video format present on the web, it is interesting to make it possible for a search term to be related to a specific segment of the video. Better navigability allows the user to have quicker access to the topics that interest him, avoiding irrelevant content. This article proposes a method for automatic segmentation of scenes in educational videos through the use of automatic audio transcription and semantic annotation. With this targeting, you can improve content search on these videos by improving the user experience on e-learning platforms or educational video repositories.","PeriodicalId":338362,"journal":{"name":"Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123994454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Face recognition has received significant attention over the past several years. It is a challenging task because faces can be affected by variations in scale, noise, facial expression, illumination, color, or pose. The methodologies most robust to these variations are based on "key point" localization, followed by the application of a local descriptor to each surrounding region. Such descriptors are associated with clustering algorithms or with histogram representations based on Bag of Features (BoF). In the BoF approach, the codebook can effectively describe objects by their appearance based on local texture. Building on texture descriptors previously proposed for image detection, this paper proposes applying such descriptors to face recognition. We evaluate the performance of our methodology on the FERET, ORL, and Yale databases, comparing our descriptor against the SIFT and LIOP descriptors, as well as other methodologies recently published in the literature.
{"title":"Face Classification using a New Local Texture Descriptor","authors":"C. T. Ferraz, M. Manzato, A. Gonzaga","doi":"10.1145/3126858.3131584","DOIUrl":"https://doi.org/10.1145/3126858.3131584","url":null,"abstract":"Face recognition has received significant attention during the past several years. It is a challenge task because faces can be affected by scale, noises, face expression, illumination, color or pose variations. The most robust methodologies related to these variations are based on \"key points?\" localization, followed by the application of a local descriptor to each surrounding region. Such descriptors are associated to clustering algorithms or histogram representation based on Bag of Features (BoF). In the BoF approach, the codebook can effectively describe objects by their appearance based on local texture. Based on texture descriptors proposed previously for image detection, we propose in this paper the application of such descriptors for face recognition. We evaluate the performance of our methodology using Feret, ORL and Yale databases, comparing our descriptor against SIFT and LIOP descriptors, and also other methodologies recently published in the literature.","PeriodicalId":338362,"journal":{"name":"Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116813292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}