MVL: Multi-View Learning for News Recommendation
Santosh T.Y.S.S, Avirup Saha, Niloy Ganguly
In this paper, we propose a Multi-View Learning (MVL) framework for news recommendation that uses both a content view and a user-news interaction graph view. In the content view, we use a news encoder to learn news representations from different kinds of information, such as titles, bodies, and categories, and we obtain the representation of a user from his/her browsed news, conditioned on the candidate news article to be recommended. In the graph view, we propose to use a graph neural network to capture user-news, user-user, and news-news relatedness in the user-news bipartite graph by modeling the interactions between different users and news. In addition, we incorporate an attention mechanism into the graph neural network to model the importance of these interactions, yielding more informative representations of users and news. Experiments on a real-world dataset validate the effectiveness of MVL.
DOI: 10.1145/3397271.3401294
Sentiment-guided Sequential Recommendation
Lin Zheng, Naicheng Guo, Weihao Chen, Jin Yu, Dazhi Jiang
Existing sequential recommendation methods focus on modeling the temporal relationships of user behaviors and are good at using additional item information to improve performance. However, these methods rarely consider the influence of users' sequential subjective sentiments on their behaviors, even though temporal changes in human sentiment patterns sometimes play a decisive role in users' final preferences. To investigate the influence of temporal sentiments on user preferences, we propose generating preferences by guiding user behavior through sequential sentiments. Specifically, we design a dual-channel fusion mechanism: the main channel consists of sentiment-guided attention that matches and guides sequential user behavior, and the secondary channel consists of sparse sentiment attention that assists in preference generation. In our experiments, we demonstrate the effectiveness of these two sentiment modeling mechanisms through ablation studies. Our approach outperforms current state-of-the-art sequential recommendation methods that incorporate sentiment factors.
DOI: 10.1145/3397271.3401330
Learning Discriminative Joint Embeddings for Efficient Face and Voice Association
Rui Wang, Xin Liu, Y. Cheung, Kai Cheng, Nannan Wang, Wentao Fan
Many cognitive studies have shown a natural association between faces and voices, and this potential association has attracted much attention in the biometric cross-modal retrieval domain. Nevertheless, existing methods often fail to explicitly learn common embeddings for challenging face-voice association tasks. In this paper, we propose to learn discriminative joint embeddings for face-voice association, which allows the face subnetwork and the voice subnetwork to be seamlessly trained to learn high-level semantic features while correlating them so that they can be compared directly and efficiently. Within the proposed approach, we introduce a bi-directional ranking constraint, an identity constraint, and a center constraint to learn the joint face-voice embedding, and we adopt a bi-directional training strategy to train the deep correlated face-voice model. Meanwhile, an online hard negative mining technique is utilized to discriminatively construct hard triplets within each mini-batch, which speeds up the learning process. Accordingly, the proposed approach can benefit various face-voice association tasks, including cross-modal verification, 1:2 matching, 1:N matching, and retrieval scenarios. Extensive experiments show improved performance in comparison with state-of-the-art methods.
DOI: 10.1145/3397271.3401302
Domain Adaptation with Reconstruction for Disaster Tweet Classification
Xukun Li, Doina Caragea
Identifying critical information in real time at the beginning of a disaster is a challenging but important task. This task has recently been addressed using domain adaptation approaches, which eliminate the need for labeled target data and can thus accelerate the process of identifying useful information. We investigate the effectiveness of the Domain Reconstruction Classification Network (DRCN) approach on disaster tweets. DRCN adapts information from target data by reconstructing it with an autoencoder. Experimental results using a sequence-to-sequence autoencoder show that the DRCN approach can improve the performance of both supervised and domain adaptation baseline models.
DOI: 10.1145/3397271.3401242
Fair Classification with Counterfactual Learning
M. Tavakol
Recent advances in machine learning have led to new approaches for dealing with different kinds of biases that exist in data. On the one hand, counterfactual learning copes with biases in the policy used for sampling (or logging) the data in order to evaluate and learn new policies. On the other hand, fairness-aware learning aims at learning fair models that avoid discrimination against certain individuals or groups. In this paper, we design a counterfactual framework for fairness-aware learning that benefits from counterfactual reasoning to achieve fairer decision support systems. We utilize a definition of fairness to determine the bandit feedback in a counterfactual setting that learns a classification strategy from offline data and balances classification performance against the fairness measure. In our experiments, we demonstrate that the counterfactual setting can be effectively leveraged to learn fair models, with competitive results compared to a well-known baseline system.
DOI: 10.1145/3397271.3401291
An Intent-guided Collaborative Machine for Session-based Recommendation
Zhiqiang Pan, Fei Cai, Yanxiang Ling, M. de Rijke
Session-based recommendation produces item predictions mainly based on anonymous sessions. Previous studies have leveraged collaborative information from neighbor sessions to boost the recommendation accuracy for a given ongoing session. However, previous work often selects the most recent sessions as candidate neighbors and thereby fails to identify the most related neighbors needed to obtain an effective neighbor representation. In addition, few existing methods simultaneously consider the sequential signal and the most recent interest in an ongoing session. In this paper, we introduce an Intent-guided Collaborative Machine for Session-based Recommendation (ICM-SR). ICM-SR encodes an ongoing session by leveraging the prior sequential items and the last item to generate an accurate session representation, which is then used to produce initial item predictions as intent. After that, we design an intent-guided neighbor detector to locate the correct neighbor sessions. Finally, the representations of the current session and the neighbor sessions are adaptively combined by a gated fusion layer to produce the final item recommendations. Experiments conducted on two public benchmark datasets show that ICM-SR achieves a significant improvement in terms of Recall and MRR over state-of-the-art baselines.
DOI: 10.1145/3397271.3401273
EARS 2020: The 3rd International Workshop on ExplainAble Recommendation and Search
Yongfeng Zhang, Xu Chen, Yi Zhang, Min Zhang, C. Shah
Explainable recommendation and search aim to develop models or methods that not only generate high-quality recommendation or search results, but also provide interpretability of the models or explanations of the results for users or system designers, which can help improve system transparency, persuasiveness, trustworthiness, and effectiveness. This is even more important in personalized search and recommendation scenarios, where users would like to know why a particular product, web page, news report, or friend suggestion appears in their search and recommendation lists. The workshop focuses on the research and application of explainable recommendation, search, and a broader scope of IR tasks. It will gather researchers as well as practitioners in the field for discussions, idea exchange, and research promotion. It will also generate insightful debate about recent regulations regarding AI interpretability, reaching a broader community including, but not limited to, IR, machine learning, AI, and data science.
DOI: 10.1145/3397271.3401468
Training Mixed-Objective Pointing Decoders for Block-Level Optimization in Search Recommendation
Harsh Kohli
Related or follow-up suggestions for a web query in search engines are often optimized based on several different parameters: relevance to the original query, diversity, click probability, etc. One or more rankers may be trained to score each suggestion from a candidate pool based on these factors. These scorers are usually trained as pairwise classification tasks, where each training example consists of a user query and a single suggestion from the list of candidates. We propose an architecture that takes all candidate suggestions associated with a given query and outputs a suggestion block. We discuss the benefits of such an architecture over traditional approaches and experiment with further enforcing each individual metric through mixed-objective training.
DOI: 10.1145/3397271.3401236
Extractive Snippet Generation for Arguments
Milad Alshomary, Nick Düsterhus, Henning Wachsmuth
Snippets are used in web search to help users assess the relevance of retrieved results to their query. Recently, specialized search engines have arisen that retrieve pro and con arguments on controversial issues. We argue that standard snippet generation is insufficient to represent the core reasoning of an argument. In this paper, we introduce the task of generating a snippet that represents the main claim and reason of an argument. We propose a query-independent extractive summarization approach to this task that uses a variant of PageRank to assess the importance of sentences based on their context and argumentativeness. In both automatic and manual evaluation, our approach outperforms strong baselines.
DOI: 10.1145/3397271.3401186
Searching the Web for Cross-lingual Parallel Data
Ahmed El-Kishky, Philipp Koehn, Holger Schwenk
While the World Wide Web provides a large amount of text in many languages, cross-lingual parallel data is more difficult to obtain. Despite its scarcity, this parallel cross-lingual data plays a crucial role in a variety of natural language processing tasks, with applications in machine translation, cross-lingual information retrieval, and document classification, as well as in learning cross-lingual representations. Here, we describe the end-to-end process of searching the web for parallel cross-lingual texts. We frame obtaining parallel text as a retrieval problem in which the goal is to retrieve cross-lingual parallel text from a large, multilingual web-crawled corpus. We introduce techniques for searching for cross-lingual parallel data based on language, content, and other metadata. We motivate and introduce multilingual sentence embeddings as a core tool and demonstrate techniques and models that leverage them for identifying parallel documents and sentences, as well as techniques for retrieving and filtering this data. We describe several large-scale datasets curated using these techniques and show how training on sentences extracted from parallel or comparable documents mined from the web can improve machine translation models and facilitate cross-lingual NLP.
DOI: 10.1145/3397271.3401417