Progressive Semantic Reasoning for Image Inpainting
J. Jin, Xinrong Hu, Kai He, Tao Peng, Junping Liu, Jie Yang
DOI: 10.1145/3442442.3451142 — Companion Proceedings of the Web Conference 2021, 2021-04-19

Abstract: Image inpainting aims to reconstruct the missing or unknown regions of a given image. As one of the most important topics in image processing, the task has attracted increasing research interest over the past few decades. Learning-based methods have been employed for this task and have achieved superior performance. Nevertheless, existing methods often produce artificial traces because they lack constraints on image characterization under different semantics. To address this issue, we propose a novel Progressive Semantic Reasoning (PSR) network, composed of three superposed generation networks that share parameters. The PSR network follows a typical end-to-end training procedure: it learns low-level semantic features and transfers them to a high-level semantic network for inpainting. Furthermore, a simple but effective Cross Feature Reconstruction (CFR) strategy is proposed to trade off semantic information from different levels. The approach is evaluated in extensive experiments on a variety of real-world datasets, and the results confirm its effectiveness compared with other state-of-the-art methods. The source code is available at https://github.com/sfwyly/PSR-Net.
How to See Smells: Extracting Olfactory References from Artworks
Mathias Zinnen
DOI: 10.1145/3442442.3453710 — Companion Proceedings of the Web Conference 2021, 2021-04-19

Abstract: 1 PROBLEM. Although smell is an essential part of how we experience the world, it is severely undervalued in the context of cultural heritage. The Odeuropa project aims at preserving and recreating the olfactory heritage of Europe. State-of-the-art methods of artificial intelligence are applied to large corpora of visual and textual data ranging from the 16th to the 20th century of European history to extract olfactory references. Via an ontology of smells, this information is stored in the "European Olfactory Knowledge Graph" (EOKG) following semantic web standards. My Ph.D. addresses the visual extraction part of the project. We will create a taxonomy of visual smell references and acquire a large corpus of artworks from various early modern European digital collections. Using computer vision techniques, we will implement a pipeline for the combined recognition of olfactory objects, poses, and iconographies, and annotate the images of our corpus accordingly. Following these steps, we will address the following research questions: (i) What visual representations of smell exist in European 16th- to 20th-century works of art, and how can these be represented in the EOKG as an ontology shared with the other work packages of the Odeuropa project? (ii) Which machine-learning techniques exist for the automated extraction of olfactory references in the visual arts? In particular, which techniques are suited to cope with the domain-shift problem that arises when applying computer vision to our field of research? (iii) How do the identified techniques perform in terms of established evaluation metrics, and which work best for the extraction of olfactory references?
Both the preservation of olfactory heritage [3] and the application of machine learning (ML) to cultural heritage [1] have been addressed before. However, in most cases machine-learning algorithms are treated as "black boxes" and their application does not contribute back to ML [4]. Computer vision techniques like object detection and pose estimation have successfully been applied to the domain of visual arts ([8], [2]) but have not achieved performance comparable to their application in the photographic domain. One reason for the success of computer vision on photographs is the availability of huge labeled datasets like ImageNet [10]. Datasets containing artworks
EUDETECTOR: Leveraging Language Model to Identify EU-Related News
Koustav Rudra, Danny Tran, M. Shaltev
DOI: 10.1145/3442442.3452324 — Companion Proceedings of the Web Conference 2021, 2021-04-19

Abstract: News media reflect the present state of a country or region to their audiences, and outlets post different kinds of news for their local and global readers. In this paper, we focus on Europe (more precisely, the EU) and propose a method to identify news that has an impact on Europe in any respect, such as finance, business, crime, or politics. Predicting the location a news story concerns is itself a challenging task, and most approaches restrict themselves to named entities or handcrafted features. We try to overcome that limitation: instead of relying only on named entities (European locations, politicians, etc.) and hand-crafted rules, we also exploit the context of news articles with the help of the pre-trained language model BERT. The language-model-based European news detector shows about a 9-19% improvement in F-score over baseline models. Interestingly, we observe that such models automatically capture named entities, their origin, and related signals, so no separate entity information is required. We also evaluate the role of such entities in the prediction and explore the tokens BERT actually attends to when deciding the news category. Entities such as persons, locations, and organizations turn out to be good rationale tokens for the prediction.
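The kind of baseline the EUDETECTOR paper improves on can be illustrated with a minimal, purely hypothetical sketch (the entity list and function names are invented here, not the authors' code): a rule-based classifier that flags an article as EU-related only if it mentions known EU-linked entities, without any notion of context.

```python
# Hypothetical entity-matching baseline of the kind BERT-based detectors
# outperform: it has no access to context, only to a fixed entity list.
EU_ENTITIES = {"eu", "european union", "brussels", "european commission",
               "germany", "france"}  # illustrative list, not from the paper

def entity_baseline_is_eu_news(article: str, threshold: int = 1) -> bool:
    """Rule-based baseline: label as EU news if at least `threshold`
    known EU-linked entity strings occur in the lower-cased text."""
    text = article.lower()
    hits = sum(1 for entity in EU_ENTITIES if entity in text)
    return hits >= threshold

print(entity_baseline_is_eu_news("The European Commission met in Brussels."))  # True
print(entity_baseline_is_eu_news("Local sports results from the county fair."))  # False
```

A contextual model can instead pick up stories that matter to the EU even when no entity from such a list appears, which is the limitation the paper targets.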
PolyU-CBS at the FinSim-2 Task: Combining Distributional, String-Based and Transformers-Based Features for Hypernymy Detection in the Financial Domain
Emmanuele Chersoni, Chu-Ren Huang
DOI: 10.1145/3442442.3451387 — Companion Proceedings of the Web Conference 2021, 2021-04-19

Abstract: In this contribution, we describe the systems presented by the PolyU CBS team at the second Shared Task on Learning Semantic Similarities for the Financial Domain (FinSim-2), where participating teams had to identify the correct hypernyms for a list of target terms from the financial domain. For this task, we ran classification experiments with several distributional, string-based, and Transformer-based features. Our results show that a simple logistic regression classifier, trained on a combination of word embeddings, semantic and string similarity metrics, and BERT-derived probabilities, achieves strong performance (above 90%) in financial hypernymy detection.
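The feature combination described in the PolyU-CBS abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the toy vectors and the `bert_prob` value stand in for real embeddings and a real BERT probability, and the actual system feeds such vectors into a trained logistic regression.

```python
import difflib
import math

def cosine(u, v):
    """Cosine similarity between two dense embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def hypernymy_features(term, hypernym, term_vec, hyp_vec, bert_prob):
    """Assemble one feature vector of the three kinds the paper combines:
    an embedding similarity, a string similarity, and a BERT-derived
    probability. (The real feature set is richer than this sketch.)"""
    string_sim = difflib.SequenceMatcher(None, term, hypernym).ratio()
    return [cosine(term_vec, hyp_vec), string_sim, bert_prob]

feats = hypernymy_features("treasury bond", "bond", [1.0, 0.0], [0.8, 0.6], 0.93)
print(feats[0])  # 0.8 — cosine of the two toy vectors
```

In the paper's setup, a logistic regression classifier is trained on many such (term, candidate hypernym) feature vectors and picks the highest-scoring hypernym per term.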
FinSBD-2021: The 3rd Shared Task on Structure Boundary Detection in Unstructured Text in the Financial Domain
Willy Au, Abderrahim Ait-Azzi, Juyeon Kang
DOI: 10.1145/3442442.3451378 — Companion Proceedings of the Web Conference 2021, 2021-04-19

Abstract: Document processing is a foundational pre-processing step for natural language applications in the financial domain. In this paper, we present the results of FinSBD-3, the third shared task on structure boundary detection in unstructured text in the financial domain, organized as part of the 1st Workshop on Financial Technology on the Web. Participants were asked to build systems detecting the boundaries of elements in unstructured text extracted from financial PDFs. This edition extends the previous shared tasks by adding the boundaries of visual elements such as tables, figures, page headers, and page footers, on top of the sentences, lists, and list items already covered in previous editions.
Modeling Text Data Over Time - Example on Job Postings
Jakob Jelencic
DOI: 10.1145/3442442.3453707 — Companion Proceedings of the Web Conference 2021, 2021-04-19

Abstract: Modeling multilingual text data over time is a challenging task. This Ph.D. focuses on the semantic representation of domain-specific, short to mid-length, time-stamped textual data. The proposed method is evaluated on job postings, where we model the demand for IT jobs. More specifically, we address the following three problems: unifying the representation of multilingual text data; clustering similar textual data; and using the proposed semantic representation to model and predict future demand for jobs. The work starts with a problem statement, followed by a description of the proposed approach and methodology, and concludes with an overview of first results and a summary of the ongoing research.
HierClasSArt: Knowledge-Aware Hierarchical Classification of Scholarly Articles
Mehwish Alam, Russa Biswas, Yiyi Chen, D. Dessí, Genet Asefa Gesese, Fabian Hoppe, Harald Sack
DOI: 10.1145/3442442.3451365 — Companion Proceedings of the Web Conference 2021, 2021-04-19

Abstract: The huge number of scholarly articles published every day across domains makes it hard for experts to organize and stay up to date with new research in a particular field. This study gives an overview of a new approach, HierClasSArt, for knowledge-aware hierarchical classification of scholarly articles in mathematics into a predefined taxonomy. The method combines neural networks with knowledge graphs, along with metadata information, for better document representation. This position paper further discusses the open problems of incorporating new articles and evolving hierarchies into the pipeline. The mathematics domain is used as a use case.
Inferring Sociodemographic Attributes of Wikipedia Editors: State-of-the-art and Implications for Editor Privacy
S. Brückner, F. Lemmerich, M. Strohmaier
DOI: 10.1145/3442442.3452350 — Companion Proceedings of the Web Conference 2021, 2021-04-19

Abstract: In this paper, we investigate the state of the art in machine learning models for inferring sociodemographic attributes of Wikipedia editors from their public profile pages, and the corresponding implications for editor privacy. To build the inference models, ground-truth labels are obtained via different strategies, using information publicly disclosed on editor profile pages, and different embedding techniques are used to derive features from editors' profile texts. In comparative evaluations of different machine learning models, we show that the highest prediction accuracy is obtained for the attribute gender, with precision values of 82% and 91% for women and men, respectively, and an average F1-score of 0.78. For other attributes such as age group, education, and religion, the classifiers reach F1-scores between 0.32 and 0.74, depending on the model class. Using merely the publicly disclosed information of Wikipedia editors, we highlight issues surrounding editor privacy on Wikipedia and discuss ways to mitigate this problem. We believe our work can help start a conversation about carefully weighing the potential benefits and harms that come with the existence of information-rich, pre-labeled profile pages of Wikipedia editors.
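For readers less familiar with the metric the Wikipedia-editor study reports, F1 is the harmonic mean of precision and recall. A minimal sketch (the paper gives precision values of 0.82 and 0.91 for the gender attribute but not the matching recalls, so any recall values plugged in below would be illustrative, not the paper's):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; 0.0 when both are zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# F1 equals precision and recall when they agree, and is pulled
# toward the smaller of the two when they diverge:
print(f1_score(0.5, 0.5))  # 0.5
```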
SciBiD: Novel Scientometrics and NoSQL-enabled Scalable and domain-specific Analysis of Big Scholar Data
M. Bohlouli, Jonathan Hermann, Fabian Sunnus
DOI: 10.1145/3442442.3453544 — Companion Proceedings of the Web Conference 2021, 2021-04-19

Abstract: It is important to understand how researchers develop and succeed in their field relative to the field's own growth. In many cases, people rely only on indices such as citation count, h-index, and i10-index, and compare scientists from different fields with the same variables as if their situations were alike. This is not a fair comparison, since fields differ in how fast they develop and how often they are cited. In this paper, we borrow the acceleration concept from physics and propose a new method, with new metrics, to evaluate scientists efficiently and fairly through a real-time analysis of their recent status relative to their field's growth. The method takes into account inputs such as whether a person is a beginning or an established scientist and applies all such key inputs in the evaluation, which also evolves over time. The results show improved evaluation compared with state-of-the-art metrics.
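The "acceleration" idea borrowed from physics can be sketched as follows. This is a guess at the general shape, not the SciBiD formula (the abstract does not define it): treating yearly citation counts as positions, the discrete second difference plays the role of acceleration, and dividing by a field-level growth rate makes fast-moving fields comparable with slow ones.

```python
# Hypothetical sketch: citation "acceleration" as a second discrete
# difference of a yearly citation series, normalized by field growth.
def citation_acceleration(yearly_citations, field_growth_rate=1.0):
    """For each year t >= 2, compute c[t] - 2*c[t-1] + c[t-2] (the discrete
    second derivative) and scale it by the field's growth rate."""
    accel = []
    for t in range(2, len(yearly_citations)):
        second_diff = (yearly_citations[t]
                       - 2 * yearly_citations[t - 1]
                       + yearly_citations[t - 2])
        accel.append(second_diff / field_growth_rate)
    return accel

print(citation_acceleration([2, 5, 10, 18]))  # [2.0, 3.0]
```

A researcher whose citations grow faster than linearly gets positive acceleration even with modest absolute counts, which is the fairness argument the abstract makes against raw citation counts and h-indices.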
Demonstration of Faceted Search on Scholarly Knowledge Graphs
Golsa Heidari, Ahmad Ramadan, M. Stocker, S. Auer
DOI: 10.1145/3442442.3458605 — Companion Proceedings of the Web Conference 2021, 2021-04-19

Abstract: Scientists look for the most accurate and relevant answers to their queries on the scholarly literature. Traditional scholarly search systems list documents instead of providing direct answers to search queries. Because the content of scholarly literature is not represented semantically, it is not machine-readable, and a search over it ends up as a full-text search rather than a search over the content itself. In this demo, we present a faceted search system that retrieves data from a scholarly knowledge graph; the results can be compared and filtered to better satisfy user information needs. The novelty of our approach is the use of dynamic facets: facets are not fixed but change according to the content of a comparison.
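The dynamic-facet idea from the faceted-search demo can be sketched in a few lines. This is an illustrative reconstruction, not the demo's code: the property names and records are invented, and a real system would pull them from the knowledge graph.

```python
# Sketch of "dynamic facets": the facets offered are derived from whatever
# properties actually occur in the current comparison, rather than fixed.
def dynamic_facets(comparison_records):
    """Collect, per property, the set of values occurring in the comparison.
    Properties with more than one distinct value become filterable facets;
    properties on which all records agree are useless as filters and dropped."""
    values_by_property = {}
    for record in comparison_records:
        for prop, value in record.items():
            values_by_property.setdefault(prop, set()).add(value)
    return {p: vals for p, vals in values_by_property.items() if len(vals) > 1}

papers = [
    {"method": "CNN", "dataset": "ImageNet", "year": 2020},
    {"method": "Transformer", "dataset": "ImageNet", "year": 2021},
]
facets = dynamic_facets(papers)  # 'dataset' is constant, so it is not a facet
```

A different comparison (say, papers spanning several datasets) would yield a different facet set, which is exactly the "facets change according to the content of a comparison" behavior the abstract describes.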