In this demonstration, we put ourselves in the place of a website manager who seeks to use browser fingerprinting for web authentication. The first step is to choose which attributes to implement among the hundreds that are available. To do so, we developed BrFAST, an attribute selection platform that includes FPSelect, an algorithm that rigorously selects attributes according to a trade-off between security and usability. BrFAST is configured with a set of parameters for which we provide default values so that it is usable as is. Notably, we include the resources needed to use two publicly available browser fingerprint datasets. BrFAST can be extended with other parameters: other attribute selection methods, other measures of security and usability, or other fingerprint datasets. BrFAST helps visualize the exploration of the possibilities during the search for the best attribute set, evaluate the properties of attribute sets, and compare several attribute selection methods. During the demonstration, we compare the attribute sets selected by FPSelect with those selected by commonly used methods according to the properties of the resulting browser fingerprints (e.g., their usability, their unicity).
Nampoina Andriamilanto and T. Allard. "BrFAST: a Tool to Select Browser Fingerprinting Attributes for Web Authentication According to a Usability-Security Trade-off." In Companion Proceedings of the Web Conference 2021, April 19, 2021. DOI: https://doi.org/10.1145/3442442.3458610
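The security/usability trade-off that FPSelect navigates can be illustrated with a toy greedy selection. The attribute names, the scores, and the greedy gain-per-cost strategy below are all assumptions made for illustration; FPSelect itself performs a rigorous guided search over attribute subsets, not this simple heuristic.

```python
# Toy sketch: pick fingerprinting attributes under a collection-cost budget.
# (distinctiveness gain, collection cost in ms) per attribute: made-up numbers.
ATTRIBUTES = {
    "userAgent": (0.30, 1.0),
    "canvas":    (0.25, 40.0),
    "fonts":     (0.20, 120.0),
    "timezone":  (0.05, 0.5),
    "screen":    (0.10, 0.5),
}

def greedy_select(attributes, cost_budget):
    """Pick attributes by gain-per-cost ratio until the budget is spent."""
    chosen = []
    remaining = cost_budget
    ranked = sorted(attributes,
                    key=lambda a: attributes[a][0] / attributes[a][1],
                    reverse=True)
    for name in ranked:
        gain, cost = attributes[name]
        if cost <= remaining:
            chosen.append(name)
            remaining -= cost
    return chosen

print(greedy_select(ATTRIBUTES, cost_budget=45.0))
```

A real selector would also score how distinctive the chosen set is jointly, since attributes are correlated; that is precisely the part a platform like BrFAST is meant to evaluate.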
Intent detection plays an important role in customer service dialog systems for providing high-quality service in the financial industry. The lack of publicly available datasets and the high cost of annotation are two challenging issues in this research direction. To overcome these challenges, we propose a social-media-enhanced self-training approach for intent detection that uses label names only. The experimental results show the effectiveness of the proposed method.
JianTao Huang, Yi-Ru Liou, and Hsin-Hsi Chen. "Enhancing Intent Detection in Customer Service with Social Media Data." In Companion Proceedings of the Web Conference 2021, April 19, 2021. DOI: https://doi.org/10.1145/3442442.3451377
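The "label names only" starting point of such a self-training pipeline can be sketched as seeding pseudo-labels by matching label names (and simple expansions) against unlabeled texts. The label names, keyword expansions, and example posts below are invented for illustration; the paper's actual pipeline additionally leverages social media data and iteratively retrains a classifier, which is omitted here.

```python
# Toy seeding step for self-training: pseudo-label unlabeled texts by
# matching label-name keywords. A real system would then train a model
# on these pseudo-labels and iterate.
LABEL_NAMES = {
    "transfer": ["transfer", "send money"],
    "balance":  ["balance", "how much"],
}

def seed_pseudo_labels(texts, label_names):
    labeled = []
    for text in texts:
        lowered = text.lower()
        for label, keywords in label_names.items():
            if any(k in lowered for k in keywords):
                labeled.append((text, label))
                break  # first matching label wins in this toy version
    return labeled

posts = ["How much is left in my account?", "I want to send money abroad", "hello"]
print(seed_pseudo_labels(posts, LABEL_NAMES))
```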
The worldwide refugee problem has a long history, continues to this day, and will unfortunately continue into the foreseeable future. Efforts to anticipate, mitigate, and prepare for refugee counts, however, are still lacking. There are many potential causes, but the published research has primarily focused on identifying ways to integrate already existing refugees into the various communities wherein they ultimately reside, rather than on preventive measures. The work proposed herein uses a set of features that can be divided into three basic categories: 1) sociocultural, 2) socioeconomic, and 3) economic, which refer to the nature of each proposed predictive feature. For example, corruption perception is a sociocultural feature, access to healthcare is a socioeconomic feature, and inflation is an economic feature. Forty-five predictive features were collected for various years and countries of interest. As may seem intuitive, the features in the "economic" category produced the highest predictive value from the regression technique employed. However, additional potential predictive features that have not been previously addressed stood out in our experiments. These include the global peace index (gpi), freedom of expression (fe), internet users (iu), access to healthcare (hc), cost of living index (coli), local purchasing power index (lppi), homicide rate (hr), access to justice (aj), and women's property rights (wpr). Many of these features are nascent in terms of both their development and collection, and some are not yet collected at a universal level, meaning that the data is missing for some countries and years. Ongoing work regarding these datasets for predicting refugee counts is also discussed.
Esther Mead, Maryam Maleki, Recep Erol, and Nidhi Agarwal. "Proposing a Broader Scope of Predictive Features for Modeling Refugee Counts." In Companion Proceedings of the Web Conference 2021, April 19, 2021. DOI: https://doi.org/10.1145/3442442.3453457
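The kind of regression relationship the abstract describes can be illustrated with a minimal single-feature least-squares fit. The data points below are synthetic (a made-up series relating a peace-index-style score to refugee counts), and the paper's actual study uses 45 features over real country/year data; this only shows the fitting mechanics.

```python
# Minimal ordinary-least-squares fit for one predictor, pure stdlib.
def fit_ols(xs, ys):
    """Return (slope, intercept) of the least-squares line through (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Synthetic (peace index, refugee count in thousands) pairs:
# a worse (higher) index paired with more refugees.
gpi = [1.5, 2.0, 2.5, 3.0]
refugees = [10.0, 20.0, 30.0, 40.0]
slope, intercept = fit_ols(gpi, refugees)
print(slope, intercept)
```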
Wikidata recently added support for entity schemas based on shape expressions (ShEx). These schemas play an important role in validating items belonging to a multitude of domains on Wikidata. However, the number of entity schemas created by contributors is relatively low compared to the number of WikiProjects. The past couple of years have seen attempts at simplifying shape expressions and at building tools for creating them. In this article, we present ShExStatements, whose goal is to simplify writing shape expressions for Wikidata.
J. Samuel. "ShExStatements: Simplifying Shape Expressions for Wikidata." In Companion Proceedings of the Web Conference 2021, April 19, 2021. DOI: https://doi.org/10.1145/3442442.3452349
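The core idea, deriving a ShEx shape from simple tabular property/value rows instead of writing ShEx by hand, can be sketched with a toy generator. The row format and the generated syntax below are simplified illustrations and are not ShExStatements' actual input specification or output.

```python
# Toy generator: turn (property, value) rows into a simplified ShEx-like shape.
# A value of "." means "any value is allowed" in this sketch.
def rows_to_shex(shape_name, rows):
    lines = [f"<{shape_name}> {{"]
    for prop, value in rows:
        constraint = "." if value == "." else f"[ wd:{value} ]"
        lines.append(f"  wdt:{prop} {constraint} ;")
    lines.append("}")
    return "\n".join(lines)

# e.g. a painting (Q3305213) must be an instance of (P31) painting,
# and may have any creator (P170).
print(rows_to_shex("painting", [("P31", "Q3305213"), ("P170", ".")]))
```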
In December 2020, after a journey of almost 17 years, the Bengali Wikipedia crossed the milestone of 100,000 articles. In this journey, the Bengali language edition of the world’s largest encyclopedia has experienced multiple changes, with a promising increase in overall performance in terms of the growth of community members and content. This paper analyzes various associated factors throughout this journey, including the number of active editors, the number of content pages, and pageviews, along with the connection between outreach activities and these parameters. The gender gap is a worldwide problem and is quite prevalent in Bengali Wikipedia as well; it appears unchanged over the years, consequently leaving a conspicuous disparity in the movement. The paper inspects the present state of Bengali Wikipedia through quantitative factors, with a relative comparison to other regional languages.
Ankan Ghosh Dastider. "A Brief Analysis of Bengali Wikipedia’s Journey to 100,000 Articles." In Companion Proceedings of the Web Conference 2021, April 19, 2021. DOI: https://doi.org/10.1145/3442442.3452340
The quality of Wikipedia articles is evaluated manually, which is time-consuming as well as susceptible to human bias. Automated assessment of these articles may help minimize both the overall time and manual errors. In this paper, we present a novel approach based on structural analysis of the Wikigraph to automate the estimation of the quality of Wikipedia articles. We examine the network built from the complete set of English Wikipedia articles and identify how the network signatures of articles vary with their quality. Our study shows that these signatures are useful for estimating the quality grades of unassessed articles with an accuracy surpassing existing approaches in this direction. The results of the study may help reduce the need for human involvement in quality assessment tasks.
Anamika Chhabra, S. Srivastava, S. Iyengar, and P. Saini. "Structural Analysis of Wikigraph to Investigate Quality Grades of Wikipedia Articles." In Companion Proceedings of the Web Conference 2021, April 19, 2021. DOI: https://doi.org/10.1145/3442442.3452345
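The per-article "network signature" idea can be illustrated with the simplest structural features, in- and out-degree, extracted from a toy link graph. The article titles below are made up, and the paper's analysis uses richer signatures over the full English Wikipedia graph; this sketch only shows the kind of per-article feature vector such a study would feed into a quality classifier.

```python
# Toy structural signatures from an article link graph (title -> links out).
LINKS = {
    "Physics": ["Energy", "Mass"],
    "Energy":  ["Mass"],
    "Mass":    [],
}

def signatures(links):
    """Return {article: (in_degree, out_degree)} for every article."""
    out_deg = {a: len(ts) for a, ts in links.items()}
    in_deg = {a: 0 for a in links}
    for targets in links.values():
        for t in targets:
            in_deg[t] += 1
    return {a: (in_deg[a], out_deg[a]) for a in links}

print(signatures(LINKS))
```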
To attract unsuspecting readers, news article headlines and abstracts are often written with speculative sentences or clauses. Male dominance in the news is very evident, whereas females are seen as “eye candy” or “inferior”, and are underrepresented and under-examined within the same news categories as their male counterparts. In this paper, we present an initial study on gender bias in news abstracts in two large English news datasets used for news recommendation and news classification. We perform three large-scale yet effective text-analysis fairness measurements on 296,965 news abstracts. In particular, to our knowledge, we construct two of the largest benchmark datasets of possessive (gender-specific and gender-neutral) nouns and attribute (career-related and family-related) words, which we will release to foster bias and fairness research and to aid in developing fair NLP models that eliminate gender bias. Our studies demonstrate that females are immensely marginalized and suffer from socially constructed biases in the news. This paper devises a methodology whereby news content can be analyzed on a large scale utilizing natural language processing (NLP) techniques from machine learning (ML) to discover both implicit and explicit gender biases.
Jamell Dacon and Haochen Liu. "Does Gender Matter in the News? Detecting and Examining Gender Bias in News Articles." In Companion Proceedings of the Web Conference 2021, April 19, 2021. DOI: https://doi.org/10.1145/3442442.3452325
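A first text-analysis measurement in this spirit is simply counting gender-specific words across a batch of abstracts. The two word lists below are tiny stand-ins for the paper's much larger benchmark datasets, and the counting scheme is an illustrative assumption, not the paper's exact methodology.

```python
# Toy gendered-word count over a batch of news abstracts.
MALE = {"he", "his", "him", "man", "men"}
FEMALE = {"she", "her", "hers", "woman", "women"}

def gender_counts(texts):
    """Return (male_count, female_count) over whitespace-tokenized texts."""
    male = female = 0
    for text in texts:
        for token in text.lower().split():
            word = token.strip(".,;:!?\"'")  # strip trailing punctuation
            if word in MALE:
                male += 1
            elif word in FEMALE:
                female += 1
    return male, female

abstracts = ["He said his results were final.", "She thanked her colleagues."]
print(gender_counts(abstracts))
```

Comparing such counts across news categories is one way a disparity like the one the paper reports would surface.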
Wikipedia is a critical platform for organizing and disseminating knowledge. One of the key principles of Wikipedia is the neutral point of view (NPOV), which requires that bias not be injected into the treatment of subject matter. As part of our research vision to develop resilient bias detection models that can self-adapt over time, we present in this paper our initial investigation of the potential of a cross-domain transfer learning approach to improve Wikipedia bias detection. The ultimate goal is to future-proof Wikipedia in the face of dynamic, evolving kinds of linguistic bias and adversarial manipulations intended to evade NPOV scrutiny. We highlight the impact of incorporating evidence of bias from other subjectivity-rich domains into further pre-training of a BERT-based model, resulting in strong performance in comparison with traditional methods.
K. Madanagopal and James Caverlee. "Towards Ongoing Detection of Linguistic Bias on Wikipedia." In Companion Proceedings of the Web Conference 2021, April 19, 2021. DOI: https://doi.org/10.1145/3442442.3452353
New discoveries in science are often built upon previous knowledge. Ideally, such dependency information should be made explicit in a scientific knowledge graph. The Keystone Framework was proposed for tracking validity dependencies among papers: a keystone citation indicates that the validity of a given paper depends on a previously published paper it cites. In this paper, we propose and evaluate a strategy that repurposes rhetorical category classifiers for the novel application of extracting keystone citations that relate to research methods. Five binary rhetorical category classifiers were constructed to identify Background, Objective, Methods, Results, and Conclusions sentences in biomedical papers. The resulting classifiers were used to test the strategy against two datasets. The initial strategy assumed that only citations contained in Methods sentences were methods keystone citations, but our analysis revealed that citations contained in sentences classified as either Methods or Results had a high likelihood of being methods keystone citations. Future work will focus on fine-tuning the rhetorical category classifiers, experimenting with multiclass classifiers, evaluating the revised strategy with more data, and constructing a larger gold-standard citation context sentence dataset for model training.
Yuanxi Fu, Jodi Schneider, and Catherine Blake. "Finding Keystone Citations for Constructing Validity Chains among Research Papers." In Companion Proceedings of the Web Conference 2021, April 19, 2021. DOI: https://doi.org/10.1145/3442442.3451368
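The role the rhetorical classifiers play can be sketched with a keyword heuristic that tags a sentence as Methods-like or Results-like, the two categories found most likely to contain methods keystone citations. The cue phrases below are invented for illustration; the paper's classifiers are trained models, not keyword rules.

```python
# Toy stand-in for binary rhetorical category classifiers: tag a sentence
# as Methods, Results, or Other using hand-picked cue phrases.
METHODS_CUES = {"we used", "was performed", "protocol", "following the method"}
RESULTS_CUES = {"we found", "showed that", "significantly", "was observed"}

def rhetorical_tag(sentence):
    lowered = sentence.lower()
    if any(cue in lowered for cue in METHODS_CUES):
        return "Methods"
    if any(cue in lowered for cue in RESULTS_CUES):
        return "Results"
    return "Other"

print(rhetorical_tag("Samples were analyzed following the method of Smith et al. [12]."))
print(rhetorical_tag("We found a strong effect, consistent with [3]."))
```

Under the revised strategy, a citation appearing in a sentence tagged Methods or Results would be flagged as a candidate methods keystone citation.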
Bounce rate prediction for clicked ads in sponsored search advertising is crucial for improving the quality of ads shown to the user. Bounce rate represents the proportion of landing pages for clicked ads on which users spend less than a specified time, signifying that the user did not find a match between their query intent and the landing page content. In the pay-per-click revenue model for search engines, higher bounce rates mean advertisers get charged without meaningful user engagement, which impacts user and advertiser retention in the long term. In real-time search engine settings, complex ML models are prohibitive due to stringent latency requirements. Historical logs are also ineffective for rare (tail) queries where the data is sparse, as well as for matching user intent to ad copy when the query and bidded keywords do not exactly overlap (smart match). In this paper, we propose a real-time bounce rate prediction system that leverages lightweight features, such as modified term frequency (tf) and positional and proximity features computed from ad landing pages, and improves prediction for rare queries. The model preserves privacy and uses no user-based features. The entire ensemble is trained on millions of examples from the offline user log of the Bing commercial search engine and improves the ranking metrics for tail queries and smart match by more than 2x compared to a model that uses only ad-copy and advertiser features.
Yeshi Dolma, Raunak Kalani, Astha Agrawal, and Saurav Basu. "Improving Bounce Rate Prediction for Rare Queries by Leveraging Landing Page Signals." In Companion Proceedings of the Web Conference 2021, April 19, 2021. DOI: https://doi.org/10.1145/3442442.3453540
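The kinds of lightweight landing-page features the abstract mentions can be sketched as follows. The exact feature definitions used in the deployed system are not public, so the log-damped term frequency, normalized first-match position, and span-based proximity formulas below are assumptions for illustration only.

```python
import math

def landing_page_features(query, page_text):
    """Toy (modified_tf, first_position, proximity) features for a query/page pair."""
    q_terms = set(query.lower().split())
    tokens = page_text.lower().split()
    positions = [i for i, t in enumerate(tokens) if t in q_terms]
    # Log-damped ("modified") frequency of query-term occurrences on the page.
    mod_tf = math.log1p(len(positions))
    # Position of the first match, normalized by page length (0.0 = at the top,
    # 1.0 = no match at all).
    first_pos = positions[0] / len(tokens) if positions else 1.0
    # Proximity: matched-term density over the span covering all matches
    # (1.0 = all matches adjacent).
    span = (positions[-1] - positions[0] + 1) if len(positions) > 1 else 1
    proximity = len(positions) / span
    return mod_tf, first_pos, proximity

print(landing_page_features("cheap flights",
                            "book cheap flights to paris with cheap fares"))
```

Features like these are cheap enough to compute within real-time latency budgets, which is the constraint motivating the paper's lightweight design.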