A Rule-based Skyline Computation over a Dynamic Database
Ghazaleh Babanejad Dehaki, H. Ibrahim, F. Sidi, N. Udzir, A. Alwan. DOI: 10.1145/3428757.3429117
The skyline query, which relies on the notion of Pareto dominance, filters the data items of a database so that only those data items that are not worse than any others are selected as skylines. However, the dynamic nature of databases, whose states and/or structures change throughout their lifetime to incorporate the current and latest information of database applications, requires a new set of skylines to be derived. Blindly computing skylines on the new state/structure of a database is inefficient, as not all the data items are affected by the changes. Hence, this paper proposes a rule-based approach to tackle this issue, with the main aim of avoiding unnecessary skyline computations. Based on the type of operation that changes the state/structure of a database, i.e. inserting/deleting/updating data items or adding/removing dimensions, a set of rules is defined. In addition, the prominent dominance relationships identified during pairwise comparisons are retained and then utilised when computing the new set of skylines. Several analyses have been conducted to evaluate the performance and demonstrate the efficiency of the proposed solution.

Analysis of Relationship between Confirmation Bias and Web Search Behavior
Masaki Suzuki, Yusuke Yamamoto. DOI: 10.1145/3428757.3429086
In this paper, we analyze the relationship between web search behavior and confirmation bias, the tendency of people to prefer browsing information that supports their existing opinions and beliefs. We conducted an online user experiment in which 89 participants were asked to perform a web search task to obtain health information. In this experiment, we controlled the participants' prior beliefs by presenting them with prior information to manipulate their impressions of a search topic before they performed the search task. We then analyzed their behavioral logs during the search task. The results demonstrate that participants with confirmation bias frequently browsed only the top search results and completed the search task quickly. The results also indicate that even when participants with confirmation bias possessed health literacy, they did not utilize it, although such literacy is essential for viewing health information on the web critically.

Extracting Rhetorical Question from Twitter
Rinji Suzuki, Akiyo Nadamoto. DOI: 10.1145/3428757.3429123
Many types of content exist on SNSs, and sometimes authors' opinions are not properly communicated to the reader; such content can become inflammatory, a phenomenon known as flaming. We therefore consider it important to extract passages in which the author's opinion is not communicated correctly and to present them to the reader. This study particularly examines tweets, the messages of the Twitter SNS, and specifically focuses on "rhetorical questions." Rhetorical questions are sometimes known as mandarin sentences. People might misunderstand them and flame the author, so we consider it important to extract rhetorical question tweets automatically and present them. This paper proposes a method to extract rhetorical question tweets. First, based on a preliminary experiment, we propose two definitions of rhetorical question tweets. Next, we propose a method for extracting rhetorical question tweets based on these definitions. Definition 1 covers tweets that include the author's opinion within a question. Definition 2 covers tweets that include an author's opinion sentence, a commentary sentence, or a sentiment reversal within a sentence. Specifically, we propose methods for opinion sentence extraction, commentary sentence extraction, and sentiment reversal extraction. Furthermore, we conducted two experiments and measured the benefits of our proposed methods.

A Methodological Approach to Compare Ontologies: Proposal and Application for SLAM Ontologies
Yudith Cardinale, M. Cornejo-Lupa, Regina P. Ticona-Herrera, D. Barrios-Aranibar. DOI: 10.1145/3428757.3429091
Representing the knowledge of any domain with flexible and well-defined models, such as ontologies, provides the basis for developing efficient and interoperable solutions; hence, a proliferation of ontologies in many domains has been unleashed. It is necessary to define how to compare such ontologies in order to decide which one best suits the specific needs of users/developers. Since ontologies first emerged, several studies have proposed criteria to evaluate them. Nevertheless, there is still a lack of practical and reproducible guidelines for driving a comparative evaluation of ontologies as a systematic process. In this paper, we propose a methodological process to qualitatively and quantitatively compare ontologies at the Lexical, Structural, and Domain Knowledge levels, considering Correctness and Quality perspectives. Since the evaluation methods of our proposal are based on a gold standard, it can be customized to compare ontologies in any domain. To show the suitability of our proposal, we apply our methodological approach to a comparative study of ontologies in the robotics domain, in particular for the Simultaneous Localization and Mapping (SLAM) problem. With this case study, we demonstrate that this methodological comparative process can identify the strengths and weaknesses of ontologies, as well as the gaps that still need to be filled in the target domain (SLAM in our case).

A Resume Generator with Augmented Reality Features
Mary Chew Jia Yi, Ong Huey Fang. DOI: 10.1145/3428757.3429094
A resume is an essential tool for job seekers when it comes to job hunting. This paper presents AResume, a web-based resume generator with augmented reality (AR) features. The web-based application is built for job applicants who have difficulty creating a professional resume from scratch or who resort to a 'one-size-fits-all' approach. AR.js and A-Frame are the main libraries, or web AR frameworks, employed in the development of AResume to enrich the augmented reality experience. Web-based AR was chosen over mobile AR because it is lightweight, offers cross-platform support, and requires no installation. A generated resume is embedded with a QR code and AR markers. The QR code can be scanned using a smartphone to direct users to the AR scanner website. Users are then able to move the scanner from marker to marker to view different content such as videos, photos, and documents. AResume not only enables job applicants to create a resume with augmented features but also provides a better user experience for hiring managers when reviewing resumes.

An Analysis of Confidentiality Issues in Data Lakes
João Luiz Monteiro Joaquim, R. Mello. DOI: 10.1145/3428757.3429109
A data lake is a relatively recent technology for maintaining and providing access to voluminous and heterogeneous data sources. Governments, large corporations, and startups have increasingly considered it for storing useful data and obtaining valuable business trends. However, there is still a long evolutionary path for data lake management, where data security is an open issue. In this paper we investigate confidentiality issues in the context of data lakes, with a focus on authentication and authorization. We apply a systematic review methodology focusing on approaches that provide some technology for authentication and authorization management. We then compare the selected studies with respect to the technologies they use and analyze how they are positioned with respect to a reference architecture for a data lake management system. This is the first paper to present this kind of analysis for data lakes.

Web Scraping versus Twitter API: A Comparison for a Credibility Analysis
Irvin Dongo, Yudith Cardinale, A. Aguilera, F. Martínez, Yuni Quintero, Sergio Barrios. DOI: 10.1145/3428757.3429104
Twitter is one of the most popular information sources available on the Web. Thus, many studies have focused on analyzing the credibility of the shared information. Most proposals use either the Twitter API or web scraping to extract the data needed for such analysis. Both extraction techniques have advantages and disadvantages. In this work, we present a study that evaluates their performance and behavior. The motivation for this research comes from the need to know how to extract online information in order to analyze, in real time, the credibility of the content posted on the Web. To do so, we develop a framework that offers both data extraction alternatives and implements a previously proposed credibility model. Our framework is implemented as a Google Chrome extension able to analyze tweets in real time. Results show that both methods produce identical credibility values when a robust normalization process is applied to the text (i.e., the tweet). Moreover, concerning time performance, web scraping is faster than the Twitter API and more flexible in terms of obtaining data; however, web scraping is very sensitive to website changes.

Tailored Graph Embeddings for Entity Alignment on Historical Data
J. Baas, M. Dastani, A. Feelders. DOI: 10.1145/3428757.3429111
In the domain of Dutch cultural heritage, various data sets describe different aspects of life during the Dutch Golden Age. These data sets, in the form of RDF graphs, use different standards and contain noise in the values of literal nodes, such as misspelled names and uncertainty in dates. The Golden Agents project aims at answering queries about the Dutch Golden Age using these distributed and independently maintained data sets. One problem in this project, among many others, is the identification of persons who occur in multiple data sets but under different URIs. This paper aims to solve this specific problem and generate a linkset, i.e. a set of pairs of URIs that are judged to represent the same person. We use domain knowledge in the application of an existing node context generation algorithm, whose output serves as input for GloVe, an algorithm originally designed for embedding words. This embedding is then used to train a classifier on pairs of URIs that are known duplicates and non-duplicates. Using just the cosine similarity between URI pairs in embedding space for prediction, we obtain a simple classifier with an F½-score of around 0.85, even when very few training examples are provided. On larger training sets, more complex classifiers are shown to reach an F½-score of up to 0.88.

Extraction Method for a Recipe's Uniqueness based on Recipe Frequency and LexRank of Procedures
Tatsuya Oonita, D. Kitayama. DOI: 10.1145/3428757.3429128
Users often obtain recipes from culinary websites when they are cooking. In such cases, various recipes for the same dish are displayed, so users compare the recipes and decide which one they want to use. We believe that it would be easier to select a recipe if the points of uniqueness of each recipe were extracted and presented in the search results. In this paper, we propose a method for extracting the uniqueness of a recipe by analyzing the ingredients and procedures used. Specifically, we reference a basic recipe that describes the standard cooking method of a dish and extract the differences between it and other recipes, ascertaining the points of uniqueness of each recipe using the importance of and correspondence between procedures. By applying the proposed method to several recipes, we confirmed that it is able to extract this uniqueness.

Chemoinformatics for Data Scientists: an Overview
Shrooq A. Alsenan (436203869@student.ksu.edu.sa), Isra M. Al-Turaiki (ialturaiki@ksu.edu.sa), Alaaeldin M. Hafez (ahafez@ksu.edu.sa). DOI: 10.1145/3428757.3429147
Information Systems and Information Technology Departments, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia.