2021 IEEE International Conference on Big Knowledge (ICBK)

Pub Date: 2021-12-01. DOI: 10.1109/ICKG52313.2021.00072
Intervention Prediction for Patients with Pressure Injury Using Random Forest
Liuqi Jin, Yan Pan, Jiaoyun Yang, Lin Han, Lin Lv, Miki Raviv, Ning An
Pressure injury (PI) is one of the major causes of short-term death. Early intervention for patients at risk plays an essential role in PI prevention; however, nurses may overlook these risks. This paper aims to establish a model that predicts interventions from a patient's physical signs, which can help nurses develop care plans. We used data from 1,483 patients with 25 characteristics and 17 interventions, trained a Random Forest model whose parameters were optimized with Particle Swarm Optimization (PSO), and compared it with KNN, SVM, and Decision Tree classifiers. Under 10-fold cross-validation, the Random Forest achieved better accuracy than the other methods, with an F1 score of 0.84. This finding demonstrates the feasibility of using machine learning to help formulate care plans according to the predicted intervention classes. Our model shows that hemoglobin, Braden PI score, and age are the three most influential risk factors.
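The parameter search described above can be sketched with a minimal particle swarm optimizer. This is an illustrative stand-in, not the paper's implementation: the objective below is a smooth toy surrogate for the cross-validated F1 score (in practice it would train a Random Forest with the candidate n_estimators and max_depth and return the score), and its peak location is invented.

```python
import random

random.seed(1)

def surrogate_score(params):
    # Hypothetical stand-in for a cross-validated f1 score,
    # peaking near n_estimators=200, max_depth=10 (invented values).
    n_est, depth = params
    return -((n_est - 200) / 100) ** 2 - ((depth - 10) / 5) ** 2

def pso(score, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    pos = [[random.uniform(*bounds[d]) for d in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best position
    pbest_val = [score(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp the new position to the search bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = score(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, val = pso(surrogate_score, bounds=[(10, 500), (2, 30)])
```

The swarm converges toward the surrogate's optimum; swapping the surrogate for an actual cross-validation routine turns this into the kind of hyperparameter search the abstract describes.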
Pub Date: 2021-12-01. DOI: 10.1109/ICKG52313.2021.00049
Topic-Guided Knowledge Graph Construction for Argument Mining
Weichen Li, Patrick Abels, Zahra Ahmadi, Sophie Burkhardt, Benjamin Schiller, Iryna Gurevych, S. Kramer
Decision-making tasks usually follow five steps: identifying the problem, collecting data, extracting evidence, identifying arguments, and making the decision. This paper focuses on two of these steps: extracting evidence by building knowledge graphs (KGs) of specialized topics, and identifying the arguments of sentences through sentence-level argument mining. We present a hybrid model that combines topic modeling using latent Dirichlet allocation (LDA) with word embeddings to obtain external knowledge from structured and unstructured data. We use a topic model to extract topic- and sentence-specific evidence from the structured knowledge base Wikidata. A knowledge graph is constructed based on the cosine similarity between the entity word vectors of Wikidata and the vector of the given sentence. A second graph, based on topic-specific articles found via Google, compensates for the general incompleteness of the structured knowledge base. Combining these graphs, we obtain a graph-based model that, as our evaluation shows, successfully capitalizes on both structured and unstructured data.
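The cosine-similarity edge construction mentioned above can be sketched as follows. Everything here is illustrative: the 4-dimensional vectors, entity names, and the 0.8 threshold are invented, whereas the paper would use real Wikidata entity embeddings and a sentence vector built from word embeddings.

```python
import math

def cosine(u, v):
    # cosine similarity between two dense vectors
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

# Hypothetical entity word vectors (stand-ins for Wikidata embeddings)
entity_vectors = {
    "nuclear_power": [0.9, 0.1, 0.0, 0.2],
    "solar_energy":  [0.1, 0.9, 0.3, 0.0],
}
sentence_vector = [0.8, 0.2, 0.1, 0.2]   # e.g. averaged word embeddings of a sentence

# Link an entity to the sentence when similarity exceeds the threshold
edges = [(entity, round(cosine(vec, sentence_vector), 3))
         for entity, vec in entity_vectors.items()
         if cosine(vec, sentence_vector) > 0.8]
```

Only the entity whose vector points in roughly the same direction as the sentence vector gets an edge, which is the filtering behavior the abstract describes.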
Pub Date: 2021-12-01. DOI: 10.1109/ICKG52313.2021.00037
Jointly Modeling Fact Triples and Text Information for Knowledge Base Completion
Xiuxing Li, Zhenyu Li, Zhichao Duan, Jiacheng Xu, Ning Liu, Jianyong Wang
Knowledge bases have become essential resources for many data mining and information retrieval tasks, but they remain far from complete. Knowledge base completion, which aims to infer missing facts from existing ones in a knowledge base, has attracted extensive research efforts from researchers and practitioners in diverse areas. Many knowledge base completion methods have been developed by regarding each relation as a translation from the head entity to the tail entity. However, existing methods concentrate merely on fact triples in the knowledge base or the co-occurrence of words in text, while the supplementary semantic information expressed via related entities in text has not been fully exploited. Meanwhile, the representation ability of current methods encounters bottlenecks due to the structural sparseness of the knowledge base. In this paper, we propose a novel knowledge base representation learning method that takes advantage of the rich semantic information expressed via related entities in a textual corpus to expand the semantic structure of the knowledge base. In this way, our model can break through the limitation of structural sparseness and improve the performance of knowledge base completion. Extensive experiments on two real-world datasets show that the proposed method significantly outperforms state-of-the-art methods on the benchmark task of link prediction.
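The "relation as a translation" idea above is the TransE family of scoring functions: a fact (h, r, t) is plausible when vec(h) + vec(r) is close to vec(t). The sketch below shows that scoring rule with tiny made-up 3-dimensional embeddings; it is not the authors' full model, which additionally exploits textual information.

```python
import math

def score(h, r, t):
    # TransE-style distance ||h + r - t||_2; lower = more plausible triple
    return math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

# Toy embeddings, chosen so that Paris + capital_of lands on France
emb = {
    "Paris":      [0.9, 0.1, 0.0],
    "Berlin":     [0.2, 0.8, 0.1],
    "France":     [1.0, 0.2, 0.5],
    "capital_of": [0.1, 0.1, 0.5],
}

good = score(emb["Paris"], emb["capital_of"], emb["France"])   # true fact
bad  = score(emb["Berlin"], emb["capital_of"], emb["France"])  # false fact
```

Link prediction then amounts to ranking candidate tail entities by this distance, with the true tail expected near the top.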
Pub Date: 2021-12-01. DOI: 10.1109/ICKG52313.2021.00066
A Robust Mathematical Model for Blood Supply Chain Network using Game Theory
Jaber Valizadeh, U. Aickelin, H. A. Khorshidi
No alternative to human blood has been found so far, and the only source is blood donation. This study presents a blood supply chain optimization model focusing on the location and inventory management of different centers. The main purpose of the model is to reduce total costs, including hospital construction costs, patient allocation costs, patient service costs, expected time-out fines, non-absorbed blood fines, and outsourcing costs. We then calculate the cost savings of collaboration in each hospital coalition in order to allocate those savings fairly across hospitals. The proposed model is developed using data for the city of Tehran and previous studies in the field of the blood supply chain. Four Cooperative Game Theory (CGT) methods, namely the Shapley value, τ-value, core-center, and least core, are used to evaluate the reduction of total cost and the fairness of profit sharing between hospitals.
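Of the four CGT methods named above, the Shapley value is the easiest to sketch: each hospital receives its average marginal contribution over all join orders of the coalition. The savings figures below are invented for illustration, not taken from the paper's Tehran case study.

```python
from itertools import permutations

# Characteristic function v(S): cost savings of each coalition (made-up numbers)
savings = {
    frozenset(): 0, frozenset({"A"}): 0, frozenset({"B"}): 0, frozenset({"C"}): 0,
    frozenset({"A", "B"}): 60, frozenset({"A", "C"}): 40, frozenset({"B", "C"}): 20,
    frozenset({"A", "B", "C"}): 100,
}

def shapley(players, v):
    phi = dict.fromkeys(players, 0.0)
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            # marginal contribution of p when joining this coalition
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    # average over all join orders
    return {p: phi[p] / len(orders) for p in players}

alloc = shapley(["A", "B", "C"], savings)
```

The allocation is efficient (the shares sum to the grand coalition's savings of 100), and hospitals contributing more to coalition savings receive larger shares.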
Pub Date: 2021-12-01. DOI: 10.1109/ICKG52313.2021.00039
A divide-and-conquer method for computing preferred extensions of argumentation frameworks
Huan Zhang, Songmao Zhang
In this paper, we propose a divide-and-conquer method for solving the preferred extensions enumeration problem, which is computationally intractable in argumentation frameworks. The rationale is to take advantage of the fact that for acyclic argumentation frameworks the computation becomes tractable in polynomial time. Concretely, we identify sufficient conditions for decomposing an argumentation framework into sub-frameworks based on certain cycles, and prove the soundness and completeness of computing preferred extensions this way. Based on this result, we devise a partitioning algorithm and evaluate it on the International Competition on Computational Models of Argumentation (ICCMA) 2019 dataset. The results show that for complex, time-consuming tasks our method reduces running time compared with the state-of-the-art solver in ICCMA. This is our first attempt at tackling complex argumentative knowledge, and many directions, both theoretical and empirical, are yet to be explored.
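For readers unfamiliar with the problem: a preferred extension is a maximal admissible set of arguments, where an admissible set is conflict-free and defends each of its members. The brute-force enumeration below (on an invented three-argument framework with a mutual attack between a and b, and b attacking c) is exactly the exponential computation whose cost the paper's decomposition aims to confine to small cyclic sub-frameworks.

```python
from itertools import combinations

# Tiny abstract argumentation framework (example is made up): a<->b, b->c
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}

def conflict_free(S):
    return not any((x, y) in attacks for x in S for y in S)

def defends(S, x):
    # every attacker y of x is counter-attacked by some member of S
    return all(any((z, y) in attacks for z in S)
               for (y, t) in attacks if t == x)

def admissible(S):
    return conflict_free(S) and all(defends(S, x) for x in S)

subsets = [frozenset(c) for r in range(len(args) + 1)
           for c in combinations(sorted(args), r)]
adm = [S for S in subsets if admissible(S)]
# preferred extensions = admissible sets maximal under set inclusion
preferred = [set(S) for S in adm if not any(S < T for T in adm)]
```

For this framework the preferred extensions are {b} and {a, c}: the mutual attack between a and b yields two incompatible maximal positions, and c is only defensible alongside a.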
Pub Date: 2021-12-01. DOI: 10.1109/ICKG52313.2021.00025
Treatment Recommendation with Preference-based Reinforcement Learning
Nan Xu, Nitin Kamra, Yan Liu
Treatment recommendation is a complex, multi-faceted problem with many treatment goals considered by clinicians and patients, e.g., optimizing the survival rate, mitigating negative impacts, reducing financial expenses, avoiding over-treatment, etc. Recently, deep reinforcement learning (RL) approaches have gained popularity for treatment recommendation. In this paper, we investigate preference-based reinforcement learning approaches for treatment recommendation, where the reward function is itself learned from treatment goals, without requiring either expert demonstrations in advance or human involvement during policy learning. We first present an open simulation platform (https://sites.google.com/view/tr-with-prl/) to model the evolution of two diseases, namely cancer and sepsis, and individuals' reactions to the received treatment. Secondly, we systematically examine preference-based RL for treatment recommendation via simulated experiments and observe high utility in the learned policy, in terms of a high survival rate and low side effects, with inferred rewards highly correlated to treatment goals. We further explore the transferability of inferred reward functions and guidelines for agent design, to provide insights into achieving the right trade-off among various human objectives with preference-based RL approaches for treatment recommendation in the real world.
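The core of preference-based RL is inferring a reward from comparisons rather than demonstrations. The sketch below fits a linear reward with a Bradley-Terry model on synthetic trajectory pairs; it is a generic illustration of that family, with invented features (a survival proxy and a side-effect proxy) and a hidden "clinician" preference, not the paper's simulator or method.

```python
import math
import random

random.seed(0)
true_w = [1.0, -2.0]                 # hidden preference: reward survival, punish side effects

def traj_features():
    # hypothetical trajectory summary: (survival proxy, side-effect proxy)
    return [random.random(), random.random()]

def ret(w, f):
    # trajectory return under a linear reward with weights w
    return sum(wi * fi for wi, fi in zip(w, f))

# Generate noiseless preference pairs: (preferred, dispreferred)
pairs = []
for _ in range(500):
    a, b = traj_features(), traj_features()
    pairs.append((a, b) if ret(true_w, a) > ret(true_w, b) else (b, a))

# Fit w by gradient ascent on the Bradley-Terry log-likelihood
w, lr = [0.0, 0.0], 0.5
for _ in range(200):
    for a, b in pairs:
        p = 1.0 / (1.0 + math.exp(ret(w, b) - ret(w, a)))  # P(a preferred over b)
        g = 1.0 - p
        for d in range(2):
            w[d] += lr * g * (a[d] - b[d]) / len(pairs)

# Fraction of preferences the learned reward reproduces
agree = sum(ret(w, a) > ret(w, b) for a, b in pairs) / len(pairs)
```

The learned weights recover the sign pattern of the hidden preference, which is the "inferred rewards highly correlated to treatment goals" behavior the abstract reports.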
Pub Date: 2021-12-01. DOI: 10.1109/ICKG52313.2021.00015
An Empirical Study of Deep Learning Frameworks for Melanoma Cancer Detection using Transfer Learning and Data Augmentation
Divya Gangwani, Qianxin Liang, Shuwen Wang, Xingquan Zhu
Melanoma is a type of skin cancer that usually develops rapidly and can spread to other parts of the body, causing death or complicating treatment for a large population. Early detection of Melanoma is the key to increasing patients' chances of survival. While accurate Melanoma diagnosis can be performed through clinical examination and dermatology tests, such procedures are usually time-consuming, costly, and often delayed due to patients' emotional barriers or other obstacles. Recently, machine learning approaches, deep learning in particular, have shown great potential in diagnosing Melanoma using images captured with a camera. Accurate detection of Melanoma from low-end images with machine learning offers a solution for rapid screening without clinical visits or experts. Although many deep learning methods can be applied to Melanoma detection, their performance varies widely, since parameters such as learning rates, optimizers, and batch sizes always differ. In this paper, we carry out a systematic study to validate a more precise deep learning framework for detecting Melanoma and other types of skin lesions. A generic Convolutional Neural Network (CNN) serves as the baseline, and transfer learning using a pre-trained framework is shown to improve detection accuracy. In addition, data augmentation is applied, which further improves model performance. A series of settings for the learning rate, batch size, optimizer, etc., are tested within the models. Our study shows substantial improvement in Melanoma detection accuracy, which can help medical experts provide efficient Melanoma screening to patients.
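The data-augmentation step above can be sketched without any deep learning framework: generate flipped and rotated copies of each training image to enlarge the dataset. The 2x3 integer "image" is a stand-in for a dermoscopy pixel array, and the four-variant augmentation policy is an illustrative choice, not the paper's exact pipeline.

```python
def hflip(img):
    # mirror each row (horizontal flip)
    return [row[::-1] for row in img]

def rot90(img):
    # rotate 90 degrees clockwise: last row becomes the first column
    return [list(col) for col in zip(*img[::-1])]

def augment(img):
    # original plus three label-preserving transforms
    return [img, hflip(img), rot90(img), hflip(rot90(img))]

image = [[1, 2, 3],
         [4, 5, 6]]
batch = augment(image)   # 4 training examples from 1 image
```

Because skin-lesion labels are invariant to orientation, such transforms multiply the effective training-set size without new annotations, which is why augmentation helps here.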
Pub Date: 2021-12-01. DOI: 10.1109/ICKG52313.2021.00044
DiffXtract: Joint Discriminative Product Attribute-Value Extraction
Varun R. Embar, Andrey Kan, Bunyamin Sisman, C. Faloutsos, L. Getoor
Identifying discriminative attributes between product variations, e.g., the same wristwatch model in different finishes, is crucial for improving e-commerce search engines and recommender systems. Despite their importance, values for such attributes are often not available explicitly and instead are mentioned only in unstructured fields such as the product title or product description. In this work, we introduce the novel task of discriminative attribute extraction, which involves identifying the attributes that distinguish product variations, such as finish, while at the same time extracting the values of these attributes from unstructured text. This task differs from the standard attribute value extraction task that has been well studied in the literature, since we also need to identify the attribute in addition to finding the value. We propose DiffXtract, a novel end-to-end, deep-learning-based approach that jointly identifies the discriminative attribute and extracts its values from the product variations. The approach is trained with a multitask objective and explicitly models the semantic representation of the discriminative attribute, using it to extract the attribute values. We show, both theoretically and empirically, that existing product attribute extraction approaches have several drawbacks. We also introduce a novel dataset based on a corpus of data previously crawled from a large number of e-commerce websites. In our empirical evaluation, DiffXtract outperforms state-of-the-art deep-learning-based and dictionary-based attribute extraction approaches by up to 8% F1 score when identifying attributes and up to 10% F1 score when extracting attribute values.
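To make the task definition above concrete: an attribute is discriminative when its values differ across variants of the same product. The toy below identifies it from structured records with invented catalog entries; the point of DiffXtract is that it learns this jointly from unstructured titles, where no such records exist.

```python
# Invented catalog records for three variants of one hypothetical watch model
variants = [
    {"brand": "Acme", "model": "W-100", "finish": "gold",   "band": "leather"},
    {"brand": "Acme", "model": "W-100", "finish": "silver", "band": "leather"},
    {"brand": "Acme", "model": "W-100", "finish": "black",  "band": "leather"},
]

def discriminative_attributes(records):
    # an attribute is discriminative when it takes >1 value across variants
    return [k for k in records[0]
            if len({r[k] for r in records}) > 1]

attrs = discriminative_attributes(variants)        # which attribute distinguishes variants
values = {v["finish"] for v in variants}           # its values, per variant
```

Here only "finish" varies, so it is the discriminative attribute; brand, model, and band are shared and carry no distinguishing signal.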
Pub Date: 2021-12-01. DOI: 10.1109/ICKG52313.2021.00047
Multi-level Spatio-temporal Matching Network for Multi-turn Response Selection in Retrieval-based Dialogue Systems
Mei Ma, Jianji Wang, Xuguang Lan, N. Zheng
Multi-turn response selection, an important task in building retrieval-based chatbots, must consider both sufficient semantic information and spatio-temporal information. However, existing studies do not pay enough attention to both factors. In this study, we propose a multi-turn response selection scheme that combines a primary temporal matching module, an advanced temporal matching module, and a spatial matching module to extract matching information from the context and the response. The temporal matching modules progressively construct representations of the context and candidate responses at different granularities. Similarity matrices of the context and candidate responses are calculated and stacked in the spatial matching module, and a convolutional neural network is then used to extract the spatial matching information. Finally, the matching vectors of the three modules are fused to calculate the final matching score. Experimental results on two public datasets verify that our model outperforms state-of-the-art methods.
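The similarity matrices above can be sketched at the word level: one cosine-similarity matrix per (context utterance, candidate response) pair, with one such matrix per representation granularity stacked as CNN input channels. The 3-dimensional embeddings and vocabulary below are invented for illustration.

```python
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

# Toy word embeddings (stand-ins for one granularity of learned representations)
emb = {
    "book":   [0.9, 0.1, 0.0],
    "flight": [0.1, 0.9, 0.2],
    "sure":   [0.0, 0.1, 0.9],
    "ticket": [0.2, 0.8, 0.3],
}

context = ["book", "flight"]      # one context utterance
response = ["sure", "ticket"]     # one candidate response

# Word-by-word similarity matrix; one such matrix per granularity gets stacked
sim = [[round(cosine(emb[c], emb[r]), 3) for r in response] for c in context]
```

Related word pairs ("flight"/"ticket") produce bright cells and unrelated pairs dark ones, giving the CNN local matching patterns to detect, analogous to edges in an image.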
Pub Date: 2021-12-01. DOI: 10.1109/ICKG52313.2021.00023
Intuitionistic Fuzzy Requirements Aggregation for Graph Pattern Matching with Group Decision Makers
Haixia Zhao, Guliu Liu, Lei Li, Jiao Li
Graph Pattern Matching (GPM) plays an important role in the field of multi-attribute decision making. By designing a pattern graph involving multiple attribute constraints of the Decision Maker (DM), matching subgraphs can be retrieved from the data graph. However, existing work rarely considers requirements from group DMs, in which case the requirements on each attribute have multiple values from different DMs. How to aggregate these requirements and perform efficient subgraph matching is a challenging task. In this paper, we first formulate a subgraph query problem that must take into account multiple requirements from group DMs. To solve this problem, we propose a Multi-Requirement-based Subgraph Query model (MR-SQ), which consists of two main stages: group requirements aggregation and GPM. For the first stage, an Intuitionistic Fuzzy Requirements Aggregation (IFRA) method is proposed to aggregate the requirements. Then, to address the efficiency problem of large-scale GPM, a parallel strategy is designed for the GPM stage. Finally, the practicability and effectiveness of the proposed model are verified through an illustrative example and time-performance comparison experiments.
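The aggregation stage above can be sketched with the standard intuitionistic fuzzy weighted averaging (IFWA) operator: each DM states a (membership, non-membership) degree for a requirement, and a weighted product fuses them into one intuitionistic fuzzy value. The assessments and weights below are invented, and the paper's IFRA method may refine this textbook operator.

```python
def ifwa(pairs, weights):
    # IFWA: mu = 1 - prod((1 - mu_i)^w_i), nu = prod(nu_i^w_i)
    mu_complement = 1.0
    nu_product = 1.0
    for (m, n), w in zip(pairs, weights):
        mu_complement *= (1 - m) ** w
        nu_product *= n ** w
    return 1 - mu_complement, nu_product

# Three DMs' (membership, non-membership) assessments of one attribute requirement
assessments = [(0.8, 0.1), (0.6, 0.3), (0.7, 0.2)]
weights = [0.5, 0.3, 0.2]        # DMs' relative importance, summing to 1

mu, nu = ifwa(assessments, weights)
```

The fused pair remains a valid intuitionistic fuzzy value (mu + nu <= 1), so the aggregated requirement can feed directly into the pattern graph's attribute constraints.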