{"title":"A novel semantic-aware search scheme based on BCI-tree index over encrypted cloud data","authors":"Qiang Zhou, Hua Dai, Yuanlong Liu, Geng Yang, X. Yi, Zheng Hu","doi":"10.1007/s11280-023-01176-w","DOIUrl":"https://doi.org/10.1007/s11280-023-01176-w","url":null,"abstract":"","PeriodicalId":49356,"journal":{"name":"World Wide Web-Internet and Web Information Systems","volume":"1 1","pages":"3055 - 3079"},"PeriodicalIF":3.7,"publicationDate":"2023-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75179629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Soft dimensionality reduction for reinforcement data clustering","authors":"Fatemeh Fathinezhad, Peyman Adibi, Bijan Shoushtarian, Hamidreza Baradaran Kashani, Jocelyn Chanussot","doi":"10.1007/s11280-023-01158-y","DOIUrl":"https://doi.org/10.1007/s11280-023-01158-y","url":null,"abstract":"","PeriodicalId":49356,"journal":{"name":"World Wide Web-Internet and Web Information Systems","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135643300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Phrase-level attention network for few-shot inverse relation classification in knowledge graph
Shaojuan Wu, Chunliu Dou, Dazhuang Wang, Jitong Li, Xiaowang Zhang, Zhiyong Feng, Kewen Wang, Sofonias Yitagesu
Pub Date: 2023-05-30  DOI: 10.1007/s11280-023-01142-6

Relation classification aims to recognize the semantic relation between two given entities mentioned in a text. Existing models perform well on inverse relation classification with large-scale datasets, but their performance drops significantly under few-shot learning. In this paper, we propose a phrase-level attention network, the function-words adaptively enhanced attention framework (FAEA+), which uses a hybrid attention design to attend to class-related function words for few-shot inverse relation classification in knowledge graphs. An instance-aware prototype network is then presented to adaptively capture relation information associated with query instances and to eliminate the intra-class redundancy introduced by the function words. We theoretically prove that introducing function words increases intra-class differences, and that the designed instance-aware prototype network is competent to reduce the resulting redundancy. Experimental results show that FAEA+ significantly improves over strong baselines on two few-shot relation classification datasets. Moreover, our model has a distinct advantage in solving inverse relations, outperforming state-of-the-art results by 16.82% under the 1-shot setting on FewRel 1.0.

Empowering reciprocal recommender system using contextual bandits and argumentation based explanations
T. Kumari, Bhavna Gupta, Ravish Sharma, Punam Bedi
Pub Date: 2023-05-29  DOI: 10.1007/s11280-023-01173-z  Pages: 2969-3000

HTSE: hierarchical time-surface model for temporal knowledge graph embedding
Langjunqing Jin, Feng Zhao, Hai Jin
Pub Date: 2023-05-29  DOI: 10.1007/s11280-023-01170-2  Pages: 2947-2967

Transfer learning based cascaded deep learning network and mask recognition for COVID-19
Fengyin Li, Xiaojiao Wang, Yuhong Sun, Tao Li, Junrong Ge
Pub Date: 2023-05-26  DOI: 10.1007/s11280-023-01149-z  Pages: 1-16
Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10214323/pdf/

COVID-19 is still spreading today and has caused great harm to human beings. Systems at the entrances of public places such as shopping malls and stations should check whether pedestrians are wearing masks. However, pedestrians often pass such inspections by wearing cotton masks, scarves, and the like, so a detection system must identify not only whether a pedestrian is wearing a mask but also the type of mask. Based on the lightweight MobileNetV3 architecture, this paper proposes a cascaded deep learning network based on transfer learning and then designs a mask recognition system around it. By modifying the activation function of the MobileNetV3 output layer and the structure of the model, two MobileNetV3 networks suitable for cascading are obtained. Transfer learning is introduced into the training of the two modified MobileNetV3 networks and a multi-task convolutional neural network, so that the ImageNet-pretrained lower-layer parameters of the models are obtained in advance, which reduces their computational load. The cascaded deep learning network consists of the multi-task convolutional neural network, which detects faces in images, cascaded with the two modified MobileNetV3 networks, which serve as backbones for extracting mask features. Compared with the classification results of the modified MobileNetV3 networks before cascading, the classification accuracy of the cascaded network improves by 7%, demonstrating its excellent performance.

Neighboring relation enhanced inductive knowledge graph link prediction via meta-learning
Ben Liu, Miao Peng, Wenjie Xu, Min Peng
Pub Date: 2023-05-25  DOI: 10.1007/s11280-023-01168-w  Pages: 2909-2930

Correlation embedding learning with dynamic semantic enhanced sampling for knowledge graph completion
H. Nie, Xiangguo Zhao, Xin Bi, Yuliang Ma, George Y. Yuan
Pub Date: 2023-05-19  DOI: 10.1007/s11280-023-01167-x  Pages: 2887-2907

An empirical study of pre-trained language models in simple knowledge graph question answering
Nan Hu, Yike Wu, Guilin Qi, Dehai Min, Jiaoyan Chen, Jeff Z Pan, Zafar Ali
Pub Date: 2023-05-17  DOI: 10.1007/s11280-023-01166-y

Large-scale pre-trained language models (PLMs) such as BERT have recently achieved great success and become a milestone in natural language processing (NLP). It is now the consensus of the NLP community to adopt PLMs as the backbone for downstream tasks. In recent works on knowledge graph question answering (KGQA), BERT or its variants have become necessary components of KGQA models. However, there is still a lack of comprehensive research on, and comparison of, the performance of different PLMs in KGQA. To this end, we summarize two basic PLM-based KGQA frameworks without additional neural network modules and compare the performance of nine PLMs in terms of accuracy and efficiency. In addition, we present three benchmarks for larger-scale KGs, based on the popular SimpleQuestions benchmark, to investigate the scalability of PLMs. We carefully analyze the results of all PLM-based basic KGQA frameworks on these benchmarks and on two other popular datasets, WebQuestionsSP and FreebaseQA, and find that knowledge distillation techniques and knowledge enhancement methods in PLMs are promising for KGQA. Furthermore, we test ChatGPT ( https://chat.openai.com/ ), which has drawn a great deal of attention in the NLP community, demonstrating its impressive capabilities and limitations in zero-shot KGQA. We have released the code and benchmarks to promote the use of PLMs on KGQA ( https://github.com/aannonymouuss/PLMs-in-Practical-KBQA ).
