Intrusion detection systems (IDS) often suffer from low speed, limited adaptability, and low detection accuracy, especially on small sample sets. This paper presents a detection model based on normalized mutual information antibodies feature selection and an adaptive quantum artificial immune algorithm with cooperative evolution of multiple operators (NMAIFS MOP-AQAI). First, to achieve high detection speed, NMAIFS is used to effectively reduce the high-dimensional feature space. The best feature vectors are then sent to the MOP-AQAI classifier, in which a vaccination strategy, quantum computing, and cooperative evolution of multiple operators are adopted to generate excellent detectors. Lastly, the data is fed into NMAIFS MOP-AQAI, which ultimately produces accurate detection results. Experimental results on real abnormal data demonstrate that NMAIFS MOP-AQAI has higher detection accuracy, a lower false negative rate, and better adaptive performance than existing anomaly detection methods, especially on small sample sets.
{"title":"An Intrusion Detection System Based on Normalized Mutual Information Antibodies Feature Selection and Adaptive Quantum Artificial Immune System","authors":"Zhang Ling, Zhang Jia Hao","doi":"10.4018/ijswis.308469","DOIUrl":"https://doi.org/10.4018/ijswis.308469","url":null,"abstract":"The intrusion detection system (IDS) has lower speed, less adaptability and lower detection accuracy especially for small samples sets. This paper presents a detection model based on normalized mutual antibodies information feature selection and adaptive quantum artificial immune with cooperative evolution of multiple operators (NMAIFS MOP-AQAI). First, for a high intrusion speed, the NMAIFS is used to achieve an effective reduction for high-dimensional features. Then, the best feature vectors are sent to the MOP-AQAI classifier, in which, vaccination strategy, the quantum computing, and cooperative evolution of multiple operators are adopted to generate excellent detectors. Lastly, the data is fed into NMAIFS MOP-AQAI and ultimately generates accurate detection results. The experimental results on real abnormal data demonstrate that the NMAIFS MOP-AQAI has higher detection accuracy, lower false negative rate and a higher adaptive performance than the existing anomaly detection methods, especially for small samples sets.","PeriodicalId":54934,"journal":{"name":"International Journal on Semantic Web and Information Systems","volume":"14 1","pages":"1-25"},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89502083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The increasing number of ontologies demands interoperability between them in order to obtain accurate information, and ontology heterogeneity makes the interoperability process even more difficult. These scenarios call for the development of effective and efficient ontology matching. Existing ontology matching systems mainly focus on subject derivatives of the domain of concern. Since ontologies are represented as data models in a structured format, this paper proposes a new modified similarity-spreading model for ontology mapping. In this approach, the mapping first clusters nodes based on edge affinity, and graph matching is then achieved by applying coefficient similarity propagation. The process is carried out iteratively, and a similarity score is calculated for each iteration. The model is evaluated in terms of precision, recall, and F-measure and is found to outperform similar systems.
{"title":"An Improved Structural-Based Ontology Matching Approach Using Similarity Spreading","authors":"Sengodan Mani, Samukutty Annadurai","doi":"10.4018/ijswis.300825","DOIUrl":"https://doi.org/10.4018/ijswis.300825","url":null,"abstract":"Increasing number of ontologies demand the interoperability between them in order to gain accurate information. the ontology heterogeneity also makes the interoperability process even more difficult. These scenarios let the development of effective and efficient ontology matching. The existing ontology matching systems are mainly focusing with subject derivatives of the concern domain. Since ontologies are represented as data model in structured format, In this paper, a new modified model of similarity spreading for ontology mapping is proposed. In this approach the mapping mainly involves with node clustering based on edge affinity and then the graph matching is achieved by applying coefficient similarity propagation. This process is carried out by iterative manner and at the end the similarity score is calculated for iteration. This model is evaluated in terms of precision, recall and f-measure parameters and found that it outperforms well than its similar kind of systems.","PeriodicalId":54934,"journal":{"name":"International Journal on Semantic Web and Information Systems","volume":"66 1","pages":"1-17"},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90254853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using four types of publicly available datasets and ArcGIS software, the authors identify the spatial characteristics of postgraduate education in China at three scales: comprehensive economic zone, provincial, and city. They also employ geographically weighted regression and ordinary least squares to study the factors influencing the spatial pattern of postgraduate education in China at the city scale. The findings show, first, that the number of postgraduate education institutions increases with a city's longitude but decreases from coast to inland. Second, postgraduate education institutions tend to cluster in provincial capitals and megacities. Finally, GDP, per capita GDP, population size, local income, and total retail sales of consumer goods significantly impact postgraduate education development. The study contributes to the literature and provides insights for practitioners in promoting urban planning and infrastructure development.
{"title":"Spatial Patterns and Development Characteristics of China's Postgraduate Education","authors":"P. Li, Haidong Zhong, J. Zhang","doi":"10.4018/ijswis.313190","DOIUrl":"https://doi.org/10.4018/ijswis.313190","url":null,"abstract":"Using four types of publicly available datasets and ArcGIS software, the authors identify the spatial characteristics of postgraduate education in China at three scales: comprehensive economic zone, provincial, and city. They also employ geographically weighted regression and ordinary least squares to study the factors influencing the spatial pattern of postgraduate education in Gin at the city scale. The findings show that the number of postgraduate education institutions increases as the longitude of a city increases, but the number decreases from coast to inland. Second, postgraduate education institutions tend to group together in provincial capitals and megacities. Finally, GDP, per capita GDP, population size, local income, and total retail sales of consumer goods significantly impact postgraduate education development. The study contributes to the literature and provides insights for practitioners in promoting urban planning and infrastructure development.","PeriodicalId":54934,"journal":{"name":"International Journal on Semantic Web and Information Systems","volume":"25 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74871087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, the authors propose a contextual Word2Vec model for understanding OOV (out-of-vocabulary) terms. OOV candidates are extracted using left-right entropy and point information entropy. Word2Vec is used to construct the word vector space, and CBOW (continuous bag of words) is used to obtain the contextual information of words. If an in-vocabulary word has contextual information similar to that of the OOV term, that word can be used to interpret the OOV term. The Weibo corpus is chosen as the dataset for the experiments. The results show that the proposed model achieves 97.10% accuracy, outperforming Skip-Gram by 8.53%.
{"title":"Contextual Word2Vec Model for Understanding Chinese Out of Vocabularies on Online Social Media","authors":"Jiakai Gu, Gen Li, Nam D. Vo, Jason J. Jung","doi":"10.4018/ijswis.309428","DOIUrl":"https://doi.org/10.4018/ijswis.309428","url":null,"abstract":"In this chapter, the authors propose to use contextual Word2Vec model for understanding OOV (out of vocabulary). The OOV is extracted by using left-right entropy and point information entropy. They choose to use Word2Vec to construct the word vector space and CBOW (continuous bag of words) to obtain the contextual information of the words. If there is a word that has similar contextual information to the OOV, the word can be used to understand the OOV. They chose the Weibo corpus as the dataset for the experiments. The results show that the proposed model achieves 97.10% accuracy, which is better than Skip-Gram by 8.53%.","PeriodicalId":54934,"journal":{"name":"International Journal on Semantic Web and Information Systems","volume":"36 1","pages":"1-14"},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83955888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Meriem Ali Khoudja, Messaouda Fareh, Hafida Bouarfa
Ontology matching is an efficient method to establish interoperability among heterogeneous ontologies. Large-scale ontology matching remains a major challenge because of its long runtime and large memory consumption. The usual solution to this problem is ontology partitioning, which is itself challenging. This paper presents DeepOM, an ontology matching system that deals with the large-scale heterogeneity problem without partitioning by using deep learning techniques. It consists of creating semantic embeddings for the concepts of the input ontologies using a reference ontology and using them to train an auto-encoder that learns more accurate, lower-dimensional concept representations. The experimental results of its evaluation on large ontologies, and its comparison with different ontology matching systems that participated in the same test challenge, are very encouraging, with a precision score of 0.99. They demonstrate the efficiency of the proposed system in improving the performance of the large-scale ontology matching task.
{"title":"Deep Embedding Learning With Auto-Encoder for Large-Scale Ontology Matching","authors":"Meriem Ali Khoudja, Messaouda Fareh, Hafida Bouarfa","doi":"10.4018/ijswis.297042","DOIUrl":"https://doi.org/10.4018/ijswis.297042","url":null,"abstract":"Ontology matching is an efficient method to establish interoperability among heterogeneous ontologies. Large-scale ontology matching still remains a big challenge for its long time and large memory space consumption. The actual solution to this problem is ontology partitioning which is also challenging. This paper presents DeepOM, an ontology matching system to deal with this large-scale heterogeneity problem without partitioning using deep learning techniques. It consists on creating semantic embeddings for concepts of input ontologies using a reference ontology, and use them to train an auto-encoder in order to learn more accurate and less dimensional representations for concepts. The experimental results of its evaluation on large ontologies, and its comparison with different ontology matching systems which have participated to the same test challenge, are very encouraging with a precision score of 0.99. They demonstrate the higher efficiency of the proposed system to increase the performance of the large-scale ontology matching task.","PeriodicalId":54934,"journal":{"name":"International Journal on Semantic Web and Information Systems","volume":"155 1","pages":"1-18"},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86297724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. F. García, Maria Isabel Manzano García, Roberto Berjón Gallinas, Montserrat Mateos Sánchez, M. E. B. Gutiérrez
The aim of this work is the development of an information system that, by integrating data from different sources and applying semantic technologies, makes it possible to publish and share with society the scientific production generated in the university environment, promoting its dissemination and thus contributing to the knowledge society. In practice, this is the implementation of a CRIS (current research information system). This CRIS presents advanced features. On the one hand, it applies semantic technologies, providing a query service through a SPARQL endpoint as well as the reuse of shared data by exporting it in different formats. It is also based on a European ontology and semantic standard, CERIF, which facilitates its portability. On the other hand, the CRIS addresses the lack of a single data system by allowing data from different sources to be integrated and managed.
{"title":"Integration and Open Access System Based on Semantic Technologies","authors":"A. F. García, Maria Isabel Manzano García, Roberto Berjón Gallinas, Montserrat Mateos Sánchez, M. E. B. Gutiérrez","doi":"10.4018/ijswis.309422","DOIUrl":"https://doi.org/10.4018/ijswis.309422","url":null,"abstract":"The aim of this work is the development of an information system that, by integrating data from different sources and applying semantic technologies, makes it possible to publish and share with society the scientific production generated in the university environment, promoting its dissemination and thus contributing to the knowledge society, among others. In practice, this is the implementation of a CRIS (current research information system). This CRIS presents advanced features. On one hand it applies semantic technologies, providing a query service through a SPARQL Point, besides the reuse of shared data by exporting them in different formats. In this sense, it is also based on a European ontology or semantic standard such as CERIF, which facilitates its portability. On the other hand, CRIS also presents an alternative to the lack of a single data system by allowing data from different sources to be integrated and managed.","PeriodicalId":54934,"journal":{"name":"International Journal on Semantic Web and Information Systems","volume":"78 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78141683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud computing plays a pivotal role in the development of technology. The aim of this work is to allocate resources in a way that yields an optimized solution, with algorithms designed for time and cost optimization. The approach is based on rough set theory (RST). RST is an effective method for qualitative analysis: it supports knowledge discovery and handles problems such as inductive reasoning, automatic classification, pattern recognition, learning algorithms, and data reduction. Rough set theory is applied here as a new method for cloud service selection, so that the best services are provided to cloud users and service improvement is made more efficient for cloud providers. The simulation of the work is carried out with the tools used to build the ontology framework. The simulation shows that the IoT services provided by the IoT service supplier to the user achieve the best utilization with the given parameters and the ontology technique.
{"title":"Adaptive Ontology-Based IoT Resource Provisioning in Computing Systems","authors":"Ashish Tiwari, R. Garg","doi":"10.4018/ijswis.306260","DOIUrl":"https://doi.org/10.4018/ijswis.306260","url":null,"abstract":"The eagle expresses of cloud computing plays a pivotal role in the development of technology. The aim is to solve in such a way that it will provide an optimized solution. The key role of allocating these efficient resources and making the algorithms for its time and cost optimization. The approach of the research is based on the rough set theory RST. RST is a great method for making a large difference in qualitative analysis situations. It's a technique to find knowledge discovery and handle the problems such as inductive reasoning, automatic classification, pattern recognition, learning algorithms, and data reduction. The rough set theory is the new method in cloud service selection so that the best services provide for cloud users and efficient service improvement for cloud providers. The simulation of the work is finished at intervals with the merchandise utilized for the formation of the philosophy framework. The simulation shows the IoT services provided by the IoT service supplier to the user are the best utilization with the parameters and ontology technique.","PeriodicalId":54934,"journal":{"name":"International Journal on Semantic Web and Information Systems","volume":"32 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74628101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jitendra V. Tembhurne, Md. Moin Almin, Tausif Diwan
With the advancement of technology, social media has become a major source of digital news due to its global exposure. This has led to an increase in the spread of fake news and misinformation online. Humans often cannot differentiate fake news from real news because they can be easily influenced. A great deal of research has been conducted on detecting fake news using artificial intelligence and machine learning. A large number of deep learning models and their architectural variants have been investigated, and many websites use these models directly or indirectly to detect fake news. However, state-of-the-art methods show limited accuracy in distinguishing fake news from real news. We propose a multi-channel deep learning model, Mc-DNN, that processes news headlines and news articles along different channels to differentiate fake from real news. Mc-DNN achieves the highest accuracy of 99.23% on the ISOT Fake News Dataset and 94.68% on Fake News Data. We therefore recommend Mc-DNN for fake news detection.
{"title":"Mc-DNN: Fake News Detection Using Multi-Channel Deep Neural Networks","authors":"Jitendra V. Tembhurne, Md. Moin Almin, Tausif Diwan","doi":"10.4018/ijswis.295553","DOIUrl":"https://doi.org/10.4018/ijswis.295553","url":null,"abstract":"With the advancement of technology, social media has become a major source of digital news due to its global exposure. This has led to an increase in spreading fake news and misinformation online. Humans cannot differentiate fake news from real news because they can be easily influenced. A lot of research work has been conducted for detecting fake news using Artificial Intelligence and Machine Learning. A large number of deep learning models and their architectural variants have been investigated and many websites are utilizing these models directly or indirectly to detect fake news. However, state-of-the-arts demonstrate the limited accuracy in distinguishing fake news from the original news. We propose a multi-channel deep learning model namely Mc-DNN, leveraging and processing the news headlines and news articles along different channels for differentiating fake or real news. We achieve the highest accuracy of 99.23% on ISOT Fake News Dataset and 94.68% on Fake News Data for Mc-DNN. Thus, we highly recommend the use of Mc-DNN for fake news detection.","PeriodicalId":54934,"journal":{"name":"International Journal on Semantic Web and Information Systems","volume":"48 1","pages":"1-20"},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91271192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tianfeng Wang, Zhisong Pan, Guyu Hu, Yexin Duan, Yu Pan
Compared with traditional machine learning models, graph neural networks (GNNs) have distinct advantages in processing unstructured data. However, the vulnerability of GNNs cannot be ignored. A graph universal adversarial attack is a special type of attack on graphs that can attack any targeted victim by flipping edges connected to anchor nodes. In this paper, we propose the forward-derivative-based graph universal adversarial attack (FDGUA). First, we show that a single node as training data is sufficient to generate an effective continuous attack vector. We then discretize the continuous attack vector based on the forward derivative. FDGUA achieves impressive attack performance: three anchor nodes are enough to reach an attack success rate higher than 80% on the Cora dataset. Moreover, we propose the first graph universal adversarial training (GUAT) to defend against universal adversarial attacks. Experiments show that GUAT can effectively improve the robustness of GNNs without degrading the accuracy of the model.
{"title":"Understanding Universal Adversarial Attack and Defense on Graph","authors":"Tianfeng Wang, Zhisong Pan, Guyu Hu, Yexin Duan, Yu Pan","doi":"10.4018/ijswis.308812","DOIUrl":"https://doi.org/10.4018/ijswis.308812","url":null,"abstract":"Compared with traditional machine learning model, graph neural networks (GNNs) have distinct advantages in processing unstructured data. However, the vulnerability of GNNs cannot be ignored. Graph universal adversarial attack is a special type of attack on graph which can attack any targeted victim by flipping edges connected to anchor nodes. In this paper, we propose the forward-derivative-based graph universal adversarial attack (FDGUA). Firstly, we point out that one node as training data is sufficient to generate an effective continuous attack vector. Then we discretize the continuous attack vector based on forward derivative. FDGUA can achieve impressive attack performance that three anchor nodes can result in attack success rate higher than 80% for the dataset Cora. Moreover, we propose the first graph universal adversarial training (GUAT) to defend against universal adversarial attack. Experiments show that GUAT can effectively improve the robustness of the GNNs without degrading the accuracy of the model.","PeriodicalId":54934,"journal":{"name":"International Journal on Semantic Web and Information Systems","volume":"30 1","pages":"1-21"},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85298552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shudong Li, Danyi Qin, Xiaobo Wu, Juan Li, Baohui Li, Weihong Han
Among the large number of network attack alerts generated every day, actual security incidents are usually buried under redundant alerts. How to remove these redundant alerts in real time and improve alert quality is therefore an urgent problem in large-scale network security protection. This paper combines machine learning and deep learning to improve false alarm detection and thus identify real alarms more accurately: during training, the output features of a hidden layer of a DNN model are used as input to train a machine learning model. To verify the proposed method, we run classification experiments on labeled alert data and evaluate the model using accuracy, recall, precision, and F1 score, obtaining good results.
{"title":"False Alert Detection Based on Deep Learning and Machine Learning","authors":"Shudong Li, Danyi Qin, Xiaobo Wu, Juan Li, Baohui Li, Weihong Han","doi":"10.4018/ijswis.297035","DOIUrl":"https://doi.org/10.4018/ijswis.297035","url":null,"abstract":"Among the large number of network attack alerts generated every day, actual security incidents are usually overwhelmed by a large number of redundant alerts. Therefore, how to remove these redundant alerts in real time and improve the quality of alerts is an urgent problem to be solved in large-scale network security protection. This paper uses the method of combining machine learning and deep learning to improve the effect of false alarm detection and then more accurately identify real alarms, that is, in the process of training the model, the features of a hidden layer output of the DNN model are used as input to train the machine learning model. In order to verify the proposed method, we use the marked alert data to do classification experiments, and finally use the accuracy recall rate, precision, and F1 value to evaluate the model. Good results have been obtained.","PeriodicalId":54934,"journal":{"name":"International Journal on Semantic Web and Information Systems","volume":"23 1","pages":"1-21"},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89234086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}