{"title":"2018 14th International Conference on Semantics, Knowledge and Grids","authors":"","doi":"10.1109/skg.2018.00001","DOIUrl":"https://doi.org/10.1109/skg.2018.00001","url":null,"abstract":"","PeriodicalId":265760,"journal":{"name":"2018 14th International Conference on Semantics, Knowledge and Grids (SKG)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123736762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Requirement analysis of network organization is critical to network efficiency as well as operational efficiency in operational planning. Currently, most researchers pay much attention to the organization itself but little to the requirement analysis of the organization. In this paper, we propose a novel approach to operational network organization based on the Problem Domains Oriented Analysis (PDOA) model, aiming to meet the need for precise operational network organizing. We first introduce the PDOA model and the concrete analysis process. Afterwards, a list of network capability requirements is proposed and classified using the PDOA model. Finally, based on the context-diagram and problem-diagram analysis, we divide the problem domain in detail.
{"title":"Requirement Analysis of Operational Network Organization Based on PDOA","authors":"Jianfeng Hou, Ruicheng Yan","doi":"10.1109/SKG.2018.00043","DOIUrl":"https://doi.org/10.1109/SKG.2018.00043","url":null,"abstract":"Requirement analysis of network organization is critical for network efficiency as well as operational efficiency in operational planning. Currently, most researchers pay much attention to the organization itself, but pay little attention to the requirement analysis of organization. In this paper, we propose a novel approach of operational network organization based on Problem Domains Oriented Analysis (PDOA) model, aiming to meet the need of precise operational network organizing. We firstly introduce the PDOA model and the concrete analysis process. Afterwards, a list of the requirement of network capability is proposed and classified using the PDOA model. In the end, Basing on the context diagram and problem diagram analysis, we divide the problem domain in detail.","PeriodicalId":265760,"journal":{"name":"2018 14th International Conference on Semantics, Knowledge and Grids (SKG)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130080589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alper Tufek, A. Gurbuz, Omer Faruk Ekuklu, M. Aktaş
Loss of life and property, disruptions to transportation and trading operations, and other consequences of meteorological events increasingly highlight the importance of fast and accurate weather forecasting. For this reason, various Numerical Weather Prediction (NWP) models are run worldwide on either a local or a global scale. However, depending on the input parameters and the size of the forecast domain, NWP models typically take hours to finish a complete run. Provenance information is of central importance for detecting unexpected events that may develop during the course of model execution, and also for taking necessary action as early as possible. In addition, the need to share scientific data and results between researchers or scientists also highlights the importance of data quality and reliability. This can only be achieved through provenance information collected during the entire lifecycle of the data of interest. The Weather Research and Forecasting (WRF) Model is an open-source Numerical Weather Prediction model. In this study, we develop a framework for tracking the WRF model and for generating, storing and analyzing provenance data. The proposed system enables easy management and understanding of numerical weather forecast workflows by providing provenance graphs. By analyzing these graphs, potential faulty situations that may occur during the execution of WRF can be traced to their root causes. Our proposed system has been evaluated and shown to perform well even under a high-frequency provenance information flow.
{"title":"Provenance Collection Platform for the Weather Research and Forecasting Model","authors":"Alper Tufek, A. Gurbuz, Omer Faruk Ekuklu, M. Aktaş","doi":"10.1109/SKG.2018.00009","DOIUrl":"https://doi.org/10.1109/SKG.2018.00009","url":null,"abstract":"Loss of life and property, disruptions to transportation and trading operations, etc. caused by meteorological events increasingly highlight the importance of fast and accurate weather forecasting. For this reason, there are various Numerical Weather Prediction (NWP) models worldwide that are run on either a local or a global scale. NWP models typically take hours to finish a complete run, however, depending on the input parameters and the size of the forecast domain. Provenance information is of central importance for detecting unexpected events that may develop during the course of model execution, and also for taking necessary action as early as possible. In addition, the need to share scientific data and results between researchers or scientists also highlights the importance of data quality and reliability. This can only be achieved through provenance information collected during the entire lifecycle of the data of interest. The Weather Research and Forecasting (WRF) Model is a Numerical Weather Prediction model developed as open source. In this study, we develop a framework for tracking the WRF model and for generating, storing and analyzing provenance data. The proposed system enables easy management and understanding of numerical weather forecast workflows by providing provenance graphs. By analyzing these graphs, potential faulty situations that may occur during the execution of WRF can be traced to their root causes. Our proposed system has been evaluated and has been shown to perform well even in a high-frequency provenance information flow.","PeriodicalId":265760,"journal":{"name":"2018 14th International Conference on Semantics, Knowledge and Grids (SKG)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126337469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a multi-attribute aggregation query mechanism in the context of edge computing, where an energy-aware IR-tree is constructed to support query processing within a single edge network, while an edge-node routing graph is established to facilitate query processing for marginal smart things contained in contiguous edge networks. This in-network and localized strategy has shown its efficiency and applicability for query processing in IoT sensing networks, and experimental evaluation demonstrates that the technique outperforms its rivals in reducing network traffic and energy consumption.
{"title":"Multi-Attribute Query Processing Through In-Network Aggregation in Edge Computing","authors":"Xiaocui Li, Zhangbing Zhou","doi":"10.1109/SKG.2018.00027","DOIUrl":"https://doi.org/10.1109/SKG.2018.00027","url":null,"abstract":"This paper proposes a multi-attribute aggregation query mechanism in the context of edge computing, where an energy-aware IR-tree is constructed to process query processing in single edge networks, while an edge node routing graph is es-tablished to facilitate query processing for marginal smart things contained in contiguous edge networks. This in-network and localized strategy has shown its e □ ciency and applicability of query processing in IoT sensing networks, and experimental evaluation demonstrates that this technique performs better than the rivals in reducing the network tra □ c and energy consumption.","PeriodicalId":265760,"journal":{"name":"2018 14th International Conference on Semantics, Knowledge and Grids (SKG)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127354493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Matrix factorization is a popular and successful method, and it has become a common model for collaborative filtering in recommendation systems. Because the rating matrix is mostly sparse and its dimensions grow rapidly, the prediction accuracy and computation time of current matrix factorization methods are limited. In this paper, a matrix factorization model based on user characteristics is proposed, which effectively improves the accuracy of rating prediction and reduces the number of iterations. Tests on real data and comparisons with existing recommendation algorithms show that the proposed method predicts users' ratings well.
{"title":"Matrix Factorization Recommendation Algorithm Based on User Characteristics","authors":"Hongtao Liu, Ouyang Mao, Chen Long, Xueyan Liu, Zhenjia Zhu","doi":"10.1109/SKG.2018.00012","DOIUrl":"https://doi.org/10.1109/SKG.2018.00012","url":null,"abstract":"Matrix Factorization is a popular and successful method. It is already a common model method for collaborative filtering in recommendation systems. As most of the scoring matrix is sparse and the dimensions are increasing rapidly, the prediction accuracy and calculation time of the current matrix decomposition are limited. In this paper, a matrix decomposition model based on user characteristics is proposed, which can effectively improve the accuracy of predictive scoring and reduce the number of iterations. By testing the actual data and comparing it with the existing recommendation algorithm, the experimental results show that the method proposed in this paper can predict user's score well.","PeriodicalId":265760,"journal":{"name":"2018 14th International Conference on Semantics, Knowledge and Grids (SKG)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121257949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Glioma is one of the most widespread and aggressive forms of primary brain tumor. Accurate subcortical brain segmentation is essential in the evaluation of gliomas, helping to monitor their growth and assess the effect of medication. Manual segmentation of Magnetic Resonance Imaging (MRI) data requires considerable human effort. Deep learning methods have become a powerful tool for learning features automatically in medical imaging applications, including brain tissue segmentation, liver segmentation, and brain tumor segmentation. The shape, structure, and location of gliomas differ among individual patients, which makes developing a model challenging. In this paper, a 3D hyper-dense Convolutional Neural Network (CNN) is developed to segment tumors; it captures global and local contextual information from global and local patches at two scales of receptive field. Densely connected blocks exploit the benefits of a CNN to boost segmentation performance on Enhancing Tumor (ET), Non-Enhancing Tumor (NET), and Peritumoral Edema (PE). The dense architecture adopts a 3D Fully Convolutional Network (FCN) design for end-to-end volumetric prediction. Dense connectivity offers a form of deep supervision and improves gradient flow during learning. The network is trained hierarchically on global and local patches. Both kinds of patches are processed in separate paths, and dense connections occur not only between layers of the same path but also between layers of different paths. Our approach is validated on the BraTS 2018 dataset with Dice scores of 0.87, 0.81 and 0.84 for the complete tumor, enhancing tumor, and tumor core, respectively. These results are very close to the reported state of the art, and our approach compares favorably with existing 3D approaches in terms of compactness, time and parameter efficiency for MRI brain tumor segmentation.
{"title":"3D Hyper-Dense Connected Convolutional Neural Network for Brain Tumor Segmentation","authors":"Saqib Qamar, Hai Jin, Ran Zheng, Parvez Ahmad","doi":"10.1109/SKG.2018.00024","DOIUrl":"https://doi.org/10.1109/SKG.2018.00024","url":null,"abstract":"Glioma is one of the most widespread and intense forms of primary brain tumors. Accurate subcortical brain segmentation is essential in the evaluation of gliomas which helps to monitor the growth of gliomas and assists in the assessment of medication effects. Manual segmentation is needed a lot of human resources on Magnetic Resonance Imaging (MRI) data. Deep learning methods have become a powerful tool to learn features automatically in medical imaging applications including brain tissue segmentation, liver segmentation, and brain tumor segmentation. The shape of gliomas, structure, and location are different among individual patients, and it is a challenge to developing a model. In this paper, 3D hyper-dense Convolutional Neural Network(Cnn)is developed to segment tumors, in which it captures the global and local contextual information from two scales of global and local patches along with the two scales of receptive field. Densely connected blocks are used to exploit the benefit of a CNN to boost the model segmentation performance in Enhancing Tumor (ET), Non-Enhancing Tumor (NET), and Peritumoral Edema (PE). This dense architecture adopts 3D Fully Convolutional Network (FCN) architecture that is used for end-to-end volumetric prediction. The dense connectivity can offer a chance of deep supervision and improve gradient flow information in the learning process. The network is trained hierarchically based on global and local patches. In this scenario, the both patches are processed in their separate path, and dense connections happen not only between same path layers but also between different path layers. Our approach is validated on the BraTS 2018 dataset with the dice-score of 0.87, 0.81 and 0.84 for the complete tumor, enhancing tumor, and tumor core respectively. These outcomes are very close to the reported state-of-the-art results, and our approach is preferable to present 3D-based approaches when it comes to compactness, time and parameter efficiency on MRI brain tumor segmentation.","PeriodicalId":265760,"journal":{"name":"2018 14th International Conference on Semantics, Knowledge and Grids (SKG)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128659205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jie Jian, Xiaoming Yu, Jinchi Zhu, Xueyan Liu, Hongtao Liu
To better discover overlapping communities, this paper proposes an overlapping community detection method (INFELPA) based on label influence and the propagation of edge labels. We use node influence to initialize the labels on edges and sort edges by influence to avoid random factors during the edge-label updating process. To retain multiple communities, we keep multiple labels on each edge and map the final edge labels back to the nodes. The experimental results show that the algorithm has certain competitive advantages.
{"title":"Study on Community Discovery Algorithm from the Perspection of Label Influence Propagation","authors":"Jie Jian, Xiaoming Yu, Jinchi Zhu, Xueyan Liu, Hongtao Liu","doi":"10.1109/SKG.2018.00025","DOIUrl":"https://doi.org/10.1109/SKG.2018.00025","url":null,"abstract":"In order to better discover the overlapping communities, this paper proposes an overlapping community detection method (INFELPA) based on the influence of the label and the spread of the edge tags. We use the influence of the node to initialize the label on the edge and sort the edge's influence to avoid these random factors during the edge label updating process. In order to retain multiple communities we retain multiple labels on the edges and restore the completed edge tag to the node. The experimental results show that this algorithm has certain competitive advantages","PeriodicalId":265760,"journal":{"name":"2018 14th International Conference on Semantics, Knowledge and Grids (SKG)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129425588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we propose a semantic-based paraphrase identification approach. The core idea is to identify paraphrases when sentences contain a set of named entities and common words. The developed approach distinguishes the computation of semantic similarity for named-entity tokens from that of the rest of the sentence text. More specifically, it integrates word semantic similarity derived from WordNet taxonomic relations with named-entity semantic relatedness inferred from the crowd-sourced knowledge in Wikipedia. In addition, we improve the WordNet similarity measure by nominalizing verbs, adjectives and adverbs with the aid of the Categorial Variation database (CatVar). The paraphrase identification system is then evaluated on two different datasets, namely the Microsoft Research Paraphrase Corpus (MSRPC) and the TREC-9 Question Variants. Experimental results on these datasets show that our system outperforms baselines on the paraphrase identification task.
{"title":"Semantic and Heuristic Based Approach for Paraphrase Identification","authors":"Muhidin A. Mohamed, M. Oussalah","doi":"10.1109/SKG.2018.00037","DOIUrl":"https://doi.org/10.1109/SKG.2018.00037","url":null,"abstract":"In this paper, we propose a semantic-based paraphrase identification approach. The core concept of this proposal is to identify paraphrases when sentences contain a set of named-entities and common words. The developed approach distinguishes the computation of the semantic similarity of named-entity tokens from the rest of the sentence text. More specifically, this is based on the integration of word semantic similarity derived from WordNet taxonomic relations, and named-entity semantic relatedness inferred from the crowd-sourced knowledge in Wikipedia database. Besides, we improve WordNet similarity measure by nominalizing verbs, adjectives and adverbs with the aid of Categorial Variation database (CatVar). The paraphrase identification system is then evaluated using two different datasets; namely, Microsoft Research Paraphrase Corpus (MSRPC) and TREC-9 Question Variants. Experimental results on the aforementioned datasets show that our system outperforms baselines in the paraphrase identification task.","PeriodicalId":265760,"journal":{"name":"2018 14th International Conference on Semantics, Knowledge and Grids (SKG)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129940727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image mining is widely used to process geo-tagged landmark images from alphanumeric and real-time satellite sources. Loss of useful information during feature extraction may result in inappropriate image categorization, and preserving that information is a critical challenge in feature extraction and reduction. This work uses hybrid features, namely Local Binary Patterns (LBP), colour moments and statistical features, to improve categorization accuracy. k-means clustering is then used to determine the class labels used for model training. To mitigate overfitting and increase overall classification precision, the Component Reduced Naive Bayesian (CRNB) model is proposed. In addition, the physical landmarks of the geo-tagged images are located using the Hybrid Feature Extraction based Naive Bayesian (HFE-NB) approach. In the experiments, two different datasets are used to test the proposed model, and several existing models are used for comparison. The results show that the proposed method significantly improves the precision, recall and accuracy of image retrieval. Compared with existing techniques, it provides the best results by using texture and colour features, with increases in sensitivity and specificity of 3.36% and 0.1%, respectively.
{"title":"Towards Efficient for Learning Model Image Retrieval","authors":"M. J. J. Ghrabat, Guangzhi Ma, Chih Cheng","doi":"10.1109/SKG.2018.00020","DOIUrl":"https://doi.org/10.1109/SKG.2018.00020","url":null,"abstract":"Image mining is widely concerned in processing geo-tagged landmark images of alphanumeric and real-time satellites. Useful information loss in feature extracting process may results in inappropriate image categorization. Reserving useful information is highly challenging and critical in feature extraction and reduction. This research work intends to utilize the hybrid features such as Local Binary Pattern (LBP), colour moments and statistical features for enhancing the categorization accuracy. Then, the k-means classification technique is used to determine the class labels used for model training. In order to mitigate overfitting and to increase the overall classification precision, the Component Reduced Naive Bayesian (CRNB) model is proposed. Also, the physical landmarks of the geo-tagged images are located by using the Hybrid Feature Extraction based Naive Bayesian (HFE-NB) approach. During experiments, two different datasets have been used to test the proposed model, and some other existing models are considered to compare the results. The results stated that the proposed method significantly improves the precision, recall and accuracy of image retrieval. When compared to the existing techniques, it provides the best results by using the texture and colour features with increased sensitivity and specificity such as 3.36% and 0.1 % respectively.","PeriodicalId":265760,"journal":{"name":"2018 14th International Conference on Semantics, Knowledge and Grids (SKG)","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116358054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xianglong Chen, Chunping Ouyang, Yongbin Liu, Lingyun Luo, Xiaohua Yang
Deep learning has shown its effectiveness in many tasks such as text classification and computer vision. Most text classification approaches concentrate on using convolutional neural networks and recurrent neural networks to obtain text feature representations, and in some studies an attention mechanism is adopted to improve classification accuracy. For Task 6 of NLP&CC2018, a hybrid deep learning model that combines BiGRU, CNN and an attention mechanism is proposed to improve text classification. The experimental results show that the F1-score of the proposed model exceeds the task's baseline model. In addition, this hybrid deep learning model achieves higher precision, recall and F1-score than several other popular deep learning models, with an F1-score improvement of 5.4% over the single CNN model.
{"title":"A Hybrid Deep Learning Model for Text Classification","authors":"Xianglong Chen, Chunping Ouyang, Yongbin Liu, Lingyun Luo, Xiaohua Yang","doi":"10.1109/SKG.2018.00014","DOIUrl":"https://doi.org/10.1109/SKG.2018.00014","url":null,"abstract":"Deep learning has shown its effectiveness in many tasks such as text classification and computer vision. Most text classification tasks are concentrated in the use of convolution neural network and recurrent neural network to obtain text feature representation. In some researches, Attention mechanism is usually adopted to improve classification accuracy. According to the target of task 6 in NLP&CC2018, a hybrid deep learning model which combined BiGRU, CNN and Attention mechanism was proposed to improve text classification. The experimental results show that the F1-score of the proposed model successfully excels the task's baseline model. Besides, this hybrid Deep Learning model gets higher Precision, Recall and F1-score comparing with some other popular Deep Learning models, and the improvement of on F1-score is 5.4% than the single CNN model.","PeriodicalId":265760,"journal":{"name":"2018 14th International Conference on Semantics, Knowledge and Grids (SKG)","volume":"182 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122987532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}