Pub Date: 2020-11-01 | DOI: 10.1109/SCC49832.2020.00043
Yilei Zhang, Xiao Zhang, Peiyun Zhang, Jun Luo
With the widespread adoption of cloud computing, Service-Oriented Architecture (SOA) facilitates the deployment of large-scale online applications in many key areas where quality and reliability are critical. To ensure the performance of cloud applications, Quality of Service (QoS) is widely used as a key metric to enable QoS-driven service selection, composition, adaptation, etc. Since the QoS data observed by users is sparse due to technical constraints, previous studies have proposed prediction approaches to solve this problem. However, the dynamic nature of the cloud environment requires timely prediction of time-varying QoS values. In addition, unreliable QoS data from untrustworthy users may significantly degrade prediction accuracy. In this paper, we propose a credible online QoS prediction approach to address these challenges. We evaluate user credibility through a reputation mechanism and employ online learning techniques to provide QoS prediction results at runtime. The proposed approach is evaluated on a large-scale real-world QoS dataset, and the experimental results demonstrate its effectiveness and efficiency in unreliable cloud environments.
Title: "Credible and Online QoS Prediction for Services in Unreliable Cloud Environment"
Venue: 2020 IEEE International Conference on Services Computing (SCC)
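The reputation-plus-online-learning idea in the abstract above can be sketched as a toy predictor: each service keeps a running QoS estimate, and each user's influence on it is scaled by a credibility score that decays when the user's reports persistently disagree with the current estimate. All names and update rules here are hypothetical illustrations, not the paper's actual model.

```python
from collections import defaultdict

class OnlineQoSPredictor:
    """Sketch of credibility-aware online QoS prediction (hypothetical design)."""

    def __init__(self, learning_rate=0.3, reputation_decay=0.9):
        self.lr = learning_rate
        self.decay = reputation_decay
        self.estimate = {}                           # service -> current QoS estimate
        self.reputation = defaultdict(lambda: 1.0)   # user -> credibility in (0, 1]

    def observe(self, user, service, qos_value):
        prev = self.estimate.get(service, qos_value)
        residual = abs(qos_value - prev) / (abs(prev) + 1e-9)
        # Users whose reports keep disagreeing with the consensus lose reputation.
        agreement = 1.0 / (1.0 + residual)
        self.reputation[user] = (self.decay * self.reputation[user]
                                 + (1 - self.decay) * agreement)
        # Online update, weighted by the reporter's credibility.
        step = self.lr * self.reputation[user]
        self.estimate[service] = prev + step * (qos_value - prev)
        return self.estimate[service]

    def predict(self, service):
        return self.estimate.get(service)
```

With alternating reports from an honest user (100 ms) and a user reporting wildly different values (1000 ms), the outlier's reputation ends up below the honest user's, so its pull on the estimate shrinks over time.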
Pub Date: 2020-11-01 | DOI: 10.1109/SCC49832.2020.00040
Soheila Sadeghiram, Hui Ma, Gang Chen
Because the complex needs of companies cannot be met by a single service, Data-intensive Web Service Composition (DWSC) is required to compose multiple services in a distributed service environment. Compositions must satisfy functional specifications and non-functional requirements, i.e., Quality of Service (QoS). Existing approaches to DWSC make the underlying assumption that the participating Web services and communication networks are static, so that their QoS and bandwidth seldom change. However, those approaches are impractical, since network failures or dynamic bandwidth changes cause violations of user agreements. Additionally, they generally ignore the distribution of services, and therefore variations in network attributes are not taken into account. In this paper, we address the problem of dynamic distributed DWSC (D2-DWSC), design a simulation model for bandwidth patterns, and propose an algorithm that generates robust D2-DWSC solutions able to cope with changes in dynamic environments. Experimental results verify the effectiveness of our method.
Title: "A Distance-based Genetic Algorithm for Robust Data-intensive Web Service Composition in Dynamic Bandwidth Environment"
Venue: 2020 IEEE International Conference on Services Computing (SCC)
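The composition problem described above can be illustrated with a bare-bones genetic algorithm: a chromosome assigns one concrete service to each abstract task, and the fitness penalizes both service latency and the network distance between consecutively chosen services (standing in, very loosely, for bandwidth and locality awareness). This is a generic GA skeleton on invented data, not the paper's distance-based operators.

```python
import random

random.seed(42)

N_TASKS, N_SERVICES = 5, 8
# Invented per-task service latencies and a symmetric service "distance".
LATENCY = [[random.uniform(1, 10) for _ in range(N_SERVICES)] for _ in range(N_TASKS)]
DIST = [[abs(i - j) for j in range(N_SERVICES)] for i in range(N_SERVICES)]

def fitness(chrom):
    """Lower is better: total latency plus inter-service distance penalty."""
    cost = sum(LATENCY[t][s] for t, s in enumerate(chrom))
    cost += sum(DIST[chrom[i]][chrom[i + 1]] for i in range(len(chrom) - 1))
    return cost

def evolve(pop_size=30, generations=60, mut_rate=0.2):
    pop = [[random.randrange(N_SERVICES) for _ in range(N_TASKS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_TASKS)    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:        # point mutation
                child[random.randrange(N_TASKS)] = random.randrange(N_SERVICES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
```

The robustness aspect of the paper would additionally require evaluating candidates under perturbed bandwidth scenarios; here the fitness is a single static cost for brevity.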
Pub Date: 2020-11-01 | DOI: 10.1109/SCC49832.2020.00029
Yi Pan, Xiaoning Sun, Yunni Xia, Wanbo Zheng, Xin Luo
The cloud computing paradigm is distinguished by its ability to offer elastic computational resource provisioning and deliver on-demand, versatile services. It is thus increasingly popular to build business-process and workflow-based applications on cloud computing platforms. However, it remains difficult to guarantee the cost-effectiveness and quality of service of cloud-based workflows, because real-world cloud services are usually subject to real-time performance variations and fluctuations. Existing research mainly assumes that clouds deliver constant performance and formulates scheduling decision-making as a static optimization problem. In this work, instead, we consider scientific computing processes supported by decentralized cloud infrastructures with fluctuating QoS, and aim at managing the monetary cost of workflows while satisfying a completion-time constraint. We address the performance-trend-aware workflow scheduling problem by leveraging a time-series-based prediction model and a Critical-Path-Duration-Estimation-based (CPDE for short) scheduling strategy. The proposed method is capable of exploiting real-time trends in the performance of cloud infrastructures and generating dynamic workflow scheduling plans. To prove the effectiveness of our proposed method, we build a large-prime-number-generation workflow supported by real-world third-party commercial clouds and show that our method clearly beats existing approaches in terms of cost, workflow completion time, and Service-Level-Agreement (SLA) violation rate.
Title: "A Predictive-Trend-Aware and Critical-Path-Estimation-Based Method for Workflow Scheduling Upon Cloud Services"
Venue: 2020 IEEE International Conference on Services Computing (SCC)
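Two ingredients named in the abstract above, trend prediction and critical-path duration estimation, can be sketched independently of the paper's CPDE strategy: a moving-average forecast of per-task durations fed into a longest-path computation over the workflow DAG. The workflow shape and numbers below are invented for illustration.

```python
# (1) Trivial time-series forecast of task duration; the paper uses a
#     richer time-series prediction model.
def forecast_duration(history, window=3):
    """Predict the next duration as a short moving average."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# (2) Critical-path duration estimation on a workflow DAG.
def critical_path_duration(tasks, deps, duration):
    """Longest path through a DAG given per-task durations.

    tasks: task ids in topological order
    deps:  dict task -> list of predecessor tasks
    """
    finish = {}
    for t in tasks:
        start = max((finish[p] for p in deps.get(t, [])), default=0.0)
        finish[t] = start + duration[t]
    return max(finish.values())

# Example: diamond-shaped workflow A -> (B, C) -> D, with durations
# predicted from recent per-task performance samples.
history = {"A": [2, 2, 2], "B": [5, 6, 7], "C": [3, 3, 3], "D": [1, 1, 1]}
duration = {t: forecast_duration(h) for t, h in history.items()}
deps = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
makespan = critical_path_duration(["A", "B", "C", "D"], deps, duration)
# B's predicted duration (6.0) dominates C's, so the critical path is A-B-D.
```

A scheduler can recompute this estimate as fresh performance samples arrive and re-plan whenever the predicted makespan threatens the completion-time constraint.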
Pub Date: 2020-11-01 | DOI: 10.1109/SCC49832.2020.00024
Md Rakib Shahriar, Xiaoqing Frank Liu, Md Mahfuzer Rahman, S. Sunny
Traditional service registries or catalogs publish and describe the individual services of different entities. In this paper, we propose a new approach to service publication and discovery based on the philosophy of "product catalogs". In this new approach, entities are equivalent to products in typical product catalogs. We conceptualize an entity registry where each entry corresponds to an entity's collection of remote services. For the logical abstraction of entities, we utilize the concept of the Digital Twin (DT). To support our objective, we present a reference DT architecture that virtualizes an entity, exposes all of its functionalities as services, and offers a remotely programmable instance for invoking the services directly from application code. To publish and discover DTs, we propose a novel framework, OpenDT. This framework enables entity owners to publish DTs and allows users to discover them for creating mashups and applications using DT services. It also allows developers to create composite DTs that consist of other DTs for large and complex entities. To evaluate OpenDT, we implement a cyber-manufacturing testbed comprising multiple machining tools and their DTs. Case validations on the testbed demonstrate the efficiency of DT-driven entity publication and discovery.
Title: "OpenDT: A Reference Framework for Service Publication and Discovery using Remote Programmable Digital Twins"
Venue: 2020 IEEE International Conference on Services Computing (SCC)
Pub Date: 2020-11-01 | DOI: 10.1109/SCC49832.2020.00038
Zhongjie Wang, Min Li, Zhiying Tu
Many software systems have been servitized into service-based architectures, and their constituent service components are deployed in distributed, ubiquitous environments. In such systems, the quality perceived by large numbers of users varies across times, locations, and domains. This phenomenon is called the Temporal-Spatial-Domain (TSD) distribution of quality attributes, and there is a TSD space Q for each quality attribute. When two service components offered by different providers collaborate, severe quality conflicts may arise due to inconsistencies among the quality standards adopted in different business domains and among subjective user perceptions at different locations or times. It is therefore necessary to align the values of quality attributes in terms of their TSD distributions. This paper presents a model to formally delineate the TSD distribution characteristics of quality attributes, and then uses Quality Contour Lines (QCLs) to represent equivalent user-perceived quality levels at different TSD points. A quality alignment method is proposed to eliminate quality inconsistencies based on pre-defined QCLs and Service Quality Levels (SQLs). This work extends traditional software/service quality models and could help software/service developers carry out more precise quality design and quality improvement.
Title: "A Temporal-Spatial-Domain Distribution Model and Alignment Method for Quality Attributes"
Venue: 2020 IEEE International Conference on Services Computing (SCC)
Pub Date: 2020-11-01 | DOI: 10.1109/SCC49832.2020.00025
Md Mahfuzer Rahman, X. Liu
Mashup application developers combine relevant web APIs from existing sources. Still, developers often face challenges in finding appropriate web APIs, as they have to go through thousands of available ones. Recommending relevant web APIs can help, but because each mashup application invokes very few APIs, recommendation models must learn mashup invocation patterns from a sparse dataset, which ultimately hurts their accuracy. Effectively reducing this sparsity, and exploiting supplemental information such as the mashup- and web-API-specific features that lead different mashups to invoke the same web APIs and certain web APIs to be used together within a mashup, can help generate more accurate and useful recommendations. In this work, we developed a novel web API recommendation model for mashup applications, which uses two-level topic modeling of mashups, together with user interaction with mashups and web APIs, to reduce the sparsity of the initial dataset. We then applied regularized matrix factorization with mashup and web API embeddings. These embeddings integrate 'mashup to mashup' and 'web API to web API' relationships with 'mashup to web API' invocation analysis. Compared with existing web API recommendation models on a dataset collected from ProgrammableWeb, our model achieved 54% higher precision, 36.4% higher Normalized Discounted Cumulative Gain (NDCG), and 36% higher recall than other baseline models.
Title: "Integrated Topic Modeling and User Interaction Enhanced WebAPI Recommendation using Regularized Matrix Factorization for Mashup Application Development"
Venue: 2020 IEEE International Conference on Services Computing (SCC)
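The core of the model above, regularized matrix factorization over the mashup-API interaction matrix, can be reduced to a few lines of plain SGD. The topic-model embeddings and the 'mashup to mashup' / 'web API to web API' regularizers the paper adds are omitted; this is only the MF baseline on an invented toy interaction matrix.

```python
import random

random.seed(0)

def train_mf(ratings, n_users, n_items, k=4, lr=0.05, reg=0.02, epochs=200):
    """Factorize observed (user, item, rating) triples into latent factors
    P (users) and Q (items) with L2 regularization on both."""
    P = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            err = r - pred
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                # Gradient step with L2 regularization on both factors.
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

# Tiny binary interaction matrix: 1 = mashup invoked the web API.
ratings = [(0, 0, 1), (0, 1, 1), (1, 1, 1), (1, 2, 1), (2, 0, 1), (2, 2, 1)]
P, Q = train_mf(ratings, n_users=3, n_items=3)
score = lambda u, i: sum(P[u][f] * Q[i][f] for f in range(4))
```

Unobserved (mashup, API) pairs are then ranked by `score` to produce recommendations; the paper's extra regularizers would additionally pull similar mashups and co-used APIs toward nearby embeddings.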
Frequent pattern mining in graph data has been a hot topic in recent years. At present, most frequent graph pattern mining methods use subgraph isomorphism to match candidate patterns against the data graph. However, in applications where matching need not be exact, the topology constraints of subgraph isomorphism may miss some meaningful frequent patterns. Simulation matching plays an important role in graph pattern matching; in frequent graph pattern mining, however, it may match a connected candidate pattern to a disconnected substructure of the data graph. Since the topology of the matching results cannot be guaranteed, the quality of mining suffers greatly, and a large number of redundant graph patterns with repeated structure may be mined. Therefore, this paper proposes a new simulation-matching concept, colSimulation, which ensures point-to-point matching between the pattern graph and the data graph, effectively avoids redundant mining results, and improves mining speed. D-colSimulation, proposed in this paper, is a distributed frequent graph pattern mining method based on colSimulation for large-scale graph data.
Experiments show that our method not only improves mining efficiency, but also performs well on datasets where subgraph isomorphism performs poorly.
Title: "D-colSimulation: A Distributed Approach for Frequent Graph Pattern Mining based on colSimulation in a Single Large Graph"
Venue: 2020 IEEE International Conference on Services Computing (SCC)
Authors: Guanqi Hua, Junhua Zhang, Li-zhen Cui, Wei Guo, Xudong Lu, Wei He
Pub Date: 2020-11-01 | DOI: 10.1109/SCC49832.2020.00019
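For contrast with the paper's colSimulation, here is classical graph simulation implemented as the usual fixpoint refinement: a pattern node may be matched by many data nodes, and, as the abstract notes, the matched substructure need not be connected or one-to-one. The tiny pattern and data graph are invented.

```python
def simulate(p_nodes, p_edges, p_label, d_nodes, d_edges, d_label):
    """Classical graph simulation: for each pattern node, compute the set
    of data nodes that simulate it, by iteratively discarding candidates
    whose successors cannot match the pattern node's successors."""
    succ_p = {u: [y for (x, y) in p_edges if x == u] for u in p_nodes}
    succ_d = {v: [y for (x, y) in d_edges if x == v] for v in d_nodes}
    # Initial candidates: label-compatible data nodes.
    sim = {u: {v for v in d_nodes if d_label[v] == p_label[u]} for u in p_nodes}
    changed = True
    while changed:
        changed = False
        for u in p_nodes:
            for v in list(sim[u]):
                # v survives only if every pattern successor of u can be
                # matched by some data successor of v.
                if any(not (set(succ_d[v]) & sim[w]) for w in succ_p[u]):
                    sim[u].discard(v)
                    changed = True
    return sim

# Pattern: a -> b.  Data graph: a1 -> b1, plus a2 with no outgoing edge.
sim = simulate(
    ["pa", "pb"], [("pa", "pb")], {"pa": "a", "pb": "b"},
    ["a1", "a2", "b1"], [("a1", "b1")], {"a1": "a", "a2": "a", "b1": "b"},
)
# a2 has no b-successor, so only a1 simulates the pattern's a-node.
```

colSimulation, as the abstract describes it, further restricts such match sets to point-to-point correspondences so that the matched substructure's topology is guaranteed.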
Pub Date: 2020-11-01 | DOI: 10.1109/SCC49832.2020.00048
Anas Dawod, Dimitrios Georgakopoulos, P. Jayaraman, A. Nirmalathas
This paper introduces a novel IoT-owned service for Global IoT Device Discovery and Integration (GIDDI) of existing IoT devices that are owned and managed by different parties, the IoT device providers. The GIDDI service promotes the sharing of existing IoT devices and the deployment of new devices via a revenue-generating scheme for the IoT device providers. Unlike existing IoT device discovery and integration solutions, which are currently owned and/or controlled by specific IoT platform or service providers, the GIDDI service is specifically designed to manage all the metadata needed for IoT device discovery and integration in a specialized blockchain (which we refer to as the GIDDI Blockchain) and, via this blockchain-based solution, to be IoT-owned (i.e., not owned or controlled by any specific provider). In addition to the GIDDI Blockchain, the GIDDI service includes a distributed GIDDI Marketplace that provides IoT device discovery, integration, and payment functionality. The paper describes a proof-of-concept implementation of the GIDDI Blockchain. It also provides an experimental evaluation of the GIDDI Blockchain under a variety of IoT device registration and query workloads. An evaluation of the proposed GIDDI service concludes the paper.
Title: "An IoT-owned Service for Global IoT Device Discovery, Integration and (Re)use"
Venue: 2020 IEEE International Conference on Services Computing (SCC)
With the popularity of smartphones, mobile applications (mobile apps) have become a necessity in people's lives and work. The massive number of apps provides users with a variety of choices, but also brings about the information-overload problem. In reality, the number of apps a user has actually used is very limited, resulting in a very sparse interaction matrix between users and apps. Predicting numerous unknown ratings from such a sparse matrix is not accurate enough, so the recommended results fail to satisfy users. This paper aims to exploit users' historical behavior data and apps' side information to make app recommendations that alleviate information overload. Specifically, multiple semantic meta-graphs are first designed by leveraging user information, app information, users' historical usage records, and apps' side information. Then, similarity matrices between users and apps are obtained from the different semantic meta-graphs. A graph neural network with an attention mechanism is employed to learn the collaborative information between users and apps and to selectively aggregate the feature information of neighbors. Finally, multi-view learning and an attention mechanism are adopted to obtain users' ratings for apps from different perspectives.
Comprehensive experiments with different numbers of training samples show that the proposed method outperforms other comparison methods.
Title: "Graph Neural Network and Multi-view Learning Based Mobile Application Recommendation in Heterogeneous Graphs"
Venue: 2020 IEEE International Conference on Services Computing (SCC)
Authors: Fenfang Xie, Zengxu Cao, Yangjun Xu, Liang Chen, Zibin Zheng
Pub Date: 2020-11-01 | DOI: 10.1109/SCC49832.2020.00022
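The attention-based neighbor aggregation step that the abstract describes can be illustrated in isolation: neighbors' feature vectors are combined with softmax weights derived from their similarity to the target node. A dot-product score stands in for the learned attention function, and the two-dimensional features are invented.

```python
import math

def attention_aggregate(target, neighbors):
    """Aggregate neighbor feature vectors, weighted by a softmax over
    their dot-product similarity to the target's features."""
    scores = [sum(t * n for t, n in zip(target, nb)) for nb in neighbors]
    m = max(scores)                                  # numerical stability
    weights = [math.exp(s - m) for s in scores]
    z = sum(weights)
    weights = [w / z for w in weights]
    dim = len(target)
    return [sum(w * nb[d] for w, nb in zip(weights, neighbors))
            for d in range(dim)]

user = [1.0, 0.0]
apps = [[1.0, 0.0], [0.0, 1.0]]   # the first app aligns with the user's features
agg = attention_aggregate(user, apps)
# The aligned app receives the larger softmax weight, pulling the
# aggregate toward [1, 0].
```

In a full GAT-style layer this score would be a learned function of both endpoints' (transformed) features, and the aggregation would be followed by a nonlinearity; this sketch shows only the selective-aggregation idea.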