Pub Date: 2019-12-01. DOI: 10.1109/SSCI44817.2019.9002779
Sina Khajehabdollahi, G. Martius, A. Levina
Can we generate abstract aesthetic images without bias from natural or human-selected image corpora? Are aesthetic images singled out by their correlation functions? In this paper we answer these and related questions. We generate images using compositional pattern-producing networks (CPPNs) with random weights and varying architecture. We demonstrate that even with randomly selected weights, the correlation functions remain largely determined by the network architecture. In a controlled experiment, human subjects picked aesthetic images out of a large dataset of all generated images. Statistical analysis reveals that the correlation function is indeed different for aesthetic images.
"Assessing Aesthetics of Generated Abstract Images Using Correlation Structure", 2019 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 306-313.
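A CPPN with random weights can be sketched in a few lines of numpy. The architecture, activations, and coordinate inputs below are illustrative assumptions, not the paper's exact setup; the point is only that pixel values are a smooth function of (x, y, r) computed by a small random network.

```python
import numpy as np

def cppn_image(width=64, height=64, layers=3, hidden=16, seed=0):
    """Generate one abstract grayscale image from a random-weight CPPN."""
    rng = np.random.default_rng(seed)
    ys, xs = np.mgrid[0:height, 0:width]
    x = (xs / (width - 1)) * 2 - 1          # coordinates scaled to [-1, 1]
    y = (ys / (height - 1)) * 2 - 1
    r = np.sqrt(x**2 + y**2)                # radial input, a common CPPN choice
    h = np.stack([x, y, r], axis=-1).reshape(-1, 3)
    for _ in range(layers):                 # random hidden layers, tanh activations
        w = rng.normal(0.0, 1.0, size=(h.shape[1], hidden))
        h = np.tanh(h @ w)
    w_out = rng.normal(0.0, 1.0, size=(h.shape[1], 1))
    img = np.tanh(h @ w_out).reshape(height, width)
    return (img + 1) / 2                    # rescale to [0, 1]

img = cppn_image()
```

Because the output varies smoothly with the input coordinates, the image's spatial correlation structure is shaped by the depth and width of the network rather than by the particular random weights, which is the architecture effect the abstract describes.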
Pub Date: 2019-12-01. DOI: 10.1109/SSCI44817.2019.9002764
Xiaojie Zhai, Xiukun Wei, Jihong Yang
As condition-monitoring equipment becomes more sophisticated, data-driven prognostic approaches for remaining useful life (RUL) are emerging. This paper introduces three classical prognostic approaches and verifies their effectiveness on whole-life-cycle experimental data from degraded rolling bearings. The results show that predictions from methods based on probability statistics are strongly affected by inaccurate prior parameters, and that the degradation model cannot be adapted accurately to individual bearings. The prognostic method based on artificial intelligence and condition monitoring is more accurate when training samples are scarce, and it outputs an RUL prediction interval with higher reliability. Therefore, improving the intelligent algorithm, by combining it with other models, to enhance the accuracy of its RUL prediction is the key to solving the online prognostics problem.
"A Comparative Study on the Data-driven Based Prognostic Approaches for RUL of Rolling Bearings", 2019 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1751-1755.
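As a hedged illustration of the statistics-based family of approaches (not any specific method from the paper), RUL can be read off a fitted degradation trend as the time remaining until a failure threshold is crossed. The health indicator, exponential model, and threshold below are synthetic assumptions.

```python
import numpy as np

# Synthetic degradation indicator (e.g., a bearing health index), exponential trend
t = np.arange(0.0, 50.0)
health = 0.1 * np.exp(0.05 * t)

# Fit a log-linear degradation model: log(health) = a + b*t
b, a = np.polyfit(t, np.log(health), 1)

# RUL = time until the fitted trend crosses the failure threshold
threshold = 2.0
t_fail = (np.log(threshold) - a) / b
rul = t_fail - t[-1]
```

The abstract's caveat maps directly onto this sketch: if the prior parameters (here, the fitted `a` and `b`, or the assumed exponential form) do not match the individual bearing, the threshold-crossing time, and hence the RUL estimate, is biased.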
Pub Date: 2019-12-01. DOI: 10.1109/SSCI44817.2019.9002691
Yan Pei
We propose to use the relationship between a kernel function's parameter and its decisional angle or distance metric to select the optimal parameter setting in kernel method-based algorithms. The kernel method is established in a reproducing kernel Hilbert space (RKHS), in which angle and distance are two metrics. We investigate the relationship between the kernel parameter and these metrics (distance or angle) in the RKHS. We design a target function of optimization to model the relationship between these two variables and find that (1) the landscape shapes of the parameter and the metrics are the same for the Gaussian kernel, because the norms of all vectors equal one in the RKHS; and (2) for the polynomial kernel, the landscape monotonicity is opposite to that of the Gaussian kernel. The monotonicity of the designed target functions likewise differs between the Gaussian and polynomial kernels. The distance and angle metrics have different distribution characteristics for deciding the parameter setting, so the two metrics must be balanced when selecting a proper kernel parameter. We use evolutionary multi-objective optimization algorithms to obtain Pareto solutions for the optimal selection of kernel parameters, and we find them to be useful tools for balancing the distance and angle metrics when setting parameters in kernel method-based algorithms.
"Automatic Decision Making for Parameters in Kernel Method", 2019 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 3207-3214.
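For the Gaussian kernel, the unit-norm property the abstract relies on makes the RKHS angle and distance simple functions of the kernel value k(x, y): cos(angle) = k and distance = sqrt(2 - 2k). A minimal numpy check (the example points and gamma values are arbitrary):

```python
import numpy as np

def gaussian_kernel(x, y, gamma):
    return np.exp(-gamma * np.sum((x - y) ** 2))

x = np.array([0.0, 1.0])
y = np.array([1.0, 0.0])

# k(x, x) = 1 for any gamma, so every mapped point has unit norm in the RKHS.
for gamma in (0.1, 1.0, 10.0):
    k = gaussian_kernel(x, y, gamma)
    angle = np.arccos(k)            # angle between phi(x) and phi(y)
    dist = np.sqrt(2.0 - 2.0 * k)   # RKHS distance ||phi(x) - phi(y)||
```

As gamma grows, k shrinks toward 0, so both the angle and the distance increase monotonically, the coupled behaviour of the two metrics that the paper's multi-objective formulation trades off.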
Pub Date: 2019-12-01. DOI: 10.1109/SSCI44817.2019.9003005
Wenjian Luo, Yingying Qiao, Xin Lin, Peilan Xu, M. Preuss
Evolutionary multimodal optimization has received considerable attention in the past decade. Most existing evolutionary multimodal optimization algorithms are designed to solve problems with relatively few global optima. In real-world applications, however, problems can possess many global optima (and sometimes acceptable local optima). Finding more global optima can help us learn more about their landscapes and distributions, but solving such problems with limited computational resources is a challenge for current algorithms. In this paper, many-modal optimization problems are studied, each of which has more than 100 global optima. We first present a benchmark of 10 many-modal problems based on existing multimodal optimization benchmarks; the numbers of global optima of these problems vary from 108 to 7776. Second, we propose the difficulty-based cooperative co-evolution (DBCC) strategy for solving many-modal optimization problems. DBCC comprises four primary steps: problem separation, resource allocation, optimization, and solution reconstruction. The clonal selection algorithm serves as the optimizer in DBCC. Experimental results demonstrate that DBCC provides satisfactory performance.
"Many-Modal Optimization by Difficulty-Based Cooperative Co-evolution", 2019 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1907-1914.
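The separation/reconstruction idea behind DBCC can be illustrated on a fully separable toy problem (a simplification for illustration only; the paper's benchmark problems and clonal-selection optimizer are more involved). If f(x) = sum over dimensions of g(x_d) and g has 4 global minima, solving each dimension independently and recombining the per-dimension optima yields 4^d global optima of f:

```python
import numpy as np

def g(x):
    # 1-D subproblem with global minima at the integers 0, 1, 2, 3
    return np.sin(np.pi * x) ** 2

# Problem separation: each dimension of the separable f is an independent
# subproblem. Optimization: a cheap grid search stands in for the optimizer.
grid = np.linspace(0.0, 3.0, 3001)
optima_1d = grid[g(grid) < 1e-9]       # finds x = 0, 1, 2, 3

# Solution reconstruction: every combination of per-dimension optima is a
# global optimum of the full d-dimensional problem.
d = 5
n_global_optima = len(optima_1d) ** d  # 4**5 = 1024
```

This combinatorial blow-up is exactly why benchmark optima counts like 7776 (= 6^5) arise, and why enumerating them within a fixed evaluation budget requires allocating resources across subproblems.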
Pub Date: 2019-12-01. DOI: 10.1109/SSCI44817.2019.9002801
Xiaoxiao Du, Alina Zare, Derek T. Anderson
Classifier fusion methods integrate complementary information from multiple classifiers or detectors and can aid remote sensing applications such as target detection and hyperspectral image analysis. The Choquet integral (CI), parameterized by fuzzy measures (FMs), has been widely used in the literature as an effective non-linear fusion framework. Standard supervised CI fusion algorithms often require precise ground-truth labels for each training data point, which can be difficult or impossible to obtain for remote sensing data. Previously, we proposed a Multiple Instance Choquet Integral (MICI) classifier fusion approach to address such label uncertainty, yet it can be slow to train due to the large search space of FM variables. In this paper, we propose a new, efficient learning scheme using binary fuzzy measures (BFMs) with the MICI framework for two-class classifier fusion given ambiguously and imprecisely labeled training data. We present experimental results on both synthetic data and real target detection problems and show that the proposed MICI-BFM algorithm can effectively and efficiently perform classifier fusion given remote sensing data with imprecise labels.
"Multiple Instance Choquet Integral with Binary Fuzzy Measures for Remote Sensing Classifier Fusion with Imprecise Labels", 2019 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1154-1162.
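The discrete Choquet integral sorts the source confidences and weights successive differences of the fuzzy measure; with a binary FM the measure only takes the values 0 and 1, which collapses the search space. A minimal sketch (the fuzzy measure below is a made-up example, not one learned by MICI):

```python
from itertools import combinations

def choquet_integral(h, g):
    """Discrete Choquet integral of confidences h w.r.t. fuzzy measure g,
    where g maps frozensets of source indices to measure values."""
    idx = sorted(range(len(h)), key=lambda i: h[i], reverse=True)
    ci, prev, subset = 0.0, 0.0, frozenset()
    for i in idx:                       # walk sources in decreasing confidence
        subset = subset | {i}
        ci += h[i] * (g[subset] - prev)
        prev = g[subset]
    return ci

# A binary fuzzy measure on 3 sources: any two or more sources together are
# fully trusted (measure 1), a single source alone is not (measure 0).
g = {frozenset(s): (1.0 if len(s) >= 2 else 0.0)
     for n in range(4) for s in combinations(range(3), n)}

h = [0.2, 0.9, 0.5]              # confidences from three fused sources
fused = choquet_integral(h, g)   # with this BFM: the 2nd-highest confidence
```

With a BFM, the integral reduces to picking the confidence at the rank where the measure first jumps to 1, here the second-highest value, 0.5.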
Pub Date: 2019-12-01. DOI: 10.1109/SSCI44817.2019.9002698
Yiqiao Cai, Deining Peng, Shunkai Fu, H. Tian
As an emerging research topic in evolutionary computation, evolutionary multitasking optimization (EMTO) solves multiple optimization tasks concurrently by transferring knowledge across them. However, most EMTO algorithms do not effectively share and utilize the promising search directions found during the evolutionary process. This paper therefore puts forward a difference vector sharing mechanism (DVSM) for multitasking differential evolution (MDE), with the purpose of capturing, sharing, and utilizing useful knowledge across different tasks. The performance of the proposed algorithm, MDE with DVSM (MDE-DVSM), is evaluated on a suite of single-objective multitasking benchmark problems. The experimental results demonstrate the superiority of MDE-DVSM over other competitive algorithms.
"Multitasking differential evolution with difference vector sharing mechanism", 2019 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 3039-3046.
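In differential evolution the "search directions" are the difference vectors used in mutation, v = x_r1 + F * (x_r2 - x_r3). A heavily simplified sketch of the sharing idea, one task reusing difference vectors captured from another, follows; the sharing probability, archive handling, and selection are assumptions, not the paper's actual DVSM.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_mutation(pop, F=0.5, shared_diffs=None, p_share=0.5):
    """One DE/rand/1 mutation pass. With some probability, reuse a difference
    vector shared from another task instead of computing a fresh one."""
    n = len(pop)
    mutants = np.empty_like(pop)
    new_diffs = []
    for i in range(n):
        r1, r2, r3 = rng.choice(n, 3, replace=False)
        if shared_diffs is not None and rng.random() < p_share:
            diff = shared_diffs[rng.integers(len(shared_diffs))]
        else:
            diff = pop[r2] - pop[r3]
        new_diffs.append(diff)          # capture this task's vectors for others
        mutants[i] = pop[r1] + F * diff
    return mutants, new_diffs

pop_a = rng.normal(size=(10, 3))
mut_a, diffs_a = de_mutation(pop_a)                      # task A
pop_b = rng.normal(size=(10, 3))
mut_b, _ = de_mutation(pop_b, shared_diffs=diffs_a)      # task B reuses A's vectors
```

The transfer is only useful when the tasks' landscapes are related, which is why the full algorithm controls when and how vectors are shared rather than mixing them unconditionally.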
Pub Date: 2019-12-01. DOI: 10.1109/SSCI44817.2019.9002805
S. Ullah, Hongya Wang, S. Menzel, B. Sendhoff, Thomas Bäck
This research investigates the potential of meta-modeling techniques in the context of robust optimization, i.e., optimization under uncertainty/noise. A systematic empirical comparison is performed to evaluate and compare different meta-modeling techniques for robust optimization. The experimental setup includes three noise levels, six meta-modeling algorithms, and six benchmark problems from the continuous optimization domain, each at three different dimensionalities. Two robustness definitions, robust regularization and robust composition, are used in the experiments. The meta-modeling techniques are evaluated and compared with respect to modeling accuracy and the optimal function values. The results clearly show that Kriging, support vector machines, and polynomial regression perform excellently: they achieve high accuracy, and in most cases the optimal point on the model landscape is close to the true optimum of the test functions.
"An Empirical Comparison of Meta-Modeling Techniques for Robust Design Optimization", 2019 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 819-828.
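The meta-modeling workflow, fit a cheap surrogate to noisy evaluations and then optimize on the model landscape, can be sketched with a polynomial surrogate (a deliberately simple stand-in for Kriging or SVM regression; the test function and noise level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: (x - 2.0) ** 2 + 1.0        # true function, optimum at x = 2
X = np.linspace(-5.0, 5.0, 30)
y = f(X) + rng.normal(0.0, 0.1, X.shape)  # noisy evaluations of the expensive function

coeffs = np.polyfit(X, y, 2)              # quadratic surrogate (polynomial regression)
model_opt = -coeffs[1] / (2 * coeffs[0])  # optimum of the model landscape (vertex)
```

The comparison in the paper asks precisely how close `model_opt` lands to the true optimum under different surrogates, noise levels, and robustness definitions; here, with mild noise and a well-specified model, it lands very close to 2.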
Pub Date: 2019-12-01. DOI: 10.1109/SSCI44817.2019.9002930
Catherine McHugh, S. Coleman, D. Kerr, Daniel McGlynn
Electricity prices display nonlinear behaviour, making them difficult to forecast. In addition, because various external factors influence electricity prices, predicting the day-ahead price is subject to fluctuations in those factors. Time-series models learn to follow past market trends and use historical information as training input to predict future output. This paper focuses on understanding and interpreting statistical approaches for electricity price forecasting and explains these techniques through a time-series application with real energy data. The model considered here is a Seasonal AutoRegressive Integrated Moving Average model with eXogenous variables (SARIMAX), as electricity prices follow a seasonal pattern controlled by various external factors. By applying differencing rules to remove persistent trends, the data becomes stationary, and 14 external factors are chosen as exogenous parameters to predict day-ahead electricity prices.
"Forecasting Day-ahead Electricity Prices with A SARIMAX Model", 2019 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1523-1529.
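The differencing step that produces stationarity can be illustrated with synthetic hourly prices: a first difference removes the linear trend, and a lag-24 seasonal difference removes the daily cycle. This sketches only the preprocessing, not the full SARIMAX fit, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(24 * 14, dtype=float)  # two weeks of hourly timestamps
# Synthetic prices: base level + slow trend + daily seasonality + noise
prices = 50 + 0.01 * t + 5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5, t.size)

d1 = np.diff(prices)        # first (non-seasonal) difference removes the trend
ds = d1[24:] - d1[:-24]     # seasonal difference at lag 24 removes the daily cycle
```

After both differences the series fluctuates around zero with much smaller variance than the raw prices, which is the stationarity condition the ARIMA machinery requires; the SARIMAX model then adds the 14 exogenous regressors on top of this differenced structure.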
Pub Date: 2019-12-01. DOI: 10.1109/SSCI44817.2019.9003006
Zhechen Wang, Yongquan Xie, Y. Murphey
Item recommendation is ubiquitous in daily life, for example in job advertising, e-commerce promotion, movie and music recommendation, and restaurant suggestion. However, particular challenges emerge when music recommendation is applied to information and computation resource constrained (ICRC) platforms such as in-vehicle infotainment systems: a huge number of users and items, invisible user profiles, and limited in-vehicle computational resources. We investigate methods for music recommendation on ICRC platforms in this paper. Two systems are proposed and studied, both based on the collaborative filtering algorithm and designed to recommend for a specific target user so as to avoid consuming too many computational resources. The first system retains raw user-item ratings with the goal of predicting the user's ratings of other songs, while the second focuses on predicting a user's "like" behaviour toward songs. The configurations of the two systems are investigated. To evaluate their performance, we conducted experiments on the Yahoo! Music User Ratings of Songs with Artist, Album, and Genre Meta Information dataset. The two proposed music recommendation systems are shown to differ in recommendation quality, e.g., mean absolute error, recall, negative recall, and precision, and can therefore be applied flexibly according to practical demands.
"User-Specific Music recommendation Applied to Information and Computation Resource Constrained System", 2019 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1179-1184.
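A minimal user-based collaborative filtering sketch shows the rating-prediction idea: predict a missing rating from the k most similar users who rated the item. The toy matrix and cosine similarity below are illustrative; the paper's two systems are more elaborate and tuned for the target user only.

```python
import numpy as np

# Tiny user-item rating matrix (0 = unrated); rows are users, columns are songs
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 4, 4],
], dtype=float)

def predict(R, user, item, k=2):
    """Predict R[user, item] from the k most similar users who rated the item."""
    rated = np.where(R[:, item] > 0)[0]                  # users who rated this item
    norms = np.linalg.norm(R, axis=1) * np.linalg.norm(R[user]) + 1e-12
    sims = (R @ R[user]) / norms                         # cosine similarity to target
    sims[user] = -np.inf                                 # exclude the target user
    top = rated[np.argsort(sims[rated])[::-1][:k]]       # k nearest raters
    w = sims[top]
    return float(w @ R[top, item] / (np.abs(w).sum() + 1e-12))

pred = predict(R, user=1, item=1)
```

Because only the target user's similarity row is ever computed, this per-user formulation keeps memory and compute bounded, the property that matters on an ICRC platform.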
Pub Date: 2019-12-01. DOI: 10.1109/SSCI44817.2019.9002724
Huiwei Wang, Yong Zhao, Qingya Wang, Bo Zhou
Data sparsity, cold-start, and suboptimal recommendation for local users or items are recognized as the three most crucial challenges for the latent factor model (LFM) in recommender systems. This paper proposes an approach, named UILFM, that integrates user-item attributes into the classical LFM to address these challenges. First, for data sparsity and cold-start, we develop an online learning algorithm that updates the weights of user or item attributes to identify the importance of different attributes. By aggregating users and items with similar attributes, we obtain local neighbor groups, which let the recommender estimate some missing ratings from adjacent users' ratings of items and from adjacent items' ratings. By introducing convex mixing parameters, we combine these estimated ratings with the classical LFM to predict the missing entries of the high-dimensional and sparse (HiDS) matrix, further approaching the true ratings and reducing matrix sparsity. Second, for the suboptimal recommendation problem, we propose a new matrix filling method (for missing ratings) based on positive and negative samples: once the sparsity of the HiDS matrix falls below a threshold, the classical LFM dominates the filling procedure; otherwise, prediction based on neighbors' ratings dominates. This method elegantly addresses the suboptimal recommendation problem in which some users' ratings are extremely sparse and the number of ratings per user is unbalanced.
"Latent Factor Models Fusing User & Item Attributes", 2019 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 3201-3206. Tested on the MovieLens dataset, the proposed algorithm improves recommendation accuracy over the classical LFM, dimensionality-reduction approaches, and collaborative filtering (CF) algorithms.
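The classical LFM core, factorizing the sparse rating matrix into user and item factor matrices by stochastic gradient descent, can be sketched as follows (plain LFM only, without the paper's attribute fusion or matrix-filling strategy; the toy matrix and hyperparameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 4, 4],
              [0, 1, 5, 4]], dtype=float)
mask = R > 0                       # observed entries of the HiDS matrix

k, lr, reg = 2, 0.01, 0.02         # latent dimension, learning rate, L2 penalty
P = rng.normal(0, 0.1, (R.shape[0], k))   # user latent factors
Q = rng.normal(0, 0.1, (R.shape[1], k))   # item latent factors

for _ in range(2000):              # SGD over the observed ratings only
    for u, i in zip(*np.where(mask)):
        e = R[u, i] - P[u] @ Q[i]
        P[u] += lr * (e * Q[i] - reg * P[u])
        Q[i] += lr * (e * P[u] - reg * Q[i])

rmse = np.sqrt(np.mean((R[mask] - (P @ Q.T)[mask]) ** 2))
```

Every unobserved cell of `P @ Q.T` is a predicted rating; the paper's contribution is to blend such predictions with neighbor-based estimates derived from shared user and item attributes before this factorization runs on an extremely sparse matrix.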