Quantum Embeddings of Classical Data for Quantum Machine Learning
G. Luca, Yinong Chen
2022 6th Asian Conference on Artificial Intelligence Technology (ACAIT)
Pub Date: 2022-12-09. DOI: 10.1109/ACAIT56212.2022.10138000
A major area of research in quantum machine learning is the analysis of the loss landscape, particularly for variational quantum algorithms. Such works often provide bounds and generalizations for various ansatzes and quantum embedding strategies, using approaches such as the Hessian and Fisher information matrices as well as generalized trigonometric polynomials. However, many of these analyses rely on a rotational encoding in practice or cover only a few approaches. The goal of this work is to statistically analyze experimental results from a quantum machine learning model that employs a variety of quantum embedding approaches, including those covered in related work, as well as the effect of the measurement basis on the model.
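The rotational (angle) encoding that the abstract says most analyses rely on can be illustrated without any quantum SDK: each classical feature becomes an RY rotation angle on one qubit, and the chosen measurement basis determines which expectation value the model reads out. A minimal numpy sketch (our own illustration, not the paper's code; function names are hypothetical):

```python
import numpy as np

def angle_encode(x):
    """Encode each feature x_i into one qubit via an RY rotation:
    RY(x_i)|0> = [cos(x_i/2), sin(x_i/2)].  The full embedding is the
    tensor product of the single-qubit states."""
    state = np.array([1.0])
    for xi in x:
        qubit = np.array([np.cos(xi / 2.0), np.sin(xi / 2.0)])
        state = np.kron(state, qubit)
    return state

def measure_z_expectation(state, qubit, n_qubits):
    """Expectation of Pauli-Z on one qubit of an n-qubit state vector
    (a Z-basis measurement; other bases would rotate the state first)."""
    probs = np.abs(state) ** 2
    signs = np.array([1 if (i >> (n_qubits - 1 - qubit)) & 1 == 0 else -1
                      for i in range(2 ** n_qubits)])
    return float(np.sum(signs * probs))

features = np.array([0.3, 1.2])
psi = angle_encode(features)            # 4-dim unit-norm state vector
z0 = measure_z_expectation(psi, 0, 2)   # equals cos(0.3) for this encoding
```

For a single RY-encoded qubit, ⟨Z⟩ = cos²(x/2) − sin²(x/2) = cos(x), which is why loss landscapes under this encoding are naturally analyzed with trigonometric polynomials.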
Deep Learning Model Research for Cortical Bone Separation in Chest CT Spine Imaging
Haitao Yu, Juntao Zeng, Xiaofeng Xie
Pub Date: 2022-12-09. DOI: 10.1109/ACAIT56212.2022.10137862
Osteoporosis is a global skeletal disease that seriously affects human life. Early diagnosis using bone mineral density (BMD) examination can help reduce the harm caused by osteoporosis. With the development of computer-aided diagnosis, BMD can be calculated from CT by a deep learning model, without special measuring devices. In this paper, we use a 3D U-Net model to segment the cortical and cancellous bone of the spine and perform quantitative analysis. The cortical and cancellous bone are then reconstructed as a three-dimensional visualization, and the BMD value and other information are calculated to help doctors predict the risk of osteoporosis. The experimental results show that the proposed method achieves high performance in segmentation and quantification.
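Once the segmentation mask is available, the BMD computation the abstract describes reduces to averaging calibrated CT intensities over the predicted region. A hedged numpy sketch with a placeholder linear HU-to-BMD calibration (real slope/intercept values come from a calibration phantom; the paper's exact procedure is not given):

```python
import numpy as np

def mean_bmd(ct_hu, mask, slope=0.8, intercept=0.0):
    """Mean BMD (mg/cm^3) over a segmented region, using a linear
    HU -> BMD calibration.  slope/intercept here are placeholder
    values, not the paper's calibration."""
    hu = ct_hu[mask.astype(bool)]
    return float(np.mean(slope * hu + intercept))

# Toy volume: a cancellous region at ~150 HU inside a larger block.
vol = np.full((4, 4, 4), 150.0)
mask = np.zeros_like(vol, dtype=bool)
mask[1:3, 1:3, 1:3] = True
bmd = mean_bmd(vol, mask)  # about 120 mg/cm^3 with the placeholder calibration
```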
Multi-Scale Dense Feature Fusion Based Loess Landslide Recognition
Kaiyue Sun, Qiaoming Li, Wenlong Wang, P. Zhang, Zhantu Li, Xingnan Zhao, Zeqi Li
Pub Date: 2022-12-09. DOI: 10.1109/ACAIT56212.2022.10138001
Loess landslide geological disasters are widely distributed in Northwest China, yet they have received little attention and research. Landslide recognition can provide information for landslide disaster and risk management. In previous deep-learning work on landslide recognition from remote sensing images, the lack of high-resolution multi-source datasets meant that recognized landslide boundaries were incomplete and indistinct, and identification accuracy was not ideal. In this work, a multi-scale dense feature fusion loess landslide recognition network (MDFF) is proposed, and an open dataset of loess landslide samples (MSLLD), containing both spectral and topographic information, is constructed from GF-2 images and a DEM. The MDFF network retains features at different levels through a dense connection mechanism to compensate for the loss of detailed features, and densely connected dilated convolution layers are introduced to capture landslide features at different scales, expand the receptive field, and avoid convolution degradation. When different networks are tested on MSLLD, the proposed network achieves state-of-the-art performance, with an mIoU of 82.31% and an F1-score of 84.59%, indicating that it can effectively recognize landslides, which is of great value for the investigation and analysis of loess landslide disasters.
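The claim that stacked dilated convolutions "expand the receptive field" can be made concrete with the standard receptive-field recurrence RF_l = RF_{l-1} + (k_l − 1) · d_l · j_{l-1}, where j is the cumulative stride. A small illustrative helper (not the paper's code):

```python
def receptive_field(kernel_sizes, dilations, strides=None):
    """Receptive field of a stack of (dilated) conv layers:
    RF_l = RF_{l-1} + (k_l - 1) * d_l * (product of strides before layer l)."""
    if strides is None:
        strides = [1] * len(kernel_sizes)
    rf, jump = 1, 1
    for k, d, s in zip(kernel_sizes, dilations, strides):
        rf += (k - 1) * d * jump
        jump *= s
    return rf

# Three 3x3 layers with dilations 1, 2, 4 (stride 1): RF grows to 15,
# versus 7 for the same three layers without dilation.
print(receptive_field([3, 3, 3], [1, 2, 4]))  # 15
```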
IPGD: A Dataset for Robotic Inside-Propped Grasp Detection
Xuefeng Liu, Guangjian Zhang
Pub Date: 2022-12-09. DOI: 10.1109/ACAIT56212.2022.10137845
Grasping is a basic skill required of robots in many practical applications. Recent research on robotic grasp detection generally focuses on grasping poses similar to human grasping; however, such poses are not suitable for all grasping scenarios in practice. Therefore, this paper uses a new inside-propped grasping pose to label a large number of images with inside-propped grasping potential, yielding an inside-propped grasp dataset. Based on this dataset, the paper constructs a generative deep neural network for inside-propped grasp prediction. The experimental results show that the success rate of the inside-propped grasp prediction network is 65.59%, with an average prediction time of 82 ms, achieving good accuracy and real-time performance.
An Attribute Contribution-Based K-Nearest Neighbor Classifier
Qianqian Qiu, Min Li, Sijie Shen, Shaobo Deng, Sujie Guan
Pub Date: 2022-12-09. DOI: 10.1109/ACAIT56212.2022.10137909
The k-nearest neighbor (KNN) algorithm is one of the most representative classification methods in data mining. However, when the traditional Euclidean distance formula is used to compute nearest-neighbor distances, the relationships between attributes in the feature space are ignored. To tackle this issue, a covariance matrix is used to calculate the attribute contribution of the samples, and an attribute contribution-based k-nearest neighbor classifier (ACWKNN) is proposed. The proposed algorithm is evaluated in comparative experiments on UCI standard datasets, and the results show that it outperforms other KNN algorithms.
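The paper's exact attribute-contribution weighting is not reproduced here, but a Mahalanobis-distance KNN captures the same underlying idea: the covariance matrix folds the relationships between attributes into the distance metric. An illustrative sketch, not the ACWKNN algorithm itself:

```python
import numpy as np
from collections import Counter

def mahalanobis_knn_predict(X_train, y_train, x, k=3):
    """k-NN vote using squared Mahalanobis distance, so correlated
    attributes are not double-counted the way plain Euclidean
    distance would count them."""
    cov = np.cov(X_train, rowvar=False)
    inv = np.linalg.pinv(cov)            # pseudo-inverse for near-singular cov
    d = X_train - x
    dists = np.einsum('ij,jk,ik->i', d, inv, d)   # row-wise d^T Cov^-1 d
    nearest = np.argsort(dists)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.1], [0.9, 1.0]])
y = np.array([0, 0, 1, 1])
pred = mahalanobis_knn_predict(X, y, np.array([0.95, 1.05]))  # -> 1
```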
Research on Secure Data Sharing Technology of Blockchain
Yan Hu, Gaodi Xu, Jie Shen, Houqun Yang, Shumeng He
Pub Date: 2022-12-09. DOI: 10.1109/ACAIT56212.2022.10137979
Blockchain technology has attracted much attention since its emergence. Its characteristics of decentralization, trustworthiness, and tamper-resistance make it possible to build a more secure and effective data sharing platform. This paper first discusses the relevant background of data sharing technology and explains how blockchain enables data sharing. It then analyzes existing data sharing schemes and classifies them by their core technology, so that researchers can quickly understand existing blockchain-based data sharing schemes and choose a research direction and technical route according to their own needs; herein lies the value of this study. Finally, the paper analyzes the performance of four data sharing schemes using experimental data from the literature and predicts the future development of sharing technology.
Research on Identification of Financial Abnormal Fluctuations in Pledged Repurchase Transactions Based on Machine Learning
Zhijian Xu
Pub Date: 2022-12-09. DOI: 10.1109/ACAIT56212.2022.10137951
To improve the financial evaluation of pledged repo transactions, a machine learning method for identifying abnormal financial fluctuations in such transactions is proposed. Using market risk identification, a pledge risk index system evaluation model is constructed for the financial evaluation of pledged repo transactions, and a machine learning algorithm controls the balance of the capital flow channels of the pledged repo financial system. Abnormal fluctuation characteristics of the system are extracted with machine learning, and a fuzzy classification learning model of the system's data structure is built. Spatial resampling is used to reconstruct the abnormal financial volatility of pledged repo transactions and to mine association rules, and machine learning algorithms cluster and match the abnormal feature spectrum of the system's structural data. The model evaluates fluctuations with a synergy parameter, and an adaptive learning algorithm identifies the abnormal financial fluctuations of pledged repo transactions. Simulation results show that the method exhibits good clustering characteristics when identifying abnormal financial fluctuations in pledged repo transactions, effectively reduces capital losses in the pledged repo financial system, and improves risk management capability.
Error Correction Method of Business English Translation Based on Convolutional Neural Network
Dengyi Xiao
Pub Date: 2022-12-09. DOI: 10.1109/ACAIT56212.2022.10137929
To correct business English translation errors, this paper proposes an error correction method based on a convolutional neural network and English pronunciation feature recognition. A blind convolutional network spectrum parameter detection method detects the pronunciation spectrum features of the translation, and scalar time series are established for the pronunciation audio parameter sequence and the translated text's semantic feature sequence. Combined with noise intensity detection and signal scale decomposition of the pronunciation audio time series, detailed signal energy parameters are extracted, and a convolutional neural network classifies the features. Interference components of the single-audio feature sequence of the translation pronunciation are removed by high-frequency wavelet threshold detection, and modulation and demodulation of the sequence are realized using a translation dictionary set and semantic context matching. A spectral analysis and error correction model of the pronunciation audio time series is established, and its output stability is checked by threshold detection at each scale. The accuracy of the translation is then detected and identified from the difference between the output signal and the standard pronunciation signal. Simulation results show that the method corrects business English translation errors with high accuracy, performs detection well, and improves the output accuracy of English translators.
Financial Trend Prediction Based on Deep Belief Network
Li Zhou, Jin Shen, Ting Zhang
Pub Date: 2022-12-09. DOI: 10.1109/ACAIT56212.2022.10137970
To further strengthen control over financial market trends, a financial trend prediction model based on a deep belief network (DBN) is proposed to improve prediction quality. Prediction and classification of financial market trends are realized by introducing Elliott wave theory, and the prediction model adopts a deep belief network. Experimental results show that, with Elliott wave theory, the proposed DBN-based model can predict financial trends accurately, with a prediction precision of 67.5% and a mean square error of 0.413. Compared with BP and MLP networks, the deep belief network performs better on four evaluation indicators, namely ER, MAE, RMSE, and MSE, and is more suitable for financial trend prediction. These results verify the feasibility and superiority of the proposed model, which has practical application value.
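The four indicators used to compare the models can be computed directly. A small sketch (the abstract does not define ER, so it is taken here as the error rate on the predicted trend direction, which is an assumption on our part):

```python
import numpy as np

def trend_metrics(y_true, y_pred):
    """MAE, MSE, RMSE on the predicted values; 'ER' is assumed to be
    the fraction of samples whose predicted trend direction (sign)
    disagrees with the true direction."""
    err = y_true - y_pred
    mae = float(np.mean(np.abs(err)))
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))
    er = float(np.mean(np.sign(y_true) != np.sign(y_pred)))
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "ER": er}

# Toy returns: one of four direction calls is wrong -> ER = 0.25.
y_true = np.array([0.5, -0.2, 0.1, -0.4])
y_pred = np.array([0.4, -0.1, -0.2, -0.3])
m = trend_metrics(y_true, y_pred)
```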
Collaborative Filtering Recommendation Algorithm Based on K-Means and GCN
B. He, Xiao Wang, Lili Zhu
Pub Date: 2022-12-09. DOI: 10.1109/ACAIT56212.2022.10137868
In the internet age, all kinds of content flood people's online lives, causing information redundancy, so extracting useful information becomes an important task. Among recommendation algorithms, the most common is collaborative filtering, which suffers from data sparsity when constructing its matrix because of the sparse relationships between users and items, reducing recommendation effectiveness. To address the data sparsity problem, this paper proposes a collaborative filtering recommendation algorithm based on K-Means and GCN (KGCF). It uses the ability of K-Means to aggregate data and the ability of GCN to extract features in non-Euclidean space to obtain the hidden relationships between users and items and to populate the user-item similarity matrix, alleviating the sparsity problem and improving the recommendation performance of traditional collaborative filtering. Comparison experiments on the MovieLens dataset, with MAE as the evaluation metric, show that the proposed algorithm outperforms similar algorithms in addressing the sparsity of collaborative filtering data.
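One simplified way to read the KGCF idea of "populating the similarity matrix" is to fill missing ratings from cluster-level statistics after K-Means groups the users. The sketch below is our own simplification (no GCN, and not the paper's model): missing entries are filled with the user's cluster mean for that item, falling back to the item's global mean:

```python
import numpy as np

def fill_with_cluster_means(R, user_clusters):
    """Densify a user-item rating matrix R (missing = NaN): each NaN
    becomes the mean rating that the user's cluster gave that item,
    or the item's global mean if the cluster never rated it."""
    R = R.copy()
    item_mean = np.nanmean(R, axis=0)
    for c in np.unique(user_clusters):
        rows = user_clusters == c
        cluster_mean = np.nanmean(R[rows], axis=0)
        cluster_mean = np.where(np.isnan(cluster_mean), item_mean, cluster_mean)
        block = R[rows]
        holes = np.isnan(block)
        block[holes] = np.broadcast_to(cluster_mean, block.shape)[holes]
        R[rows] = block
    return R

R = np.array([[5.0, np.nan],
              [4.0, 1.0],
              [np.nan, 5.0]])
clusters = np.array([0, 0, 1])
filled = fill_with_cluster_means(R, clusters)
# R[0,1] is filled from cluster 0's item-1 mean (1.0);
# R[2,0] falls back to item 0's global mean (4.5).
```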