Title: Transfer Learning Vs. Fine-Tuning in Bilinear CNN for Lung Nodules Classification on CT Scans
Authors: Rekka Mastouri, Nawrès Khlifa, H. Neji, S. Hantous-Zannad
DOI: 10.1145/3430199.3430211
Venue: Proceedings of the 2020 3rd International Conference on Artificial Intelligence and Pattern Recognition, 2020-06-26

Abstract: Lung cancer is one of the leading causes of death worldwide. Its early detection in its nodular form is highly effective in improving patient survival rates. Deep learning (DL), and especially the Convolutional Neural Network (CNN), has developed considerably over the past decade and has been widely explored in medical image analysis. In this paper, a DL model composed of two CNN streams, named the Bilinear CNN (B-CNN), is proposed for lung nodule classification on CT scans. In the developed B-CNN model, the pre-trained VGG16 architecture serves as the feature extractor, the component on which the model's effectiveness most strongly depends. Aiming to improve this performance, we address the question: which process better improves the performance of the feature extractors, transfer learning or fine-tuning? To answer it, two B-CNN models were implemented using VGG16 networks, the first based on transfer learning and the second on fine-tuning. A set of experiments shows that the fine-tuned B-CNN model outperforms the transfer learning-based one. Moreover, the proposed B-CNN model demonstrates its efficiency and viability for lung nodule classification in terms of accuracy and AUC compared with existing works.
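Bilinear pooling, the operation that fuses the two streams of a B-CNN, is an outer product of the two feature maps summed over spatial locations, followed by signed square-root and L2 normalization. The sketch below uses random arrays as stand-ins for VGG16 activations; shapes and names are illustrative, not the authors' code.

```python
import numpy as np

def bilinear_pool(fa, fb):
    """Bilinear pooling of two CNN feature streams.

    fa, fb: (H, W, C) activation maps from the two streams.
    Returns a fixed-length descriptor of size C*C.
    """
    # Outer product at each spatial location, summed over all locations
    phi = np.einsum('hwi,hwj->ij', fa, fb).reshape(-1)
    phi = np.sign(phi) * np.sqrt(np.abs(phi))      # signed square-root
    return phi / (np.linalg.norm(phi) + 1e-12)     # L2 normalization

rng = np.random.default_rng(0)
fa = rng.standard_normal((7, 7, 8))   # stand-in for a 7x7x512 VGG16 map
fb = rng.standard_normal((7, 7, 8))
desc = bilinear_pool(fa, fb)
print(desc.shape)  # (64,)
```

Whether the two streams are frozen (transfer learning) or further trained (fine-tuning) only changes how the activation maps are produced; the pooling step is identical in both variants.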
Title: Multiple Samples Clustering with Second-moment Information in Stock Clustering
Authors: Xiang Wang
DOI: 10.1145/3430199.3430223

Abstract: Clustering algorithms that view each data object as a single sample drawn from some distribution have been a hot topic for decades. Many clustering algorithms, such as k-means and spectral clustering, are based on the assumption that each object is a vector generated by a Gaussian distribution. In practice, however, each input object is usually a set of vectors drawn from a hidden distribution, a situation traditional clustering algorithms cannot handle. This calls for a multiple-samples clustering algorithm. In this paper, we propose two algorithms for multiple-samples clustering, Wasserstein distance based spectral clustering and Bhattacharyya distance based spectral clustering, and compare them with traditional spectral clustering. Simulation results show that second-moment information can greatly improve clustering accuracy and stability. The algorithms are applied to a stock dataset to separate stocks into groups based on their historical prices. Investors can use the clustering information to invest in stocks within the same cluster to maximize earnings, or in stocks from different clusters to diversify risk.
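For one-dimensional Gaussian summaries of each sample set, both distances have closed forms, which is what makes them practical as affinities for spectral clustering. A minimal sketch (synthetic data, Gaussian-kernel affinity as an assumption; the paper's exact pipeline may differ):

```python
import numpy as np

def w2_gauss(m1, s1, m2, s2):
    # 2-Wasserstein distance between two 1-D Gaussians (closed form)
    return np.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2)

def bhattacharyya_gauss(m1, s1, m2, s2):
    # Bhattacharyya distance between two 1-D Gaussians
    v1, v2 = s1 ** 2, s2 ** 2
    return 0.25 * (m1 - m2) ** 2 / (v1 + v2) + 0.5 * np.log((v1 + v2) / (2 * s1 * s2))

# Each object is a *set* of samples; summarize it by its first two moments.
rng = np.random.default_rng(1)
objects = [rng.normal(0, 1, 200), rng.normal(0, 3, 200), rng.normal(5, 1, 200)]
stats = [(x.mean(), x.std()) for x in objects]

n = len(stats)
D = np.array([[w2_gauss(*stats[i], *stats[j]) for j in range(n)] for i in range(n)])
A = np.exp(-D ** 2 / 2.0)   # affinity matrix to feed into spectral clustering
print(np.round(D, 2))
```

Note that objects 0 and 1 share the same mean and differ only in variance: a mean-only distance would conflate them, while both closed-form distances separate them, which is the "second-moment information" the abstract refers to.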
Title: A High Capacity Reversible Watermarking Algorithm Based on Block-Level Prediction Error Histogram Shifting
Authors: Xin Tang, Dan Liu, Linna Zhou, Yi Zhang
DOI: 10.1145/3430199.3430234

Abstract: This paper proposes a high-capacity reversible watermarking algorithm based on block-level prediction error histogram shifting. In contrast to known prediction error histogram shifting algorithms, it provides a higher embedding capacity by exploiting similarity among adjacent blocks rather than merely among pixels within a block. Imperceptibility is ensured by the premise that, once the selected block is small enough, the center pixels of adjacent blocks are similar or equal with high probability. Finally, effectiveness and performance are evaluated experimentally; the results show that the proposed algorithm achieves higher capacity under the same imperceptibility than both the classic prediction error histogram shifting based reversible watermarking algorithm and the state of the art.
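The paper's algorithm works at block level using center pixels; the sketch below shows only the underlying prediction-error histogram-shifting mechanism, on a 1-D pixel sequence with a left-neighbor predictor and the peak bin fixed at 0 for brevity. Decoding in reverse scan order is what makes the scheme exactly reversible.

```python
def embed(px, bits, peak=0):
    """Embed bits by prediction-error histogram shifting.

    Errors equal to `peak` carry one payload bit; errors above `peak`
    are shifted up by 1 to make room. Distortion is at most 1 per pixel.
    """
    out = list(px)
    k = 0
    for i in range(1, len(out)):
        e = out[i] - out[i - 1]          # error vs. already-processed neighbor
        if e == peak and k < len(bits):
            out[i] += bits[k]            # peak bin: embed one bit
            k += 1
        elif e > peak:
            out[i] += 1                  # shift the rest of the histogram
    return out, k

def extract(wm, n_bits, peak=0):
    """Recover the payload and restore the original pixels (reverse scan order)."""
    out = list(wm)
    bits = []
    for i in range(len(out) - 1, 0, -1):
        e = out[i] - out[i - 1]
        if e == peak:
            bits.append(0)
        elif e == peak + 1:
            bits.append(1)
            out[i] -= 1
        elif e > peak + 1:
            out[i] -= 1                  # undo the shift
    bits.reverse()
    return out, bits[:n_bits]

px = [100, 100, 101, 101, 103, 100]
wm, n = embed(px, [1, 0, 1])
rec, payload = extract(wm, n)
print(rec == px, payload)
```

In practice the peak is chosen as the most frequent prediction error, since the capacity equals the height of that histogram bin.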
Title: Improved multiple watermarking algorithm for Medical Images
Authors: Muhammad Usman Shoukat, U. Bhatti, Yang Yiqiang, Anum Mehmood, S. Nawaz, R. Ahmad
DOI: 10.1145/3430199.3430237

Abstract: At present, most watermarking algorithms use linear correlation to detect watermarks. However, when the original media signal does not follow a Gaussian distribution, or the watermark is not embedded in the media object to be protected, this method is problematic. The imperceptibility constraint of digital watermarking makes watermark detection a weak-signal detection problem. Exploiting this, we first establish a statistical distribution model based on the generalized Gaussian distribution, using the statistical characteristics of DCT (discrete cosine transform) and DWT (discrete wavelet transform) coefficients. The watermark detection problem is then transformed into a binary hypothesis test. Taking the theory of weak-signal detection in non-Gaussian noise as the theoretical detection model for multiplicative watermarking, an optimized detection algorithm for multiplicatively embedded watermarks is derived. Experimental results show that the proposed detector performs well in the blind detection of watermarks with unknown embedding strength, and can therefore be applied to the copyright protection of digital media data.
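The binary hypothesis test can be illustrated with a log-likelihood ratio detector for multiplicative embedding under a generalized Gaussian (GGD) host model. This is my own minimal construction, not the paper's exact derivation: the host is Laplacian (the GGD special case with shape 1), and the parameters are arbitrary.

```python
import numpy as np

def ggd_llr(y, w, gamma, alpha, beta):
    """Log-likelihood ratio for a multiplicative watermark in GGD host data.

    H0: y = x,   H1: y = x * (1 + gamma * w),   x ~ GGD(alpha, beta).
    Constant terms of the GGD log-density cancel in the ratio.
    """
    s = 1.0 + gamma * w
    h1 = -(np.abs(y / s) / alpha) ** beta - np.log(s)   # includes the Jacobian term
    h0 = -(np.abs(y) / alpha) ** beta
    return float(np.sum(h1 - h0))

rng = np.random.default_rng(2)
alpha, beta, gamma = 1.0, 1.0, 0.2        # beta = 1: Laplacian host, a GGD member
x = rng.laplace(scale=alpha, size=20000)  # stand-in for DCT/DWT coefficients
w = rng.choice([-1.0, 1.0], size=x.size)  # bipolar watermark

llr_marked = ggd_llr(x * (1 + gamma * w), w, gamma, alpha, beta)
llr_clean = ggd_llr(x, w, gamma, alpha, beta)
print(llr_marked, llr_clean)
```

Declaring "watermark present" when the statistic exceeds a threshold is exactly the binary hypothesis test the abstract describes; watermarked data scores well above clean data here.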
Title: A No-reference Image Quality Assessment Method for Real Foggy Images
Authors: Dianwei Wang, Jing Zhai, Pengfei Han, Jing-Dai Jiang, Xincheng Ren, Yongrui Qin, Zhijie Xu
DOI: 10.1145/3430199.3430231

Abstract: Image quality assessment results for foggy images are of great significance for the objective measurement of image quality and for the design and optimization of dehazing algorithms. To address the scarcity of no-reference evaluation algorithms for foggy images in real scenes, this paper proposes a no-reference quality assessment method for such images. First, we establish a real-scene foggy image database and evaluate it subjectively to obtain mean opinion scores (MOS). Then, we propose a feature selection method combining correlation coefficients with a union strategy, which picks out features positively correlated with foggy image quality and simplifies the feature set without affecting the model's prediction accuracy. Finally, we use support vector regression to learn the mapping between the features and the subjective scores of the foggy images, from which the image quality assessment results are obtained. Experimental results on the database show that the proposed algorithm outperforms comparable algorithms, and its objective quality scores agree well with subjective human perception. The experiments also show that the model performs well in predicting the quality of images after defogging.
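The correlation-based half of the feature selection step can be sketched directly: compute each feature's Pearson correlation with the MOS vector and keep only positively correlated features. The synthetic features and the 0.5 threshold below are my assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_img = 100
mos = rng.uniform(1, 5, n_img)               # subjective mean opinion scores

# Synthetic features: the first three track MOS, the last three are noise
X = np.column_stack([
    2.0 * mos + rng.normal(0, 0.5, n_img),
    mos ** 2 + rng.normal(0, 1.0, n_img),
    0.5 * mos + rng.normal(0, 0.2, n_img),
    rng.normal(size=n_img),
    rng.normal(size=n_img),
    rng.normal(size=n_img),
])

def select_positive(X, y, thresh=0.5):
    # Pearson correlation of each feature column with the MOS vector
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.where(r > thresh)[0]

kept = select_positive(X, mos)
print(kept)  # indices of features positively correlated with quality
```

The retained columns would then be the inputs to the support vector regressor that maps features to predicted MOS.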
Title: A Network Combining Local Features and Attention Mechanisms for Vehicle Re-Identification
Authors: Linghui Li, Xiaohui Zhang, Yan Xu
DOI: 10.1145/3430199.3430206

Abstract: Vehicles of the same manufacturer and color can only be distinguished by subtle differences. If small features such as stickers on windows and spray paint on the body can be exploited, the accuracy of vehicle re-identification can be significantly improved. This paper develops an effective network combining local features and attention mechanisms for vehicle re-identification. It divides the feature map so that the network captures more detailed feature information, and uses an attention mechanism so that each branch focuses on its most important part, effectively suppressing background and other interference and improving network performance. Experiments show that this method improves Rank-1 and mAP results on two public datasets, VeRi-776 and VRIC.
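The combination of feature-map division and per-branch attention can be sketched as horizontal stripes with softmax spatial attention inside each stripe. The saliency score and shapes below are stand-ins, not the paper's architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def part_attention(fmap, n_parts=3):
    """Split an (H, W, C) map into horizontal stripes; attention-pool each one."""
    feats = []
    for part in np.array_split(fmap, n_parts, axis=0):
        score = part.sum(axis=2)                      # a simple spatial saliency score
        a = softmax(score.ravel()).reshape(score.shape)
        feats.append((part * a[..., None]).sum(axis=(0, 1)))  # weighted pooling
    return np.concatenate(feats)                      # one descriptor per branch

rng = np.random.default_rng(5)
fmap = rng.standard_normal((12, 4, 8))   # stand-in for a backbone feature map
desc = part_attention(fmap)
print(desc.shape)  # (24,)
```

Because each stripe is pooled with its own attention weights, background regions with low saliency contribute little to the final concatenated descriptor.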
Title: Real Time Detection and Identification of UAV Abnormal Trajectory
Authors: Ziyuan Wang, Geng Zhang, Bing-liang Hu, Xiangpeng Feng
DOI: 10.1145/3430199.3430212

Abstract: Abnormal behavior detection based on video sequences is an active research field, and monitoring and tracking UAVs (Unmanned Aerial Vehicles) and identifying their abnormal behavior are of great significance for UAV defense. This paper focuses on the detection and recognition of abnormal UAV trajectories from real-time video sequences. By tracking and analyzing the characteristics of the UAV, the task is divided into two stages. First, abnormal trajectories satisfying the change conditions are extracted through quantitative analysis of the UAV's directional-angle change features. Second, a normalized Fourier spectrum feature of the polar path of the abnormal trajectory is established and combined with the window search length to accelerate the classification and identification of UAV trajectory types. Comparative experiments show that the proposed method achieves good real-time performance and accuracy for recognizing trajectories under scale and translation changes.
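The scale and translation invariance claimed for the normalized polar-path spectrum can be demonstrated in a few lines: centering at the centroid removes translation, and dividing the FFT magnitudes by the DC term removes scale. This is a generic construction of that idea, not the paper's exact feature.

```python
import numpy as np

def trajectory_descriptor(points):
    """Scale- and translation-invariant spectrum of a 2-D trajectory."""
    pts = points - points.mean(axis=0)        # translation invariance
    r = np.hypot(pts[:, 0], pts[:, 1])        # polar radii about the centroid
    mag = np.abs(np.fft.fft(r))
    return mag[1:] / mag[0]                   # scale invariance

t = np.linspace(0, 4 * np.pi, 64)
traj = np.column_stack([t * np.cos(t), t * np.sin(t)])   # a spiral path
same = 2.5 * traj + np.array([10.0, -3.0])               # scaled + translated copy

d1 = trajectory_descriptor(traj)
d2 = trajectory_descriptor(same)
print(np.allclose(d1, d2))
```

A window of recent track points would be re-described as it slides, so the descriptor stays cheap enough for real-time classification.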
Title: Design of Chinese Character Recognition Based on AlexNet Convolution Neural Network
Authors: Songhua Xie, Hailiang Yang, Hui Nie
DOI: 10.1145/3430199.3430230

Abstract: Based on the general digital scanning of paper documents, a Chinese character recognition model is designed using a convolutional neural network and image processing technology. The model is developed with Python and the TensorFlow framework, and printed Chinese character recognition is performed by an improved AlexNet convolutional neural network. The recognition system includes data preprocessing, text area location, single character segmentation, character recognition, and result output. Experimental results show that, while maintaining high recognition accuracy, the network model is compact and recognition is fast, and the recognition rate can basically meet the needs of practical use.
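Of the pipeline stages listed, single-character segmentation for printed text is commonly done by vertical projection: columns with no ink mark the gaps between characters. A toy sketch of that step (the paper does not specify its segmentation method, so this is an assumption):

```python
import numpy as np

def segment_columns(binary_img):
    """Split a binarized text-line image into character spans by vertical projection."""
    proj = binary_img.sum(axis=0)        # ink count per column
    cols = proj > 0
    segs, start = [], None
    for j, on in enumerate(cols):        # collect runs of consecutive inked columns
        if on and start is None:
            start = j
        elif not on and start is not None:
            segs.append((start, j))
            start = None
    if start is not None:
        segs.append((start, len(cols)))
    return segs

# Toy line: two "characters" of ink separated by a blank gap
img = np.zeros((8, 12), dtype=int)
img[:, 1:4] = 1
img[:, 7:10] = 1
print(segment_columns(img))  # [(1, 4), (7, 10)]
```

Each recovered span would then be cropped, resized to the network's input size, and passed to the AlexNet-style classifier.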
Title: 3D Modeling of Riverbeds Based on NURBS Algorithm
Authors: Kaiyuan Yang, Cai Zhong, Xiaotian Zhang, Xiaohui Zhu, Yong Yue
DOI: 10.1145/3430199.3430239

Abstract: Modelling and visualization of riverbeds can provide the topographic features and sedimentation distribution of river systems, which is essential for water environment management. We developed a novel approach for building 3-dimensional (3D) models and visualizations of riverbeds based on the Non-Uniform Rational B-Spline (NURBS) algorithm. An Unmanned Surface Vehicle (USV) was used to collect water depths and GPS positions of a river system for modelling. A data reduction method is proposed to accelerate the modelling process while preserving model accuracy. To obtain a more realistic 3D model of the riverbed, we apply an algorithm that optimizes the weight factors of the control points. We implemented the algorithm in MATLAB, and experimental results show that it can visualize the topographic features and sedimentation distribution of riverbeds in 3D models.
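The role of the weight factors being optimized is visible in the NURBS evaluation formula itself: each point is a rational, weight-scaled combination of control points. A minimal 2-D curve evaluator (the paper works in MATLAB on surfaces; this Python sketch shows the same machinery on a quadratic curve with an illustrative knot vector):

```python
import numpy as np

def basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * basis(i, p - 1, u, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * basis(i + 1, p - 1, u, knots))
    return left + right

def nurbs_point(u, ctrl, weights, knots, p=2):
    """Evaluate one point on a NURBS curve as a weighted rational combination."""
    N = np.array([basis(i, p, u, knots) for i in range(len(ctrl))])
    wN = weights * N
    return (wN @ ctrl) / wN.sum()

# A quadratic NURBS curve with 4 control points and a clamped knot vector
ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
weights = np.array([1.0, 2.0, 2.0, 1.0])   # larger weights pull the curve closer
knots = [0, 0, 0, 0.5, 1, 1, 1]
pt = nurbs_point(0.5, ctrl, weights, knots)
print(pt)  # [2. 2.]
```

Raising an individual weight pulls the curve toward that control point, which is exactly the degree of freedom the weight-optimization step exploits to fit the measured depth points more closely.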
Title: Expected Regret Minimization for Bayesian Optimization with Student's-t Processes
Authors: Conor Clare, G. Hawe, S. McClean
DOI: 10.1145/3430199.3430218

Abstract: Student's-t Processes were recently proposed as a probabilistic alternative to Gaussian Processes for Bayesian optimization. They generalize Gaussian Processes with an extra parameter ν, which addresses some of the Gaussian Process's weaknesses. Separately, recent work used prior knowledge of a black-box function's global optimum f* to create a new acquisition function for Bayesian optimization called Expected Regret Minimization. Gaussian Processes were combined with Expected Regret Minimization to outperform existing models for Bayesian optimization, but no published work exists on Expected Regret Minimization with Student's-t Processes. This research compares Expected Regret Minimization for Bayesian optimization using Student's-t Processes versus Gaussian Processes. Both models are applied to four problems popular in mathematical optimization. Our work enhances Bayesian optimization by showing superior training regret minimization for Expected Regret Minimization with Student's-t Processes versus Gaussian Processes.
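The role of the extra parameter ν can be seen by sampling: a Student's-t process marginal is a multivariate t, i.e. a Gaussian draw divided by the square root of a scaled chi-squared variable, giving heavier tails than a GP with the same kernel. Kernel, length-scale, and ν below are arbitrary choices for illustration.

```python
import numpy as np

def rbf_kernel(x, ls=0.3):
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def sample_tp(K, nu, n_samples, rng):
    """Draw samples from a Student's-t process marginal (a multivariate t)."""
    L = np.linalg.cholesky(K + 1e-8 * np.eye(len(K)))
    z = rng.standard_normal((n_samples, len(K))) @ L.T   # GP draws with covariance K
    g = rng.chisquare(nu, size=(n_samples, 1)) / nu      # shared scaling per draw
    return z / np.sqrt(g)                                # heavier-tailed TP draws

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 5)
K = rbf_kernel(x)
tp = sample_tp(K, nu=5.0, n_samples=20000, rng=rng)
# For nu = 5 the marginal variance is inflated by nu / (nu - 2) = 5/3
ratio = tp.var(axis=0) / np.diag(K)
print(ratio)
```

As ν grows the inflation factor ν/(ν-2) tends to 1 and the TP recovers the GP, which is why the TP is described as a generalization.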