Abstract With the development of artificial intelligence, people have begun to pay attention to the protection of sensitive information and data. Therefore, a homomorphic encryption framework based on efficient integer vectors is proposed and applied to deep learning to protect user privacy in a binary convolutional neural network model. The results show that the model can achieve high accuracy: the training accuracy is 93.75% on the MNIST dataset and 89.24% on the original dataset. Because the data are kept confidential, the training accuracy on the training set is only 86.77%. After the training period is increased, the accuracy begins to converge at about 300 cycles and finally reaches about 86.39%. In addition, after taking the absolute value of the elements in the encryption matrix, the training accuracy of the model is 88.79% and the test accuracy is 85.12%. The improved model is also compared with the traditional model: it reduces storage consumption during model computation and effectively improves computation speed while having little impact on accuracy. Specifically, the improved model is 58 times faster than the traditional CNN model, and its storage consumption is 1/32 that of the traditional CNN model. Therefore, homomorphic encryption can be applied to information encryption in the context of big data, and privacy protection for neural networks can be realized.
{"title":"Anti-leakage method of network sensitive information data based on homomorphic encryption","authors":"Junlong Shi, Xiaofeng Zhao","doi":"10.1515/jisys-2022-0281","DOIUrl":"https://doi.org/10.1515/jisys-2022-0281","url":null,"abstract":"Abstract With the development of artificial intelligence, people begin to pay attention to the protection of sensitive information and data. Therefore, a homomorphic encryption framework based on effective integer vector is proposed and applied to deep learning to protect the privacy of users in binary convolutional neural network model. The conclusion shows that the model can achieve high accuracy. The training is 93.75% in MNIST dataset and 89.24% in original dataset. Because of the confidentiality of data, the training accuracy of the training set is only 86.77%. After increasing the training period, the accuracy began to converge to about 300 cycles, and finally reached about 86.39%. In addition, after taking the absolute value of the elements in the encryption matrix, the training accuracy of the model is 88.79%, and the test accuracy is 85.12%. The improved model is also compared with the traditional model. This model can reduce the storage consumption in the model calculation process, effectively improve the calculation speed, and have little impact on the accuracy. Specifically, the speed of the improved model is 58 times that of the traditional CNN model, and the storage consumption is 1/32 of that of the traditional CNN model. Therefore, homomorphic encryption can be applied to information encryption under the background of big data, and the privacy of the neural network can be realized.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"29 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86712507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Identifying the writer of a handwritten document has remained an interesting pattern classification problem for document examiners, forensic experts, and paleographers. While mature identification systems have been developed for handwriting in contemporary documents, the problem remains challenging for historical manuscripts. The design and development of expert systems that can identify the writer of a questioned manuscript or retrieve samples belonging to a given writer can greatly help paleographers in their practice. In this context, the current study exploits the textural information in handwriting to characterize the writers of historical documents. More specifically, we employ oriented basic image features (oBIFs) and hinge features and introduce a novel moment-based matching method to compare the feature vectors extracted from writing samples. Classification is based on minimization of a similarity criterion using the proposed moment distance. A comprehensive series of experiments on the International Conference on Document Analysis and Recognition 2017 historical writer identification dataset reported promising results and validated the ideas put forward in this study.
{"title":"A new method for writer identification based on historical documents","authors":"A. Gattal, Chawki Djeddi, Faycel Abbas, I. Siddiqi, Bouderah Brahim","doi":"10.1515/jisys-2022-0244","DOIUrl":"https://doi.org/10.1515/jisys-2022-0244","url":null,"abstract":"Abstract Identifying the writer of a handwritten document has remained an interesting pattern classification problem for document examiners, forensic experts, and paleographers. While mature identification systems have been developed for handwriting in contemporary documents, the problem remains challenging from the viewpoint of historical manuscripts. Design and development of expert systems that can identify the writer of a questioned manuscript or retrieve samples belonging to a given writer can greatly help the paleographers in their practices. In this context, the current study exploits the textural information in handwriting to characterize writer from historical documents. More specifically, we employ oBIF(oriented Basic Image Features) and hinge features and introduce a novel moment-based matching method to compare the feature vectors extracted from writing samples. Classification is based on minimization of a similarity criterion using the proposed moment distance. A comprehensive series of experiments using the International Conference on Document Analysis and Recognition 2017 historical writer identification dataset reported promising results and validated the ideas put forward in this study.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"87 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81093418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Aiming at the problem that users’ check-in interest preferences in social networks have complex time dependences, which leads to inaccurate point-of-interest (POI) recommendations, a location-based POI recommendation model using deep learning for social network big data is proposed. First, the original data are fed into an embedding layer of the model for dense vector representation to obtain the user’s check-in sequence (UCS) and spatiotemporal interval information. Then, the UCS and spatiotemporal interval information are fed into a bidirectional long short-term memory (BiLSTM) model for detailed analysis, where the UCS and location sequence representations are updated using a self-attention mechanism. Finally, candidate POIs are compared with the user’s preferences, and a POI sequence with three consecutive recommended locations is generated. The experimental analysis shows that the model performs best when the Huber loss function is used and the number of training iterations is set to 200. On the Foursquare dataset, Recall@20 and NDCG@20 reach 0.418 and 0.143, and on the Gowalla dataset, the corresponding values are 0.387 and 0.148.
{"title":"A BiLSTM-attention-based point-of-interest recommendation algorithm","authors":"Aichuan Li, Fuzhi Liu","doi":"10.1515/jisys-2023-0033","DOIUrl":"https://doi.org/10.1515/jisys-2023-0033","url":null,"abstract":"Abstract Aiming at the problem that users’ check-in interest preferences in social networks have complex time dependences, which leads to inaccurate point-of-interest (POI) recommendations, a location-based POI recommendation model using deep learning for social network big data is proposed. First, the original data are fed into an embedding layer of the model for dense vector representation and to obtain the user’s check-in sequence (UCS) and space-time interval information. Then, the UCS and spatiotemporal interval information are sent into a bidirectional long-term memory model for detailed analysis, where the UCS and location sequence representation are updated using a self-attention mechanism. Finally, candidate POIs are compared with the user’s preferences, and a POI sequence with three consecutive recommended locations is generated. The experimental analysis shows that the model performs best when the Huber loss function is used and the number of training iterations is set to 200. In the Foursquare dataset, Recall@20 and NDCG@20 reach 0.418 and 0.143, and in the Gowalla dataset, the corresponding values are 0.387 and 0.148.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135104735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Waste classification is the task of sorting rubbish into valuable categories for efficient waste management. Problems arise from issues such as individual ignorance or inactivity and from more overt issues like environmental pollution, lack of resources, or a malfunctioning system. Education, established behaviors, improved infrastructure, technology, and legislative incentives to promote effective trash sorting and management are all necessary for a solution to be implemented. For solid waste management and recycling efforts to be successful, waste materials must be sorted appropriately. This study evaluates the effectiveness of several deep learning (DL) models for the task of waste material classification, with a focus on finding the best DL technique for solid waste classification. The study extensively compares several DL architectures (ResNet50, GoogleNet, InceptionV3, and Xception). Images of various types of trash were collected and cleaned to form a dataset. Accuracy, precision, recall, and F1 score are among the measures used to assess the performance of the DL models trained and tested on this dataset. ResNet50 showed impressive performance in waste material classification, with 95% accuracy, 95.4% precision, 95% recall, and a 94.8% F1 score, with only two misclassifications in the glass class. InceptionV3 classified all classes correctly, with remarkable accuracy, precision, recall, and an F1 score of 100%. Xception’s classification accuracy was also excellent (100%), with a few difficulties in the glass and trash categories. GoogleNet performed admirably, with 90.78% precision, 100% recall, and an 89.81% F1 score. This study highlights the significance of using DL-based models for categorizing trash. The results pave the way for enhanced trash sorting and recycling operations, contributing to an economically and ecologically friendly future.
{"title":"Waste material classification using performance evaluation of deep learning models","authors":"Israa Badr Al-Mashhadani","doi":"10.1515/jisys-2023-0064","DOIUrl":"https://doi.org/10.1515/jisys-2023-0064","url":null,"abstract":"Abstract Waste classification is the issue of sorting rubbish into valuable categories for efficient waste management. Problems arise from issues such as individual ignorance or inactivity and more overt issues like pollution in the environment, lack of resources, or a malfunctioning system. Education, established behaviors, an improved infrastructure, technology, and legislative incentives to promote effective trash sorting and management are all necessary for a solution to be implemented. For solid waste management and recycling efforts to be successful, waste materials must be sorted appropriately. This study evaluates the effectiveness of several deep learning (DL) models for the challenge of waste material classification. The focus will be on finding the best DL technique for solid waste classification. This study extensively compares several DL architectures (Resnet50, GoogleNet, InceptionV3, and Xception). Images of various types of trash are amassed and cleaned up to form a dataset. Accuracy, precision, recall, and F 1 score are only a few measures used to assess the performance of the many DL models trained and tested on this dataset. ResNet50 showed impressive performance in waste material classification, with 95% accuracy, 95.4% precision, 95% recall, and 94.8% in the F 1 score, with only two incorrect categories in the glass class. All classes are correctly classified with an F 1 score of 100% due to Inception V3’s remarkable accuracy, precision, recall, and F 1 score. Xception’s classification accuracy was excellent (100%), with a few difficulties in the glass and trash categories. With a good 90.78% precision, 100% recall, and 89.81% F 1 score, GoogleNet performed admirably. This study highlights the significance of using models based on DL for categorizing trash. The results open the way for enhanced trash sorting and recycling operations, contributing to an economically and ecologically friendly future.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135561223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Collaborative filtering recommender systems (CFRSs) play a vital role in today’s e-commerce industry. CFRSs collect ratings from users and predict recommendations for the targeted product. Conventionally, a CFRS uses the user-product ratings to make recommendations. Often these user-product ratings are biased: artificially high ratings are called push ratings (PRs) and artificially low ratings are called nuke ratings (NRs). PRs and NRs are injected by factitious users with the intention of either inflating or degrading the recommendations of a product. Hence, it is necessary to detect PRs and NRs and discard them. In this work, an opinion mining approach is applied to the textual reviews that users give for a product in order to detect PRs and NRs. The work also examines the effect of PRs and NRs on the performance of the CFRS by evaluating measures such as precision, recall, F-measure, and accuracy.
{"title":"Detecting biased user-product ratings for online products using opinion mining","authors":"A. Chopra, V. S. Dixit","doi":"10.1515/jisys-2022-9030","DOIUrl":"https://doi.org/10.1515/jisys-2022-9030","url":null,"abstract":"Abstract Collaborative filtering recommender system (CFRS) plays a vital role in today’s e-commerce industry. CFRSs collect ratings from the users and predict recommendations for the targeted product. Conventionally, CFRS uses the user-product ratings to make recommendations. Often these user-product ratings are biased. The higher ratings are called push ratings (PRs) and the lower ratings are called nuke ratings (NRs). PRs and NRs are injected by factitious users with an intention either to aggravate or degrade the recommendations of a product. Hence, it is necessary to investigate PRs or NRs and discard them. In this work, opinion mining approach is applied on textual reviews that are given by users for a product to detect the PRs and NRs. The work also examines the effect of PRs and NRs on the performance of CFRS by evaluating various measures such as precision, recall, F-measure and accuracy.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"102 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79426395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract In order to better improve the teaching quality of university teachers, an effective method should be adopted for evaluation and analysis. This work studied machine learning algorithms and selected the support vector machine (SVM) algorithm to evaluate teaching quality. First, the principles for selecting evaluation indexes were briefly introduced, and 16 evaluation indexes were selected from different aspects. Then, the SVM algorithm was used for evaluation. A genetic algorithm (GA)-SVM algorithm was designed and experimentally analyzed. It was found that the training time and testing time of the GA-SVM algorithm were 23.21 and 7.25 ms, respectively, both shorter than those of the SVM algorithm. In the evaluation of teaching quality, the evaluation value of the GA-SVM algorithm was closer to the actual value, indicating a more accurate evaluation result. The average accuracy of the GA-SVM algorithm was 11.64 percentage points higher than that of the SVM algorithm (98.36 vs 86.72%). The experimental results verify that the GA-SVM algorithm, with its advantages in efficiency and accuracy, can be effectively applied to evaluating and analyzing teaching quality in universities.
{"title":"Evaluation and analysis of teaching quality of university teachers using machine learning algorithms","authors":"Ying Zhong","doi":"10.1515/jisys-2022-0204","DOIUrl":"https://doi.org/10.1515/jisys-2022-0204","url":null,"abstract":"Abstract In order to better improve the teaching quality of university teachers, an effective method should be adopted for evaluation and analysis. This work studied the machine learning algorithms and selected the support vector machine (SVM) algorithm to evaluate teaching quality. First, the principles of selecting evaluation indexes were briefly introduced, and 16 evaluation indexes were selected from different aspects. Then, the SVM algorithm was used for evaluation. A genetic algorithm (GA)-SVM algorithm was designed and experimentally analyzed. It was found that the training time and testing time of the GA-SVM algorithm were 23.21 and 7.25 ms, both of which were shorter than the SVM algorithm. In the evaluation of teaching quality, the evaluation value of the GA-SVM algorithm was closer to the actual value, indicating that the evaluation result was more accurate. The average accuracy of the GA-SVM algorithm was 11.64% higher than that of the SVM algorithm (98.36 vs 86.72%). The experimental results verify that the GA-SVM algorithm can have a good application in evaluating and analyzing teaching quality in universities with its advantages in efficiency and accuracy.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"1 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75998133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Training, sports equipment, and facilities are the main aspects of sports advancement. Countries are investing heavily in the training of athletes, especially in table tennis. Athletes require basic equipment for exercises, but most athletes cannot afford the high cost; hence, the need for a low-cost automated system has increased. To enhance the quality of athletes’ training, the proposed research draws on the enormous developments in artificial intelligence to develop an automated training system that can maintain the training duration and intensity whenever necessary. In this research, an intelligent controller has been designed to simulate table tennis training patterns. The intelligent controller controls the system that launches the table tennis balls, setting their intensity, speed, and duration. The system detects a hand sign previously assigned to a given speed using an image detection method and adjusts the launch speed accordingly using pulse width modulation (PWM) techniques. Simply showing a hand sign to the system triggers the artificial intelligence camera to identify it and send the tennis ball at the assigned speed. The artificial intelligence of the proposed device showed promising results, detecting hand signs with minimal errors in training sessions and intensity. The image detection accuracy collected from the intelligent controller during training was 90.05%. Furthermore, the proposed system has a minimal material cost and can be easily installed and used.
{"title":"Development of an intelligent controller for sports training system based on FPGA","authors":"Yaser M. Abid, N. Kaittan, M. Mahdi, B. I. Bakri, A. Omran, M. Altaee, Sura Khalil Abid","doi":"10.1515/jisys-2022-0260","DOIUrl":"https://doi.org/10.1515/jisys-2022-0260","url":null,"abstract":"Abstract Training, sports equipment, and facilities are the main aspects of sports advancement. Countries are investing heavily in the training of athletes, especially in table tennis. Athletes require basic equipment for exercises, but most athletes cannot afford the high cost; hence, the necessity for developing a low-cost automated system has increased. To enhance the quality of the athletes’ training, the proposed research focuses on using the enormous developments in artificial intelligence by developing an automated training system that can maintain the training duration and intensity whenever necessary. In this research, an intelligent controller has been designed to simulate training patterns of table tennis. The intelligent controller will control the system that sends the table tennis balls’ intensity, speed, and duration. The system will detect the hand sign that has been previously assigned to different speeds using an image detection method and will work accordingly by accelerating the speed using pulse width modulation techniques. Simply showing the athletes’ hand sign to the system will trigger the artificial intelligent camera to identify it, sending the tennis ball at the assigned speed. The artificial intelligence of the proposed device showed promising results in detecting hand signs with minimum errors in training sessions and intensity. The image detection accuracy collected from the intelligent controller during training was 90.05%. Furthermore, the proposed system has a minimal material cost and can be easily installed and used.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"34 10 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82780818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract With the rapid expansion in plant disease detection, there has been a progressive increase in the demand for more accurate systems. In this work, we propose a new method combining color information, edge information, and textural information to identify diseases in 14 different plants. A novel three-branch architecture is proposed, containing a color information branch, an edge information branch, and a textural information branch that extracts textural information with the help of a central difference convolution network (CDCN). ResNet-18 was chosen as the base architecture of the deep neural network (DNN). Unlike in traditional DNNs, the fusion weights adjust automatically during the training phase and provide the best ratio among the branches. Experiments were performed to determine the contribution of individual and combined features to the classification process. Experimental results on the PlantVillage database with 38 classes show that the proposed method achieves higher accuracy, i.e., 99.23%, than existing feature fusion methods for plant disease identification.
{"title":"Automatic adaptive weighted fusion of features-based approach for plant disease identification","authors":"Kirti, N. Rajpal, V. P. Vishwakarma","doi":"10.1515/jisys-2022-0247","DOIUrl":"https://doi.org/10.1515/jisys-2022-0247","url":null,"abstract":"Abstract With the rapid expansion in plant disease detection, there has been a progressive increase in the demand for more accurate systems. In this work, we propose a new method combining color information, edge information, and textural information to identify diseases in 14 different plants. A novel 3-branch architecture is proposed containing the color information branch, an edge information branch, and a textural information branch extracting the textural information with the help of the central difference convolution network (CDCN). ResNet-18 was chosen as the base architecture of the deep neural network (DNN). Unlike the traditional DNNs, the weights adjust automatically during the training phase and provide the best of all the ratios. The experiments were performed to determine individual and combinational features’ contribution to the classification process. Experimental results of the PlantVillage database with 38 classes show that the proposed method has higher accuracy, i.e., 99.23%, than the existing feature fusion methods for plant disease identification.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"23 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83930703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract The supply and storage of drugs are critical components of the medical industry and distribution. The shelf life of most medications is predetermined. When medicines are supplied in quantities exceeding actual need, long-term drug storage results. If the supplied quantity is lower than necessary, consumer satisfaction and medicine marketing are affected. Therefore, it is necessary to find a way to predict the actual quantity required for an organization’s needs in order to avoid material spoilage and storage problems. A mathematical prediction model is required to assist management in achieving the required availability of medicines for customers and safe storage of medicines. Artificial intelligence applications and predictive modeling use machine learning (ML) and deep learning algorithms to build prediction models. Such a model allows inventory levels to be optimized, thus reducing costs and potentially increasing sales. Various measures, such as mean squared error, mean absolute error, root mean squared error, and others, are used to evaluate a prediction model. This study aims to review ML and deep learning approaches to forecasting in order to obtain the highest accuracy when forecasting future demand for pharmaceuticals. Many of the reviewed studies could not use complex models for prediction because of a lack of data. Even when a long history of demand data is accessible, these problems still exist, because old data may not be very useful once the market climate changes.
{"title":"Predicting medicine demand using deep learning techniques: A review","authors":"Bashaer Abdurahman Mousa, Belal Al-Khateeb","doi":"10.1515/jisys-2022-0297","DOIUrl":"https://doi.org/10.1515/jisys-2022-0297","url":null,"abstract":"Abstract The supply and storage of drugs are critical components of the medical industry and distribution. The shelf life of most medications is predetermined. When medicines are supplied in large quantities it is exceeding actual need, and long-term drug storage results. If demand is lower than necessary, this has an impact on consumer happiness and medicine marketing. Therefore, it is necessary to find a way to predict the actual quantity required for the organization’s needs to avoid material spoilage and storage problems. A mathematical prediction model is required to assist any management in achieving the required availability of medicines for customers and safe storage of medicines. Artificial intelligence applications and predictive modeling have used machine learning (ML) and deep learning algorithms to build prediction models. This model allows for the optimization of inventory levels, thus reducing costs and potentially increasing sales. Various measures, such as mean squared error, mean absolute squared error, root mean squared error, and others, are used to evaluate the prediction model. This study aims to review ML and deep learning approaches of forecasting to obtain the highest accuracy in the process of forecasting future demand for pharmaceuticals. Because of the lack of data, they could not use complex models for prediction. Even when there is a long history of accessible demand data, these problems still exist because the old data may not be very useful when it changes the market climate.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"48 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90279097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Due to the diversity and complexity of data information, traditional data fusion methods cannot effectively fuse multidimensional data, which limits the effective application of the data. To achieve accurate and efficient fusion of multidimensional data, this experiment used a back propagation (BP) neural network and the fireworks algorithm (FWA) to establish an FWA–BP multidimensional data processing model, and a case study of PM2.5 concentration prediction was carried out using the model. In the PM2.5 concentration prediction results, the trend of the FWA–BP prediction curve was basically consistent with the real curve, and the prediction deviation was less than 10. The average mean absolute error and root mean square error of the FWA–BP network model on different samples were 3.7 and 4.3%, respectively. The correlation coefficient R of the FWA–BP network model was 0.963, which is higher than that of the other network models. The results showed that the FWA–BP network model could optimize continuously when predicting PM2.5 concentration and thus avoid falling into a local optimum prematurely. At the same time, the prediction accuracy improves as the correlation coefficient between real and predicted values increases, which means that this method can be applied well in the computer technology of multisensor data fusion.
{"title":"Computer technology of multisensor data fusion based on FWA–BP network","authors":"Xiaowei Hai","doi":"10.1515/jisys-2022-0278","DOIUrl":"https://doi.org/10.1515/jisys-2022-0278","url":null,"abstract":"Abstract Due to the diversity and complexity of data information, traditional data fusion methods cannot effectively fuse multidimensional data, which affects the effective application of data. To achieve accurate and efficient fusion of multidimensional data, this experiment used back propagation (BP) neural network and fireworks algorithm (FWA) to establish the FWA–BP multidimensional data processing model, and a case study of PM2.5 concentration prediction was carried out by using the model. In the PM2.5 concentration prediction results, the trend between the FWA–BP prediction curve and the real curve was basically consistent, and the prediction deviation was less than 10. The average mean absolute error and root mean square error of FWA–BP network model in different samples were 3.7 and 4.3%, respectively. The correlation coefficient R value of FWA–BP network model was 0.963, which is higher than other network models. The results showed that FWA–BP network model could continuously optimize when predicting PM2.5 concentration, so as to avoid falling into local optimum prematurely. At the same time, the prediction accuracy is better with the improvement in the correlation coefficient between real and predicted value, which means, in computer technology of multisensor data fusion, this method can be applied better.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"13 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85088144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}