Abstract In the era of big data, information has exploded and every industry feels its impact. Big data makes intelligent financial analysis of enterprises possible. At present, most enterprises rely mainly on human effort for financial analysis and for the decisions based on its results, with little automation and evident problems in efficiency and error rates. To help senior management conduct scientific and effective management, this study uses big data web-crawler technology and ETL technology to process data and, together with an Internet-plus platform, builds an intelligent financial decision support system integrating big data. J Group in S Province is taken as an example to study the effect before and after applying the system. The results show that crawler technology can monitor an enterprise's basic data and industry-wide big data in real time and improve the accuracy of decision-making. Through the system, core indicators such as profit, return on net assets, and accounts receivable can be displayed clearly, and the causes of financial changes hidden behind the financial data can be queried. The system shows that J Group's asset-liability ratio, current-assets growth rate, operating-income growth rate, and financial expenses are 55.27%, 10.38%, 20.28%, and 1,974 million RMB, respectively. J Group's real sales income growth rate is 0.63%, which is 31.27 percentage points below the industry's excellent value of 31.90%. After adopting the system, the number of monthly financial statements processed increases significantly while monthly report analysis time decreases: the Group received at most 332 financial statements in a month, processed in only 2 h. These results show that the intelligent financial decision support system integrating big data can effectively raise the financial management level of enterprises, improve the usefulness of financial decision-making, and make a practical contribution to the field of corporate financial decision-making.
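The indicator arithmetic quoted above can be made explicit. A minimal sketch using the figures from the abstract; the helper names and the liability/asset inputs are illustrative, not from the paper:

```python
# Sketch: the core financial indicators reported for J Group.
# Reported figures come from the abstract; inputs below are illustrative.

def asset_liability_ratio(total_liabilities, total_assets):
    """Asset-liability ratio as a percentage."""
    return 100.0 * total_liabilities / total_assets

def growth_rate(current, previous):
    """Period-over-period growth rate as a percentage."""
    return 100.0 * (current - previous) / previous

# The abstract reports real sales income growth of 0.63% against an
# industry "excellent value" of 31.90%; the gap is in percentage points.
gap = 31.90 - 0.63
print(round(gap, 2))  # 31.27
```

Note the gap is a difference of percentage points (31.90 − 0.63), not a 31.27% relative shortfall.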
Title: Intelligent financial decision support system based on big data
Authors: Danna Tong, Guixian Tian
Journal of Intelligent Systems, 2023. DOI: 10.1515/jisys-2022-0320
Abstract Paternity testing using a deoxyribonucleic acid (DNA) profile is an essential branch of forensic science, and DNA short tandem repeats (STRs) are usually used for this purpose. In third-world countries, the conventional kinship-analysis techniques used in forensic investigations yield inadequate accuracy, especially with large human STR datasets: human profiles are compared manually, so the number of samples is limited by the human effort and time required. By using the automation that AI makes possible, forensic investigations can be conducted more efficiently, saving both time and cost. In this article, we propose a new algorithm for predicting paternity from 15-loci STR-DNA datasets using a deep neural network (DNN), in which comparisons among many human profiles are performed without the usual limitation on the number of samples. For the purpose of paternity testing, familial data are created artificially from the real data of individual Iraqi people from Al-Najaf province; this helps overcome the shortage of Iraqi data caused by restrictive policies and the secrecy of familial datasets. About 53,530 records are used to train and test the proposed DNN model. The Python-based Keras library is used to implement and test the proposed system, and the confusion matrix and receiver operating characteristic curve are used for evaluation. The system shows an excellent accuracy of 99.6% in paternity tests, the highest accuracy compared to existing works, demonstrating a promising application of artificial intelligence to paternity testing.
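For context, the biological rule that underlies STR paternity screening (and that any classifier over 15-loci profiles must effectively learn) can be sketched directly. This is an illustrative sketch, not the authors' DNN; function names, and the two example profiles, are hypothetical:

```python
# Illustrative rule behind STR paternity screening (NOT the paper's DNN):
# a child inherits one allele per locus from each parent, so an alleged
# father should share at least one allele with the child at every one of
# the 15 loci (barring rare mutation events).

def shares_allele(child_locus, father_locus):
    """True if the two genotypes have at least one allele in common."""
    return bool(set(child_locus) & set(father_locus))

def consistent_paternity(child_profile, father_profile):
    """Profiles: dict mapping locus name -> (allele1, allele2)."""
    return all(shares_allele(child_profile[locus], father_profile[locus])
               for locus in child_profile)

# Two-locus toy profiles (real tests use 15 loci such as D8S1179, TH01).
child  = {"D8S1179": (12, 14), "TH01": (6, 9)}
father = {"D8S1179": (14, 15), "TH01": (9, 9)}
print(consistent_paternity(child, father))  # True
```

A DNN trained on such profiles, as in the paper, can additionally weigh population allele frequencies and tolerate mutations, which this hard rule cannot.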
Title: A deep neural network model for paternity testing based on 15-loci STR for Iraqi families
Authors: Donya A. Khalid, Nasser Nafea
Journal of Intelligent Systems, 2023. DOI: 10.1515/jisys-2023-0041
Abstract Nonlinear systems are difficult to control satisfactorily with a traditional proportional-integral-derivative (PID) or linear controller. First, this study presents an improved lazy-learning algorithm based on k-vector nearest neighbors, which considers not only the matching of input and output data but also the consistency of the model. Based on an optimization index with an additional penalty function, the optimal solution of the lazy learner is obtained by the iterative least-squares method. Second, an adaptive PID control algorithm based on the improved lazy learning is proposed. Finally, simulation experiments compare the control effect under complete and incomplete data.
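A minimal sketch of the two ingredients named above, under stated assumptions: lazy learning here is reduced to plain k-nearest-neighbour prediction over stored input/output pairs (the paper's k-vector scheme with penalty term is richer), and the PID form is the textbook one whose gains the learned model would adapt online:

```python
# Lazy learning in its simplest form: defer modelling until a query
# arrives, then predict from the k closest stored (input, output) pairs.

def knn_predict(database, query, k=3):
    """database: list of (x, y) pairs; average outputs of k nearest x."""
    nearest = sorted(database, key=lambda p: abs(p[0] - query))[:k]
    return sum(y for _, y in nearest) / len(nearest)

class PID:
    """Textbook PID law; in the paper's scheme the gains would be
    retuned online from the lazy-learned local model."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, error, dt=1.0):
        self.integral += error * dt
        deriv = (error - self.prev_err) / dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

data = [(0.0, 0.0), (1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
u = PID(1.0, 0.1, 0.01).update(error=1.0)
print(knn_predict(data, 1.5, k=2), u)
```

The local k-NN model supplies plant behaviour near the current operating point, which is what makes gain adaptation possible without a global nonlinear model.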
Title: Design model-free adaptive PID controller based on lazy learning algorithm
Authors: Hongcheng Zhou
Journal of Intelligent Systems, 2023. DOI: 10.1515/jisys-2022-0279
Abstract Deep learning (DL) has revolutionized advanced digital image processing, enabling significant advances in computer vision (CV). However, older CV techniques, developed before the emergence of DL, still hold value and relevance. Particularly for more complex, three-dimensional (3D) data such as video and 3D models, CV and multimedia retrieval remain at the forefront of technological advancement. We provide critical insights into the progress made in developing higher-dimensional feature representations through DL, and also discuss the advantages and strategies DL employs. With the widespread use of 3D sensor data and 3D modeling, analyzing and representing the world in three dimensions has become commonplace. This progress has been aided by new sensors, driven by advances in areas such as 3D gaming and self-driving vehicles, which have enabled researchers to create feature description models that surpass traditional two-dimensional approaches. This study surveys the current state of advanced digital image processing, highlighting the role of DL in pushing the boundaries of CV and multimedia retrieval for complex 3D data.
Title: Development and research of deep neural network fusion computer vision technology
Authors: Jiangtao Wang
Journal of Intelligent Systems, 2023. DOI: 10.1515/jisys-2022-0264
Abstract In recent years, smart contract technology has garnered significant attention due to its ability to address trust issues that traditional technologies have long struggled with. However, like any evolving technology, smart contracts are not immune to vulnerabilities, and some remain underexplored, often eluding detection by existing vulnerability assessment tools. In this article, we perform a systematic literature review of the scientific research published between 2016 and 2021. The main objective of this work is to identify which vulnerabilities and smart contract technologies have not been well studied. We also list the datasets used by previous researchers, which can help in building more efficient machine-learning models in the future. Furthermore, smart contract analysis tools are compared across various features. Finally, future directions in the field of smart contracts are discussed to help researchers set the course for future work in this domain.
Title: A systematic literature review of undiscovered vulnerabilities and tools in smart contract technology
Authors: Oualid Zaazaa, Hanan El Bakkali
Journal of Intelligent Systems, 2023. DOI: 10.1515/jisys-2023-0038
Abstract With the development of artificial intelligence, attention has turned to protecting sensitive information and data. A homomorphic encryption framework based on efficient integer vectors is therefore proposed and applied to deep learning to protect user privacy in a binary convolutional neural network model. The results show that the model achieves high accuracy: training accuracy is 93.75% on the MNIST dataset and 89.24% on the original dataset. Because of the confidentiality of the data, training accuracy on the encrypted training set is only 86.77%; with a longer training period, accuracy converges after about 300 epochs, finally reaching about 86.39%. In addition, after taking the absolute value of the elements in the encryption matrix, training accuracy is 88.79% and test accuracy is 85.12%. The improved model is also compared with the traditional model: it reduces storage consumption during model computation and effectively improves computation speed, with little impact on accuracy. Specifically, the improved model is 58 times faster than the traditional CNN model and uses 1/32 of its storage. Homomorphic encryption can therefore be applied to information encryption in the big data setting, preserving the privacy of the neural network.
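The property the framework relies on, computing on ciphertexts so that the result decrypts to the result on plaintexts, can be illustrated with a toy additive-masking scheme. This is loudly NOT the paper's integer-vector scheme (which must also support the multiplications inside a CNN); it only demonstrates the additive homomorphism, and all names and the modulus are illustrative:

```python
# Toy additive homomorphism (NOT the paper's scheme): encrypt each
# vector entry by adding a secret mask modulo M. Adding two ciphertexts
# element-wise then decrypts, under the sum of the keys, to the
# element-wise sum of the plaintexts.

M = 2**16  # modulus; plaintext sums must stay below M

def encrypt(vec, key):
    return [(v + k) % M for v, k in zip(vec, key)]

def decrypt(cipher, key):
    return [(c - k) % M for c, k in zip(cipher, key)]

def add_cipher(c1, c2):
    """Homomorphic addition: operates on ciphertexts only."""
    return [(a + b) % M for a, b in zip(c1, c2)]

key1, key2 = [17, 4242, 99], [300, 7, 1234]
c = add_cipher(encrypt([1, 2, 3], key1), encrypt([10, 20, 30], key2))
combined_key = [(a + b) % M for a, b in zip(key1, key2)]
print(decrypt(c, combined_key))  # [11, 22, 33]
```

A scheme usable for neural network inference additionally needs homomorphic multiplication (for dot products) and noise management, which is where the efficiency work of the paper lies.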
Title: Anti-leakage method of network sensitive information data based on homomorphic encryption
Authors: Junlong Shi, Xiaofeng Zhao
Journal of Intelligent Systems, 2023. DOI: 10.1515/jisys-2022-0281
Abstract Protecting big data from attacks on large organizations is essential because of how vital such data are to organizations and individuals. Such data can be put at risk when attackers gain unauthorized access to information and use it in illegal ways. One of the most common such attacks is the structured query language injection attack (SQLIA), a vulnerability attack that allows attackers to gain illegal access to a database quickly and easily by manipulating structured query language (SQL) queries, especially in a big data environment. To address these risks, this study builds an approach that acts as a protection layer between the client and database server layers and reduces the time needed to classify the SQL payload sent from the user layer. The proposed method trains a logistic regression model with the Spark ML library, which handles big data. An experiment was conducted using the SQLI dataset. Results show that the proposed approach achieved an accuracy of 99.04%, a precision of 98.87%, a recall of 99.89%, and an F-score of 99.04%. The time taken to identify and prevent an SQLIA is 0.05 s. Our approach protects the data through the middle layer, and using the Spark ML library with ML algorithms gives better accuracy and shortens the time required to classify the type of request sent from the user layer.
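The classification step can be sketched in miniature. The paper's pipeline uses Spark ML's distributed LogisticRegression; the pure-Python stand-in below uses a handful of hypothetical token features and a tiny gradient-descent logistic regression purely to show the payload-to-label flow:

```python
# Minimal stand-in for the SQLI classifier (the real system uses Spark
# ML over big data): hand-crafted token features + logistic regression.
import math

TOKENS = ["'", "--", " or ", "union", "select", "1=1"]  # illustrative

def features(payload):
    p = payload.lower()
    return [1.0 if t in p else 0.0 for t in TOKENS]

def train(samples, labels, lr=0.5, epochs=200):
    w, b = [0.0] * len(TOKENS), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            g = 1.0 / (1.0 + math.exp(-z)) - y   # gradient of log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, payload):
    z = sum(wi * xi for wi, xi in zip(w, features(payload))) + b
    return 1.0 / (1.0 + math.exp(-z)) > 0.5     # True = injection

train_x = [features(s) for s in
           ["name", "id=5", "' or 1=1 --", "1 union select * from users"]]
w, b = train(train_x, [0, 0, 1, 1])
print(predict(w, b, "' or 1=1 --"), predict(w, b, "john smith"))
```

In the middle-layer design, a payload predicted as an injection is rejected before ever reaching the database server.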
Title: Analyzing SQL payloads using logistic regression in a big data environment
Authors: O. Shareef, Rehab Flaih Hasan, Ammar Hatem Farhan
Journal of Intelligent Systems, 2023. DOI: 10.1515/jisys-2023-0063
A. Gattal, Chawki Djeddi, Faycel Abbas, I. Siddiqi, Bouderah Brahim
Abstract Identifying the writer of a handwritten document has long been an interesting pattern-classification problem for document examiners, forensic experts, and paleographers. While mature identification systems have been developed for handwriting in contemporary documents, the problem remains challenging for historical manuscripts. Expert systems that can identify the writer of a questioned manuscript, or retrieve samples belonging to a given writer, can greatly help paleographers in their practice. In this context, the current study exploits the textural information in handwriting to characterize the writers of historical documents. More specifically, we employ oriented basic image features (oBIFs) and hinge features, and introduce a novel moment-based matching method to compare the feature vectors extracted from writing samples. Classification is based on minimizing a similarity criterion using the proposed moment distance. A comprehensive series of experiments on the International Conference on Document Analysis and Recognition (ICDAR) 2017 historical writer identification dataset reported promising results and validated the ideas put forward in this study.
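The matching idea can be sketched generically. The paper's moment distance is its own contribution and is not reproduced here; the hedged sketch below shows the general pattern of summarizing feature vectors by statistical moments and identifying a writer by nearest distance (all names and vectors are illustrative):

```python
# Generic moment-based matching sketch (not the paper's exact distance):
# summarize each feature vector by its first few central moments, then
# pick the reference writer minimizing Euclidean distance between the
# moment summaries.
import math

def moments(vec, order=3):
    n = len(vec)
    mean = sum(vec) / n
    out = [mean]
    for k in range(2, order + 1):           # central moments of order 2..k
        out.append(sum((v - mean) ** k for v in vec) / n)
    return out

def moment_distance(v1, v2, order=3):
    m1, m2 = moments(v1, order), moments(v2, order)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(m1, m2)))

def identify(query, references):
    """references: dict writer_id -> feature vector; nearest wins."""
    return min(references, key=lambda w: moment_distance(query, references[w]))

refs = {"writer_A": [0.1, 0.2, 0.7], "writer_B": [0.9, 0.8, 0.1]}
print(identify([0.15, 0.25, 0.6], refs))  # writer_A
```

Moment summaries make the comparison robust to the varying lengths of oBIF/hinge histograms extracted from degraded manuscript regions.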
Title: A new method for writer identification based on historical documents
Authors: A. Gattal, Chawki Djeddi, Faycel Abbas, I. Siddiqi, Bouderah Brahim
Journal of Intelligent Systems, 2023. DOI: 10.1515/jisys-2022-0244
Abstract Users’ check-in interest preferences in social networks have complex time dependences, which leads to inaccurate point-of-interest (POI) recommendations. To address this, a location-based POI recommendation model using deep learning for social network big data is proposed. First, the original data are fed into an embedding layer for dense vector representation, yielding the user’s check-in sequence (UCS) and spatiotemporal interval information. Then, the UCS and spatiotemporal interval information are passed to a bidirectional long short-term memory (BiLSTM) model for detailed analysis, where the UCS and location-sequence representations are updated using a self-attention mechanism. Finally, candidate POIs are compared with the user’s preferences, and a POI sequence of three consecutive recommended locations is generated. The experimental analysis shows that the model performs best with the Huber loss function and 200 training iterations. On the Foursquare dataset, Recall@20 and NDCG@20 reach 0.418 and 0.143; on the Gowalla dataset, the corresponding values are 0.387 and 0.148.
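The two metrics reported above have standard definitions, sketched here for a single user (scores are then averaged over all test users; the POI names are illustrative):

```python
# Recall@K and NDCG@K for one user's top-K recommendation list.
import math

def recall_at_k(recommended, relevant, k=20):
    """Fraction of this user's relevant items appearing in the top k."""
    hits = sum(1 for item in recommended[:k] if item in relevant)
    return hits / len(relevant)

def ndcg_at_k(recommended, relevant, k=20):
    """DCG of the top-k list normalized by the best achievable DCG."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(recommended[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal

recs = ["poi3", "poi7", "poi1", "poi9"]   # model's ranked output
truth = {"poi7", "poi1"}                  # user's actual check-ins
print(recall_at_k(recs, truth, k=4))      # 1.0
print(ndcg_at_k(recs, truth, k=4))
```

NDCG penalizes the model for ranking the true POIs at positions 2 and 3 rather than 1 and 2, which is why it is lower than recall here.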
Title: A BiLSTM-attention-based point-of-interest recommendation algorithm
Authors: Aichuan Li, Fuzhi Liu
Journal of Intelligent Systems, 2023. DOI: 10.1515/jisys-2023-0033
Abstract Waste classification is the task of sorting rubbish into valuable categories for efficient waste management. Problems arise from issues such as individual ignorance or inactivity, and from more overt issues like environmental pollution, lack of resources, or a malfunctioning system. Education, established behaviors, improved infrastructure, technology, and legislative incentives to promote effective trash sorting and management are all necessary for a solution to be implemented. For solid waste management and recycling efforts to be successful, waste materials must be sorted appropriately. This study evaluates the effectiveness of several deep learning (DL) models for the challenge of waste material classification, with a focus on finding the best DL technique for solid waste classification. The study extensively compares several DL architectures (ResNet50, GoogleNet, InceptionV3, and Xception). Images of various types of trash are collected and cleaned to form a dataset. Accuracy, precision, recall, and F1 score are among the measures used to assess the performance of the DL models trained and tested on this dataset. ResNet50 showed impressive performance in waste material classification, with 95% accuracy, 95.4% precision, 95% recall, and a 94.8% F1 score, with only two misclassifications, both in the glass class. InceptionV3 achieved remarkable accuracy, precision, recall, and F1 scores, correctly classifying all classes with an F1 score of 100%. Xception’s classification accuracy was also excellent (100%), with a few difficulties in the glass and trash categories. With 90.78% precision, 100% recall, and an 89.81% F1 score, GoogleNet performed admirably. This study highlights the significance of using DL-based models for categorizing trash. The results open the way for enhanced trash sorting and recycling operations, contributing to an economically and ecologically sustainable future.
Waste material classification using performance evaluation of deep learning models. Israa Badr Al-Mashhadani. Journal of Intelligent Systems, 2023. doi:10.1515/jisys-2023-0064
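The accuracy, precision, recall, and F1 measures used to compare the architectures above can be computed from the raw predictions with a short routine. This is a generic macro-averaged sketch, not the study's own evaluation code; the toy waste-class labels below are invented for illustration.

```python
def classification_metrics(y_true, y_pred):
    """Accuracy plus macro-averaged precision, recall, and F1 —
    the measures the study reports for each DL model (sketch only)."""
    labels = sorted(set(y_true) | set(y_pred))
    precisions, recalls = [], []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    precision = sum(precisions) / len(labels)
    recall = sum(recalls) / len(labels)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Toy example: three hypothetical waste classes, six test samples.
y_true = ["glass", "glass", "paper", "paper", "metal", "metal"]
y_pred = ["glass", "paper", "paper", "paper", "metal", "metal"]
m = classification_metrics(y_true, y_pred)
```

Note that macro averaging weights every class equally, which is why a model can post 100% recall but a lower F1 score when one class's precision drops, as in the GoogleNet figures above.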