Pub Date : 2024-05-18 | DOI: 10.1007/s12652-024-04808-9
Density peaks clustering algorithm based on multi-cluster merge and its application in the extraction of typical load patterns of users
Jia Zhao, Zhanfeng Yao, Liujun Qiu, Tanghuai Fan, Ivan Lee
The density peaks clustering (DPC) algorithm is simple in principle, efficient in operation, and produces good clustering results on many types of datasets. However, it still has two defects: (1) owing to limitations in its definitions of local density and relative distance, the algorithm has difficulty finding the correct density peaks; (2) its allocation strategy has poor robustness and is prone to cause further errors. To address these shortcomings, we propose a density peaks clustering algorithm based on multi-cluster merging (DPC-MM). To ease the difficulty of selecting density peaks in DPC, we define a new way of computing the relative distance of samples, which makes the density peaks found more accurate. We also propose a multi-cluster merge allocation strategy that alleviates or avoids the problems caused by allocation errors. Experimental results show that DPC-MM efficiently clusters datasets of arbitrary shape and scale. Applied to the extraction of typical user load patterns, DPC-MM clusters user loads more accurately, and the extracted patterns better reflect users' electricity consumption habits.
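For context, the baseline decision quantities that DPC-MM refines can be sketched in a few lines of NumPy. This is a toy version of standard density-peaks scoring (Gaussian-kernel density rho, distance-to-denser-point delta), not the paper's DPC-MM or its new relative-distance definition; the cutoff d_c and the two-blob data are illustrative assumptions.

```python
import numpy as np

def density_peaks(X, d_c):
    """Toy density-peaks scoring in the style of the original DPC:
    rho_i   -- Gaussian-kernel local density with cutoff d_c
    delta_i -- distance to the nearest point of higher density
               (for the densest point: its maximum distance to any point).
    Cluster centers are points where both rho and delta are large.
    """
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    rho = np.exp(-(D / d_c) ** 2).sum(axis=1) - 1.0             # subtract self term
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = D[i].max() if higher.size == 0 else D[i, higher].min()
    return rho, delta

# Two well-separated blobs: the densest point of each should score highest.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
rho, delta = density_peaks(X, d_c=0.5)
peaks = np.argsort(rho * delta)[-2:]   # top-2 by gamma = rho * delta
print(sorted(int(i) for i in peaks))
```

With one peak expected per blob, ranking by gamma = rho * delta recovers exactly one index from each half of the data.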
Pub Date : 2024-05-17 | DOI: 10.1007/s12652-024-04811-0
MusicEmo: transformer-based intelligent approach towards music emotion generation and recognition
Ying Xin
Pub Date : 2024-05-10 | DOI: 10.1007/s12652-024-04807-w
The empirical study of tweet classification system for disaster response using shallow and deep learning models
Kholoud Maswadi, Ali Alhazmi, Faisal Alshanketi, Christopher Ifeanyi Eke
Disaster-related tweets posted during an emergency carry a variety of information: people hurt or killed, people lost or found, infrastructure and utilities destroyed. This information can help governmental and humanitarian organizations prioritize their aid and rescue efforts. Because of the massive volume of such tweets, it is crucial to build a model that categorizes them into distinct types, so as to better organize rescue and relief efforts and save lives. In this study, Twitter data from the 2013 Queensland flood and the 2015 Nepal earthquake is classified as disaster or non-disaster using three classes of models. The first model uses a lexical feature based on Term Frequency-Inverse Document Frequency (TF-IDF); classification is performed with DT, LR, SVM, and RF, and a majority-voting ensemble combines their outputs. The second model uses shallow classifiers with several features, including lexical (TF-IDF), hashtag, POS, and GloVe embeddings. The third set of models uses deep learning algorithms, including LSTM, Bi-LSTM, and GRU, with BERT (Bidirectional Encoder Representations from Transformers) providing semantic word embeddings that capture context. Standard evaluation metrics (accuracy, F1 score, recall, and precision) are used to measure and compare the three sets of models for disaster response classification on the two publicly available Twitter datasets.
A comprehensive empirical evaluation across the two disaster types shows that the DT algorithm achieved the highest accuracy, followed by the Bi-LSTM models, attaining 96.46% and 96.40% respectively on the Queensland flood dataset; on the Nepal earthquake dataset, the DT algorithm attained 78.3% accuracy with the majority-voting ensemble. This research thus contributes an investigation of how to integrate deep and shallow learning models effectively in a tweet classification system designed for disaster response. Examining how the two approaches work together offers insight into exploiting their complementary advantages to increase the robustness and accuracy of locating relevant data in disaster crises.
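The first model family described above (TF-IDF features fed to DT, LR, SVM, and RF, combined by hard majority voting) can be sketched with scikit-learn. The six toy tweets and the specific hyperparameters are placeholders, not the Queensland or Nepal data or the paper's exact configuration.

```python
# Sketch of TF-IDF + shallow-classifier majority voting on toy data.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

tweets = [
    "flood water rising houses destroyed send help",
    "bridge collapsed people trapped rescue needed",
    "earthquake damage buildings down casualties reported",
    "beautiful sunny day at the beach",
    "enjoying coffee with friends this morning",
    "new movie release this weekend so excited",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = disaster, 0 = non-disaster

ensemble = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(
        estimators=[
            ("dt", DecisionTreeClassifier(random_state=0)),
            ("lr", LogisticRegression()),
            ("svm", LinearSVC()),
            ("rf", RandomForestClassifier(random_state=0)),
        ],
        voting="hard",  # each classifier casts one vote; majority wins
    ),
)
ensemble.fit(tweets, labels)
print(ensemble.predict(["flood destroyed the bridge"]))
```

An unseen tweet sharing disaster vocabulary with the training examples should be voted into the disaster class by the ensemble.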
Pub Date : 2024-05-09 | DOI: 10.1007/s12652-024-04798-8
Design of sports goods marketing strategy simulation system based on multi agent technology
Fei Liu
Unlike other marketing strategies, sporting goods marketing strategies are constrained by the fragility and randomness of marketing data, which imposes additional restrictions. To improve the economic benefits of sporting goods enterprises, a sporting goods marketing strategy simulation system based on multi-agent technology is proposed. The power supply is a rectifier built from LMZ10503, ADP2164, and ADP1755 circuits; the hardware design is completed by combining a marketing strategy acquisition card module with a sporting goods marketing strategy simulator module. In the software design, the logic degree of the marketing strategy simulation nodes is optimized according to an evaluation of each node's logic degree. Multi-agent technology is used to build a multi-agent model of the marketing strategy, and the simulation is realized by generating marketing strategy simulation signals. Test results show that the system successfully simulates sporting goods marketing strategies. Through its application, sales volume and profit margin increased to 900,000 units and 90% respectively. These results validate the system's potential for optimizing marketing strategies and improving economic benefits, and provide a reference for the sporting goods industry. Further promotion and application of the system is expected to help enterprises develop more accurate and scientific marketing strategies, achieve higher sales volumes and profit margins, and thus promote sustainable development and competitive advantage.
Pub Date : 2024-05-09 | DOI: 10.1007/s12652-024-04804-z
Deep kernelized dimensionality reducer for multi-modality heterogeneous data
Arifa Shikalgar, Shefali Sonavane
Data mining applications use high-dimensional datasets, in which the large number of dimensions causes the well-known 'curse of dimensionality': machine learning classifiers lose accuracy because many unimportant and unnecessary dimensions are included in the dataset. Many approaches have been proposed to handle such critical-dimension datasets, but their accuracy suffers as a result. To deal with high-dimensional datasets, a hybrid Deep Kernelized Stacked De-noising Autoencoder (DKSDA) based on feature learning is therefore proposed. Because of its layered structure, DKSDA can manage vast amounts of heterogeneous data and performs knowledge-based reduction that takes many attributes into account. Using two fine-tuning stages, it examines all modalities, including hidden potential modalities: random noise is added to the input feature vectors, and a stack of de-noising autoencoders is trained. This SDA processing decreases the prediction error caused by failing to analyze concealed structure among the modalities. In addition, to handle very large datasets, a Spatial Pyramid Pooling (SPP) layer is introduced into the Convolutional Neural Network (CNN) structure, using a kernel function with structural knowledge to reduce or remove the sections other than the key characteristics. Experiments show that the proposed DKSDA achieves an average accuracy of about 97.57% with a dimensionality reduction of 12%. Pre-training thus reduces dimensionality while enhancing classification accuracy and reducing processing complexity.
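The de-noising autoencoder building block that DKSDA stacks can be illustrated in plain NumPy: corrupt the input with noise, then train the network to reconstruct the clean input, so the bottleneck learns a robust low-dimensional representation. The layer sizes, noise level, and learning rate below are illustrative assumptions; the kernelization, stacking, and SPP stages of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data with genuine 3-D structure embedded in 8 dimensions.
Z = rng.normal(size=(200, 3))
A = rng.normal(size=(3, 8))
X = Z @ A

W1 = rng.normal(scale=0.1, size=(8, 3)); b1 = np.zeros(3)   # encoder: 8 -> 3
W2 = rng.normal(scale=0.1, size=(3, 8)); b2 = np.zeros(8)   # decoder: 3 -> 8
lr = 0.05

def forward(Xin):
    H = np.tanh(Xin @ W1 + b1)        # hidden code (the reduced representation)
    return H, H @ W2 + b2             # linear reconstruction

_, R = forward(X)
mse_before = float(((R - X) ** 2).mean())

for _ in range(500):
    Xn = X + rng.normal(scale=0.3, size=X.shape)   # corrupt the input ...
    H, R = forward(Xn)
    err = R - X                                    # ... but reconstruct clean X
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)             # backprop through tanh
    gW1 = Xn.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, R = forward(X)
mse_after = float(((R - X) ** 2).mean())
print(mse_before, mse_after)
```

Because the 8-D data truly lies near a 3-D subspace, the reconstruction error through the 3-unit bottleneck drops substantially during training.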
Pub Date : 2024-05-09 | DOI: 10.1007/s12652-024-04801-2
A privacy and compliance in regulated anonymous payment system based on blockchain
Issameldeen Elfadul, Lijun Wu, Rashad Elhabob, Ahmed Elkhalil
Decentralized Anonymous Payment (DAP) systems, commonly known as cryptocurrencies, are among the most innovative and successful applications on the blockchain. They have garnered significant attention in the financial industry for their highly secure and reliable features. Regrettably, DAP systems can be exploited to fund illegal activities such as drug dealing and terrorism, so governments are increasingly worried about their illicit use, which poses a critical threat to security. This paper proposes PCRAP, a privacy-and-compliance scheme for regulated anonymous payment on the blockchain, which provides government supervision and enforces regulations over transactions without sacrificing the essential idea of the blockchain, that is, without surrendering transaction privacy or participant anonymity. The key characteristic of the proposed scheme is the use of a ring signature and a stealth address to ensure the anonymity of both the sender and the receiver of a transaction; a Merkle tree is then used to guarantee government supervision and enforce regulations. The proposed scheme satisfies most of the stringent security requirements and complies with the standards of secure payment systems. Moreover, while supporting government regulation and supervision, it guarantees unconditional anonymity for users, and the performance analysis demonstrates that the scheme remains applicable and effective even when achieving complete anonymity.
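The Merkle-tree component, which lets an auditor check that a given transaction is included in a committed set without seeing the rest, can be sketched as follows. The leaf encoding, SHA-256 hashing, and odd-node duplication conventions here are illustrative assumptions, not the paper's exact construction.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root hash over the leaf set (last node duplicated on odd levels)."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes (with side flags) proving leaves[index] is included."""
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2 == 0))  # True: sibling on right
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sib, sib_on_right in proof:
        node = h(node + sib) if sib_on_right else h(sib + node)
    return node == root

txs = [b"tx-a", b"tx-b", b"tx-c", b"tx-d", b"tx-e"]
root = merkle_root(txs)
proof = merkle_proof(txs, 2)
print(verify(txs[2], proof, root))
```

A regulator holding only the published root can verify any transaction's inclusion from a proof of logarithmic size, which is what makes tree commitments attractive for supervision.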
Pub Date : 2024-05-07 | DOI: 10.1007/s12652-024-04805-y
A top-down character segmentation approach for Assamese and Telugu handwritten documents
Prarthana Dutta, Naresh Babu Muppalaneni
Digitization offers a solution to the challenges of managing and retrieving paper-based documents. However, paper documents must first be converted into a form digital machines can process, since machines primarily understand alphanumeric text. This transformation is achieved through Optical Character Recognition (OCR), which converts scanned document images into a machine-processable format. This work proposes a novel multi-stage, top-down character segmentation approach. It begins by isolating lines from handwritten documents, then uses these lines to segment words and characters. To further refine character segmentation, a raster-scanning object detection technique isolates individual characters within words, and the final character segmentation integrates the results of vertical projection and raster scanning. Recognizing the importance of advancing the digitization of handwritten documents, we focus on the regional languages of Assam and Andhra Pradesh, given their historical and cultural importance in India's linguistic diversity, and we collected datasets of handwritten Assamese and Telugu text because none were available in the desired form. The approach achieved average segmentation accuracies of 93.61%, 85.96%, and 88.74% for lines, words, and characters across both languages. The motivation for the top-down approach is two-fold: it enhances character recognition accuracy, and the segmented lines and words can later be used for language/script identification.
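The projection-profile stages of such a top-down pipeline, lines from row sums and characters from column sums of a binarized page, can be sketched on a tiny synthetic "page"; the raster-scanning refinement stage and real handwriting are omitted, and the ink layout below is an invented example.

```python
import numpy as np

def segment(profile):
    """Return (start, end) index pairs of contiguous nonzero runs:
    blank rows/columns (zero ink) act as separators."""
    runs, start = [], None
    for i, v in enumerate(profile):
        if v and start is None:
            start = i
        elif not v and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(profile)))
    return runs

page = np.zeros((12, 10), dtype=int)
page[1:3, 1:9] = 1     # text line 1
page[5:8, 2:4] = 1     # text line 2, "character" 1
page[5:8, 6:9] = 1     # text line 2, "character" 2

lines = segment(page.sum(axis=1))      # horizontal projection -> lines
line2 = page[5:8]
chars = segment(line2.sum(axis=0))     # vertical projection -> characters
print(lines, chars)
```

Row sums isolate the two text lines, and column sums within a line isolate its two ink blobs, which is exactly why touching or overlapping handwritten characters need the extra object-detection pass the paper adds.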
Pub Date : 2024-05-06 | DOI: 10.1007/s12652-024-04806-x
Personalized behavior modeling network for human mobility prediction
Xiangping Wu, Zheng Zhang, Wangjun Wan, Shuaiwei Yao
Predicting human mobility is essential for urban planning and personalized services. This study analyzes user behavior patterns and predicts each user's next destination. Because human mobility is complex and diverse, behavior patterns must be studied from multiple angles, and diverse context information must be leveraged to construct prediction models. Unfortunately, previous research often neglects personalized preferences and falls short of a comprehensive understanding of user behavior patterns, and some studies have not effectively mined and utilized contextual information. To address these shortcomings, this paper introduces a novel Personalized Behavior Modeling Network (PBMN). Compared with existing methods, PBMN models user behavior more comprehensively and uses context information more extensively, enabling more accurate prediction. It models user behavior through two parallel channels that capture both sequential patterns and personalized preferences while fully exploiting different contextual information, and it generates predictions by personalized integration of the resulting behavior features. Specifically, PBMN employs a pair of attention-based encoders and decoders to model overall behavior features, and three parallel recurrent neural networks to model recent behavior features at different levels of context information. Evaluated on two real-world datasets, PBMN outperforms five mainstream prediction methods on three commonly used evaluation metrics, demonstrating its effectiveness.
Pub Date : 2024-05-05DOI: 10.1007/s12652-024-04791-1
Woojin Lee, Sungyoon Lee, Hoki Kim, Jaewook Lee
Recently, deep-learning-based models have achieved impressive performance on tasks that were previously considered to be extremely challenging. However, recent works have shown that various deep learning models are susceptible to adversarial data samples. In this paper, we propose the sliced Wasserstein adversarial training method to encourage the logit distributions of clean and adversarial data to be similar to each other. We capture the dissimilarity between two distributions using the Wasserstein metric and then align distributions using an end-to-end training process. We present the theoretical background of the motivation for our study by providing generalization error bounds for adversarial data samples. We performed experiments on three standard datasets and the results demonstrate that our method is more robust against white box attacks compared to previous methods.
{"title":"Sliced Wasserstein adversarial training for improving adversarial robustness","authors":"Woojin Lee, Sungyoon Lee, Hoki Kim, Jaewook Lee","doi":"10.1007/s12652-024-04791-1","DOIUrl":"https://doi.org/10.1007/s12652-024-04791-1","url":null,"abstract":"<p>Recently, deep-learning-based models have achieved impressive performance on tasks that were previously considered to be extremely challenging. However, recent works have shown that various deep learning models are susceptible to adversarial data samples. In this paper, we propose the sliced Wasserstein adversarial training method to encourage the logit distributions of clean and adversarial data to be similar to each other. We capture the dissimilarity between two distributions using the Wasserstein metric and then align distributions using an end-to-end training process. We present the theoretical background of the motivation for our study by providing generalization error bounds for adversarial data samples. We performed experiments on three standard datasets and the results demonstrate that our method is more robust against white box attacks compared to previous methods.</p>","PeriodicalId":14959,"journal":{"name":"Journal of Ambient Intelligence and Humanized Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140889847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
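The abstract above measures the dissimilarity between clean and adversarial logit distributions with a sliced Wasserstein metric. A self-contained NumPy sketch of the standard Monte Carlo estimator is shown below (project both point clouds onto random unit directions, sort, and average the 1D transport costs); the paper's training loss and exact estimator may differ:

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=64, rng=None):
    """Approximate sliced 1-Wasserstein distance between two point
    clouds x, y of shape (n, d), e.g. clean vs. adversarial logits.
    Assumes equal sample counts so sorted projections pair up directly."""
    rng = np.random.default_rng(rng)
    d = x.shape[1]
    theta = rng.normal(size=(n_proj, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # random unit directions
    px = np.sort(x @ theta.T, axis=0)  # sorted 1D projections, shape (n, n_proj)
    py = np.sort(y @ theta.T, axis=0)
    # In 1D, the Wasserstein-1 distance is the mean gap between sorted samples.
    return np.mean(np.abs(px - py))
```

In adversarial training this quantity would be added to the classification loss and minimized end-to-end, pulling the adversarial logit distribution toward the clean one.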
Pub Date : 2024-05-05DOI: 10.1007/s12652-024-04794-y
P. Rajesh Kanna, P. Santhi
The field of computer networking is experiencing rapid growth, accompanied by the swift advancement of internet tools. As a result, people are becoming more aware of the importance of network security. One of the primary concerns in ensuring security is the authority over domains, and network owners are striving to establish a common language to exchange security information and respond quickly to emerging threats. Given the increasing prevalence of various types of attacks, network security has become a significant challenge in the realm of computing. To address this, a multi-level distributed approach incorporating vulnerability identification, dimensioning, and countermeasures based on attack graphs has been developed. Implementing reconfigurable virtual systems as countermeasures significantly improves attack detection and mitigates the impact of attacks. Password-based authentication, for instance, can be susceptible to password cracking techniques, social engineering attacks, or data breaches that expose user credentials. Similarly, ensuring privacy during data transmission through encryption helps protect data from unauthorized access, but it does not guarantee the prevention of other types of attacks such as malware infiltration or insider threats. This research explores various techniques to achieve effective attack detection. Multiple research methods have been utilized and evaluated to identify the most suitable approach for network security and attack detection in the context of cloud computing. The analysis and implementation of diverse research studies demonstrate that the signature-based intrusion detection method outperforms others in terms of precision, recall, F-measure, accuracy, reliability, and time complexity.
{"title":"Exploring the landscape of network security: a comparative analysis of attack detection strategies","authors":"P. Rajesh Kanna, P. Santhi","doi":"10.1007/s12652-024-04794-y","DOIUrl":"https://doi.org/10.1007/s12652-024-04794-y","url":null,"abstract":"<p>The field of computer networking is experiencing rapid growth, accompanied by the swift advancement of internet tools. As a result, people are becoming more aware of the importance of network security. One of the primary concerns in ensuring security is the authority over domains, and network owners are striving to establish a common language to exchange security information and respond quickly to emerging threats. Given the increasing prevalence of various types of attacks, network security has become a significant challenge in the realm of computing. To address this, a multi-level distributed approach incorporating vulnerability identification, dimensioning, and countermeasures based on attack graphs has been developed. Implementing reconfigurable virtual systems as countermeasures significantly improves attack detection and mitigates the impact of attacks. Password-based authentication, for instance, can be susceptible to password cracking techniques, social engineering attacks, or data breaches that expose user credentials. Similarly, ensuring privacy during data transmission through encryption helps protect data from unauthorized access, but it does not guarantee the prevention of other types of attacks such as malware infiltration or insider threats. This research explores various techniques to achieve effective attack detection. Multiple research methods have been utilized and evaluated to identify the most suitable approach for network security and attack detection in the context of cloud computing. 
The analysis and implementation of diverse research studies demonstrate that the signature-based intrusion detection method outperforms others in terms of precision, recall, F-measure, accuracy, reliability, and time complexity.</p>","PeriodicalId":14959,"journal":{"name":"Journal of Ambient Intelligence and Humanized Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140889052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
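The survey's conclusion favors signature-based intrusion detection, i.e. matching traffic against a database of known attack patterns. A toy sketch of that core matching step is given below; the signature strings and rule names are purely illustrative placeholders, not drawn from any real IDS ruleset:

```python
# Hypothetical signature database: rule name -> byte pattern (illustrative only).
SIGNATURES = {
    "sql_injection": b"' OR '1'='1",
    "path_traversal": b"../../",
    "xss": b"<script>",
}

def detect(payload: bytes):
    """Return the names of all signatures found in a packet payload."""
    return [name for name, sig in SIGNATURES.items() if sig in payload]
```

This illustrates why signature matching scores well on precision and time complexity (substring checks are cheap and rarely false-positive) while remaining blind to attacks with no recorded signature, the gap anomaly-based methods try to fill.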