
Latest publications — Journal of Ambient Intelligence and Humanized Computing

Suspicious activities detection using spatial–temporal features based on vision transformer and recurrent neural network
Q1 Computer Science (CAS Tier 3) · Pub Date: 2024-05-29 · DOI: 10.1007/s12652-024-04818-7
Saba Hameed, Javaria Amin, Muhammad Almas Anjum, Muhammad Sharif

There is a growing demand for surveillance applications, driven by the need for safety and security against anomalous events. An anomaly in a video is an event that exhibits unusual behavior. Although recognizing these anomalous events takes time, computerized methods can help to reduce it and deliver efficient prediction. However, accurate anomaly detection remains a challenge due to complex backgrounds, illumination, variations, and occlusion. To handle these challenges, a vision transformer convolutional recurrent neural network, the ViT-CNN-RCNN model, is proposed for the classification of suspicious activities from frames and videos. The pre-trained ViT-base-patch16-224-in21k model takes 224 × 224 × 3 video frames as input and converts them into 16 × 16 patches. ViT-base-patch16-224-in21k consists of a patch-embedding layer, a ViT encoder and transformer layer with 11 blocks, layer normalization, and a ViT pooler. The ViT model is trained with selected learning parameters, such as 20 training epochs and a batch size of 10, to categorize input frames into thirteen classes such as robbery, fighting, shooting, stealing, shoplifting, arrest, arson, abuse, exploiting, road accident, burglary, and vandalism. The CNN-RNN sequential model is designed to process sequential data and contains an input layer, a GRU layer, a GRU-1 layer, and a dense layer. This model is trained with optimal hyperparameters, such as a video frame size of 32, 30 training epochs, and a batch size of 16, for classification into the corresponding class labels. The proposed model is evaluated on the UNI-Crime and UCF-Crime datasets. The experimental outcomes show that the proposed approach performs better than recently published works.
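As a rough illustration (not the authors' code), the ViT patch-embedding stage described above splits a 224 × 224 × 3 frame into non-overlapping 16 × 16 patches, giving 14 × 14 = 196 patches of 768 values each; a minimal NumPy sketch:

```python
import numpy as np

def patchify(frame, patch=16):
    """Split an H x W x C frame into non-overlapping flattened patches,
    as the ViT patch-embedding stage does before linear projection."""
    h, w, c = frame.shape
    assert h % patch == 0 and w % patch == 0
    # reshape into (h/p, p, w/p, p, c), then gather patches together
    x = frame.reshape(h // patch, patch, w // patch, patch, c)
    x = x.transpose(0, 2, 1, 3, 4)            # (h/p, w/p, p, p, c)
    return x.reshape(-1, patch * patch * c)   # (num_patches, p*p*c)

frame = np.zeros((224, 224, 3))
patches = patchify(frame)
print(patches.shape)  # (196, 768)
```

Each 768-dimensional patch vector would then be linearly projected and fed to the transformer encoder.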

Citations: 0
Research on badminton take-off recognition method based on improved deep learning
Q1 Computer Science (CAS Tier 3) · Pub Date: 2024-05-28 · DOI: 10.1007/s12652-024-04809-8
Lu Lianju, Zhang Haiying

Because of the fast take-off speed in badminton, a single action recognition method cannot quickly and accurately identify the action. Therefore, a new badminton take-off recognition method based on improved deep learning is proposed to capture take-offs accurately. Badminton sports videos are collected, and images of athletes' activity areas are obtained by tracking the moving targets in badminton competition videos. The static characteristics of players' take-off actions are extracted from the activity-area images using 3D ConvNets. Based on the human joint points in the player's target-tracking image, the human skeleton sequence is constructed using a 2D coordinate pseudo-image and a 2D skeleton data design algorithm, and the dynamic characteristics of the take-off action are extracted from the skeleton sequence using an LSTM (Long Short-Term Memory) network. After the static and dynamic features are fused by weighted summation, the fused take-off features are input into a convolutional neural network (CNN) to complete take-off recognition. The CNN pooling layer is improved with adaptive pooling, and network convergence is accelerated by batch normalization to further optimize the recognition results. Experiments show that the human skeleton model accurately matches human movements and assists in extracting action features. The improved CNN greatly improves the accuracy of take-off recognition. When recognizing real images, it accurately identifies human movements and judges whether a take-off action is present.
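The weighted-summation fusion of the static (3D ConvNet) and dynamic (LSTM) features can be sketched as below; the fusion weight `alpha` is a hypothetical value, since the abstract does not state it:

```python
import numpy as np

def fuse_features(static_feat, dynamic_feat, alpha=0.5):
    """Weighted-summation fusion of static and dynamic action features.
    alpha is a hypothetical fusion weight, not taken from the paper."""
    assert static_feat.shape == dynamic_feat.shape
    return alpha * static_feat + (1 - alpha) * dynamic_feat

static = np.array([0.2, 0.8, 0.1])   # e.g. from the 3D ConvNets branch
dynamic = np.array([0.6, 0.4, 0.3])  # e.g. from the LSTM branch
fused = fuse_features(static, dynamic, alpha=0.5)
print(fused)  # [0.4 0.6 0.2]
```

The fused vector would then be passed to the CNN classifier.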

Citations: 0
Cryptography-based location privacy protection in the Internet of Vehicles
Q1 Computer Science (CAS Tier 3) · Pub Date: 2024-05-21 · DOI: 10.1007/s12652-024-04752-8
George Routis, George Katsouris, Ioanna Roussaki
Citations: 0
Density peaks clustering algorithm based on multi-cluster merge and its application in the extraction of typical load patterns of users
Q1 Computer Science (CAS Tier 3) · Pub Date: 2024-05-18 · DOI: 10.1007/s12652-024-04808-9
Jia Zhao, Zhanfeng Yao, Liujun Qiu, Tanghuai Fan, Ivan Lee

The density peaks clustering (DPC) algorithm is simple in principle, efficient in operation, and achieves good clustering results on various types of datasets. However, the algorithm still has some defects: (1) due to limitations in the definitions of local density and relative distance of samples, it is difficult for the algorithm to find the correct density peaks; (2) the algorithm's allocation strategy has poor robustness and is prone to cause further errors. To address these shortcomings, we propose a density peaks clustering algorithm based on multi-cluster merging (DPC-MM). In view of the difficulty of selecting density peaks in the DPC algorithm, a new method of calculating the relative distance of samples is defined, making the density peaks found more accurate. A multi-cluster merging allocation strategy is proposed to alleviate or avoid problems caused by allocation errors. Experimental results show that the DPC-MM algorithm can efficiently cluster datasets of any shape and scale. The algorithm was applied to the extraction of typical load patterns of users and clusters user loads more accurately. The extraction results better reflect users' electricity-consumption habits.
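For orientation, the two quantities every DPC variant starts from — local density ρ and relative distance δ — can be computed as in the classic algorithm below. This is the standard definition, not the modified relative-distance calculation proposed in DPC-MM:

```python
import numpy as np

def dpc_density_distance(X, dc=1.0):
    """Classic DPC quantities for each sample: local density rho
    (cutoff kernel with cutoff distance dc) and relative distance delta
    (distance to the nearest sample of higher density)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rho = (d < dc).sum(axis=1) - 1          # exclude the point itself
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        # points with no denser neighbor take the maximum distance by convention
        delta[i] = d[i].max() if higher.size == 0 else d[i, higher].min()
    return rho, delta

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
rho, delta = dpc_density_distance(X, dc=0.5)
print(rho)  # [2 2 2 0] -- the tight cluster is dense, the outlier is not
```

Density peaks are then the samples with both large ρ and large δ; DPC-MM's contribution lies in redefining δ and merging the resulting clusters.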

Citations: 0
MusicEmo: transformer-based intelligent approach towards music emotion generation and recognition
Q1 Computer Science (CAS Tier 3) · Pub Date: 2024-05-17 · DOI: 10.1007/s12652-024-04811-0
Ying Xin
Citations: 0
The empirical study of tweet classification system for disaster response using shallow and deep learning models
Q1 Computer Science (CAS Tier 3) · Pub Date: 2024-05-10 · DOI: 10.1007/s12652-024-04807-w
Kholoud Maswadi, Ali Alhazmi, Faisal Alshanketi, Christopher Ifeanyi Eke

Disaster-related tweets during an emergency carry a variety of information on people who have been hurt or killed, people who are lost or found, and infrastructure and utilities that have been destroyed; this information can assist governmental and humanitarian organizations in prioritizing their aid and rescue efforts. Given their massive volume, it is crucial to build a model that can categorize these tweets into distinct types so as to better organize rescue and relief efforts and save lives. In this study, Twitter data from the 2013 Queensland flood and the 2015 Nepal earthquake is classified as disaster or non-disaster using three classes of models. The first model uses the lexical feature based on Term Frequency-Inverse Document Frequency (TF-IDF); classification was performed with five algorithms, including DT, LR, SVM, and RF, and ensemble voting was used to produce the final outcome. The second model uses shallow classifiers in conjunction with several features, including lexical (TF-IDF), hashtag, POS, and GloVe embeddings. The third set of models utilizes deep learning algorithms, including LSTM, Bi-LSTM, and GRU, using BERT (Bidirectional Encoder Representations from Transformers) to construct semantic word embeddings that capture context. Key performance metrics such as accuracy, F1 score, recall, and precision were employed to measure and compare the three sets of models for disaster-response classification on two publicly available Twitter datasets. A comprehensive empirical evaluation of the tweet classification technique across disaster types shows that the DT algorithm attained the highest accuracy, followed by the Bi-LSTM models, reaching 96.46% and 96.40% respectively on the Queensland flood dataset; the DT algorithm also attained 78.3% accuracy on the Nepal earthquake dataset with the majority-voting ensemble. This research thus contributes by investigating the effective integration of deep and shallow learning models in a tweet classification system designed for disaster response. Examining how these two approaches work together offers insight into how best to exploit their complementary advantages and increase the robustness and accuracy of locating suitable data in a disaster crisis.
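The two building blocks named above — the TF-IDF lexical feature and the majority-voting ensemble — can be sketched minimally as follows (toy tweets, pure Python; the study itself would use standard library implementations):

```python
import math
from collections import Counter

def tfidf(docs):
    """Minimal TF-IDF: term frequency times inverse document frequency,
    over whitespace-tokenized lowercase documents. Illustrative only."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()                         # document frequency per term
    for toks in tokenized:
        df.update(set(toks))
    vocab = sorted(df)
    idf = {t: math.log(n / df[t]) for t in vocab}
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append([tf[t] / len(toks) * idf[t] for t in vocab])
    return vocab, vectors

def majority_vote(predictions):
    """Hard-voting ensemble: the most common label across classifiers wins."""
    return Counter(predictions).most_common(1)[0][0]

docs = ["flood in queensland", "earthquake hits nepal", "flood rescue effort"]
vocab, vecs = tfidf(docs)
print(majority_vote(["disaster", "disaster", "non-disaster"]))  # disaster
```

In the study's first model, each classifier (DT, LR, SVM, RF) would vote on the TF-IDF vector of a tweet and the majority label would be the final prediction.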

Citations: 0
Design of sports goods marketing strategy simulation system based on multi agent technology
Q1 Computer Science (CAS Tier 3) · Pub Date: 2024-05-09 · DOI: 10.1007/s12652-024-04798-8
Fei Liu

Unlike other marketing strategies, sporting goods marketing strategies are affected by the fragility and randomness of marketing data, which introduces more restrictive factors. To improve the economic benefits of sporting goods enterprises, a design for a sporting goods marketing-strategy simulation system based on multi-agent technology is proposed. The power supply is a rectifier built on the LMZ10503, ADP2164, and ADP1755 circuits. The hardware design of the system combines the marketing-strategy acquisition-card module and the sporting goods marketing-strategy simulator module. In the software design, the logic degree of the marketing-strategy simulation nodes is optimized according to its evaluation results. Multi-agent technology is used to build a multi-agent model of the marketing strategy, and the simulation of the sports marketing strategy is realized by generating marketing-strategy simulation signals. Test results show that the simulation system successfully simulated the marketing strategy of sports equipment and achieved encouraging results: through the application of this system, sales volume and profit margin increased to 900,000 units and 90% respectively. These results validate the system's potential for optimizing marketing strategies and improving economic benefits, and provide a strong reference for the sporting goods industry. Further promotion and application of this system is expected to help enterprises develop more accurate and scientific marketing strategies, achieve higher sales volumes and profit margins, and thus promote sustainable development and competitive advantage.
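As a purely hypothetical illustration of what a multi-agent marketing simulation loop can look like (the paper's agent model, parameters, and signals are not described in the abstract), one might model consumers as agents with individual reserve prices and count how many buy at a given price:

```python
import random

class ConsumerAgent:
    """Toy consumer agent: buys if the offered price is at or below its
    reserve price. All parameters here are hypothetical."""
    def __init__(self, reserve_price):
        self.reserve_price = reserve_price

    def decide(self, price):
        return price <= self.reserve_price

def simulate(price, n_agents=1000, seed=42):
    """Run one pricing scenario over a population of consumer agents
    and return the number of sales."""
    rng = random.Random(seed)
    agents = [ConsumerAgent(rng.uniform(50, 150)) for _ in range(n_agents)]
    return sum(a.decide(price) for a in agents)

print(simulate(price=100))  # roughly half the agents buy at the median price
```

Sweeping `price` over a range would then let a strategy designer compare simulated sales outcomes across scenarios.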

Citations: 0
Deep kernelized dimensionality reducer for multi-modality heterogeneous data
Q1 Computer Science (CAS Tier 3) · Pub Date: 2024-05-09 · DOI: 10.1007/s12652-024-04804-z
Arifa Shikalgar, Shefali Sonavane

Data mining applications use high-dimensional datasets, but a large number of dimensions causes the well-known 'curse of dimensionality', which degrades the accuracy of machine learning classifiers because many unimportant and unnecessary dimensions are included in the dataset. Many approaches have been employed to handle critical-dimension datasets, but their accuracy suffers as a result. Consequently, to deal with high-dimensional datasets, a hybrid Deep Kernelized Stacked De-noising Autoencoder based on feature learning (DKSDA) is proposed. Owing to its layered structure, the DKSDA can manage vast amounts of heterogeneous data and performs knowledge-based reduction by taking many attributes into account. It examines all the modalities, including hidden potential modalities, using two fine-tuning stages: the input receives random noise along with the feature vectors, and a stack of de-noising autoencoders is generated. This SDA processing decreases the prediction error caused by the lack of analysis of concealed objects among the modalities. In addition, to handle very large datasets, a new Spatial Pyramid Pooling (SPP) layer is introduced into the Convolutional Neural Network (CNN) structure, using a kernel function with structural knowledge to reduce or remove sections other than the key characteristics. Recent studies revealed that the proposed DKSDA has an average accuracy of about 97.57% with a dimensionality reduction of 12%. By enhancing classification accuracy and processing complexity, pre-training reduces dimensionality.
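The Spatial Pyramid Pooling layer mentioned above maps a feature map of any spatial size to a fixed-length vector by max-pooling over grids of increasing resolution; a minimal NumPy sketch (standard SPP with an assumed 1/2/4 pyramid, not the paper's exact configuration):

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Spatial Pyramid Pooling: max-pool an H x W x C feature map over
    1x1, 2x2, 4x4 ... grids so any spatial size yields a fixed-length vector."""
    h, w, c = feature_map.shape
    out = []
    for n in levels:
        # integer bin edges for an n x n grid over the feature map
        hs = np.linspace(0, h, n + 1, dtype=int)
        ws = np.linspace(0, w, n + 1, dtype=int)
        for i in range(n):
            for j in range(n):
                cell = feature_map[hs[i]:hs[i+1], ws[j]:ws[j+1], :]
                out.append(cell.max(axis=(0, 1)))
    return np.concatenate(out)  # length = c * sum(n*n for n in levels)

fmap = np.random.rand(13, 9, 8)       # arbitrary spatial size, 8 channels
vec = spatial_pyramid_pool(fmap)
print(vec.shape)  # (168,) = 8 * (1 + 4 + 16)
```

Because the output length depends only on the channel count and the pyramid levels, the layer decouples the classifier from the input resolution.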

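The SPP layer mentioned in the DKSDA abstract produces a fixed-length feature vector from inputs of varying spatial size by pooling over progressively finer grids. The following is a minimal numpy sketch of that idea; the pyramid levels, max-pooling choice, and function name are illustrative assumptions, not details from the paper:

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool an (H, W, C) feature map over an n x n grid for each
    pyramid level and concatenate the results, so the output length is
    fixed regardless of H and W (illustrative sketch only)."""
    h, w, c = feature_map.shape
    pooled = []
    for n in levels:
        # Bin edges that always cover the whole map, even when H or W
        # is not divisible by n.
        h_edges = np.linspace(0, h, n + 1).astype(int)
        w_edges = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = feature_map[h_edges[i]:h_edges[i + 1],
                                   w_edges[j]:w_edges[j + 1], :]
                pooled.append(cell.max(axis=(0, 1)))
    return np.concatenate(pooled)  # length = C * sum(n * n for n in levels)
```

Whatever the input's height and width, the output length is C · Σn² over the pyramid levels, which is what lets a fixed-size classifier head sit on top of variable-size feature maps.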
Citations: 0
A privacy and compliance in regulated anonymous payment system based on blockchain
CAS Tier 3 Q1 Computer Science Pub Date : 2024-05-09 DOI: 10.1007/s12652-024-04801-2
Issameldeen Elfadul, Lijun Wu, Rashad Elhabob, Ahmed Elkhalil

Decentralized Anonymous Payment Systems (DAP), often known as cryptocurrencies, stand out as some of the most innovative and successful applications on the blockchain. These systems have garnered significant attention in the financial industry due to their highly secure and reliable features. Regrettably, a DAP system can be exploited to fund illegal activities such as drug dealing and terrorism. Governments are therefore increasingly worried about the illicit use of DAP systems, which poses a critical threat to their security. This paper proposes Privacy and Compliance in Regulated Anonymous Payment System Based on Blockchain (PCRAP), which provides government supervision and enforces regulations over transactions without sacrificing the essential idea of the blockchain, that is, without surrendering transaction privacy or the anonymity of the participants. The key characteristic of the proposed scheme is the use of a ring signature and a stealth address to ensure the anonymity of both the sender and receiver of a transaction. Moreover, a Merkle Tree is used to guarantee government supervision and enforce regulations. The proposed scheme satisfies most of the stringent security requirements and complies with the standards of secure payment systems. Additionally, while supporting government regulation and supervision, it guarantees unconditional anonymity for users. Furthermore, the performance analysis demonstrates that the suggested scheme remains applicable and effective even when achieving complete anonymity.
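The abstract credits a Merkle tree with enabling supervision without breaking anonymity. The paper does not spell out its construction, but the standard root-and-membership-proof scheme such designs build on can be sketched as follows (SHA-256 and duplicate-last-node padding are common conventions assumed here, not details from the paper):

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash each leaf, then repeatedly hash adjacent pairs until one
    root remains; an odd node is paired with itself (a common convention)."""
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes needed to prove leaves[index] is in the tree."""
    level = [_h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))  # (hash, sibling-is-on-the-left)
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_proof(leaf, proof, root):
    """Recompute the path from the leaf to the root using the proof."""
    node = _h(leaf)
    for sibling, is_left in proof:
        node = _h(sibling + node) if is_left else _h(node + sibling)
    return node == root
```

A regulator holding only the root can check that a disclosed transaction belongs to the committed set, while all other leaves stay hidden — the property the abstract relies on.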

Citations: 0
A top-down character segmentation approach for Assamese and Telugu handwritten documents
CAS Tier 3 Q1 Computer Science Pub Date : 2024-05-07 DOI: 10.1007/s12652-024-04805-y
Prarthana Dutta, Naresh Babu Muppalaneni

Digitization offers a solution to the challenges of managing and retrieving paper-based documents. However, these documents must first be converted into a format that digital machines can comprehend, as machines primarily understand alphanumeric text. This transformation is achieved through Optical Character Recognition (OCR), a technology that converts scanned document images into a machine-processable format. This work proposes a novel top-down character segmentation approach involving multiple stages. The approach begins by isolating lines from handwritten documents and uses these lines to segment words and characters. To further refine the character segmentation, a Raster Scanning object-detection technique isolates individual characters within words; the final character segmentation integrates the results of the vertical projection and the raster scanning. Recognizing the significance of advancing the digitization of handwritten documents, we focus on the regional languages of Assam and Andhra Pradesh because of their historical and cultural importance in India's linguistic diversity. Since no such datasets were available in the desired form, we collected our own handwritten-text datasets in Assamese and Telugu. The approach achieved average segmentation accuracies of 93.61%, 85.96%, and 88.74% for lines, words, and characters, respectively, across both languages. The key motivation behind opting for a top-down approach is twofold: it enhances the accuracy of character recognition, and it holds potential for future use in language/script identification through the utilization of segmented lines and words.
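The projection-profile idea underlying the line and word segmentation above can be sketched in a few lines of numpy: sum the ink along one axis and take each maximal run of non-zero sums as a segment. This is a simplified illustration under the assumption of a clean binary image; the paper's actual pipeline adds raster-scanning refinement on top:

```python
import numpy as np

def projection_segments(binary_img, axis=1):
    """Return [start, end) index ranges along the chosen axis where ink is present.
    axis=1 sums each row (horizontal profile -> text lines);
    axis=0 sums each column (vertical profile -> words/characters)."""
    profile = binary_img.sum(axis=axis)
    segments, start = [], None
    for i, ink in enumerate(profile > 0):
        if ink and start is None:
            start = i          # a new ink run begins
        elif not ink and start is not None:
            segments.append((start, i))
            start = None       # the run ended at a blank row/column
    if start is not None:
        segments.append((start, len(profile)))
    return segments
```

Lines are segmented first with `axis=1`; each line strip is then re-run with `axis=0` to split words and characters, which is the top-down order the abstract describes.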

Citations: 0