
Latest Publications in Cognitive Computation

Quasi-projective Synchronization Control of Delayed Stochastic Quaternion-Valued Fuzzy Cellular Neural Networks with Mismatched Parameters
IF 5.4 · CAS Tier 3, Computer Science · Q1 Computer Science · Pub Date: 2024-05-27 · DOI: 10.1007/s12559-024-10299-9
Xiaofang Meng, Yu Fei, Zhouhong Li

This paper deals with the quasi-projective synchronization problem of delayed stochastic quaternion-valued fuzzy cellular neural networks (FCNNs) with mismatched parameters. Although the parameter mismatch between the drive and response systems increases the computational complexity, it is of practical significance to account for deviations between the two systems. The approach is to design an appropriate controller and to construct a Lyapunov functional, combined with stochastic analysis based on the Itô formula in the quaternion domain. We adopt a non-decomposition treatment of the quaternion-valued FCNN, which preserves the original data structure and reduces computational effort. We obtain sufficient conditions for quasi-projective synchronization of the considered stochastic quaternion-valued FCNNs with mismatched parameters. Additionally, we estimate the error bound of quasi-projective synchronization and present a numerical example to verify the validity of the results. Our results are novel even when the considered networks degenerate into real-valued or complex-valued neural networks. This article thus provides a useful framework for studying the quasi-projective synchronization of delayed stochastic quaternion-valued FCNNs with time delay, and the method can also be applied to the quasi-projective synchronization of Clifford-valued neural networks.
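For orientation, quasi-projective synchronization of a drive-response pair is commonly formalized as below; this is a sketch of the standard definition from the synchronization literature, not the paper's exact error system:

```latex
% Standard quasi-projective synchronization definition (illustrative sketch;
% the paper's controller and error dynamics are not reproduced here).
e(t) = y(t) - \lambda x(t), \qquad
\limsup_{t \to \infty} \mathbb{E}\,\| e(t) \| \le \varepsilon
```

Here $x(t)$ and $y(t)$ are the drive and response states, $\lambda$ is the (here quaternion-valued) projective coefficient, and $\varepsilon > 0$ is a synchronization error bound of the kind estimated in the paper; unlike complete synchronization, the error need not converge to zero.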

Citations: 0
Vision-Enabled Large Language and Deep Learning Models for Image-Based Emotion Recognition
IF 5.4 · CAS Tier 3, Computer Science · Q1 Computer Science · Pub Date: 2024-05-27 · DOI: 10.1007/s12559-024-10281-5
Mohammad Nadeem, Shahab Saquib Sohail, Laeeba Javed, Faisal Anwer, Abdul Khader Jilani Saudagar, Khan Muhammad

The significant advancements in the capabilities, reasoning, and efficiency of artificial intelligence (AI)-based tools and systems are evident. Noteworthy examples include generative AI-based large language models (LLMs) such as generative pretrained transformer 3.5 (GPT-3.5), generative pretrained transformer 4 (GPT-4), and Bard. LLMs are versatile and effective for various tasks such as composing poetry, writing code, generating essays, and solving puzzles. Until recently, LLMs could process only text-based input; recent advancements, however, have enabled them to handle multimodal inputs such as text, images, and audio, making them highly general-purpose tools. Because LLMs have achieved decent performance in pattern recognition tasks (such as classification), a natural question is whether general-purpose LLMs can perform comparably to, or even better than, specialized deep learning models (DLMs) trained specifically for a given task. In this study, we compared the performance of fine-tuned DLMs with that of general-purpose LLMs for image-based emotion recognition. We trained the DLMs, namely two convolutional neural networks (CNN_1 and CNN_2), ResNet50, and VGG-16, on one image dataset for emotion recognition, and then tested their performance on another dataset. Subsequently, we evaluated the same test dataset with two vision-enabled LLMs (LLaVA and GPT-4). CNN_2 was the best model, with an accuracy of 62%, while VGG-16 produced the lowest accuracy, 31%. Among the LLMs, GPT-4 performed best, with an accuracy of 55.81%, and LLaVA achieved higher accuracy than both CNN_1 and VGG-16. Other performance metrics, such as precision, recall, and F1-score, followed similar trends; notably, GPT-4 performed best on small datasets. The weaker results of the LLMs can be attributed to their general-purpose nature: despite extensive pretraining, they may not capture the features required for a specific task such as emotion recognition in images as effectively as models fine-tuned for that task. Although the LLMs did not surpass the specialized models, they achieved comparable performance, making them a viable option for specific tasks without additional training, and a good alternative when the available dataset is small.
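As a rough illustration of the DLM side of this comparison, a minimal PyTorch sketch follows. It trains a small stand-in CNN (the abstract does not specify the architectures of CNN_1 and CNN_2, so this network is hypothetical) on one folder-per-class emotion image dataset and evaluates accuracy on a second dataset; the dataset paths are likewise hypothetical.

```python
# Minimal sketch: train a stand-in CNN on dataset A, test on dataset B,
# mirroring the cross-dataset protocol described in the abstract.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize((48, 48)),
                          transforms.Grayscale(),
                          transforms.ToTensor()])
train_set = datasets.ImageFolder("emotion_train/", transform=tfm)  # hypothetical path
test_set = datasets.ImageFolder("emotion_test/", transform=tfm)    # hypothetical path

class SmallCNN(nn.Module):            # stand-in architecture, not the paper's
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.classifier = nn.Linear(64 * 12 * 12, n_classes)  # 48 -> 24 -> 12
    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN(len(train_set.classes))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                # train on dataset A
    for xb, yb in DataLoader(train_set, batch_size=64, shuffle=True):
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()

correct = total = 0                   # evaluate on dataset B
with torch.no_grad():
    for xb, yb in DataLoader(test_set, batch_size=64):
        correct += (model(xb).argmax(1) == yb).sum().item()
        total += yb.numel()
print(f"accuracy: {correct / total:.2%}")
```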

Citations: 0
Investigating the Influence of Scene Video on EEG-Based Evaluation of Interior Sound in Passenger Cars
IF 5.4 · CAS Tier 3, Computer Science · Q1 Computer Science · Pub Date: 2024-05-25 · DOI: 10.1007/s12559-024-10303-2
Liping Xie, Zhien Liu, Yi Sun, Yawei Zhu

The evaluation of automobile sound quality is an important research topic in the interior sound design of passenger cars, and accurate, effective evaluation methods are required to set acoustic targets in automobile development. However, existing evaluation studies of automobile sound quality have several deficiencies. (1) Most subjective evaluations consider only auditory perception, which is easy to implement but does not fully reflect the impact of sound on participants. (2) Similarly, most existing subjective evaluations consider only the inherent properties of sounds, such as physical and psychoacoustic parameters, which makes it difficult to capture the complex relationship between a sound and the evaluators' subjective perception. (3) Evaluation models constructed only from physical and psychoacoustic perspectives cannot comprehensively analyze participants' real subjective emotions. To address these shortcomings, auditory and visual perception are combined to explore the influence of scene video on sound quality evaluation, and the EEG signal is introduced as a physiological-acoustic index; simultaneously, an Elman neural network model is constructed to predict the "powerful" sound quality from the proposed physical-acoustic, psychoacoustic, and physiological-acoustic indexes. The results show that sound quality evaluations combined with scene videos better reflect participants' subjective perceptions. The proposed objective physical, psychoacoustic, and physiological-acoustic indexes help map the subjective results for powerful sound quality, and the constructed Elman model outperforms traditional back-propagation (BP) and support vector machine (SVM) models. The analysis method proposed in this paper can be applied in automotive sound design, providing a clear guideline for the future evaluation and optimization of automotive sound quality.
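For reference, an Elman network is a simple recurrent network with a fed-back hidden layer; PyTorch's nn.RNN implements exactly this cell. A minimal regression sketch follows, with hypothetical feature dimensions, since the abstract does not enumerate the exact physical, psychoacoustic, and EEG indexes used:

```python
# Minimal Elman-network sketch (nn.RNN is an Elman RNN). The 8 input
# features per frame stand in for the paper's acoustic/EEG indexes.
import torch
import torch.nn as nn

class ElmanRegressor(nn.Module):
    def __init__(self, n_features: int, hidden: int = 16):
        super().__init__()
        self.rnn = nn.RNN(n_features, hidden, batch_first=True)  # Elman cell
        self.head = nn.Linear(hidden, 1)     # scalar "powerful" rating

    def forward(self, x):                    # x: (batch, time, n_features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])         # rating from the last time step

model = ElmanRegressor(n_features=8)
x = torch.randn(4, 10, 8)                    # 4 sounds, 10 frames, 8 indexes
print(model(x).shape)                        # torch.Size([4, 1])
```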

Citations: 0
Multi-resolution Twinned Residual Auto-Encoders (MR-TRAE)—A Novel DL Model for Image Multi-resolution
IF 5.4 · CAS Tier 3, Computer Science · Q1 Computer Science · Pub Date: 2024-05-21 · DOI: 10.1007/s12559-024-10293-1
Alireza Momenzadeh, E. Baccarelli, M. Scarpiniti, Sima Sarv Ahrabi
{"title":"Multi-resolution Twinned Residual Auto-Encoders (MR-TRAE)—A Novel DL Model for Image Multi-resolution","authors":"Alireza Momenzadeh, E. Baccarelli, M. Scarpiniti, Sima Sarv Ahrabi","doi":"10.1007/s12559-024-10293-1","DOIUrl":"https://doi.org/10.1007/s12559-024-10293-1","url":null,"abstract":"","PeriodicalId":51243,"journal":{"name":"Cognitive Computation","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141114087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Neuromorphic Cognitive Learning Systems: The Future of Artificial Intelligence?
IF 5.4 · CAS Tier 3, Computer Science · Q1 Computer Science · Pub Date: 2024-05-19 · DOI: 10.1007/s12559-024-10308-x
Vassilis Cutsuridis
{"title":"Neuromorphic Cognitive Learning Systems: The Future of Artificial Intelligence?","authors":"Vassilis Cutsuridis","doi":"10.1007/s12559-024-10308-x","DOIUrl":"https://doi.org/10.1007/s12559-024-10308-x","url":null,"abstract":"","PeriodicalId":51243,"journal":{"name":"Cognitive Computation","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141123438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Generative Model-Driven Synthetic Training Image Generation: An Approach to Cognition in Railway Defect Detection
IF 5.4 · CAS Tier 3, Computer Science · Q1 Computer Science · Pub Date: 2024-05-17 · DOI: 10.1007/s12559-024-10283-3
Rahatara Ferdousi, Chunsheng Yang, M. Anwar Hossain, Fedwa Laamarti, M. Shamim Hossain, Abdulmotaleb El Saddik
{"title":"Generative Model-Driven Synthetic Training Image Generation: An Approach to Cognition in Railway Defect Detection","authors":"Rahatara Ferdousi, Chunsheng Yang, M. Anwar Hossain, Fedwa Laamarti, M. Shamim Hossain, Abdulmotaleb El Saddik","doi":"10.1007/s12559-024-10283-3","DOIUrl":"https://doi.org/10.1007/s12559-024-10283-3","url":null,"abstract":"","PeriodicalId":51243,"journal":{"name":"Cognitive Computation","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140963676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Pairwise-Pixel Self-Supervised and Superpixel-Guided Prototype Contrastive Loss for Weakly Supervised Semantic Segmentation
IF 5.4 · CAS Tier 3, Computer Science · Q1 Computer Science · Pub Date: 2024-05-16 · DOI: 10.1007/s12559-024-10277-1
Lu Xie, Weigang Li, Yun-tao Zhao
{"title":"Pairwise-Pixel Self-Supervised and Superpixel-Guided Prototype Contrastive Loss for Weakly Supervised Semantic Segmentation","authors":"Lu Xie, Weigang Li, Yun-tao Zhao","doi":"10.1007/s12559-024-10277-1","DOIUrl":"https://doi.org/10.1007/s12559-024-10277-1","url":null,"abstract":"","PeriodicalId":51243,"journal":{"name":"Cognitive Computation","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140970892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
NeuralPMG: A Neural Polyphonic Music Generation System Based on Machine Learning Algorithms
IF 5.4 · CAS Tier 3, Computer Science · Q1 Computer Science · Pub Date: 2024-05-15 · DOI: 10.1007/s12559-024-10280-6
Tommaso Colafiglio, Carmelo Ardito, Paolo Sorino, Domenico Lofù, Fabrizio Festa, Tommaso Di Noia, Eugenio Di Sciascio
{"title":"NeuralPMG: A Neural Polyphonic Music Generation System Based on Machine Learning Algorithms","authors":"Tommaso Colafiglio, Carmelo Ardito, Paolo Sorino, Domenico Lofù, Fabrizio Festa, Tommaso Di Noia, Eugenio Di Sciascio","doi":"10.1007/s12559-024-10280-6","DOIUrl":"https://doi.org/10.1007/s12559-024-10280-6","url":null,"abstract":"","PeriodicalId":51243,"journal":{"name":"Cognitive Computation","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140971763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Structured Encoding Based on Semantic Disambiguation for Video Captioning
IF 5.4 · CAS Tier 3, Computer Science · Q1 Computer Science · Pub Date: 2024-05-09 · DOI: 10.1007/s12559-024-10275-3
Bo Sun, Jinyu Tian, Yong Wu, Lunjun Yu, Yuanyan Tang

Video captioning, which aims to automatically generate captions for videos, has gained significant attention due to its wide range of applications in video surveillance and retrieval. However, most existing methods rely on frame-level convolution to extract features, ignoring the semantic relationships between objects and therefore failing to encode video details. To address this problem, inspired by human cognitive processes, we propose a video captioning method based on semantic disambiguation through structured encoding. First, the conceptual semantic graph of a video is constructed by introducing a knowledge graph. Then, graph convolution networks perform relational learning on the conceptual semantic graph to mine the semantic relationships between objects and form a detail encoding of the video. To address the semantic ambiguity of multiple relationships between objects, we propose a method that dynamically learns the most relevant relationships from video scene semantics, constructing semantic graphs based on semantic disambiguation. Finally, we propose a cross-domain guided relationship learning strategy to avoid the negative impact of using only captions as the cross-entropy loss. Experiments on three datasets—MSR-VTT, ActivityNet Captions, and Student Classroom Behavior—show that our method outperforms other methods. The results show that introducing a knowledge graph for common-sense reasoning about objects in videos can deeply encode the semantic relationships between objects, capturing video details and improving captioning performance.
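For orientation, the relational-learning step described above typically propagates object-concept features over the semantic graph through one or more graph-convolution layers. The sketch below shows a single degree-normalized GCN layer with illustrative dimensions; the paper's exact architecture is not given in the abstract.

```python
# One graph-convolution layer over a conceptual semantic graph
# (degree-normalized neighbor averaging; dimensions are illustrative).
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x:   (n_nodes, in_dim) object-concept features
        # adj: (n_nodes, n_nodes) adjacency with self-loops
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        return torch.relu(self.lin(adj @ x / deg))  # aggregate, then transform

nodes = torch.randn(5, 300)          # e.g., 5 concepts with 300-d embeddings
adj = torch.eye(5)
adj[0, 1] = adj[1, 0] = 1.0          # a mined relationship between concepts 0 and 1
print(GCNLayer(300, 128)(nodes, adj).shape)   # torch.Size([5, 128])
```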

Citations: 0
Federated Constrastive Learning and Visual Transformers for Personal Recommendation
IF 5.4 · CAS Tier 3, Computer Science · Q1 Computer Science · Pub Date: 2024-05-08 · DOI: 10.1007/s12559-024-10286-0
Asma Belhadi, Youcef Djenouri, Fabio Augusto de Alcantara Andrade, Gautam Srivastava

This paper introduces a novel solution for personal recommendation in consumer electronics applications. On the one hand, it addresses data confidentiality during training by exploring federated learning and trusted-authority mechanisms; on the other hand, it deals with data quantity and quality by exploring both transformers and consumer clustering. The process starts by grouping consumers into similar clusters using contrastive learning and the k-means algorithm. The local model of each consumer is trained on that consumer's local data. The local models, together with the clustering information, are then sent to the server, where a trusted authority performs integrity verification. Unlike traditional federated learning solutions, two kinds of aggregation are performed: the first aggregates all consumers' models to derive the global model, and the second aggregates the models within each cluster to derive a local model for similar consumers. Both models are sent back to the consumers, and each consumer decides which model to use for personal recommendation. Extensive experiments on MovieLens-1M and Amazon-book demonstrate the applicability of the method. The results reveal the superiority of the proposed method over the baselines, reaching an average accuracy of 0.27, whereas the other methods do not exceed 0.25.
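A minimal sketch of the two aggregation levels described above, using plain federated averaging over hypothetical flattened client weights; the cluster labels are assumed to come from the contrastive-embedding k-means step, and none of the names or shapes reflect the paper's actual implementation:

```python
# Two-level federated averaging: a global model over all consumers plus
# one aggregated model per k-means cluster (illustrative only).
import numpy as np

def fed_avg(weights: list) -> np.ndarray:
    """Plain FedAvg: element-wise mean of client weight vectors."""
    return np.mean(weights, axis=0)

# Hypothetical flattened model weights for 6 consumers, with cluster
# labels assumed to come from contrastive-embedding k-means (k = 2).
client_weights = [np.random.randn(100) for _ in range(6)]
cluster_of = [0, 0, 1, 1, 1, 0]

global_model = fed_avg(client_weights)                 # aggregation 1: all clients
cluster_models = {
    c: fed_avg([w for w, lbl in zip(client_weights, cluster_of) if lbl == c])
    for c in set(cluster_of)                           # aggregation 2: per cluster
}
# Each consumer then receives both models and picks one for recommendation.
print(global_model.shape, {c: m.shape for c, m in cluster_models.items()})
```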

Citations: 0