Gaining clinicians’ trust will unleash the full potential of artificial intelligence (AI) in medicine, and explaining AI decisions is seen as the way to build trustworthy systems. However, explainable artificial intelligence (XAI) methods in medicine often lack a proper evaluation. In this paper, we present our evaluation methodology for XAI methods based on forward simulatability. We define the Forward Simulatability Score (FSS) and analyze its limitations in the context of clinical predictors. We then apply the FSS to our XAI approach defined over ML-RO, a machine learning clinical predictor based on random optimization over a multiple-kernel support vector machine (SVM) algorithm. To compare FSS values before and after the explanation phase, we test our evaluation methodology on three clinical datasets: breast cancer, venous thromboembolism (VTE), and migraine. The ML-RO system is a good model on which to test our FSS-based XAI evaluation strategy: it outperforms two base models, a decision tree (DT) and a plain SVM, on all three datasets, and it allows different XAI models to be defined: TOPK, MIGF, and F4G. The FSS suggests that the F4G explanation method for ML-RO is the most effective on two of the three datasets tested, and it exposes the limits of the learned model on the third. Our study aims to introduce a standard practice for evaluating XAI methods in medicine. By establishing a rigorous evaluation framework, we seek to provide healthcare professionals with reliable tools for assessing the performance of XAI methods and to enhance the adoption of AI systems in clinical practice.
Title: "Evaluating Explainable Machine Learning Models for Clinicians" — Noemi Scarpato, Aria Nourbakhsh, Patrizia Ferroni, Silvia Riondino, Mario Roselli, Francesca Fallucchi, Piero Barbanti, Fiorella Guadagni, Fabio Massimo Zanzotto. Cognitive Computation. Pub Date: 2024-05-31 | DOI: 10.1007/s12559-024-10297-x
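The abstract above does not detail how the FSS is computed, but forward simulatability is commonly operationalized as the fraction of instances on which a user correctly predicts the model's output; comparing that fraction before and after seeing explanations indicates whether the explanations help. The sketch below is our illustrative stand-in — the threshold "model", the simulated users, and all names are assumptions, not the paper's implementation:

```python
# Hypothetical FSS sketch: agreement rate between a user's guesses and
# the model's predictions, measured before and after an explanation.

def forward_simulatability_score(instances, model_predict, user_guess):
    """Fraction of instances where the user's guess matches the model."""
    matches = sum(1 for x in instances if user_guess(x) == model_predict(x))
    return matches / len(instances)

# Toy setup: a "model" that flags patients by a single-marker threshold,
# and a simulated user whose guesses improve after seeing the explanation.
model = lambda x: x["marker"] > 0.5
user_before = lambda x: x["age"] > 60     # guesses from the wrong feature
user_after = lambda x: x["marker"] > 0.5  # explanation revealed the rule

patients = [{"marker": m, "age": a}
            for m, a in [(0.2, 70), (0.8, 40), (0.6, 65), (0.1, 30)]]
fss_before = forward_simulatability_score(patients, model, user_before)
fss_after = forward_simulatability_score(patients, model, user_after)
```

A rise from `fss_before` to `fss_after` is the signal the evaluation methodology looks for: the explanation made the predictor easier to simulate.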
Pub Date: 2024-05-30 | DOI: 10.1007/s12559-024-10294-0
Nijat Mehdiyev, Maxim Majlatow, Peter Fettke
In this study, we propose a pioneering framework for generating multi-objective counterfactual explanations in job-shop scheduling contexts, combining predictive process monitoring with advanced mathematical optimization techniques. Using the Non-dominated Sorting Genetic Algorithm II (NSGA-II) for multi-objective optimization, our approach enhances the generation of counterfactual explanations that illuminate potential enhancements at both the operational and systemic levels. Validated with real-world data, our methodology underscores the superiority of NSGA-II in crafting pertinent and actionable counterfactual explanations, surpassing traditional methods in both efficiency and practical relevance. This work advances the domains of explainable artificial intelligence (XAI), predictive process monitoring, and combinatorial optimization, providing an effective tool for improving the clarity and decision-making capabilities of automated scheduling systems.
Title: "Counterfactual Explanations in the Big Picture: An Approach for Process Prediction-Driven Job-Shop Scheduling Optimization" (Cognitive Computation)
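The selection step at the heart of NSGA-II can be sketched with plain non-dominated filtering over candidate counterfactuals. The two objectives, candidate names, and scores below are toy assumptions of ours, not the authors' setup:

```python
# Counterfactual candidates for a scheduling decision, scored on two
# objectives to minimize: (edit distance to current plan, predicted delay).
# We keep the Pareto front — the dominance test NSGA-II is built on.

def dominates(a, b):
    """a dominates b: no worse on every objective, strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates, objectives):
    scored = [(c, objectives(c)) for c in candidates]
    return [c for c, s in scored
            if not any(dominates(s2, s) for _, s2 in scored if s2 != s)]

candidates = ["swap_jobs", "add_shift", "reroute", "idle"]
scores = {"swap_jobs": (1, 4), "add_shift": (3, 1),
          "reroute": (2, 2), "idle": (2, 5)}
front = pareto_front(candidates, lambda c: scores[c])  # "idle" is dominated
```

A full NSGA-II run layers ranked non-dominated sorting, crowding-distance tie-breaking, and genetic operators on top of this dominance test.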
Cardiovascular diseases are the leading cause of mortality worldwide. Accurate cardiovascular disease prediction is crucial, and the application of machine learning and data mining techniques could facilitate decision-making and improve predictive capabilities. This study aimed to present a model for the accurate prediction of cardiovascular diseases and to identify the key contributing factors with the greatest impact. The Cleveland dataset, along with a locally collected dataset called the Noor dataset, was used in this study. Various data mining techniques, as well as four ensemble learning-based models, were applied to both datasets. Moreover, a novel model for combining individual classifiers in ensemble learning was developed, wherein a weight was assigned to each classifier using a genetic algorithm. The predictive strength of each feature was also investigated to ensure the generalizability of the outcomes. The ultimate ensemble-based model achieved precision rates of 88.05% and 90.12% on the Cleveland and Noor datasets, respectively, demonstrating its reliability and suitability for future research in predicting the likelihood of cardiovascular diseases. The proposed model not only introduces an innovative approach for identifying cardiovascular diseases by unraveling the intricate relationships between various biological variables but also facilitates their early detection.
Title: "Detection of Cardiovascular Diseases Using Data Mining Approaches: Application of an Ensemble-Based Model" — Mojdeh Nazari, Hassan Emami, Reza Rabiei, Azamossadat Hosseini, Shahabedin Rahmatizadeh. Cognitive Computation. Pub Date: 2024-05-30 | DOI: 10.1007/s12559-024-10306-z
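A minimal sketch of the combination idea described above: base classifiers vote with per-classifier weights, and a search procedure picks the weights that maximize accuracy. The genetic algorithm is stubbed here by random search, and the function names and toy classifiers are our assumptions, not the study's code:

```python
import random

def weighted_vote(classifiers, weights, x):
    """Binary weighted majority vote over 0/1 classifier outputs."""
    score = sum(w * clf(x) for clf, w in zip(classifiers, weights))
    return 1 if score >= sum(weights) / 2 else 0

def fitness(weights, classifiers, data):
    """Accuracy of the weighted ensemble on labeled (x, y) pairs."""
    return sum(weighted_vote(classifiers, weights, x) == y
               for x, y in data) / len(data)

def search_weights(classifiers, data, iters=200, seed=0):
    # Stand-in for the GA's selection/crossover/mutation loop.
    rng = random.Random(seed)
    best_w, best_f = None, -1.0
    for _ in range(iters):
        w = [rng.random() for _ in classifiers]
        f = fitness(w, classifiers, data)
        if f > best_f:
            best_w, best_f = w, f
    return best_w, best_f

# Toy base classifiers: one always right, one always wrong on this data.
always_right = lambda x: x
always_wrong = lambda x: 1 - x
data = [(0, 0), (1, 1), (0, 0), (1, 1)]
best_w, best_f = search_weights([always_right, always_wrong], data)
```

The search drives the weight of the unreliable classifier down, which is exactly the behavior the GA-weighted ensemble is meant to exploit.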
Pub Date: 2024-05-27 | DOI: 10.1007/s12559-024-10284-2
Noman Khan, Samee Ullah Khan, Ahmed Farouk, Sung Wook Baik
The rise in power consumption (PC) is driven by several factors, such as the growing global population, urbanization, technological advances, economic development, and the growth of business and commercial sectors. Nowadays, intermittent renewable energy sources (RESs) are widely utilized in electric grids to meet the need for power. Data-driven techniques are essential for ensuring the steady operation of the electric grid and for accurate power consumption and generation forecasting. However, the available datasets for time-series electric power forecasting in the energy industry are not as large as those in other domains, such as computer vision. Thus, a deep learning (DL) framework is introduced for predicting PC in residential and commercial buildings as well as power generation (PG) from RESs. The raw power data obtained from buildings and RES-based power plants first undergo a purging process in which missing values are filled in and noise and outliers are eliminated. Next, the proposed generative adversarial network (GAN) uses a portion of the cleaned data to generate synthetic parallel data, which is combined with the actual data to form a hybrid dataset. Subsequently, a stacked gated recurrent unit (GRU) model, optimized for power forecasting, is trained on the hybrid dataset. Six existing power datasets are used to train and test sixteen linear and nonlinear models for energy forecasting, and the best-performing network is selected as the proposed forecasting method. On the Korea Yeongam solar power (KYSP), individual household electric power consumption (IHEPC), and Advanced Institute of Convergence Technology (AICT) datasets, the proposed model obtains mean absolute error (MAE) values of 0.0716, 0.0819, and 0.0877, respectively. Similarly, its MAE values are 0.1215, 0.5093, and 0.5751 on the Australia Alice Springs solar power (AASSP), Korea south east wind power (KSEWP), and Korea south east solar power (KSESP) datasets, respectively.
Title: "Generative Adversarial Network-Assisted Framework for Power Management" (Cognitive Computation)
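The abstract does not specify the exact purging rules, so the sketch below shows one plausible version under our own assumptions: forward-fill missing readings, then clip outliers to the usual 1.5-IQR band:

```python
# Illustrative "purging" step for a raw power series: fill absent values
# and tame outliers. Rules and thresholds are our assumptions.

def purge(series):
    # Forward-fill missing values (None), seeding with the first valid reading.
    filled, last = [], next(v for v in series if v is not None)
    for v in series:
        last = v if v is not None else last
        filled.append(last)
    # Clip outliers to [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
    s = sorted(filled)
    q1, q3 = s[len(s) // 4], s[(3 * len(s)) // 4]
    lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
    return [min(max(v, lo), hi) for v in filled]

readings = [1.0, None, 1.2, 1.1, 50.0, 1.3, None, 1.2]
clean = purge(readings)  # gaps filled, the 50.0 spike clipped to the band
```

Only after a pass like this would the GAN augmentation and GRU training described above see the data.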
Pub Date: 2024-05-27 | DOI: 10.1007/s12559-024-10299-9
Xiaofang Meng, Yu Fei, Zhouhong Li
This paper deals with the quasi-projective synchronization problem of delayed stochastic quaternion-valued fuzzy cellular neural networks (FCNNs) with mismatched parameters. Although the parameter mismatch between the drive and response systems increases the computational complexity of the analysis, it is of practical significance to consider the existence of deviations between the two systems. Our method is to design an appropriate controller and to construct a Lyapunov functional, applying stochastic analysis theory based on the Itô formula in the quaternion domain. We adopt a non-decomposition method for quaternion FCNNs, which preserves the original data and reduces the computational effort. We obtain sufficient conditions for quasi-projective synchronization of the considered stochastic quaternion-valued FCNNs with mismatched parameters. Additionally, we estimate the error bound of quasi-projective synchronization and present a numerical example to verify the validity of the results. Our results are novel even when the considered neural networks degenerate into real-valued or complex-valued neural networks. This article thus provides a useful research approach, with good results, for studying the quasi-projective synchronization problem of stochastic quaternion-valued FCNNs with time delay.
The method in this article can also be used to study the quasi-projective synchronization of a Clifford-valued neural network.
Title: "Quasi-projective Synchronization Control of Delayed Stochastic Quaternion-Valued Fuzzy Cellular Neural Networks with Mismatched Parameters" (Cognitive Computation)
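For readers new to the notion, quasi-projective synchronization can be written out under one common formalization (our notation, not necessarily the paper's exact symbols): with drive state x(t), response state y(t), and projective coefficient λ, the error system and the quasi-projective property are

```latex
e(t) = y(t) - \lambda x(t), \qquad
\limsup_{t \to \infty} \mathbb{E}\,\lVert e(t) \rVert \le \varepsilon ,
```

where ε > 0 is the estimated error bound: the response tracks a scaled copy of the drive up to a residual of size at most ε, rather than converging exactly, which is why estimating the bound ε is part of the contribution above.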
Pub Date: 2024-05-27 | DOI: 10.1007/s12559-024-10281-5
Mohammad Nadeem, Shahab Saquib Sohail, Laeeba Javed, Faisal Anwer, Abdul Khader Jilani Saudagar, Khan Muhammad
The significant advancements in the capabilities, reasoning, and efficiency of artificial intelligence (AI)-based tools and systems are evident. Noteworthy examples include generative AI-based large language models (LLMs) such as generative pretrained transformer 3.5 (GPT-3.5), generative pretrained transformer 4 (GPT-4), and Bard. LLMs are versatile and effective for various tasks such as composing poetry, writing code, generating essays, and solving puzzles. Until recently, LLMs could only process text-based input effectively, but recent advancements have enabled them to handle multimodal inputs, such as text, images, and audio, making them highly general-purpose tools. Because LLMs have achieved decent performance in pattern recognition tasks (such as classification), it is natural to ask whether general-purpose LLMs can perform comparably to, or even better than, specialized deep learning models (DLMs) trained specifically for a given task. In this study, we compared the performance of fine-tuned DLMs with that of general-purpose LLMs for image-based emotion recognition. We trained DLMs, namely two convolutional neural networks (CNN_1 and CNN_2), ResNet50, and VGG-16, on an image dataset for emotion recognition and then tested their performance on another dataset. Subsequently, we subjected the same testing dataset to two vision-enabled LLMs (LLaVA and GPT-4). CNN_2 was found to be the superior model, with an accuracy of 62%, while VGG-16 produced the lowest accuracy, at 31%. Among the LLMs, GPT-4 performed best, with an accuracy of 55.81%, and LLaVA achieved higher accuracy than the CNN_1 and VGG-16 models. The other performance metrics, such as precision, recall, and F1-score, followed similar trends. Notably, GPT-4 performed best on small datasets. The weaker results observed for the LLMs can be attributed to their general-purpose nature, which, despite extensive pretraining, may not capture the features required for specific tasks, such as emotion recognition in images, as effectively as models fine-tuned for those tasks. The LLMs did not surpass the specialized models but achieved comparable performance, making them a viable option for specific tasks without additional training. In addition, LLMs can be considered a good alternative when the available dataset is small.
Title: "Vision-Enabled Large Language and Deep Learning Models for Image-Based Emotion Recognition" (Cognitive Computation)
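The comparison protocol described above reduces to scoring every model's predictions on a shared held-out set and ranking by the metric. The labels and predictions below are fabricated stand-ins, not the study's data:

```python
# Rank two emotion-recognition models by accuracy on the same test labels.

def accuracy(y_true, y_pred):
    """Fraction of positions where prediction equals the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = ["happy", "sad", "happy", "angry", "sad"]
preds = {
    "fine_tuned_cnn": ["happy", "sad", "happy", "sad", "sad"],     # 4/5 correct
    "general_llm":    ["happy", "sad", "angry", "angry", "happy"], # 3/5 correct
}
ranking = sorted(preds, key=lambda m: accuracy(y_true, preds[m]), reverse=True)
```

Precision, recall, and F1 follow the same pattern: compute each metric per model on identical test data, then compare.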
Pub Date: 2024-05-25 | DOI: 10.1007/s12559-024-10303-2
Liping Xie, Zhien Liu, Yi Sun, Yawei Zhu
The evaluation of automobile sound quality is an important research topic in the interior sound design of passenger cars, and accurate and effective evaluation methods are required for determining acoustic targets in automobile development. However, existing evaluation studies of automobile sound quality have several deficiencies. (1) Most subjective evaluations consider only auditory perception, which is easy to implement but does not fully reflect the impact of sound on participants. (2) Similarly, most existing subjective evaluations consider only the inherent properties of sounds, such as physical and psychoacoustic parameters, which makes it difficult to reflect the complex relationship between a sound and the subjective perception of the evaluators. (3) Evaluation models constructed only from physical and psychoacoustic perspectives do not provide a comprehensive analysis of participants' real subjective emotions. Therefore, to alleviate these flaws, auditory and visual perceptions are combined to explore the influence of scene video on sound quality evaluation, and the EEG signal is introduced as a physiological acoustic index for evaluating sound quality; simultaneously, an Elman neural network model is constructed to predict the "powerful" sound quality by combining the proposed physical, psychoacoustic, and physiological acoustic indexes. The results show that sound quality evaluations combined with scene videos better reflect the subjective perceptions of participants. The proposed objective indexes of physical, psychoacoustic, and physiological acoustics help map the subjective results of the "powerful" sound quality, and the constructed Elman model outperforms the traditional back-propagation (BP) and support vector machine (SVM) models. The analysis method proposed in this paper can be readily applied in the field of automotive sound design, providing a clear guideline for the evaluation and optimization of automotive sound quality in the future.
Title: "Investigating the Influence of Scene Video on EEG-Based Evaluation of Interior Sound in Passenger Cars" (Cognitive Computation)
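For readers unfamiliar with the model class used above, an Elman network is a simple recurrent network whose hidden state is fed back through a context layer at each step. A one-unit forward pass, with toy weights of our choosing rather than the paper's trained parameters, looks like:

```python
import math

# Minimal Elman (simple recurrent) network: one input, one hidden unit,
# one output. h_t = tanh(w_in * x_t + w_rec * h_{t-1}); y_t = w_out * h_t.

def elman_forward(xs, w_in, w_rec, w_out):
    h, outputs = 0.0, []
    for x in xs:
        h = math.tanh(w_in * x + w_rec * h)  # context unit feeds h back in
        outputs.append(w_out * h)
    return outputs

# Toy acoustic-index sequence in, predicted sound-quality scores out.
ys = elman_forward([1.0, 0.5, -0.5], w_in=0.8, w_rec=0.3, w_out=1.2)
```

The recurrent term `w_rec * h` is what lets the model carry context across a sound sequence, the property that motivates choosing it over a feed-forward BP network.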
Pub Date: 2024-05-09 | DOI: 10.1007/s12559-024-10275-3
Bo Sun, Jinyu Tian, Yong Wu, Lunjun Yu, Yuanyan Tang
Video captioning, which aims to automatically generate captions for videos, has gained significant attention due to its wide range of applications in video surveillance and retrieval. However, most existing methods rely on frame-level convolution to extract features, which ignores the semantic relationships between objects and therefore fails to encode video details. To address this problem, inspired by human cognitive processes, we propose a video captioning method based on semantic disambiguation through structured encoding. First, the conceptual semantic graph of a video is constructed by introducing a knowledge graph. Then, graph convolutional networks perform relational learning over the conceptual semantic graph to mine the semantic relationships between objects and form a detailed encoding of the video. To resolve the semantic ambiguity of multiple candidate relationships between objects, we dynamically learn the most relevant relationships from video scene semantics and construct semantic graphs based on this disambiguation. Finally, we propose a cross-domain guided relationship-learning strategy to avoid the negative impact of relying solely on captions for the cross-entropy loss. Experiments on three datasets—MSR-VTT, ActivityNet Captions, and Student Classroom Behavior—showed that our method outperforms other methods. The results show that introducing a knowledge graph for common-sense reasoning about objects in videos can deeply encode the semantic relationships between objects, capturing video details and improving captioning performance.
{"title":"Structured Encoding Based on Semantic Disambiguation for Video Captioning","authors":"Bo Sun, Jinyu Tian, Yong Wu, Lunjun Yu, Yuanyan Tang","doi":"10.1007/s12559-024-10275-3","DOIUrl":"https://doi.org/10.1007/s12559-024-10275-3","url":null,"abstract":"<p>Video captioning, which aims to automatically generate video captions, has gained significant attention due to its wide range of applications in video surveillance and retrieval. However, most existing methods focus on frame-level convolution to extract features, which ignores the semantic relationships between objects, resulting in the inability to encode video details. To address this problem, inspired by human cognitive processes towards the world, we propose a video captioning method based on semantic disambiguation through structured encoding. First, the conceptual semantic graph of a video is constructed by introducing a knowledge graph. Then, the graph convolution networks are used for relational learning of the conceptual semantic graph to mine the semantic relationships of objects and form the detail encoding of video. Aiming to address the semantic ambiguity of multiple relationships between objects, we propose a method to dynamically learn the most relevant relationships using video scene semantics to construct semantic graphs based on semantic disambiguation. Finally, we propose a cross-domain guided relationship learning strategy to avoid the negative impact caused by using only captions as cross-entropy loss. Experiments based on three datasets—MSR-VTT, ActivityNet Captions, and Student Classroom Behavior—showed that our method outperforms other methods. 
The results show that introducing a knowledge graph for common sense reasoning of objects in videos can deeply encode the semantic relationships between objects to capture video details and improve captioning performance.</p>","PeriodicalId":51243,"journal":{"name":"Cognitive Computation","volume":"1 1","pages":""},"PeriodicalIF":5.4,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140935623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-08 DOI: 10.1007/s12559-024-10273-5
Hui Sun, Ziyan Zhang, Lili Huang, Bo Jiang, Bin Luo
Few-shot segmentation (FS), which aims to segment an unseen query image based on a few annotated support samples, is an active problem in the computer vision and multimedia fields. The core issue of FS is how to leverage the annotated information from the support images to guide query image segmentation. Existing methods mainly adopt a Siamese Convolutional Neural Network (SCNN), which first encodes both support and query images and then uses masked Global Average Pooling (GAP) to facilitate pixel-level representation and segmentation of the query image. However, this pipeline generally fails to fully exploit the category/class-coherent information between support and query images. For the FS task, one can observe that support and query images share the same category information; this inherent property provides an important cue that previous methods generally overlook. To overcome this limitation, in this paper, we propose a novel Category-aware Siamese Learning Network (CaSLNet) to encode both support and query images. The proposed CaSLNet conducts Category Consistent Learning (CCL) for both support and query images and thus achieves richer information exchange between them. Comprehensive experimental results on several public datasets demonstrate the advantage of our proposed CaSLNet. Our code is publicly available at https://github.com/HuiSun123/CaSLN.
{"title":"Category-Aware Siamese Learning Network for Few-Shot Segmentation","authors":"Hui Sun, Ziyan Zhang, Lili Huang, Bo Jiang, Bin Luo","doi":"10.1007/s12559-024-10273-5","DOIUrl":"https://doi.org/10.1007/s12559-024-10273-5","url":null,"abstract":"<p>Few-shot segmentation (FS) which aims to segment unseen query image based on a few annotated support samples is an active problem in computer vision and multimedia field. It is known that the core issue of FS is how to leverage the annotated information from the support images to guide query image segmentation. Existing methods mainly adopt Siamese Convolutional Neural Network (SCNN) which first encodes both support and query images and then utilizes the masked Global Average Pooling (GAP) to facilitate query image pixel-level representation and segmentation. However, this pipeline generally fails to fully exploit the category/class coherent information between support and query images. <i>For FS task, one can observe that both support and query images share the same category information</i>. This inherent property provides an important cue for FS task. However, previous methods generally fail to fully exploit it for FS task. To overcome this limitation, in this paper, we propose a novel Category-aware Siamese Learning Network (CaSLNet) to encode both support and query images. The proposed CaSLNet conducts <i>Category Consistent Learning (CCL)</i> for both support images and query images and thus can achieve the information communication between support and query images more sufficiently. Comprehensive experimental results on several public datasets demonstrate the advantage of our proposed CaSLNet. 
Our code is publicly available at https://github.com/HuiSun123/CaSLN.</p>","PeriodicalId":51243,"journal":{"name":"Cognitive Computation","volume":"35 1","pages":""},"PeriodicalIF":5.4,"publicationDate":"2024-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140935614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
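The masked GAP baseline that this abstract builds on has a compact form: average the support feature map over the annotated foreground to get a class prototype, then score query pixels by cosine similarity to that prototype. The feature-map sizes below are illustrative assumptions; CaSLNet's category-consistent learning itself is a training objective and is not sketched here.

```python
import numpy as np

def masked_gap(feat, mask):
    """Masked Global Average Pooling: average a (h, w, c) support
    feature map over the foreground pixels of a binary (h, w) mask,
    yielding a c-dimensional class prototype."""
    m = mask[..., None]
    return (feat * m).sum(axis=(0, 1)) / (m.sum() + 1e-8)

def cosine_map(query_feat, proto):
    """Pixel-wise cosine similarity between (h, w, c) query features
    and the prototype; high values indicate the support class."""
    q = query_feat / (np.linalg.norm(query_feat, axis=-1, keepdims=True) + 1e-8)
    p = proto / (np.linalg.norm(proto) + 1e-8)
    return q @ p

rng = np.random.default_rng(2)
support = rng.normal(size=(8, 8, 16))   # toy 8x8 map with 16 channels
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0                    # annotated foreground region
proto = masked_gap(support, mask)
sim = cosine_map(rng.normal(size=(8, 8, 16)), proto)
```

Thresholding `sim` gives a crude query mask; the paper's contribution is to make the support and query encoders agree on category information before this comparison happens.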
Pub Date: 2024-05-08 DOI: 10.1007/s12559-024-10286-0
Asma Belhadi, Youcef Djenouri, Fabio Augusto de Alcantara Andrade, Gautam Srivastava
This paper introduces a novel solution for personal recommendation in consumer electronics applications. On the one hand, it addresses data confidentiality during training by exploring federated learning and trusted-authority mechanisms; on the other hand, it deals with data quantity and quality by exploring both transformers and consumer clustering. The process starts by grouping consumers into similar clusters using contrastive learning and the k-means algorithm. Each consumer's local model is trained on its local data. The local models, together with the clustering information, are then sent to the server, where integrity verification is performed by a trusted authority. In contrast to traditional federated learning solutions, two kinds of aggregation are performed. The first aggregates the models of all consumers to derive the global model; the second aggregates the models within each cluster to derive a local model for similar consumers. Both models are sent back to the consumers, and each consumer decides which model is appropriate for personal recommendation. Extensive experiments have been carried out to demonstrate the applicability of the method on MovieLens-1M and Amazon-book. The results reveal the superiority of the proposed method over the baseline methods: it reaches an average accuracy of 0.27, whereas the other methods do not exceed 0.25.
{"title":"Federated Constrastive Learning and Visual Transformers for Personal Recommendation","authors":"Asma Belhadi, Youcef Djenouri, Fabio Augusto de Alcantara Andrade, Gautam Srivastava","doi":"10.1007/s12559-024-10286-0","DOIUrl":"https://doi.org/10.1007/s12559-024-10286-0","url":null,"abstract":"<p>This paper introduces a novel solution for personal recommendation in consumer electronic applications. It addresses, on the one hand, the data confidentiality during the training, by exploring federated learning and trusted authority mechanisms. On the other hand, it deals with data quantity, and quality by exploring both transformers and consumer clustering. The process starts by clustering the consumers into similar clusters using contrastive learning and k-means algorithm. The local model of each consumer is trained on the local data. The local models of the consumers with the clustering information are then sent to the server, where integrity verification is performed by a trusted authority. Instead of traditional federated learning solutions, two kinds of aggregation are performed. The first one is the aggregation of all models of the consumers to derive the global model. The second one is the aggregation of the models of each cluster to derive a local model of similar consumers. Both models are sent to the consumers, where each consumer decides which appropriate model might be used for personal recommendation. Robust experiments have been carried out to demonstrate the applicability of the method using MovieLens-1M, and Amazon-book. 
The results reveal the superiority of the proposed method compared to the baseline methods, where it reaches an average accuracy of 0.27, against the other methods that do not exceed 0.25.</p>","PeriodicalId":51243,"journal":{"name":"Cognitive Computation","volume":"36 1","pages":""},"PeriodicalIF":5.4,"publicationDate":"2024-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140935658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
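The two-level aggregation this abstract describes — one FedAvg-style average over all clients plus one per consumer cluster — can be sketched server-side in a few lines. Treating each client model as a flat list of parameters, and the client/cluster names used below, are illustrative assumptions; the paper's transformer models, contrastive clustering, and integrity checks are out of scope here.

```python
def fedavg(models):
    """Coordinate-wise average of client parameter vectors (FedAvg)."""
    n = len(models)
    return [sum(ws) / n for ws in zip(*models)]

def two_level_aggregate(client_models, clusters):
    """Server step: build the global model over all clients, plus one
    aggregated model per cluster of similar consumers. Clients can then
    pick whichever model serves their recommendations better."""
    global_model = fedavg(list(client_models.values()))
    cluster_models = {
        cid: fedavg([client_models[c] for c in members])
        for cid, members in clusters.items()
    }
    return global_model, cluster_models

# toy round: three clients, two clusters found by the clustering step
clients = {"a": [1.0, 2.0], "b": [3.0, 4.0], "c": [5.0, 6.0]}
clusters = {0: ["a", "b"], 1: ["c"]}
g, per_cluster = two_level_aggregate(clients, clusters)
```

The design point is that the per-cluster average stays closer to each group's data distribution than the global average, which is what lets similar consumers share a more specialized model without sharing raw data.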