Pub Date: 2026-01-14 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1732820
Jiangxiao Zhang, Feng Gao, Shengmei He, Bin Zhang
Camouflaged object detection (COD) aims to identify objects that are visually indistinguishable from their surrounding background, which makes it challenging to precisely delineate the boundaries between objects and background in camouflaged scenes. In recent years, numerous studies have leveraged frequency-domain information to aid camouflaged object detection. However, current frequency-domain methods cannot effectively capture the boundary information between camouflaged objects and the background. To address this limitation, we propose a Laplace transform-guided camouflaged object detection network called the Self-Correlation Cross Relation Network (SeCoCR). In this framework, the Laplace-transformed camouflaged target is treated as high-frequency information, while the original image serves as low-frequency information. These are then fed separately into our proposed Self-Relation Attention module to extract both local and global features; within this module, key semantic information is retained in the low-frequency data and crucial boundary information is preserved in the high-frequency data. Furthermore, we design Low-High Mix Fusion, a multi-scale attention mechanism that effectively integrates essential information from both frequency components for camouflaged object detection. Comprehensive experiments on three COD benchmark datasets demonstrate that our approach significantly surpasses existing state-of-the-art frequency-domain-assisted methods.
{"title":"Laplace-guided fusion network for camouflage object detection.","authors":"Jiangxiao Zhang, Feng Gao, Shengmei He, Bin Zhang","doi":"10.3389/frai.2025.1732820","DOIUrl":"10.3389/frai.2025.1732820","url":null,"abstract":"<p><p>Camouflaged object detection (COD) aims to identify objects that are visually indistinguishable from their surrounding background, making it challenging to precisely distinguish the boundaries between objects and backgrounds in camouflaged environments. In recent years, numerous studies have leveraged frequency-domain methods to aid in camouflage target detection by utilizing frequency-domain information. However, current methods based on the frequency domain cannot effectively capture the boundary information between disguised objects and the background. To address this limitation, we propose a Laplace transform-guided camouflage object detection network called the Self-Correlation Cross Relation Network (SeCoCR). In this framework, the Laplace-transformed camouflage target is treated as high-frequency information, while the original image serves as low-frequency information. These are then separately input into our proposed Self-Relation Attention module to extract both local and global features. Within the Self-Relation Attention module, key semantic information is retained in the low-frequency data, and crucial boundary information is preserved in the high-frequency data. Furthermore, we design a multi-scale attention mechanism for low- and high-frequency information, Low-High Mix Fusion, to effectively integrate essential information from both frequencies for camouflage object detection. Comprehensive experiments on three COD benchmark datasets demonstrate that our approach significantly surpasses existing state-of-the-art frequency-domain-assisted methods.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1732820"},"PeriodicalIF":4.7,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12847256/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146087271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-14 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1720547
Shani Alkoby, Ron S Hirschprung
Introduction: Privacy has become a significant concern in the digital world, especially regarding the personal data collected by websites and other online service providers. One of the main mechanisms that lets individuals control their privacy is the privacy policy document, which contains vital information on this matter, and publishing one is required by regulation in most Western countries. However, a privacy policy is free-form natural-language text, usually phrased in legal language and subject to frequent change, which makes it hard to understand and means it is almost always ignored by users.
Methods: This research proposes a novel methodology to receive an unstructured privacy policy text and automatically structure it into predefined parameters. The methodology is based on a two-layer artificial intelligence (AI) process.
Results: In an empirical study of 49 actual privacy policies from different websites, we demonstrated an average F1-score above 0.8, with five of the six parameters achieving very high classification accuracy.
Discussion: This methodology can serve both humans and AI agents by addressing issues such as cognitive burden, non-standard formalization, cognitive laziness, and the document's evolution over time, all of which deter the use of the privacy policy as a resource. The study addresses a critical gap between current regulations, which aim to enhance privacy, and the ability of humans to benefit from the privacy policies that regulation requires to be published.
{"title":"Structuring privacy policy: an AI approach.","authors":"Shani Alkoby, Ron S Hirschprung","doi":"10.3389/frai.2025.1720547","DOIUrl":"10.3389/frai.2025.1720547","url":null,"abstract":"<p><strong>Introduction: </strong>Privacy has become a significant concern in the digital world, especially concerning the personal data collected by websites and other service providers on the World Wide Web network. One of the significant approaches to enable the individual to control privacy is the privacy policy document, which contains vital information on this matter. Publishing a privacy policy is required by regulation in most Western countries. However, the privacy policy document is a natural free text-based object, usually phrased in a legal language, and rapidly changes, making it consequently relatively hard to understand and almost always neglected by humans.</p><p><strong>Methods: </strong>This research proposes a novel methodology to receive an unstructured privacy policy text and automatically structure it into predefined parameters. The methodology is based on a two-layer artificial intelligence (AI) process.</p><p><strong>Results: </strong>In an empirical study that included 49 actual privacy policies from different websites, we demonstrated an average F1-score > 0.8 where five of six parameters achieved a very high classification accuracy.</p><p><strong>Discussion: </strong>This methodology can serve both humans and AI agents by addressing issues such as cognitive burden, non-standard formalizations, cognitive laziness, and the dynamics of the document across a timeline, which deters the use of the privacy policy as a resource. The study addresses a critical gap between the present regulations, aiming at enhancing privacy, and the abilities of humans to benefit from the mandatory published privacy policy.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1720547"},"PeriodicalIF":4.7,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12847394/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146087321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editorial: Advancing knowledge-based economies and societies through AI and optimization: innovations, challenges, and implications.","authors":"Erfan Babaee Tirkolaee, Ramin Ranjbarzadeh, Gerhard-Wilhelm Weber","doi":"10.3389/frai.2025.1757072","DOIUrl":"https://doi.org/10.3389/frai.2025.1757072","url":null,"abstract":"","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1757072"},"PeriodicalIF":4.7,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12847418/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146087239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-14 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1610856
Osvaldo Velazquez-Gonzalez, Antonio Alarcón-Paredes, Cornelio Yañez-Marquez
Classification is a central task in machine learning, underpinning applications in domains such as finance, medicine, engineering, information technology, and biology. However, pattern classification can become a complex, even opaque, task for current high-performing models because of the complexity of real-world datasets, which is why achieving high classification performance remains a strong research interest. In some settings there is also a need to reach such performance while keeping the operation and decisions of the classifier explainable. For this reason, an algorithm is proposed that is robust, simple, and highly explainable, and that is applicable primarily to medical datasets with pronounced class imbalance. The main contribution of this research is a novel classification algorithm based on binary string similarity that is competitive, simple, interpretable, and transparent, since it is clear why a pattern is assigned to a given class. A comparative study of the proposed model against the best-known state-of-the-art classification algorithms is presented, and the experimental results, validated through statistical hypothesis tests for significant performance differences, demonstrate the benefits of the proposal.
{"title":"Medical pattern classification using a novel binary similarity approach based on an associative classifier.","authors":"Osvaldo Velazquez-Gonzalez, Antonio Alarcón-Paredes, Cornelio Yañez-Marquez","doi":"10.3389/frai.2025.1610856","DOIUrl":"10.3389/frai.2025.1610856","url":null,"abstract":"<p><p>Classification is a central task in machine learning, underpinning applications in domains such as finance, medicine, engineering, information technology, and biology. However, machine learning pattern classification can become a complex or even inexplicable task for current robust models due to the complexity of objective datasets, which is why there is a strong interest in achieving high classification performance. On the other hand, in particular cases, there is a need to achieve such performance while maintaining a certain level of explainability in the operation and decisions of classification algorithms, which can become complex. For this reason, an algorithm is proposed that is robust, simple, highly explainable, and applicable to datasets primarily in medicine with complex class imbalance. The main contribution of this research is a novel machine learning classification algorithm based on binary string similarity that is competitive, simple, interpretable, and transparent, as it is clear why a pattern is classified into a given class. Therefore, a comparative study of the performance of the best-known state-of-the-art classification algorithms and the proposed model is presented. The experimental results demonstrate the benefits of the proposal in this research work, which were validated through statistical hypothesis tests to assess significant performance differences.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1610856"},"PeriodicalIF":4.7,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12847284/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146087329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-13 | DOI: 10.3389/frai.2025.1693446
Zhao Wang, Lin Liang, Hao Xu, Yuhui Huang, Chen He, Weiran Xu, Haojie Zhu
Background: The integration of large language models (LLMs) into cardio-oncology patient education holds promise for addressing the critical gap in accessible, accurate, and patient-friendly information. However, the performance of publicly available LLMs in this specialized domain remains underexplored.
Objectives: This study evaluates the performance of three LLMs (ChatGPT-4, Kimi, DouBao) acting as assistants to physicians in cardio-oncology patient education and examines the impact of prompt engineering on response quality.
Methods: Twenty standardized questions spanning cardio-oncology topics were posed twice to three LLMs (ChatGPT-4, Kimi, DouBao): once without prompts and once with a directive to simplify language, generating 240 responses. These responses were evaluated by four cardio-oncology specialists for accuracy, comprehensiveness, helpfulness, and practicality. Readability and complexity were assessed using a Chinese text analysis framework.
Results: Among 240 responses, 63.3% were rated "correct," 35.0% "partially correct," and 1.7% "incorrect." No significant differences in accuracy were observed between models (p = 0.26). Kimi demonstrated no incorrect responses. Significant declines in comprehensiveness (p = 0.03) and helpfulness (p < 0.01) occurred post-prompt, particularly for DouBao (accuracy: 57.5% vs. 7.5%, p < 0.01). Readability metrics (readability age, difficulty score, total word count, sentence length) showed no inter-model differences, but prompts reduced complexity (e.g., DouBao's readability age decreased from 12.9 ± 0.8 to 10.1 ± 1.2 years, p < 0.01).
Conclusion: Publicly available LLMs provide largely accurate responses to cardio-oncology questions, yet their utility is constrained by inconsistent comprehensiveness and sensitivity to prompt design. While simplifying language improves readability, it risks compromising clinical relevance. Tailored fine-tuning and specialized evaluation frameworks are essential to optimize LLMs for patient education in cardio-oncology.
{"title":"Evaluating the efficacy of large language models in cardio-oncology patient education: a comparative analysis of accuracy, readability, and prompt engineering strategies.","authors":"Zhao Wang, Lin Liang, Hao Xu, Yuhui Huang, Chen He, Weiran Xu, Haojie Zhu","doi":"10.3389/frai.2025.1693446","DOIUrl":"https://doi.org/10.3389/frai.2025.1693446","url":null,"abstract":"<p><strong>Background: </strong>The integration of large language models (LLMs) into cardio-oncology patient education holds promise for addressing the critical gap in accessible, accurate, and patient-friendly information. However, the performance of publicly available LLMs in this specialized domain remains underexplored.</p><p><strong>Objectives: </strong>This study evaluates the performance of three LLMs (ChatGPT-4, Kimi, DouBao) act as assistants for physicians in cardio-oncology patient education and examines the impact of prompt engineering on response quality.</p><p><strong>Methods: </strong>Twenty standardized questions spanning cardio-oncology topics were posed twice to three LLMs (ChatGPT-4, Kimi, DouBao): once without prompts and once with a directive to simplify language, generating 240 responses. These responses were evaluated by four cardio-oncology specialists for accuracy, comprehensiveness, helpfulness, and practicality. Readability and complexity were assessed using a Chinese text analysis framework.</p><p><strong>Results: </strong>Among 240 responses, 63.3% were rated \"correct,\" 35.0% \"partially correct,\" and 1.7% \"incorrect.\" No significant differences in accuracy were observed between models (<i>p</i> = 0.26). Kimi demonstrated no incorrect responses. Significant declines in comprehensiveness (<i>p</i> = 0.03) and helpfulness (<i>p</i> < 0.01) occurred post-prompt, particularly for DouBao (accuracy: 57.5% vs. 7.5%, <i>p</i> < 0.01). Readability metrics (readability age, difficulty score, total word count, sentence length) showed no inter-model differences, but prompts reduced complexity (e.g., DouBao's readability age decreased from 12.9 ± 0.8 to 10.1 ± 1.2 years, <i>p</i> < 0.01).</p><p><strong>Conclusion: </strong>Publicly available LLMs provide largely accurate responses to cardio-oncology questions, yet their utility is constrained by inconsistent comprehensiveness and sensitivity to prompt design. While simplifying language improves readability, it risks compromising clinical relevance. Tailored fine-tuning and specialized evaluation frameworks are essential to optimize LLMs for patient education in cardio-oncology.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1693446"},"PeriodicalIF":4.7,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12835249/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146094351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-13 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1709489
Faizan Ahmed, Faseeh Haider, Ramsha Ali, Muhammad Arham, Yusra Junaid, Allah Dad, Kinza Bakht, Maryam Abbasi, Bareera Tanveer Malik, Abdul Mateen, Najam Gohar, Rubiya Ali, Yasar Sattar, Mushood Ahmed, Mohamed Bakr, Swapnil Patel, Jesus Almendral, Fawaz Alenezi
Introduction: Pulmonary hypertension (PH) has an incidence of approximately 6 cases per million adults, with a global prevalence ranging from 49 to 55 cases per million adults. Recent advancements in artificial intelligence (AI) have demonstrated promising improvements in the diagnostic accuracy of imaging for PH, achieving an area under the curve (AUC) of 0.94, compared to seasoned professionals.
Research objective: To systematically synthesize available evidence on the comparative accuracy of AI versus manual interpretation in detecting PH across various chest imaging modalities, i.e., chest X-ray, echocardiography, CT scan and cardiac MRI.
Methods: Following PRISMA guidelines, a comprehensive search was conducted across five databases (PubMed, Embase, ScienceDirect, Scopus, and the Cochrane Library) from inception through March 2025. Statistical analysis was performed using R (version 2024.12.1 + 563) with 2 × 2 contingency data. Sensitivity, specificity, and the diagnostic odds ratio (DOR) were pooled using a bivariate random-effects model (reitsma() from the mada package), while AUC values were meta-analyzed on the logit scale via the metagen() function from the meta package.
Results: This meta-analysis of 12 studies, encompassing 7,459 patients, demonstrated a statistically significant improvement in the diagnostic accuracy of PH detection with AI integration, evidenced by a logit mean difference in AUC of 0.43 (95% CI: 0.23-0.64; p < 0.0001) with low heterogeneity (I² = 21.0%, τ² < 0.0001, p = 0.2090), consistent with a pooled AUC of 0.934 from the bivariate model. Pooled sensitivity and specificity for the AI models were 0.83 (95% CI: 0.73-0.90) and 0.91 (95% CI: 0.86-0.95), respectively, with substantial heterogeneity for sensitivity (I² = 83.8%, τ² = 0.4934, p < 0.0001) and moderate heterogeneity for specificity (I² = 41.5%, τ² = 0.1015, p = 0.1146); the diagnostic odds ratio was 54.26 (95% CI: 22.50-130.87) with substantial heterogeneity (I² = 70.7%, τ² = 0.8451, p = 0.0023). Sensitivity analysis showed stable estimates and did not reduce heterogeneity across outcomes.
Conclusion: AI-integrated imaging significantly enhances diagnostic accuracy for pulmonary hypertension, with higher sensitivity (0.83) and specificity (0.91) compared to manual interpretation across chest imaging modalities. However, further high-quality trials with externally validated cohorts may be needed to confirm these findings and reduce variability among AI models across diverse clinical settings.
{"title":"Comparative accuracy of artificial intelligence versus manual interpretation in detecting pulmonary hypertension across chest imaging modalities: a diagnostic test accuracy meta-analysis.","authors":"Faizan Ahmed, Faseeh Haider, Ramsha Ali, Muhammad Arham, Yusra Junaid, Allah Dad, Kinza Bakht, Maryam Abbasi, Bareera Tanveer Malik, Abdul Mateen, Najam Gohar, Rubiya Ali, Yasar Sattar, Mushood Ahmed, Mohamed Bakr, Swapnil Patel, Jesus Almendral, Fawaz Alenezi","doi":"10.3389/frai.2025.1709489","DOIUrl":"https://doi.org/10.3389/frai.2025.1709489","url":null,"abstract":"<p><strong>Introduction: </strong>Pulmonary hypertension (PH) has an incidence of approximately 6 cases per million adults, with a global prevalence ranging from 49 to 55 cases per million adults. Recent advancements in artificial intelligence (AI) have demonstrated promising improvements in the diagnostic accuracy of imaging for PH, achieving an area under the curve (AUC) of 0.94, compared to seasoned professionals.</p><p><strong>Research objective: </strong>To systematically synthesize available evidence on the comparative accuracy of AI versus manual interpretation in detecting PH across various chest imaging modalities, i.e., chest X-ray, echocardiography, CT scan and cardiac MRI.</p><p><strong>Methods: </strong>Following PRISMA guidelines, a comprehensive search was conducted across five databases-PubMed, Embase, ScienceDirect, Scopus, and the Cochrane Library-from inception through March 2025. Statistical analysis was performed using R (version 2024.12.1 + 563) with 2 × 2 contingency data. Sensitivity, specificity, and diagnostic odds ratio (DOR) were pooled using a bivariate random-effects model (reitsma() from the mada package), while the AUC were meta-analyzed using logit-transformed values via the metagen() function from the meta package.</p><p><strong>Results: </strong>This meta-analysis of 12 studies, encompassing 7,459 patients, demonstrated a statistically significant improvement in diagnostic accuracy of PH with AI integration, evidenced by a logit mean difference in AUC of 0.43 (95% CI: 0.23-0.64; <i>p</i> < 0.0001) and low heterogeneity (<i>I</i> <sup>2</sup> = 21.0%, <i>τ</i> <sup>2</sup> < 0.0001, <i>p</i> = 0.2090), which was consolidated by pooled AUC of 0.934 on bivariate model. Pooled sensitivity and specificity for AI models were 0.83 (95% CI: 0.73-0.90) and 0.91 (95% CI: 0.86-0.95), respectively, with substantial heterogeneity for sensitivity (<i>I</i> <sup>2</sup> = 83.8%, <i>τ</i> <sup>2</sup> = 0.4934, <i>p</i> < 0.0001) and moderate for specificity (<i>I</i> <sup>2</sup> = 41.5%, <i>τ</i> <sup>2</sup> = 0.1015, <i>p</i> = 0.1146); the diagnostic odds ratio was 54.26 (95% CI: 22.50-130.87) with substantial heterogeneity (<i>I</i> <sup>2</sup> = 70.7%, <i>τ</i> <sup>2</sup> = 0.8451, <i>p</i> = 0.0023). Sensitivity analysis showed stable estimates and did not reduce heterogeneity across outcomes.</p><p><strong>Conclusion: </strong>AI-integrated imaging significantly enhances diagnostic accuracy for pulmonary hypertension, with higher sensitivity (0.83) and specificity (0.91) compared to manual interpretation across chest imaging modalities. 
However, further high-quality trials with externally validated cohorts may be needed to confirm these findings and reduce variability among AI models across diverse clinical settings.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1709489"},"PeriodicalIF":4.7,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12835279/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146094438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
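For readers unfamiliar with the pooling approach named in the Methods above, the bivariate random-effects model fitted by reitsma() in the mada package is conventionally written as below. This is the standard Reitsma formulation under a within-study normal approximation, reproduced for context rather than re-derived from the study.

```latex
% Bivariate random-effects model for pooled sensitivity and specificity (Reitsma model);
% C_i is the within-study covariance matrix of study i.
\begin{pmatrix} \operatorname{logit}(Se_i) \\ \operatorname{logit}(Sp_i) \end{pmatrix}
\sim \mathcal{N}\!\left(
  \begin{pmatrix} \mu_{Se} \\ \mu_{Sp} \end{pmatrix},\;
  \Sigma + C_i
\right),
\qquad
\Sigma =
\begin{pmatrix}
  \tau_{Se}^{2} & \rho\,\tau_{Se}\tau_{Sp} \\
  \rho\,\tau_{Se}\tau_{Sp} & \tau_{Sp}^{2}
\end{pmatrix}.
```

Pooled estimates are obtained by back-transforming, e.g. $\widehat{Se} = \operatorname{logit}^{-1}(\hat{\mu}_{Se})$; the AUC pooling in the abstract uses the same link, $\operatorname{logit}(\mathrm{AUC}) = \ln\frac{\mathrm{AUC}}{1-\mathrm{AUC}}$, applied before inverse-variance pooling with metagen().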
Pub Date: 2026-01-13 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1768205
Marina Magdy Saady, Hatim Ghazi Zaini, Mohamed Hassan Essai Ali, Sahar A El Rahman, Osama A Omer, Ali R Abdellah, Shaima Elnazer
[This corrects the article DOI: 10.3389/frai.2025.1701951.].
{"title":"Correction: Deep learning neural networks-based traffic predictors for V2X communication networks.","authors":"Marina Magdy Saady, Hatim Ghazi Zaini, Mohamed Hassan Essai Ali, Sahar A El Rahman, Osama A Omer, Ali R Abdellah, Shaima Elnazer","doi":"10.3389/frai.2025.1768205","DOIUrl":"https://doi.org/10.3389/frai.2025.1768205","url":null,"abstract":"<p><p>[This corrects the article DOI: 10.3389/frai.2025.1701951.].</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1768205"},"PeriodicalIF":4.7,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12838250/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146087100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-13 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1738444
Juan Ruiz de Miras, José Luis Vílchez, María José Gacto, Domingo Martín
Painting authentication is an inherently complex task, often relying on a combination of connoisseurship and technical analysis. This study focuses on the authentication of a single painting attributed to Paolo Veronese, using a convolutional neural network approach tailored to severe data scarcity. To ensure that stylistic comparisons were based on artistic execution rather than iconographic differences, the dataset was restricted to paintings depicting the Holy Family, the same subject as the work under authentication. A custom shallow convolutional network was developed to process multichannel inputs (RGB, grayscale, and edge maps) extracted from overlapping patches via a sliding-window strategy. This patch-based design expanded the dataset from a small number of paintings to thousands of localized patches, enabling the model to learn microtextural and brushstroke features. Regularization techniques were employed to enhance generalization, while a painting-level cross-validation strategy was used to prevent data leakage. The model achieved high classification performance (accuracy of 94.51%, area under the curve of 0.99) and generated probability heatmaps that revealed stylistic coherence in authentic Veronese works and fragmentation in non-Veronese paintings. The work under examination yielded an intermediate global mean Veronese probability (61%) with extensive high-probability regions over stylistically salient passages, suggesting partial stylistic affinity. The results support the use of patch-based models for stylistic analysis in art authentication, especially under domain-specific data constraints. While the network provides strong probabilistic evidence of stylistic affinity, definitive attribution requires further integration with historical, technical, and provenance-based analyses.
{"title":"Painting authentication using CNNs and sliding window feature extraction.","authors":"Juan Ruiz de Miras, José Luis Vílchez, María José Gacto, Domingo Martín","doi":"10.3389/frai.2025.1738444","DOIUrl":"https://doi.org/10.3389/frai.2025.1738444","url":null,"abstract":"<p><p>Painting authentication is an inherently complex task, often relying on a combination of connoisseurship and technical analysis. This study focuses on the authentication of a single painting attributed to Paolo Veronese, using a convolutional neural network approach tailored to severe data scarcity. To ensure that stylistic comparisons were based on artistic execution rather than iconographic differences, the dataset was restricted to paintings depicting the Holy Family, the same subject as the work under authentication. A custom shallow convolutional network was developed to process multichannel inputs (RGB, grayscale, and edge maps) extracted from overlapping patches via a sliding-window strategy. This patch-based design expanded the dataset from a small number of paintings to thousands of localized patches, enabling the model to learn microtextural and brushstroke features. Regularization techniques were employed to enhance generalization, while a painting-level cross-validation strategy was used to prevent data leakage. The model achieved high classification performance (accuracy of 94.51%, Area under the Curve 0.99) and generated probability heatmaps that revealed stylistic coherence in authentic Veronese works and fragmentation in non-Veronese paintings. The work under examination yielded an intermediate global mean Veronese probability (61%) with extensive high-probability regions over stylistically salient passages, suggesting partial stylistic affinity. The results support the use of patch-based models for stylistic analysis in art authentication, especially under domain-specific data constraints. While the network provides strong probabilistic evidence of stylistic affinity, definitive attribution requires further integration with historical, technical, and provenance-based analyses.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1738444"},"PeriodicalIF":4.7,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12836884/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146094391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-13 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1696423
Arafat Rohan, Md Deluar Hossen, Md Nuruzzaman Pranto, Balayet Hossain, Areyfin Mohammed Yoshi, Rakibul Islam
This study reviews advancements in AI-driven methods for predicting stock prices, tracing their evolution from traditional approaches to modern finance. The role of AI in markets extends beyond predictive systems to the intersection of financial markets with emerging technologies such as blockchain, and to the potential influence of quantum computing on economic modeling. In the context of decentralized finance, the review examines the application of reinforcement learning to financial market prediction, highlighting its potential for continuous learning under dynamic market conditions. The study also discusses the development of hybrid prediction models, stock market machine learning systems, and AI-driven investment portfolio management. Quantum computing has the potential to enhance portfolio analysis, fraud detection, optimization, and asset valuation for complex market predictions, while blockchain technologies affect transparency, security, and efficiency. Machine learning techniques can largely automate data collection and cleaning, and deep reinforcement learning combined with time-series analysis can support financial decision-making and stock price prediction. Deep neural networks and strategic asset allocation can be managed by evaluating performance and portfolios with real-time market insights from AI models. Although market prediction faces numerous ethical, sentiment-related, regulatory, and data quality issues, the future job market will depend heavily on these capabilities, particularly effective risk management and fraud detection.
{"title":"Artificial intelligence in financial market prediction: advancements in machine learning for stock price forecasting.","authors":"Arafat Rohan, Md Deluar Hossen, Md Nuruzzaman Pranto, Balayet Hossain, Areyfin Mohammed Yoshi, Rakibul Islam","doi":"10.3389/frai.2025.1696423","DOIUrl":"https://doi.org/10.3389/frai.2025.1696423","url":null,"abstract":"<p><p>This study reviews the advancements in AI-driven methods for predicting stock prices, tracing their evolution from traditional approaches to modern finance. The role of AI in the market extends beyond predictive systems to encompass the intersection of financial markets with emerging technologies, such as blockchain, and the potential influence of quantum computing on economic modeling. A decentralized finance system examines the application of Reinforcement Learning in financial market prediction, highlighting its potential for continuous learning from dynamic market conditions. The study discusses the development of hybrid prediction models, stock market machine learning systems, and AI-driven investment portfolio management. The potential of quantum computing enhances portfolio analysis, fraud detection, optimization, and asset valuation for complex market predictions, as well as the impact of blockchain technologies on transparency, security, and efficiency. Machine learning techniques can significantly automate data collection and purification. Financial decision-making and the application of time-series analysis techniques can be readily learned through deep reinforcement learning for stock price prediction. Deep Neural Networks and Strategic Asset Allocation can be managed by evaluating performance and portfolio using real-time market insights from AI models. Although there are numerous ethical, sentimental, regulatory, and data quality issues in market prediction, the future job market is heavily dependent on these criteria, particularly through effective risk management and fraud detection.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1696423"},"PeriodicalIF":4.7,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12835427/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146094356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-13 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1679962
Md Nesarul Hoque, Rudra Pratap Deb Nath, Abu Nowshed Chy, Debasish Ghose, Md Hanif Seddiqui
Cyberbullying on social networks has emerged as a pressing global issue, yet research in low-resource languages such as Bengali remains underdeveloped due to the scarcity of high-quality datasets, linguistic resources, and targeted methodologies. Many existing approaches overlook essential language-specific preprocessing, neglect the integration of advanced transformer-based models, and do not adequately address model validation, scalability, and adaptability. To address these limitations, this study introduces three Bengali-specific preprocessing strategies to enhance feature representation. It then proposes Transformer-stacking, an effective hybrid detection framework that combines three transformer models, XLM-R-base, multilingual BERT, and Bangla-Bert-Base, via a stacking strategy with a multi-layer perceptron classifier. The framework is evaluated on a publicly available Bengali cyberbullying dataset comprising 44,001 samples across both binary (Sub-task A) and multiclass (Sub-task B) classification settings. Transformer-stacking achieves an F1-score of 93.61% and an accuracy of 93.62% for Sub-task A, and an F1-score and accuracy of 89.23% for Sub-task B, outperforming eight baseline transformer models, four transformer ensemble techniques, and recent state-of-the-art methods. These improvements are statistically validated using McNemar's test. Furthermore, experiments on two external Bengali datasets, focused on hate speech and abusive language, demonstrate the model's scalability and adaptability. Overall, Transformer-stacking offers an effective and generalizable solution for Bengali cyberbullying detection, establishing a new benchmark in this underexplored domain.
{"title":"Advancing cyberbullying detection in low-resource languages: a transformer- stacking framework for Bengali.","authors":"Md Nesarul Hoque, Rudra Pratap Deb Nath, Abu Nowshed Chy, Debasish Ghose, Md Hanif Seddiqui","doi":"10.3389/frai.2025.1679962","DOIUrl":"https://doi.org/10.3389/frai.2025.1679962","url":null,"abstract":"<p><p>Cyberbullying on social networks has emerged as a pressing global issue, yet research in low-resource languages such as Bengali remains underdeveloped due to the scarcity of high-quality datasets, linguistic resources, and targeted methodologies. Many existing approaches overlook essential language-specific preprocessing, neglect the integration of advanced transformer-based models, and do not adequately address model validation, scalability, and adaptability. To address these limitations, this study introduces three Bengali-specific preprocessing strategies to enhance feature representation. It then proposes <i>Transformer-stacking</i>, an effective hybrid detection framework that combines three transformer models, XLM-R-base, multilingual BERT, and Bangla-Bert-Base, via a stacking strategy with a multi-layer perceptron classifier. The framework is evaluated on a publicly available Bengali cyberbullying dataset comprising 44,001 samples across both binary (<i>Sub-task A</i>) and multiclass (<i>Sub-task B</i>) classification settings. <i>Transformer-stacking</i> achieves an F1-score of 93.61% and an accuracy of 93.62% for <i>Sub-task A</i>, and an F1-score and accuracy of 89.23% for <i>Sub-task B</i>, outperforming eight baseline transformer models, four transformer ensemble techniques, and recent state-of-the-art methods. These improvements are statistically validated using McNemar's test. Furthermore, experiments on two external Bengali datasets, focused on hate speech and abusive language, demonstrate the model's scalability and adaptability. Overall, <i>Transformer-stacking</i> offers an effective and generalizable solution for Bengali cyberbullying detection, establishing a new benchmark in this underexplored domain.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1679962"},"PeriodicalIF":4.7,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12835245/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146094401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}