Pub Date: 2026-01-16 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1600044
Morgan Williams, Uma Rani
Introduction: The business model of multi-sided digital labor platforms relies on maintaining a balance between workers and customers or clients to sustain operations. These platforms initially leveraged venture capital to attract workers by providing them with incentives and the promise of flexibility, creating lock-in effects to consolidate their market power and enable monopolistic practices. As platforms mature, they increasingly implement algorithmic management and control mechanisms, such as rating systems, which restrict worker autonomy, access to work and flexibility. Despite limited bargaining power, workers have developed both individual and collective strategies to counteract these algorithmic restrictions.
Methods: This article employs a structured synthesis, drawing on existing academic literature as well as surveys conducted by the International Labour Office (ILO) between 2017 and 2023, to examine how platform workers utilize a combination of informal and formal forms of resistance to build resilience against algorithmic disruptions.
Results: The analysis covers several sectors (freelance and microtask work, taxi and delivery services, and domestic work and beauty-care platforms), offering insights into the changing dynamics of worker agency that have enabled resilience-building on digital labour platforms. Facing significant barriers to formal acts of resistance, platform workers often turn to informal acts of resistance, frequently mediated by social media, to adapt to changes in the platforms' algorithms and maintain their well-being.
Discussion: Platform workers increasingly have a diverse array of tools for exercising their agency, both physically and virtually. However, building resilience under such conditions is rarely straightforward. As platforms counteract workers' acts of resistance, workers must continue to develop new and innovative strategies to strengthen their resilience. Such a complex and nuanced landscape merits continued research and analysis.
Title: Resilience through resistance: the role of worker agency in navigating algorithmic control. (Frontiers in Artificial Intelligence, vol. 8, art. 1600044)
Pub Date: 2026-01-16 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1734013
M Mayuranathan, V Anitha, P Nehru, Bosko Nikolic, Miloš Janjić, Nebojsa Bacanin
Introduction: Cardiovascular diseases (CVDs) are a major cause of morbidity and mortality across all global regions, creating a pressing need for early detection and effective management. Traditional cardiovascular monitoring systems often lack real-time analysis and individualized insight, leading to delayed interventions. Moreover, data privacy and security remain among the greatest challenges in digital healthcare applications.
Methods: This research presents a CVD detection model that combines Internet of Things (IoT)-based wearable devices, electronic clinical records, and blockchain-based access control. The system first registers patients and medical personnel and then collects physiological and clinical data. Kalman filtering improves data reliability during pre-processing. Shallow and deep feature extraction methods capture complex data patterns, and a Refracted Sand Cat Swarm Optimization (SCSO) algorithm performs feature selection. A new TriBoostCardio Ensemble model (CatBoost, AdaBoost, and LogitBoost) carries out the classification task to enhance predictive accuracy, while smart contracts provide safe and transparent access to health information.
Results: Experimental results show that the proposed framework achieves high predictive accuracy and detects cardiovascular disease earlier than traditional approaches. Combining SCSO feature selection with the TriBoostCardio Ensemble model improves the robustness of the model and the precision of classification.
Discussion: Beyond improving the accuracy and timeliness of CVD detection, the presented framework also addresses important problems of data privacy and integrity through blockchain-based access control. By combining intelligent feature optimization, ensemble learning, and secure data management, it offers a stable and trustworthy solution for current healthcare systems.
Title: Triboostcardio ensemble model for cardiovascular disease detection using advanced blockchain-enabled health monitoring. (Frontiers in Artificial Intelligence, vol. 8, art. 1734013)
Pub Date: 2026-01-16 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1702087
Maria Frasca, Gianluca Gazzaniga, Agnese Graziosi, Valentina De Nicolo, Costantino De Giacomo, Stefano Martinelli, Michele Senatore, Alessandra Romandini, Chiara Moretti, Giulia Angela Carla Pattarino, Alice Proto, Romano Danesi, Francesco Scaglione, Gianluca Vago, Davide La Torre, Arianna Pani
Background: Accurate drug dosing in pediatrics is complicated by age-related physiological variability. Standard weight-based dosing may result in either subtherapeutic exposure or toxicity. Machine learning (ML) models can capture complex relationships among clinical variables and support individualized therapy.
Methods: We analyzed clinical and pharmacokinetic data from 20 pediatric patients enrolled in the PUERI study (January 2020-November 2021, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy). Eight ML models-including linear regression (LR), ridge regression (RR), lasso regression (LaR), Huber regression (HR), random forest (RF), XGBoost, LightGBM, and a neural network (MLP)-were trained to predict ceftaroline doses that would achieve plasma concentrations close to the therapeutic target of 10 mg/L. Model performance was evaluated using mean absolute error (MAE), root mean squared error (RMSE), and the coefficient of determination (R2). To ensure interpretability, we applied local interpretable model-agnostic explanations (LIME) to identify the most influential predictors of dose.
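The multi-model evaluation loop described above (MAE, RMSE, R²) can be sketched with scikit-learn alone. This is a hedged illustration: XGBoost and LightGBM require external packages and are omitted, synthetic data replaces the private PUERI cohort, and the hyperparameters are not the paper's.

```python
# Hedged sketch: fit several of the regressor families named in the Methods
# and score each with MAE, RMSE and R^2 on held-out data. Data and settings
# are illustrative stand-ins, not the study's.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import HuberRegressor, Lasso, LinearRegression, Ridge
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=200, n_features=8, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "LR": LinearRegression(),
    "RR": Ridge(),
    "LaR": Lasso(),
    "HR": HuberRegressor(max_iter=1000),
    "RF": RandomForestRegressor(random_state=0),
    "MLP": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                        random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))  # RMSE from MSE
    print(f"{name}: MAE={mean_absolute_error(y_te, pred):.2f} "
          f"RMSE={rmse:.2f} R2={r2_score(y_te, pred):.3f}")
```

On a 20-patient cohort like the study's, cross-validation rather than a single split would usually be needed to make these metrics stable.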
Results: MLP (MAE 1.53 mg, R2 0.94) and XGBoost (MAE 2.04 mg, R2 0.89) outperformed linear models. Predicted doses aligned with therapeutic concentrations more frequently than those clinically administered. Model-based simulated concentrations fell within the therapeutic range in approximately 85% of cases, and the best ML models showed over 90% patient-level clinical alignment. RF, LightGBM, and XGBoost achieved the highest clinical alignment, with 94.2%, 92.4%, and 91.5% of patients reaching therapeutic levels, respectively. Renal function markers, such as serum creatinine and azotemia, together with anthropometric parameters including weight, height, and body mass index, were consistently the most influential features.
Conclusion: Advanced ML models can optimize ceftaroline dosing in pediatric patients and outperform traditional dosing strategies. Combining predictive accuracy with interpretability (via LIME) supports clinical trust and may enhance precision antibiotic therapy while reducing the risks of antimicrobial resistance and toxicity.
Title: Artificial intelligence and precision medicine: a pilot study predicting optimal ceftaroline dosage for pediatric patients. (Frontiers in Artificial Intelligence, vol. 8, art. 1702087)
Pub Date: 2026-01-15 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1732088
Chenguang Wu, Wenlan Zhang, Liangliang Hu, Ming Li
Introduction: Technostress is an essential factor in predicting middle school teachers' willingness to adopt artificial intelligence (AI) in future educational practices and their actual use of such technologies. This study examines technostress among middle school teachers in the context of AI integration and explores how personal competence (including digital awareness, digital technology knowledge and skills, and digital application competence), role conflict, organizational support, and technological features influence technostress.
Methods: The Technology Acceptance Model (TAM) serves as the theoretical underpinning of the present research. Using survey data from 301 middle school teachers, a path model was constructed to analyze these relationships.
Results: The results indicate that the overall level of technostress is relatively low; however, different teacher groups experience distinct sources of stress. Specifically, appropriate technological features and strong digital awareness effectively alleviate technostress, while role conflict intensifies it. Furthermore, these factors play a significant mediating role between organizational support and technostress.
Discussion: Based on these findings, the study proposes several strategies to mitigate technostress among middle school teachers. First, a tiered, category-based approach should be adopted to provide targeted support according to teachers' actual needs. Second, it is important to balance technological supply with educational demand to ensure sustainable implementation. Third, showcasing typical successful cases can help enhance teachers' digital awareness and confidence in using AI. Finally, role positioning and work flexibility should be strengthened to ease teachers' role conflict. These strategies offer practical guidance for educational administrators seeking to promote the effective integration of AI technologies in middle school education.
Title: Research on middle school teachers' technostress empowered by artificial intelligence. (Frontiers in Artificial Intelligence, vol. 8, art. 1732088)
Pub Date: 2026-01-15 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1686750
Bruno Souza, Manuel Castro, Ahmed Esmin, Leonardo Machado, Alexandre Ferreira, Anderson Rocha
Causal reasoning is essential for understanding relationships and guiding decision-making across applications, as it allows the identification of cause-and-effect relationships between variables. By uncovering the underlying process that drives these relationships, causal reasoning enables more accurate predictions, controlled interventions, and the ability to distinguish genuine causal effects from mere correlations in complex systems. In oil field management, where interactions between injector and producer wells are inherently dynamic, uncovering causal connections is vital to optimize recovery and minimize waste. Since controlled experiments are impractical in this setting, we must rely solely on observed data. In this paper, we develop an innovative causality-inspired framework that leverages domain expertise for causal feature learning and robust connectivity estimation. We address the challenges posed by confounding factors, latency in system responses, and the complexity of inter-well interactions that complicate causal analysis. First, we frame the problem through a causal lens and propose a novel framework that generates pairwise features driven by causal theory. This method captures meaningful representations of relationships within the oil field system. By constructing independent pairwise feature representations, our method implicitly accounts for confounding signals and enhances the reliability of connectivity estimation. Furthermore, our approach requires only limited context data to train machine learning models that estimate the connectivity probability between injectors and producers. We first validate our methodology through experiments on synthetic and semi-synthetic datasets, ensuring its robustness across varied scenarios. We then apply it to the complex Brazilian Pre-Salt oil fields using public synthetic and real-world data.
Our results show that the proposed method effectively identifies injector-producer connectivity while maintaining rapid training times, enabling scalability and providing an interpretable, causally grounded approach for complex dynamic systems. While previous projects have employed causal methods in the oil field context, to the best of our knowledge, this is the first work to systematically formulate the problem using causal reasoning that explicitly accounts for relevant confounders, and to develop an approach that effectively addresses these challenges and facilitates the discovery of inter-well connections within an oil field.
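The abstract does not spell out the paper's pairwise causal features, but one common ingredient of injector-producer connectivity baselines is a lagged cross-correlation feature between the two wells' rate series. The sketch below is only that baseline ingredient under assumed synthetic data, not the authors' method; `lagged_corr_features` is a hypothetical helper name.

```python
# Hedged sketch: a pairwise feature vector of lagged correlations between one
# injector's and one producer's rate series. The producer below is built to
# respond to the injector at lag 3, so that lag dominates the feature vector.
import numpy as np

def lagged_corr_features(injector: np.ndarray, producer: np.ndarray,
                         max_lag: int = 5) -> np.ndarray:
    """Pearson correlation of the producer's response at lags 1..max_lag."""
    feats = []
    for lag in range(1, max_lag + 1):
        feats.append(np.corrcoef(injector[:-lag], producer[lag:])[0, 1])
    return np.array(feats)

rng = np.random.default_rng(0)
inj = rng.normal(size=300)                               # injection rate series
prod = 0.8 * np.roll(inj, 3) + 0.2 * rng.normal(size=300)  # responds at lag 3
feats = lagged_corr_features(inj, prod)
print("strongest lag:", int(np.argmax(feats)) + 1)  # → strongest lag: 3
```

A real pipeline would compute such a vector for every injector-producer pair and feed the pairs to a classifier; correlation alone, unlike the paper's causal construction, does not account for confounding from neighboring wells.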
Title: Causality-driven feature representation for connectivity prediction. (Frontiers in Artificial Intelligence, vol. 8, art. 1686750)
Pub Date: 2026-01-14 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1732820
Jiangxiao Zhang, Feng Gao, Shengmei He, Bin Zhang
Camouflaged object detection (COD) aims to identify objects that are visually indistinguishable from their surrounding background, making it challenging to precisely distinguish the boundaries between objects and backgrounds in camouflaged environments. In recent years, numerous studies have leveraged frequency-domain methods to aid in camouflage target detection by utilizing frequency-domain information. However, current methods based on the frequency domain cannot effectively capture the boundary information between disguised objects and the background. To address this limitation, we propose a Laplace transform-guided camouflage object detection network called the Self-Correlation Cross Relation Network (SeCoCR). In this framework, the Laplace-transformed camouflage target is treated as high-frequency information, while the original image serves as low-frequency information. These are then separately input into our proposed Self-Relation Attention module to extract both local and global features. Within the Self-Relation Attention module, key semantic information is retained in the low-frequency data, and crucial boundary information is preserved in the high-frequency data. Furthermore, we design a multi-scale attention mechanism for low- and high-frequency information, Low-High Mix Fusion, to effectively integrate essential information from both frequencies for camouflage object detection. Comprehensive experiments on three COD benchmark datasets demonstrate that our approach significantly surpasses existing state-of-the-art frequency-domain-assisted methods.
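The high/low-frequency split described above can be illustrated with a Laplacian filter. This is an interpretive sketch: it assumes the abstract's "Laplace-transformed" input means a Laplacian (second-derivative) operator, a standard way to isolate high-frequency boundary detail; the SeCoCR network itself is not reproduced.

```python
# Hedged sketch: split a toy image into the two branch inputs the paper
# describes -- a Laplacian-filtered high-frequency map (boundaries) and the
# original image as the low-frequency branch. The "camouflaged object" here
# is just a flat square, so every nonzero Laplacian response sits on its edge.
import numpy as np
from scipy.ndimage import laplace

img = np.zeros((32, 32), dtype=float)
img[8:24, 8:24] = 1.0            # object occupies rows/cols 8..23

high = laplace(img)              # high-frequency branch: boundary information
low = img                        # low-frequency branch: semantic content

# Inside the flat object the second derivative vanishes; only edges respond.
edge_rows, edge_cols = np.nonzero(high)
print("edge response spans rows", edge_rows.min(), "to", edge_rows.max())
```

In the paper these two maps would feed separate Self-Relation Attention branches before fusion; here the point is only that the Laplacian preserves boundary cues while discarding flat interiors.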
Title: Laplace-guided fusion network for camouflage object detection. (Frontiers in Artificial Intelligence, vol. 8, art. 1732820)
Pub Date : 2026-01-14eCollection Date: 2025-01-01DOI: 10.3389/frai.2025.1720547
Shani Alkoby, Ron S Hirschprung
Introduction: Privacy has become a significant concern in the digital world, especially regarding the personal data that websites and other online service providers collect. One of the main mechanisms for giving individuals control over their privacy is the privacy policy, a document that contains vital information on this matter. Publishing a privacy policy is required by regulation in most Western countries. However, a privacy policy is a free-text document, usually phrased in legal language and subject to frequent change, which makes it relatively hard to understand and means it is almost always neglected by its human readers.
Methods: This research proposes a novel methodology that takes an unstructured privacy policy text and automatically structures it into predefined parameters. The methodology is based on a two-layer artificial intelligence (AI) process.
Results: In an empirical study of 49 actual privacy policies from different websites, we demonstrated an average F1-score above 0.8, with five of six parameters achieving very high classification accuracy.
Discussion: This methodology can serve both humans and AI agents by addressing the issues that deter use of the privacy policy as a resource, such as cognitive burden, non-standard formats, cognitive laziness, and changes to the document over time. The study addresses a critical gap between present regulations, which aim to enhance privacy, and the ability of humans to benefit from the mandatorily published privacy policy.
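The task of mapping clauses of a policy to predefined parameters, and scoring the result with per-parameter F1, can be sketched as follows. The parameter names, keywords, and toy policy are invented for illustration; the paper's actual two-layer AI process is not shown here:

```python
# Toy sketch: (1) split a policy into clauses, (2) assign each clause to a
# predefined parameter by keyword overlap, (3) score with per-parameter F1.
# Parameter names and keyword sets are hypothetical, not the authors' schema.
PARAMETER_KEYWORDS = {
    "data_collection":     {"collect", "gather", "obtain"},
    "third_party_sharing": {"share", "third", "partners"},
    "data_retention":      {"retain", "store", "delete"},
}

def classify_clause(clause: str) -> str:
    words = set(clause.lower().split())
    # pick the parameter whose keyword set overlaps the clause most
    return max(PARAMETER_KEYWORDS, key=lambda p: len(PARAMETER_KEYWORDS[p] & words))

def f1(preds, golds, label):
    tp = sum(p == g == label for p, g in zip(preds, golds))
    fp = sum(p == label != g for p, g in zip(preds, golds))
    fn = sum(g == label != p for p, g in zip(preds, golds))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

policy = ("We collect your email address. "
          "We share data with partners. "
          "We retain logs for 30 days.")
clauses = policy.rstrip(".").split(". ")
preds = [classify_clause(c) for c in clauses]
golds = ["data_collection", "third_party_sharing", "data_retention"]
print(preds)
print(f1(preds, golds, "data_collection"))  # 1.0 on this toy example
```

A real system would replace the keyword layer with learned classifiers, but the evaluation loop (per-parameter F1 averaged over parameters) is the same shape as the metric reported above.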
{"title":"Structuring privacy policy: an AI approach.","authors":"Shani Alkoby, Ron S Hirschprung","doi":"10.3389/frai.2025.1720547","DOIUrl":"10.3389/frai.2025.1720547","url":null,"abstract":"<p><strong>Introduction: </strong>Privacy has become a significant concern in the digital world, especially concerning the personal data collected by websites and other service providers on the World Wide Web network. One of the significant approaches to enable the individual to control privacy is the privacy policy document, which contains vital information on this matter. Publishing a privacy policy is required by regulation in most Western countries. However, the privacy policy document is a natural free text-based object, usually phrased in a legal language, and rapidly changes, making it consequently relatively hard to understand and almost always neglected by humans.</p><p><strong>Methods: </strong>This research proposes a novel methodology to receive an unstructured privacy policy text and automatically structure it into predefined parameters. The methodology is based on a two-layer artificial intelligence (AI) process.</p><p><strong>Results: </strong>In an empirical study that included 49 actual privacy policies from different websites, we demonstrated an average F1-score > 0.8 where five of six parameters achieved a very high classification accuracy.</p><p><strong>Discussion: </strong>This methodology can serve both humans and AI agents by addressing issues such as cognitive burden, non-standard formalizations, cognitive laziness, and the dynamics of the document across a timeline, which deters the use of the privacy policy as a resource. 
The study addresses a critical gap between the present regulations, aiming at enhancing privacy, and the abilities of humans to benefit from the mandatory published privacy policy.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1720547"},"PeriodicalIF":4.7,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12847394/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146087321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editorial: Advancing knowledge-based economies and societies through AI and optimization: innovations, challenges, and implications.","authors":"Erfan Babaee Tirkolaee, Ramin Ranjbarzadeh, Gerhard-Wilhelm Weber","doi":"10.3389/frai.2025.1757072","DOIUrl":"https://doi.org/10.3389/frai.2025.1757072","url":null,"abstract":"","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1757072"},"PeriodicalIF":4.7,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12847418/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146087239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-14eCollection Date: 2025-01-01DOI: 10.3389/frai.2025.1610856
Osvaldo Velazquez-Gonzalez, Antonio Alarcón-Paredes, Cornelio Yañez-Marquez
Classification is a central task in machine learning, underpinning applications in domains such as finance, medicine, engineering, information technology, and biology. However, pattern classification can become complex, and even opaque, for current high-performing models because of the complexity of real-world datasets, which is why there is strong interest in achieving high classification performance. In some cases, moreover, that performance must be achieved while maintaining a degree of explainability in the operation and decisions of the classification algorithm. For this reason, we propose an algorithm that is robust, simple, and highly explainable, applicable primarily to medical datasets with severe class imbalance. The main contribution of this research is a novel classification algorithm based on binary string similarity that is competitive, simple, interpretable, and transparent: it is clear why a pattern is assigned to a given class. We present a comparative study of the proposed model against the best-known state-of-the-art classification algorithms. The experimental results demonstrate the benefits of the proposal, validated through statistical hypothesis tests for significant performance differences.
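The general idea of classifying by binary-string similarity can be sketched generically: binarize each feature against a training threshold, then assign a query to the class whose training patterns it most resembles under Hamming similarity. This is a hedged illustration of the family of methods, not the paper's associative classifier; the data and threshold choice (the median) are invented:

```python
import statistics

def binarize(rows, medians):
    # one bit per feature: is the value above the training median?
    return [[1 if v > m else 0 for v, m in zip(row, medians)] for row in rows]

def hamming_sim(a, b):
    # fraction of positions where the two binary strings agree
    return sum(x == y for x, y in zip(a, b)) / len(a)

def classify(query, train, labels):
    sims = {}
    for pat, lab in zip(train, labels):
        sims.setdefault(lab, []).append(hamming_sim(query, pat))
    # transparent decision: the per-class mean similarities can be inspected
    return max(sims, key=lambda lab: statistics.mean(sims[lab]))

X = [[1.0, 5.0], [1.2, 4.8], [3.0, 1.0], [3.1, 0.9]]
y = ["healthy", "healthy", "sick", "sick"]
medians = [statistics.median(col) for col in zip(*X)]
Xb = binarize(X, medians)
query = binarize([[1.1, 5.1]], medians)[0]
print(classify(query, Xb, y))  # "healthy"
```

The transparency claim in the abstract corresponds to the fact that every decision here reduces to inspectable bit agreements, rather than to opaque learned weights.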
{"title":"Medical pattern classification using a novel binary similarity approach based on an associative classifier.","authors":"Osvaldo Velazquez-Gonzalez, Antonio Alarcón-Paredes, Cornelio Yañez-Marquez","doi":"10.3389/frai.2025.1610856","DOIUrl":"10.3389/frai.2025.1610856","url":null,"abstract":"<p><p>Classification is a central task in machine learning, underpinning applications in domains such as finance, medicine, engineering, information technology, and biology. However, machine learning pattern classification can become a complex or even inexplicable task for current robust models due to the complexity of objective datasets, which is why there is a strong interest in achieving high classification performance. On the other hand, in particular cases, there is a need to achieve such performance while maintaining a certain level of explainability in the operation and decisions of classification algorithms, which can become complex. For this reason, an algorithm is proposed that is robust, simple, highly explainable, and applicable to datasets primarily in medicine with complex class imbalance. The main contribution of this research is a novel machine learning classification algorithm based on binary string similarity that is competitive, simple, interpretable, and transparent, as it is clear why a pattern is classified into a given class. Therefore, a comparative study of the performance of the best-known state-of-the-art classification algorithms and the proposed model is presented. 
The experimental results demonstrate the benefits of the proposal in this research work, which were validated through statistical hypothesis tests to assess significant performance differences.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1610856"},"PeriodicalIF":4.7,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12847284/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146087329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: The integration of large language models (LLMs) into cardio-oncology patient education holds promise for addressing the critical gap in accessible, accurate, and patient-friendly information. However, the performance of publicly available LLMs in this specialized domain remains underexplored.
Objectives: This study evaluates the performance of three LLMs (ChatGPT-4, Kimi, DouBao) acting as assistants for physicians in cardio-oncology patient education and examines the impact of prompt engineering on response quality.
Methods: Twenty standardized questions spanning cardio-oncology topics were posed twice to three LLMs (ChatGPT-4, Kimi, DouBao): once without prompts and once with a directive to simplify language, generating 240 responses. These responses were evaluated by four cardio-oncology specialists for accuracy, comprehensiveness, helpfulness, and practicality. Readability and complexity were assessed using a Chinese text analysis framework.
Results: Among 240 responses, 63.3% were rated "correct," 35.0% "partially correct," and 1.7% "incorrect." No significant differences in accuracy were observed between models (p = 0.26). Kimi demonstrated no incorrect responses. Significant declines in comprehensiveness (p = 0.03) and helpfulness (p < 0.01) occurred post-prompt, particularly for DouBao (accuracy: 57.5% vs. 7.5%, p < 0.01). Readability metrics (readability age, difficulty score, total word count, sentence length) showed no inter-model differences, but prompts reduced complexity (e.g., DouBao's readability age decreased from 12.9 ± 0.8 to 10.1 ± 1.2 years, p < 0.01).
Conclusion: Publicly available LLMs provide largely accurate responses to cardio-oncology questions, yet their utility is constrained by inconsistent comprehensiveness and sensitivity to prompt design. While simplifying language improves readability, it risks compromising clinical relevance. Tailored fine-tuning and specialized evaluation frameworks are essential to optimize LLMs for patient education in cardio-oncology.
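The two-condition protocol and the rating tally described in the Methods and Results can be sketched as below. The simplification directive, the example question, and the ratings list are illustrative stand-ins, not the study's actual materials or data:

```python
from collections import Counter

# Each question is posed twice: plain, and with a simplification directive.
SIMPLIFY = "Explain in plain language a patient can understand: "

def build_prompts(question):
    return {"no_prompt": question, "simplified": SIMPLIFY + question}

def rating_breakdown(ratings):
    """Tally specialist ratings into the percentage breakdown format used
    in the Results (e.g. % correct / partially correct / incorrect)."""
    counts = Counter(ratings)
    n = len(ratings)
    return {label: round(100 * counts[label] / n, 1)
            for label in ("correct", "partially correct", "incorrect")}

prompts = build_prompts("Can trastuzumab affect my heart?")
toy_ratings = ["correct"] * 19 + ["partially correct"] * 10 + ["incorrect"]
print(prompts["simplified"])
print(rating_breakdown(toy_ratings))
# e.g. {'correct': 63.3, 'partially correct': 33.3, 'incorrect': 3.3}
```

Each real response would additionally be scored on comprehensiveness, helpfulness, and practicality, and the two prompt conditions compared, which is where the post-prompt declines reported above emerge.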
{"title":"Evaluating the efficacy of large language models in cardio-oncology patient education: a comparative analysis of accuracy, readability, and prompt engineering strategies.","authors":"Zhao Wang, Lin Liang, Hao Xu, Yuhui Huang, Chen He, Weiran Xu, Haojie Zhu","doi":"10.3389/frai.2025.1693446","DOIUrl":"https://doi.org/10.3389/frai.2025.1693446","url":null,"abstract":"<p><strong>Background: </strong>The integration of large language models (LLMs) into cardio-oncology patient education holds promise for addressing the critical gap in accessible, accurate, and patient-friendly information. However, the performance of publicly available LLMs in this specialized domain remains underexplored.</p><p><strong>Objectives: </strong>This study evaluates the performance of three LLMs (ChatGPT-4, Kimi, DouBao) act as assistants for physicians in cardio-oncology patient education and examines the impact of prompt engineering on response quality.</p><p><strong>Methods: </strong>Twenty standardized questions spanning cardio-oncology topics were posed twice to three LLMs (ChatGPT-4, Kimi, DouBao): once without prompts and once with a directive to simplify language, generating 240 responses. These responses were evaluated by four cardio-oncology specialists for accuracy, comprehensiveness, helpfulness, and practicality. Readability and complexity were assessed using a Chinese text analysis framework.</p><p><strong>Results: </strong>Among 240 responses, 63.3% were rated \"correct,\" 35.0% \"partially correct,\" and 1.7% \"incorrect.\" No significant differences in accuracy were observed between models (<i>p</i> = 0.26). Kimi demonstrated no incorrect responses. Significant declines in comprehensiveness (<i>p</i> = 0.03) and helpfulness (<i>p</i> < 0.01) occurred post-prompt, particularly for DouBao (accuracy: 57.5% vs. 7.5%, <i>p</i> < 0.01). 
Readability metrics (readability age, difficulty score, total word count, sentence length) showed no inter-model differences, but prompts reduced complexity (e.g., DouBao's readability age decreased from 12.9 ± 0.8 to 10.1 ± 1.2 years, <i>p</i> < 0.01).</p><p><strong>Conclusion: </strong>Publicly available LLMs provide largely accurate responses to cardio-oncology questions, yet their utility is constrained by inconsistent comprehensiveness and sensitivity to prompt design. While simplifying language improves readability, it risks compromising clinical relevance. Tailored fine-tuning and specialized evaluation frameworks are essential to optimize LLMs for patient education in cardio-oncology.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1693446"},"PeriodicalIF":4.7,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12835249/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146094351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}