Pub Date: 2026-01-22 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1748468
Leon H Oehme, Jonas Boysen, Zhangkai Wu, Anthony Stein, Joachim Müller
Applications of machine vision and artificial intelligence (AI) are increasingly common in agriculture. Yet high-quality training data remains a bottleneck in the development of many AI solutions, particularly for image segmentation. Therefore, ARAMSAM (agricultural rapid annotation module based on segment anything models) was developed, a user interface that orchestrates the pre-labelling capabilities of the segment anything models (SAM 1, SAM 2) together with conventional annotation tools. One in silico experiment on the zero-shot performance of SAM 1 and SAM 2 on three unseen agricultural datasets and another on hyperparameter optimization of the automatic mask generators (AMG) were conducted. In a user experiment, 14 agricultural experts applied ARAMSAM to quantify the reduction in annotation times. SAM 2 benefited greatly from hyperparameter optimization of its AMG: based on ground-truth masks matched with predicted masks, the F2-score of SAM 2 improved from 0.05 to 0.74, while that of SAM 1 improved from 0.87 to 0.93. User interaction time was reduced to 2.1 s/mask on single images (SAM 1) and to 1.6 s/mask on image sequences (SAM 2), compared to polygon drawing (9.7 s/mask). This study demonstrates the potential of segment anything models, as incorporated into ARAMSAM, to significantly accelerate segmentation mask annotation in agriculture and other fields. ARAMSAM will be released as open-source software (AGPL-3.0 license) at https://github.com/DerOehmer/ARAMSAM.
Title: Orchestrating segment anything models to accelerate segmentation annotation on agricultural image datasets. (Frontiers in Artificial Intelligence 8:1748468)
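The ARAMSAM abstract above scores pre-labelling quality by matching predicted masks to ground-truth masks and reporting an F2-score. The snippet below is a minimal sketch of how such a metric can be computed, assuming masks are matched greedily by IoU; the 0.5 IoU threshold and the helper names are illustrative choices, not details taken from the paper.

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over union of two boolean masks of equal shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def f2_from_matched_masks(pred_masks, gt_masks, iou_thresh=0.5) -> float:
    """Greedy one-to-one matching of predictions to ground truth, then F2.

    F_beta = (1 + beta^2) * P * R / (beta^2 * P + R), with beta = 2
    weighting recall (finding every object) higher than precision.
    """
    unmatched_gt = list(range(len(gt_masks)))
    tp = 0
    for pm in pred_masks:
        ious = [(mask_iou(pm, gt_masks[i]), i) for i in unmatched_gt]
        if not ious:
            break
        best_iou, best_i = max(ious)
        if best_iou >= iou_thresh:
            tp += 1
            unmatched_gt.remove(best_i)
    fp = len(pred_masks) - tp
    fn = len(gt_masks) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    beta2 = 4.0  # beta = 2
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)

# Tiny usage example with 4x4 toy masks.
gt = [np.array([[1, 1, 0, 0]] * 4, dtype=bool)]
pred = [np.array([[1, 1, 1, 0]] * 4, dtype=bool)]
print(round(f2_from_matched_masks(pred, gt), 3))
```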
Pub Date: 2026-01-21 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1752580
Jiarui Chi
The rapid development of robo-advisory and quantitative investment has been accompanied by persistent concerns about limited personalization and the opacity of black-box models operating on multimodal financial information. This paper addresses these issues from a decision-support perspective by constructing FinErva, a multimodal chain-of-thought dataset tailored to financial applications. FinErva comprises 7,544 manually verified question-answer pairs, divided into two economically relevant tasks: contract and disclosure understanding (FinErva-Pact) and candlestick-chart-based technical analysis (FinErva-Price). Building on this dataset, the paper proposes a two-stage training framework, Supervised-CoT Learning followed by Self-CoT Refinement, and applies it to eight vision-language models, each with fewer than 0.8 billion parameters. Empirical results show that these lightweight models approach the performance of finance professionals and clearly outperform non-expert investors. Overall, the findings indicate that appropriately designed multimodal chain-of-thought supervision enables interpretable modeling of key research tasks such as contract review and chart interpretation under realistic computational and deployment constraints, providing new data and methodology for the development of personalized, explainable, and operationally feasible AI systems in investment advisory and risk management.
Title: Interpretable multimodal reasoning for robo-advisory: the FinErva framework. (Frontiers in Artificial Intelligence 8:1752580)
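The FinErva abstract above names a second training stage, Self-CoT Refinement, without implementation details. The sketch below shows one generic, rejection-sampling-style way such a stage is often realized: the model generates chain-of-thought candidates, only traces whose final answer matches the verified label are kept, and the survivors become targets for the next fine-tuning round. The function names, the single-trace-per-question rule, and the stub callables are assumptions for illustration, not the paper's method.

```python
from typing import Callable, Iterable

def self_cot_refinement(
    questions: Iterable[tuple[str, str]],          # (question, verified_answer) pairs
    generate_cot: Callable[[str], list[str]],      # model call: question -> CoT candidates
    extract_answer: Callable[[str], str],          # pulls the final answer out of a CoT trace
) -> list[tuple[str, str]]:
    """Keep only self-generated reasoning traces whose answer matches the label."""
    kept = []
    for question, verified_answer in questions:
        for trace in generate_cot(question):
            if extract_answer(trace).strip() == verified_answer.strip():
                kept.append((question, trace))   # (input, target) pair for the next SFT round
                break                            # one accepted trace per question is enough here
    return kept

# Toy usage with stub callables standing in for a fine-tuned vision-language model.
data = [("Is the breakout above resistance?", "yes")]
stub_generate = lambda q: ["The close exceeds the prior high, so: yes", "Unclear, so: no"]
stub_extract = lambda t: t.split(":")[-1]
print(self_cot_refinement(data, stub_generate, stub_extract))
```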
Pub Date: 2026-01-21 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1719955
Hafiza Sana Mansoor, Bambang Sumardjoko, Anam Sutopo
The aim of this systematic review is to examine and synthesize existing empirical evidence on external variables that influence students' attitudes toward the acceptance of artificial intelligence (AI) in improving English writing skills. This research offers a conceptual framework, the AI Constructivist Learning Model (AICLM), based on the Technology Acceptance Model (TAM) and Constructivist Learning Theory (CLT). Motivation, engagement, and societal expectations, drawn from CLT, are identified as external variables in TAM. These three constructs support active, autonomous, and student-centered learning. A systematic search of academic databases was conducted following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Sixteen empirical studies published from 2021 to 2025, indexed in Scopus, Web of Science, and Google Scholar, were included in this review. Articles were selected on the basis of keywords such as AI, English writing, TAM, and CLT. Findings indicate that students perceive AI as easy to use and useful when they have high motivation, strong engagement, and positive societal expectations. Therefore, motivation, engagement, and societal expectations are significant external variables that influence students' attitudes toward AI acceptance in improving English writing. AI integration in English writing development can be successful if the interaction between the constructs of TAM and CLT is well understood. CLT explains why and how students engage actively with AI tools. Students are more likely to accept AI if it increases motivation, enhances engagement, and fulfils societal expectations. This conceptual framework is significant for future researchers and teachers in designing effective AI-based writing instructional strategies and curricula.
Title: External variables influencing the attitudes of students toward AI acceptance in improving English writing: a systematic review. (Frontiers in Artificial Intelligence 8:1719955)
Pub Date: 2026-01-20 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1712614
José Joel Cruz-Tarrillo, Jose Tarrillo-Paredes, Karla Liliana Haro-Zea, Robin Alexander Díaz Díaz Saavedra
Artificial intelligence has become a crucial tool for effective customer management; therefore, this research aims to design and validate a scale measuring the adoption of artificial intelligence in the customer experience. The study follows a quantitative methodology with an instrumental design. A survey was conducted among 528 customers who frequently make online purchases. An exploratory factor analysis was then conducted to determine the factor structure of the scale, followed by a confirmatory factor analysis to validate the construct. In addition, an invariance analysis was conducted to determine whether the construct varies across groups. The results show a multidimensional scale of 16 items grouped into 4 factors (trust in AI, perception of AI, knowledge of AI, shopping experience). Each factor consists of four items, using a Likert-type response scale where 1 indicates "totally disagree" and 5 indicates "totally agree". In conclusion, the proposed scale is a valid measure. It can be used to continue exploring this concept in other regions, and it serves as a valuable tool for entrepreneurs to assess the adoption of this new technology effectively.
Title: Design and validation of the scale for the adoption of artificial intelligence in the online shopping experience of Peruvian consumers. (Frontiers in Artificial Intelligence 8:1712614)
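The scale-validation abstract above describes an exploratory analysis that recovered a 16-item, four-factor structure. A minimal sketch of that exploratory step is shown below using scikit-learn's FactorAnalysis on simulated data with a known block structure; the simulated responses, the four-factor setting, and the 0.4 loading cutoff are illustrative assumptions, and the real study would instead fit the 528 survey responses and follow up with a confirmatory model.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items, n_factors = 528, 16, 4

# Simulated responses with a known 4-factor structure standing in for the
# real 1-5 Likert survey data (each block of 4 items loads on one factor).
latent = rng.normal(size=(n_respondents, n_factors))
weights = np.zeros((n_factors, n_items))
for f in range(n_factors):
    weights[f, 4 * f:4 * (f + 1)] = 1.0
responses = latent @ weights + 0.5 * rng.normal(size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=n_factors, random_state=0)
fa.fit(responses)

# components_ has shape (n_factors, n_items); items whose absolute loading
# exceeds a cutoff (0.4 here, an arbitrary convention) group into a factor.
for f, row in enumerate(fa.components_, start=1):
    items = np.where(np.abs(row) >= 0.4)[0].tolist()
    print(f"Factor {f}: items {items}")
```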
Pub Date: 2026-01-20 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1738770
Joyce de Paula Souza, Jonathan Blum, Uko Maran, Sulev Sild, Louis Dawson, Aleksandra Čavoški, Laura Holden, Robert Lee, Veronika Karnel, Lukas Meusburger, Sandrine Fraize-Frontier, Alexander Walsh, Gilles Rivière, Giuseppa Raitano, Alessandra Roncaglioni, Emma Di Consiglio, Olga Tcheremenskaia, Cecilia Bossa, Lina Wendt-Rasch, Tomasz Puzyn, Ellen Fritsche
The integration of artificial intelligence (AI) into chemical risk assessment (CRA) is emerging as a powerful approach to enhance the interpretation of complex toxicological data and accelerate safety evaluations. However, the regulatory uptake of AI remains limited due to concerns about transparency, explainability, and trustworthiness. The European Partnership for the Assessment of Risks from Chemicals (PARC) project ReadyAI was established to address these challenges by developing a readiness scoring system to evaluate the maturity and regulatory applicability of AI-based models in CRA. The project unites a multidisciplinary consortium of academic, regulatory, and legal experts to define transparent and reproducible criteria encompassing data curation, model development, validation, explainability, and uncertainty quantification. Current efforts focus on identifying key priorities, including harmonized terminology, rigorous data quality standards, case studies, and targeted training of regulatory scientists. ReadyAI aims to deliver a practical, evidence-based scoring system that enables regulators to assess whether AI tools are sufficiently reliable for decision-making and guides developers toward compliance with regulatory expectations. By bridging the gap between AI innovation and regulatory applicability, ReadyAI contributes to the responsible integration of AI into chemical safety assessment frameworks, ultimately supporting human and environmental health protection.
Title: Advancing the implementation of artificial intelligence in regulatory frameworks for chemical safety assessment by defining robust readiness criteria. (Frontiers in Artificial Intelligence 8:1738770)
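The ReadyAI abstract above centers on a readiness scoring system spanning criteria such as data curation, model development, validation, explainability, and uncertainty quantification, but does not specify how scores would be aggregated. The sketch below shows one simple way such a rubric could be expressed; the 0-4 scale, the equal weights, and the readiness tiers are purely illustrative assumptions, not the project's actual scheme.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    score: int      # 0 (absent) to 4 (fully documented), an assumed scale
    weight: float   # relative importance; equal weights assumed here

def readiness_score(criteria: list[Criterion]) -> float:
    """Weighted score normalized to 0-100."""
    total_weight = sum(c.weight for c in criteria)
    weighted = sum(c.score / 4 * c.weight for c in criteria)
    return 100 * weighted / total_weight

# Criterion names follow the abstract; scores are made up for the demo.
assessment = [
    Criterion("data curation", 3, 1.0),
    Criterion("model development", 4, 1.0),
    Criterion("validation", 2, 1.0),
    Criterion("explainability", 2, 1.0),
    Criterion("uncertainty quantification", 1, 1.0),
]

score = readiness_score(assessment)
tier = "regulatory-ready" if score >= 75 else "needs further evidence" if score >= 50 else "early stage"
print(f"{score:.0f}/100 -> {tier}")
```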
Pub Date: 2026-01-20 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1729796
Aleena Tanveer, Raja Hashim Ali, Jitendra Majhi, Moumita Mukherjee
Background: Despite national screening initiatives, coverage of breast cancer screening is low, and late-stage diagnosis remains a major contributor to mortality among Indian women. Accurate, precise, and actionable prediction of socioeconomic and structural inequities in screening uptake is critical for formulating equitable cancer control policies. This study aimed to apply machine learning to predict determinants of screening uptake, estimate inequalities in uptake and their concentration indices, and identify contributing factors to inequity using concentration index decomposition across economic, educational, and caste gradients.
Methods: Cross-sectional National Family Health Survey (NFHS-5) 2019-2021 data, comprising 68,526 women aged 30-49 years, were used for the study. Levesque's framework of healthcare access directed variable selection across the approachability, acceptability, affordability, availability, and appropriateness dimensions to decide on the set of explanatory covariates. We applied three single learners (Logistic Regression (LR), Naïve Bayes (NB), and Decision Tree (DT)) and two ensemble learners (Random Forest (RF) and XGBoost (XGB)) to train on balanced weighted data. Given the risk of overfitting after the synthetic minority oversampling technique (SMOTE), predictive performance was validated using 10-fold cross-validation. Five evaluation metrics were compared to select the best learner predicting screening uptake. Inequality was measured using conventional and algorithm-based concentration indices and decomposed using algorithm-based feature importance and feature-specific inequality scores to estimate contributions to three inequality-health gradients in screening access.
Findings: In India, remarkably low (0.9%) screening uptake with clear economic, educational, and social disparities is evident. Although Random Forest and XGBoost performed with higher predictive accuracy (96%) and explainability (AUROC = 0.99), Decision Tree brought stable generalizability (mean AUROC = 0.995) after 10-fold validation. Feature importance results indicate that education, autonomy, interactions with community health workers, and provincial and spatial features explain most of the variability. Proximity, transport availability, hesitancy in unaccompanied care seeking, and financial constraints were access barriers with limited contribution to the variation in screening uptake. Concentration index estimates reflect a pro-rich (0.1, p < 0.001), pro-educated (0.182, p < 0.001), and pro-marginalized social gradient (-0.011, p < 0.05). Tree-based decomposition predicts higher affordability, and education deepens pro-rich and pro-educated inequalities but can be an effective policy instrument to mitigate social position-based disparities if contributions can be increased. Access-related barriers intensified inequality across all gradients. Nevertheless, factors th
Title: Predicting and identifying correlates of inequalities in breast cancer screening uptake using national level data from India. (Frontiers in Artificial Intelligence 8:1729796)
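The screening-uptake abstract above reports conventional concentration indices across wealth and education gradients. The snippet below sketches the standard rank-based formula, C = (2 / mu) * cov(y, r), where y is the screening outcome, mu its mean, and r the fractional rank in the living-standard distribution; the toy data are invented, and this is the textbook unweighted estimator rather than necessarily the exact (survey-weighted) estimator used in the paper.

```python
import numpy as np

def concentration_index(outcome: np.ndarray, ses: np.ndarray) -> float:
    """Unweighted concentration index: 2 * cov(outcome, fractional SES rank) / mean(outcome).

    Positive values indicate the outcome is concentrated among the better-off
    (pro-rich), negative values among the worse-off.
    """
    order = np.argsort(ses, kind="stable")
    n = len(ses)
    rank = np.empty(n)
    rank[order] = (np.arange(n) + 0.5) / n      # fractional ranks in (0, 1)
    y = outcome.astype(float)
    return 2.0 * np.cov(y, rank, bias=True)[0, 1] / y.mean()

# Toy example: uptake (0/1) rising with a wealth score -> positive (pro-rich) index.
rng = np.random.default_rng(42)
wealth = rng.normal(size=5000)
uptake = (rng.random(5000) < 0.02 + 0.02 * (wealth > 1)).astype(int)
print(round(concentration_index(uptake, wealth), 3))
```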
The growing integration of AI into educational and professional settings raises urgent questions about how human creativity evolves when intelligent systems guide, constrain, or accelerate the design process. Generative AI offers structured suggestions and rapid access to ideas, but its role in fostering genuine innovation remains contested. This paper investigates the dynamics of human-AI collaboration in challenge-based design experiments, applying established creativity metrics (fluency, flexibility, originality, and elaboration) to evaluate outcomes and implications in an engineering education context. Through an exploratory quasi-experimental study, a comparison of AI-assisted and human-only teams was conducted across four dimensions of creative performance: quantity, variety, uniqueness, and quality of design solutions. Findings point to a layered outcome: although AI accelerated idea generation, it also encouraged premature convergence, narrowed exploration, and compromised functional refinement. Human-only teams engaged in more iterative experimentation and produced designs of higher functional quality and greater ideational diversity. Participants' self-perceptions of creativity remained stable across both conditions, highlighting the risk of cognitive offloading, where reliance on AI may reduce genuine creative engagement while masking deficits through inflated confidence. Importantly, cognitive offloading is not directly measured in this study; rather, it is introduced here as a theoretically grounded interpretive explanation that helps contextualize the observed disconnect between performance outcomes and self-perceived creativity. These results present both opportunities and risks. On the one hand, AI can support ideation and broaden access to concepts; on the other, overreliance risks weakening iterative learning and the development of durable creative capacities. The ethical implications are significant, raising questions about accountability and educational integrity when outcomes emerge from human-AI co-creation. The study argues for process-aware and ethically grounded frameworks that balance augmentation with human agency, supporting exploration without eroding the foundations of creative problem-solving. The study consolidates empirical findings with conceptual analysis, advancing the discussion on when and how AI should guide the creative process and providing insights for the broader debate on the future of human-AI collaboration.
Title: AI-assisted design synthesis and human creativity in engineering education. (Frontiers in Artificial Intelligence 9:1714523)
Authors: Mariza Tsakalerou, Saltanat Akhmadi, Aruzhan Balgynbayeva, Yerdaulet Kumisbek
Pub Date: 2026-01-20 | DOI: 10.3389/frai.2026.1714523
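The engineering-education abstract above scores teams on fluency, flexibility, originality, and elaboration. The snippet below shows one common Torrance-style way the first three are operationalized over ideas tagged with categories (elaboration, the amount of detail per idea, is omitted for brevity); the tagging scheme, the rarity-based originality score, and the toy data are illustrative assumptions rather than the study's actual rubric.

```python
from collections import Counter

def creativity_scores(team_ideas, all_ideas):
    """team_ideas / all_ideas: lists of (idea_label, category) tuples."""
    fluency = len(team_ideas)                                   # how many ideas
    flexibility = len({cat for _, cat in team_ideas})           # how many distinct categories
    freq = Counter(label for label, _ in all_ideas)
    n_total = len(all_ideas)
    # Originality: rarer ideas (relative to the whole pool) score closer to 1.
    originality = sum(1 - freq[label] / n_total for label, _ in team_ideas) / max(fluency, 1)
    return {"fluency": fluency, "flexibility": flexibility, "originality": round(originality, 2)}

pool = [("gripper arm", "mechanics"), ("gripper arm", "mechanics"),
        ("suction cup", "mechanics"), ("voice control", "interface")]
team = [("gripper arm", "mechanics"), ("voice control", "interface")]
print(creativity_scores(team, pool))
```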
Rationale and objectives: To address the challenges in detecting anterior cruciate ligament (ACL) lesions in knee MRI examinations, including difficulties in identifying tiny lesions, insufficient extraction of low-contrast features, and poor modeling of irregular lesion morphologies, and to provide a precise and efficient auxiliary diagnostic tool for clinical practice.
Materials and methods: An enhanced framework based on YOLOv10 is constructed. The backbone network is optimized using the C2f-SimAM module to enhance multi-scale feature extraction and spatial attention; an Adaptive Spatial Fusion (ASF) module is introduced in the neck to better fuse multi-scale spatial features; and a novel hybrid loss function combining Focal-EIoU and KPT Loss is employed. To ensure rigorous statistical evaluation, we utilized a five-fold cross-validation strategy on a dataset of 917 cases.
Results: Evaluation on the KneeMRI dataset demonstrates that the proposed model achieves statistically significant improvements over standard YOLOv10, Faster R-CNN, and Transformer-based detectors (RT-DETR). Specifically, mAP@0.5 is increased by 1.3% (p < 0.05) compared to the standard YOLOv10, and mAP@0.5:0.95 is improved by 2.5%. Qualitative analysis further confirms the model's ability to reduce false negatives in small, low-contrast tears.
Conclusion: This framework effectively connects general object detection models with the specific requirements of medical imaging, providing a precise and efficient solution for diagnosing ACL injuries in routine clinical workflows.
Title: An improved YOLOv10-based framework for knee MRI lesion detection with enhanced small object recognition and low contrast feature extraction. (Frontiers in Artificial Intelligence 8:1675834)
Authors: Hongwei Yang, Wenqu Song, Tiankai Jiang, Chuanhao Wang, Luping Zhang, Zhian Cai, Yuhan Sun, Qing Zhao, Yuyu Sun
Pub Date: 2026-01-20 | DOI: 10.3389/frai.2025.1675834
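The knee-MRI abstract above mentions a hybrid loss built on Focal-EIoU. As a rough sketch of the EIoU part only, the function below follows one commonly cited formulation: an IoU term plus center-distance and width/height penalties normalized by the smallest enclosing box, with an optional focal weighting by IoU**gamma. It is reconstructed from the general object-detection literature, not from this paper, so the exact terms, the gamma value, and the (x1, y1, x2, y2) box format are assumptions.

```python
import numpy as np

def focal_eiou_loss(pred, target, gamma=0.5, eps=1e-9):
    """pred, target: arrays [x1, y1, x2, y2]. Returns a scalar loss for one box pair."""
    # Intersection over union.
    ix1, iy1 = max(pred[0], target[0]), max(pred[1], target[1])
    ix2, iy2 = min(pred[2], target[2]), min(pred[3], target[3])
    inter = max(ix2 - ix1, 0.0) * max(iy2 - iy1, 0.0)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_t = (target[2] - target[0]) * (target[3] - target[1])
    iou = inter / (area_p + area_t - inter + eps)

    # Smallest enclosing box.
    cw = max(pred[2], target[2]) - min(pred[0], target[0])
    ch = max(pred[3], target[3]) - min(pred[1], target[1])

    # Center-distance, width and height penalties, each normalized by the enclosing box.
    center_p = np.array([(pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2])
    center_t = np.array([(target[0] + target[2]) / 2, (target[1] + target[3]) / 2])
    dist = np.sum((center_p - center_t) ** 2) / (cw ** 2 + ch ** 2 + eps)
    dw = ((pred[2] - pred[0]) - (target[2] - target[0])) ** 2 / (cw ** 2 + eps)
    dh = ((pred[3] - pred[1]) - (target[3] - target[1])) ** 2 / (ch ** 2 + eps)

    eiou = 1.0 - iou + dist + dw + dh
    return (iou ** gamma) * eiou      # focal weighting down-weights low-overlap boxes

print(round(focal_eiou_loss(np.array([0, 0, 10, 10.]), np.array([1, 1, 11, 11.])), 4))
```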
Pub Date: 2026-01-16 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1600044
Morgan Williams, Uma Rani
Introduction: The business model of multi-sided digital labor platforms relies on maintaining a balance between workers and customers or clients to sustain operations. These platforms initially leveraged venture capital to attract workers by providing them with incentives and the promise of flexibility, creating lock-in effects to consolidate their market power and enable monopolistic practices. As platforms mature, they increasingly implement algorithmic management and control mechanisms, such as rating systems, which restrict worker autonomy, access to work and flexibility. Despite limited bargaining power, workers have developed both individual and collective strategies to counteract these algorithmic restrictions.
Methods: This article employs a structured synthesis, drawing on existing academic literature as well as surveys conducted by the International Labour Office (ILO) between 2017 and 2023, to examine how platform workers utilize a combination of informal and formal forms of resistance to build resilience against algorithmic disruptions.
Results: The analysis covers different sectors (freelance and microtask work, taxi and delivery services, and domestic work and beauty care platforms), offering insights into the changing dynamics of worker agency that have enabled resilience-building among workers on digital labor platforms. In the face of significant barriers to carrying out formal acts of resistance, these workers often turn to informal acts of resistance, frequently mediated by social media, to adapt to changes in the platforms' algorithms and maintain their well-being.
Discussion: Platform workers increasingly have a diverse array of tools to exercise their agency physically and virtually. However, the process of establishing resilience in such conditions is often not straightforward. As platforms counteract workers' acts of resistance, workers must continue to develop new and innovative strategies to strengthen their resilience. Such a complex and nuanced landscape merits continued research and analysis.
Title: Resilience through resistance: the role of worker agency in navigating algorithmic control. (Frontiers in Artificial Intelligence 8:1600044)
Pub Date: 2026-01-16 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1734013
M Mayuranathan, V Anitha, P Nehru, Bosko Nikolic, Miloš Janjić, Nebojsa Bacanin
Introduction: Cardiovascular diseases (CVDs) are a major cause of morbidity and mortality across all global regions, and thus there is a pressing need to develop early detection and effective management approaches. Traditional cardiovascular monitoring systems often lack real-time analysis and individualized insight, which leads to delayed interventions. Moreover, data privacy and security remain among the greatest issues in digital healthcare applications.
Methods: This research presents a CVD detection model that combines Internet of Things (IoT)-based wearable devices, electronic clinical records, and blockchain-based access control. The system starts by registering patients and medical personnel and then proceeds to collect physiological as well as clinical data. Kalman filtering improves data reliability in the pre-processing stage. Shallow and deep feature extraction methods are used to describe complicated patterns in the data. A Refracted Sand Cat Swarm Optimization (SCSO) algorithm is used for feature selection. A new TriBoostCardio Ensemble model (CatBoost, AdaBoost, and LogitBoost) conducts the classification task and enhances the predictive accuracy. Smart contracts provide safe and transparent access to health information.
Results: Experimental results show that the proposed framework achieves high predictive accuracy and detects cardiovascular disease earlier than traditional approaches. The combination of SCSO feature selection and the TriBoostCardio Ensemble model improves the robustness of the model and the precision of classification.
Discussion: Besides improving the accuracy and timeliness of CVD detection, the presented framework also addresses important problems related to data privacy and integrity with the help of blockchain-based access control. Combining smart feature optimization, ensemble learning, and secure data management, this solution offers a stable and trustworthy addition to current healthcare systems.
Title: Triboostcardio ensemble model for cardiovascular disease detection using advanced blockchain-enabled health monitoring. (Frontiers in Artificial Intelligence 8:1734013)
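The TriBoostCardio abstract above combines CatBoost, AdaBoost, and LogitBoost into a single ensemble but does not state how their outputs are merged. The sketch below shows a soft-voting combination, which is one plausible reading; to stay runnable with scikit-learn alone, CatBoost and LogitBoost are swapped for GradientBoostingClassifier and HistGradientBoostingClassifier as stand-ins, and the synthetic data, 80/20 split, and default hyperparameters are all assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              HistGradientBoostingClassifier, VotingClassifier)
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the wearable + clinical feature table (1 = CVD event).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           weights=[0.7, 0.3], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    stratify=y, random_state=0)

# Soft voting averages the three boosters' predicted probabilities.
ensemble = VotingClassifier(
    estimators=[
        ("gbdt", GradientBoostingClassifier(random_state=0)),        # stand-in for CatBoost
        ("ada", AdaBoostClassifier(random_state=0)),
        ("hist", HistGradientBoostingClassifier(random_state=0)),    # stand-in for LogitBoost
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("hold-out accuracy:", round(accuracy_score(y_test, ensemble.predict(X_test)), 3))
```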