Pub Date: 2025-02-19. eCollection Date: 2024-01-01. DOI: 10.3389/frai.2024.1510410
Sheetal Mahadik, Madhuri Gedam, Deven Shah
Environmental sustainability is a pressing global concern, with energy conservation and efficient utilization playing a key role in its achievement. Smart grid technology has emerged as a promising solution, facilitating energy efficiency, promoting renewable energy integration, and fostering consumer engagement. However, the addition of intelligent sensors to these grids has the potential to greatly advance sustainability initiatives. This paper highlights the role of smart grid sensors in addressing challenges such as energy losses, demand-response limitations, and renewable energy integration. It explains how these sensors enable real-time monitoring, fault detection, and optimal load management to improve grid performance and reduce environmental impact. The study also examines how pairing AI with smart grid sensors, enabling real-time data monitoring, optimal energy distribution, and proactive decision support, might improve environmental sustainability. Furthermore, it reviews advancements in sensor technologies in India, including pilot projects such as the BESCOM initiative in Bangalore and Tata Power-DDL's renewable energy trading in Delhi, to showcase their practical applications and outcomes. Smart sensors provide accurate tracking of energy usage trends, enhance load distribution, and advance the sensible application of renewable energy resources. By engaging customers and enabling demand-response systems, these sensors help cut energy waste and carbon emissions. Through an extensive analysis of the literature, this study addresses the critical role of smart sensors in overcoming the shortcomings of conventional grids and securing a more resilient, efficient, and sustainable energy future. Grid-enabled systems, such as electric water heaters with sensors, can achieve energy savings of up to 29%.
The integration of renewable energy sources through sensors enhances system efficiency, reduces reliance on fossil fuels, and optimizes supply and demand. Utilizing Internet of Things (IoT) technology enables precise monitoring of air quality, water consumption, and resource management, significantly improving environmental oversight. This integration can lead to a reduction in greenhouse gas emissions by up to 20% and water usage by 30%. Lastly, the paper discusses how integrating artificial intelligence with smart grid sensors can enhance predictive maintenance, energy management, and cybersecurity, further strengthening the case for their deployment.
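The demand-response behavior described above can be sketched as a simple control rule for a sensor-equipped electric water heater. The comfort band, price threshold, and function name below are illustrative assumptions, not the systems studied in the paper:

```python
def heater_schedule(price_per_kwh, tank_temp_c,
                    comfort=(45.0, 60.0), price_cap=0.15):
    """Toy demand-response rule for a sensor-equipped water heater.

    All thresholds are illustrative assumptions: heat whenever the tank
    drops below the comfort band, top up only while grid prices are low,
    and otherwise defer the load to shave peak demand.
    """
    lo, hi = comfort
    if tank_temp_c < lo:
        return "ON"    # comfort override: always restore minimum temperature
    if tank_temp_c < hi and price_per_kwh <= price_cap:
        return "ON"    # cheap off-peak power: top up the tank now
    return "OFF"       # defer discretionary heating during peak pricing
```

Shifting discretionary heating away from price peaks is one mechanism behind the energy-savings figures cited for grid-enabled appliances.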
Title: Environment sustainability with smart grid sensor. Frontiers in Artificial Intelligence, vol. 7, article 1510410. Full text: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11879990/pdf/
Pub Date: 2025-02-18. eCollection Date: 2025-01-01. DOI: 10.3389/frai.2025.1502504
Karthik Menon, Thomas Tcheng, Cairn Seale, David Greene, Martha Morrell, Sharanya Arcot Desai
Brain stimulation has become a widely accepted treatment for neurological disorders such as epilepsy and Parkinson's disease. These devices not only deliver therapeutic stimulation but also record brain activity, offering valuable insights into neural dynamics. However, brain recordings during stimulation are often blanked or contaminated by artifacts, posing significant challenges for analyzing the acute effects of stimulation. To address these challenges, we propose a transformer-based model, Stim-BERT, trained on a large intracranial EEG (iEEG) dataset to reconstruct brain activity lost during stimulation blanking. To train the Stim-BERT model, 4,653,720 iEEG channels from 380 RNS system patients were tokenized into 3 (or 4) frequency band bins using 1 s non-overlapping windows, resulting in a total vocabulary size of 1,000 (or 10,000). Stim-BERT leverages self-supervised learning with masked tokens, inspired by BERT's success in natural language processing, and shows significant improvements over traditional interpolation methods, especially for longer blanking periods. These findings highlight the potential of transformer models for filling in missing time-series neural data, advancing neural signal processing and our efforts to understand the acute effects of brain stimulation.
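The tokenization step can be illustrated with a minimal sketch: each 1 s window is reduced to per-band power values, each quantized into 10 levels, so 3 bands yield a vocabulary of 10^3 = 1,000 tokens (and 4 bands would yield 10,000). The band edges, sampling rate, and quantization rule below are assumptions, not the paper's exact recipe:

```python
import numpy as np

def tokenize_window(window, fs=250, n_levels=10,
                    bands=((1, 8), (8, 30), (30, 70))):
    """Map one 1 s iEEG window to a single integer token.

    Assumed scheme: quantize the log-power of each frequency band into
    n_levels bins; 3 bands x 10 levels gives a 1,000-token vocabulary,
    matching the smaller of the paper's two vocabulary sizes.
    """
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    power = np.abs(np.fft.rfft(window)) ** 2
    token = 0
    for lo, hi in bands:
        band = power[(freqs >= lo) & (freqs < hi)].sum()
        # crude log-scale quantization (illustrative, not calibrated)
        level = int(np.clip(np.log10(band + 1e-12) + 6.0, 0, n_levels - 1))
        token = token * n_levels + level
    return token
```

A masked-token objective would then hide the tokens inside the blanked interval and train the transformer to predict them from context.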
Title: Reconstructing signal during brain stimulation with Stim-BERT: a self-supervised learning model trained on millions of iEEG files. Frontiers in Artificial Intelligence, vol. 8, article 1502504. Full text: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11876146/pdf/
Riding a motorcycle involves risks that can be minimized through advanced sensing and response systems to assist the rider. The use of camera-collected images to monitor road conditions can aid in the development of tools designed to enhance rider safety and prevent accidents. This paper proposes a method for developing deep learning models designed to operate efficiently on embedded systems like the Raspberry Pi, facilitating real-time decisions that consider the road condition. Our research tests and compares several state-of-the-art convolutional neural network architectures, including EfficientNet and Inception, to determine which offers the best balance between inference time and accuracy. Specifically, we measured top-1 accuracy and inference time on a Raspberry Pi, identifying EfficientNetV2 as the most suitable model due to its optimal trade-off between performance and computational demand. The model's top-1 accuracy significantly outperformed other models while maintaining competitive inference speeds, making it ideal for real-time applications in traffic-dense urban settings.
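The accuracy-versus-latency evaluation can be sketched as a small benchmarking loop; `model_fn` is a stand-in for any compiled network's predict call (the paper's actual CNNs on a Raspberry Pi are not reproduced here):

```python
import time

def benchmark(model_fn, samples, labels, warmup=3):
    """Measure top-1 accuracy and mean per-sample latency for a classifier.

    Warm-up runs are excluded so one-time initialization and cache costs
    do not skew the timing, which matters on embedded hardware.
    """
    for x in samples[:warmup]:
        model_fn(x)
    correct, elapsed = 0, 0.0
    for x, y in zip(samples, labels):
        t0 = time.perf_counter()
        pred = model_fn(x)
        elapsed += time.perf_counter() - t0
        correct += int(pred == y)
    n = len(samples)
    return correct / n, elapsed / n  # (top-1 accuracy, seconds per sample)
```

Running such a loop per architecture is what lets one pick the model with the best accuracy/latency trade-off for on-device use.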
Title: Advanced driving assistance integration in electric motorcycles: road surface classification with a focus on gravel detection using deep learning. Authors: Ranan Venancio, Vitor Filipe, Adelaide Cerveira, Lio Gonçalves. Pub Date: 2025-02-14. DOI: 10.3389/frai.2025.1520557. Frontiers in Artificial Intelligence, vol. 8, article 1520557. Full text: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11868262/pdf/
Pub Date: 2025-02-14. eCollection Date: 2025-01-01. DOI: 10.3389/frai.2025.1523336
A M Mutawa, Ayshah Alrumaih
The metrical structure of classical Arabic poetry, deeply rooted in its rich literary heritage, is governed by 16 distinct meters, making its analysis both a linguistic and computational challenge. In this study, a deep learning-based approach was developed to accurately determine the meter of Arabic poetry using TensorFlow and a large dataset. Character-level encoding was employed to convert text into integers, enabling the classification of both full-verse and half-verse data. In particular, the data were evaluated without removing diacritics, preserving critical linguistic features. A train-test-split method with a 70-15-15 division was utilized, with 15% of the total dataset reserved as unseen test data for evaluation across all models. Multiple deep learning architectures, including long short-term memory (LSTM), gated recurrent units (GRU), and bidirectional long short-term memory (Bi-LSTM), were tested. Among these, the bidirectional long short-term memory model achieved the highest accuracy, with 97.53% for full-verse and 95.23% for half-verse data. This study introduces an effective framework for Arabic meter classification, contributing significantly to the application of artificial intelligence in natural language processing and text analytics.
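The preprocessing described above, character-level integer encoding that preserves diacritics, plus a 70-15-15 split, can be sketched as follows. The padding convention and function names are assumptions for illustration:

```python
import random

def build_vocab(verses):
    """Character-to-integer map; index 0 is reserved for padding (assumed)."""
    chars = sorted({ch for verse in verses for ch in verse})
    return {ch: i + 1 for i, ch in enumerate(chars)}

def encode(verse, vocab, max_len):
    """Integer-encode one verse, keeping diacritics, padded/truncated to max_len."""
    ids = [vocab[ch] for ch in verse][:max_len]
    return ids + [0] * (max_len - len(ids))

def split_70_15_15(items, seed=42):
    """Shuffle and split into train/validation/test as 70/15/15."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    a, b = int(0.70 * n), int(0.85 * n)
    return items[:a], items[a:b], items[b:]
```

Because diacritics are ordinary Unicode code points, they pass through the encoder like any other character, which is what preserves the metrically critical vowel marks.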
Title: Determining the meter of classical Arabic poetry using deep learning: a performance analysis. Frontiers in Artificial Intelligence, vol. 8, article 1523336. Full text: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11868067/pdf/
Pub Date: 2025-02-14. eCollection Date: 2025-01-01. DOI: 10.3389/frai.2025.1521063
Raúl Jimenez-Cruz, Cornelio Yáñez-Márquez, Miguel Gonzalez-Mendoza, Yenni Villuendas-Rey, Raúl Monroy
This paper presents the development of the N-Spherical Minimalist Machine Learning (MML) classifier, an innovative model within the Minimalist Machine Learning paradigm. Using N-spherical coordinates and concepts from metaheuristics and associative models, this classifier effectively addresses challenges such as data dimensionality and class imbalance in complex datasets. Performance evaluations using the F1 measure and balanced accuracy demonstrate its superior efficiency and robustness compared to state-of-the-art classifiers. Statistical validation is conducted using the Friedman and Holm tests. Although currently limited to binary classification, this work highlights the potential of minimalist approaches in machine learning for classification of high-dimensional and imbalanced data. Future extensions aim to include multi-class problems and mechanisms for handling categorical data.
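The coordinate change at the heart of the classifier can be sketched as the standard Cartesian-to-N-spherical mapping (one radius plus n-1 angles). This is the generic transform only, not the full MML classifier:

```python
import numpy as np

def to_nspherical(x):
    """Cartesian -> N-spherical: returns (radius, array of n-1 angles)."""
    x = np.asarray(x, dtype=float)
    r = float(np.linalg.norm(x))
    angles = [np.arctan2(np.linalg.norm(x[i + 1:]), x[i])
              for i in range(len(x) - 2)]
    angles.append(np.arctan2(x[-1], x[-2]))  # final angle keeps the sign of x_n
    return r, np.array(angles)

def to_cartesian(r, angles):
    """Inverse mapping: x_i = r * sin(a_1)...sin(a_{i-1}) * cos(a_i)."""
    n = len(angles) + 1
    x, sin_prod = np.empty(n), 1.0
    for i, a in enumerate(angles):
        x[i] = r * sin_prod * np.cos(a)
        sin_prod *= np.sin(a)
    x[-1] = r * sin_prod
    return x
```

Separating magnitude (radius) from direction (angles) is one way such a representation can help with high-dimensional, imbalanced data.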
Title: Spherical model for Minimalist Machine Learning paradigm in handling complex databases. Frontiers in Artificial Intelligence, vol. 8, article 1521063. Full text: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11868079/pdf/
Pub Date: 2025-02-13. eCollection Date: 2025-01-01. DOI: 10.3389/frai.2025.1517918
Mini Han Wang, Xudong Jiang, Peijin Zeng, Xinyue Li, Kelvin Kam-Lung Chong, Guanghui Hou, Xiaoxiao Fang, Yang Yu, Xiangrong Yu, Junbin Fang, Yi Pan
Introduction: The rapid evolution of the Internet of Things (IoT) and Artificial Intelligence (AI) has opened new possibilities for public healthcare. Effective integration of these technologies is essential to ensure precise and efficient healthcare delivery. This study explores the application of IoT-enabled, AI-driven systems for detecting and managing Dry Eye Disease (DED), emphasizing the use of prompt engineering to enhance system performance.
Methods: A specialized prompt mechanism was developed utilizing OpenAI GPT-4.0 and ERNIE Bot-4.0 APIs to assess the urgency of medical attention based on 5,747 simulated patient complaints. A Bidirectional Encoder Representations from Transformers (BERT) machine learning model was employed for text classification to differentiate urgent from non-urgent cases. User satisfaction was evaluated through composite scores derived from Service Experiences (SE) and Medical Quality (MQ) assessments.
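The prompted/non-prompted contrast can be illustrated with a minimal query builder. The template wording and the URGENT/NON-URGENT output convention below are assumptions for illustration, not the study's actual prompt mechanism:

```python
def build_query(complaint, prompted=True):
    """Compose a triage query; the prompt template is an assumed example."""
    if not prompted:
        return complaint  # baseline: raw complaint, no engineered context
    return (
        "You are an ophthalmology triage assistant for Dry Eye Disease.\n"
        "Classify the complaint strictly as URGENT or NON-URGENT,\n"
        "then give a one-sentence rationale.\n"
        f"Complaint: {complaint}"
    )

def parse_urgency(reply):
    """Map a free-text model reply onto the binary label used downstream."""
    head = reply.upper()
    return "URGENT" if "NON-URGENT" not in head and "URGENT" in head else "NON-URGENT"
```

Constraining the output format this way is what makes the LLM's replies usable as labels for a downstream classifier.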
Results: The comparison between prompted and non-prompted queries revealed a significant accuracy increase from 80.1% to 99.6%. However, this improvement was accompanied by a notable rise in response time, resulting in a decrease in SE scores (95.5 to 84.7) but a substantial increase in MQ satisfaction (73.4 to 96.7). These findings indicate a trade-off between accuracy and user satisfaction.
Discussion: The study highlights the critical role of prompt engineering in improving AI-based healthcare services. While enhanced accuracy is achievable, careful attention must be given to balancing response time and user satisfaction. Future research should optimize prompt structures, explore dynamic prompting approaches, and prioritize real-time evaluations to address the identified challenges and maximize the potential of IoT-integrated AI systems in medical applications.
Title: Balancing accuracy and user satisfaction: the role of prompt engineering in AI-driven healthcare solutions. Frontiers in Artificial Intelligence, vol. 8, article 1517918. Full text: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11865202/pdf/
Pub Date: 2025-02-13. eCollection Date: 2025-01-01. DOI: 10.3389/frai.2025.1520592
Fawad Naseer, Abdullah Addas, Muhammad Tahir, Muhammad Nasir Khan, Noreen Sattar
The need for effective and personalized in-home solutions will continue to rise, with the world's population of elderly individuals expected to surpass 1.6 billion by 2050. This study presents a system that merges a Generative Adversarial Network (GAN) with an IoT-enabled adaptive artificial intelligence (AI) framework to transform personalized elderly care within the smart home environment. GANs are applied to generate synthetic health data, which addresses data scarcity, especially for rare but critical conditions, and enhances the predictive accuracy of the system. Continuous data collection from IoT sensors, including wearable sensors (e.g., heart rate monitors, pulse oximeters) and environmental sensors (e.g., temperature, humidity, and gas detectors), enables the system to track vital health indicators, activities, and the environment, providing early warnings and personalized suggestions through real-time analysis. The AI adapts to each individual's unique health and behavioral patterns, offering personalized prompts and reminders and sending emergency alert notifications to caregivers or health providers when required. In a large-scale real-world test setup, the system demonstrated significant improvements, including 30% faster detection of risk conditions and 25% faster response times compared with existing solutions. Applying GANs to data synthesis enables more robust and accurate predictive models while preserving privacy through the generation of realistic yet anonymized health profiles. The system merges state-of-the-art AI with GAN technology to advance elderly care in a proactive, dignified, and secure environment that improves quality of life and independence for the aging individual. The work hence provides a novel framework for applying GANs in personalized healthcare and suggests this will help reshape elderly care in IoT-enabled "smart" homes.
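The early-warning path from sensor snapshot to caregiver alert can be sketched as a simple threshold check. The field names and numeric thresholds are illustrative assumptions; the described system would personalize them from each resident's learned baseline:

```python
def check_snapshot(reading,
                   hr_range=(50, 110), spo2_min=92, gas_ppm_max=50):
    """Flag anomalies in one IoT sensor snapshot.

    Thresholds and field names are illustrative assumptions, combining
    wearable vitals (heart rate, SpO2) with an environmental gas reading.
    """
    alerts = []
    if not hr_range[0] <= reading["heart_rate"] <= hr_range[1]:
        alerts.append("heart_rate")
    if reading["spo2"] < spo2_min:
        alerts.append("spo2")
    if reading["gas_ppm"] > gas_ppm_max:
        alerts.append("gas")
    return alerts  # non-empty list -> notify caregiver or health provider
```

In the full system, a predictive model trained partly on GAN-synthesized data would replace these static thresholds.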
Title: Integrating generative adversarial networks with IoT for adaptive AI-powered personalized elderly care in smart homes. Frontiers in Artificial Intelligence, vol. 8, article 1520592. Full text: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11865026/pdf/
Pub Date: 2025-02-13. eCollection Date: 2025-01-01. DOI: 10.3389/frai.2025.1484260
Marco Piastra, Patrizia Catellani
This study investigates the potential of ChatGPT 4 in the assessment of personality traits based on written texts. Using two publicly available datasets containing both written texts and self-assessments of the authors' psychological traits based on the Big Five model, we aimed to evaluate the predictive performance of ChatGPT 4. For each sample text, we asked for numerical predictions on an eleven-point scale and compared them with the self-assessments. We also asked for ChatGPT 4 confidence scores on an eleven-point scale for each prediction. To keep the study within a manageable scope, a zero-prompt modality was chosen, although more sophisticated prompting strategies could potentially improve performance. The results show that ChatGPT 4 has moderate but significant abilities to automatically infer personality traits from written text. However, it also shows limitations in recognizing whether the input text is appropriate or representative enough to make accurate inferences, which could hinder practical applications. Furthermore, the results suggest that improved benchmarking methods could increase the efficiency and reliability of the evaluation process. These results pave the way for a more comprehensive evaluation of the capabilities of Large Language Models in assessing personality traits from written texts.
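The zero-prompt protocol, asking for an eleven-point trait score plus an eleven-point confidence score, can be sketched with a prompt builder and a reply parser. The exact wording and the `score=<n> confidence=<n>` reply convention are assumptions for illustration, not the study's actual prompts:

```python
import re

BIG_FIVE = ["Openness", "Conscientiousness", "Extraversion",
            "Agreeableness", "Neuroticism"]

def build_prompt(text, trait):
    """Zero-shot ('zero-prompt') request; the template is an assumed example."""
    return (f"On an eleven-point scale from 0 to 10, rate the author's "
            f"{trait} and your confidence in that rating. "
            f"Answer exactly as 'score=<n> confidence=<n>'.\n\nText: {text}")

def parse_reply(reply):
    """Extract (score, confidence); None if the reply is malformed or out of range."""
    m = re.search(r"score=(\d+)\s+confidence=(\d+)", reply)
    if not m:
        return None
    score, conf = int(m.group(1)), int(m.group(2))
    return (score, conf) if score <= 10 and conf <= 10 else None
```

Parsed scores can then be correlated with the authors' Big Five self-assessments to quantify predictive performance.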
Title: On the emergent capabilities of ChatGPT 4 to estimate personality traits. Frontiers in Artificial Intelligence, vol. 8, article 1484260. Full text: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11865037/pdf/
Pub Date: 2025-02-12 · eCollection Date: 2025-01-01 · DOI: 10.3389/frai.2025.1523390
Eduardo Cisternas Jiménez, Fang-Fang Yin
Intensity-Modulated Radiation Therapy (IMRT) requires manual adjustment of numerous treatment plan parameters (TPPs) through a trial-and-error process to deliver precise radiation doses to the target while minimizing exposure to surrounding healthy tissues. The goal is to achieve a dose distribution that adheres to a prescribed plan tailored to each patient. Developing an automated approach to optimize patient-specific prescriptions is valuable in scenarios where trade-off selection is uncertain and varies among patients. This study presents a proof-of-concept artificial intelligence (AI) system based on an Adaptive Neuro-Fuzzy Inference System (ANFIS) to guide IMRT planning and achieve optimal, patient-specific prescriptions aligned with a radiation oncologist's treatment objectives. We developed an in-house ANFIS-AI system utilizing Prescription Dose (PD) constraints to guide the optimization process toward achievable prescriptions. Mimicking human planning behavior, the AI system adjusts TPPs, represented as dose-volume constraints, to meet the prescribed dose goals. This process is informed by a Fuzzy Inference System (FIS) that incorporates prior knowledge from experienced planners, captured through "if-then" rules based on routine planning adjustments. The innovative aspect of our research lies in employing ANFIS's adaptive network to fine-tune the FIS components (membership functions and rule strengths), thereby enhancing the accuracy of the system. Once calibrated, the AI system modifies TPPs for each patient, progressing through acceptable prescription levels, from restrictive to clinically allowable. The system evaluates dosimetric parameters and compares dose distributions, dose-volume histograms, and dosimetric statistics between the conventional FIS and ANFIS.
Results demonstrate that ANFIS consistently met dosimetric goals, outperforming FIS with a 0.7% improvement in mean dose conformity for the planning target volume (PTV) and a 28% reduction in mean dose exposure for organs at risk (OARs) in a C-Shape phantom. In a mock prostate phantom, ANFIS reduced the mean dose by 17.4% for the rectum and by 14.1% for the bladder. These findings highlight ANFIS's potential for efficient, accurate IMRT planning and its integration into clinical workflows.
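As a rough illustration of the "if-then" inference step described above, here is a minimal zeroth-order Takagi-Sugeno sketch for a single dose-volume constraint. The membership centers, widths, and correction magnitudes are invented for illustration, not the authors' calibrated parameters; in ANFIS these are precisely the quantities the adaptive network tunes.

```python
import math

def gauss(x, c, sigma):
    # Gaussian membership function with center c and width sigma.
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def adjust_constraint(violation):
    """One fuzzy inference step: map the observed dose violation (Gy)
    to a tightening of a dose-volume constraint.

    Illustrative rule pair:
      if violation is SMALL then apply a small correction;
      if violation is LARGE then apply a large correction."""
    w_small = gauss(violation, c=0.0, sigma=2.0)   # firing strength of SMALL
    w_large = gauss(violation, c=6.0, sigma=2.0)   # firing strength of LARGE
    corr_small, corr_large = 0.2, 1.5              # rule consequents (Gy)
    # Weighted-average defuzzification.
    return (w_small * corr_small + w_large * corr_large) / (w_small + w_large)

constraint = 45.0                      # current max-dose constraint (Gy)
violation = 4.0                        # hypothetical OAR overdose (Gy)
constraint -= adjust_constraint(violation)
print(f"tightened constraint: {constraint:.2f} Gy")
```

Iterating such adjustments per patient, from restrictive toward clinically allowable prescription levels, mirrors the planning loop the paper describes, with ANFIS replacing the hand-set memberships and rule strengths by learned ones.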
Title: "Adaptive Neuro-Fuzzy Inference System guided objective function parameter optimization for inverse treatment planning." Frontiers in Artificial Intelligence, 8:1523390. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11861086/pdf/
Pub Date: 2025-02-12 · eCollection Date: 2025-01-01 · DOI: 10.3389/frai.2025.1496653
Norhan Khallaf, Osama Abd-El Rouf, Abeer D Algarni, Mohy Hadhoud, Ahmed Kafafy
Modern technologies, particularly artificial intelligence, play a crucial role in improving medical waste management by developing intelligent systems that optimize the shortest routes for waste transport, from its generation to final disposal. Algorithms such as Q-learning and Deep Q Network (DQN) enhance the efficiency of transport and disposal while reducing environmental pollution risks. In this study, artificial intelligence algorithms were trained using homogeneous agent systems (vehicles with a capacity of 3 tons each) to optimize routes between hospitals within the Closed Capacitated Vehicle Routing Problem framework. Integrating AI with pathfinding techniques, especially the hybrid A*-Deep Q Network approach, yielded strong results despite initial challenges. K-means clustering was used to divide hospitals into zones, allowing agents to navigate the shortest paths using the Deep Q Network. Analysis revealed that the agents' capacity was not fully utilized, which led to the application of Fractional Knapsack dynamic programming with the Deep Q Network to maximize capacity utilization while achieving optimal routes. Since the criteria used to compare the algorithms' effectiveness are the number of vehicles and the utilization of the total vehicle capacity, the Fractional Knapsack with DQN stands out by requiring the fewest vehicles (4), achieving 0% loss in this metric as it matches the optimal value. Compared to other algorithms that require 5 or 7 vehicles, it reduces the fleet size by 20% and 42.86%, respectively. Additionally, it maximizes vehicle capacity utilization at 100%, unlike other methods, which utilize only 33 to 66% of vehicle capacity. However, this improvement comes at the cost of a 9% increase in distance, reflecting the longer routes needed to serve more hospitals per trip.
Despite this trade-off, the algorithm's ability to minimize fleet size while fully utilizing vehicle capacity makes it the optimal choice in scenarios where these factors are critical. This approach not only improved performance but also enhanced environmental sustainability, making it the most effective solution among all the algorithms used in the study.
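The capacity-utilization idea above can be illustrated with a minimal greedy fractional-loading sketch: because loads may be split across vehicles, every vehicle except possibly the last can be filled to exactly its capacity. This ignores routing and the DQN entirely, and the hospital names and tonnages are invented.

```python
def pack_vehicles(loads, capacity=3.0):
    """Greedy fractional loading: fill each vehicle to capacity, splitting a
    hospital's waste load across vehicles when needed.

    loads: {hospital: tons of waste}; returns a list of vehicle manifests,
    each mapping hospital -> tons carried by that vehicle."""
    vehicles, current, free = [], {}, capacity
    for hospital, tons in loads.items():
        while tons > 1e-9:
            take = min(tons, free)
            current[hospital] = current.get(hospital, 0.0) + take
            tons -= take
            free -= take
            if free <= 1e-9:            # vehicle full: dispatch it
                vehicles.append(current)
                current, free = {}, capacity
    if current:                         # last, possibly partial, vehicle
        vehicles.append(current)
    return vehicles

# Hypothetical waste loads (tons) for five hospitals: 9 tons total,
# so three 3-ton vehicles suffice, each at 100% utilization.
loads = {"H1": 2.0, "H2": 1.5, "H3": 2.5, "H4": 1.0, "H5": 2.0}
manifests = pack_vehicles(loads)
print(len(manifests), "vehicles:", manifests)
```

Without splitting (each hospital served whole by one vehicle), the same loads would need more vehicles at lower utilization, which is the gap the paper's Fractional Knapsack with DQN closes at the cost of somewhat longer routes.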
Title: "Enhanced vehicle routing for medical waste management via hybrid deep reinforcement learning and optimization algorithms." Frontiers in Artificial Intelligence, 8:1496653. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11861366/pdf/