An epidemiological extension of the El Farol Bar problem.
Pub Date: 2025-02-26 | eCollection Date: 2025-01-01 | DOI: 10.3389/fdata.2025.1519369
Francesco Bertolotti, Niccolò Kadera, Luca Pasquino, Luca Mari
This paper presents an epidemiological extension of the El Farol Bar problem, in which both a social and an epidemiological dimension are present. In the model, individual agents make a binary decision, whether to visit a bar or stay at home, amid a non-fatal epidemic. The extension of the classic social dilemma is implemented as an agent-based model, which is then explored by sampling the parameter space and observing the resulting behavior. The results of this analysis suggest that the infection could be contained by increasing the information available in the underlying social system and by adjusting its structure.
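For readers who want to experiment with this kind of model, the following is a minimal sketch of an El Farol-style agent-based simulation with a simple SIS-type (non-fatal) infection layer, written in Python. The threshold, memory length, transmission and recovery rates, and the noisy prediction rule are all illustrative assumptions and do not reproduce the authors' parameterization.

```python
# Minimal sketch: El Farol-style attendance decisions plus a SIS-type infection
# spread among bar attendees. All parameter values are illustrative assumptions.
import random

N = 100          # number of agents
THRESHOLD = 60   # attendance above which the bar is unpleasant
MEMORY = 5       # length of the shared attendance history agents can see
BETA = 0.03      # per-infected-contact transmission probability at the bar
GAMMA = 0.10     # per-step recovery probability
STEPS = 200

history = [random.randint(0, N) for _ in range(MEMORY)]
infected = {random.randrange(N)}                       # one initial case
noise = [random.uniform(-15, 15) for _ in range(N)]    # heterogeneous predictors

for t in range(STEPS):
    avg = sum(history[-MEMORY:]) / MEMORY
    # Social decision: attend only if the agent's noisy forecast is below threshold.
    goers = [a for a in range(N) if avg + noise[a] < THRESHOLD]
    history.append(len(goers))

    # Epidemic update: susceptible attendees may be infected by infected attendees.
    infected_goers = [a for a in goers if a in infected]
    for a in goers:
        if a not in infected and random.random() < 1 - (1 - BETA) ** len(infected_goers):
            infected.add(a)
    # Recovery (non-fatal): infected agents return to the susceptible pool.
    infected = {a for a in infected if random.random() > GAMMA}

    if t % 50 == 0:
        print(f"step {t:3d}: attendance={len(goers):3d}, infected={len(infected):3d}")
```

Varying the memory length and the spread of the prediction noise is one way to mimic "increasing the information available in the social system" and observe its effect on attendance and infection levels.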
{"title":"An epidemiological extension of the El Farol Bar problem.","authors":"Francesco Bertolotti, Niccolò Kadera, Luca Pasquino, Luca Mari","doi":"10.3389/fdata.2025.1519369","DOIUrl":"10.3389/fdata.2025.1519369","url":null,"abstract":"<p><p>This paper presents an epidemiological extension of the El Farol Bar problem, where both a social and an epidemiological dimension are present. In the model, individual agents making binary decisions-to visit a bar or stay home-amidst a non-fatal epidemic. The extension of the classic social dilemma is implemented as an agent-based model, and it is later explored by sampling the parameter space and observing the resulting behavior. The results of this analysis suggest that the infection could be contained by increasing the information available in the underlying social system and adjusting its structure.</p>","PeriodicalId":52859,"journal":{"name":"Frontiers in Big Data","volume":"8 ","pages":"1519369"},"PeriodicalIF":2.4,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11897257/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143617838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A review of AI-based radiogenomics in neurodegenerative disease.
Pub Date: 2025-02-20 | eCollection Date: 2025-01-01 | DOI: 10.3389/fdata.2025.1515341
Huanjing Liu, Xiao Zhang, Qian Liu
Neurodegenerative diseases are chronic, progressive conditions that cause irreversible damage to the nervous system, particularly in aging populations. Early diagnosis is a critical challenge, as these diseases often develop slowly and without clear symptoms until significant damage has occurred. Recent advances in radiomics and genomics have provided valuable insights into the mechanisms of these diseases by identifying specific imaging features and genomic patterns. Radiogenomics enhances diagnostic capabilities by linking genomics with imaging phenotypes, offering a more comprehensive understanding of disease progression. The growing field of artificial intelligence (AI), including machine learning and deep learning, opens new opportunities for improving the accuracy and timeliness of these diagnoses. This review examines the application of AI-based radiogenomics in neurodegenerative diseases, summarizing key model designs, performance metrics, publicly available data resources, significant findings, and future research directions. It provides a starting point and guidance for those seeking to explore this emerging area of study.
{"title":"A review of AI-based radiogenomics in neurodegenerative disease.","authors":"Huanjing Liu, Xiao Zhang, Qian Liu","doi":"10.3389/fdata.2025.1515341","DOIUrl":"10.3389/fdata.2025.1515341","url":null,"abstract":"<p><p>Neurodegenerative diseases are chronic, progressive conditions that cause irreversible damage to the nervous system, particularly in aging populations. Early diagnosis is a critical challenge, as these diseases often develop slowly and without clear symptoms until significant damage has occurred. Recent advances in radiomics and genomics have provided valuable insights into the mechanisms of these diseases by identifying specific imaging features and genomic patterns. Radiogenomics enhances diagnostic capabilities by linking genomics with imaging phenotypes, offering a more comprehensive understanding of disease progression. The growing field of artificial intelligence (AI), including machine learning and deep learning, opens new opportunities for improving the accuracy and timeliness of these diagnoses. This review examines the application of AI-based radiogenomics in neurodegenerative diseases, summarizing key model designs, performance metrics, publicly available data resources, significant findings, and future research directions. It provides a starting point and guidance for those seeking to explore this emerging area of study.</p>","PeriodicalId":52859,"journal":{"name":"Frontiers in Big Data","volume":"8 ","pages":"1515341"},"PeriodicalIF":2.4,"publicationDate":"2025-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11882605/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143574665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Next-generation approach to skin disorder prediction employing hybrid deep transfer learning.
Pub Date: 2025-02-19 | DOI: 10.3389/fdata.2025.1503883
Yonis Gulzar, Shivani Agarwal, Saira Soomro, Meenakshi Kandpal, Sherzod Turaev, Choo W Onn, Shilpa Saini, Abdenour Bounsiar
Introduction: Skin diseases significantly impact individuals' health and mental wellbeing. However, their classification remains challenging due to complex lesion characteristics, overlapping symptoms, and limited annotated datasets. Traditional convolutional neural networks (CNNs) often struggle with generalization, leading to suboptimal classification performance. To address these challenges, this study proposes a Hybrid Deep Transfer Learning Method (HDTLM) that integrates DenseNet121 and EfficientNetB0 for improved skin disease prediction.
Methods: The proposed hybrid model leverages DenseNet121's dense connectivity for capturing intricate patterns and EfficientNetB0's computational efficiency and scalability. A dataset comprising 19 skin conditions with 19,171 images was used for training and validation. The model was evaluated using multiple performance metrics, including accuracy, precision, recall, and F1-score. Additionally, a comparative analysis was conducted against state-of-the-art models such as DenseNet121, EfficientNetB0, VGG19, MobileNetV2, and AlexNet.
Results: The proposed HDTLM achieved a training accuracy of 98.18% and a validation accuracy of 97.57%. It consistently outperformed baseline models, achieving a precision of 0.95, recall of 0.96, F1-score of 0.95, and an overall accuracy of 98.18%. The results demonstrate the hybrid model's superior ability to generalize across diverse skin disease categories.
Discussion: The findings underscore the effectiveness of the HDTLM in enhancing skin disease classification, particularly in scenarios with significant domain shifts and limited labeled data. By integrating complementary strengths of DenseNet121 and EfficientNetB0, the proposed model provides a robust and scalable solution for automated dermatological diagnostics.
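As an illustration of the general idea, the sketch below shows one way to fuse two ImageNet-pretrained backbones (DenseNet121 and EfficientNetB0) behind a shared classification head in Keras. The input size, freezing strategy, fusion head, and compile settings are assumptions for illustration; this is not the authors' exact HDTLM.

```python
# Minimal sketch of a two-backbone hybrid classifier for 19 skin conditions.
# Both backbones are frozen (pure transfer learning) and their pooled features
# are concatenated before a small dense head.
import tensorflow as tf

NUM_CLASSES = 19
inputs = tf.keras.Input(shape=(224, 224, 3))

densenet = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", pooling="avg")
efficientnet = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", pooling="avg")
densenet.trainable = False
efficientnet.trainable = False

# Each backbone expects its own preprocessing of the raw RGB input.
d_feat = densenet(tf.keras.applications.densenet.preprocess_input(inputs))
e_feat = efficientnet(tf.keras.applications.efficientnet.preprocess_input(inputs))

# Fuse the two feature vectors and attach a classification head.
x = tf.keras.layers.Concatenate()([d_feat, e_feat])
x = tf.keras.layers.Dense(256, activation="relu")(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Unfreezing the top blocks of either backbone after an initial training phase is a common fine-tuning variant of this setup.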
{"title":"Next-generation approach to skin disorder prediction employing hybrid deep transfer learning.","authors":"Yonis Gulzar, Shivani Agarwal, Saira Soomro, Meenakshi Kandpal, Sherzod Turaev, Choo W Onn, Shilpa Saini, Abdenour Bounsiar","doi":"10.3389/fdata.2025.1503883","DOIUrl":"10.3389/fdata.2025.1503883","url":null,"abstract":"<p><strong>Introduction: </strong>Skin diseases significantly impact individuals' health and mental wellbeing. However, their classification remains challenging due to complex lesion characteristics, overlapping symptoms, and limited annotated datasets. Traditional convolutional neural networks (CNNs) often struggle with generalization, leading to suboptimal classification performance. To address these challenges, this study proposes a Hybrid Deep Transfer Learning Method (HDTLM) that integrates DenseNet121 and EfficientNetB0 for improved skin disease prediction.</p><p><strong>Methods: </strong>The proposed hybrid model leverages DenseNet121's dense connectivity for capturing intricate patterns and EfficientNetB0's computational efficiency and scalability. A dataset comprising 19 skin conditions with 19,171 images was used for training and validation. The model was evaluated using multiple performance metrics, including accuracy, precision, recall, and F1-score. Additionally, a comparative analysis was conducted against state-of-the-art models such as DenseNet121, EfficientNetB0, VGG19, MobileNetV2, and AlexNet.</p><p><strong>Results: </strong>The proposed HDTLM achieved a training accuracy of 98.18% and a validation accuracy of 97.57%. It consistently outperformed baseline models, achieving a precision of 0.95, recall of 0.96, F1-score of 0.95, and an overall accuracy of 98.18%. The results demonstrate the hybrid model's superior ability to generalize across diverse skin disease categories.</p><p><strong>Discussion: </strong>The findings underscore the effectiveness of the HDTLM in enhancing skin disease classification, particularly in scenarios with significant domain shifts and limited labeled data. By integrating complementary strengths of DenseNet121 and EfficientNetB0, the proposed model provides a robust and scalable solution for automated dermatological diagnostics.</p>","PeriodicalId":52859,"journal":{"name":"Frontiers in Big Data","volume":"8 ","pages":"1503883"},"PeriodicalIF":2.4,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11879938/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143568816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud computing convergence: integrating computer applications and information management for enhanced efficiency.
Pub Date: 2025-02-19 | eCollection Date: 2025-01-01 | DOI: 10.3389/fdata.2025.1508087
Guo Zhang
This study examines the transformative impact of cloud computing on the integration of computer applications and information management systems to improve operational efficiency. Grounded in a robust methodological framework, the research employs experimental testing and comparative data analysis to assess the performance of an information management system within a cloud computing environment. Data were carefully collected and analyzed, revealing that once user demand surpasses 400, CPU utilization stabilizes at an optimal level while subsystem response times remain consistently below 5 s. This evaluation underscores the significant advantages of cloud computing, demonstrating its capacity to optimize the synergy between computer applications and information management. The findings not only contribute to theoretical advancements in the field but also offer actionable insights for organizations seeking to enhance efficiency through effective cloud-based solutions.
{"title":"Cloud computing convergence: integrating computer applications and information management for enhanced efficiency.","authors":"Guo Zhang","doi":"10.3389/fdata.2025.1508087","DOIUrl":"10.3389/fdata.2025.1508087","url":null,"abstract":"<p><p>This study examines the transformative impact of cloud computing on the integration of computer applications and information management systems to improve operational efficiency. Grounded in a robust methodological framework, the research employs experimental testing and comparative data analysis to assess the performance of an information management system within a cloud computing environment. Data was meticulously collected and analyzed, highlighting a threshold where user demand surpasses 400, leading to a stabilization in CPU utilization at an optimal level and maintaining subsystem response times consistently below 5 s. This comprehensive evaluation underscores the significant advantages of cloud computing, demonstrating its capacity to optimize the synergy between computer applications and information management. The findings not only contribute to theoretical advancements in the field but also offer actionable insights for organizations seeking to enhance efficiency through effective cloud-based solutions.</p>","PeriodicalId":52859,"journal":{"name":"Frontiers in Big Data","volume":"8 ","pages":"1508087"},"PeriodicalIF":2.4,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11880219/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143568815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep learning for accurate classification of conifer pollen grains: enhancing species identification in palynology.
Pub Date: 2025-02-14 | eCollection Date: 2025-01-01 | DOI: 10.3389/fdata.2025.1507036
Masoud A Rostami, LeMaur Kydd, Behnaz Balmaki, Lee A Dyer, Julie M Allen
Accurate identification of pollen grains from Abies (fir), Picea (spruce), and Pinus (pine) is an important method for reconstructing historical environments and past landscapes and for understanding human-environment interactions. However, distinguishing between pollen grains of conifer genera poses challenges in palynology due to their morphological similarities. To address this identification challenge, this study leverages advanced deep learning techniques, specifically transfer learning models, which are effective at discriminating between highly similar, fine-grained features. We evaluated nine transfer learning architectures: DenseNet201, EfficientNetV2S, InceptionV3, MobileNetV2, ResNet101, ResNet50, VGG16, VGG19, and Xception. Each model was trained and validated on a dataset of pollen-grain images collected from museum specimens, mounted and imaged for training purposes. The models were assessed on various performance metrics, including accuracy, precision, recall, and F1-score, across training, validation, and testing phases. Our results indicate that ResNet101 outperformed the other models, achieving a test accuracy of 99% with equally high precision, recall, and F1-score. This study underscores the efficacy of transfer learning for producing models that can aid in the identification of difficult species. These models may support conifer species classification and enhance pollen grain analysis, which are critical for ecological research and for monitoring environmental change.
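The kind of backbone comparison described above can be prototyped with a simple loop over pretrained Keras architectures, as in the sketch below. The directory layout, image size, subset of architectures, and training settings are hypothetical; the study's actual pipeline, dataset, and hyperparameters will differ.

```python
# Minimal sketch: train the same small head on several frozen ImageNet backbones
# and compare validation accuracy. Paths and settings are illustrative only.
import tensorflow as tf

IMG_SIZE = (224, 224)
NUM_CLASSES = 3  # e.g., Abies, Picea, Pinus

train_ds = tf.keras.utils.image_dataset_from_directory(
    "pollen_images/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "pollen_images/val", image_size=IMG_SIZE, batch_size=32)

backbones = {
    "ResNet101": (tf.keras.applications.ResNet101,
                  tf.keras.applications.resnet.preprocess_input),
    "MobileNetV2": (tf.keras.applications.MobileNetV2,
                    tf.keras.applications.mobilenet_v2.preprocess_input),
    "Xception": (tf.keras.applications.Xception,
                 tf.keras.applications.xception.preprocess_input),
}

for name, (ctor, preprocess) in backbones.items():
    inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
    base = ctor(include_top=False, weights="imagenet", pooling="avg")
    base.trainable = False  # feature extraction only in this sketch
    x = base(preprocess(inputs))
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, validation_data=val_ds, epochs=3, verbose=0)
    _, acc = model.evaluate(val_ds, verbose=0)
    print(f"{name}: validation accuracy = {acc:.3f}")
```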
{"title":"Deep learning for accurate classification of conifer pollen grains: enhancing species identification in palynology.","authors":"Masoud A Rostami, LeMaur Kydd, Behnaz Balmaki, Lee A Dyer, Julie M Allen","doi":"10.3389/fdata.2025.1507036","DOIUrl":"https://doi.org/10.3389/fdata.2025.1507036","url":null,"abstract":"<p><p>Accurate identification of pollen grains from <i>Abies</i> (fir), <i>Picea</i> (spruce), and <i>Pinus</i> (pine) is an important method for reconstructing historical environments, past landscapes and understanding human-environment interactions. However, distinguishing between pollen grains of conifer genera poses challenges in palynology due to their morphological similarities. To address this identification challenge, this study leverages advanced deep learning techniques, specifically transfer learning models, which are effective in identifying similarities among detailed features. We evaluated nine different transfer learning architectures: DenseNet201, EfficientNetV2S, InceptionV3, MobileNetV2, ResNet101, ResNet50, VGG16, VGG19, and Xception. Each model was trained and validated on a dataset of images of pollen grains collected from museum specimens, mounted and imaged for training purposes. The models were assessed on various performance metrics, including accuracy, precision, recall, and F1-score across training, validation, and testing phases. Our results indicate that ResNet101 relatively outperformed other models, achieving a test accuracy of 99%, with equally high precision, recall, and F1-score. This study underscores the efficacy of transfer learning to produce models that can aid in identifications of difficult species. These models may aid conifer species classification and enhance pollen grain analysis, critical for ecological research and monitoring environmental changes.</p>","PeriodicalId":52859,"journal":{"name":"Frontiers in Big Data","volume":"8 ","pages":"1507036"},"PeriodicalIF":2.4,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11868112/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Editorial: Machine learning and immersive technologies for user-centered digital healthcare innovation.
Pub Date: 2025-02-14 | eCollection Date: 2025-01-01 | DOI: 10.3389/fdata.2025.1567941
Federico Colecchia, Daniele Giunchi, Rui Qin, Eleonora Ceccaldi, Fang Wang
{"title":"Editorial: Machine learning and immersive technologies for user-centered digital healthcare innovation.","authors":"Federico Colecchia, Daniele Giunchi, Rui Qin, Eleonora Ceccaldi, Fang Wang","doi":"10.3389/fdata.2025.1567941","DOIUrl":"https://doi.org/10.3389/fdata.2025.1567941","url":null,"abstract":"","PeriodicalId":52859,"journal":{"name":"Frontiers in Big Data","volume":"8 ","pages":"1567941"},"PeriodicalIF":2.4,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11868051/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Edge-level multi-constraint graph pattern matching with lung cancer knowledge graph.
Pub Date: 2025-02-10 | eCollection Date: 2025-01-01 | DOI: 10.3389/fdata.2025.1546850
Houdie Tu, Lei Li, Zhenchao Tao, Zan Zhang
Introduction: Traditional Graph Pattern Matching (GPM) research mainly focuses on improving the accuracy and efficiency of complex network analysis and fast subgraph retrieval. Although these methods can return subgraphs quickly and accurately, their applications remain limited and have rarely extended to medical data research.
Methods: To overcome this limitation, and building on existing GPM research with the lung cancer knowledge graph, this paper introduces the Monte Carlo method and proposes TEM, an edge-level multi-constraint graph pattern matching algorithm for the lung cancer knowledge graph. Furthermore, we apply the Monte Carlo method to both nodes and edges and propose THM, a multi-constraint hologram pattern matching algorithm for the lung cancer knowledge graph.
Results: The experiments verify the effectiveness and efficiency of the TEM algorithm.
Discussion: This method effectively addresses the uncertainty in the lung cancer knowledge graph and is significantly more efficient than existing algorithms.
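As a generic illustration of Monte Carlo sampling over uncertain edges (not the TEM algorithm itself), the sketch below estimates the probability that a small pattern matches in a toy knowledge graph whose edges carry confidence values. The graph, relation names, and pattern are hypothetical.

```python
# Minimal sketch: estimate the probability that a two-hop pattern
# (start -has_symptom-> X -indicates-> target) matches in a knowledge graph
# with uncertain edges, by sampling which edges exist in each trial.
import random

# Each edge: (head, relation, tail, confidence that the edge truly holds).
uncertain_edges = [
    ("patient_1", "has_symptom", "cough", 0.9),
    ("cough", "indicates", "lung_cancer", 0.4),
    ("patient_1", "has_symptom", "chest_pain", 0.7),
    ("chest_pain", "indicates", "lung_cancer", 0.6),
]

def sample_graph(edges_with_conf):
    """One Monte Carlo draw: keep each edge independently with its confidence."""
    return [(h, r, t) for h, r, t, p in edges_with_conf if random.random() < p]

def pattern_matches(edges, start, target):
    """Check the two-hop pattern against a sampled (certain) edge set."""
    symptoms = {t for h, r, t in edges if h == start and r == "has_symptom"}
    return any((s, "indicates", target) in edges for s in symptoms)

TRIALS = 10_000
hits = sum(
    pattern_matches(sample_graph(uncertain_edges), "patient_1", "lung_cancer")
    for _ in range(TRIALS)
)
print(f"Estimated match probability: {hits / TRIALS:.3f}")
```

Adding per-node confidences and sampling them alongside the edges would extend this toy example toward the node-and-edge setting the abstract describes for THM.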
{"title":"Edge-level multi-constraint graph pattern matching with lung cancer knowledge graph.","authors":"Houdie Tu, Lei Li, Zhenchao Tao, Zan Zhang","doi":"10.3389/fdata.2025.1546850","DOIUrl":"10.3389/fdata.2025.1546850","url":null,"abstract":"<p><strong>Introduction: </strong>Traditional Graph Pattern Matching (GPM) research mainly focuses on improving the accuracy and efficiency of complex network analysis and fast subgraph retrieval. Despite their ability to return subgraphs quickly and accurately, these methods are limited to their applications without medical data research.</p><p><strong>Methods: </strong>In order to overcome this limitation, based on the existing research on GPM with the lung cancer knowledge graph, this paper introduces the Monte Carlo method and proposes an edge-level multi-constraint graph pattern matching algorithm TEM with lung cancer knowledge graph. Furthermore, we apply Monte Carlo method to both nodes and edges, and propose a multi-constraint hologram pattern matching algorithm THM with lung cancer knowledge graph.</p><p><strong>Results: </strong>The experiments have verified the effectiveness and efficiency of TEM algorithm.</p><p><strong>Discussion: </strong>This method effectively addresses the complexity of uncertainty in lung cancer knowledge graph, and is significantly better than the existing algorithms on efficiency.</p>","PeriodicalId":52859,"journal":{"name":"Frontiers in Big Data","volume":"8 ","pages":"1546850"},"PeriodicalIF":2.4,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11947724/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143732893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Training and onboarding initiatives in high energy physics experiments.
Pub Date: 2025-02-10 | eCollection Date: 2025-01-01 | DOI: 10.3389/fdata.2025.1497622
Allison Reinsvold Hall, Nicole Skidmore, Gabriele Benelli, Ben Carlson, Claire David, Jonathan Davies, Wouter Deconinck, David DeMuth, Peter Elmer, Rocky Bala Garg, Stephan Hageböck, Killian Lieret, Valeriia Lukashenko, Sudhir Malik, Andy Morris, Heidi Schellman, Graeme A Stewart, Jason Veatch, Michel Hernandez Villanueva
In this article, we document the current analysis-software training and onboarding activities in several High Energy Physics (HEP) experiments: ATLAS, CMS, LHCb, Belle II, and DUNE. Fast and efficient onboarding of new collaboration members is increasingly important for HEP experiments. With rapidly increasing data volumes and larger collaborations, the analyses, and consequently the related software, become ever more complex, which necessitates structured onboarding and training. Recognizing this, the HEP Software Foundation (HSF) held a meeting series in 2022 at which experiments showcased their initiatives. Here we document and analyze these initiatives in an attempt to determine a set of key considerations for future HEP experiments.
{"title":"Training and onboarding initiatives in high energy physics experiments.","authors":"Allison Reinsvold Hall, Nicole Skidmore, Gabriele Benelli, Ben Carlson, Claire David, Jonathan Davies, Wouter Deconinck, David DeMuth, Peter Elmer, Rocky Bala Garg, Stephan Hageböck, Killian Lieret, Valeriia Lukashenko, Sudhir Malik, Andy Morris, Heidi Schellman, Graeme A Stewart, Jason Veatch, Michel Hernandez Villanueva","doi":"10.3389/fdata.2025.1497622","DOIUrl":"10.3389/fdata.2025.1497622","url":null,"abstract":"<p><p>In this article we document the current analysis software training and onboarding activities in several High Energy Physics (HEP) experiments: ATLAS, CMS, LHCb, Belle II and DUNE. Fast and efficient onboarding of new collaboration members is increasingly important for HEP experiments. With rapidly increasing data volumes and larger collaborations the analyses and consequently, the related software, become ever more complex. This necessitates structured onboarding and training. Recognizing this, a meeting series was held by the HEP Software Foundation (HSF) in 2022 for experiments to showcase their initiatives. Here we document and analyze these in an attempt to determine a set of key considerations for future HEP experiments.</p>","PeriodicalId":52859,"journal":{"name":"Frontiers in Big Data","volume":"8 ","pages":"1497622"},"PeriodicalIF":2.4,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11847898/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143494809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Big data analytics and AI as success factors for online video streaming platforms.
Pub Date: 2025-02-06 | eCollection Date: 2025-01-01 | DOI: 10.3389/fdata.2025.1513027
Muhammad Arshad, Choo Wou Onn, Ashfaq Ahmad, Goabaone Mogwe
As mobile device use continues to grow rapidly, online video streaming has risen to the top of the entertainment industry. These platforms have expanded dramatically through the incorporation of Big Data Analytics and Artificial Intelligence (AI), which are critical for improving the user interface, improving platform functioning, and customizing recommended content. This paper examines how Big Data Analytics makes it possible to obtain large amounts of data about users, including what they view, what they like, and how they behave, and how AI utilizes these data so that customers receive more suitable material, get better recommendations, and benefit from more efficient content delivery. The study also points to the importance of such technologies for promoting business development and user interaction and for maintaining competitiveness in the online video streaming market, with examples of their effective application. This work presents a comprehensive investigation of the combined role of Big Data and AI and presents findings on their efficacy as success factors for existing and future video streaming services.
{"title":"Big data analytics and AI as success factors for online video streaming platforms.","authors":"Muhammad Arshad, Choo Wou Onn, Ashfaq Ahmad, Goabaone Mogwe","doi":"10.3389/fdata.2025.1513027","DOIUrl":"https://doi.org/10.3389/fdata.2025.1513027","url":null,"abstract":"<p><p>As the trend in the current generation with the use of mobile devices is rapidly increasing, online video streaming has risen to the top in the entertainment industry. These platforms have experienced radical expansion due to the incorporation of Big Data Analytics and Artificial Intelligence which are critical in improving the user interface, improving its functioning, and customization of recommended content. This paper seeks to examine how Big Data Analytics makes it possible to obtain large amounts of data about users and how they view, what they like, or how they behave. While customers benefit from this data by receiving more suitable material, getting better recommendations, and allowing for more efficient content delivery, AI utilizes it. As a result, the study also points to the importance and relevance of such technologies to promote business development, and user interaction and maintain competitiveness in the online video streaming market with examples of their effective application. This work presents a comprehensive investigation of the combined role of Big Data and AI and presents the necessary findings to determine their efficacy as success factors of existing and future video streaming services.</p>","PeriodicalId":52859,"journal":{"name":"Frontiers in Big Data","volume":"8 ","pages":"1513027"},"PeriodicalIF":2.4,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11841674/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143469954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Editorial: Visualizing big culture and history data.
Pub Date: 2025-02-04 | eCollection Date: 2025-01-01 | DOI: 10.3389/fdata.2025.1563730
Florian Windhager, Steffen Koch, Sander Münster, Eva Mayr
{"title":"Editorial: Visualizing big culture and history data.","authors":"Florian Windhager, Steffen Koch, Sander Münster, Eva Mayr","doi":"10.3389/fdata.2025.1563730","DOIUrl":"https://doi.org/10.3389/fdata.2025.1563730","url":null,"abstract":"","PeriodicalId":52859,"journal":{"name":"Frontiers in Big Data","volume":"8 ","pages":"1563730"},"PeriodicalIF":2.4,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11832713/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143451018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}