Pub Date : 2024-04-11DOI: 10.3389/frai.2024.1357121
S. Kharya, Sunita Soni, Abhilash Pati, Amrutanshu Panigrahi, Jayant Giri, Hong Qin, Saurav Mallik, Debasish Swapnesh Kumar Nayak, T. Swarnkar
Diabetes is a chronic metabolic condition characterized by elevated blood sugar levels resulting from insufficient insulin production or ineffective use of insulin within the body. India is commonly labeled the “diabetes capital of the world” owing to the widespread prevalence of the condition: as of September 2021, approximately 77 million adults in India were affected by diabetes, according to the International Diabetes Federation. Because early symptoms are often concealed, many diabetic patients go undiagnosed, leading to delayed treatment. While Computational Intelligence approaches have been used to improve prediction rates, a significant portion of these methods lack interpretability due to their inherent black-box nature. Rule extraction is frequently used to elucidate the opaque behavior of machine learning algorithms. To address the black-box problem, a method for extracting strong rules based on Weighted Bayesian Association Rule Mining is used, so that the rules extracted to diagnose a disease such as diabetes are transparent and can be readily analyzed by clinical experts, enhancing interpretability. The resulting Weighted Bayesian Belief Network (WBBN) model is constructed using the UCI machine learning repository and achieves an accuracy of 95.8%.
Title : Weighted Bayesian Belief Network for diabetics: a predictive model
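The rule-scoring idea behind this abstract can be illustrated with a toy sketch. This is not the authors' WBBN/WBARM implementation: the transactions, the attribute weights, and the weighting scheme (weighted support taken as the mean item weight times plain support) are illustrative assumptions.

```python
# Toy sketch of weighted association-rule scoring, inspired by (not
# reproducing) Weighted Bayesian Association Rule Mining. All data and
# weights below are invented for illustration.

transactions = [
    {"high_glucose", "high_bmi", "diabetes"},
    {"high_glucose", "diabetes"},
    {"high_bmi"},
    {"high_glucose", "high_bmi", "diabetes"},
]
weights = {"high_glucose": 0.9, "high_bmi": 0.6, "diabetes": 1.0}

def weighted_support(itemset, transactions, weights):
    """Fraction of transactions containing itemset, scaled by mean item weight."""
    count = sum(1 for t in transactions if itemset <= t)
    mean_w = sum(weights[i] for i in itemset) / len(itemset)
    return mean_w * count / len(transactions)

def confidence(antecedent, consequent, transactions):
    """P(consequent | antecedent) estimated from the transactions."""
    both = sum(1 for t in transactions if antecedent | consequent <= t)
    ante = sum(1 for t in transactions if antecedent <= t)
    return both / ante if ante else 0.0

rule = ({"high_glucose", "high_bmi"}, {"diabetes"})
print(weighted_support(rule[0] | rule[1], transactions, weights))
print(confidence(*rule, transactions))
```

In a weighted Bayesian scheme, such attribute weights bias both the mined rules and the resulting belief-network probabilities toward clinically important attributes, which is what makes the extracted rules easy for clinicians to inspect.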
Pub Date : 2024-04-10DOI: 10.3389/frai.2024.1329605
Stefano Triberti, Raffaele Di Fuccio, Chiara Scuotto, Emanuele Marsico, P. Limone
Artificial Intelligence (AI) tools are currently designed and tested in many fields to improve humans’ ability to make decisions. One of these fields is higher education. For example, AI-based chatbots (“conversational pedagogical agents”) could engage in conversations with students to provide timely feedback and responses to questions while the learning process is taking place, and to collect data to personalize the delivery of course materials. However, many existing tools merely perform tasks that human professionals (educators, tutors, professors) could perform themselves, just in a more timely manner. While discussing the possible implementation of AI-based tools in our university’s educational programs, we reviewed the current literature and identified a number of capabilities that future AI solutions may feature in order to improve higher education processes, with a focus on distance higher education. Specifically, we suggest that innovative tools could influence the methodologies by which students approach learning; facilitate connections and information attainment beyond course materials; support communication with the professor; and draw on motivation theories to foster learning engagement in a personalized manner. Future research should explore the high-level opportunities that AI represents for higher education, including its effects on learning outcomes and the quality of the learning experience as a whole.
Title : “Better than my professor?” How to develop artificial intelligence tools for higher education
Pub Date : 2024-04-09DOI: 10.3389/frai.2024.1371988
Alejandro Hernández-Cano, Yang Ni, Zhuowen Zou, Ali Zakeri, Mohsen Imani
Introduction Brain-inspired computing has become an emerging field in which a growing number of works develop algorithms that bring machine learning closer to the human brain at the functional level. As one promising direction, Hyperdimensional Computing (HDC) is centered on the idea of holographic, high-dimensional representations akin to the neural activities in our brains. Such representation is the fundamental enabler of HDC's efficiency and robustness. However, existing HDC-based algorithms suffer from limitations within the encoder: to a large extent, they all rely on manually selected encoders, meaning that the resulting representation is never adapted to the task at hand. Methods In this paper, we propose FLASH, a novel hyperdimensional learning method that incorporates an adaptive and learnable encoder design, aiming at better overall learning performance while maintaining the good properties of HDC representations. Current HDC encoders leverage Random Fourier Features (RFF) for kernel correspondence and enable locality-preserving encoding. We propose to learn the encoder matrix distribution via gradient descent, effectively adapting the kernel for a more suitable HDC encoding. Results Our experiments on various regression datasets show that tuning the HDC encoder significantly boosts accuracy, surpassing current HDC-based algorithms and providing faster inference than other baselines, including RFF-based kernel ridge regression. Discussion The results indicate the importance of an adaptive encoder and a customized high-dimensional representation in HDC.
Title : Hyperdimensional computing with holographic and adaptive encoder
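A minimal sketch of the RFF-style encoding this abstract describes, with a fixed Gaussian encoder matrix: FLASH's contribution is to learn that matrix's distribution via gradient descent, whereas here it stays fixed (which corresponds to an RBF kernel). The dimensions and sampling choices are assumptions.

```python
import numpy as np

# Random-Fourier-Feature HDC encoding: x is projected by a random matrix W
# (rows drawn from a distribution that determines the implied kernel) and
# passed through cos(). Dimensions below are illustrative.

rng = np.random.default_rng(0)
d_in, d_hv = 8, 1024          # input features, hypervector dimension

W = rng.normal(scale=1.0, size=(d_hv, d_in))   # fixed Gaussian encoder matrix
b = rng.uniform(0.0, 2 * np.pi, size=d_hv)     # random phase offsets

def encode(x):
    """Map x to a high-dimensional holographic representation."""
    return np.sqrt(2.0 / d_hv) * np.cos(W @ x + b)

x1, x2 = rng.normal(size=d_in), rng.normal(size=d_in)
# Inner products of encodings approximate a shift-invariant kernel k(x1, x2);
# in particular, the self-inner-product is close to 1.
print(encode(x1) @ encode(x2))
```

Learning the distribution of `W` (rather than fixing it to Gaussian) is what lets the encoder adapt the implied kernel to the regression task at hand.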
Pub Date : 2024-04-09DOI: 10.3389/frai.2024.1351942
Muhammad Awais, Riaz Ahmad, Nabeela Kausar, A. Alzahrani, Nasser Alalwan, Anum Masood
Acute lymphoblastic leukemia (ALL) is a fatal blood disorder characterized by the excessive proliferation of immature white blood cells originating in the bone marrow. Effective prognosis and treatment of ALL call for its accurate and timely detection. Deep convolutional neural networks (CNNs) have shown promising results in digital pathology, but they face challenges in classifying different subtypes of leukemia due to their subtle morphological differences. This study proposes an improved pipeline for binary detection and subtype classification of ALL from blood smear images. First, a customized 88-layer CNN is proposed and trained using transfer learning, together with GoogLeNet, to create an ensemble of features. This study then models feature selection as a combinatorial optimization problem and proposes a memetic version of the binary whale optimization algorithm, incorporating a Differential Evolution-based local search to enhance the exploration and exploitation of the feature search space. The proposed approach is validated using publicly available standard datasets containing peripheral blood smear images of various classes of ALL. An overall best average accuracy of 99.15% is achieved for binary classification of ALL with an 85% reduction in the feature vector, together with 99% precision and 98.8% sensitivity. For B-ALL subtype classification, the best accuracy of 98.69% is attained with 98.7% precision and 99.57% specificity. The proposed methodology outperforms several existing studies on these metrics.
Title : ALL classification using neural ensemble and memetic deep feature optimization
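The wrapper-style feature selection described above can be sketched with a simple bit-flip local search over a binary feature mask. This illustrates only the local-search ingredient of the authors' memetic binary whale optimizer, not the full algorithm, and the toy fitness function below is an assumption standing in for classifier accuracy.

```python
import random

# A binary mask over the fused feature vector is scored by a fitness
# function, and a local search flips bits to improve it. The fitness here
# is a toy stand-in: it rewards covering a pretend set of "relevant"
# features and penalizes mask size (mirroring the accuracy-vs-feature-count
# trade-off in the paper's objective).

random.seed(42)
N_FEATURES = 20
RELEVANT = {1, 4, 7}            # pretend only these features carry signal

def fitness(mask):
    """Higher is better: reward relevant features, penalize vector size."""
    acc = sum(mask[i] for i in RELEVANT) / len(RELEVANT)
    return acc - 0.01 * sum(mask)

def local_search(mask, iters=200):
    """Hill-climb by flipping one random bit at a time, keeping improvements."""
    best, best_f = mask[:], fitness(mask)
    for _ in range(iters):
        cand = best[:]
        cand[random.randrange(N_FEATURES)] ^= 1   # flip one bit
        f = fitness(cand)
        if f > best_f:
            best, best_f = cand, f
    return best, best_f

mask0 = [random.randint(0, 1) for _ in range(N_FEATURES)]
best, score = local_search(mask0)
print(sum(best), score)
```

In the paper's pipeline, this kind of local refinement is interleaved with the population-based whale optimization moves, which is what makes the approach "memetic".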
Pub Date : 2024-04-08DOI: 10.3389/frai.2024.1401162
Jacobo Roda-Segarra, Santiago Mengual-Andrés, Andrés Payà Rico
Research on Artificial Intelligence applied to emotions in the educational context has grown significantly in recent years. However, despite the field’s profound implications for the educational community, the social impact of this scientific production on digital social media remains unclear. To address this question, the present study analyzes the social impact of scientific production on the use of Artificial Intelligence for emotions in the educational context. A sample of 243 scientific publications indexed in Scopus and Web of Science was selected, from which a second sample of 6,094 social impact records was extracted from the Altmetric, Crossref, and PlumX databases. A dual analysis was conducted using specially designed software: on one hand, the scientific sample was analyzed from a bibliometric perspective; on the other, the social impact records were studied. The comparative analysis across the two dimensions, scientific and social, focused on the evolution of scientific production and its corresponding social impact, sources, impact, and content analysis. The results indicate that the scientific publications have had a high social impact (an average of 25.08 social impact records per publication), with a significant increase in research interest starting from 2019, likely driven by the emotional implications of measures taken to curb the COVID-19 pandemic.
Furthermore, a lack of alignment has been identified between articles with the highest scientific impact and those with the highest social impact, as well as a lack of alignment in the most commonly used terms from both scientific and social perspectives, a significant variability in the lag in months for scientific research to make an impact on social media, and the fact that the social impact of the research did not emerge from the interest of Twitter users unaffiliated with the research, but rather from the authors, publishers, or scientific institutions. The proposed comparative methodology can be applied to any field of study, making it a useful tool given that current trends in accreditation agencies propose the analysis of the repercussion of scientific research in social media.
Title : Analysis of social metrics on scientific production in the field of emotion-aware education through artificial intelligence
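Two of the quantities this study reports, the average number of social-impact records per publication and the lag in months between publication and each social mention, can be computed with a small sketch. The records and field layout below are invented for illustration; the study draws its records from Altmetric, Crossref, and PlumX.

```python
from datetime import date

# Toy data: publication dates keyed by a hypothetical paper id, plus a flat
# list of (paper id, mention date) social-impact records.
publications = {
    "pub1": date(2020, 3, 1),
    "pub2": date(2021, 6, 15),
}
mentions = [
    ("pub1", date(2020, 3, 20)),
    ("pub1", date(2020, 9, 2)),
    ("pub2", date(2021, 6, 30)),
]

def months_between(d0, d1):
    """Whole-month difference between two dates (calendar months)."""
    return (d1.year - d0.year) * 12 + (d1.month - d0.month)

avg_records = len(mentions) / len(publications)
lags = [months_between(publications[p], d) for p, d in mentions]

print(avg_records)               # social-impact records per publication
print(sum(lags) / len(lags))     # mean lag in months
```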
Pub Date : 2024-04-08DOI: 10.3389/frai.2024.1371502
Kristina Sutiene, Peter Schwendner, Ciprian Sipos, Luis Lorenzo, Miroslav Mirchev, Petre Lameski, Audrius Kabašinskas, Chemseddine Tidjani, Belma Ozturkkal, Jurgita Černevičienė
Building an investment portfolio is a problem that researchers have addressed for many years. The key goal has always been to balance risk and reward by optimally allocating assets such as stocks, bonds, and cash. In general, the portfolio management process consists of three steps: planning, execution, and feedback, each with its own objectives and methods. Starting from Markowitz's mean-variance portfolio theory, several frameworks have become widely accepted and have considerably reshaped how asset allocation is approached. Recent advances in artificial intelligence provide methodological and technological capabilities for solving highly complex problems, and portfolio management is no exception. For this reason, this paper reviews current state-of-the-art approaches by answering the core question of how artificial intelligence is transforming the steps of portfolio management. Moreover, as the use of artificial intelligence in finance is challenged by transparency, fairness, and explainability requirements, a case study of post-hoc explanations for asset allocation is demonstrated.
Title : Enhancing portfolio management using artificial intelligence: literature review
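Since the review takes Markowitz's mean-variance theory as its starting point, a minimal sketch of the classic step it builds on may help: the global minimum-variance portfolio, whose weights are proportional to the inverse covariance matrix applied to a vector of ones. The covariance numbers below are illustrative, not from the paper.

```python
import numpy as np

# Illustrative 3-asset covariance matrix (stock, bond, cash-like asset).
cov = np.array([
    [0.10, 0.02, 0.01],
    [0.02, 0.05, 0.00],
    [0.01, 0.00, 0.02],
])

# Global minimum-variance weights: w = inv(cov) @ 1, normalized to sum to 1.
inv = np.linalg.inv(cov)
ones = np.ones(3)
w = inv @ ones / (ones @ inv @ ones)

print(w, w.sum())   # weights sum to 1; portfolio variance is w @ cov @ w
```

Mean-variance extensions add an expected-return term and constraints; the AI approaches surveyed in the review replace or augment steps of this classical pipeline.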
Pub Date : 2024-04-08DOI: 10.3389/frai.2024.1281110
Gabor Molnar, Luís Ferreira Pires, Oscar de Boer, Vera Kovaleva
Introduction Video service providers are shifting their focus from Quality of Service (QoS) to Quality of Experience (QoE) in their video networks, since users’ demand for high-quality video content is continually growing. By focusing on QoE, video service providers can offer subscribers a more personalized and engaging experience, which can help increase viewer satisfaction and retention. This shift requires not only a more sophisticated approach to network management and new tools and technologies to measure and optimize QoE, but also a novel approach to video delivery operations. Methods This paper describes the components, interactions, and relationships of an algorithm factory for video delivery operations that assures high QoE for video streaming services. The paper also showcases the results of gradually implementing an algorithm factory in the video industry. Using a dataset from 2016 to 2022, we present the case of a European PayTV service provider that achieved improved performance as measured by both objective and subjective metrics. Results The use of an algorithm factory significantly improved the PayTV service provider’s performance. The study found a fivefold increase in the speed of critical incident resolution and a 59% reduction in the number of critical incidents, all while expanding the customer base and maintaining the same level of labor resources. The case also demonstrates a strong positive relationship between the PayTV operator’s productivity measures and its survey-based quality ratings. These results underscore the importance of flawless QoS and operational excellence in delivering QoE to meet the evolving demands of viewers. Discussion The paper adds to the existing literature on the relationships between operational efficiency, innovation, and subjective quality, and offers empirical evidence from the PayTV industry.
The insights provided are expected to benefit both traditional and over-the-top (OTT) video service providers in their quest to stay ahead in the rapidly evolving video industry. It may also translate to other service providers in similar industries committed to supporting high-quality service delivery.
Title : An empirical assessment of the use of an algorithm factory for video delivery operations
Pub Date : 2024-04-05DOI: 10.3389/frai.2024.1365777
Alexander J. Didier, Anthony Nigro, Zaid Noori, Mohamed A. Omballi, Scott M. Pappada, Danae Hamouda
Introduction Machine learning (ML) techniques have gained increasing attention in the field of healthcare, including predicting outcomes in patients with lung cancer. ML has the potential to enhance prognostication in lung cancer patients and improve clinical decision-making. In this systematic review and meta-analysis, we aimed to evaluate the performance of ML models compared to logistic regression (LR) models in predicting overall survival in patients with lung cancer. Methods We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. A comprehensive search was conducted in the Medline, Embase, and Cochrane databases using a predefined search query. Two independent reviewers screened abstracts, and conflicts were resolved by a third reviewer. Inclusion and exclusion criteria were applied to select eligible studies. Risk-of-bias assessment was performed using predefined criteria. Data extraction was conducted using the Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modeling Studies (CHARMS) checklist. A meta-analysis was performed to compare the discriminative ability of ML and LR models. Results The literature search yielded 3,635 studies, of which 12 studies with a total of 211,068 patients were included in the analysis. Six studies reported confidence intervals and were included in the meta-analysis. The performance of ML models varied across studies, with C-statistics ranging from 0.60 to 0.85. The pooled analysis showed that ML models had higher discriminative ability than LR models, with a weighted average C-statistic of 0.78 for ML models versus 0.70 for LR models. Conclusion Machine learning models show promise in predicting overall survival in patients with lung cancer, with superior discriminative ability compared to logistic regression models. 
However, further validation and standardization of ML models are needed before their widespread implementation in clinical practice. Future research should focus on addressing the limitations of the current literature, such as potential bias and heterogeneity among studies, to improve the accuracy and generalizability of ML models for predicting outcomes in patients with lung cancer. Further research and development of ML models in this field may lead to improved patient outcomes and personalized treatment strategies.
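The pooled comparison above rests on a weighted average of per-study C-statistics. A minimal sketch of that pooling step, using sample-size weights; the (C-statistic, sample size) pairs below are purely illustrative, not values drawn from the review:

```python
def pooled_c_statistic(studies):
    """Sample-size-weighted average of per-study (C-statistic, n) pairs."""
    total_n = sum(n for _, n in studies)
    return sum(c * n for c, n in studies) / total_n

# Hypothetical per-study values for the two model families
ml_studies = [(0.85, 50_000), (0.78, 100_000), (0.60, 20_000)]
lr_studies = [(0.72, 50_000), (0.70, 100_000), (0.66, 20_000)]

pooled_ml = pooled_c_statistic(ml_studies)  # weighted toward the larger studies
pooled_lr = pooled_c_statistic(lr_studies)
```

Note that a full meta-analysis would typically weight by inverse variance (hence the review's restriction to studies reporting confidence intervals); sample-size weighting is shown here only as the simplest stand-in.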
{"title":"Application of machine learning for lung cancer survival prognostication—A systematic review and meta-analysis","authors":"Alexander J. Didier, Anthony Nigro, Zaid Noori, Mohamed A. Omballi, Scott M. Pappada, Danae Hamouda","doi":"10.3389/frai.2024.1365777","DOIUrl":"https://doi.org/10.3389/frai.2024.1365777","url":null,"abstract":"Introduction Machine learning (ML) techniques have gained increasing attention in the field of healthcare, including predicting outcomes in patients with lung cancer. ML has the potential to enhance prognostication in lung cancer patients and improve clinical decision-making. In this systematic review and meta-analysis, we aimed to evaluate the performance of ML models compared to logistic regression (LR) models in predicting overall survival in patients with lung cancer. Methods We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) statement. A comprehensive search was conducted in Medline, Embase, and Cochrane databases using a predefined search query. Two independent reviewers screened abstracts and conflicts were resolved by a third reviewer. Inclusion and exclusion criteria were applied to select eligible studies. Risk of bias assessment was performed using predefined criteria. Data extraction was conducted using the Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modeling Studies (CHARMS) checklist. Meta-analytic analysis was performed to compare the discriminative ability of ML and LR models. Results The literature search resulted in 3,635 studies, and 12 studies with a total of 211,068 patients were included in the analysis. Six studies reported confidence intervals and were included in the meta-analysis. The performance of ML models varied across studies, with C-statistics ranging from 0.60 to 0.85. 
The pooled analysis showed that ML models had higher discriminative ability compared to LR models, with a weighted average C-statistic of 0.78 for ML models compared to 0.70 for LR models. Conclusion Machine learning models show promise in predicting overall survival in patients with lung cancer, with superior discriminative ability compared to logistic regression models. However, further validation and standardization of ML models are needed before their widespread implementation in clinical practice. Future research should focus on addressing the limitations of the current literature, such as potential bias and heterogeneity among studies, to improve the accuracy and generalizability of ML models for predicting outcomes in patients with lung cancer. Further research and development of ML models in this field may lead to improved patient outcomes and personalized treatment strategies.","PeriodicalId":508738,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"23 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140738607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-04-05DOI: 10.3389/frai.2024.1329737
Saad Bin Saeed Ahmed, Shahzaib Naeem, Agha Muhammad Hammad Khan, Bilal Mazhar Qureshi, Amjad Hussain, Bulent Aydogan, Wazir Muhammad
Background and purpose We proposed an artificial neural network model to predict radiobiological parameters for head and neck squamous cell carcinoma patients treated with radiation therapy. The model uses the tumor specifications, demographics, and radiation dose distribution to predict the tumor control probability (TCP) and the normal tissue complication probability (NTCP). These indices are crucial for the assessment and clinical management of cancer patients during treatment planning. Methods Two publicly available datasets of 31 and 215 head and neck squamous cell carcinoma patients treated with conformal radiation therapy were selected. The demographics, tumor specifications, and radiation therapy treatment parameters were extracted from the datasets and used as inputs for training the perceptron. Radiobiological indices were calculated by open-source software using dose-volume histograms from the radiation therapy treatment plans. These indices were used as outputs in training a single-layer neural network. The data were split 70, 15, and 15% for training, validation, and testing, respectively. Results The best performance of the neural network was noted at epoch 32, with a mean squared error of 0.0465. The accuracy of the prediction of radiobiological indices by the artificial neural network in the training, validation, and test phases was 0.89, 0.87, and 0.82, respectively. We also found that the percentage volume of the parotid inside the planning target volume is a significant parameter for predicting the normal tissue complication probability. Conclusion We believe that the model has significant potential to predict radiobiological indices and to help clinicians in treatment plan evaluation and treatment management of head and neck squamous cell carcinoma patients.
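The training setup described above can be sketched as follows. This is not the authors' code: the features, targets, and hyperparameters are synthetic stand-ins, and only the single-layer architecture, mean-squared-error objective, and 70/15/15 split come from the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # stand-ins for demographic/dose features
true_w = rng.normal(size=5)
y = 1.0 / (1.0 + np.exp(-(X @ true_w)))  # stand-in for a TCP/NTCP-like index in [0, 1]

# 70/15/15 train/validation/test split, as reported in the abstract
n = len(X)
i_tr, i_va = int(0.70 * n), int(0.85 * n)
X_tr, y_tr = X[:i_tr], y[:i_tr]
X_va, y_va = X[i_tr:i_va], y[i_tr:i_va]
X_te, y_te = X[i_va:], y[i_va:]

# Single layer: sigmoid(X @ w + b), trained by gradient descent on MSE
w, b, lr = np.zeros(5), 0.0, 1.0
for epoch in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X_tr @ w + b)))
    grad_z = (p - y_tr) * p * (1.0 - p)      # dMSE/dz (constant factor absorbed in lr)
    w -= lr * (X_tr.T @ grad_z) / len(X_tr)
    b -= lr * grad_z.mean()

p_te = 1.0 / (1.0 + np.exp(-(X_te @ w + b)))
mse_test = float(np.mean((p_te - y_te) ** 2))
```

In practice the validation split (`X_va`, `y_va`) would drive early stopping, which is presumably how the authors selected epoch 32.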
{"title":"Artificial neural network-assisted prediction of radiobiological indices in head and neck cancer","authors":"Saad Bin Saeed Ahmed, Shahzaib Naeem, Agha Muhammad Hammad Khan, Bilal Mazhar Qureshi, Amjad Hussain, Bulent Aydogan, Wazir Muhammad","doi":"10.3389/frai.2024.1329737","DOIUrl":"https://doi.org/10.3389/frai.2024.1329737","url":null,"abstract":"Background and purpose We proposed an artificial neural network model to predict radiobiological parameters for the head and neck squamous cell carcinoma patients treated with radiation therapy. The model uses the tumor specification, demographics, and radiation dose distribution to predict the tumor control probability and the normal tissue complications probability. These indices are crucial for the assessment and clinical management of cancer patients during treatment planning. Methods Two publicly available datasets of 31 and 215 head and neck squamous cell carcinoma patients treated with conformal radiation therapy were selected. The demographics, tumor specifications, and radiation therapy treatment parameters were extracted from the datasets used as inputs for the training of perceptron. Radiobiological indices are calculated by open-source software using dosevolume histograms from radiation therapy treatment plans. Those indices were used as output in the training of a single-layer neural network. The distribution of data used for training, validation, and testing purposes was 70, 15, and 15%, respectively. Results The best performance of the neural network was noted at epoch number 32 with the mean squared error of 0.0465. The accuracy of the prediction of radiobiological indices by the artificial neural network in training, validation, and test phases were determined to be 0.89, 0.87, and 0.82, respectively. We also found that the percentage volume of parotid inside the planning target volume is the significant parameter for the prediction of normal tissue complications probability. 
Conclusion We believe that the model has significant potential to predict radiobiological indices and help clinicians in treatment plan evaluation and treatment management of head and neck squamous cell carcinoma patients.","PeriodicalId":508738,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"10 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140739910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-04-05DOI: 10.3389/frai.2024.1208874
O. Scrivner, Thuy Nguyen, Michael Ginda, Kosali Simon, Katy Börner
Background Public health policy researchers face a persistent challenge in identifying and integrating relevant data, particularly in the context of the U.S. opioid crisis, where a comprehensive approach is crucial. Purpose To meet this new workforce demand, health policy and health economics programs are increasingly introducing data analysis and data visualization skills. Such skills facilitate data integration and discovery by linking multiple resources. Common linking strategies include individual- or aggregate-level linking (e.g., patient identifiers) in primary clinical data and conceptual linking (e.g., healthcare workforce, state funding, burnout rates) in secondary data. Often, a combination of primary and secondary datasets is sought, requiring additional skills, for example, understanding metadata and constructing interlinkages. Methods To help improve these skills, we developed a two-step process using a scoping method to discover data and network visualization to interlink metadata. Results We show how these new skills enable the discovery of relationships among data sources pertinent to public policy research on the opioid overdose crisis and facilitate inquiry across heterogeneous data resources. In addition, our interactive network visualization introduces (1) a conceptual approach, drawing from recent systematic review studies and linked by the publications, and (2) an aggregate approach, constructed using publicly available datasets and linked through crosswalks. Conclusions These novel metadata visualization techniques can be used as a teaching tool or a discovery method and can also be extended to other public policy domains.
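The aggregate approach described above links datasets through shared crosswalk fields, and the resulting node-edge structure is what a network visualization then renders. A minimal sketch of that linkage step, with hypothetical dataset and field names (the study's actual crosswalks differ):

```python
# Each dataset is a node; an edge exists wherever two datasets share a
# crosswalk field. Names below are illustrative, not from the study.
datasets = {
    "treatment_admissions": {"state_fips", "year"},
    "overdose_mortality":   {"county_fips", "state_fips", "year"},
    "provider_workforce":   {"npi", "county_fips"},
}

def crosswalk_edges(datasets):
    """Yield (dataset_a, dataset_b, shared_fields) for every linkable pair."""
    names = sorted(datasets)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = datasets[a] & datasets[b]
            if shared:
                yield a, b, sorted(shared)

edges = list(crosswalk_edges(datasets))
```

Passing `edges` to a graph library (e.g., networkx plus a force-directed layout) would reproduce the kind of interactive linkage map the paper describes.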
{"title":"Interactive network visualization of opioid crisis research: a tool for reinforcing data linkage skills for public health policy researchers","authors":"O. Scrivner, Thuy Nguyen, Michael Ginda, Kosali Simon, Katy Börner","doi":"10.3389/frai.2024.1208874","DOIUrl":"https://doi.org/10.3389/frai.2024.1208874","url":null,"abstract":"Background Public health policy researchers face a persistent challenge in identifying and integrating relevant data, particularly in the context of the U.S. opioid crisis, where a comprehensive approach is crucial. Purpose To meet this new workforce demand health policy and health economics programs are increasingly introducing data analysis and data visualization skills. Such skills facilitate data integration and discovery by linking multiple resources. Common linking strategies include individual or aggregate level linking (e.g., patient identifiers) in primary clinical data and conceptual linking (e.g., healthcare workforce, state funding, burnout rates) in secondary data. Often, the combination of primary and secondary datasets is sought, requiring additional skills, for example, understanding metadata and constructing interlinkages. Methods To help improve those skills, we developed a 2-step process using a scoping method to discover data and network visualization to interlink metadata. Results: We show how these new skills enable the discovery of relationships among data sources pertinent to public policy research related to the opioid overdose crisis and facilitate inquiry across heterogeneous data resources. In addition, our interactive network visualization introduces (1) a conceptual approach, drawing from recent systematic review studies and linked by the publications, and (2) an aggregate approach, constructed using publicly available datasets and linked through crosswalks. 
Conclusions These novel metadata visualization techniques can be used as a teaching tool or a discovery method and can also be extended to other public policy domains.","PeriodicalId":508738,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"4 10","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140736959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}