Pub Date: 2024-10-25 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1326153
Djavan De Clercq, Elias Nehring, Harry Mayne, Adam Mahdi
Coverage of ChatGPT-style large language models (LLMs) in the media has focused on their eye-catching achievements, including solving advanced mathematical problems and reaching expert proficiency in medical examinations. But the gradual adoption of LLMs in agriculture, an industry which touches every human life, has received much less public scrutiny. In this short perspective, we examine risks and opportunities related to more widespread adoption of language models in food production systems. While LLMs can potentially enhance agricultural efficiency, drive innovation, and inform better policies, challenges like agricultural misinformation, collection of vast amounts of farmer data, and threats to agricultural jobs are important concerns. The rapid evolution of the LLM landscape underscores the need for agricultural policymakers to think carefully about frameworks and guidelines that ensure the responsible use of LLMs in food production before these technologies become so ingrained that policy intervention becomes challenging.
{"title":"Large language models can help boost food production, but be mindful of their risks.","authors":"Djavan De Clercq, Elias Nehring, Harry Mayne, Adam Mahdi","doi":"10.3389/frai.2024.1326153","DOIUrl":"https://doi.org/10.3389/frai.2024.1326153","url":null,"abstract":"<p><p>Coverage of ChatGPT-style large language models (LLMs) in the media has focused on their eye-catching achievements, including solving advanced mathematical problems and reaching expert proficiency in medical examinations. But the gradual adoption of LLMs in agriculture, an industry which touches every human life, has received much less public scrutiny. In this short perspective, we examine risks and opportunities related to more widespread adoption of language models in food production systems. While LLMs can potentially enhance agricultural efficiency, drive innovation, and inform better policies, challenges like agricultural misinformation, collection of vast amounts of farmer data, and threats to agricultural jobs are important concerns. The rapid evolution of the LLM landscape underscores the need for agricultural policymakers to think carefully about frameworks and guidelines that ensure the responsible use of LLMs in food production before these technologies become so ingrained that policy intervention becomes challenging.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1326153"},"PeriodicalIF":3.0,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11543567/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142629563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-24 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1471208
Stefan Haas, Konstantin Hegestweiler, Michael Rapp, Maximilian Muschalik, Eyke Hüllermeier
Machine learning has made tremendous progress in predictive performance in recent years. Despite these advances, employing machine learning models in high-stakes domains remains challenging due to the opaqueness of many high-performance models. If their behavior cannot be analyzed, this likely decreases trust in such models and hinders their acceptance by human decision-makers. Motivated by these challenges, we propose a process model for developing and evaluating explainable decision support systems that are tailored to the needs of different stakeholders. To demonstrate its usefulness, we apply the process model to a real-world application in an enterprise context. The goal is to increase the acceptance of an existing black-box model developed at a car manufacturer for supporting manual goodwill assessments. Following the proposed process, we conduct two quantitative surveys targeted at the application's stakeholders. Our study reveals that textual explanations based on local feature importance best fit the needs of the stakeholders in the considered use case. Specifically, our results show that all stakeholders, including business specialists, goodwill assessors, and technical IT experts, agree that such explanations significantly increase their trust in the decision support system. Furthermore, our technical evaluation confirms the faithfulness and stability of the selected explanation method. These practical findings demonstrate the potential of our process model to facilitate the successful deployment of machine learning models in enterprise settings. The results emphasize the importance of developing explanations that are tailored to the specific needs and expectations of diverse stakeholders.
{"title":"Stakeholder-centric explanations for black-box decisions: an XAI process model and its application to automotive goodwill assessments.","authors":"Stefan Haas, Konstantin Hegestweiler, Michael Rapp, Maximilian Muschalik, Eyke Hüllermeier","doi":"10.3389/frai.2024.1471208","DOIUrl":"https://doi.org/10.3389/frai.2024.1471208","url":null,"abstract":"<p><p>Machine learning has made tremendous progress in predictive performance in recent years. Despite these advances, employing machine learning models in high-stake domains remains challenging due to the opaqueness of many high-performance models. If their behavior cannot be analyzed, this likely decreases the trust in such models and hinders the acceptance of human decision-makers. Motivated by these challenges, we propose a process model for developing and evaluating explainable decision support systems that are tailored to the needs of different stakeholders. To demonstrate its usefulness, we apply the process model to a real-world application in an enterprise context. The goal is to increase the acceptance of an existing black-box model developed at a car manufacturer for supporting manual goodwill assessments. Following the proposed process, we conduct two quantitative surveys targeted at the application's stakeholders. Our study reveals that textual explanations based on local feature importance best fit the needs of the stakeholders in the considered use case. Specifically, our results show that all stakeholders, including business specialists, goodwill assessors, and technical IT experts, agree that such explanations significantly increase their trust in the decision support system. Furthermore, our technical evaluation confirms the faithfulness and stability of the selected explanation method. These practical findings demonstrate the potential of our process model to facilitate the successful deployment of machine learning models in enterprise settings. The results emphasize the importance of developing explanations that are tailored to the specific needs and expectations of diverse stakeholders.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1471208"},"PeriodicalIF":3.0,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11540772/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142606638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-24 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1471224
Myles Joshua Toledo Tan, Nicholle Mae Amor Tan Maravilla
{"title":"Shaping integrity: why generative artificial intelligence does not have to undermine education.","authors":"Myles Joshua Toledo Tan, Nicholle Mae Amor Tan Maravilla","doi":"10.3389/frai.2024.1471224","DOIUrl":"https://doi.org/10.3389/frai.2024.1471224","url":null,"abstract":"","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1471224"},"PeriodicalIF":3.0,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11540794/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142606634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-23 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1446640
Yan Xi, Yi Xu, Zheng Shu
Objective: This study utilized artificial intelligence (AI) to quantify coronary computed tomography angiography (CCTA) images, aiming to compare plaque characteristics and CT-derived fractional flow reserve (FFR-CT) in type 2 diabetes mellitus (T2DM) patients with or without hypertension (HTN).
Methods: A retrospective analysis was conducted on 1,151 patients with suspected coronary artery disease who underwent CCTA at a single center. Patients were grouped into T2DM (n = 133), HTN (n = 442), T2DM (HTN+) (n = 256), and control (n = 320). AI assessed various CCTA parameters, including plaque components, high-risk plaques (HRPs), FFR-CT, severity of coronary stenosis using Coronary Artery Disease Reporting and Data System 2.0 (CAD-RADS 2.0), segment involvement score (SIS), and segment stenosis score (SSS). Statistical analysis compared these parameters among groups.
Results: The T2DM (HTN+) group had the highest plaque volume and length, SIS, SSS, and CAD-RADS 2.0 classification. In the T2DM group, 54.0% of the plaque volume was noncalcified and 46.0% was calcified, while in the HTN group these values were 24.0% and 76.0%, respectively. The T2DM (HTN+) group had more calcified plaques (35.7% noncalcified, 64.3% calcified) than the T2DM group. The average necrotic core volume was 4.25 mm³ in the T2DM group and 5.23 mm³ in the T2DM (HTN+) group, with no significant difference (p > 0.05). HRPs were more prevalent in both T2DM and T2DM (HTN+) compared to HTN and control groups (p < 0.05). The T2DM (HTN+) group had a higher likelihood (26.1%) of FFR-CT ≤0.75 compared to the T2DM group (13.8%). FFR-CT ≤0.75 correlated with CAD-RADS 2.0 (OR = 7.986, 95% CI = 5.466-11.667, cutoff = 3, p < 0.001) and noncalcified plaque volume (OR = 1.006, 95% CI = 1.003-1.009, cutoff = 29.65 mm³, p < 0.001). HRPs were associated with HbA1c levels (OR = 1.631, 95% CI = 1.387-1.918).
Conclusion: AI analysis of CCTA identifies patterns in quantitative plaque characteristics and FFR-CT values. Comorbid HTN exacerbates partially calcified plaques, leading to more severe coronary artery stenosis in patients with T2DM. T2DM is associated with partially noncalcified plaques, whereas HTN is linked to partially calcified plaques.
{"title":"Impact of hypertension on coronary artery plaques and FFR-CT in type 2 diabetes mellitus patients: evaluation utilizing artificial intelligence processed coronary computed tomography angiography.","authors":"Yan Xi, Yi Xu, Zheng Shu","doi":"10.3389/frai.2024.1446640","DOIUrl":"10.3389/frai.2024.1446640","url":null,"abstract":"<p><strong>Objective: </strong>This study utilized artificial intelligence (AI) to quantify coronary computed tomography angiography (CCTA) images, aiming to compare plaque characteristics and CT-derived fractional flow reserve (FFR-CT) in type 2 diabetes mellitus (T2DM) patients with or without hypertension (HTN).</p><p><strong>Methods: </strong>A retrospective analysis was conducted on 1,151 patients with suspected coronary artery disease who underwent CCTA at a single center. Patients were grouped into T2DM (<i>n</i> = 133), HTN (<i>n</i> = 442), T2DM (HTN+) (<i>n</i> = 256), and control (<i>n</i> = 320). AI assessed various CCTA parameters, including plaque components, high-risk plaques (HRPs), FFR-CT, severity of coronary stenosis using Coronary Artery Disease Reporting and Data System 2.0 (CAD-RADS 2.0), segment involvement score (SIS), and segment stenosis score (SSS). Statistical analysis compared these parameters among groups.</p><p><strong>Results: </strong>The T2DM (HTN+) group had the highest plaque volume and length, SIS, SSS, and CAD-RADS 2.0 classification. In the T2DM group, 54.0% of the plaque volume was noncalcified and 46.0% was calcified, while in the HTN group, these values were 24.0 and 76.0%, respectively. The T2DM (HTN+) group had more calcified plaques (35.7% noncalcified, 64.3% calcified) than the T2DM group. The average necrotic core volume was 4.25 mm<sup>3</sup> in the T2DM group and 5.23 mm<sup>3</sup> in the T2DM (HTN+) group, with no significant difference (<i>p</i> > 0.05). HRPs were more prevalent in both T2DM and T2DM (HTN+) compared to HTN and control groups (<i>p</i> < 0.05). The T2DM (HTN+) group had a higher likelihood (26.1%) of FFR-CT ≤0.75 compared to the T2DM group (13.8%). FFR-CT ≤0.75 correlated with CAD-RADS 2.0 (OR = 7.986, 95% CI = 5.466-11.667, cutoff = 3, <i>p</i> < 0.001) and noncalcified plaque volume (OR = 1.006, 95% CI = 1.003-1.009, cutoff = 29.65 mm<sup>3</sup>, <i>p</i> < 0.001). HRPs were associated with HbA1c levels (OR = 1.631, 95% CI = 1.387-1.918).</p><p><strong>Conclusion: </strong>AI analysis of CCTA identifies patterns in quantitative plaque characteristics and FFR-CT values. Comorbid HTN exacerbates partially calcified plaques, leading to more severe coronary artery stenosis in patients with T2DM. T2DM is associated with partially noncalcified plaques, whereas HTN is linked to partially calcified plaques.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1446640"},"PeriodicalIF":3.0,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11537896/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142591475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-23 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1388479
Hassan S Al Khatib, Subash Neupane, Harish Kumar Manchukonda, Noorbakhsh Amiri Golilarz, Sudip Mittal, Amin Amirlatifi, Shahram Rahimi
Patient-Centric Knowledge Graphs (PCKGs) represent an important shift in healthcare that focuses on individualized patient care by mapping the patient's health information holistically and multi-dimensionally. PCKGs integrate various types of health data to provide healthcare professionals with a comprehensive understanding of a patient's health, enabling more personalized and effective care. This literature review explores the methodologies, challenges, and opportunities associated with PCKGs, focusing on their role in integrating disparate healthcare data and enhancing patient care through a unified health perspective. In addition, this review also discusses the complexities of PCKG development, including ontology design, data integration techniques, knowledge extraction, and structured representation of knowledge. It highlights advanced techniques such as reasoning, semantic search, and inference mechanisms essential in constructing and evaluating PCKGs for actionable healthcare insights. We further explore the practical applications of PCKGs in personalized medicine, emphasizing their significance in improving disease prediction and formulating effective treatment plans. Overall, this review provides a foundational perspective on the current state-of-the-art and best practices of PCKGs, guiding future research and applications in this dynamic field.
{"title":"Patient-centric knowledge graphs: a survey of current methods, challenges, and applications.","authors":"Hassan S Al Khatib, Subash Neupane, Harish Kumar Manchukonda, Noorbakhsh Amiri Golilarz, Sudip Mittal, Amin Amirlatifi, Shahram Rahimi","doi":"10.3389/frai.2024.1388479","DOIUrl":"10.3389/frai.2024.1388479","url":null,"abstract":"<p><p>Patient-Centric Knowledge Graphs (PCKGs) represent an important shift in healthcare that focuses on individualized patient care by mapping the patient's health information holistically and multi-dimensionally. PCKGs integrate various types of health data to provide healthcare professionals with a comprehensive understanding of a patient's health, enabling more personalized and effective care. This literature review explores the methodologies, challenges, and opportunities associated with PCKGs, focusing on their role in integrating disparate healthcare data and enhancing patient care through a unified health perspective. In addition, this review also discusses the complexities of PCKG development, including ontology design, data integration techniques, knowledge extraction, and structured representation of knowledge. It highlights advanced techniques such as reasoning, semantic search, and inference mechanisms essential in constructing and evaluating PCKGs for actionable healthcare insights. We further explore the practical applications of PCKGs in personalized medicine, emphasizing their significance in improving disease prediction and formulating effective treatment plans. Overall, this review provides a foundational perspective on the current state-of-the-art and best practices of PCKGs, guiding future research and applications in this dynamic field.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1388479"},"PeriodicalIF":3.0,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11558794/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142629565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-23 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1460337
Frederik Dilling, Marc Herrmann
In this exploratory study, the potential of large language models (LLMs), specifically ChatGPT, to support pre-service primary education mathematics teachers in constructing mathematical proofs in geometry is investigated. Utilizing the theoretical framework of instrumental genesis, the prior experiences of students with LLMs, their beliefs about the operating principle, and their interactions with the chatbot are analyzed. Using qualitative content analysis, inductive categories for these aspects are formed. Results indicate that students had limited prior experiences with LLMs and used them predominantly for applications that are not mathematics specific. Regarding their beliefs, most show only superficial knowledge about the technology, and misconceptions are common. The analysis of interactions revealed multiple types of prompts, some of them mathematics-specific, and patterns on three different levels, from single prompts to whole chat interactions.
{"title":"Using large language models to support pre-service teachers mathematical reasoning-an exploratory study on ChatGPT as an instrument for creating mathematical proofs in geometry.","authors":"Frederik Dilling, Marc Herrmann","doi":"10.3389/frai.2024.1460337","DOIUrl":"10.3389/frai.2024.1460337","url":null,"abstract":"<p><p>In this exploratory study, the potential of large language models (LLMs), specifically ChatGPT to support pre-service primary education mathematics teachers in constructing mathematical proofs in geometry is investigated. Utilizing the theoretical framework of instrumental genesis, the prior experiences of students with LLMs, their beliefs about the operating principle and their interactions with the chatbot are analyzed. Using qualitative content analysis, inductive categories for these aspects are formed. Results indicate that students had limited prior experiences with LLMs and used them predominantly for applications that are not mathematics specific. Regarding their beliefs, most show only superficial knowledge about the technology and misconceptions are common. The analysis of interactions showed multiple types of in parts mathematics-specific prompts and patterns on three different levels from single prompts to whole chat interactions.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1460337"},"PeriodicalIF":3.0,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11537848/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142591477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-22 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1451926
Keita Tokuda, Yuichi Katori
Introduction: Nonlinear and non-stationary processes are prevalent in various natural and physical phenomena, where system dynamics can change qualitatively due to bifurcation phenomena. Machine learning methods have advanced our ability to learn and predict such systems from observed time series data. However, predicting the behavior of systems with temporal parameter variations without knowledge of true parameter values remains a significant challenge.
Methods: This study uses a reservoir computing framework to address this problem through the unsupervised extraction of slowly varying system parameters from time series data. We propose a model architecture consisting of a slow reservoir with long-timescale internal dynamics and a fast reservoir with short-timescale dynamics. The slow reservoir extracts the temporal variation of system parameters, which are then used to predict unknown bifurcations in the fast dynamics.
Results: Through experiments on chaotic dynamical systems, our proposed model successfully extracted slowly varying system parameters and predicted bifurcations that were not included in the training data. The model demonstrated robust predictive performance, showing that the reservoir computing framework can handle nonlinear, non-stationary systems without prior knowledge of the system's true parameters.
Discussion: Our approach shows potential for applications in fields such as neuroscience, material science, and weather prediction, where slow dynamics influencing qualitative changes are often unobservable.
{"title":"Prediction of unobserved bifurcation by unsupervised extraction of slowly time-varying system parameter dynamics from time series using reservoir computing.","authors":"Keita Tokuda, Yuichi Katori","doi":"10.3389/frai.2024.1451926","DOIUrl":"10.3389/frai.2024.1451926","url":null,"abstract":"<p><strong>Introduction: </strong>Nonlinear and non-stationary processes are prevalent in various natural and physical phenomena, where system dynamics can change qualitatively due to bifurcation phenomena. Machine learning methods have advanced our ability to learn and predict such systems from observed time series data. However, predicting the behavior of systems with temporal parameter variations without knowledge of true parameter values remains a significant challenge.</p><p><strong>Methods: </strong>This study uses reservoir computing framework to address this problem by unsupervised extraction of slowly varying system parameters from time series data. We propose a model architecture consisting of a slow reservoir with long timescale internal dynamics and a fast reservoir with short timescale dynamics. The slow reservoir extracts the temporal variation of system parameters, which are then used to predict unknown bifurcations in the fast dynamics.</p><p><strong>Results: </strong>Through experiments on chaotic dynamical systems, our proposed model successfully extracted slowly varying system parameters and predicted bifurcations that were not included in the training data. The model demonstrated robust predictive performance, showing that the reservoir computing framework can handle nonlinear, non-stationary systems without prior knowledge of the system's true parameters.</p><p><strong>Discussion: </strong>Our approach shows potential for applications in fields such as neuroscience, material science, and weather prediction, where slow dynamics influencing qualitative changes are often unobservable.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1451926"},"PeriodicalIF":3.0,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11534796/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142584440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-21 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1414122
Ramprasath Jayaprakash, Krishnaraj Natarajan, J Alfred Daniel, Chandru Vignesh Chinnappan, Jayant Giri, Hong Qin, Saurav Mallik
Life has become more comfortable in the era of advanced technology, but harmful technologies that pose a threat are also emerging. Phishing is one of the rising concerns: it steals vital information such as passwords, security codes, and personal data from a target node through communication-hijacking techniques. Phishing attacks also involve delivering false messages that appear to originate from a trusted source, aiming to get the victim to run malicious programs and reveal confidential data such as bank credentials, one-time passwords, and user login credentials. The sole intention is to collect personal information through malicious attempts embedded in URLs, emails, and websites. The proposed technique detects URL-, email-, and website-based phishing attacks, helping to protect users from scam attempts. The data are pre-processed using data cleaning and attribute selection, and attacks are then detected using machine learning techniques; specifically, the proposed approach uses heuristic-based machine learning to identify phishing attacks. For URL phishing, 56 features are used for analysis, and experimental results show that the proposed technique achieves an accuracy of 97.2%. The proposed technique for email phishing detection obtains a higher accuracy of 97.4%, and the proposed technique for website phishing detection achieves an accuracy of 98.1%, with 48 features used for analysis.
{"title":"Heuristic machine learning approaches for identifying phishing threats across web and email platforms.","authors":"Ramprasath Jayaprakash, Krishnaraj Natarajan, J Alfred Daniel, Chandru Vignesh Chinnappan, Jayant Giri, Hong Qin, Saurav Mallik","doi":"10.3389/frai.2024.1414122","DOIUrl":"10.3389/frai.2024.1414122","url":null,"abstract":"<p><p>Life has become more comfortable in the era of advanced technology in this cutthroat competitive world. However, there are also emerging harmful technologies that pose a threat. Without a doubt, phishing is one of the rising concerns that leads to stealing vital information such as passwords, security codes, and personal data from any target node through communication hijacking techniques. In addition, phishing attacks include delivering false messages that originate from a trusted source. Moreover, a phishing attack aims to get the victim to run malicious programs and reveal confidential data, such as bank credentials, one-time passwords, and user login credentials. The sole intention is to collect personal information through malicious program-based attempts embedded in URLs, emails, and website-based attempts. Notably, this proposed technique detects URL, email, and website-based phishing attacks, which will be beneficial and secure us from scam attempts. Subsequently, the data are pre-processed to identify phishing attacks using data cleaning, attribute selection, and attacks detected using machine learning techniques. Furthermore, the proposed techniques use heuristic-based machine learning to identify phishing attacks. Admittedly, 56 features are used to analyze URL phishing findings, and experimental results show that the proposed technique has a better accuracy of 97.2%. Above all, the proposed techniques for email phishing detection obtain a higher accuracy of 97.4%. In addition, the proposed technique for website phishing detection has a better accuracy of 98.1%, and 48 features are used for analysis.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1414122"},"PeriodicalIF":3.0,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11532189/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142577008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-21 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1446063
Meshari Alazmi
Introduction: In the intricate realm of enzymology, the precise quantification of enzyme efficiency, epitomized by the turnover number (kcat), is a paramount yet elusive objective. Existing methodologies, though sophisticated, often grapple with the inherent stochasticity and multifaceted nature of enzymatic reactions. Thus, there arises a necessity to explore avant-garde computational paradigms.
Methods: In this context, we introduce "enzyme catalytic efficiency prediction (ECEP)," leveraging advanced deep learning techniques to enhance the previous implementation, TurNuP, for predicting the enzyme catalase kcat. Our approach significantly outperforms prior methodologies, incorporating new features derived from enzyme sequences and chemical reaction dynamics. Through ECEP, we unravel the intricate enzyme-substrate interactions, capturing the nuanced interplay of molecular determinants.
Results: Preliminary assessments, compared against established models like TurNuP and DLKcat, underscore the superior predictive capabilities of ECEP, marking a pivotal shift in in silico enzymatic turnover number estimation. This study enriches the computational toolkit available to enzymologists and lays the groundwork for future explorations in the burgeoning field of bioinformatics. The paper proposes a multi-feature ensemble deep learning-based approach to predict enzyme kinetic parameters using an ensemble convolutional neural network and XGBoost, calculating a weighted average of each feature-based model's output to outperform traditional machine learning methods. The proposed ECEP model significantly outperformed existing methodologies, reducing the mean squared error (MSE) by 0.35, from 0.81 to 0.46, and improving the R-squared score from 0.44 to 0.54, thereby demonstrating its superior accuracy and effectiveness in enzyme catalytic efficiency prediction.
Discussion: This improvement underscores the model's potential to enhance the field of bioinformatics, setting a new benchmark for performance.
{"title":"Enzyme catalytic efficiency prediction: employing convolutional neural networks and XGBoost.","authors":"Meshari Alazmi","doi":"10.3389/frai.2024.1446063","DOIUrl":"10.3389/frai.2024.1446063","url":null,"abstract":"<p><strong>Introduction: </strong>In the intricate realm of enzymology, the precise quantification of enzyme efficiency, epitomized by the turnover number (<i>k</i> <sub>cat</sub>), is a paramount yet elusive objective. Existing methodologies, though sophisticated, often grapple with the inherent stochasticity and multifaceted nature of enzymatic reactions. Thus, there arises a necessity to explore avant-garde computational paradigms.</p><p><strong>Methods: </strong>In this context, we introduce \"enzyme catalytic efficiency prediction (ECEP),\" leveraging advanced deep learning techniques to enhance the previous implementation, TurNuP, for predicting the enzyme catalase <i>k</i> <sub>cat</sub>. Our approach significantly outperforms prior methodologies, incorporating new features derived from enzyme sequences and chemical reaction dynamics. Through ECEP, we unravel the intricate enzyme-substrate interactions, capturing the nuanced interplay of molecular determinants.</p><p><strong>Results: </strong>Preliminary assessments, compared against established models like TurNuP and DLKcat, underscore the superior predictive capabilities of ECEP, marking a pivotal shift <i>in silico</i> enzymatic turnover number estimation. This study enriches the computational toolkit available to enzymologists and lays the groundwork for future explorations in the burgeoning field of bioinformatics. This paper suggested a multi-feature ensemble deep learning-based approach to predict enzyme kinetic parameters using an ensemble convolution neural network and XGBoost by calculating weighted-average of each feature-based model's output to outperform traditional machine learning methods. The proposed \"ECEP\" model significantly outperformed existing methodologies, achieving a mean squared error (MSE) reduction of 0.35 from 0.81 to 0.46 and <i>R</i>-squared score from 0.44 to 0.54, thereby demonstrating its superior accuracy and effectiveness in enzyme catalytic efficiency prediction.</p><p><strong>Discussion: </strong>This improvement underscores the model's potential to enhance the field of bioinformatics, setting a new benchmark for performance.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1446063"},"PeriodicalIF":3.0,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11532030/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142577006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-17 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1456486
Jenia Kim, Henry Maathuis, Danielle Sent
Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there has been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how. This systematic literature review addresses this gap by providing a detailed overview of the current state of affairs in human-centered XAI evaluation. We reviewed 73 papers across various domains where XAI was evaluated with users. These studies assessed what makes an explanation "good" from a user's perspective, i.e., what makes an explanation meaningful to a user of an AI system. We identified 30 components of meaningful explanations that were evaluated in the reviewed papers and categorized them into a taxonomy of human-centered XAI evaluation, based on: (a) the contextualized quality of the explanation, (b) the contribution of the explanation to human-AI interaction, and (c) the contribution of the explanation to human-AI performance. Our analysis also revealed a lack of standardization in the methodologies applied in XAI user studies, with only 19 of the 73 papers applying an evaluation framework used by at least one other study in the sample. These inconsistencies hinder cross-study comparisons and broader insights. Our findings contribute to understanding what makes explanations meaningful to users and how to measure this, guiding the XAI community toward a more unified approach to human-centered explainability.
{"title":"Human-centered evaluation of explainable AI applications: a systematic review.","authors":"Jenia Kim, Henry Maathuis, Danielle Sent","doi":"10.3389/frai.2024.1456486","DOIUrl":"10.3389/frai.2024.1456486","url":null,"abstract":"<p><p>Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there's been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how. This systematic literature review addresses this gap by providing a detailed overview of the current state of affairs in human-centered XAI evaluation. We reviewed 73 papers across various domains where XAI was evaluated with users. These studies assessed what makes an explanation \"good\" from a user's perspective, i.e., what makes an explanation <i>meaningful</i> to a user of an AI system. We identified 30 components of meaningful explanations that were evaluated in the reviewed papers and categorized them into a taxonomy of human-centered XAI evaluation, based on: (a) the contextualized quality of the explanation, (b) the contribution of the explanation to human-AI interaction, and (c) the contribution of the explanation to human-AI performance. Our analysis also revealed a lack of standardization in the methodologies applied in XAI user studies, with only 19 of the 73 papers applying an evaluation framework used by at least one other study in the sample. These inconsistencies hinder cross-study comparisons and broader insights. Our findings contribute to understanding what makes explanations meaningful to users and how to measure this, guiding the XAI community toward a more unified approach in human-centered explainability.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1456486"},"PeriodicalIF":3.0,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11525002/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142558991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}