OnAIR: Applications of the NASA on-board artificial intelligence research platform
Evana Gizzi, Connor Firth, Caleb Adams, James Berck, P. Timothy Chase Jr, Christian Cassamajor-Paul, Rachael Chertok, Lily Clough, Jonathan Davis, Melissa De La Cruz, Matthew Dosberg, Alan Gibson, Jonathan Hammer, Ibrahim Haroon, Michael A. Johnson, Brian Kempa, James Marshall, Patrick Maynard, Brett McKinney, Leyton McKinney, Michael Monaghan, Robin Onsay, Hayley Owens, Sam Pedrotty, Daniel Rogers, Mahmooda Sultana, Jivko Sinapov, Bethany Theiling, Aaron Woodard, Caroline Zouloumian, Connor Williams
AI Magazine 46(3), September 15, 2025. DOI: 10.1002/aaai.70020

Infusing artificial intelligence algorithms into production aerospace systems can be challenging due to costs, timelines, and a risk-averse industry. We introduce the Onboard Artificial Intelligence Research (OnAIR) platform, an open-source software pipeline and cognitive architecture tool that enables full life cycle AI research for on-board intelligent systems. We begin with a description and user walk-through of the OnAIR tool. Next, we describe four use cases of OnAIR, spanning both research and deployed onboard applications, detailing how each used OnAIR and the benefits it provided to the development and function of each scenario. Lastly, we describe two planned deployments that will leverage OnAIR for crucial mission outcomes. We conclude with remarks on future work and goals for the continued progression of OnAIR as a tool to enable a larger AI and aerospace research community.

Developing generative recommender systems for government subsidy programs with a new RQ-VAE model: Wello and the Korean government case
Ji Won Kim, Jae Hong Park, Yuri Anna Kim, Sang Jun Lee
AI Magazine 46(3), September 8, 2025. DOI: 10.1002/aaai.70029

According to an industry survey, many people miss opportunities to apply for government subsidy programs because they do not know how to apply, and they must manually search for and assess which programs suit them. To address this issue, we developed a new generative recommender system that draws on both users' information and government subsidy documents. Within our recommender system framework, we modify the existing Residual Quantization Variational Autoencoder (RQ-VAE) model to capture deep, abstract information from subsidy documents. Using semantic IDs generated from approximately 185,610 user click-stream histories and 240,000 documents, we train our recommender system to predict the semantic IDs of the next subsidy policy documents in which a user might be interested. In 2024, we successfully deployed our generative recommender system at Wello, a Korean Gov-Tech startup. In collaboration with the Korean government, the system helped enhance program effectiveness, saving $7.8 million in unused funds and achieving $27.4 million in advertising efficiency gains. Wello also observed a 68% relative improvement in click-through rate (CTR), which rose from 41.4% in the third quarter of 2024 to 69.6% in the fourth quarter. We thus anticipate that our generative recommender system will have a significant impact on both individuals and the government.

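The residual-quantization step behind the semantic IDs described above can be sketched as follows. This is a minimal illustration, not Wello's actual model: the codebook count, codebook size, and embedding dimension are invented, and a trained RQ-VAE would learn its codebooks rather than sample them randomly.

```python
# Minimal sketch of residual quantization: map an embedding to a tuple of
# code indices (a "semantic ID") by repeatedly snapping the residual to
# the nearest codebook vector. All sizes here are illustrative.
import numpy as np

def residual_quantize(embedding, codebooks):
    """Return the semantic ID of `embedding` under a list of codebooks.

    At each level, pick the codebook vector nearest to the current
    residual, record its index, and subtract it so the next level
    quantizes only what remains.
    """
    residual = embedding.astype(float)
    semantic_id = []
    for codebook in codebooks:               # one codebook per level
        dists = np.linalg.norm(codebook - residual, axis=1)
        idx = int(np.argmin(dists))          # nearest code at this level
        semantic_id.append(idx)
        residual = residual - codebook[idx]  # pass the residual down
    return tuple(semantic_id)

rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(256, 32)) for _ in range(3)]  # 3 levels, 256 codes each
doc_embedding = rng.normal(size=32)                          # stand-in document embedding
print(residual_quantize(doc_embedding, codebooks))           # 3-level semantic ID
```

Because documents and users share the same discrete ID space, a sequence model can then be trained to predict the next document's semantic ID from a user's click history.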
Evaluation and incident prevention in an enterprise AI assistant
Akash V. Maharaj, David Arbour, Daniel Lee, Uttaran Bhattacharya, Anup Rao, Austin Zane, Avi Feller, Kun Qian, Sajjadur Rahman, Yunyao Li
AI Magazine 46(3), September 8, 2025. DOI: 10.1002/aaai.70028

Enterprise AI Assistants are increasingly deployed in domains where accuracy is paramount, making each erroneous output a potentially significant incident. This paper presents a comprehensive framework for monitoring, benchmarking, and continuously improving such complex, multi-component systems under active development by multiple teams. Our approach encompasses three key elements: (1) a hierarchical “severity” framework for incident detection that identifies and categorizes errors while attributing component-specific error rates, facilitating targeted improvements; (2) a scalable and principled methodology for benchmark construction, evaluation, and deployment, designed to accommodate multiple development teams, mitigate overfitting risks, and assess the downstream impact of system modifications; and (3) a continual improvement strategy leveraging multidimensional evaluation, enabling the identification and implementation of diverse enhancement opportunities. By adopting this holistic framework, organizations can systematically enhance the reliability and performance of their AI Assistants, ensuring their efficacy in critical enterprise environments. We conclude by discussing how this multifaceted approach opens avenues for various classes of enhancements, including human-AI collaborative evaluation, paving the way for more robust and trustworthy AI systems.

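The kind of component-specific error attribution that a severity framework enables can be sketched in a few lines. The components, severity weights, and incident records below are invented for illustration and are not the paper's actual taxonomy.

```python
# Toy sketch of severity-weighted error attribution: roll incidents up to
# the component that produced them, weight by severity, and normalize by
# traffic so teams can target the worst offender. All data is illustrative.
from collections import Counter

SEVERITY = {"critical": 3, "major": 2, "minor": 1}  # hypothetical weights

incidents = [  # (component attributed with the error, severity label)
    ("retriever", "minor"),
    ("planner", "critical"),
    ("retriever", "major"),
    ("generator", "minor"),
]
requests_served = {"retriever": 120, "planner": 80, "generator": 200}

def weighted_error_rates(incidents, totals):
    """Severity-weighted error rate per component: incident weight / requests served."""
    weight = Counter()
    for component, severity in incidents:
        weight[component] += SEVERITY[severity]
    return {c: weight[c] / totals[c] for c in totals}

rates = weighted_error_rates(incidents, requests_served)
worst = max(rates, key=rates.get)  # the component to prioritize for improvement
print(rates)
print(worst)
```

Note how the raw counts alone would point at the retriever (two incidents), while severity weighting surfaces the planner's single critical failure as the higher-priority target.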
Introduction to the special issue on innovative applications of artificial intelligence (IAAI 2025)
Serdar Kadıoğlu, Sean McGregor, Jan Seyler
AI Magazine 46(3), September 8, 2025. DOI: 10.1002/aaai.70027

This year's innovative applications of AI special issue features AI systems deployed in real-world settings, from enterprise platforms to public services, demonstrating both technical rigor and measurable benefits for organizations and society. The eight selected articles span enterprise reliability, cybersecurity, aerospace, education, healthcare logistics, government services, and scalable AI strategy. Collectively, these works illustrate how AI is progressing from research prototypes to systems that organizations now rely on for critical decisions, offering lessons learned for both researchers and practitioners.

Recent advances in finetuning multimodal large language models
Zhen Wang, Lin Li, Long Chen
AI Magazine 46(3), September 3, 2025. DOI: 10.1002/aaai.70025

Finetuning serves as the critical adaptation mechanism for multimodal large language models, bridging their pretrained knowledge with specialized downstream task requirements. This paper reviews recent finetuning advances across three key dimensions: (1) efficiency-oriented methods that reduce resource costs; (2) capability-specific techniques enhancing specialized multimodal skills; and (3) task-unifying approaches that bridge understanding and generation. We demonstrate how these directions transform multimodal large language models from versatile foundations into adaptive, human-aligned systems, providing researchers with a structured roadmap for developing next-generation multimodal AI.

Toward robust, interactive, and human-aligned AI systems
Daniel S. Brown
AI Magazine 46(3), August 29, 2025. DOI: 10.1002/aaai.70024

Ensuring that AI systems do what we, as humans, actually want them to do is one of the biggest open research challenges in AI alignment and safety. My research seeks to directly address this challenge by enabling AI systems to interact with humans to learn aligned and robust behaviors. The way robots and other AI systems behave is often the result of optimizing a reward function. However, manually designing good reward functions is highly challenging and error-prone, even for domain experts. Although reward functions are often difficult to manually specify, human feedback in the form of demonstrations or preferences is often much easier to obtain but can be difficult to interpret due to ambiguity and noise. Thus, it is critical that AI systems take into account epistemic uncertainty over the human's true intent. As part of the AAAI New Faculty Highlight Program, I will give an overview of my research progress along the following fundamental research areas: (1) efficiently quantifying uncertainty over human intent, (2) directly optimizing behavior to be robust to uncertainty over human intent, and (3) actively querying for additional human input to reduce uncertainty over human intent.

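One standard way to quantify epistemic uncertainty over human intent, as discussed above, is to maintain a posterior over candidate reward functions given preference data, using a Bradley-Terry (Boltzmann) likelihood. The toy sketch below is illustrative only: the two reward hypotheses, the trajectory features, and the preferences are invented, and it is not the author's specific method.

```python
# Toy sketch: posterior over candidate reward functions from preferences,
# via a Bradley-Terry likelihood. Hypotheses and data are illustrative.
import math

# Each hypothesis maps a trajectory's features to a scalar reward.
HYPOTHESES = {
    "prefers_speed":  lambda f: 2.0 * f["speed"]  - 0.5 * f["safety"],
    "prefers_safety": lambda f: 2.0 * f["safety"] - 0.5 * f["speed"],
}

# Human preferences: (features of preferred trajectory, features of rejected one).
preferences = [
    ({"speed": 0.2, "safety": 0.9}, {"speed": 0.9, "safety": 0.1}),
    ({"speed": 0.3, "safety": 0.8}, {"speed": 0.8, "safety": 0.3}),
]

def posterior(hypotheses, preferences, beta=5.0):
    """P(hypothesis | preferences) under a Bradley-Terry choice model.

    beta is the assumed rationality of the human: higher beta means
    preferences are treated as more reliable evidence.
    """
    weights = {}
    for name, reward in hypotheses.items():
        log_lik = 0.0
        for preferred, rejected in preferences:
            margin = reward(preferred) - reward(rejected)
            log_lik += -math.log(1.0 + math.exp(-beta * margin))  # log sigmoid
        weights[name] = math.exp(log_lik)          # uniform prior assumed
    z = sum(weights.values())
    return {name: w / z for name, w in weights.items()}

post = posterior(HYPOTHESES, preferences)
print(post)  # mass concentrates on the hypothesis consistent with the choices
```

With the posterior in hand, a robust policy can optimize against the uncertainty (e.g., worst case over likely rewards), and an active learner can query the preference pair that most reduces it.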
Multisensory machine intelligence
Ruohan Gao
AI Magazine 46(3), August 26, 2025. DOI: 10.1002/aaai.70026

The future of artificial intelligence demands a paradigm shift toward multisensory perception—to systems that can digest ongoing multisensory observations, that can discover structure in unlabeled raw sensory data, and that can intelligently fuse useful information from different sensory modalities for decision-making. While we humans naturally perceive the world by looking, listening, touching, smelling, and tasting, traditional forms of machine intelligence mostly focus on a single sensory modality, particularly vision. Therefore, my research, which I refer to as multisensory machine intelligence, seeks to bridge this gap by empowering machines to emulate and enhance human capabilities in seeing, hearing, and feeling, ultimately enabling them to comprehensively perceive, understand, and interact with the multisensory world.

Semi-Markovian planning to coordinate aerial and maritime medical evacuation platforms
Mahdi Al-Husseini, Kyle H. Wray, Mykel J. Kochenderfer
AI Magazine 46(3), August 26, 2025. DOI: 10.1002/aaai.70023

The transfer of patients between two aircraft using an underway watercraft increases medical evacuation reach and flexibility in maritime environments. Selecting one of multiple underway watercraft for patient exchange is complicated by the participating aircraft's utilization histories and the watercraft's positions and velocities. The selection problem is modeled as a semi-Markov decision process with an action space that includes both fixed land-based and moving watercraft exchange points. Monte Carlo tree search with root parallelization is used to select optimal exchange points and determine aircraft dispatch times. Model parameters are varied in simulation to identify representative scenarios where watercraft exchange points reduce incident response times. We find that an optimal policy with watercraft exchange points outperforms an optimal policy without them and a greedy policy by 35% and 40%, respectively. In partnership with the United States Army, we deploy the watercraft exchange point for the first time, executing a mock patient transfer with a manikin between two HH-60M medical evacuation helicopters and an underway Army Logistic Support Vessel south of the Hawaiian island of Oahu. Both helicopters were dispatched in accordance with our optimized decision strategy.

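Root parallelization, mentioned above, means running several independent searches from the same root state and merging their root statistics before committing to an action. The sketch below illustrates only that merging step: for brevity each worker performs flat Monte Carlo rollouts rather than a full tree search, and the exchange points and response-time model are invented, not the paper's.

```python
# Minimal sketch of root parallelization: independent search workers share
# nothing during search, then their root visit/value statistics are summed
# and the action with the best merged mean is chosen. Data is illustrative.
import random
from collections import defaultdict

ACTIONS = ["land_site", "watercraft_A", "watercraft_B"]  # hypothetical exchange points

def rollout_reward(action, rng):
    """Toy stochastic response-time model: lower expected time, higher reward."""
    mean_minutes = {"land_site": 60.0, "watercraft_A": 45.0, "watercraft_B": 50.0}
    return -rng.gauss(mean_minutes[action], 5.0)

def search_worker(seed, n_rollouts=200):
    """One independent search: per-action visit counts and total rollout reward."""
    rng = random.Random(seed)
    visits, value = defaultdict(int), defaultdict(float)
    for _ in range(n_rollouts):
        a = rng.choice(ACTIONS)
        visits[a] += 1
        value[a] += rollout_reward(a, rng)
    return visits, value

def root_parallel_search(n_workers=4):
    """Merge root statistics across workers, then pick the best mean-reward action."""
    visits, value = defaultdict(int), defaultdict(float)
    for seed in range(n_workers):       # a real system runs these concurrently
        v, q = search_worker(seed)
        for a in ACTIONS:
            visits[a] += v[a]
            value[a] += q[a]
    return max(ACTIONS, key=lambda a: value[a] / max(visits[a], 1))

print(root_parallel_search())
```

Because the workers never contend for a shared tree, root parallelization scales almost trivially across cores, at the cost of each worker exploring redundantly; the paper applies the same idea atop full MCTS over the semi-Markov model.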
Reclaiming authorship in the age of generative AI: From panic to possibility
Mohsen Askari
AI Magazine 46(3), August 26, 2025. DOI: 10.1002/aaai.70022

The advent of generative AI, particularly large language models like ChatGPT, has precipitated a seismic shift in academia. Far from a gradual evolution, its sudden emergence has jolted educational institutions, leaving many academics grappling with a perceived encroachment upon their intellectual domain. This upheaval has sparked intense debates, with concerns ranging from the erosion of academic integrity to the devaluation of scholarly labor. This essay contends that such apprehensions, while understandable, may overlook the transformative potential of AI as a collaborative tool. Drawing parallels to historical disruptions—such as the advent of photography challenging traditional art forms—we explore how AI can augment human creativity rather than supplant it. By examining the dynamics of authorship, originality, and accountability, we argue for a redefinition of these concepts in the context of AI-assisted work. Emphasizing the importance of human oversight in guiding AI outputs, we advocate for a framework that recognizes the symbiotic relationship between human intellect and machine efficiency. Such a perspective not only preserves the essence of academic rigor but also embraces the democratization of knowledge production. Ultimately, this essay calls for a balanced approach that mitigates risks while harnessing the innovative capacities of generative AI in academia.

Feeling heard: Can AI really understand human's feeling?
Nuke F. Hatta
AI Magazine 46(3), August 22, 2025. DOI: 10.1002/aaai.70017