Explainable artificial intelligence and interpretable machine learning for agricultural data analysis

Artificial Intelligence in Agriculture · IF 8.2 · Q1 (Agriculture, Multidisciplinary) · Pub Date: 2022-01-01 · DOI: 10.1016/j.aiia.2022.11.003
Masahiro Ryo
{"title":"农业数据分析中可解释的人工智能和可解释的机器学习","authors":"Masahiro Ryo","doi":"10.1016/j.aiia.2022.11.003","DOIUrl":null,"url":null,"abstract":"<div><p>Artificial intelligence and machine learning have been increasingly applied for prediction in agricultural science. However, many models are typically black boxes, meaning we cannot explain what the models learned from the data and the reasons behind predictions. To address this issue, I introduce an emerging subdomain of artificial intelligence, explainable artificial intelligence (XAI), and associated toolkits, interpretable machine learning. This study demonstrates the usefulness of several methods by applying them to an openly available dataset. The dataset includes the no-tillage effect on crop yield relative to conventional tillage and soil, climate, and management variables. Data analysis discovered that no-tillage management can increase maize crop yield where yield in conventional tillage is &lt;5000 kg/ha and the maximum temperature is higher than 32°. These methods are useful to answer (i) which variables are important for prediction in regression/classification, (ii) which variable interactions are important for prediction, (iii) how important variables and their interactions are associated with the response variable, (iv) what are the reasons underlying a predicted value for a certain instance, and (v) whether different machine learning algorithms offer the same answer to these questions. I argue that the goodness of model fit is overly evaluated with model performance measures in the current practice, while these questions are unanswered. XAI and interpretable machine learning can enhance trust and explainability in AI.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"6 ","pages":"Pages 257-265"},"PeriodicalIF":8.2000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721722000216/pdfft?md5=78c1f19fb82554b3033a90b75d5e2da9&pid=1-s2.0-S2589721722000216-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Explainable artificial intelligence and interpretable machine learning for agricultural data analysis\",\"authors\":\"Masahiro Ryo\",\"doi\":\"10.1016/j.aiia.2022.11.003\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Artificial intelligence and machine learning have been increasingly applied for prediction in agricultural science. However, many models are typically black boxes, meaning we cannot explain what the models learned from the data and the reasons behind predictions. To address this issue, I introduce an emerging subdomain of artificial intelligence, explainable artificial intelligence (XAI), and associated toolkits, interpretable machine learning. This study demonstrates the usefulness of several methods by applying them to an openly available dataset. The dataset includes the no-tillage effect on crop yield relative to conventional tillage and soil, climate, and management variables. Data analysis discovered that no-tillage management can increase maize crop yield where yield in conventional tillage is &lt;5000 kg/ha and the maximum temperature is higher than 32°. 
These methods are useful to answer (i) which variables are important for prediction in regression/classification, (ii) which variable interactions are important for prediction, (iii) how important variables and their interactions are associated with the response variable, (iv) what are the reasons underlying a predicted value for a certain instance, and (v) whether different machine learning algorithms offer the same answer to these questions. I argue that the goodness of model fit is overly evaluated with model performance measures in the current practice, while these questions are unanswered. XAI and interpretable machine learning can enhance trust and explainability in AI.</p></div>\",\"PeriodicalId\":52814,\"journal\":{\"name\":\"Artificial Intelligence in Agriculture\",\"volume\":\"6 \",\"pages\":\"Pages 257-265\"},\"PeriodicalIF\":8.2000,\"publicationDate\":\"2022-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2589721722000216/pdfft?md5=78c1f19fb82554b3033a90b75d5e2da9&pid=1-s2.0-S2589721722000216-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Artificial Intelligence in Agriculture\",\"FirstCategoryId\":\"1087\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2589721722000216\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AGRICULTURE, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence in Agriculture","FirstCategoryId":"1087","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2589721722000216","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AGRICULTURE, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract


Artificial intelligence and machine learning are increasingly applied for prediction in agricultural science. However, many models are black boxes: we cannot explain what they learned from the data or the reasons behind their predictions. To address this issue, I introduce an emerging subdomain of artificial intelligence, explainable artificial intelligence (XAI), and its associated toolkit, interpretable machine learning. This study demonstrates the usefulness of several such methods by applying them to an openly available dataset that records the effect of no-tillage on crop yield relative to conventional tillage, together with soil, climate, and management variables. The analysis found that no-tillage management can increase maize yield where yield under conventional tillage is below 5000 kg/ha and the maximum temperature exceeds 32 °C. These methods help answer (i) which variables are important for prediction in regression or classification, (ii) which variable interactions are important for prediction, (iii) how important variables and their interactions are associated with the response variable, (iv) what reasons underlie the predicted value for a given instance, and (v) whether different machine learning algorithms give the same answers to these questions. I argue that current practice over-relies on model performance measures to judge goodness of fit while leaving these questions unanswered. XAI and interpretable machine learning can enhance trust in and the explainability of AI.
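To make questions (i) to (v) concrete, the following is a minimal sketch of how such an analysis could look with standard tools. It is not the paper's actual code: the file name tillage_dataset.csv, the column names, and the random-forest learner are illustrative assumptions; the sketch relies on scikit-learn's permutation_importance and PartialDependenceDisplay utilities and the shap package, which are common (but not the only) choices for these tasks.

```python
# Illustrative sketch only: dataset file, column names, and learner are assumed,
# not taken from the paper.
import matplotlib.pyplot as plt
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay, permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("tillage_dataset.csv")  # hypothetical file
X = df[["yield_conventional", "tmax", "precipitation", "soil_ph"]]  # assumed predictors
y = df["yield_ratio_nt_ct"]  # assumed response: no-tillage yield relative to conventional

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_train, y_train)

# (i) Which variables matter? Permutation importance on held-out data.
imp = permutation_importance(model, X_test, y_test, n_repeats=30, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")

# (ii)-(iii) How are important variables, and one candidate interaction,
# associated with the response? One- and two-way partial dependence plots.
PartialDependenceDisplay.from_estimator(
    model, X_test,
    features=["yield_conventional", "tmax", ("yield_conventional", "tmax")],
)
plt.show()

# (iv) Why did the model predict a particular value for one instance?
# Per-feature SHAP contributions for the first test-set row.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # shape: (n_samples, n_features)
print(dict(zip(X.columns, shap_values[0].round(3))))

# (v) Refit with a different learner (e.g., GradientBoostingRegressor) and check
# whether the importance ranking and partial dependence shapes agree.
```

Under these assumptions, the workflow mirrors the paper's argument: the permutation ranking answers what matters, the partial dependence plots show how it matters (including the two-way interaction behind the yield-below-5000 kg/ha and above-32 °C finding), and SHAP values explain individual predictions rather than only reporting an overall fit score.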

Source journal: Artificial Intelligence in Agriculture (Engineering, miscellaneous)
CiteScore: 21.60
Self-citation rate: 0.00%
Articles published: 18
Review time: 12 weeks