EvoAl^2048

Bernhard J. Berger (University of Rostock, Software Engineering Chair, Rostock, Germany; Hamburg University of Technology, Institute of Embedded Systems, Germany), Christina Plump (DFKI - Cyber-Physical Systems, Bremen, Germany), Rolf Drechsler (University of Bremen, Departments of Mathematics and Computer Science; DFKI - Cyber-Physical Systems, Bremen, Germany)

arXiv - CS - Neural and Evolutionary Computing. Published 2024-08-15. DOI: arxiv-2408.16780 (https://doi.org/arxiv-2408.16780)
Abstract
As AI solutions enter safety-critical products, the explainability and
interpretability of solutions generated by AI products become increasingly
important. In the long term, such explanations are the key to gaining users'
acceptance of AI-based systems' decisions. We report on applying
model-driven optimisation to search for an interpretable and explainable
policy that solves the game 2048. This paper describes a solution to the
GECCO'24 Interpretable Control Competition using the open-source software
EvoAl. We aimed to develop an approach for creating interpretable policies that
are easy to adapt to new ideas.
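The abstract does not detail the policy representation, but the general idea of an interpretable 2048 policy can be illustrated with a small sketch. The following is a hypothetical example, not the authors' EvoAl implementation: the policy scores each candidate move with a weighted sum of human-readable board features (here, the number of empty cells and whether the largest tile sits in a corner), and an evolutionary search would tune the feature weights. All names and weight values below are illustrative assumptions.

```python
def slide_left(row):
    """Slide one row left, merging equal adjacent tiles once (2048 rule)."""
    tiles = [v for v in row if v]
    merged, i = [], 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            merged.append(tiles[i] * 2)  # merge a pair into one doubled tile
            i += 2
        else:
            merged.append(tiles[i])
            i += 1
    return merged + [0] * (len(row) - len(merged))

def apply_move(board, direction):
    """Apply a move by mapping every direction onto a leftward slide."""
    t = lambda b: [list(r) for r in zip(*b)]   # transpose rows/columns
    m = lambda b: [row[::-1] for row in b]     # mirror each row
    if direction == "left":
        return [slide_left(row) for row in board]
    if direction == "right":
        return m([slide_left(row) for row in m(board)])
    if direction == "up":
        return t([slide_left(row) for row in t(board)])
    if direction == "down":
        return t(m([slide_left(row) for row in m(t(board))]))
    raise ValueError(direction)

# Human-readable features; an evolutionary search would tune these weights.
WEIGHTS = {"empty_cells": 1.0, "max_in_corner": 2.0}

def score(board):
    """Interpretable board evaluation: a weighted sum of named features."""
    flat = [v for row in board for v in row]
    empty = flat.count(0)
    corner = 1.0 if board[0][0] == max(flat) else 0.0
    return WEIGHTS["empty_cells"] * empty + WEIGHTS["max_in_corner"] * corner

def best_move(board):
    """Greedy policy: pick the legal move whose successor scores highest."""
    candidates = {d: apply_move(board, d)
                  for d in ("left", "right", "up", "down")}
    legal = {d: b for d, b in candidates.items() if b != board}
    return max(legal, key=lambda d: score(legal[d]))
```

Because the evaluation is a short list of named features and weights, a human can read off why a move was chosen, which is the kind of interpretability the competition asks for.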