{"title":"Optimal Explanation Generation using Attention Distribution Model","authors":"Akhila Bairy, M. Fränzle","doi":"10.54941/ahfe1002928","DOIUrl":null,"url":null,"abstract":"With highly automated and Autonomous Vehicles (AVs) being one of the most\n prominent emerging technologies in the automotive industry, efforts to achieve SAE Level\n 3+ vehicles have skyrocketed in recent years. As new technologies emerge on a daily\n basis, these systems are becoming increasingly complex. To help people understand - and\n also accept - these new technologies, there is a need for explanation. There are three\n essential dimensions to designing explanations, namely content, frequency, and timing.\n Our goal is to develop an algorithm that optimises explanation in AVs. Most of the\n existing research focuses on the content of an explanation, whereas the fine-granularity\n of the frequency and timing of an explanation is relatively unexplored. Previous studies\n concerning \"when to explain\" have tended to make broad distinctions between explaining\n before, during or after an action is performed. For AVs, studies have shown that\n passengers prefer to receive an explanation before an autonomous action takes place.\n However, it seems likely that the acclimatisation that occurs through prolonged exposure\n to and use of a particular AV will reduce the need for explanation. As comprehension of\n explanations is workload-intensive, it is necessary to optimise both the frequency, i.e.\n skipping explanations when they are not helpful to reduce workload, and the precise\n point in time when an explanation is given, i.e. giving an explanation when it provides\n the maximum workload reduction. Extra mental workload for passengers can be caused by\n both giving and omitting an explanation. Every explanation that is presented requires\n cognitive processing in order to be understood, even if its content is considered to be\n redundant or if it will not be remembered by the addressee. On the other hand, skipping\n the explanation can cause the passenger to actively scan the environment for potential\n cues themselves, if necessary. Such an attention strategy would also impose a\n significant cognitive load on the passenger. In our work, to predict the mental workload\n of the passenger, we use the state-of-the-art attention model called SEEV (Salience,\n Effort, Expectancy, and Value). The SEEV model is dynamically used for forecasting the\n likelihood of the direction of attention. Our work aims to generate an optimally timed\n strategy for presenting an explanation. 
Using the SEEV model we build a probabilistic\n reactive game, i.e., 1.5-player game or Markov Decision Process, and we use reactive\n synthesis to generate an optimal reactive strategy for presenting an explanation that\n minimises workload.","PeriodicalId":383834,"journal":{"name":"Human Interaction and Emerging Technologies (IHIET-AI 2023): Artificial\n Intelligence and Future Applications","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Human Interaction and Emerging Technologies (IHIET-AI 2023): Artificial\n Intelligence and Future Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.54941/ahfe1002928","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
With highly automated and Autonomous Vehicles (AVs) being one of the most
prominent emerging technologies in the automotive industry, efforts to achieve SAE Level
3+ vehicles have skyrocketed in recent years. As new technologies emerge on a daily
basis, these systems are becoming increasingly complex. To help people understand, and
also accept, these new technologies, explanations are needed. There are three
essential dimensions to designing explanations, namely content, frequency, and timing.
Our goal is to develop an algorithm that optimises explanations in AVs. Most of the
existing research focuses on the content of an explanation, whereas the fine-grained
choice of its frequency and timing remains relatively unexplored. Previous studies
concerning "when to explain" have tended to make broad distinctions between explaining
before, during or after an action is performed. For AVs, studies have shown that
passengers prefer to receive an explanation before an autonomous action takes place.
However, it seems likely that the acclimatisation that occurs through prolonged exposure
to and use of a particular AV will reduce the need for explanation. As comprehension of
explanations is workload-intensive, it is necessary to optimise both the frequency,
i.e. skipping explanations that are not helpful in order to reduce workload, and the
precise point in time at which an explanation is given, i.e. giving it when it provides
the maximum workload reduction. Extra mental workload for passengers can be caused by
both giving and omitting an explanation. Every explanation that is presented requires
cognitive processing in order to be understood, even if its content is considered to be
redundant or if it will not be remembered by the addressee. On the other hand, omitting
the explanation can force the passenger to actively scan the environment for potential
cues themselves when needed. Such an attention strategy likewise imposes a significant
cognitive load on the passenger. In our work, to predict the mental workload of the
passenger, we use the state-of-the-art attention model SEEV (Salience, Effort,
Expectancy, and Value). The SEEV model is used dynamically to forecast the likelihood
of the direction of the passenger's attention.
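For reference, a minimal sketch of the additive formulation of SEEV, as it is commonly
stated in the literature, is given below; the weights s, ef, ex and v are free
parameters, and the exact instantiation used in our approach is a modelling choice.

```latex
% Additive form of the SEEV model as commonly stated in the literature.
% For an area of interest i: S_i salience, EF_i effort of redirecting attention,
% EX_i expectancy (event rate), V_i value (task relevance); s, ef, ex, v are
% non-negative weighting coefficients that must be calibrated per scenario.
P(A_i) \;=\; s \cdot S_i \;-\; ef \cdot EF_i \;+\; ex \cdot EX_i \;+\; v \cdot V_i
```

Here P(A_i) is the (unnormalised) likelihood that attention is directed to area of
interest i.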
Our work aims to generate an optimally timed strategy for presenting an explanation.
Using the SEEV model, we build a probabilistic reactive game, i.e. a 1.5-player game or
Markov Decision Process (MDP), and we use reactive synthesis to generate an optimal
reactive strategy for presenting explanations that minimises workload.
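As an illustration of the kind of optimisation involved, the following minimal,
self-contained sketch solves a toy MDP in which the two actions are explaining or
skipping, the immediate costs stand in for the workload of processing an explanation
versus scanning the environment oneself, and the workload-minimising strategy is
extracted by value iteration. All states, transition probabilities and cost values are
invented for illustration and are not the model of the paper, which instead applies
reactive synthesis to a SEEV-informed 1.5-player game.

```python
# Toy MDP sketch of the explain/skip trade-off described above.
# All states, probabilities and costs are invented for illustration only.

# A state pairs the upcoming driving situation with how acclimatised the
# passenger already is to this particular AV.
STATES = [(sit, fam) for sit in ("routine", "unusual")
          for fam in ("novice", "acclimatised")]
ACTIONS = ("explain", "skip")
GAMMA = 0.9  # discount factor for future workload


def cost(state, action):
    """Immediate mental-workload cost of taking `action` in `state` (assumed values)."""
    situation, familiarity = state
    if action == "explain":
        # Every explanation needs cognitive processing, and redundant ones
        # (for an acclimatised passenger) cost more than they help.
        return 1.0 if familiarity == "novice" else 2.0
    # Skipping: the passenger may have to scan the environment for cues themselves.
    if situation == "unusual":
        return 4.0 if familiarity == "novice" else 2.5
    return 0.5  # routine manoeuvre, little self-scanning needed


def transition(state, action):
    """Next-state distribution: explanations speed up acclimatisation (assumed)."""
    _, familiarity = state
    p_accl = 0.6 if action == "explain" else 0.2
    p_unusual = 0.3  # chance that the next manoeuvre is unusual
    dist = {}
    for fam, p_f in (("acclimatised", p_accl), (familiarity, 1.0 - p_accl)):
        for sit, p_s in (("unusual", p_unusual), ("routine", 1.0 - p_unusual)):
            dist[(sit, fam)] = dist.get((sit, fam), 0.0) + p_f * p_s
    return dist


def q_value(state, action, values):
    """Expected discounted workload of `action` in `state` under `values`."""
    return cost(state, action) + GAMMA * sum(
        p * values[nxt] for nxt, p in transition(state, action).items())


def value_iteration(n_iter=200):
    """Compute the workload-minimising strategy by plain value iteration."""
    values = {s: 0.0 for s in STATES}
    for _ in range(n_iter):
        values = {s: min(q_value(s, a, values) for a in ACTIONS) for s in STATES}
    policy = {s: min(ACTIONS, key=lambda a: q_value(s, a, values)) for s in STATES}
    return values, policy


if __name__ == "__main__":
    values, policy = value_iteration()
    for state in STATES:
        print(state, "->", policy[state],
              f"(expected discounted workload {values[state]:.2f})")
```

Running the sketch prints, for each (situation, familiarity) state, whether explaining
or skipping minimises the expected discounted workload under the assumed numbers.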