Preference Adaptation: user satisfaction is all you need!
Nianyu Li, Mingyue Zhang, Jialong Li, Eunsuk Kang, K. Tei
2023 IEEE/ACM 18th Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), May 2023
DOI: 10.1109/SEAMS59076.2023.00027
Abstract
Decision making in self-adaptive systems often involves trade-offs between multiple quality attributes, with user preferences indicating the relative importance and priorities among the attributes. However, accurately eliciting such preferences from users is difficult, as they may find it challenging to specify their preferences in a precise, mathematical form. Instead, they may have an easier time expressing displeasure when the system does not exhibit behaviors that satisfy their internal preferences. Furthermore, the user's preferences may change over time depending on the environmental context; thus, the system may need to continuously adapt its behavior to satisfy this change in preference. However, existing self-adaptive frameworks do not explicitly consider dynamic human preference as a source of uncertainty. In this paper, we propose a new adaptation framework specifically designed to support self-adaptation to user preference. Our framework takes a human-on-the-loop approach in which the user is given the ability to intervene, indicating dissatisfaction with, and corrections to, the current behavior of the system; the system then automatically updates the existing preference values so that its new, resulting behavior is consistent with the user's notion of satisfactory behavior. To perform this adaptation, we propose a novel similarity analysis that produces changes to the preference values that are optimal with respect to the system utility. We illustrate our approach in a case study involving a delivery robot system. Our preliminary results indicate that our approach can effectively adapt its behavior to changing human preference.
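To make the core idea concrete, the sketch below illustrates preferences as a weight vector over quality attributes and a weighted-sum utility, with a crude weight-nudging loop standing in for the paper's similarity analysis (which is not reproduced here). All attribute names, option values, and numbers are hypothetical, invented for illustration only.

```python
def utility(weights, attrs):
    """Weighted sum of quality-attribute scores under the current preference."""
    return sum(w * a for w, a in zip(weights, attrs))

def best_option(weights, options):
    """Pick the behavior with the highest utility under current preferences."""
    return max(options, key=lambda name: utility(weights, options[name]))

def adapt_weights(weights, options, preferred, step=0.05, max_iter=1000):
    """Nudge preference weights until the user-preferred behavior wins.

    A simplistic stand-in for the paper's optimal similarity analysis:
    shift weight toward attributes where the preferred option is stronger,
    renormalizing so the weights remain a valid preference vector.
    """
    w = list(weights)
    for _ in range(max_iter):
        if best_option(w, options) == preferred:
            return w
        current = best_option(w, options)
        gap = [p - c for p, c in zip(options[preferred], options[current])]
        w = [max(0.0, wi + step * g) for wi, g in zip(w, gap)]
        total = sum(w) or 1.0
        w = [wi / total for wi in w]
    return w

# Hypothetical delivery-robot scenario:
# attributes = (speed, safety, energy efficiency).
options = {
    "fast_route": (0.9, 0.3, 0.4),
    "safe_route": (0.4, 0.9, 0.6),
}
weights = [0.6, 0.2, 0.2]  # initially elicited preference: speed-heavy
print(best_option(weights, options))            # fast_route

# User intervenes, correcting the system toward the safe route;
# the preference values are updated until that behavior wins.
weights = adapt_weights(weights, options, preferred="safe_route")
print(best_option(weights, options))            # safe_route
```

The update direction here is a naive gradient-style heuristic; the paper's contribution is choosing the preference change optimally with respect to system utility, which this sketch does not attempt.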