Self-learning Agents for Recommerce Markets

Jan Groeneveld, Judith Herrmann, Nikkel Mollenhauer, Leonard Dreeßen, Nick Bessin, Johann Schulze Tast, Alexander Kastius, Johannes Huegle, Rainer Schlosser

Business & Information Systems Engineering, published 2023-11-27, DOI: 10.1007/s12599-023-00841-8
Abstract:
Nowadays, customers as well as retailers look for increased sustainability. Recommerce markets – which offer the opportunity to trade in and resell used products – are constantly growing and help to use resources more efficiently. Managing the additional prices for the trade-in and the resale of used product versions challenges retailers, as substitution and cannibalization effects have to be taken into account. Unknown customer behavior as well as competition with other merchants, both in selling and in buying back resources, further increases the problem’s complexity. Reinforcement learning (RL) algorithms offer the potential to deal with such tasks. However, before being applied in practice, self-learning algorithms need to be tested synthetically to examine whether, and which of them, work in different market scenarios. In this paper, the authors evaluate and compare different state-of-the-art RL algorithms within a recommerce market simulation framework. They find that RL agents outperform rule-based benchmark strategies in duopoly and oligopoly scenarios. Further, the authors investigate the competition between RL agents via self-play and study how performance is affected when more or less information is observable (cf. state components). Using an ablation study, they test the influence of various model parameters and infer managerial insights. Finally, to enable the application of self-learning agents in practice, the authors show how to calibrate synthetic test environments from observable data so that they can be used for effective pre-training.
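To make the setting concrete, the following is a minimal, illustrative sketch (Python) of a recommerce duopoly in the spirit of the abstract; it is not the authors' simulation framework. A tabular Q-learning agent sets a sales price and a trade-in (buy-back) price against a rule-based competitor that undercuts the agent's last price. All price levels, the demand-splitting rule, the return behavior, and the cost parameters are assumptions made up purely for illustration.

import random
from collections import defaultdict

PRICES = [4, 6, 8, 10]      # assumed discrete sales-price levels
TRADE_IN = [1, 2, 3]        # assumed discrete trade-in (buy-back) prices
MARKET_SIZE = 100           # potential customers per period (assumption)
PROD_COST = 3               # cost of producing a new item (assumption)

def rule_based_competitor(agent_last_price):
    # Benchmark strategy: undercut the agent's last price, but never sell below cost.
    return max(PROD_COST + 1, agent_last_price - 2)

def demand_split(p_agent, p_comp):
    # Cheaper seller attracts the larger share of the market (substitution effect).
    w_a, w_c = 1.0 / p_agent, 1.0 / p_comp
    return int(MARKET_SIZE * w_a / (w_a + w_c))

def profit(agent_price, trade_in_price, comp_price):
    sold = demand_split(agent_price, comp_price)
    returned = int(sold * trade_in_price / max(TRADE_IN))   # higher buy-back price -> more returns
    resold_used = returned // 2                             # half of the returns are refurbished and resold
    return (sold * (agent_price - PROD_COST)
            - returned * trade_in_price
            + resold_used * 0.6 * agent_price)               # used items sell at a 40% discount

# Tabular Q-learning: the observed state is the competitor's current price,
# the action is the pair (own sales price, own trade-in price).
Q = defaultdict(float)
ACTIONS = [(p, t) for p in PRICES for t in TRADE_IN]
alpha, gamma, eps = 0.1, 0.9, 0.1

comp_price = 8
for _ in range(20000):
    state = comp_price
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    reward = profit(action[0], action[1], comp_price)
    comp_price = rule_based_competitor(action[0])            # the competitor reacts to the agent's price
    best_next = max(Q[(comp_price, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

for s in sorted({rule_based_competitor(p) for p in PRICES}):
    best = max(ACTIONS, key=lambda a: Q[(s, a)])
    print(f"competitor price {s}: agent sets sales price {best[0]}, trade-in price {best[1]}")

In this toy version, the observability question from the abstract corresponds to which components enter the state (here only the competitor's sales price); adding or removing components such as the competitor's trade-in price or the stock of used items and comparing the resulting profits against the rule-based benchmark mirrors the ablation idea at a very small scale.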
Journal description:
BISE (Business & Information Systems Engineering) is an international scholarly journal that undergoes double-blind peer review. It publishes scientific research on the effective and efficient design and utilization of information systems by individuals, groups, enterprises, and society to enhance social welfare. Information systems are viewed as socio-technical systems involving tasks, people, and technology. Research in the journal addresses issues in the analysis, design, implementation, and management of information systems.