
Latest publications from EURO Journal on Decision Processes

Risk attitudes: The Central Tendency Bias
IF 1 | Q2 Mathematics | Pub Date: 2023-11-29 | DOI: 10.1016/j.ejdp.2023.100042
Karl Akbari, Markus Eigruber, Rudolf Vetschera

Unincentivized measurement instruments of risk attitudes suffer from several weaknesses. One is that respondents do not consistently assign themselves to their respective risk preference categories. In particular, they are subject to a central tendency bias and classify themselves as risk-neutral when they are in fact not. We test the robustness of the central tendency bias in lottery-type questions for risk evaluations and offer an explanation of why respondents behave in a way that contradicts plausible utility models. We explore a wide range of alternative influencing factors, including careless responding, stake levels, deviations in expected value, the cognitive abilities of the respondents, self-assessment of risk attitudes, and monetary incentives. We find that careless responding and higher stakes increase the central tendency bias in risk assessment, while cognitive capabilities and extreme risk self-assessments (both positive and negative) decrease the bias. Deviations in expected value and incentives do not affect the bias. Our study further points to the fact that such problems have to be taken care of explicitly when eliciting risk attitudes.

Citations: 0
A multi-objective optimization design to generate surrogate machine learning models in explainable artificial intelligence applications
IF 1 | Q2 Mathematics | Pub Date: 2023-01-01 | DOI: 10.1016/j.ejdp.2023.100040
Wellington Rodrigo Monteiro , Gilberto Reynoso-Meza

Decision-making is crucial to the performance and well-being of any organization. While artificial intelligence algorithms are increasingly used in the industry for decision-making purposes, the adoption of decision-making techniques to develop new artificial intelligence models does not follow the same trend. Complex artificial intelligence algorithm structures such as gradient boosting, ensembles, and neural networks offer higher accuracy at the expense of transparency. In organizations, however, managers and other stakeholders need to understand how an algorithm came to a given decision to properly criticize, learn from, audit, and improve said algorithms. Among the most recent techniques to address this, explainable artificial intelligence (XAI) algorithms offer a previously unforeseen level of interpretability, explainability, and informativeness to different human roles in the industry. XAI algorithms seek to balance the trade-off between interpretability and accuracy by introducing techniques that, for instance, explain the feature relevance in complex algorithms, generate counterfactual examples in “what-if?” analyses, and train surrogate models that are intrinsically explainable. However, while the trade-off between these two objectives is commonly referred to in the literature, only some proposals use multi-objective optimization in XAI applications. Therefore, this document proposes a new multi-objective optimization application to help decision-makers (for instance, data scientists) to generate new surrogate machine learning models based on black-box models. These surrogates are generated by a multi-objective problem that maximizes, at the same time, interpretability and accuracy. The proposed application also has a multi-criteria decision-making step to rank the best surrogates considering these two objectives. 
Results from five classification and regression datasets tested on four black-box models show that the proposed method can create simple surrogates maintaining high levels of accuracy.
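The Pareto step the abstract describes can be sketched in a few lines. Everything below is illustrative, not taken from the paper: the candidate surrogates, their scores, and the use of a generic "interpretability" score standing in for whatever intrinsic measure (e.g. inverse tree depth) a practitioner chooses.

```python
# Minimal sketch of Pareto filtering over candidate surrogates: given models
# scored on accuracy and interpretability (both to be maximized), keep only
# the non-dominated ones. Candidate names and scores are hypothetical.

def pareto_front(candidates):
    """Return the candidates not dominated on (accuracy, interpretability)."""
    front = []
    for c in candidates:
        dominated = any(
            o["accuracy"] >= c["accuracy"]
            and o["interpretability"] >= c["interpretability"]
            and (o["accuracy"] > c["accuracy"]
                 or o["interpretability"] > c["interpretability"])
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front

surrogates = [
    {"name": "tree_depth2", "accuracy": 0.81, "interpretability": 0.90},
    {"name": "tree_depth5", "accuracy": 0.88, "interpretability": 0.60},
    {"name": "tree_depth8", "accuracy": 0.87, "interpretability": 0.40},
    {"name": "linear",      "accuracy": 0.78, "interpretability": 0.95},
]

# tree_depth8 is dominated by tree_depth5 (worse on both objectives),
# so it drops out; the other three form the trade-off front a
# multi-criteria decision step could then rank.
front = pareto_front(surrogates)
print([s["name"] for s in front])
```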

Citations: 0
Fairkit, fairkit, on the wall, who’s the fairest of them all? Supporting fairness-related decision-making
IF 1 | Q2 Mathematics | Pub Date: 2023-01-01 | DOI: 10.1016/j.ejdp.2023.100031
Brittany Johnson , Jesse Bartola , Rico Angell , Sam Witty , Stephen Giguere , Yuriy Brun

Modern software relies heavily on data and machine learning, and affects decisions that shape our world. Unfortunately, recent studies have shown that because of biases in data, software systems frequently inject bias into their decisions, from producing more errors when transcribing women’s than men’s voices to overcharging people of color for financial loans. To address bias in software, data scientists and software engineers need tools that help them understand the trade-offs between model quality and fairness in their specific data domains. Toward that end, we present fairkit-learn, an interactive toolkit for helping engineers reason about and understand fairness. Fairkit-learn supports over 70 definitions of fairness and works with state-of-the-art machine learning tools, using the same interfaces to ease adoption. It can evaluate thousands of models produced by multiple machine learning algorithms, hyperparameters, and data permutations, and compute and visualize a small Pareto-optimal set of models that describe the optimal trade-offs between fairness and quality. Engineers can then iterate, improving their models and evaluating them using fairkit-learn. We evaluate fairkit-learn via a user study with 54 students, showing that students using fairkit-learn produce models that provide a better balance between fairness and quality than students using scikit-learn and IBM AI Fairness 360 toolkits. With fairkit-learn, users can select models that are up to 67% more fair and 10% more accurate than the models they are likely to train with scikit-learn.

Citations: 1
Optimal preventive policies for parallel systems using Markov decision process: application to an offshore power plant
IF 1 | Q2 Mathematics | Pub Date: 2023-01-01 | DOI: 10.1016/j.ejdp.2023.100034
Mario Marcondes Machado , Thiago Lima Silva , Eduardo Camponogara , Edilson Fernandes de Arruda , Virgílio José Martins Ferreira Filho
Citations: 0
Editorial: Special Issue on Decision Processes in Policy Design
IF 1 | Q2 Mathematics | Pub Date: 2023-01-01 | DOI: 10.1016/j.ejdp.2023.100038
Dr. Irene Pluchinotta , Dr. Ine Steenmans
Citations: 0
Reflections on 50 years of MCDM: Issues and future research needs
IF 1 | Q2 Mathematics | Pub Date: 2023-01-01 | DOI: 10.1016/j.ejdp.2023.100030
Simon French

Modern discussions of multiple criteria decision-making extend back about half a century. I reflect on key developments, schools of thought and controversies that have taken place over the period, arguing that perhaps those of us in different schools focus too much on our differences and do not capitalise enough on what we share in common. Moreover, the differences between schools are indications of their respective weaknesses and can drive improvements in each. The discussion points to a number of issues and research needs that the community needs to address.

Citations: 4
Survey on fairness notions and related tensions
IF 1 | Q2 Mathematics | Pub Date: 2023-01-01 | DOI: 10.1016/j.ejdp.2023.100033
Guilherme Alves , Fabien Bernier , Miguel Couceiro , Karima Makhlouf , Catuscia Palamidessi , Sami Zhioua

Automated decision systems are increasingly used to take consequential decisions in problems such as job hiring and loan granting, with the hope of replacing subjective human decisions with objective machine learning (ML) algorithms. However, ML-based decision systems are prone to bias, which results in unfair decisions. Several notions of fairness have been defined in the literature to capture the different subtleties of this ethical and social concept (e.g., statistical parity, equal opportunity, etc.). Fairness requirements to be satisfied while learning models create several types of tensions among the different notions of fairness and with other desirable properties such as privacy and classification accuracy. This paper surveys the commonly used fairness notions and discusses the tensions among them and with privacy and accuracy. Different methods to address the fairness-accuracy trade-off (classified into four approaches, namely, pre-processing, in-processing, post-processing, and hybrid) are reviewed. The survey is consolidated with experimental analysis carried out on fairness benchmark datasets to illustrate the relationship between fairness measures and accuracy in real-world scenarios.
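One of the fairness notions the survey names, statistical parity, is simple enough to sketch directly: the positive-decision rate should be (near) equal across groups. The toy decisions and group labels below are illustrative, not from the survey's benchmarks.

```python
# Statistical parity difference for a binary decision and two groups:
# P(decision = 1 | group = a) - P(decision = 1 | group = b).
# A value of 0 means exact parity; larger magnitudes mean more disparity.

def statistical_parity_difference(decisions, groups):
    """Difference in positive-decision rates between the two groups present."""
    a, b = sorted(set(groups))

    def rate(g):
        members = [d for d, gr in zip(decisions, groups) if gr == g]
        return sum(members) / len(members)

    return rate(a) - rate(b)

decisions = [1, 0, 1, 1, 0, 1, 0, 0]                    # 1 = e.g. loan granted
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]    # protected attribute

# Group a is granted 3/4 = 0.75, group b 1/4 = 0.25, so the difference is 0.5.
print(statistical_parity_difference(decisions, groups))
```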

Citations: 0
Proposing a bi-objective model for the problem of designing a resilient supply chain network of pharmaceutical-health relief items under disruption conditions by considering lateral transshipment
IF 1 | Q2 Mathematics | Pub Date: 2023-01-01 | DOI: 10.1016/j.ejdp.2023.100037
Soheil Javaheri Fazel , Mohammad Rostamkhani , Mehdi Rashidnejad

In this paper, a bi-objective mathematical model is presented for the problem of designing a resilient supply chain network of pharmaceutical-health relief items under disruption conditions, taking into account the possibility of lateral transshipment. The first objective function minimizes the total costs; given the importance of effective and efficient distribution for meeting patient demand in a humanitarian supply chain network, the second objective function minimizes the total time required to deliver relief items to the demand points. Given the inherent uncertainty associated with the occurrence of a crisis and its impact on the supply chain network, a scenario-based robust optimization method is used to address the problem. The model is solved using the epsilon constraint method for small sizes and the NSGA-II meta-heuristic algorithm for larger sizes. In addition, the model is solved with and without lateral transshipment, and the results are compared and analyzed. The findings indicate that lateral transshipment can improve the performance of the supply chain and reduce the level of shortage.
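The epsilon-constraint method mentioned for the small instances can be sketched on a toy problem (all numbers hypothetical, and a discrete candidate set standing in for the paper's mixed-integer model): keep one objective, bound the other by epsilon, and sweep epsilon to trace out the Pareto front.

```python
# Epsilon-constraint sketch for a bi-objective problem: minimize cost
# subject to delivery time <= eps, then sweep eps over the attainable
# time values. The candidate network designs are hypothetical.

designs = [
    {"cost": 100, "time": 9},
    {"cost": 120, "time": 7},
    {"cost": 150, "time": 5},
    {"cost": 200, "time": 4},
]

def epsilon_constraint(designs, eps):
    """Cheapest design whose delivery time is within eps, or None."""
    feasible = [d for d in designs if d["time"] <= eps]
    return min(feasible, key=lambda d: d["cost"]) if feasible else None

# Sweep eps from the loosest time bound to the tightest; each solve
# contributes one (cost, time) trade-off point to the front.
front = []
for eps in sorted({d["time"] for d in designs}, reverse=True):
    best = epsilon_constraint(designs, eps)
    if best is not None and best not in front:
        front.append(best)

print([(d["cost"], d["time"]) for d in front])
```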

Citations: 0
Multi-period fuzzy portfolio optimization model subject to real constraints
IF 1 | Q2 Mathematics | Pub Date: 2023-01-01 | DOI: 10.1016/j.ejdp.2023.100041
Moad El Kharrim

In this paper we examine a multi-period portfolio optimization problem in a fuzzy environment. The proposed optimization model is subject to CVaR constraint, transaction constraint and cardinality constraint. The returns of the assets are assumed to be trapezoidal fuzzy variables and therefore the portfolio return and risk are quantified by the possibilistic mean and semivariance of the fuzzy returns respectively. A dynamic programming method is used to solve the proposed mixed integer optimization model for different cardinality constraints. A numerical study based on real stock market data is provided to test the efficiency of the proposed algorithm. The sensitivity of the optimal portfolio investment strategies is tested for different confidence levels for the CVaR constraint.
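A common way to summarize a trapezoidal fuzzy return is the Carlsson-Fullér possibilistic mean: for a trapezoidal number with support [a, d] and core [b, c], M = (a + 2b + 2c + d) / 6. The paper's exact definitions may differ, and the assets and weights below are purely illustrative.

```python
# Possibilistic mean of a trapezoidal fuzzy return and of a portfolio of
# such returns. Trapezoid parameters satisfy a <= b <= c <= d, where
# [a, d] is the support and [b, c] the core (membership 1).

def possibilistic_mean(a, b, c, d):
    """Carlsson-Fuller possibilistic mean of trapezoidal number (a, b, c, d)."""
    return (a + 2 * b + 2 * c + d) / 6

def portfolio_return(weights, fuzzy_returns):
    """Weighted possibilistic mean return of a portfolio."""
    return sum(w * possibilistic_mean(*r) for w, r in zip(weights, fuzzy_returns))

# Hypothetical trapezoidal return parameters (a, b, c, d) for two assets.
fuzzy_returns = [(-0.02, 0.01, 0.03, 0.06),
                 (-0.05, 0.00, 0.02, 0.08)]
weights = [0.6, 0.4]

print(portfolio_return(weights, fuzzy_returns))
```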

Citations: 0
Fairness and explainability in automatic decision-making systems. A challenge for computer science and law
IF 1 | Q2 Mathematics | Pub Date: 2023-01-01 | DOI: 10.1016/j.ejdp.2023.100036
Th. Kirat , O. Tambou , V. Do , A. Tsoukiàs

The paper offers a contribution to the interdisciplinary constructs of analyzing fairness issues in automatic algorithmic decisions. Section 2 shows that technical choices in supervised learning have social implications that need to be considered. Section 3 proposes a contextual approach to the issue of unintended group discrimination, i.e. decision rules that are facially neutral but generate disproportionate impacts across social groups (e.g., gender, race or ethnicity). The contextualization will focus on the legal systems of the United States on the one hand and Europe on the other. In particular, legislation and case law tend to promote different standards of fairness on both sides of the Atlantic. Section 4 is devoted to the explainability of algorithmic decisions; it will confront and attempt to cross-reference legal concepts (in European and French law) with technical concepts and will highlight the plurality, even polysemy, of European and French legal texts relating to the explicability of algorithmic decisions. The conclusion proposes directions for further research.

Citations: 0
Journal
EURO Journal on Decision Processes