Optimality conditions for preinvex functions using symmetric derivative
Sachin Rastogi, Akhlad Iqbal, Sanjeev Rajan
Operations Research and Decisions, 2022. doi: https://doi.org/10.37190/ord220406

In this paper, the authors study the concept of a symmetric derivative for preinvex functions, as a generalization of convex functions and ordinary derivatives. Using symmetric differentiation, they discuss an important characterization of preinvex functions and define symmetrically pseudo-invex and symmetrically quasi-invex functions. They also generalize the first-derivative theorem to symmetrically differentiable functions, establish some relationships between symmetrically pseudo-invex and symmetrically quasi-invex functions, and discuss Fritz John-type optimality conditions for preinvex, symmetrically pseudo-invex and symmetrically quasi-invex functions using symmetric differentiability.
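The symmetric derivative the abstract refers to replaces the one-sided difference quotient with a two-sided one; a minimal numeric sketch (the quotient definition is standard, the step size `h` here is arbitrary):

```python
def sym_diff_quotient(f, x, h=1e-6):
    """Symmetric difference quotient (f(x+h) - f(x-h)) / (2h).
    Its limit as h -> 0 is the symmetric derivative, which can exist
    even where the ordinary derivative does not."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# |x| has no ordinary derivative at 0, but its symmetric derivative there is 0:
print(sym_diff_quotient(abs, 0.0))
```

For smooth functions the symmetric quotient agrees with the ordinary derivative, which is why the paper can generalize first-derivative results to the symmetrically differentiable case.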
Neutrosophic compromise programming approach for multiobjective nonlinear transportation problem with supply and demand following the exponential distribution
A. Adhami, M. Faizan, A. M
Operations Research and Decisions, 2022. doi: https://doi.org/10.37190/ord220301

Decision-making is a tedious and complex process. In the present competitive scenario, any incorrect decision may severely harm an organization. The parameters involved in the decision-making process should therefore be examined carefully, as they may not always be deterministic. In the present study, a multiobjective nonlinear transportation problem is formulated in which the parameters of the objective functions are assumed to be fuzzy, while both supply and demand are stochastic and assumed to follow the exponential distribution. After converting the problem into an equivalent deterministic form, the multiobjective problem is solved using a neutrosophic compromise programming approach. A comparative study indicates that the proposed approach provides the best compromise solution, significantly better than the one obtained using the fuzzy programming approach.
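The paper's exact deterministic transformation is not reproduced in the abstract; a standard way to convert an exponential chance constraint on the supply side (an assumption here) uses the closed-form tail of the exponential distribution, P(S >= t) = exp(-rate * t):

```python
import math

def deterministic_supply_cap(rate, alpha):
    """Largest total shipment t out of a source with random supply
    S ~ Exp(rate) such that P(S >= t) >= alpha:
        exp(-rate * t) >= alpha  =>  t <= -ln(alpha) / rate.
    Illustrative sketch only; the paper treats both supply and demand."""
    if not 0.0 < alpha < 1.0:
        raise ValueError("alpha must lie in (0, 1)")
    return -math.log(alpha) / rate

# with mean supply 10 (rate 0.1) and 95% reliability, ship at most ~0.513
cap = deterministic_supply_cap(rate=0.1, alpha=0.95)
```

Raising the reliability level `alpha` tightens the deterministic cap, which is the usual trade-off in chance-constrained transportation models.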
Asset liability management for the Bank of Uganda defined benefits scheme by stochastic programming
Herbert Mukalazi, T. Larsson, Juma Kasozi, Fred Mayambala
Operations Research and Decisions, 2022. doi: https://doi.org/10.37190/ord220207

We develop a model for the asset liability management of pension funds, which is solved by stochastic programming techniques. Using data provided by the Bank of Uganda Defined Benefits Scheme, which is closed to new members, we obtain optimal investment policies. Scenario trees, randomly sampled using the mean and covariance structure of the return distribution, are used to generate the coefficients of the stochastic program. Liabilities are modelled by the remaining years of life expectancy and the guaranteed period for the monthly pension. We obtain the funding situation of the scheme at each stage, and the terminal cash injection by the sponsor required to meet all future benefit payments in the absence of contributing members.
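Sampling scenario-tree branches from a mean/covariance structure can be sketched as below; the multivariate normal is an assumption for illustration (the abstract only states that the mean and covariance of returns are used), and the asset figures are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_return_scenarios(mean, cov, branches):
    """Draw the child nodes of one stage of a scenario tree from the
    mean and covariance structure of asset returns.  The multivariate
    normal choice is an assumption for this sketch."""
    return rng.multivariate_normal(mean, cov, size=branches)

# hypothetical two-asset example: bonds and equities
mean = np.array([0.05, 0.08])
cov = np.array([[0.01, 0.002],
                [0.002, 0.04]])
scenarios = sample_return_scenarios(mean, cov, branches=1000)
```

In a full asset liability model, each sampled branch becomes the return coefficients of one node of the multistage stochastic program, and the tree is rebuilt recursively stage by stage.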
On the characterization of X-Lindley distribution by truncated moments. Properties and application
Farouk Metiri, Halim Zeghdoudi, Abdelali Ezzebsa
Operations Research and Decisions, 2022. doi: https://doi.org/10.37190/ord220105

This paper presents a characterisation of the X-Lindley distribution using the relation between the truncated moment and the failure rate/reverse failure rate function. An application of this distribution to real data on the survival times (in days) of 92 Algerian individuals infected with coronavirus is given.
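The X-Lindley relation itself is not reproduced in the abstract; to illustrate the general link between truncated moments and the failure rate, the sketch below checks it numerically for the exponential distribution, where the failure rate is constant and memorylessness gives E[X | X > t] = t + 1/lam exactly:

```python
import math

def trunc_mean_exp(lam, t, n=20000):
    """Numerically compute E[X | X > t] for X ~ Exp(lam) with the
    trapezoidal rule, truncating the upper tail where it is negligible.
    For the exponential law this should equal t + 1/lam."""
    upper = t + 50.0 / lam                      # tail beyond here is ~0
    h = (upper - t) / n
    xs = [t + i * h for i in range(n + 1)]
    f = [lam * math.exp(-lam * x) for x in xs]  # density values
    # trapezoid estimates of E[X 1{X>t}] and P(X > t)
    num = h * (sum(x * fx for x, fx in zip(xs, f)) - 0.5 * (xs[0] * f[0] + xs[-1] * f[-1]))
    den = h * (sum(f) - 0.5 * (f[0] + f[-1]))
    return num / den
```

Characterisation results of the kind the paper proves run this logic in reverse: if the truncated moment is a prescribed function of the failure rate, the distribution is pinned down uniquely.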
An inventory model to study the effect of the probabilistic rate of carbon emission on the profit earned by a supplier
Nabajyoti Bhattacharjee, Nabendu Sen
Operations Research and Decisions, 2021. doi: https://doi.org/10.37190/ord210401

The inventory of suppliers providing raw materials to industries producing green products faces two challenging problems: raw materials are usually deteriorating items, and they emit carbon-based gases during deterioration. Moreover, each item has a unique carbon emission rate and composition, called the pattern of carbon emission, which is a function of the rate of carbon emission. In the present research, we develop a stochastic inventory model with demand dependent on price, stock and the pattern of carbon emission, to maximise the profit of a supplier selling a single product. The rate of deterioration is a function of the rate of carbon emission and the effective investment in preservation. The cost of carbon emission is a function of green investment and the pattern of carbon emission. Holding and purchase costs are constant. We consider three patterns of carbon emission, each defined by a negative exponential function. The rate of carbon emission is assumed to be probabilistic, following one of three distributions: uniform, triangular or beta. Numerical validation is provided together with a sensitivity analysis of the parameters for managerial insights, and the effect of carbon emission on the profit earned is analysed and interpreted. Particle swarm optimisation (PSO) and a genetic algorithm (GA) are applied to solve the model, and statistical and sensitivity analyses of the algorithms' parameters are provided along with a graphical representation of convergence.
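The paper's profit function and decision variables are not given in the abstract; the sketch below is a minimal one-dimensional PSO applied to a hypothetical concave profit in the selling price, purely to show the mechanics of the solver class used:

```python
import random

random.seed(1)

def pso_maximize(profit, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal 1-D particle swarm optimiser.  Illustrative sketch only;
    the paper's actual model has several decision variables."""
    xs = [random.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                       # each particle's best position
    pval = [profit(x) for x in xs]
    g = max(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]     # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vs[i] = w * vs[i] + c1 * r1 * (pbest[i] - xs[i]) + c2 * r2 * (gbest - xs[i])
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))   # clamp to the box
            v = profit(xs[i])
            if v > pval[i]:
                pbest[i], pval[i] = xs[i], v
                if v > gval:
                    gbest, gval = xs[i], v
    return gbest, gval

# hypothetical concave profit in the price p, maximised at p = 10
best_p, best_profit = pso_maximize(lambda p: -(p - 10.0) ** 2 + 50.0, 0.0, 20.0)
```

A GA would attack the same objective with selection, crossover and mutation instead of velocity updates, which is why the paper can compare the two metaheuristics on identical instances.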
Discussion on the transient behavior of single server Markovian multiple variant vacation queues
M. Vadivukarasi, K. Kalidass
Operations Research and Decisions, 2021. doi: https://doi.org/10.37190/ORD210107

In this paper, we consider an M/M/1 queue where beneficiaries arrive singly. Once the number of beneficiaries in the system reaches zero, the server immediately takes a vacation. If the server finds no beneficiaries in the system on return from a vacation, it is allowed to take another vacation, and this process continues until the server has exhaustively taken all J vacations. The closed-form transient solution of the model and some important time-dependent performance measures are obtained. Further, the steady-state system size distribution is derived from the time-dependent solution. A stochastic decomposition structure of the waiting time distribution and an expression for the additional waiting time due to server vacations are studied. Numerical assessments are presented.
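The stochastic decomposition mentioned above can be made concrete in the limiting case of unlimited multiple exponential vacations (a classical M/G/1-with-vacations result, not the J-limited variant the paper analyses): the mean wait is the ordinary M/M/1 queueing delay plus the mean residual vacation, which for Exp(phi) vacations equals 1/phi.

```python
def mean_wait_multiple_vacations(lam, mu, phi):
    """Mean waiting time in an M/M/1 queue with unlimited multiple
    exponential vacations of rate phi.  Decomposition: the M/M/1 delay
    lam / (mu * (mu - lam)) plus the mean residual vacation
    E[V^2] / (2 E[V]) = 1/phi.  Sketch for the limiting case only;
    the J-limited model in the paper differs."""
    assert lam < mu, "stability requires lam < mu"
    wq_mm1 = lam / (mu * (mu - lam))   # M/M/1 waiting time in queue
    return wq_mm1 + 1.0 / phi
```

The paper's expression for the additional waiting time plays the role of the 1/phi term here, but accounts for the vacation count being capped at J.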
Pattanaik's axioms and existence of preferred with probability at least half winners
S. Lahiri
Operations Research and Decisions, 2021. doi: https://doi.org/10.37190/ord210205

In this paper, we show that three conditions due to Pattanaik, when satisfied by a given profile of state-dependent preferences (linear orders) on a given, fixed set of alternatives together with a probability distribution over the states of nature, are individually sufficient for the non-emptiness of the set of alternatives each of which is preferred to every other alternative with probability at least half. Prior to this, we show that since each axiom individually implies Sen-coherence, each axiom, together with the asymmetry of the 'preferred with probability at least half' relation, implies the transitivity of that relation, as a consequence of a result obtained earlier. All the sufficient conditions discussed here are required to hold at least for all otherwise relevant events of positive probability. This observation also applies to a sufficient condition for the non-emptiness of the same set, called generalised Sen-coherence, introduced and discussed in earlier research.
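The winner notion in the abstract is directly computable for a finite profile; a small sketch (the example profile is made up):

```python
def pw_half_winners(states, alternatives):
    """states: list of (probability, ranking) pairs, where ranking lists
    the alternatives from best to worst in that state of nature.
    x is 'preferred with probability at least half' to y if the total
    probability of states ranking x above y is >= 1/2; winners are the
    alternatives preferred in this sense to every other alternative."""
    def pref_prob(x, y):
        return sum(p for p, r in states if r.index(x) < r.index(y))
    return [x for x in alternatives
            if all(pref_prob(x, y) >= 0.5 for y in alternatives if y != x)]

# hypothetical 3-state profile: 'a' beats b with prob 0.7 and c with prob 0.8
states = [(0.5, ['a', 'b', 'c']),
          (0.3, ['b', 'a', 'c']),
          (0.2, ['c', 'a', 'b'])]
winners = pw_half_winners(states, ['a', 'b', 'c'])
```

Pattanaik's conditions in the paper are restrictions on the profile that guarantee this winner set is non-empty; without them the pairwise majority-like relation can cycle and leave no winner.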
Contagion effects on capital and forex markets around GFC and COVID-19 crises: A comparative study
Krzysztof Brania, H. Gurgul
Operations Research and Decisions, 2021. doi: https://doi.org/10.37190/ord210203

This paper studies the spread of crises across the financial and capital markets of different countries. The standard method of contagion detection is based on the evolution of the correlation matrix, for example of exchange rates or returns, usually after removing univariate dynamics with a GARCH model. It is a common observation that crises occurring in one financial market are usually transmitted to other financial markets and countries simultaneously, and that they are visible in different financial variables, such as returns and volatility, which determine the probability distribution. Changes in distributions can be detected through changes in the descriptive statistics of, e.g., returns: expected value, variance, skewness, kurtosis and other statistics that determine the shape of the distribution function of returns. These descriptive statistics display dynamics over time; moreover, they can interact within a given financial or capital market and among markets. In this article, we use an FX currency cluster comprising some of the major currencies and the currencies of the Visegrád group, namely EUR/USD, EUR/CHF, USD/CHF, EUR/HUF, EUR/PLN, EUR/CZK, USD/CZK, USD/HUF, USD/PLN, CHF/PLN, CHF/CZK, CHF/HUF, PLN/CZK and PLN/HUF. In analysing capital markets in terms of equity indexes, we chose developed markets: DAX 30, AEX 25, CAC 40, EURO STOXX 50, FTSE 100, ASX 200, SPX 500, NASDAQ 100 and Russell 2000. Our aim is to examine the changes in descriptive statistics and correlation matrices of exchange rates, returns and volatility on the basis of the data listed above around two crises: the global financial crisis (GFC) of 2007-2009 and the COVID-19 crisis.
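The correlation-evolution step of contagion detection can be sketched with a rolling window (the GARCH pre-filtering mentioned above is omitted here, and the synthetic series are made up):

```python
import numpy as np

def rolling_correlation(x, y, window):
    """Rolling-window Pearson correlation of two return series.  A jump
    in cross-market correlation around a crisis date is the usual
    contagion signal; in practice returns are first GARCH-filtered."""
    out = []
    for i in range(window, len(x) + 1):
        out.append(np.corrcoef(x[i - window:i], y[i - window:i])[0, 1])
    return np.array(out)

# two synthetic return series sharing a common factor (true corr = 0.5)
rng = np.random.default_rng(42)
common = rng.standard_normal(500)
r1 = common + rng.standard_normal(500)
r2 = common + rng.standard_normal(500)
corr = rolling_correlation(r1, r2, window=60)
```

Comparing the distribution of such rolling correlations before and during a crisis window is one simple way to quantify the contagion effect the paper studies across its FX and equity-index panels.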
Detecting congestion in DEA by solving one model
Maryam Shadab, S. Saati, Reza Farzipoor-Saen, M. Khoveyni, A. Mostafaee
Operations Research and Decisions, 2021. doi: https://doi.org/10.37190/ORD210105

The presence of input congestion is one of the key issues that lowers the efficiency and performance of decision making units (DMUs). Determining congestion is therefore of prime importance, and removing it improves the performance of DMUs. One of the most appropriate methods for detecting congestion is data envelopment analysis (DEA). Since the output of inefficient units can be increased, keeping the input constant, by projecting onto the weak efficiency frontier, it is unnecessary to examine congested inefficient DMUs; in this case, we therefore determine only congested vertex units. To this end, a single LP model in DEA is proposed, and the status of congestion (strong or weak congestion) is obtained. In our method, the vertex unit under evaluation is eliminated from the production technology; if there then exists an activity in the production technology with lower inputs and higher outputs than the omitted unit, we say that the vertex unit evidences congestion. A feature of our model is that congested units can be identified by solving only one LP model, with easier and fewer calculations than other methods. A data set obtained from Japanese chain stores over a period of 27 years is used to demonstrate the applicability of the proposed model, and the results are compared with some previous methods.
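The paper's LP is not reproduced in the abstract; the sketch below is one plausible reading of the dominance test it describes (remove the evaluated DMU, then search the convex hull of the remaining DMUs for an activity with lower inputs and higher outputs, maximising the total slack), not the authors' actual model:

```python
import numpy as np
from scipy.optimize import linprog

def congestion_slacks(X, Y, o):
    """For DMU o, look in the convex hull of the OTHER DMUs for an
    activity with inputs <= x_o and outputs >= y_o, maximising total
    slack.  X: (m, n) inputs, Y: (r, n) outputs, columns are DMUs.
    Positive optimal slacks are read as evidence of congestion.
    Sketch of the idea only, not the paper's exact LP."""
    m, n = X.shape
    r = Y.shape[0]
    keep = [j for j in range(n) if j != o]
    Xk, Yk = X[:, keep], Y[:, keep]
    k = len(keep)
    # variables: lambda (k), s_minus (m), s_plus (r); maximise sum of slacks
    c = np.concatenate([np.zeros(k), -np.ones(m + r)])
    # X lambda + s_minus = x_o ;  Y lambda - s_plus = y_o ;  sum lambda = 1
    A_eq = np.block([
        [Xk, np.eye(m), np.zeros((m, r))],
        [Yk, np.zeros((r, m)), -np.eye(r)],
        [np.ones((1, k)), np.zeros((1, m + r))],
    ])
    b_eq = np.concatenate([X[:, o], Y[:, o], [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (k + m + r))
    if not res.success:
        return None          # no feasible dominating activity
    return res.x[k:k + m], res.x[k + m:]
```

On a toy technology with DMUs A = (input 1, output 5) and B = (input 2, output 4), evaluating B finds A dominating it with input slack 1 and output slack 1, i.e., B evidences congestion in this reading.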
A literature review of interval-valued intuitionistic fuzzy multi-criteria decision-making methodologies
Melda Kokoç, S. Ersöz
Operations Research and Decisions, 2021. doi: https://doi.org/10.37190/ord210405

Multi-criteria decision-making (MCDM) is one of the most popular problem areas handled by researchers in the literature. Since interval-valued intuitionistic fuzzy set (IVIFS) theory generates evaluations of linguistic expressions that are as realistic as possible, researchers have been extending traditional MCDM methods to the IVIF environment, especially in the last decade. This study provides a literature review of the relevant articles from several academic databases on applications of IVIF-MCDM methods. The review of 131 publications addresses specific research questions. To capture the research publication trend, it offers a visual analysis that examines the studies from different perspectives, such as application areas, IVIF-MCDM methods, citations, most relevant journals, and validation methods. One of the most remarkable results of the review is that most publications in this field appear in SCIE-indexed journals. Another noteworthy finding is that China produces the most articles in the field, and that English-language journals are mostly chosen for publication. The investment selection problem is the most common application area, and the TOPSIS method is the one most frequently preferred in applications. This study stands out as the most comprehensive compilation of publications extending traditional MCDM methods to IVIF sets, and it will be an important reference for future researchers and decision-makers advancing MCDM methods under vagueness and ambiguity.
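Many of the reviewed methods rank alternatives with a score function over interval-valued intuitionistic fuzzy numbers; a common choice is the Xu-style score below (one device among many in this literature, chosen here for illustration):

```python
def ivif_score(mu, nu):
    """Score of an interval-valued intuitionistic fuzzy number with
    membership interval mu = [a, b] and non-membership interval
    nu = [c, d]:  S = (a + b - c - d) / 2, in [-1, 1]; higher is better.
    One common ranking device in the IVIF-MCDM literature."""
    (a, b), (c, d) = mu, nu
    return (a + b - c - d) / 2.0

# rank two hypothetical alternatives by score
s1 = ivif_score([0.4, 0.5], [0.2, 0.3])
s2 = ivif_score([0.3, 0.4], [0.3, 0.4])
```

Extended methods such as IVIF-TOPSIS typically aggregate criterion-wise IVIF evaluations first and apply a score (or distance-to-ideal) function like this one only at the final ranking step.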