Pub Date: 2013-09-01 | DOI: 10.1016/j.mcm.2012.11.023
Haidar AL-Khalidi, David Taniar, John Betts, Sultan Alamri
The cost of monitoring and keeping the location of a moving query updated is very high, as the range query must be re-evaluated whenever the query moves. Many methods have been proposed to minimize the computation and communication costs of continuously monitoring moving range queries. However, because this problem has been only partly solved, more radical efforts are needed. In response, we propose an efficient technique that adopts the concept of a safe region: an area within which the set of objects of interest does not change. While a moving query roams inside its safe region, there is no need to update the query. This paper presents efficient techniques for creating a competent safe region to reduce communication costs. Because the safe region has an irregular shape, we use Monte Carlo simulation to calculate its area. As long as the query remains inside its specified safe region, expensive re-computation is not required, which reduces the computational and communication costs in client–server architectures.
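The Monte Carlo area calculation described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: points are sampled uniformly in a bounding box around the region, and the hit fraction scales the box area. The `inside` predicate and all parameter values are assumptions for the example.

```python
import random

def monte_carlo_area(inside, x_range, y_range, n_samples=100_000, seed=42):
    """Estimate the area of an irregular region by uniform sampling.

    `inside(x, y)` is any membership predicate for the region, which must
    lie within the bounding box x_range x y_range.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    (x0, x1), (y0, y1) = x_range, y_range
    hits = sum(
        inside(rng.uniform(x0, x1), rng.uniform(y0, y1))
        for _ in range(n_samples)
    )
    return (x1 - x0) * (y1 - y0) * hits / n_samples

# A unit disc stands in for an irregular safe region; the estimate
# approaches pi as n_samples grows.
area = monte_carlo_area(lambda x, y: x * x + y * y <= 1.0, (-1, 1), (-1, 1))
```

The standard error shrinks as 1/sqrt(n_samples), so the sample count trades accuracy for computation, which matters on the mobile-client side of the architecture.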
On finding safe regions for moving range queries. Mathematical and Computer Modelling 58(5), pp. 1449–1458.
Pub Date: 2013-09-01 | DOI: 10.1016/j.mcm.2012.11.030
Tianhan Gao, Nan Guo, Kangbin Yim
Aiming to establish secure access and communications for a multi-operator wireless mesh network (WMN), this paper proposes a localized efficient authentication scheme (LEAS) under a broker-based hierarchical security architecture and trust model. Mutual authentication is achieved directly between a mesh client and an access mesh router through a ticket equipped with an identity-based proxy signature. Fast authentication for different roaming scenarios is supported by HMAC operations on both the mesh client and mesh router sides. As a by-product, key agreement among network entities is also implemented to protect subsequent communications after authentication. Our performance and security analysis demonstrates that LEAS is efficient and resilient to various kinds of attacks.
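A minimal sketch of the kind of HMAC-based ticket check the abstract alludes to. The ticket format, field names and key sizes below are hypothetical, not the LEAS protocol itself; it only illustrates why HMAC verification is cheap enough for fast roaming re-authentication.

```python
import hashlib
import hmac
import os

def issue_ticket(key, client_id, router_id, nonce):
    """Compute a ticket tag binding client, router and nonce (hypothetical format)."""
    msg = b"|".join([client_id.encode(), router_id.encode(), nonce])
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_ticket(key, client_id, router_id, nonce, tag):
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(issue_ticket(key, client_id, router_id, nonce), tag)

key = os.urandom(32)      # shared key, e.g. established via the broker (illustrative)
nonce = os.urandom(16)    # freshness value to resist replay
tag = issue_ticket(key, "client-1", "router-A", nonce)
assert verify_ticket(key, "client-1", "router-A", nonce, tag)
assert not verify_ticket(key, "client-1", "router-B", nonce, tag)  # wrong router fails
```

An HMAC verification costs two hash compressions, versus the modular arithmetic of a signature check, which is the efficiency argument for using it on the fast roaming path.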
LEAS: Localized efficient authentication scheme for multi-operator wireless mesh network with identity-based proxy signature. Mathematical and Computer Modelling 58(5), pp. 1427–1440.
Pub Date: 2013-09-01 | DOI: 10.1016/j.mcm.2013.02.008
Leandro Marin, Antonio Jara, Antonio Skarmeta Gomez
Security for the Internet of Things (IoT) presents the challenge of offering suitable security primitives to enable IP-based security protocols such as IPsec and DTLS. This challenge arises because host-based implementations and solutions do not deliver adequate performance on the devices used in the IoT, which are highly constrained in computational capability. It is therefore necessary to implement new, optimized and scalable cryptographic primitives that existing protocols can use to provide security, authentication, privacy and integrity for communications. Our research focuses on the mathematical optimization of cryptographic primitives for Public Key Cryptography (PKC) based on Elliptic Curve Cryptography (ECC). PKC was chosen because the IoT requires high scalability, multi-domain interoperability, self-commissioning, and self-identification.
Specifically, this contribution presents a set of optimizations for ECC on constrained devices, together with a brief tutorial on their implementation for the Texas Instruments MSP430 microcontroller (Briel, 2000) [1], which is commonly used in IoT devices such as 6LoWPAN nodes, active RFID and DASH7. Our main contribution is the proof that a class of special pseudo-Mersenne primes, which we have termed 'shifting primes', can be used for ECC primitives with 160-bit keys in a highly optimized way. This paper presents an ECC scalar multiplication with 160-bit keys within 5.4 million clock cycles on MSP430 devices without a hardware multiplier. Shifting primes offer a set of features that suit the instruction sets of tiny CPUs such as the MSP430 and other 8- and 16-bit CPUs.
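The speed-up from pseudo-Mersenne moduli can be sketched with the general fold that replaces division by shifts and adds. The authors' shifting primes impose extra structure beyond this, so the code below shows only the underlying pseudo-Mersenne idea; the value 2**160 - 47 is an illustrative modulus of the right form and size (its primality is not checked here, and the reduction only needs the 2**k - c shape).

```python
def reduce_pseudo_mersenne(x, k, c):
    """Reduce x modulo p = 2**k - c using only shifts, masks and adds.

    Since 2**k ≡ c (mod p), split x = hi * 2**k + lo and fold:
    x ≡ hi * c + lo (mod p). Repeating shrinks x below 2**k quickly.
    """
    p = (1 << k) - c
    while x >> k:
        x = (x >> k) * c + (x & ((1 << k) - 1))
    while x >= p:  # at most a couple of final subtractions
        x -= p
    return x

k, c = 160, 47
x = 3 ** 250  # an oversized intermediate product, as after a field multiplication
assert reduce_pseudo_mersenne(x, k, c) == x % ((1 << k) - c)
```

On a 16-bit CPU without a hardware multiplier, avoiding a full multi-precision division in every field operation is exactly where such moduli pay off.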
Shifting primes: Optimizing elliptic curve cryptography for 16-bit devices without hardware multiplier. Mathematical and Computer Modelling 58(5), pp. 1155–1174.
Pub Date: 2013-09-01 | DOI: 10.1016/j.mcm.2012.09.022
Chong Wu, Yongli Li, Qian Liu, Kunsheng Wang
This paper proposes a stochastic DEA model considering undesirable outputs with weak disposability, which can both deal with random errors in the collected data and capture the production rules implied by the weak disposability of the undesirable outputs. The model introduces the concept of risk to define the efficiency of decision-making units (DMUs), and uses the correlation matrix of all the variables to express weak disposability. Using properties of the probability distributions, the probabilistic form of the model is transformed into an equivalent deterministic form that can be solved. The model is validated on an environmental efficiency evaluation problem by designing different levels of random errors and comparing the new model with the existing one. In conclusion, the model has broad applicability and greater analytical capacity than the existing model.
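The step of turning a probabilistic constraint into a solvable deterministic one can be illustrated with the textbook chance-constraint equivalence for normally distributed errors; this is a standard construction, not necessarily the paper's exact derivation.

```python
from statistics import NormalDist

def deterministic_rhs(b, sigma, alpha):
    """Deterministic equivalent of the chance constraint
    P(a·x + eps <= b) >= 1 - alpha  with  eps ~ N(0, sigma**2):
    the right-hand side b is tightened to b - sigma * z_{1-alpha},
    where z is the standard normal quantile.
    """
    z = NormalDist().inv_cdf(1 - alpha)
    return b - sigma * z

# At a 5% risk level, a constraint with right-hand side 10 and noise
# sigma = 2 tightens to roughly 10 - 2 * 1.645 ≈ 6.71.
rhs = deterministic_rhs(10.0, 2.0, 0.05)
```

The tightened constraint is linear again, so the whole DEA program remains a linear program that any standard solver handles.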
A stochastic DEA model considering undesirable outputs with weak disposability. Mathematical and Computer Modelling 58(5), pp. 980–989.
Pub Date: 2013-09-01 | DOI: 10.1016/j.mcm.2013.06.007
Wendong Lv, Zhixiang Zhou, Hong Huang
The measurement of undesirable output based-on DEA in E&E: Models development and empirical analysis. Mathematical and Computer Modelling 58(5), pp. 907–912.
Pub Date: 2013-09-01 | DOI: 10.1016/j.mcm.2012.12.035
Chen Li, Linpeng Huang, Luxi Chen, Weichao Luo
Component-based development has gained much attention in recent years. As a software development paradigm, it enhances reusability and reduces complexity, but it also brings new reliability challenges, especially the deadlock problem. In this paper, we present a dynamic probe (DP) strategy for the deadlock problem in component-based systems (CBS). First, a formal semantic model is proposed to abstract the interactions among components for analyzing deadlock connections; then the dynamic probe detection (DPD) algorithm is used to detect deadlock loops. If deadlock connections are detected, the dynamic probe elimination (DPE) algorithm evaluates component reliability using several measurement indexes, finds the component with the lowest reliability, and replaces it. Finally, compared with related work, the results show that the proposed strategy achieves both lower processing cost and higher reliability.
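The deadlock-loop detection step can be illustrated with a plain cycle search over a component wait-for graph. This is a generic depth-first sketch, not the DPD algorithm itself; the graph shape and component names are assumptions.

```python
def find_deadlock_cycle(wait_for):
    """Detect a cycle in a component wait-for graph (DFS with colours).

    wait_for maps each component to the components it is blocked on.
    Returns one cycle as a list of component names, or None if the
    system is deadlock-free.
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    path = []

    def dfs(node):
        color[node] = GRAY
        path.append(node)
        for nxt in wait_for.get(node, []):
            state = color.get(nxt, WHITE)
            if state == GRAY:                 # back edge: a deadlock loop
                return path[path.index(nxt):] + [nxt]
            if state == WHITE:
                found = dfs(nxt)
                if found:
                    return found
        path.pop()
        color[node] = BLACK
        return None

    for node in list(wait_for):
        if color.get(node, WHITE) == WHITE:
            cycle = dfs(node)
            if cycle:
                return cycle
    return None

# A waits on B, B on C, and C back on A: a three-component deadlock loop.
cycle = find_deadlock_cycle({"A": ["B"], "B": ["C"], "C": ["A"], "D": []})
assert cycle == ["A", "B", "C", "A"]
```

Once a loop is returned, a recovery step in the spirit of DPE would pick the loop member with the lowest reliability score and replace it to break the cycle.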
Deadlock detection and recovery for component-based systems. Mathematical and Computer Modelling 58(5), pp. 1362–1378.
Pub Date: 2013-09-01 | DOI: 10.1016/j.mcm.2012.12.001
Lidia Ogiela
Cognitive categorisation systems are used for in-depth analyses of data containing significant layers of information. These layers consist of the semantic information found in the data sets, which allows the system executing the data analysis to understand the data to a certain extent and to reason based on the analysed information. Such processes are executed by semantic data analysis systems, called cognitive categorisation systems in the introduced classification of cognitive systems dedicated to analyses in various fields of application. Cognitive data analysis systems are also extended with processes for learning solutions hitherto unknown to the system, either because no appropriate pattern was defined or because no data allowed the analysed data to be unambiguously assigned to a corresponding pattern. The ability to train the system to interpret the analysed data correctly marks the beginning of a new class of systems that analyse individual features in the course of biological modelling, personalisation and personal identification. Identification systems are enhanced with elements of cognitive categorisation systems in order to execute an in-depth, more detailed personal analysis using the information collected in the system, which concerns not only anatomical and physical features but also, perhaps primarily, lesions found in various human organs. Such systems could be used in personal identification cases where doubts exist and reasoning from incomplete data sets poses a risk. Adding semantic analysis modules to personal identification systems is a novel scientific proposition that marks the beginning of the use of semantic analysis processes for biological modelling and personalisation tasks.
The proposed solutions are illustrated with selected E-UBIAS systems, which analyse medical image data in combination with identity analysis. The use of DNA cryptography and DNA codes to analyse personal data makes it possible to unambiguously assign the analysed data to an individual at the personal identification stage. This publication also presents a system in which semantic analysis, conducted through semantic interpretation and cognitive processes, allows the possible lesions a person suffers from to be identified and authorised.
Semantic analysis and biological modelling in selected classes of cognitive information systems. Mathematical and Computer Modelling 58(5), pp. 1405–1414.
RFID readers for passive tags suffer from reader-to-reader interference. Mathematical models of reader-to-reader interference can be categorized into single-interference and additive-interference models. Although it considers only direct collisions between two readers, the single-interference model is commonly adopted because it allows faster simulations. The additive-interference model, however, is more realistic, since it captures the total interference from several readers. In this paper, the two models are analysed and compared across several evaluation scenarios. In addition, the impact of different parameters, including the path loss exponent, the SIR/SINR threshold and the noise power, is evaluated for both models.
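The difference between the two interference models can be made concrete with a small numeric sketch. The power values, distances and noise floor are illustrative assumptions, and the path loss model is the usual 1/d**alpha form, not the paper's specific parametrisation.

```python
def received_power(p_tx, distance, path_loss_exp):
    """Simple path loss model: P_rx = P_tx / d**alpha (linear units)."""
    return p_tx / distance ** path_loss_exp

def single_interference_sir(p_signal, interferer_powers):
    """Single-interference model: only the strongest interferer counts."""
    return p_signal / max(interferer_powers)

def additive_sinr(p_signal, interferer_powers, noise_power):
    """Additive model: every interferer plus the noise power is summed."""
    return p_signal / (sum(interferer_powers) + noise_power)

# One reader's wanted signal received at 1 m; three interfering readers
# at 5, 8 and 10 m; path loss exponent 2; tiny noise floor.
alpha = 2.0
p_sig = received_power(1.0, 1.0, alpha)
interferers = [received_power(1.0, d, alpha) for d in (5.0, 8.0, 10.0)]
sir = single_interference_sir(p_sig, interferers)    # 25.0: only the 5 m reader
sinr = additive_sinr(p_sig, interferers, 1e-9)       # ~15.2: all readers summed
```

Since the additive denominator always dominates the single one, the additive model yields a lower (or equal) ratio and therefore predicts collisions in scenarios the single model declares safe.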
Evaluation of single and additive interference models for RFID collisions. Linchao Zhang, Renato Ferrero, Filippo Gandino, Maurizio Rebaudengo. Mathematical and Computer Modelling 58(5), pp. 1236–1248. Pub Date: 2013-09-01 | DOI: 10.1016/j.mcm.2013.01.011
Pub Date: 2013-09-01 | DOI: 10.1016/j.mcm.2011.12.043
Malin Song, Shuhong Wang, Qingling Liu
Production processes often produce by-products that harm the environment. However, traditional Data Envelopment Analysis (DEA) models cannot evaluate efficiency in the presence of these undesirable outputs. This article aims to solve this problem. It presents an improved DEA-SBM model, named ISBM-DEA, and constructs an illustration comparing it with the slacks-based measure (SBM) DEA model. The results show that the new model's conclusions are highly correlated with the efficiency assessment of the DEA-SBM model, while placing greater focus on the effects of undesirable outputs on production efficiency; the new model therefore has greater value for practical application and provides a better quantitative theoretical basis for environmental policy analysis.
Environmental efficiency evaluation considering the maximization of desirable outputs and its application. Mathematical and Computer Modelling 58(5), pp. 1110–1116.
Pub Date: 2013-09-01 | DOI: 10.1016/j.mcm.2012.04.003
Malin Song, Yaqin Song, Huayin Yu, Zeya Wang
With measured environmental efficiency as a basis, the redundancy rates of labor, capital stock, energy consumption and three kinds of industrial waste in each province are calculated using hierarchical cluster analysis, which suggests that China can be divided into three kinds of economic-environmental zone: intensively developed, optimally developed and slowly developed. Analysis of the redundancy rates indicates that the key to improving environmental efficiency is to reduce resource waste and the emission of the three kinds of industrial waste. The environmental efficiency of China's four main regions is measured, along with the inter- and intra-regional differences among them. A downtrend in efficiency is found for all four regions, with the East having the highest efficiency, followed by the Northeast, then the West, and the Central region the lowest. This quantitative analysis is helpful for establishing new policies to improve the environmental efficiency of each province in the future.
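The hierarchical clustering step can be sketched with a plain single-linkage agglomerative procedure over per-province redundancy-rate vectors. The data below are toy values for illustration, not the paper's figures, and the region names are hypothetical.

```python
def agglomerative_clusters(points, k):
    """Single-linkage agglomerative clustering into k clusters.

    points maps a name to its feature vector (e.g. redundancy rates of
    labor, capital, energy); the closest pair of clusters is merged
    repeatedly until only k clusters remain.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    clusters = [[name] for name in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]  # merge the closest pair
        del clusters[j]
    return [sorted(c) for c in clusters]

# Toy redundancy-rate vectors; three natural groups emerge.
regions = {
    "East-1": (0.10, 0.10), "East-2": (0.12, 0.09),
    "Central-1": (0.50, 0.55), "Central-2": (0.52, 0.50),
    "West-1": (0.90, 0.95), "West-2": (0.88, 0.90),
}
groups = agglomerative_clusters(regions, 3)
```

Stopping at k = 3 mirrors the paper's three economic-environmental zones; varying k shows how the zoning changes with the desired granularity.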
Calculation of China's environmental efficiency and relevant hierarchical cluster analysis from the perspective of regional differences. Mathematical and Computer Modelling 58(5), pp. 1084–1094.