Wireless sensor networks - a security perspective
H. Tahir, S. Shah
Pub Date: 2008-12-01, DOI: 10.1109/INMIC.2008.4777734
Growing interest in wireless sensor networks has shown that they support a broad variety of applications, currently including medical care, battlefield monitoring, environmental monitoring, surveillance and disaster prevention. Many of these applications require the sensor network to be deployed in areas that are hostile, inaccessible and mission critical. Given the resource-starved nature of sensor networks and their application domains, the requirements for a secure sensor network are twofold. We analyze sensor networks from a security perspective by identifying their vulnerabilities, summarize threat models and security benchmarks to explain the aims and objectives of a secure wireless sensor network, and describe various attacks along with their prevention mechanisms.
{"title":"Wireless sensor networks - a security perspective","authors":"H. Tahir, S. Shah","doi":"10.1109/INMIC.2008.4777734","DOIUrl":"https://doi.org/10.1109/INMIC.2008.4777734","url":null,"abstract":"Increased interest in the field of wireless sensor networks has proved that wireless sensor networks can have a broad variety of applications. Current applications of wireless sensor networks are in the fields of medical care, battlefield monitoring, environment monitoring, surveillance and disaster prevention. Many of these applications require that the sensor network be deployed in an area that is hostile, inaccessible and mission critical. Keeping in view the resource starved nature of sensor networks and its application domains, the requirements for a secure sensor network has become two fold. We have analyzed sensor networks from a security perspective by pointing out vulnerabilities in sensor networks. A summary of threat models and security benchmarks has also been pointed out to explain the aims and objectives of a secure wireless senor network. We have also explained various attacks and their prevention mechanisms.","PeriodicalId":112530,"journal":{"name":"2008 IEEE International Multitopic Conference","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126138368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling inter-failure time series using neural networks
S. Zaidi, S. N. Danial, B. Usmani
Pub Date: 2008-12-01, DOI: 10.1109/INMIC.2008.4777772
Software inter-failure time series analysis has long been an open question for reliability engineers. Many models have been proposed since the software reliability problem was first identified, but none of them produces adequate results. This study presents a neural network approach to modeling software inter-failure times. We compare several parametric software reliability models with the proposed neural network model and find the proposed model more suitable.
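The abstract does not specify the network architecture or data set. Purely as an illustration, the sketch below uses synthetic inter-failure times, an assumed lag-window setup and scikit-learn's MLPRegressor (none of which come from the paper) to show one common way to cast inter-failure-time prediction as a regression problem.

```python
# Illustrative sketch only: synthetic inter-failure times and an assumed
# lag-window formulation; the paper's actual data and architecture differ.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic inter-failure times whose mean grows as the software matures.
inter = rng.exponential(scale=np.linspace(5.0, 50.0, 120))

lags = 3  # predict the next inter-failure time from the previous three
X = np.array([inter[i:i + lags] for i in range(len(inter) - lags)])
y = inter[lags:]

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
print("mean absolute error:", float(np.mean(np.abs(pred - y[split:]))))
```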
{"title":"Modeling inter-failure time series using neural networks","authors":"S. Zaidi, S. N. Danial, B. Usmani","doi":"10.1109/INMIC.2008.4777772","DOIUrl":"https://doi.org/10.1109/INMIC.2008.4777772","url":null,"abstract":"Software inter-failure time series analysis has always been a question mark for the reliability engineers. Many models have been proposed since the problem of reliability discovers, but none of them produces adequate results. This study presents a neural network perspective of modeling inter-failure time of software. We compare different parametric models of software reliability with our proposed neural network model and found the proposed more suitable.","PeriodicalId":112530,"journal":{"name":"2008 IEEE International Multitopic Conference","volume":"126 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127399525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Best practices for software security: An overview
A. Yasar, D. Preuveneers, Y. Berbers, G. Bhatti
Pub Date: 2008-12-01, DOI: 10.1109/INMIC.2008.4777730
With the growth in software flaws there is rising demand for embedding security into development so that secure software is produced more efficiently. Different practices are in use to keep software intact, and these practices need to be scrutinized for the level of security, efficiency and complexity they provide; they may also be weighed in terms of Confidentiality, Integrity and Availability (CIA). Software security is a step-by-step process that cannot be achieved at a single stage; it must be taken into account from the beginning of the Software Development Life Cycle (SDLC). In this paper we review some of the best practices for secure software development and categorize them according to the phases of the software development life cycle. The result is a clear picture of best practices that a developer can follow at each particular SDLC phase.
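As a rough illustration of the kind of phase-to-practice categorization the paper describes, the snippet below sketches a minimal mapping; the phase names and example practices are generic assumptions, not the paper's actual list.

```python
# Generic illustration of categorizing practices by SDLC phase; the phase
# names and practices are assumed examples, not the paper's list.
SECURE_SDLC_PRACTICES = {
    "requirements":   ["abuse cases", "security requirements", "risk analysis"],
    "design":         ["threat modeling", "least-privilege design review"],
    "implementation": ["secure coding standards", "static analysis"],
    "testing":        ["penetration testing", "fuzzing"],
    "deployment":     ["hardened configuration", "patch management"],
}

def practices_for(phase):
    """Return the example practices recommended for a given SDLC phase."""
    return SECURE_SDLC_PRACTICES.get(phase.strip().lower(), [])

print(practices_for("Design"))
```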
{"title":"Best practices for software security: An overview","authors":"A. Yasar, D. Preuveneers, Y. Berbers, G. Bhatti","doi":"10.1109/INMIC.2008.4777730","DOIUrl":"https://doi.org/10.1109/INMIC.2008.4777730","url":null,"abstract":"With the growth of software flaws there is a rise in the demand of security embedding to achieve the goal of secure software development in a more efficient manner. Different practices are in use to keep the software intact. These practices also meant to be scrutinized for better results on the basis of the level of security, efficiency and complexity they are providing. It may also be weighted on the basis of Confidentiality, Integrity and Availability (CIA). Software security is a step by step procedure which can not be achieved just at a specific level but it should be taken into account from the beginning of the Software Development Life Cycle (SDLC). In this paper, we have taken into account some of the best practices for secure software development and categorized them based on the phases in software development lifecycle. The results enable us to draw a clear picture of the best practices in software development which will enable a developer to follow them on a particular SDLC phase.","PeriodicalId":112530,"journal":{"name":"2008 IEEE International Multitopic Conference","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115774371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Throughput and connectivity using constraint based mobility model for mobile ad hoc networks
H. Ramzan, A. Iqbal
Pub Date: 2008-12-01, DOI: 10.1109/INMIC.2008.4777744
Ad hoc networks are a new wireless networking paradigm for mobile hosts. Unlike traditional mobile wireless networks, ad hoc networks do not rely on any fixed infrastructure; instead, hosts rely on each other to keep the network connected. Military tactical and other security-sensitive operations are still the main applications of ad hoc networks, although there is a trend toward adopting them for commercial use because of their unique properties. One of the most important methods for evaluating the characteristics of ad hoc networking protocols is simulation. The topology and movement of the nodes in the simulation are key factors in the performance of the network protocol under study: once the nodes have been initially distributed, the mobility model dictates how they move within the network. When evaluating the performance of an ad hoc network protocol, the protocol should be tested under realistic conditions that mimic the real world. To overcome these shortcomings, the design and analysis of a restricted mobility model is key. In this restricted mobility model we exploit the fact that mobile nodes running the OLSR protocol tend to concentrate, or settle down, near the center, and we define a mobility region smaller than the one given in the simulation parameters.
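The paper's exact model parameters are not given in the abstract. The sketch below is an assumed, minimal variant of a random-waypoint model in which waypoints are drawn only from a shrunken central region, which captures the general idea of restricting mobility described above; all names and numbers are illustrative.

```python
# Sketch of a "restricted" random-waypoint mobility model: nodes pick waypoints
# only inside a smaller region centred in the simulation area.  All parameter
# values are illustrative assumptions, not the paper's settings.
import random

def restricted_waypoints(n_nodes, area=1000.0, shrink=0.5, steps=100, seed=1):
    """Yield per-step (x, y) positions for each node.

    area   -- side length of the square simulation area (metres)
    shrink -- fraction of the side length used for the central mobility region
    """
    random.seed(seed)
    half = area * shrink / 2.0
    lo, hi = area / 2.0 - half, area / 2.0 + half
    pos = [(random.uniform(lo, hi), random.uniform(lo, hi)) for _ in range(n_nodes)]
    for _ in range(steps):
        # Each step, every node moves a fraction of the way towards a fresh
        # waypoint drawn from the restricted central region.
        targets = [(random.uniform(lo, hi), random.uniform(lo, hi)) for _ in range(n_nodes)]
        pos = [(x + 0.1 * (tx - x), y + 0.1 * (ty - y))
               for (x, y), (tx, ty) in zip(pos, targets)]
        yield pos

for snapshot in restricted_waypoints(5, steps=3):
    print(snapshot)
```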
{"title":"Throughput and connectivity using constraint based mobility model for mobile ad hoc networks","authors":"H. Ramzan, A. Iqbal","doi":"10.1109/INMIC.2008.4777744","DOIUrl":"https://doi.org/10.1109/INMIC.2008.4777744","url":null,"abstract":"Ad hoc networks are a new wireless networking paradigm for mobile hosts. Unlike traditional mobile wireless networks, ad hoc networks do not rely on any fixed infrastructure. Instead, hosts rely on each other to keep the network connected. The military tactical and other security-sensitive operations are still the main applications of ad hoc networks, although there is a trend to adopt ad hoc networks for commercial uses due to their unique properties. One of the most important methods for evaluating the characteristics of ad hoc networking protocols is through the use of simulation. The topology and movement of the nodes in the simulation are key factors in the performance of the network protocol under study. Once the nodes have been initially distributed, the mobility model dictates the movement of the nodes within the network. In the performance evaluation of a protocol for an ad hoc network, the protocol should be tested under realistic conditions which should impersonate the real world conditions. In order to overcome the short comings of the problem, design and analysis of restricted mobility model is the key. In this restricted mobility model we used the fact that mobile nodes moving under OLSR protocol tend to concentrate or settle down in the center and define a smaller mobility region then defined in the simulation parameters.","PeriodicalId":112530,"journal":{"name":"2008 IEEE International Multitopic Conference","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115780591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Applying data mining in medical data with focus on mortality related to accident in children
M.H. Saraee, Z. Ehghaghi, Hoda Meamarzadeh, B. Zibanezhad
Pub Date: 2008-12-01, DOI: 10.1109/INMIC.2008.4777728
Trauma is the leading cause of death in children, so a tool is needed to predict and help prevent poor outcomes in these patients. Data mining is the science of extracting useful information from large data sets or databases through statistical and logical analysis, looking for patterns that can help decision makers. In this paper we present an approach that uses data mining to classify mortality related to accidents in children under 15. The data were gathered from patient files recorded in the medical record section of the Alzahra Hospital in Isfahan. The data mining methods used are a decision tree and Bayes' theorem. Applying these techniques to the data yields interesting and valuable results: comparing the models on the test set, the decision tree performs better than Bayes' theorem in this case. We used Clementine 12.0 to build the models.
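As a hedged illustration of the comparison described above (the hospital records are not public), the sketch below trains a decision tree and a Gaussian naive Bayes classifier - used here as a stand-in for the paper's Bayes'-theorem model - on synthetic data with scikit-learn rather than Clementine.

```python
# Illustrative comparison on synthetic data; features, sample size and model
# parameters are assumptions, not the Alzahra Hospital data set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                           random_state=0)  # stand-in for the accident records
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("decision tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
                  ("naive Bayes", GaussianNB())]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```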
{"title":"Applying data mining in medical data with focus on mortality related to accident in children","authors":"M.H. Saraee, Z. Ehghaghi, Hoda Meamarzadeh, B. Zibanezhad","doi":"10.1109/INMIC.2008.4777728","DOIUrl":"https://doi.org/10.1109/INMIC.2008.4777728","url":null,"abstract":"Trauma is the main leading cause of death in children; we need a tool to prevent and predict the outcome in these patients. Data mining is the science of extracting the useful information from a large amount of data sets or databases that leads to statistical and logical analysis and looking for patterns that could help the decision makers. In This paper we offer an approach for using data mining in classifying mortality rate related to accidents in children under 15. These data were gathered from the patient files which were recorded in the medical record section of the Alzahra Hospital in Isfahan. The data mining methods in use are decision tree and Bayes' theorem. Applying DM techniques to the data brings about very interesting and valuable results. It is concluded that in this case, comparing the result of evaluating the models on test set, decision tree works better than Bayes' theorem. In this paper, we have used Clementine12.0 for creating the models.","PeriodicalId":112530,"journal":{"name":"2008 IEEE International Multitopic Conference","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116052144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Web warehouse: Towards efficient distributed business management
S. Malik, M. U. Shaikh
Pub Date: 2008-12-01, DOI: 10.1109/INMIC.2008.4777771
A distributed database is a well-known and popular concept that most organizations use when they need to store data over a network or in different locations. Once they have accumulated a large amount of operational data, they want to use it to support decision making as a means of gaining competitive advantage. The problem is that operational systems were not designed to support decision making, so organizations need a way to use their archived and operational data for that purpose. The data warehouse was designed to meet the requirements of organizations that receive data from different operational sources and need decision support. Organizations also keep data on the web, and as an organization matures this data grows. Managing that data effectively is difficult because there is no global structure or organization of data on the web; to maintain it effectively we need a web warehouse. A web warehouse is the management of data on the web - in effect, a data warehouse on the net. This paper describes the effectiveness of the web warehouse in moving toward business intelligence.
{"title":"Web warehouse: Towards efficient distributed business management","authors":"S. Malik, M. U. Shaikh","doi":"10.1109/INMIC.2008.4777771","DOIUrl":"https://doi.org/10.1109/INMIC.2008.4777771","url":null,"abstract":"Distributed Database is a very well known and popular concept that most of the organizations use when they need to store data over the network or in different locations. Now when they have large amount of operational data they want to use that data to support decision making as a mean of gaining competitive advantages. But the problem is that these operational systems were not designed to support any sort of decision making. Organizations need to have a way in which they can use their archive and operational data for decision making. Data warehouse was designed to meet the requirements of the organizations which receive data from different operational sources and support decision making. Organizations have there data on the web and as the organization matures data also increases. The problem for business organizations was to manage that data effectively because there is no global structure and organization of data available on the web. To maintain data effectively on the web we need a Web Warehouse. Web warehouse is the management of data on the web, it is a data warehouse on the net. This paper describes the effectiveness of the web warehouse leading towards business intelligence.","PeriodicalId":112530,"journal":{"name":"2008 IEEE International Multitopic Conference","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116513468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling and control of a doubly fed induction generator using PSO algorithm
Abolfazl Jalilvand, M. Jabbari, G. Govar, H. Khoshkhoo
Pub Date: 2008-12-01, DOI: 10.1109/INMIC.2008.4777701
This paper focuses on the modeling and control of a doubly-fed induction generator (DFIG). The presented model is developed from the basic flux linkage, voltage and torque equations. To control the DFIG's active power, a suitable method based on the PSO algorithm is proposed. The simulation results show that the presented method is a useful way to improve the output power of the DFIG.
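The abstract does not give the PSO settings or the cost function. The sketch below is a generic particle swarm optimizer applied to a toy controller-tuning objective, intended only to illustrate the kind of search the paper describes; the plant model, gain bounds and PSO coefficients are all assumptions.

```python
# Minimal PSO sketch of the kind that could tune a gain pair (Kp, Ki) for
# active-power tracking; the plant and cost are toy stand-ins, not the
# paper's DFIG equations.
import random

def cost(gains):
    """Toy objective: sum of squared tracking error of a crude first-order loop."""
    kp, ki = gains
    p, integ, err_sum, target = 0.0, 0.0, 0.0, 1.0
    for _ in range(200):
        err = target - p
        integ += err * 0.01
        u = kp * err + ki * integ
        p += 0.01 * (u - p)          # crude first-order plant response
        err_sum += err * err
    return err_sum

def pso(obj, dim=2, n=20, iters=100, lo=0.0, hi=10.0, w=0.7, c1=1.5, c2=1.5):
    random.seed(0)
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_val = [obj(p) for p in pos]
    gbest = min(zip(pbest_val, pbest))[1][:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = obj(pos[i])
            if v < pbest_val[i]:
                pbest_val[i], pbest[i] = v, pos[i][:]
        gbest = min(zip(pbest_val, pbest))[1][:]
    return gbest, obj(gbest)

print(pso(cost))
```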
{"title":"Modeling and control of a doubly fed induction generator using PSO algorithm","authors":"Abolfazl Jalilvand, M. Jabbari, G. Govar, H. Khoshkhoo","doi":"10.1109/INMIC.2008.4777701","DOIUrl":"https://doi.org/10.1109/INMIC.2008.4777701","url":null,"abstract":"This paper focused on modeling and controlling of a doubly-fed induction generator (DFIG). The presented model is developed based on the basic flux linkage, voltage and torque equations. In order to control the DFIG's active power, a suitable method based on PSO algorithm has been proposed. The simulation results show that the presented method is a useful way to improve the output power of DFIG.","PeriodicalId":112530,"journal":{"name":"2008 IEEE International Multitopic Conference","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124844875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhanced Block Based color Image Encryption technique with confusion
S. Gilani, M. A. Bangash
Pub Date: 2008-12-01, DOI: 10.1109/INMIC.2008.4777736
Encryption is used to disguise data, making it unintelligible to unauthorized observers. Providing such security is especially important when data is transmitted across open networks such as the Internet. Image data have special features such as bulk capacity, high redundancy and high correlation among pixels, which impose special requirements on any encryption technique. In this paper, an extension is proposed to the block-based image encryption (BBIE) scheme that works in combination with the Blowfish encryption algorithm [16]. Whereas BBIE is meant for 256-color bitmap images, the proposed technique also handles RGB color images and, for the cases studied, improves the security of digital images. In this enhanced technique, which we call the enhanced block-based image encryption technique (EBBIE), the digital image is decomposed into blocks, and two consecutive operations - rotating each 3D true-color image block by 90 degrees and then flipping it row-wise - are performed to complicate the relationship between the original and processed image. The resulting blocks are then scrambled to form a transformed, confused image, and the Blowfish cryptosystem finally encrypts the image with a secret key. Experimental results show that, for the cases studied, correlation between adjacent pixels is decreased in all color components and entropy is increased.
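A minimal sketch of the confusion stage described above (rotate each block, flip it row-wise, scramble block order) follows; the block size, the seed-based permutation and the omission of the final Blowfish pass are simplifying assumptions, not the authors' exact procedure.

```python
# Sketch of the block confusion stage only: rotate each block 90 degrees,
# flip it vertically, then scramble block order.  The final Blowfish
# encryption step is omitted; block size and key handling are assumptions.
import numpy as np

def confuse(image, block=8, seed=42):
    """Apply the rotate/flip/scramble stage to an RGB image array (H, W, 3)."""
    h = image.shape[0] - image.shape[0] % block
    w = image.shape[1] - image.shape[1] % block
    img = image[:h, :w].copy()
    blocks = [np.flipud(np.rot90(img[r:r + block, c:c + block]))
              for r in range(0, h, block) for c in range(0, w, block)]
    order = np.random.default_rng(seed).permutation(len(blocks))  # key-derived in practice
    out = np.empty_like(img)
    idx = 0
    for r in range(0, h, block):
        for c in range(0, w, block):
            out[r:r + block, c:c + block] = blocks[order[idx]]
            idx += 1
    return out

demo = np.random.default_rng(0).integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(confuse(demo).shape)
```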
{"title":"Enhanced Block Based color Image Encryption technique with confusion","authors":"S. Gilani, M. A. Bangash","doi":"10.1109/INMIC.2008.4777736","DOIUrl":"https://doi.org/10.1109/INMIC.2008.4777736","url":null,"abstract":"Encryption is used to disguise data making it unintelligible to unauthorized observers. Providing such security is especially important when data is being transmitted across open networks such as the Internet. Since, image data have special features such as bulk capacity, high redundancy and high correlation among pixels that imposes special requirements on any encryption technique. In this paper, an extension is proposed to the block-based image encryption algorithm (BBIE) scheme that works in combination with Blowfish encryption algorithm [16]. Whereas BBIE is meant for 256-color bitmap images, the proposed technique also handles RGB color images and, for the cases studied, improves the security of digital images. In this enhanced technique, which we call the enhanced block based image encryption technique (EBBIE) the digital image is decomposed into blocks, then two consecutive operations - rotating each 3D true color image block by 90deg followed by flipping row-wise down - are performed to complicated the relationship between original and processed image. These rendered blocks are then scrambled to form a transformed confused image followed by Blowfish cryptosystem that finally encrypts the image with secret key. Experimental results show that correlation between adjacent pixels is decreased in all color components and entropy is increased for the cases studied.","PeriodicalId":112530,"journal":{"name":"2008 IEEE International Multitopic Conference","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126181061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Time-efficient dynamic scene management using octrees
A. Gupta, S. Vaishnavi, S. Malviya
Pub Date: 2008-12-01, DOI: 10.1109/INMIC.2008.4777718
In this paper, we present a method for managing a dynamic scene using octrees. Octrees are well suited to image rendering in 3D space because an octree is essentially a tree data structure in three dimensions. Most such methods resort to modifying - namely, resizing and rebuilding - the nodes of the tree in order to accomplish the desired results. The main concern in such an approach is to minimize, or preferably avoid, resizing nodes at runtime, as it takes a heavy toll on system resources. Here we present an algorithm that completely avoids resizing of nodes, thereby achieving greater efficiency. This aspect of the algorithm is also borne out by the experimental results we have obtained.
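The paper's data structures are not reproduced in the abstract. The sketch below shows one way an octree can be built so that node bounds are fixed at construction and never resized, which is the property emphasized above; class names, thresholds and depth limits are assumptions rather than the authors' design.

```python
# Octree sketch in which node bounds are fixed at construction time, so
# insertion never resizes existing nodes; thresholds are illustrative.
class OctreeNode:
    MAX_OBJECTS = 4
    MAX_DEPTH = 6

    def __init__(self, center, half, depth=0):
        self.center, self.half, self.depth = center, half, depth
        self.objects = []      # (point, payload) pairs stored at this node
        self.children = None   # eight children, created lazily

    def _octant(self, p):
        cx, cy, cz = self.center
        return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

    def _subdivide(self):
        cx, cy, cz = self.center
        q = self.half / 2.0
        self.children = [OctreeNode((cx + (q if i & 1 else -q),
                                     cy + (q if i & 2 else -q),
                                     cz + (q if i & 4 else -q)),
                                    q, self.depth + 1) for i in range(8)]

    def insert(self, point, payload=None):
        if self.children is not None:
            self.children[self._octant(point)].insert(point, payload)
            return
        self.objects.append((point, payload))
        if len(self.objects) > self.MAX_OBJECTS and self.depth < self.MAX_DEPTH:
            self._subdivide()
            for pt, pl in self.objects:
                self.children[self._octant(pt)].insert(pt, pl)
            self.objects = []

root = OctreeNode(center=(0.0, 0.0, 0.0), half=100.0)
root.insert((10.0, -5.0, 3.0), "player")
```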
{"title":"Time-efficient dynamic scene management using octrees","authors":"A. Gupta, S. Vaishnavi, S. Malviya","doi":"10.1109/INMIC.2008.4777718","DOIUrl":"https://doi.org/10.1109/INMIC.2008.4777718","url":null,"abstract":"In this paper, we present a method of management of a dynamic scene using octrees. The use of octrees in image rendering in 3D space is suitable as the octree is essentially a tree data structure in three dimensions. Most such methods resort to modification - namely, resizing and rebuilding - of the nodes of the tree used in order to accomplish the desired results. The main concern in such an approach is to minimize, or preferably, avoid resizing of nodes during runtime, as it takes a great toll on system resources. Here we present an algorithm that completely avoids resizing of nodes, hence achieving greater efficiency. This aspect of the algorithm is also borne out by the experimental conclusions we have obtained.","PeriodicalId":112530,"journal":{"name":"2008 IEEE International Multitopic Conference","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114650804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design and simulation of orthomode transducer in Ku-frequency band on HFSS
U. Rashid
Pub Date: 2008-12-01, DOI: 10.1109/INMIC.2008.4777751
A transducer is a device, usually electrical, electronic or electro-mechanical, that converts one type of energy to another for purposes such as measurement or information transfer. The type of transducer we implement here is an orthomode transducer (OMT). It is a polarization diplexer, a device that forms part of an antenna feed system and serves to combine or separate orthogonally polarized signals; it can also separate orthogonal polarizations within the same frequency band. In this paper it is deployed as a transmitter and receiver at the same time, receiving one signal in the downlink band (10.95 GHz to 12.75 GHz) at one rectangular port and transmitting another signal in the uplink band (14 GHz to 14.5 GHz) through the second rectangular port. Our orthomode transducer model is built on a circular waveguide with step transitions at the input rectangular waveguide ports. The scattering matrix parameters, return loss, VSWR, coupling and TX loss are analyzed in HFSS. Using computer-aided design we produced the design drawings for the workshop, and after the OMT was manufactured we tested it on a network analyzer and compared the results with the simulated ones.
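As a rough sanity check on the quoted bands (not taken from the paper), the snippet below computes the dominant-mode cutoff frequencies of a standard WR-75 rectangular guide and of an assumed circular-guide radius; both must lie below 10.95 GHz for the stated downlink signals to propagate.

```python
# Back-of-the-envelope cutoff check; WR-75 is the standard Ku-band rectangular
# guide, while the circular-guide radius is an assumed example value.
from math import pi

c = 299_792_458.0                      # speed of light, m/s

# Rectangular guide, dominant TE10 mode: fc = c / (2a)
a_wr75 = 19.05e-3                      # WR-75 broad-wall width, m
print("TE10 cutoff (WR-75): %.2f GHz" % (c / (2 * a_wr75) / 1e9))

# Circular guide, dominant TE11 mode: fc = 1.8412 * c / (2 * pi * r)
r = 9.5e-3                             # assumed radius, m
print("TE11 cutoff (circular): %.2f GHz" % (1.8412 * c / (2 * pi * r) / 1e9))
```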
{"title":"Design and simulation of orthomode transducer in Ku-frequency band on HFSS","authors":"U. Rashid","doi":"10.1109/INMIC.2008.4777751","DOIUrl":"https://doi.org/10.1109/INMIC.2008.4777751","url":null,"abstract":"Transducer is a device usually electrical, electronic or electro-mechanical that converts one type of energy to another for various purposes including measurement or information transfer. The type of transducer we are implementing is orthomode transducer. It is a polarization diplexer, a device that forms part of an antenna feed system and serves to combine or separate orthogonally polarized signals. This diplexer can also separate orthogonal polarizations within the same frequency band. In the current paper it has been deployed as a transmitter and receiver at the same time by receiving one signal at some particular downlink frequency (10.95 GHz to 12.75 GHz) at one rectangular port and transmitting the other signal at uplink frequency (14 GHz to 14.5 GHz) by using the second rectangular port. Our model of ortho-mode transducer built on a circular waveguide and supplied with step transitions in input rectangular waveguide ports is proposed. Scattering matrix parameters, return loss, VSWR, coupling and TX loss are analyzed on the HFSS. By using computer aided design we have taken the design figure for the workshop process. After having manufactured it, we tested OMT on network analyzer and compared the results with the simulate ones.","PeriodicalId":112530,"journal":{"name":"2008 IEEE International Multitopic Conference","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121932772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}