Automated synthesis of 8-output voltage distributor using incremental evolution
Pub Date: 2010-06-15 | DOI: 10.1109/AHS.2010.5546263
Y. Sapargaliyev, T. Kalganova
The automated synthesis of analog electronic circuits, including both the topology and the numerical values of each of the circuit's components, is recognized as a difficult problem. The problem becomes considerably harder as the size of a circuit and the number of its input/output pins increase. In this paper, for the first time, evolutionary automated synthesis of analog electronic circuits is applied to a multi-output circuit: an 8-output voltage distributor that distributes the incoming voltage signal among the outputs in a filter-like mode. Using substructure reuse, a dynamic fitness function, and incremental evolution, the largest analogue circuit evolved in this area, comprising 138 components, has been obtained.
{"title":"Automated synthesis of 8-output voltage distributor using incremental, evolution","authors":"Y. Sapargaliyev, T. Kalganova","doi":"10.1109/AHS.2010.5546263","DOIUrl":"https://doi.org/10.1109/AHS.2010.5546263","url":null,"abstract":"The automated synthesis of the analog electronic circuit, including both the topology and the numerical values for each of the circuit's component, is recognized as a difficult problem. This problem is aggregating considerably when the size of a circuit and the number of its input/output pins increases. In this paper for the first time the method of automated synthesis of the analog electronic circuit by mean of evolution is applied to the synthesis of a multi-output circuit, namely 8-output voltage distributor, that distributes the incoming voltage signal among the outputs in filter-like mode. Using the substructure reuse, dynamic fitness function and incremental evolution techniques the largest analogue circuit has been evolved in the area that has 138 components.","PeriodicalId":101655,"journal":{"name":"2010 NASA/ESA Conference on Adaptive Hardware and Systems","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115644224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evolutionary design and optimization of Wavelet Transforms for image compression in embedded systems
Pub Date: 2010-06-15 | DOI: 10.1109/AHS.2010.5546265
R. Salvador, F. Moreno, T. Riesgo, L. Sekanina
This paper describes initial studies of an Evolution Strategy, aimed at implementation on embedded systems, for the evolution of Wavelet Transforms for image compression. Previous works in the literature have already proved useful for this application, but they are highly computationally intensive. The work described here therefore deals with the simplifications made to those algorithms to reduce their computing requirements. Several optimizations have been made in the evaluation phase and in the EA operators. The results presented show that the proposed algorithm cut-outs still allow good results to be achieved while effectively reducing the computing requirements.
{"title":"Evolutionary design and optimization of Wavelet Transforms for image compression in embedded systems","authors":"R. Salvador, F. Moreno, T. Riesgo, L. Sekanina","doi":"10.1109/AHS.2010.5546265","DOIUrl":"https://doi.org/10.1109/AHS.2010.5546265","url":null,"abstract":"This paper describes the initial studies of an Evolution Strategy aimed at implementation on embedded systems for the evolution of Wavelet Transforms for image compression. Previous works in the literature have already been proved useful for this application, but they are highly computationally intensive. Therefore, the work described here, deals with the simplifications made to those algorithms to reduce their computing requirements. Several optimizations have been done in the evaluation phase and in the EA operators. The results presented show how the proposed algorithm cut outs still allow for good results to be achieved, while effectively reducing the computing requirements.","PeriodicalId":101655,"journal":{"name":"2010 NASA/ESA Conference on Adaptive Hardware and Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120948357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Process reliability based trojans through NBTI and HCI effects
Pub Date: 2010-06-15 | DOI: 10.1109/AHS.2010.5546257
Y. Shiyanovskii, F. Wolff, Aravind Rajendran, C. Papachristou, D. Weyer, W. Clay
In this paper, we introduce the notion of process reliability based trojans, which reduce the reliability of integrated circuits through malicious alterations of the manufacturing process conditions. In contrast to hardware/software trojans, which alter the circuitry or functionality of the IC respectively, process reliability trojans appear as a result of alterations in the fabrication process steps. The reduction in reliability is caused by accelerating the wear-out mechanisms of CMOS transistors, such as Negative Bias Temperature Instability (NBTI) or Hot Carrier Injection (HCI). Minor manufacturing process changes can result in the creation of infected ICs with a much shorter lifetime that are difficult to detect. Such infected ICs fail prematurely and might lead to catastrophic consequences. The paper describes possible process alterations for both NBTI and HCI mechanisms that might result in the creation of process reliability trojans. It also explores possible detection techniques that can help identify the hidden trojans and discusses various scenarios in which process reliability based trojans lead to severe damage.
{"title":"Process reliability based trojans through NBTI and HCI effects","authors":"Y. Shiyanovskii, F. Wolff, Aravind Rajendran, C. Papachristou, D. Weyer, W. Clay","doi":"10.1109/AHS.2010.5546257","DOIUrl":"https://doi.org/10.1109/AHS.2010.5546257","url":null,"abstract":"In this paper, we introduce the notion of process reliability based trojans which reduce the reliability of integrated circuits through malicious alterations of the manufacturing process conditions. In contrast to hardware/software trojans which either alter the circuitry or functionality of the IC respectively, the process reliability trojans appear as a result of alterations in the fabrication process steps. The reduction in reliability is caused by acceleration of the wearing out mechanisms for CMOS transistors, such as Negative Bias Temperature Instability (NBTI) or Hot Carrier Injection (HCI). The minor manufacturing process changes can result in creation of infected ICs with a much shorter lifetime that are difficult to detect. Such infected ICs fail prematurely and might lead to catastrophic consequences. The paper describes possible process alterations for both NBTI and HCI mechanisms that might result in creation of process reliability trojans. The paper also explores some possible detection techniques that can help identify the hidden trojans and discusses the various scenarios when process reliability based trojans lead to severe damages.","PeriodicalId":101655,"journal":{"name":"2010 NASA/ESA Conference on Adaptive Hardware and Systems","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124076308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance and area efficient transpose memory architecture for high throughput adaptive signal processing systems
Pub Date: 2010-06-15 | DOI: 10.1109/AHS.2010.5546272
Mohamed El-Hadedy, Sohan Purohit, M. Margala, S. J. Knapskog
This paper presents the design and analysis of a power- and area-efficient transpose memory structure for use in adaptive signal processing systems. The proposed architecture achieves significant improvements in system throughput over competing designs. We demonstrate the throughput performance of the proposed memory on FPGA as well as ASIC implementations. The memory was employed in a previously proposed watermarking architecture. The new memory design allows a 2X speedup for the watermarking algorithm and up to a 10X speedup for 2D DCT and IDCT algorithms compared to previously published work, while consuming significantly less power and area.
{"title":"Performance and area efficient transpose memory architecture for high throughput adaptive signal processing systems","authors":"Mohamed El-Hadedy, Sohan Purohit, M. Margala, S. J. Knapskog","doi":"10.1109/AHS.2010.5546272","DOIUrl":"https://doi.org/10.1109/AHS.2010.5546272","url":null,"abstract":"This paper presents the design and analysis of a power and area efficient transpose memory structure for use in adaptive signal processing systems. The proposed architecture achieves significant improvements in system throughput over competing designs. We demonstrate the throughput performance of the proposed memory on FPGA as well as ASIC implementations. The memory was employed in a watermarking architecture previously proposed. The new memory design allows for 2X speed up in performance for the watermarking algorithm and up to 10X speedup for 2D DCT and IDCT algorithms compared to previously published work, while consuming significantly lower power and area.","PeriodicalId":101655,"journal":{"name":"2010 NASA/ESA Conference on Adaptive Hardware and Systems","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133607876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recovery method for a turn-off failure mode of a laser array on an ORGA
Pub Date: 2010-06-15 | DOI: 10.1109/AHS.2010.5546252
D. Seto, Minoru Watanabe
Demand for large-gate-count, robust VLSI chips usable in radiation-rich space environments is increasing daily. Optically reconfigurable gate arrays (ORGAs) have been developed to realize a virtual gate count much larger than those of current VLSI chips by exploiting the large storage capacity of a holographic memory. The ORGA architecture is extremely robust against many failure modes caused by high-energy charged particles. However, the ORGA has one unacceptable failure mode: a turn-off failure of a laser array. This paper therefore proposes a recovery method for the turn-off failure mode of a laser array on an ORGA and presents its demonstration results.
{"title":"Recovery method for a turn-off failure mode of a laser array on an ORGA","authors":"D. Seto, Minoru Watanabe","doi":"10.1109/AHS.2010.5546252","DOIUrl":"https://doi.org/10.1109/AHS.2010.5546252","url":null,"abstract":"Demand for a large-gate-count robust VLSI chip that is usable in a radiation-rich space environment is increasing daily. Optically reconfigurable gate arrays (ORGAs) have been developed to realize a large virtual gate count that is much larger than those of current VLSI chips by exploiting the large storage capacity of a holographic memory. The ORGA architecture is extremely robust for many failure modes caused by high-energy charged particles. However, the ORGA has only an unallowable failure mode, which is a turn-off failure mode of a laser array. This paper therefore presents a proposal of a recovery method for a turn-off failure mode of a laser array on an ORGA and presents its demonstration results.","PeriodicalId":101655,"journal":{"name":"2010 NASA/ESA Conference on Adaptive Hardware and Systems","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131913178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A formal model for specification and optimization of flexible communication systems
Pub Date: 2010-06-15 | DOI: 10.1109/AHS.2010.5546253
Jiong Ou, Muhammad Farooq, Jan Haase, C. Grimm
The trend towards multi-function and multi-standard communication systems makes the development process more complicated than before. One of the key challenges to be addressed is system-level architecture exploration, where a suitable system architecture has to be derived at a very early stage of the design. Available solutions are largely ineffective for this design challenge. One basic problem is the lack of an appropriate formal model focused on the specification and optimization of reconfigurable or adaptive communication systems. Based on such a formal model, computer-aided system-level architecture exploration can be performed and the design process greatly simplified. In this paper, a new design methodology based upon a formal model is proposed, providing a solution to the design of flexible communication systems.
{"title":"A formal model for specification and optimization of flexible communication systems","authors":"Jiong Ou, Muhammad Farooq, Jan Haase, C. Grimm","doi":"10.1109/AHS.2010.5546253","DOIUrl":"https://doi.org/10.1109/AHS.2010.5546253","url":null,"abstract":"The trend towards multi-function and multistandard communication systems makes the development process more complicated than before. One of the key challenges, which has to be addressed, is the system-level architecture exploration, where a suitable system architecture has to be derived in the very early stage of a design. Available solutions are almost ineffective when targeting this design challenge. One basic problem is the lack of an appropriate formal model, which focuses on the specification and optimization of reconfigurable or adaptive communication system. Based on such a formal model, computer aided system-level architecture exploration can be performed and the design process can be greatly simplified. In this paper, a new design methodology based upon a formal model is proposed, which will provide a solution to the design of flexible communication systems.","PeriodicalId":101655,"journal":{"name":"2010 NASA/ESA Conference on Adaptive Hardware and Systems","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116173861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vision based navigation for autonomous space exploration
Pub Date: 2010-06-15 | DOI: 10.1109/AHS.2010.5546273
G. Flandin, B. Polle, J. Lheritier, P. Vidal
As a passive, light-weight system with a natural capacity to adapt to the environment, mimicking the human capacity for detecting hazards, vision-based navigation for space applications has been the subject of a sustained research effort in Europe for more than ten years. The “Navigation for Planetary Approach and Landing” (NPAL) ESA/Astrium Satellites project paved the way for a new European autonomous vision-based navigation system called VisNAV, aiming at a proof of concept with an elegant breadboard realization and preparing the next step towards flight demonstration. This paper presents the consolidated design and the HW/SW architecture of the real-time implementation. It also addresses the validation strategy, from simulation (making extensive use of virtual scene generation through a realistic modeling environment) to an in-flight demonstration experiment, as well as the achievable performance.
{"title":"Vision based navigation for autonomous space exploration","authors":"G. Flandin, B. Polle, J. Lheritier, P. Vidal","doi":"10.1109/AHS.2010.5546273","DOIUrl":"https://doi.org/10.1109/AHS.2010.5546273","url":null,"abstract":"As a passive light-weight system with a natural adaptation capacity to the environment, mimicking the human capacity for detecting hazards, vision-based navigation for space applications has been the subject of a sustained research effort in Europe for more than ten years. The “Navigation for Planetary Approach and Landing” (NPAL) ESA/Astrium Satellites project paved the way for a new European autonomous vision based navigation system called VisNAV, aiming at a proof of concept, with an elegant breadboard realization, preparing for the next step to flight demonstration. This paper presents the consolidated design, and HW/SW architecture of the real time implementation. It also addresses the validation strategy from simulation, making extensive use of virtual scene generation through a realistic modeling environment, to in-flight demonstration experiment, as well as the achievable performances.","PeriodicalId":101655,"journal":{"name":"2010 NASA/ESA Conference on Adaptive Hardware and Systems","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121950928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FPGA implementation of an efficient high-throughput sphere decoder for MIMO systems based on the smallest singular value threshold
Pub Date: 2010-06-15 | DOI: 10.1109/AHS.2010.5546236
Xiang Wu, J. Thompson
In this paper, we present an efficient high-throughput threshold-based sphere decoder (TSD) for multiple-input multiple-output (MIMO) systems. Depending on the instantaneous channel conditions, the proposed TSD compares the smallest singular value of the channel matrix with a predefined threshold on a frame-by-frame basis and switches between full expansion (FE) and partial expansion (PE) for the tree traversal to accelerate the detection procedure. The TSD has been implemented and validated on an FPGA platform, and the results indicate that the proposed decoder is well suited to a highly parallel, fully pipelined hardware implementation. The proposed algorithm offers considerable throughput improvement over the original fixed-complexity sphere decoder (FSD) with only slightly increased resource use.
{"title":"FPGA implementation of an efficient high-throughput sphere decoder for MIMO systems based on the smallest singular value threshold","authors":"Xiang Wu, J. Thompson","doi":"10.1109/AHS.2010.5546236","DOIUrl":"https://doi.org/10.1109/AHS.2010.5546236","url":null,"abstract":"In this paper, we present an efficient high-throughput threshold based sphere decoder (TSD) for multiple-input multiple-output (MIMO) systems. Depending on the instantaneous channel conditions, the proposed TSD compares the smallest singular value of the channel matrix with a predefined threshold on a frame-by-frame basis and switches between full expansion (FE) and partial expansion (PE) for the tree traversal to accelerate the detection procedure. The TSD has been implemented and validated on an FPGA platform and results indicate that the proposed decoder is very suitable for a highly-parallel and fully-pipelined hardware implementation. The proposed algorithm offers considerable throughput improvement over the original fixed-complexity sphere decoder (FSD) with only slightly increased resource use.","PeriodicalId":101655,"journal":{"name":"2010 NASA/ESA Conference on Adaptive Hardware and Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129020820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reconfigurable machine vision systems using FPGAs
Pub Date: 2010-06-15 | DOI: 10.1109/AHS.2010.5546238
C. Villalpando, R. Some
FPGAs provide a flexible architecture for implementing many different types of machine vision algorithms. They allow heavily parallel portions of those algorithms to be accelerated and optimized for high specific performance (MIPS:Watt ratio). In comparison to ASICs, FPGAs enable low-cost, quick-turn prototyping and algorithm development as well as lower production costs for small-quantity and one-off applications. FPGAs can also be reprogrammed in flight, allowing them to be configured for different applications as mission needs evolve. JPL has developed a suite of machine vision IP cores to accelerate many common machine vision tasks used in robotic mobility applications. Modules such as stereo correlation for ranging, filtering, optical flow, area-based correlation, feature detection, and image homography and rectification allow real-time processing of image data using much smaller systems with much less power draw than an appropriately sized general-purpose processor. These modules, along with a vision processing framework, are being re-cast in a generic plug-and-play form to allow rapid, low-cost configuration, reconfiguration, evolution, and adaptation of next-generation machine vision systems for mobile robotics.
{"title":"Reconfigurable machine vision systems using FPGAs","authors":"C. Villalpando, R. Some","doi":"10.1109/AHS.2010.5546238","DOIUrl":"https://doi.org/10.1109/AHS.2010.5546238","url":null,"abstract":"FPGAs provide a flexible architecture for implementing many different types of machine vision algorithms. They allow heavily parallel portions of those algorithms to be accelerated and optimized for high specific performance (MIPS:Watt ratio). In comparison to ASICS, FPGAs enable low cost, quick turn prototyping and algorithm development as well as lower production costs for small quantity and one off applications. FPGAs also have the ability to be reprogrammed in flight, allowing them to be configured for different applications as mission needs evolve. JPL has developed a suite of machine vision IP cores to accelerate many common machine vision tasks used in robotic mobility applications. Modules such as stereo correlation for ranging, filtering, optical flow, area based correlation, feature detection, and image homography and rectification allow the real-time processing of image data using much smaller systems with much less power draw then an appropriately sized general purpose processor. These modules, along with a vision processing framework, are being re-cast in a generic plug and play form to allow rapid, low cost configuration, reconfiguration, evolution and adaptation of next generation machine vision systems for mobile robotics.","PeriodicalId":101655,"journal":{"name":"2010 NASA/ESA Conference on Adaptive Hardware and Systems","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126869187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Calibrating a predictive cache emulator for SoC design
Pub Date: 2010-06-15 | DOI: 10.1109/AHS.2010.5546246
S. Mancini, L. Pierrefeu, Zahir Larabi, Y. Mathieu
Pre-fetching in a memory hierarchy is known to alleviate the “memory wall” problem, but its use is impeded by the difficulty of estimating its efficiency within a complex system such as a SoC (System on Chip) or NoC (Network on Chip). Methods are therefore needed to evaluate the benefit of pre-fetching at the earliest possible stage of a design flow, to help the designer choose architectural parameters or transform the application algorithm. In this paper we show that the emulation platform implementing the nD-AP Cache (n-Dimensional Adaptive and Predictive Cache) allows a platform-independent measurement of the cache's efficiency. The nD-AP Cache performs pre-fetching in multidimensional arrays, which are commonly used in image processing and multimedia applications. The obtained metric can be used to extrapolate the cache's performance to a much broader range of system configurations. The method for computing this metric is the calibration process, and the benchmarks performed show that the calibration is trustworthy. We also measured that the nD-AP Cache is two times faster than a standard PowerPC 2-way set-associative cache in the context of an image processing kernel.
{"title":"Calibrating a predictive cache emulator for SoC design","authors":"S. Mancini, L. Pierrefeu, Zahir Larabi, Y. Mathieu","doi":"10.1109/AHS.2010.5546246","DOIUrl":"https://doi.org/10.1109/AHS.2010.5546246","url":null,"abstract":"Pre-fetching in a memory hierarchy is known to alleviate the “memory wall” paradigm but its use is impeded because of the difficulty to estimate efficiency when used in a complex system such as a SoC (System on Chip) or NoC (Network on Chip). Therefore, some methods are needed to evaluate the benefit of pre-fetching at the earliest possible stage in a design flow to help the designer choose architectural parameters or transform the application algorithm. In this paper we show that the emulation platform implementing the nD-AP Cache (n-Dimensional Adaptive and Predictive Cache) allows to perform a platform-independent measurement of this cache efficiency. The nD-AP Cache performs pre-fetching in multidimensional arrays which are commonly used in image processing and multimedia applications. The obtained metric can be used to extrapolate the cache performance in a much broader system configuration. The method to compute this metric is the calibration process. The performed benchmarks show that the calibration process is confident. Also, we measured that the nD-AP Cache is two times faster than a standard PowerPC 2-way set associative cache in the context of an image processing kernel.","PeriodicalId":101655,"journal":{"name":"2010 NASA/ESA Conference on Adaptive Hardware and Systems","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132538710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}