{"title":"Extending the lifetime of a network of battery-powered mobile devices by remote processing: a Markovian decision-based approach","authors":"Peng Rong, Massoud Pedram","doi":"10.1145/775832.776060","DOIUrl":"https://doi.org/10.1145/775832.776060","url":null,"abstract":"This paper addresses the problem of extending the lifetime of a battery-powered mobile host in a client-server wireless network by using task migration and remote processing. This problem is solved by first constructing a stochastic model of the client-server system based on the theory of continuous-time Markovian decision processes. Next the dynamic power management problem with task migration is formulated as a policy optimization problem and solved exactly by using a linear programming approach. Based on the off-line optimal policy derived in this way, an on-line adaptive policy is proposed, which dynamically monitors the channel conditions and the server behavior and adopts a client-side power management policy with task migration that results in optimum energy consumption in the client. Experimental results demonstrate that the proposed method outperforms existing heuristic methods by as much as 35% in terms of the overall energy savings.","PeriodicalId":167477,"journal":{"name":"Proceedings 2003. Design Automation Conference (IEEE Cat. No.03CH37451)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130138121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Embedded intelligent SRAM","authors":"P. Jain, G. Suh, S. Devadas","doi":"10.1145/775832.776051","DOIUrl":"https://doi.org/10.1145/775832.776051","url":null,"abstract":"Many embedded systems use a simple pipelined RISC processor for computation and an on-chip SRAM for data storage. We present an enhancement called Intelligent SRAM (ISRAM) that consists of a small computation unit with an accumulator that is placed near the on-chip SRAM. The computation unit can perform operations on two words from the same SRAM row or on one word from the SRAM and the other from the accumulator. This ISRAM enhancement requires only a few additional instructions to support the computation unit. We present a computation partitioning algorithm that assigns the computations to the processor or to the new computation unit for a given data flow graph of a program. Performance improvement results from the reduction in the number of accesses to the SRAM, the number of instructions, and the number of pipeline stalls compared to the same operations in the processor. Experimental results on various benchmarks show up to 1.46X speedup with our enhancement.","PeriodicalId":167477,"journal":{"name":"Proceedings 2003. Design Automation Conference (IEEE Cat. No.03CH37451)","volume":"134 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131051163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implications of technology scaling on leakage reduction techniques","authors":"Y. Tsai, D. Duarte, N. Vijaykrishnan, M. J. Irwin","doi":"10.1145/775832.775880","DOIUrl":"https://doi.org/10.1145/775832.775880","url":null,"abstract":"The impact of technology scaling on three run-time leakage reduction techniques (input vector control, body bias control and power supply gating) is evaluated by determining limits and benefits, in terms of the potential leakage reduction, performance penalty, and area and power overhead in 0.25 um, 0.18 um, and 0.07 um technologies. HSPICE simulation results for various functional units and memory structures are presented to support a comprehensive analysis.","PeriodicalId":167477,"journal":{"name":"Proceedings 2003. Design Automation Conference (IEEE Cat. No.03CH37451)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134486845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Global resource sharing for synthesis of control data flow graphs on FPGAs","authors":"S. Memik, G. Memik, R. Jafari, E. Kursun","doi":"10.1145/775832.775985","DOIUrl":"https://doi.org/10.1145/775832.775985","url":null,"abstract":"In this paper we discuss the global resource sharing problem during synthesis of control data flow graphs for FPGAs. We first define the Global Resource Sharing (GRS) problem. Then, we introduce the Global Inter Basic Block Resource Sharing (GIBBS) technique to solve the GRS problem. GIBBS employs five heuristics: the first tries to minimize the number of connections between modules; the second considers the area gain; the third uses the criticality of operations assigned to resources as a measure for deciding on merging any given pair of resources; the fourth tries to capture common resource chains and overlap those to minimize both area and delay; and the fifth combines these heuristics. While applying resource sharing, we also consider the execution frequency of the basic blocks. Using our techniques we synthesized several CDFGs representing applications from the MediaBench suite. Our results show that we can reduce the total area requirement by 44% on average (up to 59%) while increasing the execution time by 6% on average.","PeriodicalId":167477,"journal":{"name":"Proceedings 2003. Design Automation Conference (IEEE Cat. No.03CH37451)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127431397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Energy-aware MPEG-4 FGS streaming","authors":"Kihwan Choi, Kwanho Kim, Massoud Pedram","doi":"10.1145/775832.776061","DOIUrl":"https://doi.org/10.1145/775832.776061","url":null,"abstract":"In this paper, we propose an energy-aware MPEG-4 FGS video streaming system with client feedback. In this client-server system, the battery-powered mobile client sends its maximum decoding capability (i.e., its decoding aptitude) to the server in order to help the server determine the additional amount of data (in the form of enhancement layers on top of the base layer) per frame that it sends to the client, and thereby, set its data rate. On the client side, a dynamic voltage and frequency scaling technique is used to adjust the decoding aptitude of the client while meeting a constraint on the minimum achieved video quality. As a measure of energy efficiency of the video streamer, the notion of a normalized decoding load is introduced. It is shown that a video streaming system that maintains this normalized load at unity produces the optimum video quality with no energy waste. We implemented an MPEG-4 FGS video streaming system on an XScale-based testbed in which a server and a mobile client are wirelessly connected by a feedback channel. Based on the actual current measurements in this testbed, we obtain an average of 20% communication energy reduction in the client by making the MPEG-4 FGS streamer energy-aware.","PeriodicalId":167477,"journal":{"name":"Proceedings 2003. Design Automation Conference (IEEE Cat. No.03CH37451)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115808184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nanometer design: place your bets","authors":"A. Kahng, S. Borkar, J. M. Cohn, A. Domic, P. Groeneveld, L. Scheffer, J. Schoellkopf","doi":"10.1145/775832.775971","DOIUrl":"https://doi.org/10.1145/775832.775971","url":null,"abstract":"Overview Two years ago, DAC-2001 attendees enjoyed a thrilling debate panel, “Who’s Got Nanometer Design Under Control?”, pitting sky-is-falling Physics die-hards against not-to-worry Methodology gurus. Then, the DAC audience overwhelmingly voted the match for the Methodologists. Now, we've just gone through the biggest business downturn in the industry's history, and we're hearing more and more about chip failures due to 130nm physical effects. Both physics and economics are a lot worse than we thought two years ago. Where are those simple, correct-by-construction methodologies for signal integrity, power integrity, low-power, etc. that we were promised? Were we bamboozled by glib promises from those Methodologists? In this session, we bring back the panelists from two years ago, not for another debate, but to hear well-reasoned perspectives on how to prioritize spending to address nanometer design challenges. Yes, methodology can solve any problem – but now we want to know which problems, in what priority order, at what cost. The panel will address the following questions. • What are the economic impacts and significance of the key nanometer design challenges, relative to each other? • Which nanometer design problems merit responsible R&D investment, in what amounts and proportion? • What is the likelihood of success, both near-term and long-term, in solving key nanometer design challenges? • Where will the answers come from? To keep the discussion very concrete, each panelist will be given a $100 budget, and must defend their allocation of this budget to attack various design problems. Where should the $100 be spent? The audience will determine the best-reasoned allocation, and the winning panelist keeps all the money.","PeriodicalId":167477,"journal":{"name":"Proceedings 2003. Design Automation Conference (IEEE Cat. No.03CH37451)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115873910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fractional-N frequency synthesizer design at the transfer function level using a direct closed loop realization algorithm","authors":"C. Y. Lau, M. Perrott","doi":"10.1145/775832.775966","DOIUrl":"https://doi.org/10.1145/775832.775966","url":null,"abstract":"A new methodology for designing fractional-N frequency synthesizers and other phase locked loop (PLL) circuits is presented. The approach achieves direct realization of the desired closed loop PLL transfer function given a set of user-specified parameters and automatically calculates the corresponding open loop PLL parameters. The algorithm also accommodates nonidealities such as parasitic poles and zeros. The entire methodology has been implemented in a GUI-based software package, which is used to verify the approach through comparison of the calculated and simulated dynamic and noise performance of a third order /spl Sigma/-/spl Delta/ fractional-N frequency synthesizer.","PeriodicalId":167477,"journal":{"name":"Proceedings 2003. Design Automation Conference (IEEE Cat. No.03CH37451)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124277133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic global buffer planning optimization based on detail block locating and congestion analysis","authors":"Yuchun Ma, Xianlong Hong, Sheqin Dong, Song Chen, Yici Cai, Chung-Kuan Cheng, Jun Gu","doi":"10.1145/775832.776036","DOIUrl":"https://doi.org/10.1145/775832.776036","url":null,"abstract":"By dividing the packing area into routing tiles, we can budget the buffer insertion. Detailed locating of the blocks within their rooms can be performed at each iteration of the annealing process to favor the later buffer planning. Buffer insertion affects the possible routes as well as the congestion of the packing, and the congestion estimation in this paper takes buffer insertion into account. We therefore devise a buffer planning algorithm that allocates buffers to tiles with congestion information considered. The buffer allocation problem is formulated as a network flow problem, and buffer allocation can be handled as an integral part of the floorplanning process. Since there is more freedom for floorplan optimization, a floorplanning algorithm integrated with buffer planning can achieve better performance and chip area.","PeriodicalId":167477,"journal":{"name":"Proceedings 2003. Design Automation Conference (IEEE Cat. No.03CH37451)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115094585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning from BDDs in SAT-based bounded model checking","authors":"Aarti Gupta, Malay K. Ganai, Chao Wang, Z. Yang, P. Ashar","doi":"10.1145/775832.776040","DOIUrl":"https://doi.org/10.1145/775832.776040","url":null,"abstract":"Bounded Model Checking (BMC) based on Boolean Satisfiability (SAT) procedures has recently gained popularity as an alternative to BDD-based model checking techniques for finding bugs in large designs. In this paper, we explore the use of learning from BDDs, where learned clauses generated by BDD-based analysis are added to the SAT solver, to supplement its other learning mechanisms. We propose several heuristics for guiding this process, aimed at increasing the usefulness of the learned clauses, while reducing the overheads. We demonstrate the effectiveness of our approach on several industrial designs, where BMC performance is improved and the design can be searched up to a greater depth by use of BDD-based learning.","PeriodicalId":167477,"journal":{"name":"Proceedings 2003. Design Automation Conference (IEEE Cat. No.03CH37451)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115116659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mixed signals on mixed-signal: the right next technology","authors":"Rob A. Rutenbar, D. Harame, K. Johnson, P. Kempf, T. Meng, R. Rofougaran, J. Spoto","doi":"10.1145/775832.775904","DOIUrl":"https://doi.org/10.1145/775832.775904","url":null,"abstract":"CMOS dominates digital microelectronics. However, wireless applications require RF circuits at 1-5GHz, and exotic higher frequency applications are on the horizon. Silicon-Germanium (SiGe) is a growing choice for these designs. But is it \"the\" answer? Some argue that scaled CMOS will handle all tomorrow's RF ICs. Others argue that one-chip SoC solutions will never be the winning strategy for these highly heterogeneous designs, and place their bets on system-in-package (SiP) technologies. Is there a right answer here? Is CMOS the \"only\" way, or just \"another\" way?","PeriodicalId":167477,"journal":{"name":"Proceedings 2003. Design Automation Conference (IEEE Cat. No.03CH37451)","volume":"10 13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117160933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}