Blind estimation of reverberation time using deep neural network
Pub Date: 2016-09-01 | DOI: 10.1109/ICNIDC.2016.7974586
Myungin Lee, Joon‐Hyuk Chang
In this paper, we propose a method to estimate the reverberation time (T60) from an observed reverberant speech signal using a deep neural network (DNN). Reverberation is a critical issue in speech processing, as it smears the characteristics of the sound in both the temporal and spectral domains and thereby degrades the performance of speech processing algorithms. Exploiting the room acoustic characteristics of reverberant speech can enhance the performance of a speech processing system, so blind estimation of reverberation time has been studied based on numerical interpretations of reverberation. In this paper, we adopt the speech decay rate and its distribution in each frequency bin as the input feature vectors of the DNN. The complex relation between each input feature vector and its T60 target label is modeled through multiple nonlinear hidden layers. We also introduce an approach to reduce the computational complexity while maintaining reasonable performance.
{"title":"Blind estimation of reverberation time using deep neural network","authors":"Myungin Lee, Joon‐Hyuk Chang","doi":"10.1109/ICNIDC.2016.7974586","DOIUrl":"https://doi.org/10.1109/ICNIDC.2016.7974586","url":null,"abstract":"In this paper, we propose a method to estimate reverberation time (T60) from the observed reverberant speech signal using deep neural network (DNN). Reverberation of speech signal is a critical issue in speech processing as the reverberation results smearing of the sound characteristics in both temporal and spectral domain resulting unfavorable effects on the performance of speech processing algorithms. Employing room acoustic characteristics of a reverberant speech can enhance the performance of the speech processing system so that the blind estimation of reverberation time has been studied based on the numerical interpretation of reverberation. In this paper, we adopt the speech decay rate and its distribution for each frequency bin as input feature vectors of DNN. Complex relation between each input feature vector and each T60 target label through multiple nonlinear hidden layers. We also introduce an approach to mitigate the computational complexity whilst maintaining rational performance.","PeriodicalId":439987,"journal":{"name":"2016 IEEE International Conference on Network Infrastructure and Digital Content (IC-NIDC)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127851207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A gene based semantic encoding method for subsumption testing and LCA detection
Pub Date: 2016-09-01 | DOI: 10.1109/ICNIDC.2016.7974546
Yuexin Wu, C. Fan
In most semantic-based service discovery procedures, subsumption testing and LCA (lowest common ancestor) detection are necessary steps, and many algorithms exist for these tasks. However, in the presence of multiple inheritance hierarchies, many traditional algorithms do not work properly. In this paper, we propose a gene-based ontology encoding method that can perform subsumption testing and LCA detection in constant time. Experimental results show that the proposed method effectively reduces the time required for semantic service discovery.
{"title":"A gene based semantic encoding method for subsumption testing and LCA detection","authors":"Yuexin Wu, C. Fan","doi":"10.1109/ICNIDC.2016.7974546","DOIUrl":"https://doi.org/10.1109/ICNIDC.2016.7974546","url":null,"abstract":"In most sematic based service discovery procedure, subsumption testing and LCA (lowest common ancestor) detection are the necessary step. There are a lot of algorisms to do this job. However, in the multiple inheritance hierarchies' scenery, many traditional algorisms cannot work properly. In this paper, we propose a gene based ontology encoding method which can implement subsumption testing and LCA detection in constant time. Experimental results show that the proposed method can effectively reduce the time of semantic service discovery.","PeriodicalId":439987,"journal":{"name":"2016 IEEE International Conference on Network Infrastructure and Digital Content (IC-NIDC)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128242814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel user-centric comp scheme for energy efficiency of dense heterogeneous network
Pub Date: 2016-09-01 | DOI: 10.1109/ICNIDC.2016.7974553
Xueying Li, Hailun Xia, Zhimin Zeng
In this paper, we investigate a user-centric Coordinated Multi-Point (CoMP) transmission scheme to improve Energy Efficiency (EE) in dense Heterogeneous Networks (HetNets), which takes advantage of both Dynamic Point Reduced Power (DPRP) and Joint Transmission (JT). DPRP with a user voting method, rather than Dynamic Point Blanking (DPB), is introduced to make full use of the transmit power while reducing energy consumption and mitigating inter-cell interference. The power-reduction pattern is decided in a user-centric way. First, CoMP user equipments (CUEs) report to two base stations (BSs) to indicate how urgently they need the power to be reduced. Then the BSs reduce power on the resource blocks (RBs) that affect the most UEs, so as to improve EE. JT with an overlapping cooperation set is applied only to the CUEs to improve their throughput, so it does not place too much pressure on the backhaul. The JT cooperation set is decided by the CUEs rather than by the locations of the BSs, to better adapt to the CUEs' requirements. Simulation results show that the proposed scheme clearly improves EE without causing a large drop in user throughput.
{"title":"A novel user-centric comp scheme for energy efficiency of dense heterogeneous network","authors":"Xueying Li, Hailun Xia, Zhimin Zeng","doi":"10.1109/ICNIDC.2016.7974553","DOIUrl":"https://doi.org/10.1109/ICNIDC.2016.7974553","url":null,"abstract":"In this paper, we investigate a user-centric Coordinated Multiple Point (CoMP) transmission scheme to improve Energy Efficiency (EE) in dense Heterogeneous Network (HetNet), which takes advantage of both Dynamic Point Reduced Power (DPRP) and Joint Transmission (JT). DPRP with a user voting method rather than Dynamic Point Blanking (DPB) is introduced to fully use the power while reducing energy consumption and mitigating inter-cell interference. The pattern of reducing power is decided in a user-centric way. Firstly, CoMP user equipments (CUEs) report to two base stations (BSs) to inform their urgency of CUEs to reduce power. Then the BSs reduce power on some resource blocks (RBs) which affect most UEs so as to get EE improved. JT with overlapping cooperation set is applied only to the CUEs to improve their throughput, therefore not bringing too much pressure to backhaul. The cooperation set of JT is decided by CUEs instead of the location of BSs to better adapt to the requirement of CUEs. Simulation results show that the scheme we proposed can improve EE obviously without leading to too much drop in user throughput.","PeriodicalId":439987,"journal":{"name":"2016 IEEE International Conference on Network Infrastructure and Digital Content (IC-NIDC)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133948795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast algorithm based on edge density and gradient angle for intra encoding in HEVC
Pub Date: 2016-09-01 | DOI: 10.1109/ICNIDC.2016.7974594
Huiming Huang, Fang Wei
The next-generation video coding standard, High Efficiency Video Coding (HEVC), achieves great gains over previous standards. However, this high efficiency comes with a huge computational complexity. To balance complexity and performance, this paper presents a fast algorithm for the intra part of HEVC. The algorithm includes a pre-partitioning of CUs based on the edge density of the textures, which simplifies the partitioning, and a new rough mode decision based on the most probable gradient angles, which reduces the number of candidate modes. Compared to the HEVC reference software HM8.0, the proposed algorithm saves about 33% of the encoding time in the all-intra configuration with a slight loss of PSNR.
{"title":"Fast algorithm based on edge density and gradient angle for intra encoding in HEVC","authors":"Huiming Huang, Fang Wei","doi":"10.1109/ICNIDC.2016.7974594","DOIUrl":"https://doi.org/10.1109/ICNIDC.2016.7974594","url":null,"abstract":"The next generation video coding standard, High Efficiency Video Coding (HEVC) has achieved great gains over the previous standards. However, the high efficiency brings huge computation complexity. In order to solve the problem of balance between complexity and performance, this paper presents a fast algorithm for the intra part in HEVC. The algorithm includes a pre-partitioning for CUs based on the edge density of the textures to simplify the partitioning and a new rough modes decision based on the most probable gradient angles to reduce candidate modes. Compared to HEVC reference software HM8.0, the proposed algorithm can save about 33% encoding time for all intra configurations with a slight loss of PSNR.","PeriodicalId":439987,"journal":{"name":"2016 IEEE International Conference on Network Infrastructure and Digital Content (IC-NIDC)","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124935906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improvement of convergence speed for an adaptive notch filter based on simplified lattice algorithm using pilot notch filters
Pub Date: 2016-09-01 | DOI: 10.1109/ICNIDC.2016.7974587
Hiroyuki Munakata, S. Koshita, M. Abe, M. Kawamata
This paper proposes a new adaptive notch filter with high convergence speed and low steady-state error. The filter is based on an improved Simplified Lattice Algorithm (SLA) that obtains variable step sizes from pilot adaptive notch filters. Computer simulations demonstrate that the proposed adaptive notch filter achieves faster convergence than the conventional SLA under the same mean-square-error condition.
{"title":"Improvement of convergence speed for an adaptive notch filter based on simplified lattice algorithm using pilot notch filters","authors":"Hiroyuki Munakata, S. Koshita, M. Abe, M. Kawamata","doi":"10.1109/ICNIDC.2016.7974587","DOIUrl":"https://doi.org/10.1109/ICNIDC.2016.7974587","url":null,"abstract":"This paper proposes a new adaptive notch filter with high convergence speed and low steady state error. The new adaptive notch filter is based on an improved Simplified Lattice Algorithm (SLA) which has variable step sizes using piloted adaptive notch filters. Computer simulations demonstrate that the adaptive notch filter claims an improved convergence speed under the same mean square error condition in comparison to the conventional SLA.","PeriodicalId":439987,"journal":{"name":"2016 IEEE International Conference on Network Infrastructure and Digital Content (IC-NIDC)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115865396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Study of feature word extraction and cluster in Chinese product reviews
Pub Date: 2016-09-01 | DOI: 10.1109/ICNIDC.2016.7974614
Ya-Ming Shen, Guang Chen
An evaluation system is the basis of product review mining. This paper introduces an unsupervised method to establish an aspect-based product evaluation system. We extract product feature words with a syntax parser and achieve a 72.33% F-value. The paper analyzes mobile phone reviews and clusters the labeled feature phrases from the SemEval task, achieving 71.5% precision, which verifies the effectiveness of the method. Finally, we visualize the clustering result of the mobile phone features and draw some conclusions by analyzing the relationships between different aspects.
{"title":"Study of feature word extraction and cluster in Chinese product reviews","authors":"Ya-Ming Shen, Guang Chen","doi":"10.1109/ICNIDC.2016.7974614","DOIUrl":"https://doi.org/10.1109/ICNIDC.2016.7974614","url":null,"abstract":"Evaluation system is the basis of product reviews mining. This paper introduces an unsupervised method to establish product evaluation system based on aspects. We extract product feature words with the syntax parser and achieve 72.33% F-value. This paper analyzes the mobile reviews, clusters the labeled feature phrases in the SemEval task and achieves 71.5% precision, which verifies the effectiveness of the method. Finally we make mobile features' clustering result visible and draw some conclusions by analyzing the relationship between different aspects.","PeriodicalId":439987,"journal":{"name":"2016 IEEE International Conference on Network Infrastructure and Digital Content (IC-NIDC)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132169176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tone mapping method based on retinex algorithm using threshold function
Pub Date: 2016-09-01 | DOI: 10.1109/ICNIDC.2016.7974627
Heungsoo Kim, S. Wee, Jechang Jeong
In this paper, we propose a tone mapping method that uses a threshold function. The conventional Multi-Scale Retinex algorithm cannot express detailed parts because the bright region of the dynamic range is damaged during compression. We reduce this damage, which occurs in the tone mapping process, by setting a threshold on the log-scale image. Experimental tests demonstrate that, compared to conventional algorithms, the proposed algorithm preserves detailed information and significantly improves image quality.
{"title":"Tone mapping method based on retinex algorithm using threshold function","authors":"Heungsoo Kim, S. Wee, Jechang Jeong","doi":"10.1109/ICNIDC.2016.7974627","DOIUrl":"https://doi.org/10.1109/ICNIDC.2016.7974627","url":null,"abstract":"In this paper, we proposed tone mapping method which uses threshold function. It was impossible for conventional Multi-Scale Retinex algorithm to express detail parts due to damage in bright region of dynamic range at compression processing. We reduce the damage, which occurs in tone mapping process, by setting the threshold of Log Scale image in this algorithm. The experimental tests demonstrated that proposed algorithm reserves detailed information and significantly improves the quality of image, compared to conventional algorithms.","PeriodicalId":439987,"journal":{"name":"2016 IEEE International Conference on Network Infrastructure and Digital Content (IC-NIDC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130507271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance analysis of DOA estimation in the presence of mutual coupling for UAV
Pub Date: 2016-09-01 | DOI: 10.1109/ICNIDC.2016.7974575
Junhyuk Kim, Jaehoon Lee, Hyowon Kim, Young-Mi Park, Jaehoon Choi, K. Hwang, Sunwoo Kim
In this paper, we analyze the performance of direction-of-arrival (DOA) estimation in the presence of mutual coupling. The received signals arrive from different angles and are uncorrelated with equal power. The method in this paper proceeds in three steps. We first consider a steering vector that is applicable to an unmanned aerial vehicle (UAV). We then analyze the differences between theoretical DOA estimation and practical DOA estimation under mutual coupling. Finally, we propose an improved estimation approach that accounts for mutual coupling. Simulation results demonstrate the effectiveness and accuracy of the improved approach under various conditions.
{"title":"Performance analysis of DOA estimation in the presence of mutual coupling for UAV","authors":"Junhyuk Kim, Jaehoon Lee, Hyowon Kim, Young-Mi Park, Jaehoon Choi, K. Hwang, Sunwoo Kim","doi":"10.1109/ICNIDC.2016.7974575","DOIUrl":"https://doi.org/10.1109/ICNIDC.2016.7974575","url":null,"abstract":"In this paper, we analyze performances of direction of arrival (DOA) estimation with existence of mutual coupling. Received signals are in different degrees, and equal power uncorrelated. The method in this paper proceeds in three steps. We consider a steering vector which is applicable to unmanned aerial vehicle (UAV). Then we analyze differences between theoretical DOA estimation and practical DOA estimation with mutual coupling. We propose an improved approach of estimation with mutual coupling. Simulation results demonstrate effectiveness and accuracy of the improved approach under various conditions.","PeriodicalId":439987,"journal":{"name":"2016 IEEE International Conference on Network Infrastructure and Digital Content (IC-NIDC)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126225374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Two-stage analysis of large-scale protocol information in mobile storage systems
Pub Date: 2016-09-01 | DOI: 10.1109/ICNIDC.2016.7974569
Junyong Jeong, Y. Song
Many recent mobile devices use flash memory as a storage medium. A flash-based storage device interacts with its host system using a pre-defined interface protocol. Therefore, in order to verify the correctness of storage device operation and to evaluate storage performance, it is important to analyze the interaction between a host and its storage device at the interface protocol level. However, the emergence of applications with intensive storage accesses, and operating-system support for multicore systems running multiple concurrent applications, make this analysis more complicated. In this paper, we propose a two-stage analysis method for large-scale protocol information and also introduce an efficient data organization that supports easy, random access to the protocol information. The proposed method has been implemented in a protocol analysis system for mobile storage.
{"title":"Two-stage analysis of large-scale protocol information in mobile storage systems","authors":"Junyong Jeong, Y. Song","doi":"10.1109/ICNIDC.2016.7974569","DOIUrl":"https://doi.org/10.1109/ICNIDC.2016.7974569","url":null,"abstract":"Many recent mobile devices use flash memory as a storage media. A flash-based storage interacts with its host system using a pre-defined interface protocol. Therefore, in order to verify the operation correctness of storage devices and to evaluate the performance of them, it is important to analyze the interaction between a host and its storage device at the interface protocol level. However, due to the emergence of applications with intensive storage accesses and the support by operating systems for multicore systems running multiple concurrent applications, it becomes make the analysis more complicated. In this paper, we propose a two-stage analysis method for large-scale protocol information and also introduce an efficient data organization to help easy and random access to protocol information. The proposed method has been implemented in a protocol analysis system for mobile storage.","PeriodicalId":439987,"journal":{"name":"2016 IEEE International Conference on Network Infrastructure and Digital Content (IC-NIDC)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128167267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nodes and links jointed critical region identification based network vulnerability assessing
Pub Date: 2016-09-01 | DOI: 10.1109/ICNIDC.2016.7974537
Song Wang, Tiankui Zhang, Chunyan Feng
Large-scale regionally-correlated failures resulting from natural disasters or intentional attacks pose a great threat to physical backbone networks, since such failures can cause network nodes and links co-located in a large geographical area to fail together. When threats of the same intensity occur at different physical locations, the damage to network performance varies greatly. In network vulnerability assessment, the critical region is defined as the destructed area that would lead to the highest network disruption. Traditional critical region identification for network vulnerability assessment is determined by nodes only, without considering link failures. To this end, this paper proposes a critical region identification method that jointly considers nodes and links when finding the critical region. We study the vulnerability assessment problem in two cases, a special case in which the failure center is constrained to lie at a network node and a general case in which it may lie at an arbitrary location, and propose an algorithm for each case. Simulation-based experiments on a synthetic network are conducted with different criticality metrics. The results verify the feasibility and effectiveness of the proposed critical region identification method in comparison with other methods.
{"title":"Nodes and links jointed critical region identification based network vulnerability assessing","authors":"Song Wang, Tiankui Zhang, Chunyan Feng","doi":"10.1109/ICNIDC.2016.7974537","DOIUrl":"https://doi.org/10.1109/ICNIDC.2016.7974537","url":null,"abstract":"Large-scale regionally-correlated failures resulting from natural disasters or intentional attacks pose a great threat to the physical backbone networks, since the impact of large-scale failures can cause network nodes and links co-located in a large geographical area to fail. When the same intensity of threats occurs at different physical locations, the damage to the network performance varies greatly. In the network vulnerability assessment, the critical region is defined as the destructed area which would lead to the highest network disruption. Traditional critical region identification for network vulnerability assessment is only determined by nodes, without considering the failures of the links. To this end, this paper proposes a critical region identification method that joints nodes and links to find the critical region. We study this vulnerability assessment problem in two cases, the special case of the failure center constrained at a network node and the general one of that at an arbitrary location, and propose two algorithms for these two cases respectively. The simulation-based experiment on synthetic network is given with different criticality metrics. The simulation results verify the feasibility and effectiveness of our proposed critical region identification method in comparison to others.","PeriodicalId":439987,"journal":{"name":"2016 IEEE International Conference on Network Infrastructure and Digital Content (IC-NIDC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128536711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}