Beamformer optimization with a constraint on user electromagnetic radiation exposure
Dawei Ying, D. Love, B. Hochwald
Pub Date: 2013-03-20 | DOI: 10.1109/CISS.2013.6624267
Wireless technology is now a ubiquitous part of life worldwide. A continuing and evolving concern is the possibility of adverse health effects from long-term exposure to electromagnetic radiation. In most countries, regulatory agencies set an exposure threshold in terms of the specific absorption rate (SAR). Surprisingly, portable wireless communication devices, such as mobile phones, are often designed with little attention to SAR thresholds, focusing instead on transmit power constraints. As cellular handsets become increasingly multifunctional, designing devices that provide high rate performance and satisfy SAR constraints will become increasingly difficult. In this paper, we present a design strategy that considers both transmit power and SAR for multiple-antenna transmit beamforming. Analytical and numerical results show substantial performance improvement over schemes designed using only the power constraint.
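A rough sketch of the kind of optimization the abstract describes, assuming the SAR constraint can be modeled as a quadratic form w^H R w on the beamforming vector w (the matrix R, the channel h, and both budgets below are invented for illustration; the paper's actual SAR model may differ). With the objective reduced to the real part of h^H w, the joint power-and-SAR design becomes convex:

    import numpy as np
    import cvxpy as cp

    # Hypothetical problem data: channel h and a Hermitian positive definite
    # matrix R modeling SAR as the quadratic form w^H R w.
    n = 4
    rng = np.random.default_rng(0)
    h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    R = A @ A.conj().T + n * np.eye(n)
    L = np.linalg.cholesky(R)                 # w^H R w == ||L^H w||^2

    P_max, S_max = 1.0, 2.0                   # power and SAR budgets (illustrative)

    w = cp.Variable(n, complex=True)
    # Maximizing |h^H w| equals maximizing Re(h^H w) up to a phase rotation.
    prob = cp.Problem(
        cp.Maximize(cp.real(h.conj() @ w)),
        [cp.sum_squares(w) <= P_max,                   # transmit power
         cp.sum_squares(L.conj().T @ w) <= S_max])     # SAR
    prob.solve()
    print("beamforming gain:", abs(h.conj() @ w.value) ** 2)

Dropping the SAR constraint recovers the usual power-only matched-filter beamformer, which is the baseline the paper improves on.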
{"title":"Beamformer optimization with a constraint on user electromagnetic radiation exposure","authors":"Dawei Ying, D. Love, B. Hochwald","doi":"10.1109/CISS.2013.6624267","DOIUrl":"https://doi.org/10.1109/CISS.2013.6624267","url":null,"abstract":"Wireless technology is now a ubiquitous part of life worldwide. A continuing and evolving concern is the possibility of adverse health effects from long term exposure to electromagnetic radiation. In most countries, regulatory agencies set an exposure threshold in terms of the specific absorption rate (SAR). Surprisingly, portable wireless communication devices, such as mobile phones, are often designed with little attention to the SAR thresholds, instead focusing on transmit power constraints. As cellular handsets continue to become more and more multifunctional, designing devices that provide high rate performance and satisfy SAR constraints will become increasingly difficult. In this paper, we present a design strategy that considers both transmit power and SAR for multiple antenna transmit beamforming. Analytical and numerical results show substantial performance improvement over schemes that are designed using only the power constraint.","PeriodicalId":268095,"journal":{"name":"2013 47th Annual Conference on Information Sciences and Systems (CISS)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131503037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance of turbo coded BPSK for an aeronautical channel
I. Fofanah, A. Cole-Rhodes, R. Dean
Pub Date: 2013-03-20 | DOI: 10.1109/CISS.2013.6552265
In this paper, we present the results of using a turbo decoder to retrieve a serially encoded data stream transmitted over a multipath channel. The transmitted data streams are Binary Phase Shift Keying (BPSK) modulated signals, encoded using two rate one-half (1/2) recursive systematic convolutional (RSC) encoders before transmission over the channel. The aeronautical channel is fast-fading and time-varying, and it is modeled as an FIR filter with complex channel gains determined by the Doppler spread and the multipath. At the receiver, we use turbo decoding to estimate the effect of the channel on the transmitted data. Simulation results compare the bit error rates of turbo-decoded BPSK to those of uncoded BPSK sent over an AWGN channel and a time-invariant multipath channel.
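A minimal numpy sketch of the channel model and the uncoded baseline: BPSK through an FIR filter with complex tap gains plus AWGN. The tap values are invented for illustration, and the turbo encoder/decoder pair is omitted entirely:

    import numpy as np

    rng = np.random.default_rng(1)
    n_bits = 100_000
    bits = rng.integers(0, 2, n_bits)
    x = 1.0 - 2.0 * bits                        # BPSK: 0 -> +1, 1 -> -1

    taps = np.array([1.0, 0.4 - 0.3j, 0.2j])    # hypothetical multipath gains
    y = np.convolve(x, taps)[:n_bits]           # FIR multipath channel

    snr_db = 8.0
    noise_var = 10 ** (-snr_db / 10)
    y = y + np.sqrt(noise_var / 2) * (rng.standard_normal(n_bits)
                                      + 1j * rng.standard_normal(n_bits))

    # Naive one-tap detection (no equalizer), exposing the multipath penalty
    # that turbo decoding would have to overcome.
    bits_hat = (np.real(y / taps[0]) < 0).astype(int)
    print("uncoded BER:", np.mean(bits_hat != bits))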
{"title":"Performance of turbo coded BPSK for an aeronautical channel","authors":"I. Fofanah, A. Cole-Rhodes, R. Dean","doi":"10.1109/CISS.2013.6552265","DOIUrl":"https://doi.org/10.1109/CISS.2013.6552265","url":null,"abstract":"In this paper, we present the results of using a turbo decoder to retrieve a serially encoded data stream, which has been transmitted over a multipath channel. The transmitted data streams are Binary Phase Shift Keying (BPSK) modulated signals, which are encoded using two rate one-half (1/2) recursive systematic convolutional (RSC) encoders, and then transmitted over the channel. An aeronautical channel is very fast fading and time varying, and it has been modeled as an FIR filter with complex channel gains determined by Doppler and the multipath. At the receiver, we utilize turbo decoding to estimate the effect of the channel on the transmitted data. Simulation results are provided comparing the bit error rates of Turbo decoded BPSK to that of un-coded BPSK sent over an A WGN and a time-invariant multipath channel.","PeriodicalId":268095,"journal":{"name":"2013 47th Annual Conference on Information Sciences and Systems (CISS)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129382012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computing 3D saliency from a 2D image
Sudarshan Ramenahalli, E. Niebur
Pub Date: 2013-03-20 | DOI: 10.1109/CISS.2013.6552297
A saliency map is a model of visual selective attention based on purely bottom-up features of an image such as color, intensity, and orientation. Another bottom-up feature of visual input is depth, the distance between the eye (or sensor) and objects in the visual field. In this report we study the effect of depth on saliency. Unlike previous work, we do not use measured depth (disparity) information; instead, we compute a depth map from the 2D image using a depth-learning algorithm. This computed depth is then added as an additional feature channel to the 2D saliency map, and all feature channels are linearly combined with equal weights to obtain a 3D saliency map. We compare the efficacy of the 2D and 3D saliency maps in predicting human eye fixations using three different performance measures. The 3D saliency map outperforms its 2D counterpart on all measures. Perhaps surprisingly, our 3D saliency map computed from a 2D image performs better than an existing 3D saliency model that uses explicit depth information.
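The combination step is simple enough to sketch: normalize each feature channel to a common scale and average with equal weights, treating the learned depth map as one more channel. The maps below are random stand-ins, and the monocular depth-learning model is treated as a black box:

    import numpy as np

    def normalize(m):
        """Scale a feature map to [0, 1] so channels combine on equal footing."""
        m = m.astype(float)
        span = m.max() - m.min()
        return (m - m.min()) / (span + 1e-12)

    def saliency_3d(color, intensity, orientation, depth):
        """Equal-weight linear combination of the 2D channels plus depth."""
        channels = [normalize(c) for c in (color, intensity, orientation, depth)]
        return sum(channels) / len(channels)

    rng = np.random.default_rng(2)
    maps = [rng.random((64, 64)) for _ in range(4)]   # stand-in feature maps
    s = saliency_3d(*maps)
    print(s.shape, float(s.min()), float(s.max()))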
{"title":"Computing 3D saliency from a 2D image","authors":"Sudarshan Ramenahalli, E. Niebur","doi":"10.1109/CISS.2013.6552297","DOIUrl":"https://doi.org/10.1109/CISS.2013.6552297","url":null,"abstract":"A saliency map is a model of visual selective attention using purely bottom-up features of an image like color, intensity and orientation. Another bottom-up feature of visual input is depth, the distance between eye (or sensor) and objects in the visual field. In this report we study the effect of depth on saliency. Different from previous work, we do not use measured depth (disparity) information but, instead, compute a 3D depth map from the 2D image using a depth learning algorithm. This computed depth is then added as an additional feature channel to the 2D saliency map, and all feature channels are linearly combined with equal weights to obtain a 3-dimensional saliency map. We compare the efficacy of saliency maps (2D and 3D) in predicting human eye fixations using three different performance measures. The 3D saliency map outperforms its 2D counterpart in predicting human eye fixations on all measures. Perhaps surprisingly, our 3D saliency map computed from a 2D image performs better than an existing 3D saliency model that uses explicit depth information.","PeriodicalId":268095,"journal":{"name":"2013 47th Annual Conference on Information Sciences and Systems (CISS)","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130828106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
All digital programmable Gaussian pulse generator for ultra-wideband transmitter
Joseph H. Lin, P. Pouliquen, A. Andreou
Pub Date: 2013-03-20 | DOI: 10.1109/CISS.2013.6552250
We demonstrate an all-digital ultra-wideband transmitter that implements programmable Gaussian monocycles. We show test results from a prototype chip in a 0.5 µm CMOS process and simulation results from a 65 nm CMOS process.
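A Gaussian monocycle is the first derivative of a Gaussian pulse, so the ideal waveform that such an all-digital generator approximates fits in a few lines (the sample rate and pulse width below are illustrative):

    import numpy as np

    def gaussian_monocycle(t, sigma):
        """First derivative of a Gaussian, peak-normalized to 1; sigma sets
        the pulse width and hence the occupied bandwidth."""
        p = -t / sigma**2 * np.exp(-t**2 / (2 * sigma**2))
        return p / np.abs(p).max()

    fs = 50e9                                   # 50 GS/s sampling (illustrative)
    t = np.arange(-2e-9, 2e-9, 1 / fs)
    pulse = gaussian_monocycle(t, sigma=0.25e-9)
    print(len(pulse), pulse.max())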
{"title":"All digital programmable Gaussian pulse generator for ultra-wideband transmitter","authors":"Joseph H. Lin, P. Pouliquen, A. Andreou","doi":"10.1109/CISS.2013.6552250","DOIUrl":"https://doi.org/10.1109/CISS.2013.6552250","url":null,"abstract":"We demonstrate an all-digital ultra-wideband transmitter that implements programmable Gaussian monocycles. We show test results from a prototype chip in 0.5 um CMOS process and show simulation results from a 65nm CMOS process.","PeriodicalId":268095,"journal":{"name":"2013 47th Annual Conference on Information Sciences and Systems (CISS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131064740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Polarized pulse position modulation for wireless optical communications
Yuhan Dong, Zhang Tao, Xuedan Zhang
Pub Date: 2013-03-20 | DOI: 10.1109/CISS.2013.6624253
Pulse Position Modulation (PPM) improves both power efficiency and bit error rate (BER) performance compared with On-Off Keying (OOK) in wireless optical communication, and Polarization Shift Keying (PolSK) can prevent the negative effects of ambient light. However, both still suffer from short transmission distance and low data rate, especially in underwater environments. By combining the PPM and PolSK schemes, we propose a novel Polarized Pulse Position Modulation (P-PPM) scheme with two modes, full-rate and non-full-rate, selected by a pattern selective ratio. Numerical results suggest that the new full-rate scheme improves data rate, BER, and transmission distance over the PPM scheme at the same peak power, and also improves power efficiency over the PolSK scheme. A suitable pattern selective ratio is determined to balance data rate and BER performance.
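The abstract does not give the exact symbol mapping, but one plausible sketch of the full-rate combination is: each symbol carries log2(M) bits in the pulse position (the PPM part) and one extra bit in the polarization state (the PolSK part). The mapper below is hypothetical and omits the pattern-selective-ratio logic:

    import numpy as np

    def pppm_modulate(bits, M=4):
        """Hypothetical full-rate P-PPM mapper: log2(M) position bits plus
        one polarization bit per symbol. A sketch of the combination the
        abstract describes, not the paper's exact mapping."""
        k = int(np.log2(M)) + 1
        assert len(bits) % k == 0
        symbols = []
        for i in range(0, len(bits), k):
            chunk = bits[i:i + k]
            slot = int("".join(map(str, chunk[:-1])), 2)  # PPM slot index
            pol = chunk[-1]                               # 0/1 polarization
            symbols.append((slot, pol))
        return symbols

    print(pppm_modulate([1, 0, 1, 0, 0, 0], M=4))   # [(2, 1), (0, 0)]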
{"title":"Polarized pulse position modulation for wireless optical communications","authors":"Yuhan Dong, Zhang Tao, Xuedan Zhang","doi":"10.1109/CISS.2013.6624253","DOIUrl":"https://doi.org/10.1109/CISS.2013.6624253","url":null,"abstract":"Pulse Position Modulation (PPM) improves both the power efficiency and bit error rate (BER) performance compared with the On-Off Keying (OOK) in wireless optical communication. Polarization Shift Keying (PolSK) can prevent the negative effects of ambient light. However, they still suffer from short distance and low data rate especially in underwater environment. By combining PPM and PolSK schemes, we proposed a novel Polarized Pulse Position Modulation (P-PPM) scheme with two modes of full-rate or non-full-rate determined by a pattern selective ratio. Numerical results suggest that the new full-rate scheme improves the data rate, BER, and transmission distance compared with the PPM scheme with the same peak power, and also obtains an enhancement in power efficiency compared with the PolSK scheme. A proper pattern selective ratio is also determined to balance the data rate and BER performance.","PeriodicalId":268095,"journal":{"name":"2013 47th Annual Conference on Information Sciences and Systems (CISS)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133545890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time microphone array processing for sound source separation and localization
Longji Sun, Qi Cheng
Pub Date: 2013-03-20 | DOI: 10.1109/CISS.2013.6552257
In this paper, the problem of sound source separation and localization is studied using a microphone array. A pure delay mixture model which is typical in outdoor environments is adopted. Our proposed approach utilizes the subspace method to estimate the directions of arrival (DOAs) of the sources from the collected mixtures. Since sound signals are generally considered broadband, the DOA estimates for a source at different frequencies are used to approximate the probability density function of the DOA. The maximum likelihood criterion is used to determine the final DOA estimate for the source. Using the estimated DOAs, the corresponding mixing and demixing matrices in the frequency domain are computed, and the source signals are recovered using the inverse short time Fourier transform (STFT). Our algorithm inherits the robustness to noise of the subspace method and also supports real-time implementation. Comprehensive simulations and experiments have been conducted to examine various aspects of the algorithm.
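A sketch of the per-frequency subspace step, assuming a uniform linear array and the standard MUSIC estimator (the paper's subspace variant may differ). In the broadband scheme this runs once per STFT bin, and the per-bin peaks are pooled into an empirical density whose maximum gives the final maximum-likelihood DOA estimate:

    import numpy as np

    GRID = np.linspace(-90, 90, 181)            # candidate DOAs in degrees

    def music_doa(X, n_src, fc, d, c=343.0):
        """MUSIC pseudospectrum for one frequency bin. X: (n_mics, n_snapshots)
        STFT snapshots at frequency fc; d: mic spacing in meters."""
        n_mics = X.shape[0]
        R = X @ X.conj().T / X.shape[1]         # sample covariance
        _, vecs = np.linalg.eigh(R)             # eigenvalues ascending
        En = vecs[:, :n_mics - n_src]           # noise subspace
        k = 2 * np.pi * fc / c
        m = np.arange(n_mics)[:, None]
        A = np.exp(-1j * k * d * m * np.sin(np.deg2rad(GRID)))
        return 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)

    # Toy check: one source at 30 degrees, 8 mics, 200 snapshots.
    rng = np.random.default_rng(3)
    fc, d, n_mics, n_snap = 1000.0, 0.04, 8, 200
    a = np.exp(-1j * 2 * np.pi * fc / 343.0 * d
               * np.arange(n_mics) * np.sin(np.deg2rad(30)))
    s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
    noise = 0.1 * (rng.standard_normal((n_mics, n_snap))
                   + 1j * rng.standard_normal((n_mics, n_snap)))
    spec = music_doa(np.outer(a, s) + noise, n_src=1, fc=fc, d=d)
    print("peak at", GRID[np.argmax(spec)], "degrees")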
{"title":"Real-time microphone array processing for sound source separation and localization","authors":"Longji Sun, Qi Cheng","doi":"10.1109/CISS.2013.6552257","DOIUrl":"https://doi.org/10.1109/CISS.2013.6552257","url":null,"abstract":"In this paper, the problem of sound source separation and localization is studied using a microphone array. A pure delay mixture model which is typical in outdoor environments is adopted. Our proposed approach utilizes the subspace method to estimate the directions of arrival (DOAs) of the sources from the collected mixtures. Since sound signals are generally considered broadband, the DOA estimates for a source at different frequencies are used to approximate the probability density function of the DOA. The maximum likelihood criterion is used to determine the final DOA estimate for the source. Using the estimated DOAs, the corresponding mixing and demixing matrices in the frequency domain are computed, and the source signals are recovered using the inverse short time Fourier transform (STFT). Our algorithm inherits the robustness to noise of the subspace method and also supports real-time implementation. Comprehensive simulations and experiments have been conducted to examine various aspects of the algorithm.","PeriodicalId":268095,"journal":{"name":"2013 47th Annual Conference on Information Sciences and Systems (CISS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133635398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Throughput analysis of multi-hop relaying: The optimal rate and the optimal number of hops
Sungjoon Park, W. Stark
Pub Date: 2013-03-20 | DOI: 10.1109/CISS.2013.6552289
In this paper, we consider a half-duplex decode-and-forward multi-hop relay network. The channel model we consider includes path loss, shadowing, and fast fading. For this system and channel model, we find the outage probability of a multi-hop relay communication strategy that allows a packet to follow any path through the relays in the network. Based on the outage probability and the rate used in the network, we find the exact throughput of the system. From this characterization, we find the optimal operating rate and the optimal number of hops that maximize the throughput. We also consider a system in which the relays have buffers that allow them to delay transmission until channel conditions are favorable, and we compare the throughput of this buffer-equipped multi-hop relay network with that of the conventional multi-hop relay network without buffers.
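The rate/hop-count trade-off can be illustrated with a deliberately simplified model (not the paper's exact channel model): K equal hops across unit distance with path-loss exponent alpha, independent Rayleigh fading per hop, and half-duplex time-sharing so each hop gets a 1/K fraction of the channel uses. A grid search then finds the throughput-maximizing rate and hop count:

    import numpy as np

    def throughput(R, K, snr0=1.0, alpha=3.0):
        """End-to-end throughput of K-hop decode-and-forward under the
        illustrative model above."""
        snr_hop = snr0 * K ** alpha                   # shorter hops, higher SNR
        p_out = 1 - np.exp(-(2 ** R - 1) / snr_hop)   # Rayleigh outage per hop
        return (R / K) * (1 - p_out) ** K

    best = max(((throughput(R, K), R, K)
                for R in np.arange(0.5, 12, 0.25)
                for K in range(1, 9)), key=lambda t: t[0])
    print("throughput %.3f at rate %.2f with %d hops" % best)

The same two effects the paper quantifies appear here: more hops raise per-hop SNR but cost channel uses, and a higher rate raises the payload per use but also the outage probability.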
{"title":"Throughput analysis of multi-hop relaying: The optimal rate and the optimal number of hops","authors":"Sungjoon Park, W. Stark","doi":"10.1109/CISS.2013.6552289","DOIUrl":"https://doi.org/10.1109/CISS.2013.6552289","url":null,"abstract":"In this paper, we consider a half-duplex decode-and-forward multi-hop relay network. The model for the channel that we consider includes path loss, shadowing, and fast fading. For this system and channel model, we find the outage probability for the multi-hop relay communication strategy that allows a packet to follow any path through the relays in the network. Based on the outage probability and the rate that used in the network, we find the exact throughput of the system. From this understanding of the system throughput, we find the optimal operating rate and the optimal number of hops that maximize the throughput. We also consider a system in which the relays have buffers that allow them to delay transmission and transmit when the channel conditions are favorable. We compare the system throughput of this buffer-equipped multi-hop relay network with the conventional multihop relay network without buffers.","PeriodicalId":268095,"journal":{"name":"2013 47th Annual Conference on Information Sciences and Systems (CISS)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127852119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient exhaustive search for binary complementary code sets
G. Coxson, J. Russo
Pub Date: 2013-03-20 | DOI: 10.1109/CISS.2013.6552317
Binary complementary code sets offer a possibility that single binary codes cannot: zero aperiodic autocorrelation sidelobe levels. These code sets can be viewed as the columns of so-called complementary code matrices (CCMs). This matrix formulation is particularly useful in gaining the insight needed to develop an efficient exhaustive search for complementary code sets. An exhaustive search approach is described, designed to find all sets of K complementary binary codes of length N for specified N and K. Results for several cases are examined.
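The defining property is easy to state in code: a set of codes is complementary when their aperiodic autocorrelations sum to zero at every nonzero lag. A check like this would sit in the inner loop of any exhaustive search over length-N, size-K sets:

    import numpy as np

    def is_complementary(codes):
        """True if the aperiodic autocorrelations of the codes sum to zero
        at every nonzero lag."""
        n = len(codes[0])
        total = sum(np.correlate(c, c, mode="full")
                    for c in map(np.asarray, codes))
        total[n - 1] = 0                  # ignore the zero-lag peak
        return not np.any(total)

    # The classic length-2 Golay pair is a complementary set with K = 2.
    print(is_complementary([[1, 1], [1, -1]]))   # True
    print(is_complementary([[1, 1], [1, 1]]))    # False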
{"title":"Efficient exhaustive search for binary complementary code sets","authors":"G. Coxson, J. Russo","doi":"10.1109/CISS.2013.6552317","DOIUrl":"https://doi.org/10.1109/CISS.2013.6552317","url":null,"abstract":"Binary complementary code sets offer a possibility that single binary codes cannot-zero aperiodic autocorrelation sidelobe levels. These code sets can be viewed as the columns of so-called complementary code matrices, or CCMs. This matrix formulation is particularly useful in gaining the insight needed for developing an efficient exhaustive search for complementary code sets. An exhaustive search approach is described, designed to find all sets of K complementary binary codes of length N, for specified N and K. Results for several cases are examined.","PeriodicalId":268095,"journal":{"name":"2013 47th Annual Conference on Information Sciences and Systems (CISS)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134472750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ECG classification using ensemble of features
S. Gunal, S. Ergin, E. S. Gunal, A. Uysal
Pub Date: 2013-03-20 | DOI: 10.1109/CISS.2013.6624256
In the literature, countless efforts have been made to analyze and classify electrocardiogram (ECG) signals associated with various heart problems, and many feature extraction strategies have been used to extract discriminative information from ECG signals. In this paper, the contributions of widely used features to classification performance, and the processing times required to extract them, are comparatively analyzed. The features considered are time-domain (TD), wavelet transform (WT), and power spectral density (PSD) based features. These feature sets are employed individually and in combination within well-known pattern classifiers, namely a decision tree and an artificial neural network, to assess classification performance in each case. A wrapper-based feature selection strategy is then used to reveal the most discriminative feature subset within the combined set of all three feature families. The proposed framework is assessed on four classes of heart condition: normal, congestive heart failure, ventricular tachyarrhythmia, and atrial fibrillation. Experiments on a large dataset reveal that an appropriate subset of TD, WT, and PSD features, rather than any individual feature set, offers higher classification performance. On the other hand, if processing time is a concern, TD features come out on top with moderate classification performance.
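A hedged sketch of the three feature families, using PyWavelets for the WT features and Welch's method for the PSD features. The particular statistics and frequency bands are illustrative choices rather than the paper's exact definitions, and the classifiers and wrapper-based selection are omitted:

    import numpy as np
    import pywt                      # PyWavelets, for the WT features
    from scipy.signal import welch   # for the PSD features

    def ecg_features(x, fs=250):
        # Time domain (TD): cheap summary statistics of the raw signal.
        td = [x.mean(), x.std(), np.abs(np.diff(x)).mean(), x.max() - x.min()]
        # Wavelet transform (WT): energy per decomposition level.
        wt = [float(np.sum(c ** 2)) for c in pywt.wavedec(x, "db4", level=4)]
        # Power spectral density (PSD): band powers from Welch's estimate.
        f, pxx = welch(x, fs=fs, nperseg=min(len(x), 256))
        psd = [pxx[(f >= lo) & (f < hi)].sum()
               for lo, hi in [(0, 5), (5, 15), (15, 40)]]
        return np.array(td + wt + psd)

    # Toy usage on a synthetic beat; a real pipeline feeds such vectors (or a
    # wrapper-selected subset) to a decision tree or neural network.
    rng = np.random.default_rng(4)
    t = np.arange(0, 2, 1 / 250)
    beat = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)
    print(ecg_features(beat).shape)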
{"title":"ECG classification using ensemble of features","authors":"S. Gunal, S. Ergin, E. S. Gunal, A. Uysal","doi":"10.1109/CISS.2013.6624256","DOIUrl":"https://doi.org/10.1109/CISS.2013.6624256","url":null,"abstract":"In the literature, countless efforts have been made to analyze and classify electrocardiogram (ECG) signals belonging to various heart problems. In all these efforts, many feature extraction strategies have been used to expose discriminative information from ECG signals. In this paper, the contributions of widely used features to the classification performance and the required processing times to extract those features are comparatively analyzed. The utilized features can be briefly listed as time domain (TD), wavelet transform (WT), and power spectral density (PSD) based features. These feature sets are employed individually and in combination within well-known pattern classifiers, namely decision tree and artificial neural network, to assess classification performance in each case. Later, a wrapper-based feature selection strategy is used to reveal the most discriminative feature subset among the entire feature set containing all the three previously mentioned feature sets. The proposed framework is assessed considering four classes of heart conditions including normal, congestive heart failure, ventricular tachyarrhythmia and atrial fibrillation. The results of the experiments conducted on a large dataset reveal that appropriate subset of TD, WT, and PSD features rather than individual features offer higher classification performance. On the other hand, if the processing time is of concern, TD features come out on top with moderate classification performance.","PeriodicalId":268095,"journal":{"name":"2013 47th Annual Conference on Information Sciences and Systems (CISS)","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127394660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information processing for image sensing under unfavorable photographic conditions
Fulu Li, J. Barabas, Ankit Mohan, R. Raskar
Pub Date: 2013-03-20 | DOI: 10.1109/CISS.2013.6624258
We investigate the problem of image sensing under unfavorable photographic conditions in a wireless image sensor network. In scenes containing deflective and/or reflective media such as fog, mirrors, or glass, the image sensors capture degraded images. Such images often lack perceptual vividness and offer poor visibility of the scene contents. Notably, computation-intensive methods that recover a better image from a single image [2] may not be applicable to wireless image sensors, given their limited computation capacity and limited power resources (batteries). In this paper, we propose a framework to recover better images under unfavorable photographic conditions in a wireless image sensor network, employing a lightweight computation method based on multiple images. Toward realizing the full system, we built image sensor prototypes from commodity cameras and validated our approach through in-depth analysis, extensive simulations, and field experiments in real-world conditions.
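The abstract does not spell out the recovery method, but one lightweight multi-image scheme in this spirit is per-pixel fusion: at each pixel, keep the value from the frame with the highest local contrast, a cheap proxy for visibility. The sketch below illustrates the idea and is not the authors' algorithm:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def fuse_by_local_contrast(frames, win=7):
        """Per-pixel fusion of a (n, H, W) grayscale stack: pick, at each
        pixel, the frame with the highest local variance in a win x win
        window. Illustrative only."""
        frames = np.asarray(frames, dtype=float)
        mean = uniform_filter(frames, size=(1, win, win))
        var = uniform_filter(frames ** 2, size=(1, win, win)) - mean ** 2
        best = np.argmax(var, axis=0)                 # (H, W) frame index
        return np.take_along_axis(frames, best[None], axis=0)[0]

    # Toy usage with random frames standing in for degraded captures.
    rng = np.random.default_rng(5)
    stack = rng.random((4, 32, 32))
    print(fuse_by_local_contrast(stack).shape)        # (32, 32)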
{"title":"Information processing for image sensing under unfavorable photographic conditions","authors":"Fulu Li, J. Barabas, Ankit Mohan, R. Raskar","doi":"10.1109/CISS.2013.6624258","DOIUrl":"https://doi.org/10.1109/CISS.2013.6624258","url":null,"abstract":"We investigate the problem of image sensing under unfavorable photographic conditions in a wireless image sensor network. In the scenes with deflective and/or reflective medium such as fogs, mirrors, glasses, degraded images are captured by those image sensors. Such degraded images often lack perceptual vividness and they offer a poor visibility of the scene contents. Notably, computation-intensive method to recover a better image based on single image [2] may not be applicable for wireless image sensors due to the limited computation capacities and the limited power resources (batteries) typically equipped at those wireless image sensors. In this paper, we propose a framework to recover better images under unfavorable photographic conditions in a wireless image sensor network, where a light-weighted computation method based on multiple images is employed to recover better images. Toward the realization of the whole system, we have built image sensor prototypes with commodity cameras and we validated our approach by indepth analysis, extensive simulations and field experiments in real-world situations.","PeriodicalId":268095,"journal":{"name":"2013 47th Annual Conference on Information Sciences and Systems (CISS)","volume":"388 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133836685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}