Screen-Space Ambient Occlusion through Summed-Area Tables (doi:10.1109/IC-NC.2010.18)
M. Slomp, Toru Tamaki, K. Kaneda
There is an increasing demand for high-quality real-time graphics. Shadows play an important role in the realism of computer-generated images, enhancing the perception of depth, curvature, and spatial location. Due to their global nature, however, shadows introduce considerable complexity into rendering algorithms. Recently, screen-space ambient occlusion techniques have flourished, and they are now the de facto standard for real-time dynamic shadow synthesis. A few issues remain, though, such as sampling quality and noise artifacts. The contributions of this work are twofold: a variation of screen-space ambient occlusion that uses Summed-Area Tables, producing satisfactory results while performing better than previous attempts, and a new application to add to the arsenal of Summed-Area Tables.
{"title":"Screen-Space Ambient Occlusion through Summed-Area Tables","authors":"M. Slomp, Toru Tamaki, K. Kaneda","doi":"10.1109/IC-NC.2010.18","DOIUrl":"https://doi.org/10.1109/IC-NC.2010.18","url":null,"abstract":"There is an increasing demand for high quality real time graphics nowadays. Shadows play an important role to the realism of computer-generated images, enhancing depth, curvature and localization senses. Due to their global nature, shadows introduce overwhelming complexity to rendering algorithms. Recently, screen-space ambient occlusion techniques started to flourish, and are now the de facto standard for real-time dynamic shadow synthesis. A few issues remain, though, such as the sampling quality and noise artifacts. The contributions of this work are two-folded: a variation of screen-space ambient occlusion that uses Summed-Area Tables, yielding to satisfactory results yet performing better than previous attempts, and serves as a new application to the arsenal of Summed-Area Tables.","PeriodicalId":375145,"journal":{"name":"2010 First International Conference on Networking and Computing","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124921439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pattern-Based Systematic Task Mapping for Many-Core Processors (doi:10.1109/IC-NC.2010.33)
Shintarou Sano, M. Sano, Shimpei Sato, T. Miyoshi, Kenji Kise
The Network-on-Chip (NoC) is a promising interconnect for many-core processors. On NoC-based many-core processors, the network performance of multi-threaded programs depends on how tasks are mapped to cores. In this paper, we propose a pattern-based task mapping method to improve the performance of many-core processors. Evaluation of the proposed method with a detailed software simulator on the NAS Parallel Benchmarks reveals an average performance improvement of at least 4.4% over standard task mapping.
{"title":"Pattern-Based Systematic Task Mapping for Many-Core Processors","authors":"Shintarou Sano, M. Sano, Shimpei Sato, T. Miyoshi, Kenji Kise","doi":"10.1109/IC-NC.2010.33","DOIUrl":"https://doi.org/10.1109/IC-NC.2010.33","url":null,"abstract":"The Network-on-Chip (NoC) is a promising interconnection for many-core processors. On the NoC-based many core processors, the network performance of multi-thread programs depends on the method of task mapping. In this paper, we propose a pattern-based task mapping method in order to improve the performance of many-core processors. Evaluation of the proposed method using a detailed software simulator reveals an average performance improvement of at least 4.4%, as compared with standard task mapping using NAS parallel benchmarks.","PeriodicalId":375145,"journal":{"name":"2010 First International Conference on Networking and Computing","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123959877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Node-to-Set Disjoint-Paths Routing in Recursive Dual-Net (doi:10.1109/IC-NC.2010.11)
Yamin Li, S. Peng, Wanming Chu
The Recursive Dual-Net (RDN) is a newly proposed interconnection network for massively parallel computers. The RDN is based on recursive dual-construction of a symmetric base network. A $k$-level dual-construction for $k > 0$ creates a network containing $(2n_0)^{2^k}/2$ nodes with node-degree $d_0 + k$, where $n_0$ and $d_0$ are the number of nodes and the node-degree of the base network, respectively. The RDN is node- and edge-symmetric and can contain a huge number of nodes while keeping the node-degree small and the diameter short. Node-to-set disjoint-paths routing is fundamental and has many applications in fault-tolerant and secure communication in a network. In this paper, we propose an efficient algorithm for node-to-set disjoint-paths routing on the RDN.
{"title":"Node-to-Set Disjoint-Paths Routing in Recursive Dual-Net","authors":"Yamin Li, S. Peng, Wanming Chu","doi":"10.1109/IC-NC.2010.11","DOIUrl":"https://doi.org/10.1109/IC-NC.2010.11","url":null,"abstract":"Recursive dual-net (RDN) is a newly proposed interconnection network for massive parallel computers. The RDN is based on recursive dual-construction of a symmetric base-network. A {bm{${k}$}}-level dual-construction for {bm{${k>0}$}} creates a network containing {bm{${(2n_0)^{2^k}/2}$}} nodes with node-degree {bm{${d_0+k}$}}, where {bm{${n_0}$}} and {bm{${d_0}$}} are the number of nodes and the node-degree of the base network, respectively. The RDN is node and edge symmetric and can contain huge number of nodes with small node-degree and short diameter. Node-to-set disjoint-paths routing is fundamental and has many applications for fault-tolerant and secure communication in a network. In this paper, we propose an efficient algorithm for node-to-set disjoint-paths routing on RDN.","PeriodicalId":375145,"journal":{"name":"2010 First International Conference on Networking and Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129285530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parallel Matrix-Matrix Multiplication Based on HPL with a GPU-Accelerated PC Cluster (doi:10.1109/IC-NC.2010.39)
Qin Wang, Junichi Ohmura, Shan Axida, T. Miyoshi, H. Irie, T. Yoshinaga
In this paper, we propose an approach for significantly improving the performance of parallel matrix-matrix multiplication on a GPU-accelerated cluster. For a single node, we implement a CPUs-GPU parallel double-precision general matrix-matrix multiplication (dgemm) operation and achieve performance improvements of 32% over the GPU-only case and 56% over the CPUs-only case. For the entire cluster, we apply an overlapped GPU acceleration scheme to High-Performance Linpack (HPL), which removes the tight dependency between the LU decomposition and the dgemm operation, and achieve a performance improvement of 5.72% over the flat GPU acceleration case.
{"title":"Parallel Matrix-Matrix Multiplication Based on HPL with a GPU-Accelerated PC Cluster","authors":"Qin Wang, Junichi Ohmura, Shan Axida, T. Miyoshi, H. Irie, T. Yoshinaga","doi":"10.1109/IC-NC.2010.39","DOIUrl":"https://doi.org/10.1109/IC-NC.2010.39","url":null,"abstract":"In this paper, we propose an approach for significantly improving the performance of parallel matrix-matrix multiplication using a GPU-accelerated cluster. For one node, we implement a CPUs-GPU parallel double-precision general matrix-matrix multiplication (dgemm) operation and achieve a performance improvement of 32% as compared to the GPU-only case and 56% as compared to the CPUs-only case. For the entire cluster, we use the overlap GPU acceleration solution to high-performance Linpack (HPL), which eliminates the close dependency between the LU decomposition and the dgemm operation, and achieve a performance improvement of 5.72% as compared to the flat GPU acceleration case.","PeriodicalId":375145,"journal":{"name":"2010 First International Conference on Networking and Computing","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128604795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Power Saving in Mobile Devices Using Context-Aware Resource Control (doi:10.1109/IC-NC.2010.50)
Kosuke Nishihara, K. Ishizaka, J. Sakai
We present an effective power reduction scheme for recent mobile devices, e.g., Android devices, which tend to have battery-life problems because some of their applications run continuous sensor operations. We propose a context-aware method to determine the minimum set of resources (processor cores and peripherals) that meets a given level of performance. With it, unnecessary processor cores and peripherals can be switched off without degrading overall performance. Our experimental results indicate that its use can reduce total power consumption by 45%. Since our method does not require applications to be modified, it can easily be used even with downloaded applications.
{"title":"Power Saving in Mobile Devices Using Context-Aware Resource Control","authors":"Kosuke Nishihara, K. Ishizaka, J. Sakai","doi":"10.1109/IC-NC.2010.50","DOIUrl":"https://doi.org/10.1109/IC-NC.2010.50","url":null,"abstract":"We present an effective power reduction scheme for recent mobile devices, e.g., Android devices, which tend to have problems with battery life because some of their applications may be running continuous sensor operations. We propose a context-aware method to determine the minimum set of resources (processor cores and peripherals) that results in meeting a given level of performance. With it, unnecessary processor cores and peripherals can be switched-off without degrading overall performance. Our experimental results indicate that its use can result in a 45% reduction in total power consumption. Since our method does not require applications to be modified, it can even be used easily with downloaded applications.","PeriodicalId":375145,"journal":{"name":"2010 First International Conference on Networking and Computing","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114883095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compressing Floating-Point Number Stream for Numerical Applications (doi:10.1109/IC-NC.2010.24)
Hisanobu Tomari, M. Inaba, K. Hiraki
Clusters of commodity computers and general-purpose computers with accelerators such as GPGPUs are now common platforms for computationally intensive tasks like scientific simulations. Both technologies provide high performance at relatively low cost. However, the low bandwidth of the interconnect relative to the computing performance hinders the efficient operation of both clusters and accelerators for the many algorithms that require heavy data transfer: for clusters, the network is one of the major performance bottlenecks, and for accelerators, it is the peripheral bus that transfers data from the host to the memory on the accelerator card. In this paper, we propose a method of accelerating floating-point-intensive algorithms by compressing the floating-point number stream. Using an efficient software encoder and a hardware decoder, the method eliminates redundancy in the exponent fields of the numbers in the stream and compacts the entire array to 82.8% of its original size at the theoretical limit. The compression ratio is better than that of Gzip or Bzip2 on floating-point data. The reduction in communication time translates directly into a reduction in total running time for programs whose processing time is dominated by communication. We implemented a high-speed FPGA decoder that operates at over 6 GB/s. We estimated application performance using FFT on a cluster and matrix multiplication on the GRAPE-DR accelerator, and our approach proved useful in both configurations.
{"title":"Compressing Floating-Point Number Stream for Numerical Applications","authors":"Hisanobu Tomari, M. Inaba, K. Hiraki","doi":"10.1109/IC-NC.2010.24","DOIUrl":"https://doi.org/10.1109/IC-NC.2010.24","url":null,"abstract":"A cluster of commodity computers and general-purpose computers with accelerators such as GPGPUs are now common platforms to solve computationally intensive tasks like scientific simulations. Both technologies provide users with high performance at relatively low cost. However, the low bandwidth of interconnect compared to the computing performance hinders efficient operation of both cluster and accelerator in the case of many algorithms that require heavy data transmission. For clusters the network is one of the major performance bottlenecks, and for accelerators the peripheral bus to transfer data from host to the memory on the accelerator card is. In this paper, we propose a method of accelerating the performance of floating-point intensive algorithms by compressing the floating point number stream. With the efficient software encoder and hardware decoder, the method eliminates redundancy in the exponential part in the array of numbers on the stream and compacts the entire array to 82.8% of its original size at theoretical limit. The compression ratio is better than Gzip or Bzip2 for floating point numbers. The reduction in communication time directly leads to the reduction in total application running time for programs whose processing time is largely dominated by communication performance. We implemented a high-speed decoder using FPGA that operates at over 6 GB/s. We estimated the application performance using FFT and matrix multiplication on a cluster and the GRAPE-DR accelerator respectively, and our approach is useful in both configurations.","PeriodicalId":375145,"journal":{"name":"2010 First International Conference on Networking and Computing","volume":"201 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125729705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Open Web: Seamless Proxy Interconnection at the Switching Layer (doi:10.1109/IC-NC.2010.19)
Yoshio Sakurauchi, R. McGeer, H. Takada
The Internet was designed around the end-to-end principle, mimicking in many ways the architecture of the old telephone network: services were accessed by naming the specific end-host offering the service. The demands of robustness, performance, and ubiquitous low latency for a worldwide population have led to an architecture where the names of services are largely symbolic and do not name specific hosts or locations. Traffic is redirected onto a service network through the use of proxies; a typical example is a web proxy. Currently, proxies are generally accessed through layer 4-7 scripts and commands, such as the route command on POSIX systems and, usually, manual configuration or JavaScript code for a web proxy. This process is tedious, error-prone, and far from robust. New open protocols at the switching layer (layer 2) now enable far more robust and seamless packet redirection, with no need for user configuration or unreliable scripts. In this paper, we describe Open Web, a layer-2 redirection engine implemented as an application of the OpenFlow switch architecture.
{"title":"Open Web: Seamless Proxy Interconnection at the Switching Layer","authors":"Yoshio Sakurauchi, R. McGeer, H. Takada","doi":"10.1109/IC-NC.2010.19","DOIUrl":"https://doi.org/10.1109/IC-NC.2010.19","url":null,"abstract":"The Internet was designed around the end-to-end principle, mimicking in many ways the architecture of the old telephone network: services were accessed by naming the specific end-host offering the service. The demands of robustness, performance, and ubiquitous low latency for a worldwide population have led to an architecture where the names of services are largely symbolic, and do not name specific hosts or locations. Traffic is redirected onto a service network through the use of proxies. A typical example is a web proxy. Currently, proxies are generally accessed through layer 4-7 scripts and commands, such as the route command on Posix systems and, usually, manual configuration or Javascript code for a web proxy. This process is tedious and error-prone, and far from robust. New open protocols at the switching layer (layer 2) now enable far more robust and seamless packet redirection, without need for user configuration or unreliable scripts. In this paper, we describe Open web, a layer-2 redirection engine implemented as an application of the Open flow switch architecture.","PeriodicalId":375145,"journal":{"name":"2010 First International Conference on Networking and Computing","volume":"15 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128866854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving Hybrid OpenCL Performance by High Speed Networks (doi:10.1109/IC-NC.2010.42)
Ryo Aoki, S. Oikawa, Ryoji Tsuchiyama, Takashi Nakamura
We developed Hybrid OpenCL, which enables connections between different OpenCL implementations over the network. Hybrid OpenCL consists of two elements: a runtime system that provides an abstraction over different OpenCL implementations, and a bridge program that connects multiple OpenCL runtime systems over the network. OpenCL alone cannot use devices from different OpenCL implementations through a single runtime, and the number of usable OpenCL devices is limited by the number of internal bus slots. Hybrid OpenCL removes these limitations and enables the construction of scalable OpenCL environments: applications written in OpenCL can be easily ported to high-performance cluster computers, so Hybrid OpenCL provides a wider variety of parallel computing platforms and increases the utility of OpenCL applications. This paper describes the improvement of Hybrid OpenCL performance through high-speed networks and presents experimental results. The results show that high-speed networks reduce the overhead introduced by Hybrid OpenCL, with InfiniBand SDP delivering the best performance.
{"title":"Improving Hybrid OpenCL Performance by High Speed Networks","authors":"Ryo Aoki, S. Oikawa, Ryoji Tsuchiyama, Takashi Nakamura","doi":"10.1109/IC-NC.2010.42","DOIUrl":"https://doi.org/10.1109/IC-NC.2010.42","url":null,"abstract":"We developed Hybrid OpenCL, which enables the connection between different OpenCL implementations over the network. Hybrid OpenCL consists of two elements, a runtime system that provides the abstraction of different OpenCL implementations and a bridge program that connects multiple OpenCL runtime systems over the network. Problems in OpenCL are not being able to use different OpenCL devices from a single OpenCL runtime and being limited the number of OpenCL devices that we can use to the number of internal bus slots. Hybrid OpenCL enables the construction of the scalable OpenCL environments. It enables applications written in OpenCL to be easily ported to high performance cluster computers, thus, Hybrid OpenCL can provide more various parallel computing platforms and the progress of utility value of OpenCL applications. This paper describes the improvement of Hybrid OpenCL by using high speed networks and its results from experimentation. The experimental results show that high speed networks reduce the overhead introduced by Hybrid OpenCL, and InfiniBand SDP shows the best performance among the results.","PeriodicalId":375145,"journal":{"name":"2010 First International Conference on Networking and Computing","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133844517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Maximizing Image Utilization in Photomosaics (doi:10.1109/IC-NC.2010.17)
M. Mikamo, M. Slomp, Shun Yanase, B. Raytchev, Toru Tamaki, K. Kaneda
Non-photorealistic rendering (NPR) is an appealing subject in computer graphics with a wide array of applications. As opposed to photorealistic rendering, NPR focuses on highlighting features and artistic traits rather than physical accuracy. Photomosaic generation is one of the most popular NPR techniques: a single image is assembled from many smaller ones. The visual response changes with the viewer's proximity to the photomosaic, leading to many creative prospects for publicity. Synthesizing photomosaics typically requires very large image databases to produce pleasing results. Moreover, repetitions are allowed to occur, which may locally bias the mosaic. This paper provides alternatives that prevent repetitions while remaining robust enough to work with coarse image subsets. Three approaches were devised for the matching stage of photomosaics: a greedy procedural algorithm, simulated annealing, and Soft Assign. We found that the latter two approaches deliver adequate arrangements in cases where only a restricted number of images is available.
{"title":"Maximizing Image Utilization in Photomosaics","authors":"M. Mikamo, M. Slomp, Shun Yanase, B. Raytchev, Toru Tamaki, K. Kaneda","doi":"10.1109/IC-NC.2010.17","DOIUrl":"https://doi.org/10.1109/IC-NC.2010.17","url":null,"abstract":"Non-photo realistic rendering (NPR) is an appealing subject in computer graphics with a wide array of applications. As opposed to photo realistic rendering, NPR focuses on highlighting features and artistic traits instead of physical accuracy. Photo mosaic generation is one of the most popular NPR techniques, where a single image is assembled from several smaller ones. Visual responses change depending on the proximity to the photo mosaic, leading to many creative prospects for publicity. Synthesizing photo mosaics typically requires very large image databases in order to produce pleasing results. Moreover, repetitions are allowed to occur which may locally bias the mosaic. This paper provides alternatives to prevent repetitions while still being robust enough to work with coarse image subsets. Three approaches were devised for the matching stage of photo mosaics: a greedy-based procedural algorithm, simulated annealing and Soft Assign. We found that the latter two approaches deliver adequate arrangements in cases where only a restricted number of images is available.","PeriodicalId":375145,"journal":{"name":"2010 First International Conference on Networking and Computing","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132254147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An RSA Encryption Hardware Algorithm Using a Single DSP Block and a Single Block RAM on the FPGA (doi:10.1109/IC-NC.2010.56)
Bo Song, K. Kawakami, K. Nakano, Yasuaki Ito
The main contribution of this paper is an efficient hardware algorithm for RSA encryption/decryption based on Montgomery multiplication. Modern FPGAs have a number of embedded DSP blocks (DSP48E1) and embedded memory blocks (BRAM). Our hardware algorithm, supporting 2048-bit RSA encryption/decryption, is designed to be implemented using one DSP48E1, one BRAM, and few logic blocks (slices) in a Xilinx Virtex-6 family FPGA. The implementation results show that our RSA module performs a 2048-bit RSA encryption/decryption in 277.26 ms. Quite surprisingly, the multiplier in the DSP48E1 used to compute the Montgomery multiplication is active in more than 97% of all clock cycles. Hence, our implementation is close to optimal in the sense that it incurs less than 3% multiplication overhead, and no further improvement is possible as long as a Montgomery-multiplication-based algorithm is used. Also, since our circuit uses only one DSP48E1 block and one Block RAM, we can implement a number of RSA modules in a single FPGA and run them in parallel to attain high-throughput RSA encryption/decryption.
{"title":"An RSA Encryption Hardware Algorithm Using a Single DSP Block and a Single Block RAM on the FPGA","authors":"Bo Song, K. Kawakami, K. Nakano, Yasuaki Ito","doi":"10.1109/IC-NC.2010.56","DOIUrl":"https://doi.org/10.1109/IC-NC.2010.56","url":null,"abstract":"The main contribution of this paper is to present an efficient hardware algorithm for RSA encryption/decryption based on Montgomery multiplication. Modern FPGAs have a number of embedded DSP blocks (DSP48E1) and embedded memory blocks (BRAM). Our hardware algorithm supporting 2048-bit RSA encryption/decryption is designed to be implemented using one DSP48E1, one BRAM and few logic blocks (slices) in the Xilinx Virtex-6 family FPGA. The implementation results showed that our RSA module for 2048-bit RSA encryption/decryption runs in 277.26ms. Quite surprisingly, the multiplier in DSP48E1 used to compute Montgomery multiplication works in more than 97% clock cycles over all clock cycles. Hence, our implementation is close to optimal in the sense that it has only less than 3% overhead in multiplication and no further improvement is possible as long as Montgomery multiplication based algorithm is used. Also, since our circuit uses only one DSP48E1 block and one Block RAM, we can implement a number of RSA modules in an FPGA that can work in parallel to attain high throughput RSA encryption/decryption.","PeriodicalId":375145,"journal":{"name":"2010 First International Conference on Networking and Computing","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130139740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}