Pub Date: 2014-09-01 | DOI: 10.1109/SNPD.2014.6888733
Hiroshi Kikuchi, T. Goto, Mitsuo Wakatsuki, T. Nishino
Learning to program is an important subject in computer science courses. During programming exercises, plagiarism by copying and pasting can undermine fair evaluation. Some plagiarism detection methods are currently available, such as sim. However, because sim is easily influenced by changes to identifiers or to the order of program statements, it does not provide sufficient support for plagiarism detection. In this paper, we propose a plagiarism detection method that is not influenced by changes to identifiers or program statement order. We also explain our method's capabilities by comparing it to the sim plagiarism detector. Furthermore, we show how our method successfully detects the presence of plagiarism.
{"title":"A source code plagiarism detecting method using alignment with abstract syntax tree elements","authors":"Hiroshi Kikuchi, T. Goto, Mitsuo Wakatsuki, T. Nishino","doi":"10.1109/SNPD.2014.6888733","DOIUrl":"https://doi.org/10.1109/SNPD.2014.6888733","url":null,"abstract":"Learning to program is an important subject in computer science courses. During programming exercises, plagiarism by copying and pasting can lead to problems for fair evaluation. Some methods of plagiarism detection are currently available, such as sim. However, because sim is easily influenced by changing the identifier or program statement order, it fails to do enough to support plagiarism detection. In this paper, we propose a plagiarism detection method which is not influenced by changing the identifier or program statement order. We also explain our method's capabilities by comparing it to the sim plagiarism detector. Furthermore, we reveal how our method successfully detects the presence of plagiarism.","PeriodicalId":272932,"journal":{"name":"15th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114406691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-09-01 | DOI: 10.1109/SNPD.2014.6888740
Y. Takahashi, Yuhki Kitazono, Shota Nakashima
In this study, we developed a leaving-bed detection system to prevent midnight prowl and verified the operation of the constructed system. First, a camera is installed on the ceiling to obtain depth information over its field of view, including the depth of the background. Getting up is detected by taking the difference between the background depth and the current depth. The depth information retrieved from the camera is converted into numeric distance values, and the height of an object is obtained from the difference between them. A threshold is applied to the acquired height to detect anything above a certain height; after removing regions whose area is too small, tracking is performed to obtain the center coordinates of the target. To handle the errors that occur when multiple persons are detected, human tracking is performed after the target person is extracted by a labeling process. Entry and exit are detected from the center point at which the track is interrupted.
{"title":"Development of Leaving-bed Detection System to Prevent Midnight Prowl","authors":"Y. Takahashi, Yuhki Kitazono, Shota Nakashima","doi":"10.1109/SNPD.2014.6888740","DOIUrl":"https://doi.org/10.1109/SNPD.2014.6888740","url":null,"abstract":"In this study, we developed Leaving-bed Detection System to Prevent Midnight Prowl and checked the operation of the system constructed. First, installed the camera on the ceiling, get depth information in the field of view, and also get background's depth information. The detection of human get up was performed by taking the difference between the depth and the current depth of the background. Converted into numeric depth information a distance from the camera to be retrieved as a string, and obtains the height of the object from the difference between them. Take a threshold value than the height that was acquired to detect anything more than a certain height, and also in after removing those narrow areas coordinates from the coordinates, and performs the tracking to obtain the center coordinates of the target. However, in view of the error occurring in the case of detecting a plurality of persons, human tracking is performed after the extraction of the target person by labeling process. We went from the center point when the track is interrupted the entry and exit detection.","PeriodicalId":272932,"journal":{"name":"15th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","volume":"53 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113978581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Universal Communication Research Institute (UCRI), NICT conducts research and development on universal communication technologies: multi-lingual machine translation, spoken dialogue, information analysis and ultra-realistic interaction technologies, through which people can truly interconnect, anytime, anywhere, about any topic, and by any method, transcending the boundaries of language, culture, ability and distance. To enhance these universal communication technologies, we are developing a large-scale information infrastructure that collects and stores diverse information, including huge volumes of web pages, from the network. One of the most important key technologies for realizing such a large-scale information infrastructure is a distributed in-memory database system. In this paper, we introduce this large-scale information infrastructure, mainly explaining the distributed in-memory database system "okuyama", which is a key technology of our project. We examine the I/O performance of the in-memory storage to verify whether "okuyama" meets the requirements of the infrastructure. Furthermore, we give a blueprint of the cluster systems on which the infrastructure will be constructed.
{"title":"Big data in memory: Benchimarking in memory database using the distributed key-value store for machine to machine communication","authors":"M. Iwazume, Takahiro Iwase, Kouji Tanaka, Hideaki Fujii, Makoto Hijiya, Hiroshi Haraguchi","doi":"10.1109/SNPD.2014.6888748","DOIUrl":"https://doi.org/10.1109/SNPD.2014.6888748","url":null,"abstract":"The Universal Communication Research Institute (UCRI), NICT conducts research and development on universal communication technologies: multi-lingual machine translation, spoken dialogue, information analysis and ultra-realistic interaction technologies, through which people can truly interconnect, anytime, anywhere, about any topic, and by any method, transcending the boundaries of language, culture, ability and distance. To enhance the universal communication technology, we are trying to develop a large-scale information infrastructure which collects and stores diverse information including huge volumes of web pages from the networks. The one of most important key technologies to realize a large-scaled information infrastructure is a distributed in memory database system. In this paper, we introduced a large-scale information infrastructure, mainly explaining a distributed in-memory database system “okuyama” which is a key technology on our project. We examined the I/O performance of the in memory storage, which verified whether if “okuyama” meet requirements for the infrastructure. Furthermore we give a blueprint of cluster systems on which the infrastructure will be constructed.","PeriodicalId":272932,"journal":{"name":"15th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114759906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-09-01 | DOI: 10.1109/SNPD.2014.6888736
Md. Mahfuzus Salam Khan, Md. Anwarus Salam Khan, T. Goto, T. Nishino, N. Debnath
In the field of software engineering, a long-standing and important issue is how to understand software. Understanding software means more than understanding the source code; it also covers the other facts related to that particular software. Sometimes even experienced developers can be overwhelmed by a project's extensive development activities. In the development process, project leaders (PLs) have overall knowledge about the project and are keenly aware of its vision, while other members have only partial knowledge of the functions assigned to them. In this research, we propose a model for designing an ontology to support software comprehension and to handle knowledge management throughout the development process. By applying our methodology, understanding software and managing knowledge become possible in a systematic way for both open source and commercial projects. Furthermore, it helps beginners become more involved in a project and contribute to it productively.
{"title":"Software ontology design to support organized open source software development","authors":"Md. Mahfuzus Salam Khan, Md. Anwarus Salam Khan, T. Goto, T. Nishino, N. Debnath","doi":"10.1109/SNPD.2014.6888736","DOIUrl":"https://doi.org/10.1109/SNPD.2014.6888736","url":null,"abstract":"In the field of software engineering, a very old and important issue is how to understand the software. Understanding software means more than understanding the source code; it also refers to the other facts related to that particular software. Sometimes even experienced developers can be overwhelmed by a project's extensive development capabilities. In the development process, project leaders (PLs) have overall knowledge about the project and are keenly aware of its vision. Other members have only partial knowledge of the functions assigned to them. In this research, we propose a model to design ontology to support software comprehension and handle issues of knowledge management throughout the development process. By applying our methodology, understanding software and managing knowledge can become possible in a systematic way for open source and commercial projects. Furthermore, it will help beginners become more involved in a project and contribute to it in a productive way.","PeriodicalId":272932,"journal":{"name":"15th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126363053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-09-01 | DOI: 10.1109/SNPD.2014.6888700
Limin Sha
This paper analyzes the security of two proxy signature schemes and shows that both contain loopholes. Xu's ID-based proxy signature scheme without a trusted PKG is not safe, because there exists an attack in which the original signer, colluding with the PKG, can tamper with the proxy warrant. Hu's proxy blind multi-signature scheme is also subject to an attack in which the signature requester can tamper with the proxy warrant.
{"title":"Analysis of an ID-based proxy signature scheme without trusted PKG and a proxy blind multi-signature scheme","authors":"Limin Sha","doi":"10.1109/SNPD.2014.6888700","DOIUrl":"https://doi.org/10.1109/SNPD.2014.6888700","url":null,"abstract":"This paper analyzes two proxy signature schemes' security, and proves that they exist loopholes. Xu's ID-based proxy signature scheme without trusted PKG is not safe, because there exists a security attack which original signer can tamper with the proxy warrant allayed with PKG.Hu's proxy blind multi-signature scheme also exists a security attack that the signature requester can tamper with the proxy warrant.","PeriodicalId":272932,"journal":{"name":"15th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133795782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-09-01 | DOI: 10.1109/SNPD.2014.6888702
Jahangir Dewan, M. Chowdhury
Mobile eLearning (mLearning) can create a revolution in eLearning, given the popularity of smart mobile devices and applications. However, content is king in making this revolution happen. Moreover, for an effective mLearning system, analytical aspects such as the quality of content, the quality of results, and the performance of learners need to be addressed. This paper presents a framework for personal mLearning. In this paper, we use a graph-based model, the bipartite graph, for content authentication and for identifying the quality of results. Furthermore, we use a statistical estimation process for the trustworthiness of weights in the bipartite graph, with confidence intervals and hypothesis tests as the analytical decision model tools.
{"title":"A framework for mobile elearning (mLearning) with analytical decision model","authors":"Jahangir Dewan, M. Chowdhury","doi":"10.1109/SNPD.2014.6888702","DOIUrl":"https://doi.org/10.1109/SNPD.2014.6888702","url":null,"abstract":"Mobile eLearning (mLearning) can create a revolution in eLearning with the popularity of smart mobile devices and Application. However, contents are the king to make this revolution happen. Moreover, for an effective mLearning system, analytical aspects such as, quality of contents, quality of results, performance of learners, needs to be addressed. This paper presents a framework for personal mLearning. In this paper, we have used graph-based model called bipartite graph for content authentication and identification of the quality of results. Furthermore, we have used statistical estimation process for trustworthiness of weights in the bipartite graph using confidence interval and hypothesis test as analytical decision model tool.","PeriodicalId":272932,"journal":{"name":"15th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122057850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-09-01 | DOI: 10.1109/SNPD.2014.6888721
Shunsuke Akai, T. Hochin, Hiroki Nomiya
This paper proposes an analysis method for the evaluation results obtained through the Impression Evaluation Method by Space (IEMS). The IEMS uses a plane containing impression words as the Kansei space. The impression of an object is specified by circling the areas matching the impression, and the degree of match is expressed by the darkness of the painted color. Because impression words can be moved and/or added in the IEMS, it is difficult to analyze evaluation results obtained from many subjects. The proposed method focuses on the peaks of darkness and is called the analysis method focusing on the peaks of darkness (abbr. AM_PD). By mapping the peaks of darkness in each evaluation result onto the same Kansei space, the method can analyze characteristic impressions. In this paper, an algorithm for automatically extracting obvious peaks is proposed toward the realization of AM_PD, and the parameters of AM_PD required to extract the obvious peaks are determined experimentally. We show that the obvious peaks in the evaluation results can be extracted by using this algorithm.
{"title":"The analysis method focusing on peaks of darkness for the impression evaluation method by space","authors":"Shunsuke Akai, T. Hochin, Hiroki Nomiya","doi":"10.1109/SNPD.2014.6888721","DOIUrl":"https://doi.org/10.1109/SNPD.2014.6888721","url":null,"abstract":"This paper proposes an analysis method of the evaluation results obtained through the Impression Evaluation Method by Space (IEMS). The IEMS uses a plane containing impression words as the Kansei space. The impression of an object is specified by circling the areas matching the impression. The degree of matching the impression is expressed by painting color. As the impression words can be moved and/or added in the IEMS, it is difficult to analyze the evaluation results obtained from many subjects. The proposed analysis method focuses on the peaks of darkness. It is called the analysis method focusing on the peaks of darkness (abbr. AM_PD). By mapping the peaks of the darkness in each evaluation result to the same Kansei space, this method can analyze characteristic impressions. In this paper, the algorithm of extracting obvious peaks automatically is proposed toward realization of the AM_PD. The parameters of the AM_PD required to extract the obvious peaks are experimentally determined. This paper shows that the obvious peaks in the evaluation results can be extracted by using this algorithm.","PeriodicalId":272932,"journal":{"name":"15th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","volume":"174 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125797404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-09-01 | DOI: 10.1109/SNPD.2014.6888728
Hiroyasu Horiuchi, S. Saiki, S. Matsumoto, Masahide Nakamura
In order to achieve intuitive and easy operation of a home network system (HNS), we have previously proposed a user interface with a virtual agent (called the HNS virtual agent user interface, HNS-VAUI). The HNS-VAUI was implemented with the MMDAgent toolkit. A user can operate appliances and services interactively through dialog with a virtual agent on a screen. However, the previous prototype depends heavily on MMDAgent, which causes tight coupling between HNS operations and agent behaviors, and limits the ability to use external information. To cope with this problem, this paper proposes a service-oriented framework that allows the HNS-VAUI to provide richer interaction. Specifically, we decompose the tightly coupled system into two separate services: the MMC service and the MSM service. The MMC service concentrates on controlling the detailed behaviors of the virtual agent, whereas the MSM service defines the logic of HNS operations and of dialog with the agent using richer state machines. The two services are loosely coupled to enable more flexible and sophisticated dialog in the HNS-VAUI. The proposed framework is implemented in a real HNS environment. We also conduct a case study with practical service scenarios to demonstrate the effectiveness of the proposed framework.
{"title":"Designing and implementing service framework for virtual agents in home network system","authors":"Hiroyasu Horiuchi, S. Saiki, S. Matsumoto, Masahide Nakamura","doi":"10.1109/SNPD.2014.6888728","DOIUrl":"https://doi.org/10.1109/SNPD.2014.6888728","url":null,"abstract":"In order to achieve intuitive and easy operations for home network system (HNS), we have previously proposed user interface with virtual agent (called HNS virtual agent user interface, HNS-VAUI). The HNS-VAUI was implemented with MMDAgent toolkit. A user can operate appliances and services interactively through dialog with a virtual agent in a screen. However, the previous prototype heavily depends on MMDAgent, which causes a tight coupling between HNS operations and agent behaviors, and poor capability of using external information. To cope with the problem, this paper proposes a service-oriented framework that allows the HNS-VAUI to provide richer interaction. Specifically, we decompose the tightly-coupled system into two separate services: MMC Service and MSM service. The MMC service concentrates on controlling detailed behaviors of a virtual agent, whereas the MSM service defines logic of HNS operations and dialog with the agent with richer state machines. The two services are loosely coupled to enable more flexible and sophisticated dialog in the HNS-VAUI. The proposed framework is implemented in a real HNS environment. We also conduct a case study with practical service scenarios, to demonstrate effectiveness of the proposed framework.","PeriodicalId":272932,"journal":{"name":"15th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127228132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-09-01 | DOI: 10.1109/SNPD.2014.6888710
Kun Lu, Dong Dai, Xuehai Zhou, Mingming Sun, Changlong Li, Hang Zhuang
Hadoop is a popular framework that provides an easy programming interface for parallel programs that process large-scale data on clusters of commodity machines. Data-intensive programs are an important part of the workload running on such clusters, especially large-scale machine learning algorithms that execute the same program iteratively. In-memory caching of input data is an efficient way to speed up these data-intensive programs. However, we cannot load all the data into memory because of the limited memory capacity. The key challenge is therefore knowing accurately when data should be cached in memory and when it ought to be released. Another problem is that the memory capacity may not even be enough to hold the input data of the running program, so some data cannot be cached in memory; prefetching is an effective method for such situations. We provide an unbinding technique that does not bind programs and data together before the real computation starts. With this unbinding technique, Hadoop achieves better performance when using caching and prefetching. We provide a Hadoop framework with unbinding, named unbinding-Hadoop, which decides a map task's input data in the map start-up phase rather than at the job submission phase. Prefetching can also be used in unbinding-Hadoop and yields better performance compared with programs without unbinding. Evaluations of this system show that unbinding-Hadoop reduces job execution time by 40.2% for the WordCount program and 29.2% for the K-means algorithm.
{"title":"Unbinds data and tasks to improving the Hadoop performance","authors":"Kun Lu, Dong Dai, Xuehai Zhou, Mingming Sun, Changlong Li, Hang Zhuang","doi":"10.1109/SNPD.2014.6888710","DOIUrl":"https://doi.org/10.1109/SNPD.2014.6888710","url":null,"abstract":"Hadoop is a popular framework that provides easy programming interface of parallel programs to process large scale of data on clusters of commodity machines. Data intensive programs are the important part running on the cluster especially in large scale machine learning algorithm which executes of the same program iteratively. In-memory cache of input data is an efficient way to speed up these data intensive programs. However, we cannot be able to load all the data in memory because of the limitation of memory capacity. So, the key challenge is how we can accurately know when data should be cached in memory and when it ought to be released. The other problem is that memory capacity may even not enough to hold the input data of the running program. This leads to there is some data cannot be cached in memory. Prefetching is an effective method for such situation. We provide a unbinding technology which do not put the programs and data binded together before the real computation start. With unbinding technology, Hadoop can get a better performance when using caching and prefetching technology. We provide a Hadoop framework with unbinding technology named unbinding-Hadoop which decide the map tasks' input data in the map starting up phase, not at the job submission phase. Prefetching as well can be used in unbinding-Hadoop and can get better performance compared with the programs without unbinding. Evaluations on this system show that unbinding-Hadoop reduces the execution time of jobs by 40.2% and 29.2% with WordCount programs and K-means algorithm.","PeriodicalId":272932,"journal":{"name":"15th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130618985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-09-01 | DOI: 10.1109/SNPD.2014.6888686
Tianyu Bai, Spencer Davis, Juanjuan Li, Hai Jiang
Lattice-based cryptography is attractive for its resistance to quantum computing and its efficient encryption/decryption process. However, the big data problem burdens lattice-based cryptographic systems with slow processing speeds. This paper analyzes one of the major lattice-based cryptographic systems, the Nth-degree truncated polynomial ring (NTRU), and accelerates its execution with a Graphics Processing Unit (GPU) to obtain acceptable processing performance. Three strategies are proposed: a single GPU with zero copy, a single GPU with data transfer, and a multi-GPU version. GPU computing techniques such as streams and zero copy are applied to overlap computation and communication for possible speedup. Experimental results demonstrate the effectiveness of GPU acceleration of NTRU. As the number of involved devices increases, better NTRU performance is achieved.
{"title":"Analysis and acceleration of NTRU lattice-based cryptographic system","authors":"Tianyu Bai, Spencer Davis, Juanjuan Li, Hai Jiang","doi":"10.1109/SNPD.2014.6888686","DOIUrl":"https://doi.org/10.1109/SNPD.2014.6888686","url":null,"abstract":"Lattice based cryptography is attractive for its quantum computing resistance and efficient encryption/decryption process. However, the big data problem has perplexed lattice based cryptographic systems with the slow processing speed. This paper intends to analyze one of the major lattice-based cryptographic systems, Nth-degree truncated polynomial ring (NTRU), and accelerate its execution with Graphic Processing Unit (GPU) for acceptable processing performance. Three strategies, including single GPU with zero copy, single GPU with data transfer, and multi-GPU versions are proposed. GPU computing techniques such as stream and zero copy are applied to overlap the computation and communication for possible speedup. Experimental results have demonstrated the effectiveness of GPU acceleration of NTRU. As the number of involved devices increases, better NTRU performance will be achieved.","PeriodicalId":272932,"journal":{"name":"15th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132557417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}