
2017 International Conference on Progress in Informatics and Computing (PIC): Latest Publications

Image enhancement based on adaptive demarcation between underexposure and overexposure
Pub Date : 2017-12-01 DOI: 10.1109/PIC.2017.8359541
Yanfang Wang, Qian Huang, Jing Hu
Images taken under non-uniform illumination usually suffer from degraded details because of underexposure and overexposure. In order to improve the visual quality of color images, underexposure needs to be brightened and overexposure should be dimmed accordingly. Hence, an important procedure is discriminating between underexposure and overexposure in color images. Traditional methods use a single discriminating threshold throughout an image. However, illumination variation occurs easily in real life. To cope with this, we propose an adaptive discriminating principle based on local and global luminance. Then, a nonlinear modification is applied to image luminance to brighten underexposed regions and dim overexposed ones. Further, based on the modified luminance and the original chromatic information, a natural color image is constructed via an exponential technique. Finally, a local, image-dependent exponential technique is applied to the RGB channels to improve image contrast. Experimental results show that the proposed method produces clear and vivid details for both non-uniform illumination images and images with normal illumination.
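As a rough illustration of the adaptive demarcation idea, the sketch below blends the global mean luminance with a local (box-filtered) mean to form a per-pixel threshold, then brightens or dims with gamma curves. The window size, blend weight, and gamma exponents are illustrative assumptions, not the paper's exact mapping.

```python
import numpy as np

def enhance_luminance(lum, window=15, alpha=0.5, g_lo=0.6, g_hi=1.4):
    """Adaptive demarcation between under- and overexposure (sketch).

    `lum` is a 2-D luminance array in [0, 1]. The per-pixel threshold
    blends the global mean with a local box-filtered mean, so the
    demarcation adapts to illumination variation. Pixels below the
    threshold are brightened (gamma < 1), pixels above are dimmed
    (gamma > 1).
    """
    pad = window // 2
    padded = np.pad(lum, pad, mode="edge")
    local = np.zeros_like(lum)
    for dy in range(window):                 # box filter for the local mean
        for dx in range(window):
            local += padded[dy:dy + lum.shape[0], dx:dx + lum.shape[1]]
    local /= window * window

    threshold = alpha * lum.mean() + (1 - alpha) * local
    out = np.where(lum < threshold, lum ** g_lo, lum ** g_hi)
    return np.clip(out, 0.0, 1.0)
```

Dark regions move up toward the threshold and bright regions move down, which is the qualitative behavior the abstract describes.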
Citations: 1
The application of natural language processing in compiler principle system
Pub Date : 2017-12-01 DOI: 10.1109/PIC.2017.8359551
Yujia Zhai, Lizhen Liu, Wei Song, Chao Du, Xinlei Zhao
Compiler principles is an important course for computer science majors, which mainly introduces the general principles and basic methods of constructing compilers. Because of its high demands on logical analysis ability, the course feels abstract and unintelligible to many students, so it is quite difficult for them to master its main points within the limited class time. To address this, this paper proposes a method of applying natural language processing to the study and teaching of the compiling process, which uses a Maximum Probability Word Segmentation algorithm during lexical analysis and syntax analysis to offer a more effective interface between human and computer. The proposed method provides students with intuitive and concrete knowledge of the compiling process, making it easier and quicker for them to understand the principles of compilation.
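The Maximum Probability Word Segmentation step named above can be sketched as a Viterbi-style dynamic program over a unigram lexicon: `best[i]` holds the highest log-probability of any segmentation of the first `i` characters. The toy lexicon in the usage below is invented for illustration.

```python
import math

def max_prob_segment(text, lexicon):
    """Maximum Probability Word Segmentation via dynamic programming.

    `lexicon` maps candidate words to unigram probabilities. back[i]
    records where the best last word ending at position i starts.
    """
    n = len(text)
    maxlen = max(map(len, lexicon))
    best = [-math.inf] * (n + 1)   # best log-probability of text[:i]
    best[0] = 0.0
    back = [0] * (n + 1)           # start of the last word ending at i
    for i in range(1, n + 1):
        for j in range(max(0, i - maxlen), i):
            word = text[j:i]
            if word in lexicon:
                score = best[j] + math.log(lexicon[word])
                if score > best[i]:
                    best[i], back[i] = score, j
    words, i = [], n
    while i > 0:                   # walk back-pointers to recover words
        words.append(text[back[i]:i])
        i = back[i]
    return words[::-1]
```

For example, with `{"the": 0.1, "them": 0.02, "man": 0.05, "an": 0.02}`, segmenting `"theman"` prefers `["the", "man"]` because its log-probability beats `["them", "an"]`.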
Citations: 0
Capacities-based distant-water fishery cold chain network design considering yield uncertainty and demand dynamics
Pub Date : 2017-12-01 DOI: 10.1109/PIC.2017.8359571
Xiangqun Song, Q. Ma, Wenyuan Wang, Yun Peng
In the distant-water fishing industry, production, supply and distribution activities take place in a dynamic environment, with uncertainty in both fishing-ground resources and global market demand. Moreover, the capacity of port cold storage is of great importance to the cold chain network, as most of the catch is handled and temporarily stored at seaports. Here, a multi-product, three-echelon, multi-period network model is applied to a capacities-based distant-water fishery cold chain design problem. The network design determines optimal cold storage facility locations and optimal flow amounts, and a two-stage stochastic programming method is used to handle the uncertainties.
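The two-stage stochastic programming setup can be written in its generic recourse form, where $x$ collects the first-stage decisions (cold-storage locations and capacities), $\xi$ the random yields and demands, and $y$ the second-stage flows; this is the standard formulation, not the paper's exact model:

```latex
\min_{x \in X} \; c^{\top} x \;+\; \mathbb{E}_{\xi}\!\left[ Q(x,\xi) \right],
\qquad
Q(x,\xi) \;=\; \min_{y \ge 0} \; q^{\top} y
\quad \text{s.t.} \quad W y = h(\xi) - T x .
```

The first stage fixes facilities before yields and demands are realized; the second stage routes product flows at minimum recourse cost for each scenario.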
Citations: 0
Particle swarm optimization based task scheduling for multi-core systems under aging effect
Pub Date : 2017-12-01 DOI: 10.1109/PIC.2017.8359556
Jinbin Tu, Tianhao Yang, Yi Zhang, Jin Sun
As transistor sizes continue to shrink, a number of reliability issues have emerged in network-on-chip (NoC) design. Taking into account the performance degradation induced by the Negative Bias Temperature Instability (NBTI) aging effect, this paper proposes an aging-aware task scheduling framework for NoC-based multi-core systems. The framework relies on an NBTI aging model to evaluate the degradation of each core's operating frequency and establish the task scheduling model under aging. We then develop a particle swarm optimization (PSO)-based heuristic to solve the scheduling problem with total task completion time as the optimization objective, obtaining more efficient schedules than traditional algorithms that ignore the NBTI aging effect. Experiments show that the proposed aging-aware task-scheduling algorithm achieves not only shorter makespan and higher throughput but also better reliability than non-aging-aware ones.
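A minimal PSO sketch for the task-to-core assignment is shown below; it minimizes makespan with execution time modeled as work divided by the (possibly aging-degraded) core frequency. The inertia and acceleration coefficients, and the omission of communication and precedence constraints, are simplifying assumptions relative to the paper's model.

```python
import random

def pso_schedule(task_work, core_freq, n_particles=30, iters=200, seed=0):
    """PSO heuristic assigning tasks to cores to minimize makespan.

    Each particle encodes a task-to-core assignment as continuous
    positions rounded to core indices; `core_freq` can carry
    NBTI-degraded operating frequencies.
    """
    rnd = random.Random(seed)
    n_tasks, n_cores = len(task_work), len(core_freq)

    def makespan(pos):
        load = [0.0] * n_cores
        for w, p in zip(task_work, pos):
            core = min(n_cores - 1, max(0, int(round(p))))
            load[core] += w / core_freq[core]  # time = work / frequency
        return max(load)

    # Initialize particle positions, velocities, and personal bests.
    X = [[rnd.uniform(0, n_cores - 1) for _ in range(n_tasks)]
         for _ in range(n_particles)]
    V = [[0.0] * n_tasks for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [makespan(x) for x in X]
    g = pbest[pbest_f.index(min(pbest_f))][:]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_tasks):
                r1, r2 = rnd.random(), rnd.random()
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * r1 * (pbest[i][d] - X[i][d])
                           + 1.5 * r2 * (g[d] - X[i][d]))
                X[i][d] = min(n_cores - 1.0, max(0.0, X[i][d] + V[i][d]))
            f = makespan(X[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], f
                if f < makespan(g):
                    g = X[i][:]
    return g, makespan(g)
```

On a small instance (tasks of work 4, 4, 2, 2 on two unit-frequency cores) the heuristic finds the balanced split with makespan 6.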
Citations: 1
A parameterized flattening control flow based obfuscation algorithm with opaque predicate for reduplicate obfuscation
Pub Date : 2017-12-01 DOI: 10.1109/PIC.2017.8359575
Zheheng Liang, Wenlin Li, Jing Guo, Deyu Qi, Jijun Zeng
To enhance the white-box security of software, we propose a reduplicate code obfuscation algorithm to protect source code. First, we apply a parameter decomposition tree to formalize the code; then we use control flow flattening to decompose the source code into a multi-branch WHILE-SWITCH loop structure. Finally, we apply opaque predicates to the flattened code as a secondary obfuscation. This paper gives the opaque-predicate code representation and different methods of inserting opaque predicates into program branches and sequence blocks. Experiments compare the time-space cost of the source code and the obfuscated code. The results demonstrate that the proposed algorithm improves the code's resistance to attack and increases the difficulty of reverse engineering.
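A toy example of WHILE-SWITCH flattening combined with an opaque predicate is shown below: the dispatcher loop hides the original control flow, and the predicate (i² + i) mod 2 = 0, which always holds for integers, guards the real branch while the other arm is an unreachable decoy. This is an illustration of the transformation, not the paper's tool output.

```python
def sum_flattened(n):
    """sum(1..n) rewritten as a flattened WHILE-SWITCH state machine."""
    state, total, i = 0, 0, 1
    while state != 3:
        if state == 0:                      # dispatcher case: loop test
            state = 1 if i <= n else 3
        elif state == 1:                    # dispatcher case: loop body
            if (i * i + i) % 2 == 0:        # opaque predicate: always true
                total += i
            else:
                total -= i                  # dead decoy branch
            state = 2
        elif state == 2:                    # dispatcher case: increment
            i += 1
            state = 0
    return total
```

Static analysis sees four apparent branches and a spurious subtraction, yet the function still computes the original sum.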
Citations: 2
A rule-based method for detecting the missing common requirements in software product line
Pub Date : 2017-12-01 DOI: 10.1109/PIC.2017.8359557
Jianzhang Zhang, Yinglin Wang, Wentao Wang, Nan Niu
Incomplete requirements have been one of the most critical issues in requirements engineering. Common requirements in a software product line establish the foundation for implementing the core functionalities of a family of software products. In this paper, we propose a framework to detect missing common requirements in the context of a software product line. The framework consists of two main parts: a set of rules for extracting the common requirements and the dependency relationships among them, and a procedure for detecting omissions in an incoming set of requirements. Preliminary experiments on a group of medical applications validate our approach; the results indicate that the proposed framework is effective.
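The omission-detection step can be sketched as a transitive check over the extracted dependency relationships: any requirement reachable through dependencies of the incoming set but absent from it is flagged as missing. Rule-based extraction from natural-language text is skipped here, and the requirement IDs and dependency map are hypothetical.

```python
def detect_missing(incoming, depends_on):
    """Flag common requirements implied but absent from an incoming set.

    `depends_on` maps a requirement ID to the IDs it depends on; the
    transitive closure of every incoming requirement's dependencies
    must itself be present.
    """
    present = set(incoming)
    missing, stack = set(), list(incoming)
    while stack:
        req = stack.pop()
        for dep in depends_on.get(req, ()):
            if dep not in present and dep not in missing:
                missing.add(dep)
                stack.append(dep)  # follow dependencies transitively
    return missing
```

For instance, if R3 depends on R1 and R2, and R2 depends on R1, an incoming set containing only R3 is reported as missing both R1 and R2.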
Citations: 0
Brain storm optimization with adaptive search radius for optimization
Pub Date : 2017-12-01 DOI: 10.1109/PIC.2017.8359579
Yang Yu, Lei Wu, Hang Yu, Sheng Li, Shi Wang, Shangce Gao
The brain storm optimization algorithm is a recently proposed algorithm based on the social behavior of human beings. It draws inspiration from the brainstorming process to generate new individuals. The properties of brainstorming help ensure the diversity of the whole population and can efficiently prevent premature convergence. In this paper, an adaptive search radius method is proposed to enhance the search ability of brain storm optimization. The proposed algorithm, equipped with adaptive search strategies, introduces a success-and-failure memory to choose the most suitable search strategy to improve individual quality in each iteration. Three strategies can be chosen adaptively during convergence, yielding better solutions than traditional brain storm optimization.
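A minimal sketch of an adaptive search radius is shown below: new ideas are Gaussian perturbations of existing ones, and a success/failure memory widens the radius after improvements and narrows it after failures. This 1/5-rule-style adaptation is a stand-in for the paper's three-strategy selection, and all constants are illustrative.

```python
import random

def bso_minimize(f, dim, iters=2000, pop=20, seed=1):
    """Brain-storm-style optimization with an adaptive search radius."""
    rnd = random.Random(seed)
    ideas = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    fit = [f(x) for x in ideas]
    radius, succ, fail = 1.0, 0, 0
    for _ in range(iters):
        i = rnd.randrange(pop)                       # pick an idea to perturb
        cand = [x + rnd.gauss(0, radius) for x in ideas[i]]
        fc = f(cand)
        if fc < fit[i]:
            ideas[i], fit[i] = cand, fc
            succ += 1
        else:
            fail += 1
        if succ + fail >= 20:                        # adapt radius from memory
            radius *= 1.2 if succ > fail else 0.8
            succ = fail = 0
    best = min(range(pop), key=fit.__getitem__)
    return ideas[best], fit[best]
```

On a simple sphere function the radius shrinks as the population converges, which keeps late-stage steps fine-grained.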
Citations: 5
Join order algorithm using predefined optimal join order
Pub Date : 2017-12-01 DOI: 10.1109/PIC.2017.8359577
Areerat Trongratsameethong
This paper proposes an optimization algorithm, called the Join Order Algorithm using Predefined Optimal join orders (JAPO), to optimize join cost. Optimal join order solutions for all possible join patterns are predefined and stored in a file using a Dynamic Programming with Memorization (DPM) technique. The JAPO algorithm then looks up join order solutions among the predefined optimal join orders using a hash function instead of traversing the whole search space. Experiments compare the join costs obtained by JAPO with those of DPM and a greedy algorithm named GOO. The results show that JAPO, with polynomial time complexity, obtains almost 100 percent of the optimal join order solutions. DPM obtains 100 percent of the optimal join order solutions, but with factorial time complexity. GOO, with polynomial time complexity, obtains sub-optimal solutions, and the number of optimal solutions it finds decreases as the number of relations to be joined increases.
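The DPM precomputation can be sketched as memoized dynamic programming over relation subsets; the version below builds a left-deep plan under a common textbook cost model (the sum of intermediate result sizes), which is an assumption rather than the paper's exact cost function.

```python
from functools import lru_cache

def best_join_order(card, sel):
    """Memoized DP over relation subsets for an optimal left-deep join order.

    `card[i]` is relation i's cardinality and `sel[i][j]` the join
    selectivity between relations i and j (symmetric).
    """
    n = len(card)

    def result_size(mask):
        # Estimated cardinality of joining all relations in `mask`.
        rels = [i for i in range(n) if mask >> i & 1]
        size = 1.0
        for i in rels:
            size *= card[i]
        for a in range(len(rels)):
            for b in range(a + 1, len(rels)):
                size *= sel[rels[a]][rels[b]]
        return size

    @lru_cache(maxsize=None)
    def dp(mask):
        if mask & (mask - 1) == 0:          # single relation: no join cost
            return 0.0, (mask.bit_length() - 1,)
        best = (float("inf"), ())
        for last in range(n):               # choose the relation joined last
            if mask >> last & 1:
                cost, order = dp(mask ^ (1 << last))
                cost += result_size(mask)   # pay for this intermediate
                if cost < best[0]:
                    best = (cost, order + (last,))
        return best

    return dp((1 << n) - 1)
```

With three relations of cardinalities 100, 10 and 1000 and pairwise selectivities 0.1, 0.01 and 1.0, the DP joins the two small relations first (intermediate 100 rows) and defers the 1000-row relation, for a total cost of 1100.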
Citations: 0
Parallelizing convolutional neural network for the handwriting recognition problems with different architectures
Pub Date : 2017-12-01 DOI: 10.1109/PIC.2017.8359517
Junhao Zhou, Weibin Chen, Guishen Peng, Hong Xiao, Hao Wang, Zhigang Chen
Because the convolutional neural network (CNN) algorithm requires only simple image preconditioning and can be trained directly on the original images, it has become popular in image classification. Beyond image classification, CNNs have been widely used in many scientific areas, especially pattern classification. In this paper, we use a CNN for handwritten numeral recognition. The basic idea of our method is to use multiple processes to handle the training samples in parallel, exchange the training results, and obtain the final weight parameters. Compared with the conventional algorithm, the training time is greatly reduced and results are obtained more quickly. Moreover, with sufficient training and testing samples, the accuracy is shown to be almost the same as that of the conventional algorithm. This significantly improves the efficiency of CNNs in handwritten numeral recognition. Finally, we also implemented the proposed method with parallel acceleration optimization based on Intel's Many Integrated Core (MIC) architecture and Nvidia's GPU architecture.
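The train-in-parallel / exchange / average pattern can be sketched with a linear least-squares model standing in for the CNN and a thread pool standing in for the worker processes; each worker computes a gradient on its data shard, and the shard gradients are then exchanged and averaged into one weight update. All names and hyperparameters here are illustrative.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_sgd(X, y, n_workers=4, epochs=200, lr=0.1):
    """Data-parallel gradient descent with shard-gradient averaging."""
    w = np.zeros(X.shape[1])
    # Equal-sized data shards, one per worker.
    shards = list(zip(np.array_split(X, n_workers),
                      np.array_split(y, n_workers)))

    def shard_grad(shard):
        Xs, ys = shard
        return Xs.T @ (Xs @ w - ys) / len(ys)   # least-squares gradient

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for _ in range(epochs):
            grads = list(pool.map(shard_grad, shards))  # parallel training
            w -= lr * np.mean(grads, axis=0)            # exchange + average
    return w
```

With equal shard sizes, the average of shard gradients equals the full-batch gradient, so the parallel run follows the same trajectory as sequential training while the per-shard work proceeds concurrently.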
Citations: 4
Model-based DDPG for motor control
Pub Date : 2017-12-01 DOI: 10.1109/PIC.2017.8359558
Haibo Shi, Yaoru Sun, Guangyuan Li
The deep deterministic policy gradient (DDPG) is a recently developed reinforcement learning method that can learn a control policy with a deterministic representation. Policy learning directly follows the gradient of the action-value function with respect to the actions; similarly, DDPG readily provides the gradient of the action-value function with respect to the state. This mechanism allows model information to be incorporated to improve the original DDPG. In this study, a model-based DDPG was implemented as an improvement over the original DDPG. An additional deep network was embedded into the framework of the conventional DDPG, and the gradient of the model dynamics, for maximization of the action-value, is also exploited to learn the control policy. The model-based DDPG showed a relative advantage over the original DDPG in an experiment on simulated arm-reaching movement control.
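The deterministic policy gradient underlying DDPG, and the extra model-dynamics gradient a model-based variant can exploit, can be written as follows; this is a sketch consistent with the abstract, not the paper's exact derivation, with $f$ denoting the learned dynamics network:

```latex
\nabla_{\theta} J \;\approx\;
\mathbb{E}_{s}\!\left[\, \nabla_{a} Q(s,a)\big|_{a=\mu_{\theta}(s)} \;
\nabla_{\theta}\mu_{\theta}(s) \,\right],
\qquad
\nabla_{a} Q(s,a) \;\approx\; \nabla_{a} r(s,a)
\;+\; \gamma \, \nabla_{s'} V(s') \, \nabla_{a} f(s,a),
\quad s' = f(s,a).
```

The first expression is the standard deterministic policy gradient; the second shows where the action gradient of the dynamics model $\nabla_{a} f(s,a)$ supplies an additional signal for the policy update.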
Citations: 3