
Proceedings of the 1st Symposium on Information and Communication Technology: Latest Publications

Data mining and integration for environmental scenarios
Pub Date: 2010-08-27 DOI: 10.1145/1852611.1852622
V. Tran, L. Hluchý, O. Habala
In this paper we describe our work on a framework for the integration and mining of environmental data. We present a suite of selected scenarios created within the data mining and integration framework being developed in the ADMIRE project. The scenarios were chosen for their suitability for data mining by environmental experts who deal with meteorological and hydrological problems, and the chosen solutions are applied to pilot areas within Slovakia. The main challenge is that the environmental data required by the scenarios are maintained and provided by different organizations and are often in different formats. We present our approach to the specification and execution of data integration tasks, which deals with the distributed nature and heterogeneity of the required data resources.
Citations: 8
Constraint-based local search for solving non-simple paths problems on graphs: application to the routing for network covering problem
Pub Date: 2010-08-27 DOI: 10.1145/1852611.1852613
Pham Quang Dung, Phan-Thuan Do, Y. Deville, Hô Tuòng Vinh
Routing problems are central problems in the fields of transportation, distribution and logistics. LS(Graph) is a generic framework for modeling and solving constrained optimum path problems on graphs by local search, where the paths are known to be elementary (i.e., edges and vertices cannot be repeated on a path). In many real-world situations, however, the paths to be determined are neither simple nor elementary. In this paper, we extend the LS(Graph) framework by designing and implementing abstractions that allow modeling and solving constrained path problems in which edges and vertices can be repeated on a path (called non-simple paths). We also propose an instance of this problem class: the routing for network covering (RNC) problem, which arises in the context of rescue after a natural disaster, where a fleet of identical vehicles with limited capacity must be routed on a transportation network in order to collect information about the disaster. Given an undirected weighted graph G = (V, E) representing a transportation network and a vertex v0 ∈ V representing the depot, the RNC problem consists of routing an unlimited fleet of identical capacity-limited vehicles, none of which may perform a path of length > L, such that each vehicle starts from and terminates at the depot and all the edges of a given set S (S ⊆ E) are visited. The objective of the routing plan is to minimize the number of vehicles used. This paper discusses the challenges around this problem and applies the constructed framework to its resolution. The proposed model is generic; it allows solving variants of the problem in which side constraints must be added.
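The formal statement above is concrete enough to sketch the feasibility side of the RNC problem in code. The following is a minimal illustration, not the authors' LS(Graph) model: it only checks that a candidate set of closed walks from the depot respects the length bound L and jointly covers the required edge set S; the graph encoding, helper names and the tiny example are assumptions, and the objective of minimizing the number of vehicles is left to the search procedure.

```python
# A minimal sketch (assumed encoding, not the LS(Graph) model) of the RNC
# feasibility conditions: each route is a closed, possibly non-simple walk
# from the depot whose length does not exceed L, and together the routes
# must visit every edge of the required set S.
from itertools import pairwise  # Python 3.10+

def route_length(route, weight):
    """Total weight of a closed walk given as a vertex list [v0, ..., v0]."""
    return sum(weight[frozenset(e)] for e in pairwise(route))

def is_feasible(routes, weight, depot, required_edges, max_len):
    """Check the RNC constraints for a candidate set of routes."""
    covered = set()
    for route in routes:
        if route[0] != depot or route[-1] != depot:
            return False                      # must start and end at the depot
        if route_length(route, weight) > max_len:
            return False                      # per-vehicle length bound L
        covered |= {frozenset(e) for e in pairwise(route)}
    return required_edges <= covered          # every edge of S is visited

# Tiny example: triangle 0-1-2 with depot 0; edges (0,1) and (1,2) must be covered.
weight = {frozenset(e): w for e, w in [((0, 1), 2), ((1, 2), 3), ((0, 2), 2)]}
required = {frozenset((0, 1)), frozenset((1, 2))}
print(is_feasible([[0, 1, 2, 0]], weight, 0, required, max_len=7))  # True
```

Keying edges by frozensets lets a walk traverse an undirected edge in either direction, which is what makes repeated vertices and edges (non-simple walks) easy to account for.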
Citations: 1
MemMON: run-time off-chip detection for memory access violation in embedded systems
Pub Date: 2010-08-27 DOI: 10.1145/1852611.1852634
Nam Ho, Anh-Vu Dinh-Duc
Deploying a memory protection mechanism usually requires the CPU to provide hardware components such as a Memory Management Unit (MMU) or a Memory Protection Unit (MPU). In embedded systems, however, most microcontrollers are not equipped with these features because they incur hardware cost and performance penalties. In this paper, a method to detect memory corruption at run time without incurring this hardware cost is proposed; the embedded processor is not required to have an MMU or MPU. We explore FPGA-based off-chip detection that hooks onto the memory bus to monitor the memory accesses of a multitasking real-time operating system (RTOS) application. Our solution, called MemMON, combines hardware and software to efficiently detect memory access errors such as a task's stack overflow, a task reading from or writing to the code/data segments of other tasks, or memory access violations against the OS kernel. In the experimental evaluation, real-time schedulability is compared with and without MemMON; using MemMON reduces the real-time schedulability of the system by about a factor of 3.
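As a concrete picture of the kind of rule such a monitor enforces, the sketch below compares each observed bus transaction, a (task, address, access type) triple, against a table of regions the task may touch. The region layout, field names and addresses are illustrative assumptions rather than MemMON's actual design, and a real monitor would evaluate this check in FPGA logic on the memory bus rather than in software.

```python
# Illustrative model (not MemMON's data layout) of a per-task region table
# and the access check an off-chip monitor could apply to bus transactions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Region:
    start: int       # inclusive
    end: int         # exclusive
    readable: bool
    writable: bool

# Per-task memory map: own stack, own data, shared read-only code, nothing else.
REGIONS = {
    "task_A": [Region(0x2000_0000, 0x2000_1000, True, True),    # stack
               Region(0x2000_4000, 0x2000_5000, True, True),    # data
               Region(0x0800_0000, 0x0802_0000, True, False)],  # code (read-only)
}

def check_access(task, addr, is_write):
    """Return True if the access is allowed, False if it is a violation."""
    for r in REGIONS.get(task, []):
        if r.start <= addr < r.end:
            return r.writable if is_write else r.readable
    return False  # address belongs to no region of this task (e.g. kernel space)

print(check_access("task_A", 0x2000_0800, True))   # True: write to own stack
print(check_access("task_A", 0x0801_0000, True))   # False: write into code segment
print(check_access("task_A", 0x3000_0000, False))  # False: outside all task regions
```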
Citations: 2
Name entity recognition using inductive logic programming
Pub Date: 2010-08-27 DOI: 10.1145/1852611.1852626
H. T. Le, Thien Huu Nguyen
Named entity recognition (NER) is the process of locating atomic elements in text and classifying them into predefined categories such as the names of persons, organizations and locations, and expressions of times, quantities, and percentages. NER is useful for other natural language tasks such as question answering, text summarization, and building the semantic web. This paper presents a system, called BKIE, that uses SRV, an inductive logic programming system, to extract named entities in Vietnamese text. New predicates and features are added to SRV to deal with characteristics of the Vietnamese language, and several strategies are proposed to improve the efficiency of the SRV algorithm. The data set used in the experiments consists of 80 manually tagged Vietnamese-language homepages of scientists. The experiments give a best F-score of 83% for extracting the "name" entity, which shows that SRV is an efficient NER algorithm given its generality and flexibility. To increase the system's performance, our future work includes (i) building a larger training data set; (ii) implementing BKIE with parallel programming to increase system efficiency; and (iii) testing BKIE in other application domains to obtain a more accurate evaluation of the system.
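As a rough illustration of the raw material a relational learner like SRV consumes, the sketch below computes a few token-level features for a Vietnamese sentence. The specific predicates (capitalization, title-word context, token length) and the title-word list are guesses made for illustration only, not the predicates and features actually added to SRV in BKIE.

```python
# Illustrative token features for Vietnamese NER; the predicates and the
# title-word list are assumptions, not BKIE's actual feature set.
TITLE_WORDS = {"gs.", "pgs.", "ts.", "ông", "bà"}   # professor, assoc. prof., doctor, Mr, Mrs

def token_features(tokens, i):
    """A small bundle of features/predicates describing token i in context."""
    token = tokens[i]
    prev = tokens[i - 1].lower() if i > 0 else ""
    return {
        "capitalized": token[:1].isupper(),
        "all_caps": token.isupper(),
        "after_title_word": prev in TITLE_WORDS,
        "length": len(token),
    }

sentence = "GS. Nguyễn Văn An làm việc tại Hà Nội".split()
print(token_features(sentence, 1))   # features for the token 'Nguyễn'
```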
Citations: 5
Generating qualified summarization answers using fuzzy concept hierarchies
Pub Date: 2010-08-27 DOI: 10.1145/1852611.1852620
Ngo Tuan Phong, N. Phuong, N. K. Anh
In this paper, we introduce a partially automated method to generate qualified answers at multiple abstraction levels for database queries. We examine the issues involved in data summarization by Attribute-Oriented Induction (AOI) on large databases using fuzzy concept hierarchies. Because a node may have several higher-level abstractions, fuzzy hierarchies are more complex and vaguer than crisp ones. Therefore, the original AOI algorithm, designed for crisp hierarchies, cannot be applied directly to fuzzy hierarchies to obtain interesting answers. The main contribution of this paper is a new approach to refining fuzzy hierarchies and evaluating tuple-terminal conditions in order to reduce noisy tuples. The foundations of our approach are the generalization hierarchy and a new method for estimating tuple quality. We implemented the algorithm in our knowledge discovery system, and the experimental results show that the approach is efficient and suitable for knowledge discovery in large databases.
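To make the difficulty concrete, the sketch below performs a single generalization step of attribute-oriented induction over a fuzzy hierarchy: a value may climb to several higher-level concepts with membership degrees, so each generalized tuple accumulates a fractional vote, and tuples whose vote stays below a threshold are discarded as noise. The hierarchy, the threshold and the voting rule are illustrative assumptions, not the refinement procedure proposed in the paper.

```python
# One AOI generalization step over a fuzzy concept hierarchy (illustrative
# hierarchy and threshold; not the paper's refinement procedure).
from collections import defaultdict

# value -> list of (higher-level concept, membership degree)
FUZZY_HIERARCHY = {
    "28C": [("warm", 0.75), ("hot", 0.25)],
    "33C": [("hot", 1.0)],
    "21C": [("mild", 0.5), ("warm", 0.5)],
}

def generalize(tuples, attr_index, min_vote=0.5):
    """Climb the fuzzy hierarchy on one attribute and merge generalized tuples."""
    votes = defaultdict(float)
    for t in tuples:
        for parent, degree in FUZZY_HIERARCHY[t[attr_index]]:
            g = t[:attr_index] + (parent,) + t[attr_index + 1:]
            votes[g] += degree
    # Drop "noisy" generalized tuples whose accumulated vote is too small.
    return {g: v for g, v in votes.items() if v >= min_vote}

data = [("stationA", "28C"), ("stationB", "33C"), ("stationA", "21C")]
print(generalize(data, attr_index=1))
# {('stationA', 'warm'): 1.25, ('stationB', 'hot'): 1.0, ('stationA', 'mild'): 0.5}
```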
Citations: 0
Particle-based simulation of blood flow and vessel wall interactions in virtual surgery
Pub Date: 2010-08-27 DOI: 10.1145/1852611.1852636
J. Qin, Wai-Man Pang, Binh P. Nguyen, Dong Ni, C. Chui
We propose a particle-based solution for simulating the interactions between blood flow and the vessel wall for virtual surgery. By coupling two particle-based techniques, smoothed particle hydrodynamics (SPH) and the mass-spring model (MSM), we can simulate the blood flow and the deformation of the vessel seamlessly. At the vessel wall, particles serve both as boundary particles for the SPH solver and as mass points for the MSM solver. We implement an improved repulsive boundary condition to simulate the interactions. The computation of blood flow dynamics and vessel wall deformation is performed in an alternating fashion in every time step. To ensure realism, the parameters of both SPH and MSM are carefully configured. Experimental results demonstrate the potential of the proposed method in providing real-time and realistic interactions for virtual vascular surgery systems.
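The alternating update and the repulsive boundary condition can be pictured with a deliberately reduced, one-dimensional sketch. This is not the authors' implementation: the SPH and mass-spring solvers are collapsed into one fluid particle and one spring-mounted wall particle, the improved repulsive boundary condition is replaced by a generic Monaghan-style repulsion term, and all parameter values are illustrative.

```python
# 1-D toy of the coupling: the fluid particle feels a repulsive boundary force
# from the wall particle, the wall particle feels the reaction plus its spring;
# the two are advanced alternately in every time step.
import math

def boundary_repulsion(r, r0=0.1, d=10.0):
    """Repulsive force magnitude between fluid and wall particles; zero beyond r0."""
    if r >= r0 or r <= 0.0:
        return 0.0
    q = r0 / r
    return d * (q ** 4 - q ** 2) / r

def simulate(n_steps=200, dt=1e-3):
    xf, vf = 0.25, -1.0       # fluid particle approaching the wall from the right
    xw, vw = 0.0, 0.0         # wall particle, attached to a spring with rest position 0
    k, mf, mw = 400.0, 1.0, 5.0
    for _ in range(n_steps):
        # Fluid half of the step: the SPH forces are reduced here to the boundary repulsion.
        f = boundary_repulsion(abs(xf - xw))
        direction = math.copysign(1.0, xf - xw)
        vf += dt * f * direction / mf
        xf += dt * vf
        # Wall half of the step: spring restoring force plus the reaction of the repulsion.
        fw = -k * xw - f * direction
        vw += dt * fw / mw
        xw += dt * vw
    return xf, xw

print(simulate())   # the fluid particle is repelled near the wall, which recoils on its spring
```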
Citations: 30
The LogP and MLogP models for parallel image processing with multi-core microprocessor
Pub Date: 2010-08-27 DOI: 10.1145/1852611.1852616
C. Chui
Despite the advancement and availability of multi-core microprocessors, how to fully utilize this relatively new computing platform to achieve optimal performance for a parallel algorithm remains an open issue. Existing theoretical models have limitations when analyzing parallel algorithms for multi-core microprocessor systems. The proposed Multi-core LogP (MLogP) model, a variant of the popular LogP model for parallel computation, is a more realistic model for parallel computing with multi-core microprocessors. Experiments with parallel image processing algorithms were used to determine the ability of the LogP and MLogP models to predict the performance of parallel image processing algorithms on an Intel Core2 Quad 2.44 GHz microprocessor.
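For reference, the sketch below shows the cost accounting of the standard LogP model that MLogP extends: L is the network latency, o the per-message processor overhead, g the minimum gap between consecutive message injections, and P the number of processors. The formulas are the usual LogP point-to-point estimates; how MLogP adjusts them for cores sharing a chip is not reproduced here, and the parameter values are made up.

```python
# Standard LogP point-to-point cost estimates (parameter values are illustrative).
def one_message(L, o):
    """A single short message: send overhead + network latency + receive overhead."""
    return o + L + o

def k_pipelined_messages(k, L, o, g):
    """k back-to-back short messages between the same pair of processors."""
    return (k - 1) * g + one_message(L, o)

print(one_message(L=5, o=1))                    # 7 time units
print(k_pipelined_messages(10, L=5, o=1, g=2))  # 25 time units
```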
Citations: 6
Comparative analysis of transliteration techniques based on statistical machine translation and joint-sequence model
Pub Date: 2010-08-27 DOI: 10.1145/1852611.1852624
Nam Cao, Nhut M. Pham, Q. Vu
The inability to deal with words from foreign languages causes difficulties for both Vietnamese speech recognition and text-to-speech systems. A common solution is to look the words up in a dictionary, but the number of available entries is finite and therefore inflexible, since speech recognition and text-to-speech systems are expected to handle arbitrary words. Alternatively, data-driven approaches can be employed to transliterate a foreign word into its Vietnamese pronunciation by learning from samples and predicting unseen words. This paper presents a comparative analysis of two data-driven approaches based on statistical machine translation and the joint-sequence model. Two systems based on these approaches are developed and tested using the same experimental protocol and a dataset of 8050 English words. Results show that the joint-sequence model outperforms statistical machine translation in English-to-Vietnamese transliteration.
Citations: 12
Prediction-based directional search for fast block-matching motion estimation
Pub Date: 2010-08-27 DOI: 10.1145/1852611.1852629
Binh P. Nguyen, T. Do, C. Chui, S. Ong
This paper proposes an efficient block-matching motion estimation algorithm known as prediction-based directional search (PDS). The new algorithm is applicable to a wide range of video processing applications. It uses the motion vectors of two neighboring blocks to predict a starting search point for the current block. The subsequent refining search relies on the hypothesis of a monotonic block distortion surface and the center-biased characteristic of the motion vector probability distribution. A cross pattern in one step and one of four possible directional rectangle search patterns in the next step are used iteratively to find the motion vector. Experiments on eleven video sequences with different characteristics show that PDS achieves faster computation with similar or even better distortion performance compared to several well-known existing algorithms.
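The two ingredients described above, a predicted starting point and an iterative small-pattern refinement, are easy to sketch. The code below implements only the prediction from two neighboring motion vectors and a cross-pattern refinement; PDS's four directional rectangle patterns are omitted, and the block size, prediction rule, stopping rule and synthetic test frames are assumptions.

```python
# Predicted-start block matching with a cross-pattern refinement (a simplified
# stand-in for PDS; the directional rectangle patterns are not shown).
import numpy as np

def sad(cur, ref, bx, by, dx, dy, n=16):
    """Sum of absolute differences between the current block and a displaced reference block."""
    c = cur[by:by + n, bx:bx + n]
    r = ref[by + dy:by + dy + n, bx + dx:bx + dx + n]
    if r.shape != c.shape:                       # candidate falls outside the frame
        return np.inf
    return np.abs(c.astype(int) - r.astype(int)).sum()

def predict_start(mv_left, mv_top):
    """Component-wise average of two neighboring motion vectors."""
    return ((mv_left[0] + mv_top[0]) // 2, (mv_left[1] + mv_top[1]) // 2)

def cross_search(cur, ref, bx, by, start, max_iter=16):
    """Repeatedly move to the best point of a +/-1 cross until the center wins."""
    best = start
    best_cost = sad(cur, ref, bx, by, *best)
    for _ in range(max_iter):
        center = best
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            cand = (center[0] + dx, center[1] + dy)
            cost = sad(cur, ref, bx, by, *cand)
            if cost < best_cost:
                best, best_cost = cand, cost
        if best == center:
            break
    return best, best_cost

# Synthetic frames: the reference is the current frame shifted so the block at
# (16, 16) has an exact match at displacement (2, 1).
rng = np.random.default_rng(0)
cur = rng.integers(0, 256, (64, 64), dtype=np.uint8)
ref = np.roll(cur, shift=(1, 2), axis=(0, 1))
start = predict_start(mv_left=(1, 0), mv_top=(2, 2))        # predicted start (1, 1)
print(cross_search(cur, ref, bx=16, by=16, start=start))    # ((2, 1), 0)
```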
Citations: 1
Password recovery for encrypted ZIP archives using GPUs
Pub Date: 2010-08-27 DOI: 10.1145/1852611.1852617
P. Phong, Phan Duc Dung, Duong Nhat Tan, N. Duc, N. T. Thuy
Password protection of data in documents such as DOC and PDF files or RAR and ZIP archives has been demonstrated to be weak under dictionary attacks. The time needed to recover the passwords of such documents mainly depends on two factors: the size of the password search space and the computing power of the underlying system. In this paper, we present an approach that uses modern multi-core graphics processing units (GPUs) as computing devices for finding lost passwords of ZIP archives. The combination of the GPU's extremely high computing power and state-of-the-art password structure analysis methods yields a feasible solution for recovering ZIP file passwords. We first apply password generation rules [9] to generate a reasonable password space, and then use GPUs to exhaustively verify every password in that space. The experimental results show that the password verification speed increases by about 48 to 170 times (depending on the number of GPUs) compared to sequential execution on an Intel Core 2 Quad Q8400 at 2.66 GHz. These results demonstrate the potential applicability of GPUs in this field of cryptanalysis.
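The CPU-only sketch below shows the dictionary side of such an attack: a few mangling rules expand base words into a candidate space, and each candidate is tried against the archive. Python's zipfile module handles only the legacy ZipCrypto scheme and is orders of magnitude slower than the GPU kernels described in the paper; the rule set, file name and word list are illustrative.

```python
# Dictionary attack skeleton for a ZipCrypto-protected archive (illustrative
# rules and file names; a slow, CPU-only stand-in for the GPU pipeline).
import zipfile
import zlib

def candidates(words):
    """Yield rule-mangled variants of each base word (a tiny rule set)."""
    for w in words:
        yield w
        yield w.capitalize()
        yield w + "123"
        yield w.capitalize() + "!"

def try_password(zf, member, pwd):
    """Return True if pwd decrypts and decompresses the member without error."""
    try:
        zf.read(member, pwd=pwd.encode())
        return True
    except (RuntimeError, zipfile.BadZipFile, zlib.error):
        return False   # wrong password (or a false positive caught by CRC/decompression)

def recover(archive_path, wordlist):
    with zipfile.ZipFile(archive_path) as zf:
        member = zf.namelist()[0]           # assumes the first member is an encrypted file
        for pwd in candidates(wordlist):
            if try_password(zf, member, pwd):
                return pwd
    return None

# Example use (hypothetical archive): print(recover("secret.zip", ["dragon", "hanoi", "admin"]))
```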
Citations: 5