
2009 International Conference of the Chilean Computer Science Society: Latest Publications

High-Performance Reverse Time Migration on GPU
Pub Date : 2009-11-10 DOI: 10.1109/SCCC.2009.19
Javier Cabezas, M. Araya-Polo, Isaac Gelado, N. Navarro, E. Morancho, J. Cela
Partial Differential Equations (PDE) are the heart of most simulations in many scientific fields, from Fluid Mechanics to Astrophysics. One of the most popular mathematical schemes to solve a PDE is Finite Difference (FD). In this work we map a PDE-FD algorithm called Reverse Time Migration to a GPU using CUDA. This seismic imaging (Geophysics) algorithm is widely used in the oil industry. GPUs are natural contenders in the aftermath of the clock race, in particular for High-Performance Computing (HPC). Due to GPU characteristics, the parallelism paradigm shifts from the classical threads plus SIMD to Single Program Multiple Data (SPMD). The NVIDIA GTX 280 implementation outperforms homogeneous CPUs by up to 9x (Intel Harpertown E5420) and up to 14x (IBM PPC 970). These preliminary results confirm that GPUs are a real option for HPC, from performance to programmability.
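As a concrete illustration of the PDE-FD kernel at the heart of RTM, here is a minimal NumPy sketch of one explicit time step of the 2D acoustic wave equation with a second-order stencil (the paper targets CUDA and typically higher-order stencils in 3D; grid size, velocity and time step below are illustrative, not the paper's configuration):

```python
import numpy as np

def fd_step(p_prev, p_curr, vel, dt, dx):
    """One explicit finite-difference time step of the 2D acoustic
    wave equation p_tt = v^2 * laplacian(p), second order in time
    and space; boundaries are held at zero."""
    lap = np.zeros_like(p_curr)
    lap[1:-1, 1:-1] = (
        p_curr[2:, 1:-1] + p_curr[:-2, 1:-1] +
        p_curr[1:-1, 2:] + p_curr[1:-1, :-2] -
        4.0 * p_curr[1:-1, 1:-1]
    ) / dx**2
    return 2.0 * p_curr - p_prev + (vel * dt) ** 2 * lap

# Propagate an impulsive point source on a constant-velocity model.
n, dx, dt = 64, 10.0, 1e-3            # grid spacing in m, step in s
vel = np.full((n, n), 2000.0)         # 2000 m/s everywhere
p0 = np.zeros((n, n))
p1 = np.zeros((n, n))
p1[n // 2, n // 2] = 1.0              # source at the grid center
for _ in range(10):
    p0, p1 = p1, fd_step(p0, p1, vel, dt, dx)
```

RTM runs this forward propagation for the source wavefield and a time-reversed propagation for the receiver wavefield, correlating the two to form the image; the stencil above is the part that maps naturally onto the GPU's SPMD model.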
Citations: 15
Implementation of an Improvement Cycle Using the Competisoft Methodological Framework and the Tutelkan Platform
Pub Date : 2009-11-10 DOI: 10.19153/cleiej.13.1.2
R. Villarroel, Yessica Gómez, Roman Gajardo, Oscar Rodriguez
Formalizing and institutionalizing software processes has become a necessity in recent years: organizations must manage and enhance software production while achieving certification in accordance with international standards. Due to the lack of collaboration tools in Small and Medium-sized Enterprises (SMEs) that could contribute to the improvement of software processes, different proposals have been made to enable these companies to develop and grow. This paper presents the experimental implementation of an improvement cycle in an internal area of a small company, considering the basic profile of the Competisoft process model supported by the Tutelkan platform. Through this experiment, it was noted that Competisoft supplied the basic elements to formalize and institutionalize the processes, and that Tutelkan was a good complement in achieving this aim.
Citations: 10
Face Recognition with Local Binary Patterns, Spatial Pyramid Histograms and Naive Bayes Nearest Neighbor Classification
Pub Date : 2009-11-10 DOI: 10.1109/SCCC.2009.21
Daniel Maturana, D. Mery, Á. Soto
Face recognition algorithms commonly assume that face images are well aligned and have a similar pose -- yet in many practical applications it is impossible to meet these conditions. Therefore extending face recognition to unconstrained face images has become an active area of research. To this end, histograms of Local Binary Patterns (LBP) have proven to be highly discriminative descriptors for face recognition. Nonetheless, most LBP-based algorithms use a rigid descriptor matching strategy that is not robust against pose variation and misalignment. We propose two algorithms for face recognition that are designed to deal with pose variations and misalignment. We also incorporate an illumination normalization step that increases robustness against lighting variations. The proposed algorithms use descriptors based on histograms of LBP and perform descriptor matching with spatial pyramid matching (SPM) and Naive Bayes Nearest Neighbor (NBNN), respectively. Our contribution is the inclusion of flexible spatial matching schemes that use an image-to-class relation to provide an improved robustness with respect to intra-class variations. We compare the accuracy of the proposed algorithms against Ahonen's original LBP-based face recognition system and two baseline holistic classifiers on four standard datasets. Our results indicate that the algorithm based on NBNN outperforms the other solutions, and does so more markedly in presence of pose variations.
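The LBP-histogram descriptors the paper builds on can be sketched as follows: a minimal NumPy illustration of the basic 8-neighbor LBP code and one level of per-cell spatial histograms (the 4x4 grid and plain 256-bin histograms are simplifying assumptions, not the authors' exact pyramid configuration):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbor LBP code for each interior pixel: each
    neighbor >= center contributes one bit of the code."""
    c = img[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    # Offsets of the 8 neighbors, ordered clockwise from top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((nb >= c).astype(np.uint8) << bit)
    return codes

def spatial_histograms(codes, grid=4):
    """Concatenate normalized per-cell LBP histograms (one level of
    a spatial pyramid)."""
    h, w = codes.shape
    hists = []
    for i in range(grid):
        for j in range(grid):
            cell = codes[i * h // grid:(i + 1) * h // grid,
                         j * w // grid:(j + 1) * w // grid]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            hists.append(hist / max(hist.sum(), 1))
    return np.concatenate(hists)
```

A face image is then represented by such concatenated histograms; the paper's contribution lies in matching these flexibly (SPM or NBNN) rather than with the rigid cell-to-cell comparison of earlier LBP systems.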
Citations: 77
Delayed Insertion Strategies in Dynamic Metric Indexes
Pub Date : 2009-11-10 DOI: 10.1109/SCCC.2009.23
Edgar Chávez, Nora Reyes, Patricia Roggero
Dynamic data structures are sensitive to insertion order, particularly tree-based data structures. In this paper we present a buffering heuristic allowing delayed root selection (once enough data has arrived to have valid statistics), useful for hierarchical indexes. Initially, when fewer than $M$ objects have been inserted, queries are answered from the buffer itself using an online-friendly algorithm which can be simulated by AESA (Approximating and Eliminating Search Algorithm) or can be implemented with the dynamic data structure being optimized. When the buffer is full, the tree root can be selected in a more informed way using the distances between the $M$ objects in the buffer. Buffering has an additional use: multiple routing strategies can be designed depending on statistics of the query. A complete picture of the technique includes a recursive best-root selection with many more parameters. We focus on the Dynamic Spatial Approximation Tree (DSAT), investigating the improvement obtained in the first level of the tree (the root and its children). Notice that if the buffering strategy is repeated recursively, we can obtain a boost in performance once the data structure reaches a stable state. For this reason even a very small improvement in performance is significant. We present a systematic improvement in the query complexity for several real-time, publicly available data sets from the SISAP repository with our buffering strategies.
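The idea of answering queries from the buffer with AESA-style elimination can be sketched as follows. This is a hypothetical range query over a buffered set, where the precomputed pairwise distances among buffered objects prune candidates via the triangle inequality (the pivot policy and data layout are illustrative, not the authors' DSAT implementation):

```python
import numpy as np

def buffer_range_query(buffer, dist, q, radius):
    """Answer a range query over a buffered object set, AESA-style:
    each distance computed from a pivot to the query q eliminates
    every candidate whose stored distance to that pivot makes a
    match impossible by the triangle inequality."""
    n = len(buffer)
    # Pairwise distances among buffered objects; in a real index
    # these are accumulated once as objects enter the buffer.
    D = np.array([[dist(a, b) for b in buffer] for a in buffer])
    alive = set(range(n))
    results = []
    while alive:
        p = alive.pop()              # next pivot (simplest policy)
        dq = dist(buffer[p], q)
        if dq <= radius:
            results.append(buffer[p])
        # d(i, q) >= |d(i, p) - d(p, q)|, so prune when that lower
        # bound already exceeds the query radius.
        alive = {i for i in alive if abs(D[p, i] - dq) <= radius}
    return results
```

The same distance matrix is what later allows the informed root choice: once the buffer holds $M$ objects, their pairwise distances are available to pick a well-centered root instead of the first arrival.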
Citations: 3
Combining a Probabilistic Sampling Technique and Simple Heuristics to Solve the Dynamic Path Planning Problem
Pub Date : 2009-11-10 DOI: 10.1109/SCCC.2009.11
Nicolas A. Barriga, M. Solar, Mauricio Araya-López
Probabilistic sampling methods have become very popular for solving single-shot path planning problems. Rapidly-exploring Random Trees (RRTs) in particular have been shown to be very efficient in solving high-dimensional problems. Even though several RRT variants have been proposed to tackle the dynamic replanning problem, these methods only perform well in environments with infrequent changes. This paper addresses the dynamic path planning problem by combining simple techniques in a multi-stage probabilistic algorithm. This algorithm uses RRTs as an initial solution, informed local search to fix infeasible paths, and a simple greedy optimizer. The algorithm is capable of recognizing when the local search is stuck and subsequently restarting the RRT. We show that this combination of simple techniques provides better responses to a highly dynamic environment than the dynamic RRT variants.
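For reference, the RRT used as the initial solution can be sketched in a few lines. This is a minimal 2D version with goal biasing over a [0, 10] x [0, 10] workspace (the bounds, step size, goal bias and iteration budget are illustrative assumptions, not the paper's parameters):

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, iters=2000, goal_tol=0.5):
    """Minimal 2D RRT: repeatedly steer the nearest tree node toward
    a sampled point (biased toward the goal), keeping only collision-
    free extensions; return the path once the goal is within reach."""
    nodes = [start]
    parent = {start: None}
    for _ in range(iters):
        sample = goal if random.random() < 0.1 else (
            random.uniform(0, 10), random.uniform(0, 10))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):          # collision check on the new node
            continue
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < goal_tol:
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None                       # budget exhausted, no path
```

In the multi-stage scheme described above, a path returned by such an RRT would be handed to the local search when obstacles move and invalidate segments, with a full RRT restart only when that repair gets stuck.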
Citations: 1
Feature Extraction Based on Circular Summary Statistics in ECG Signal Classification
Pub Date : 2009-11-10 DOI: 10.1109/SCCC.2009.24
Gustavo Soto, Sergio Torres
In order to explore new patterns for the classification of cardiac signals taken from the electrocardiogram (ECG), the circular statistics approach is introduced. Features are extracted from the instantaneous phase of the ECG signal using the analytic signal model based on Hilbert transform theory. Feature vectors are used as patterns to distinguish among different ECG signals. Five types of ECG signals are obtained from the MIT-BIH database. Preliminary results show that the proposed features can be used on the ECG signal classification problem.
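The extraction pipeline described above can be sketched as follows: an FFT-based analytic signal (the standard Hilbert-transform construction) followed by two circular summary statistics of its instantaneous phase (the exact feature set used in the paper may differ; this is a minimal illustration):

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal: zero the negative-frequency bins
    and double the positive ones, then inverse-transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def circular_features(x):
    """Circular summary statistics of the instantaneous phase."""
    phase = np.angle(analytic_signal(x))
    z = np.exp(1j * phase).mean()     # mean resultant vector
    return {
        "circ_mean": np.angle(z),     # circular mean direction
        "circ_var": 1.0 - np.abs(z),  # circular variance in [0, 1]
    }
```

Ordinary (linear) means are meaningless for angles that wrap at 2*pi, which is why the mean resultant vector of the unit phasors is used instead; the resulting scalars form the feature vector fed to the classifier.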
Citations: 1
Formal Specification and Analysis of the MIDP 3.0 Security Model
Pub Date : 2009-11-10 DOI: 10.1109/SCCC.2009.18
Gustavo Mazeikis, Gustavo Betarte, C. Luna
The Mobile Information Device Profile (MIDP) of the Java Platform, Micro Edition (JME) provides a standard run-time environment for mobile phones and personal digital assistants. The third and latest version of MIDP introduces a new dimension in the security model of MIDP at the application level. For the second version of MIDP, Zanella, Betarte and Luna proposed a formal specification of the security model in the Calculus of Inductive Constructions using the Coq Proof Assistant. This paper presents an extension of that formal specification that incorporates the changes introduced in the third version of MIDP. The obtained specification is proven to preserve the security properties of the second version of MIDP and enables the study of new security properties for version 3.0 of the profile.
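As a toy illustration of the kind of security model being specified, the following sketches a MIDP-style protection-domain permission check: each suite runs in a domain mapping permissions to an interaction mode (the domain names, permission string and modes are illustrative, not the MIDP 3.0 policy tables or the paper's Coq specification):

```python
# Interaction modes of a permission within a protection domain.
ALLOWED, USER, DENIED = "allowed", "user", "denied"

# Hypothetical example domains for illustration only.
DOMAINS = {
    "manufacturer": {"javax.microedition.io.Connector.http": ALLOWED},
    "untrusted": {"javax.microedition.io.Connector.http": USER},
}

def check_permission(domain, permission, user_grants):
    """Return True iff a protected API call may proceed: allowed
    outright by the domain, or user-mode and granted by the user."""
    mode = DOMAINS.get(domain, {}).get(permission, DENIED)
    if mode == ALLOWED:
        return True
    if mode == USER:
        return permission in user_grants
    return False
```

A formal specification like the one in the paper states such rules as inductive definitions and proves invariants over all reachable states, rather than testing a particular implementation.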
Citations: 1
Performance Evaluation of the Covariance Descriptor for Target Detection
Pub Date : 2009-11-10 DOI: 10.1109/SCCC.2009.7
Pedro Cortez Cargill, Cristobal Undurraga Rius, D. Mery, Á. Soto
In computer vision, there has been a strong advance in creating new image descriptors. A descriptor that has recently appeared is the Covariance Descriptor, but there have not been any studies about the different methodologies for its construction. To address this problem we have conducted an analysis of the contribution of diverse features of an image to the descriptor, and therefore their contribution to the detection of varied targets, in our case faces and pedestrians. To that end we have defined a methodology to determine the performance of the covariance matrix created from different sets of characteristics. We are now able to determine the best set of features for face and people detection for each problem. We have also established that not every combination of features can be used, because a correlation might not exist between them. Finally, when the analysis is performed with the best set of features, we reach a performance of 99% for the face detection problem, and 85% for the pedestrian detection problem. With this we hope to have built a more solid base for choosing features for this descriptor, allowing us to move forward to other topics such as object recognition or tracking.
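The covariance descriptor itself is straightforward to compute: for a region, build a per-pixel feature vector and take the covariance matrix of those vectors. A minimal NumPy sketch follows, using one plausible feature set (position, intensity and first derivatives) out of the many combinations such a study evaluates:

```python
import numpy as np

def covariance_descriptor(img):
    """Covariance descriptor of a grayscale region: the covariance
    matrix of a per-pixel feature vector. The feature set chosen
    here (x, y, intensity, dI/dx, dI/dy) is illustrative; the whole
    point of the evaluation is that this choice matters."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dx = np.gradient(img, axis=1)
    dy = np.gradient(img, axis=0)
    F = np.stack([xs.ravel(), ys.ravel(), img.ravel(),
                  dx.ravel(), dy.ravel()])   # features x pixels
    return np.cov(F)                         # 5x5 symmetric matrix

desc = covariance_descriptor(np.random.rand(32, 32))
```

The descriptor's size depends only on the number of features, not on the region size, which is what makes it attractive for detection; comparing two descriptors requires a metric on symmetric positive-definite matrices rather than plain Euclidean distance.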
Citations: 16
Efficient Algorithms for Context Query Evaluation over a Tagged Corpus
Pub Date : 2009-11-10 DOI: 10.1109/SCCC.2009.16
Jérémy Félix Barbay, A. López-Ortiz
We present an optimal adaptive algorithm for context queries in tagged content. The queries consist of locating instances of a tag within a context specified by the query, using patterns with preorder, ancestor-descendant and proximity operators in the document tree implied by the tagged content. The time taken to resolve a query $Q$ on a document tree $T$ is logarithmic in the size of $T$, proportional to the size of $Q$, and to the difficulty of the combination of $Q$ with $T$, as measured by the minimal size of a certificate of the answer. The performance of the algorithm is no worse than the classical worst-case optimal, while provably better on simpler queries and corpora. More formally, the algorithm runs in time $O(\delta k \lg(n/(\delta k)))$ in the standard RAM model and in time $O(\delta k \lg\lg\min(n,\sigma))$ in the $\Theta(\lg n)$-word RAM model, where $k$ is the number of edges in the query, $\delta$ is the minimum number of operations required to certify the answer to the query, $n$ is the number of nodes in the tree, and $\sigma$ is the number of labels indexed.
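Adaptive bounds of this certificate-sensitive kind are typically achieved with doubling (galloping) search over sorted sequences. As background, here is a minimal sketch of adaptive intersection of two sorted lists, which runs fast exactly when a short certificate exists; this is the standard primitive, not the paper's tree algorithm:

```python
from bisect import bisect_left

def gallop(arr, target, lo):
    """Doubling search: find the insertion point of target in
    arr[lo:] in O(lg d) time, where d is the distance advanced."""
    step = 1
    hi = lo + 1
    while hi < len(arr) and arr[hi] < target:
        lo, hi, step = hi, hi + step, step * 2
    return bisect_left(arr, target, lo, min(hi + 1, len(arr)))

def adaptive_intersect(a, b):
    """Intersect two sorted lists, galloping past runs that cannot
    contribute; cost is proportional to the certificate size times
    a logarithmic factor, not to len(a) + len(b)."""
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i = gallop(a, b[j], i)
        else:
            j = gallop(b, a[i], j)
    return out
```

On inputs where one list's elements all fall between two consecutive elements of the other, the certificate has constant size and the galloping steps skip whole runs at logarithmic cost, mirroring the "provably better on simpler queries" behavior claimed above.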
Citations: 2
Typing Textual Entities and M2T/T2M Transformations in a Model Management Environment
Pub Date : 2009-11-10 DOI: 10.1109/SCCC.2009.25
Andrés Vignaga
Global Model Management (GMM) is a model-based approach for managing large sets of interrelated, heterogeneous and complex MDE artifacts. Such artifacts are usually represented as models; however, as many Domain Specific Languages have a textual concrete syntax, GMM also supports textual entities and model-to-text/text-to-model transformations, which are projectors that bridge the MDE technical space and the Grammarware technical space. As the transformations supported by GMM are executable artifacts, typing is critical for preventing type errors during execution. We previously proposed the cGMM calculus, which formalizes the notion of typing in GMM. In this work, we extend cGMM with new types and rules for supporting textual entities and projectors. With such an extension, those artifacts may participate in transformation compositions addressing larger transformation problems. We illustrate the new constructs in the context of an interoperability case study.
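The role of typing in preventing composition errors can be illustrated with a toy model: each transformation declares source and target types, and composition is rejected unless they line up (the class, the Text/Model types and the parse/print projectors are illustrative names, not the cGMM calculus):

```python
class Transformation:
    """A named, typed transformation: src -> dst."""

    def __init__(self, name, src, dst, fn):
        self.name, self.src, self.dst, self.fn = name, src, dst, fn

    def compose(self, other):
        """self ; other — apply self first, then other. Rejected at
        composition time when the intermediate types do not match,
        which is the kind of error a typing discipline rules out
        before any transformation executes."""
        if self.dst != other.src:
            raise TypeError(
                f"cannot compose {self.name}: ... -> {self.dst} "
                f"with {other.name}: expects {other.src}")
        return Transformation(f"{self.name};{other.name}",
                              self.src, other.dst,
                              lambda m: other.fn(self.fn(m)))

# A T2M projector followed by an M2T projector round-trips text.
t2m = Transformation("parse", "Text", "Model", lambda s: {"body": s})
m2t = Transformation("print", "Model", "Text", lambda m: m["body"])
roundtrip = t2m.compose(m2t)
```

Here the projectors play the bridging role described above: `parse` crosses from the Grammarware space into models, `print` crosses back, and the type check guarantees only well-formed chains of such steps can be built.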
Citations: 1
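The typing concern in the abstract above — rejecting ill-formed compositions of executable transformations before running them — can be sketched in a few lines. This is not the cGMM calculus from the paper; it is an assumed-for-illustration Python model in which `ModelType`, `TextType`, `Transformation`, and `compose` are our own names, and the example transformations are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelType:
    """A model conforming to a metamodel (MDE technical space)."""
    metamodel: str

@dataclass(frozen=True)
class TextType:
    """A textual entity conforming to a grammar (Grammarware space)."""
    grammar: str

@dataclass(frozen=True)
class Transformation:
    """A typed transformation or projector from src to dst."""
    name: str
    src: object
    dst: object

def compose(f, g):
    """Type-check the composition f;g (run f, then g): g may only
    consume what f produces, otherwise executing the chain would fail."""
    if f.dst != g.src:
        raise TypeError(f"cannot compose {f.name};{g.name}: "
                        f"{f.dst} produced but {g.src} expected")
    return Transformation(f"{f.name};{g.name}", f.src, g.dst)

# A T2M projector injects Java source text into a model, an M2M
# transformation maps it to a UML model, and an M2T projector
# extracts HTML text — bridging Grammarware -> MDE -> Grammarware.
t2m = Transformation("java2model", TextType("Java"), ModelType("JavaMM"))
m2m = Transformation("java2uml", ModelType("JavaMM"), ModelType("UML"))
m2t = Transformation("uml2doc", ModelType("UML"), TextType("HTML"))
chain = compose(compose(t2m, m2m), m2t)
```

Giving textual entities and projectors types of their own, as the paper does for cGMM, is exactly what lets a checker like this reject a chain such as `compose(m2m, t2m)` statically instead of failing at execution time.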
Journal
2009 International Conference of the Chilean Computer Science Society