
2011 Sixth International Conference on Digital Information Management: Latest Publications

BatCave: Adding security to the BATMAN protocol
Pub Date : 2011-12-01 DOI: 10.1109/ICDIM.2011.6093328
Anne Gabrielle Bowitz, Espen Grannes Graarud, L. Brown, M. Jaatun
The Better Approach To Mobile Ad-hoc Networking (BATMAN) protocol is intended as a replacement for protocols such as OLSR, but just like most such efforts, BATMAN has no built-in security features. In this paper we describe security extensions to BATMAN that control network participation and prevent unauthorized nodes from influencing network routing.
Citations: 6
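The paper does not reproduce its mechanism here, but the idea of controlling network participation and protecting routing messages can be sketched generically: admitted nodes share a key and authenticate their originator messages (OGMs), so receivers silently drop unauthenticated ones. This is a hypothetical illustration under that assumption, not the BatCave design; key distribution and revocation are omitted.

```python
import hmac
import hashlib

# Assumed for illustration: a symmetric key distributed only to admitted nodes.
NETWORK_KEY = b"shared-secret-distributed-to-admitted-nodes"

def sign_ogm(payload: bytes, key: bytes = NETWORK_KEY) -> bytes:
    """Append an HMAC-SHA256 tag so receivers can verify the sender was admitted."""
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

def verify_ogm(message: bytes, key: bytes = NETWORK_KEY):
    """Return the payload if the tag checks out, else None (drop the OGM)."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None
```

A node without the key cannot forge a valid tag, so its OGMs cannot influence routing at honest receivers.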
Chart image understanding and numerical data extraction
Pub Date : 2011-12-01 DOI: 10.1109/ICDIM.2011.6093320
Ales Mishchenko, N. Vassilieva
Chart images in digital documents are an important source of valuable information that is largely under-utilized for data indexing and information extraction. We developed a framework that automatically extracts the data carried by charts and converts it to XML format. The proposed algorithm classifies images by chart type, detects graphical and textual components, and extracts semantic relations between graphics and text. Classification is performed by a novel model-based method, which was extensively tested against state-of-the-art supervised learning methods and showed high accuracy, comparable to that of the best supervised approaches. The proposed text detection algorithm is applied before optical character recognition and significantly improves the text recognition rate (by up to 20 times). Analysis of the graphical components and their relations to textual cues allows the chart data to be recovered. For testing purposes, a benchmark set was created with the XML/SWF Chart tool. By comparing the recovered data with the original data used for chart generation, we are able to evaluate our information extraction framework and confirm its validity.
Citations: 28
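As a toy illustration of the framework's final stage: once graphical marks have been paired with their textual labels, the recovered series can be serialized to XML. The element and attribute names below are invented for illustration, not the paper's actual schema.

```python
import xml.etree.ElementTree as ET

def chart_data_to_xml(chart_type: str, series: list) -> str:
    """Serialize recovered (label, value) pairs as a small XML document."""
    root = ET.Element("chart", attrib={"type": chart_type})
    for label, value in series:
        point = ET.SubElement(root, "point", attrib={"label": label})
        point.text = str(value)
    return ET.tostring(root, encoding="unicode")
```

For example, a bar chart recovered as `[("Q1", 10.0), ("Q2", 12.5)]` becomes one `<point>` element per bar under a `<chart type="bar">` root.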
Converting Myanmar printed document image into machine understandable text format
Pub Date : 2011-12-01 DOI: 10.1109/ICDIM.2011.6093371
Htwe Pa Pa Win, Phyo Thu Thu Khine, Khin Nwe Ni Tun
As large numbers of Myanmar document images are archived by digital libraries, an efficient strategy is needed to convert document images into a machine-understandable text format. State-of-the-art OCR systems cannot handle Myanmar script, as the language poses many challenges for document understanding. This paper therefore presents an OCR system for Myanmar Printed Documents (OCRMPD), with several proposed methods that automatically convert Myanmar printed text into machine-understandable text. First, the input image is enhanced by correcting for noise. The characters are then segmented with a novel segmentation method. Features of the isolated characters are extracted with a hybrid feature extraction method to overcome the similarity problems of Myanmar script. Finally, a hierarchical SVM classifier is used to recognize the character images. Experiments were carried out on a variety of Myanmar printed documents, and the results show the efficiency of the proposed algorithms.
Citations: 11
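The hierarchical recognition step can be sketched as a two-level dispatch: a coarse classifier first picks a group of visually similar characters, and a group-specific classifier then picks the final character. The classifiers below are stand-in callables, not the paper's SVMs, and the group and character names are invented.

```python
def hierarchical_classify(features, coarse_clf, fine_clfs):
    """Two-level classification: coarse group first, then within-group class."""
    group = coarse_clf(features)       # e.g. a cluster of similar-looking glyphs
    return fine_clfs[group](features)  # disambiguate within that cluster
```

Splitting the decision this way means each fine classifier only has to separate characters that actually look alike, which is the motivation for hierarchical schemes over a single flat classifier.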
Towards a virtual environment for capturing behavior in cultural crowds
Pub Date : 2011-12-01 DOI: 10.1109/ICDIM.2011.6093362
Divesh Lala, Sutasinee Thovutikul, T. Nishida
Cultural behavior is an area of research that can further cross-cultural understanding, and it is now starting to integrate itself into the field of information technology. One domain in which these behaviors are expressed is a crowd; however, analyzing micro-level crowd behavior in a real-world setting is impractical, since passive observation limits our understanding of true behavior. Using a virtual environment to simulate a crowd situation makes measuring an individual's in-crowd behavior feasible. This paper introduces the development of a virtual environment that enables the creation of different types of cultural crowds with which the user may interact. The parameterization of the crowds is based on the well-known cultural dimensions put forward by Hofstede. One of these dimensions, individualism/collectivism, was mapped to agent characteristics during a series of simulations, and it was found that two distinct types of crowd could be generated. For the dimensions that have not yet been examined, the proposed environment provides an ideal opportunity to address this gap in the research, as well as a tool with which other types of experimentation can be performed.
Citations: 26
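A mapping from a cultural dimension to agent characteristics could look like the following sketch: an individualism/collectivism score in [0, 1] drives crowd-agent parameters. The parameter names and the linear mapping are invented for illustration; the paper does not specify its mapping here.

```python
def agent_params(individualism: float) -> dict:
    """Map an individualism score (0 = collectivist, 1 = individualist)
    to hypothetical crowd-agent parameters."""
    assert 0.0 <= individualism <= 1.0
    return {
        "personal_space_m": 0.5 + 1.0 * individualism,  # individualists keep distance
        "group_cohesion": 1.0 - individualism,          # collectivists cluster
    }
```

Sweeping the score from 0 to 1 would then generate the two distinct crowd types the abstract describes, with a continuum in between.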
PRIS: Image processing tool for dealing with criminal cases using steganography technique
Pub Date : 2011-12-01 DOI: 10.1109/ICDIM.2011.6093348
R. Ibrahim, Teoh Suk Kuan
Hiding data inside an image is a practical way of concealing secret information from intruders; image processing can then be used to recover the data from the image. In this paper, we propose a new algorithm for hiding data inside an image using steganography. The original data can be retrieved from the image using the same approach. Applying the proposed algorithm, a system called the Police Report Imaging System (PRIS) was developed to handle secret information for criminal cases. The system was then tested to assess the viability of the proposed algorithm, and the peak signal-to-noise ratio (PSNR) was measured for each tested image. The stego images show high PSNR values, so this new steganography algorithm is efficient at hiding data inside an image to handle information for criminal cases.
Citations: 3
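The paper does not publish its algorithm here, so the sketch below only shows the generic ingredients it mentions: least-significant-bit (LSB) style embedding into 8-bit pixel values, and the PSNR metric used to judge how little the stego image differs from the original.

```python
import math

def embed_bits(pixels, bits):
    """Hide a bit stream in the least significant bit of each 8-bit pixel."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def psnr(original, modified):
    """Peak signal-to-noise ratio in dB between two 8-bit images (flat lists)."""
    mse = sum((a - b) ** 2 for a, b in zip(original, modified)) / len(original)
    return float("inf") if mse == 0 else 10 * math.log10(255 ** 2 / mse)
```

Because LSB embedding changes each pixel by at most 1, the mean squared error stays tiny and the PSNR stays high, which is exactly the property the abstract reports for its stego images.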
On mining association rules with semantic constraints in XML
Pub Date : 2011-12-01 DOI: 10.1109/ICDIM.2011.6093337
Md. Sumon Shahriar, Jixue Liu
An improved association rule mining technique with semantic constraints is proposed for XML. The semantic constraints are expressed through the close properties of items in an XML document that conforms to a schema definition. The proposed association rule mining with semantic constraints can be used to mine both content and structure in XML.
Citations: 5
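The core of any association rule miner is frequent-itemset counting; the paper's contribution (semantic constraints derived from the XML schema) is represented below only by a generic constraint predicate that filters candidate itemsets. This is a minimal sketch (itemsets of size 1 and 2 only), not the paper's algorithm.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support, constraint=lambda s: True):
    """Return itemsets (size 1-2, for brevity) with support >= min_support
    that also satisfy a semantic-constraint predicate."""
    counts = {}
    for t in transactions:
        for r in (1, 2):
            for s in combinations(sorted(t), r):
                counts[s] = counts.get(s, 0) + 1
    n = len(transactions)
    return {s: c / n for s, c in counts.items()
            if c / n >= min_support and constraint(s)}
```

In the XML setting, each "transaction" would be the item set drawn from one subtree, and the constraint would reject itemsets whose elements are not close in the schema.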
Applying multi-correlation for improving forecasting in cyber security
Pub Date : 2011-12-01 DOI: 10.1109/ICDIM.2011.6093323
E. Pontes, A. Guelfi, S. Kofuji, Anderson A. A. Silva
Currently, defense of cyberspace is mostly based on detecting and/or blocking attacks (Intrusion Detection and Prevention System, IDPS). A significant improvement to IDPS, however, is the use of forecasting techniques in a Distributed Intrusion Forecasting System (DIFS), which makes it possible to predict attacks. One issue we faced in our earlier work was the huge number of alerts produced by the IDPS, several of them false positives. Checking the veracity of alerts against other sources (multi-correlation), e.g. logs taken from the operating system (OS), is a way of reducing the number of false alerts and thereby improving the data (historical series) used by the DIFS. The goal of this paper is to propose a two-stage system that allows: (1) the use of an Event Analysis System (EAS) to multi-correlate alerts from an IDPS with the OS logs; and (2) the application of forecasting techniques to the data generated by the EAS. Laboratory tests of the two-stage system show improved reliability of the historical series and a consequent improvement in forecast accuracy.
Citations: 10
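The first stage (multi-correlation) can be sketched as cross-checking each IDPS alert against OS log entries, keeping only alerts corroborated by a log event on the same host within a small time window. The field names and the 30-second window are invented for illustration; the paper's EAS is more elaborate.

```python
from datetime import datetime, timedelta

def corroborated_alerts(alerts, os_logs, window=timedelta(seconds=30)):
    """Keep only IDPS alerts backed by an OS log entry for the same host
    within the given time window; the rest are treated as likely false positives."""
    kept = []
    for alert in alerts:
        for entry in os_logs:
            if (entry["host"] == alert["host"]
                    and abs(entry["time"] - alert["time"]) <= window):
                kept.append(alert)
                break
    return kept
```

The surviving alerts form a cleaner historical series, which is what the second stage's forecasting techniques would then consume.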
Data management and analysis at the Large Scale Data Facility
Pub Date : 2011-12-01 DOI: 10.1109/ICDIM.2011.6093357
A. García, S. Bourov, A. Hammad, T. Jejkal, Jens C. Otte, S. Pfeiffer, T. Schenker, Christian Schmidt, J. V. Wezel, Bernhard Neumair, A. Streit
The Large Scale Data Facility (LSDF) was started at the Karlsruhe Institute of Technology (KIT) at the end of 2009 to address the growing need for value-added storage services for its data-intensive experiments. The main focus of the project is to provide scientific communities producing data collections in the terabyte to petabyte range with the necessary hardware infrastructure, as well as with adequate value-added services and support for data management, processing, and preservation. In this work we describe the project's infrastructure and services design, as well as its metadata handling. Community-specific metadata schemas, a metadata repository, an application programming interface, and a graphical tool for accessing the resources were developed to further support the processing workflows of the partner scientific communities. The analysis workflow for high-throughput microscopy images used to study biomedical processes is described in detail.
Citations: 6
Geo-local contents system with mobile devices
Pub Date : 2011-12-01 DOI: 10.1109/ICDIM.2011.6093360
Kazunari Ishida
This paper investigates the quality of the geographical information provided by geo-media such as Foursquare, and its geographical distribution. The results show that geographical information contributed by autonomous individuals contains numerous errors and missing data, even though it tends to include detailed street addresses. In addition, the information is heavily clustered in metropolitan areas. To reduce the errors, missing data, and geographically biased distribution of information, geo-local content systems were developed with mobile devices for people in local communities. Members of a local community, e.g. shopping districts and tourist spots, have strong incentives to provide high-quality information to their customers. Hence, the systems are provided to people in these communities so that a vast amount of geo-local content will be published on the Internet.
Citations: 0
Comparison of voice features for Arabic speech recognition
Pub Date : 2011-12-01 DOI: 10.1109/ICDIM.2011.6093369
M. Alsulaiman, Muhammad Ghulam, Z. Ali
Selection of speech features for speech recognition has been investigated for languages other than Arabic. The Arabic language has its own characteristics; hence some speech features may be better suited to Arabic speech recognition than others. In this paper, several feature extraction techniques are explored to find the features that give the highest speech recognition rate. Our investigation shows that Mel-Frequency Cepstral Coefficients (MFCC) give the best result. We also use an operator well known in the image processing field to modify the way MFCC are calculated, resulting in a new feature that we call LBPCC. We propose a way of using this operator, and then conduct experiments to test the proposed feature.
Citations: 18
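At the heart of MFCC is the mel scale, a perceptual frequency warping; the rest of the pipeline (mel filterbank, log energies, DCT) and the paper's LBP-based variant (LBPCC) are omitted here. The 2595/700 constants are the common HTK-style mel mapping.

```python
import math

def hz_to_mel(f_hz: float) -> float:
    """Convert a frequency in Hz to the mel scale (HTK-style formula)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m: float) -> float:
    """Inverse mapping: mel value back to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
```

Spacing filterbank center frequencies uniformly in mel rather than in Hz gives finer resolution at low frequencies, mirroring human pitch perception, which is why MFCC works well as a speech feature.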