
Latest publications from the 2017 3rd International Conference on Science in Information Technology (ICSITech)

Myanmar optical character recognition using block definition and featured approach
Pub Date : 2017-10-01 DOI: 10.1109/ICSITECH.2017.8257131
Zu Zu Aung, Cho Me Me Maung
Optical Character Recognition (OCR) can be used in many applications such as machine translation, postal processing, script recognition, text-to-speech, reading aids for the blind, etc. A Myanmar OCR system is essential for converting the numerous published books, newspapers and journals of Myanmar into editable computer text files. Recognizing old printed Myanmar documents is challenging because of poor print quality, the absence of standard alphabets and known fonts, ink bleeding through pages, uneven backgrounds, broken characters, and overlapped or mixed scripts. This paper presents a new block definition method for isolating characters in printed Myanmar historical text. The proposed Myanmar optical character recognition (MOCR) system uses a local adaptive thresholding method for binarization and skew-slant correction, and a thinning algorithm is applied to separate lines and words. For character isolation, the block definition method is applied, and an adaptive neuro-fuzzy inference system (ANFIS) matches the extracted features against a trained database to produce machine-readable text. Myanmar alphabets include consonants, vowels, medials and digits. Using the block definition method, consonants and vowels are isolated easily, and a higher OCR accuracy rate is obtained. Experimental results on different old Myanmar documents demonstrate the efficiency of the proposed algorithms.
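The paper itself includes no code; as a rough illustration of the kind of local adaptive thresholding used for the binarization step, here is a minimal numpy/scipy sketch (the window size and offset are assumed values, not the authors' parameters).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_threshold(gray, window=25, offset=10):
    """Local mean (adaptive) thresholding: a pixel counts as foreground (ink)
    if it is darker than the mean of its neighbourhood minus an offset.

    gray   : 2-D numpy array of grayscale intensities in [0, 255]
    window : side length of the local neighbourhood (assumed value)
    offset : constant subtracted from the local mean (assumed value)
    """
    gray = gray.astype(np.float64)
    local_mean = uniform_filter(gray, size=window)  # mean over window x window
    # Dark pixels (ink) fall below the shifted local mean -> mark as 1.
    return (gray < local_mean - offset).astype(np.uint8)

# Example with a synthetic image: a dark stroke on an uneven background.
if __name__ == "__main__":
    img = np.full((100, 100), 200.0)
    img += np.linspace(0, 40, 100)   # uneven background illumination
    img[40:60, 20:80] = 60.0         # simulated dark character stroke
    mask = adaptive_threshold(img)
    print("foreground pixels:", int(mask.sum()))
```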
Cited by: 2
Discovering optimized process model using rule discovery hybrid particle swarm optimization
Pub Date : 2017-10-01 DOI: 10.1109/ICSITECH.2017.8257092
Yutika Amelia Effendi, R. Sarno
This paper presents a bio-inspired hybrid method that searches for an optimal or near-optimal business process model from an event log. The Hybrid Particle Swarm Optimization (Hybrid PSO) algorithm combines the Particle Swarm Optimization (PSO) algorithm with the Simulated Annealing (SA) method. This paper presents a method that combines the rule discovery task with Hybrid PSO. The proposed method not only discovers classification rules that produce the most optimal business process model from event logs, but also optimizes the quality of the process model. The problem is formulated as an optimization task, and the rule discovery task is used to obtain high accuracy, comprehensibility and generalization performance. After obtaining the results of the rule discovery task, Hybrid PSO is used to solve the problem. In the proposed method, continuous data are used as the data set, and a fitness function serves as the evaluation criterion for the quality of the discovered business process model. The final results show that the proposed method achieves the best average fitness and number of iterations compared with the classical PSO algorithm and the original hybrid PSO algorithm.
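The abstract describes Hybrid PSO as PSO combined with Simulated Annealing but gives no code. The following is a minimal, generic sketch of one common way to hybridize the two (an SA-style acceptance rule with a cooling temperature), using a toy sphere function as a stand-in fitness; all hyper-parameters are illustrative assumptions, not the paper's values.

```python
import math
import random

def hybrid_pso(fitness, dim=2, n_particles=20, iters=100,
               w=0.7, c1=1.5, c2=1.5, t0=1.0, cooling=0.95):
    """Generic PSO with a simulated-annealing acceptance rule (sketch)."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    temp = t0
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            delta = f - pbest_f[i]
            # SA-style acceptance: keep improvements, and sometimes accept a
            # worse position as the new personal best to escape local optima.
            if delta < 0 or random.random() < math.exp(-delta / max(temp, 1e-12)):
                pbest[i], pbest_f[i] = pos[i][:], f
            if pbest_f[i] < gbest_f:
                gbest, gbest_f = pbest[i][:], pbest_f[i]
        temp *= cooling  # cooling schedule
    return gbest, gbest_f

# Toy usage: minimize the sphere function as a stand-in fitness.
best, best_f = hybrid_pso(lambda x: sum(v * v for v in x))
print(best, best_f)
```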
Cited by: 8
Physical document validation with perceptual hash
Pub Date : 2017-10-01 DOI: 10.1109/ICSITECH.2017.8257180
Prasetyo Adi Wibowo Putro
The need to validate documents electronically is not limited to electronic documents: for specific needs, physical documents must also be validated electronically. The existing problem is that a physical document produces a different hash value every time it is digitized. This research reviews whether a perceptual hash can be used for electronic validation of physical documents. The study concludes that a perceptual hash can be used for this purpose and can detect all modifications that occur in the main information of the document.
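The paper does not specify which perceptual hash it uses; as a minimal illustration of the idea, here is an average-hash (aHash) sketch with Hamming-distance comparison in plain numpy. The hash size and the toy data are assumptions, not the paper's setup.

```python
import numpy as np

def average_hash(gray, hash_size=8):
    """Average hash (aHash), a simple perceptual hash (illustrative sketch).

    gray: 2-D numpy array of grayscale intensities.
    Returns a flat boolean array of hash_size*hash_size bits."""
    h, w = gray.shape
    # Crop so the image divides evenly into hash_size x hash_size blocks.
    gray = gray[:h - h % hash_size, :w - w % hash_size].astype(np.float64)
    bh, bw = gray.shape[0] // hash_size, gray.shape[1] // hash_size
    # Downscale by block averaging, then threshold at the global mean.
    small = gray.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance means 'perceptually the same'."""
    return int(np.count_nonzero(h1 != h2))

# Toy usage: a second digitization with slight pixel noise should keep the
# Hamming distance small, while a modified document should not.
rng = np.random.default_rng(0)
scan1 = rng.integers(0, 256, size=(256, 256)).astype(float)
scan2 = scan1 + rng.normal(0, 2, size=scan1.shape)  # simulated re-scan
print(hamming_distance(average_hash(scan1), average_hash(scan2)))
```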
Cited by: 6
Analyzing knowledge management in research laboratories based on organizational culture
Pub Date : 2017-10-01 DOI: 10.1109/ICSITECH.2017.8257125
I. F. Akmaliah, D. I. Sensuse, I. A. Wulandari, Isnaeni Nurrohmah, Rahmi Imanda, Elin Cahyaningsih, Handrie Noprisson
This study was conducted to determine the organizational culture conditions in research laboratories in order to develop a strategy for the implementation of a knowledge management system (KMS). We adopted a survey-based research approach supported by the Organizational Culture Assessment Instrument (OCAI). We collected data from members of the research laboratories in the Faculty of Computer Science, University of Indonesia. The questionnaire was given to 73 potential respondents and yielded 51 valid responses. The results show that three of the seven research laboratories exhibit a difference between their current conditions and their preferred conditions. We found that the preferred organizational culture of all research laboratories is 'Clan'.
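The paper does not include its scoring procedure. As an illustration only: OCAI responses are conventionally collected by having respondents distribute points across the four culture types (Clan, Adhocracy, Market, Hierarchy) for several dimensions, and the dominant culture is the one with the highest average score. A toy sketch under that assumption:

```python
from statistics import mean

CULTURES = ("Clan", "Adhocracy", "Market", "Hierarchy")

def dominant_culture(responses):
    """Average each culture type's score over all respondents and dimensions,
    then return the ranking (highest average first).

    responses: list of respondent records; each record is a list of
    per-dimension dicts mapping culture type -> points."""
    totals = {c: [] for c in CULTURES}
    for respondent in responses:
        for dimension in respondent:
            for culture, points in dimension.items():
                totals[culture].append(points)
    averages = {c: mean(v) for c, v in totals.items()}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)

# Toy usage with two hypothetical respondents and two dimensions each.
sample = [
    [{"Clan": 40, "Adhocracy": 20, "Market": 20, "Hierarchy": 20},
     {"Clan": 35, "Adhocracy": 25, "Market": 20, "Hierarchy": 20}],
    [{"Clan": 50, "Adhocracy": 10, "Market": 20, "Hierarchy": 20},
     {"Clan": 30, "Adhocracy": 30, "Market": 20, "Hierarchy": 20}],
]
print(dominant_culture(sample))  # 'Clan' ranks first in this toy example
```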
Cited by: 3
Color and texture features extraction on content-based image retrieval
Pub Date : 2017-10-01 DOI: 10.1109/ICSITECH.2017.8257205
Rahmaniansyah Dwi Putri, H. W. Prabawa, Y. Wihardi
The study of Content-Based Image Retrieval (CBIR) has been a concern for many researchers. To conduct a CBIR study, some essential things should be considered: the image dataset, the feature extraction method, and the image similarity measurement method. In this study, the dataset used is the Oxford Flower 17 dataset. The features employed are HSV color features, Gray Level Co-occurrence Matrix (GLCM) texture features, and the combination of both. The purpose of this study is to measure the precision of CBIR based on the proposed method. First, the digital image is segmented by applying thresholding. The image is then converted into a feature vector through feature extraction, and the similarity between images is measured by Euclidean distance. Tests on the system are based on segmented and unsegmented images. The system test with segmented images yields a mean average precision of 83.35% for HSV feature extraction, 83.4% for GLCM feature extraction, and 80.94% for the combined features. Meanwhile, the system test on unsegmented images yields a mean average precision of 82.64% for HSV feature extraction, 87.32% for GLCM feature extraction, and 85.73% for the combined features.
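The abstract names the full feature pipeline (HSV color histogram, GLCM texture features, Euclidean-distance ranking). Below is a library-free sketch of that pipeline; the bin counts, GLCM offset, quantization levels and chosen GLCM properties are assumptions rather than the paper's settings.

```python
import numpy as np

def hsv_histogram(hsv, bins=(8, 4, 4)):
    """Color feature: joint H/S/V histogram, L1-normalized.
    hsv: array of shape (H, W, 3) with channels scaled to [0, 1]."""
    hist, _ = np.histogramdd(hsv.reshape(-1, 3), bins=bins,
                             range=((0, 1), (0, 1), (0, 1)))
    return hist.ravel() / hist.sum()

def glcm_features(gray, levels=8, offset=(0, 1)):
    """Texture features from a gray-level co-occurrence matrix (one offset):
    contrast, energy, and homogeneity. gray: 2-D array in [0, 255]."""
    q = (gray.astype(np.float64) / 256 * levels).astype(int)  # quantize
    dy, dx = offset
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]
    b = q[dy:, dx:]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)  # count co-occurring pairs
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = np.sum(glcm * (i - j) ** 2)
    energy = np.sum(glcm ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])

def retrieve(query_feat, db_feats):
    """Rank database images by Euclidean distance to the query feature."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    return np.argsort(dists)

# Toy usage with random images standing in for the flower dataset.
rng = np.random.default_rng(0)
imgs = [rng.random((64, 64, 3)) for _ in range(5)]
feats = np.array([np.concatenate([hsv_histogram(im),
                                  glcm_features(im[..., 2] * 255)])
                  for im in imgs])
print(retrieve(feats[0], feats))  # index 0 should rank first
```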
Cited by: 8
Real-time location recommendation system for field data collection
Pub Date : 2017-10-01 DOI: 10.1109/ICSITECH.2017.8257123
Aris Prawisudatama, I. B. Nugraha
Field data collection is one of the main activities performed by national statistical agencies in every country. Data collection activities have a workflow similar to the Multi-Depot Vehicle Routing Problem (MDVRP). Using MDVRP to generate pre-calculated routes results in total route costs with a high standard deviation. A real-time mechanism that combines the publish/subscribe paradigm with MDVRP based on Cooperative Coevolution Algorithms (CoEAs) is proposed to reduce the inequality (large variation) of the completion time. The test results show that routes produced by the combination of the publish/subscribe paradigm and CoEAs yield more evenly distributed total route times across enumerators than the pre-calculated routes produced by MDVRP based on CoEAs alone.
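The evaluation in the abstract turns on how evenly total route times are spread across enumerators. A minimal sketch of that comparison, with entirely hypothetical route times:

```python
from statistics import mean, pstdev

def route_time_spread(route_times):
    """Summarize the balance of total route times across enumerators:
    a lower standard deviation means more equal completion times."""
    return {"mean": mean(route_times), "stdev": pstdev(route_times)}

# Hypothetical total route times (hours) per enumerator for the two schemes.
precalculated = [6.0, 9.5, 4.0, 11.0, 5.5]   # MDVRP with CoEAs only
realtime      = [7.0, 7.5, 6.8, 7.2, 7.4]    # publish/subscribe + CoEAs
print("pre-calculated:", route_time_spread(precalculated))
print("real-time     :", route_time_spread(realtime))
```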
Cited by: 0
Reusability metric on procurement of goods and services
Pub Date : 2017-10-01 DOI: 10.1109/ICSITECH.2017.8257089
M. Untoro, R. Sarno
Reusability is generally used to determine the level of integration of a program; one example is the procurement of goods and services. The objective of this paper is to analyze the reusability of components in the business process of procuring goods and services using metrics. The components are analyzed by calculating frequencies and percentages, covering the registration of documents, the completeness of documents, procurement procedures, tendering, selection of suppliers, delivery time, the total of goods or services, quality, payment, as well as taxes and inspection. The analysis shows that the 7 components that can be reused (more than 50%) are the registration of documents, tendering, selection of suppliers, the total of goods or services, quality, payment, as well as taxes and inspection. The highest percentage among all components is the registration of documents (100%).
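The metric described amounts to the share of process variants (or cases) in which each component appears. A toy sketch under that reading, with made-up procurement variants:

```python
from collections import Counter

def reuse_percentages(process_variants):
    """Percentage of process variants in which each component appears.
    A component with a percentage above 50% is considered reusable here."""
    counts = Counter()
    for variant in process_variants:
        counts.update(set(variant))  # count each component once per variant
    n = len(process_variants)
    return {component: 100.0 * c / n for component, c in counts.items()}

# Hypothetical procurement process variants (lists of components).
variants = [
    ["registration of documents", "tender", "selection of suppliers", "payment"],
    ["registration of documents", "tender", "quality", "payment"],
    ["registration of documents", "delivery time", "selection of suppliers"],
]
for component, pct in sorted(reuse_percentages(variants).items()):
    print(f"{component}: {pct:.0f}%")
```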
Cited by: 0
Question answering system with Hidden Markov Model speech recognition
Pub Date : 2017-10-01 DOI: 10.1109/ICSITECH.2017.8257121
Hobert Ho, V. C. Mawardi, Agus Budi Dharmawan
A question answering system is a system that can give answers to questions from a user. In general, question answering systems generate answers to text questions. This paper reports the results of a question answering system that can receive input questions from both speech and text. A Hidden Markov Model (HMM) is used to recognize the voice input provided by the user. The HMM speech recognition uses feature values obtained with the Mel Frequency Cepstrum Coefficients (MFCC) method. The question answering system uses the Vector Space Model from the Lucene search engine to retrieve relevant documents. The results show that the HMM speech recognition system's success rate in recognizing words is 83.31%, obtained from 13 test questions. The results also show that the question answering system can answer 4 out of the 6 questions correctly identified by the speech recognition system.
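The retrieval back end in the paper is Lucene's Vector Space Model. As a rough stand-in (not Lucene), here is a TF-IDF/cosine-similarity sketch of vector-space retrieval using scikit-learn; the documents and query are made up, and in the full system the query text would come from the HMM/MFCC speech recognizer.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve(question, documents, top_k=1):
    """Vector space retrieval: represent documents and the (recognized)
    question as TF-IDF vectors and rank documents by cosine similarity."""
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(documents)
    query_vector = vectorizer.transform([question])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    ranked = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)
    return [(documents[i], float(scores[i])) for i in ranked[:top_k]]

# Toy usage: the question text would come from the speech recognizer.
docs = [
    "Jakarta is the capital city of Indonesia.",
    "The Hidden Markov Model is a statistical model for sequences.",
    "MFCC features are widely used in speech recognition.",
]
print(retrieve("what is the capital of indonesia", docs))
```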
Cited by: 2
Design of a system for detection of environmental variables applied in data centers
Pub Date : 2017-10-01 DOI: 10.1109/ICSITECH.2017.8257144
Leonel Hernandez, Y. Calderón, Hugo E. Martinez, A. Pranolo, I. Riyanto
Information processing centers, including corporate data centers, have evolved, and the distributed processing of server and network-device technologies raises the demand for energy and cooling. It is therefore important to analyze the environment more carefully, taking into account that increases in density and variation are aspects that change the methodology for monitoring the computing environment. Critical threats that can be detected include elevated temperature, humidity, unauthorized access, and inappropriate actions by data center staff. In this project, a prototype is designed and implemented to detect basic environmental variables such as temperature, pressure, current and humidity, which are vital to control within a data center, based on concepts provided by the Internet of Things (IoT). The project also reviews the current status of the University's data centers to determine whether they are energy efficient. It presents the implementation process of the detection system, the components used, and the network connection for sending the information. This project will serve as a starting point for the future development of similar systems, allowing more variables to be defined, and as a solution for the business sector in the region and the country. In our business environment, there are few similar developments at a reasonable cost that the various entities can execute. The project offers the idea of a competitive system for detecting environmental variables in the data center that provides security and reliability.
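The paper describes a sensing prototype rather than publishing code. As a loose illustration of the monitoring logic (read environmental variables, compare against thresholds, raise alerts), here is a sketch with simulated readings; the thresholds and variables are assumed values, not the project's configuration.

```python
import random
import time

# Illustrative alert thresholds (assumed values, not the project's settings).
THRESHOLDS = {"temperature_c": 27.0, "humidity_pct": 60.0, "current_a": 16.0}

def read_sensors():
    """Stand-in for the real sensor readings the prototype would collect."""
    return {
        "temperature_c": random.uniform(20.0, 30.0),
        "humidity_pct": random.uniform(35.0, 70.0),
        "current_a": random.uniform(5.0, 20.0),
    }

def check(readings):
    """Return the variables that exceed their thresholds."""
    return {k: v for k, v in readings.items() if v > THRESHOLDS[k]}

if __name__ == "__main__":
    for _ in range(3):  # a few monitoring cycles
        readings = read_sensors()
        alerts = check(readings)
        if alerts:
            # In the real prototype, this is where the alert would be sent
            # over the network (e.g. to a monitoring dashboard or broker).
            print("ALERT:", alerts)
        else:
            print("OK:", readings)
        time.sleep(1)
```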
Cited by: 7
Community and important actors analysis with different keywords in social network
Pub Date : 2017-10-01 DOI: 10.1109/ICSITECH.2017.8257163
Nanang Cahyana, R. Munir
Twitter has hundreds of millions of users around the world, and using Twitter as material for social network analysis is in high demand. Social network analysis can analyze the groups and actors of a social network so that the behaviors those groups and actors will perform can be detected early. However, social network analysis in general has not revealed strong groups and actors because it uses only one keyword. As a result, this approach has difficulty detecting early events of groups and actors, especially those associated with cyberterrorism. Therefore, a social network analysis method is needed so that the resulting groups and actors are really strong and their future behavior can be detected early. The method in question uses several different keywords that share the same topic. With this method, a network pattern of groups and powerful actors related to the desired topic can be obtained, so that the future behavior of groups and actors can be detected earlier. The results show that different keywords with highly similar topics can produce stronger groups and actors, and can increase the values of the graph metrics. This makes the method feasible for searching relationships between different keywords to find powerful communities and important actors in a social network.
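The abstract's analysis combines community (group) detection with importance metrics on a graph built from several keywords. Below is a minimal networkx sketch of that kind of analysis; the interaction edges are invented for illustration.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical interaction graph: an edge means one account mentioned or
# retweeted another while using any of the topic keywords.
edges = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),   # keyword 1 cluster
    ("dave", "erin"), ("dave", "frank"), ("erin", "frank"),   # keyword 2 cluster
    ("carol", "dave"),                                        # bridge between topics
]
G = nx.Graph(edges)

# Communities (groups) via greedy modularity maximization.
communities = list(greedy_modularity_communities(G))
print("communities:", [sorted(c) for c in communities])

# Important actors via centrality metrics on the merged-keyword graph.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
top_actor = max(G.nodes, key=lambda n: betweenness[n])
print("degree centrality:", degree)
print("highest-betweenness actor:", top_actor)
```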
Cited by: 1