
Latest publications from The Kips Transactions:partd

Functional Test Automation for Android GUI Widgets Using XML
Pub Date : 2012-04-30 DOI: 10.3745/KIPSTD.2012.19D.2.203
Yingzhe Ma, Eun-Man Choi
Capture-and-replay is a common technique for automating GUI testing. On the Android platform, however, it cannot be applied directly, because the testing framework already set up and supported by Google lacks a way to automatically link GUI elements to the actions that handle widget events. Without capture-and-replay tools, testers must design and implement test scenarios from the specification and link every GUI element to its event-handling code by hand. This paper proposes an improved and optimized approach, compared with the common capture-and-replay technique, for automatically testing Android GUI widgets. XML is used to extract GUI elements from applications while tracing the actions that handle widget events. After click events are traced by monitoring during the capture phase, test cases are created in the replay phase by communicating the status of the activated widgets through API events.
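The abstract gives no implementation details; as a rough, hedged illustration of the XML-extraction step it describes, the sketch below parses a made-up Android layout file and lists the widgets (with their ids and any declared click handlers) that a capture phase would need to bind to event-handling code. The layout string, attribute names, and `extract_widgets` helper are illustrative assumptions, not the paper's tooling.

```python
# Minimal sketch: extract candidate GUI widgets from an Android XML layout.
# The layout below is a made-up example; real layouts come from the app's resources.
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

layout_xml = """
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android">
    <Button android:id="@+id/login" android:clickable="true"/>
    <EditText android:id="@+id/username"/>
    <CheckBox android:id="@+id/remember" android:onClick="onRemember"/>
</LinearLayout>
"""

def extract_widgets(xml_text):
    """Return (tag, id, handler) for widgets a capture phase could bind to events."""
    root = ET.fromstring(xml_text)
    widgets = []
    for elem in root.iter():
        wid = elem.get(ANDROID_NS + "id")
        if wid is None:
            continue  # only widgets with ids can be addressed during replay
        handler = elem.get(ANDROID_NS + "onClick")  # explicit handler, if declared
        widgets.append((elem.tag, wid, handler))
    return widgets

for tag, wid, handler in extract_widgets(layout_xml):
    print(f"{tag:10s} {wid:20s} handler={handler}")
```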
Citations: 0
An Analytic Study on the Categorization of Query through Automatic Term Classification
Pub Date : 2012-04-30 DOI: 10.3745/KIPSTD.2012.19D.2.133
Taeseok Lee, Do-Heon Jeong, Young-Su Moon, Minsoo Park, Mi-Hwan Hyun
Queries entered in a search box are the result of users' active information seeking; search logs are therefore important data that represent users' information needs. The purpose of this study is to examine whether there is a relationship between automatically classified query categories and the categories of the documents accessed. Search sessions were identified in the 2009 NDSL (National Discovery for Science Leaders) log dataset of KISTI (Korea Institute of Science and Technology Information), and the queries and items used were extracted per session. The queries were processed with an automatic classifier, and the resulting categories were compared with the subject categories of the items used. The average similarity was 58.8% for the automatic classification of the top 100 queries. Interestingly, this is lower than the 76.8% obtained when the searches were evaluated by experts. The difference is explained by query terms that are newly emerging as topics of concern in other fields of research.
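As a rough sketch of how such a similarity figure could be computed, the snippet below compares the automatically assigned category of each query with the subject categories of the items used in the same session and reports the fraction of matches. The toy records and the match criterion are assumptions for illustration, not the authors' exact procedure.

```python
# Hypothetical session records: (auto-assigned query category, categories of items used).
sessions = [
    ("biology", {"biology", "medicine"}),
    ("physics", {"chemistry"}),
    ("computer science", {"computer science"}),
]

def average_similarity(records):
    """Fraction of sessions whose query category appears among the used items' categories."""
    hits = sum(1 for query_cat, item_cats in records if query_cat in item_cats)
    return hits / len(records)

print(f"average similarity: {average_similarity(sessions):.1%}")
```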
Citations: 2
Methods to Apply GoF Design Patterns in Service-Oriented Computing
Pub Date : 2012-04-30 DOI: 10.3745/KIPSTD.2012.19D.2.187
M. Kim, H. La, Soo Dong Kim
As a representative reuse paradigm, service-oriented computing (SOC) is largely centered on publishing and subscribing to reusable services; here, SOC is used as a term covering both service-oriented architecture and cloud computing. Service providers can earn high profits from reusable services, and service consumers can develop their applications with less time and effort by reusing those services. Design patterns (DPs) are a set of reusable methods for resolving commonly occurring design problems and providing design structures that deal with them by following the open/closed principle. However, since DPs were mainly proposed for building object-oriented systems, and there are clear differences between the object-oriented paradigm and SOC, applying DPs to SOC design problems is challenging. Hence, DPs need to be customized with two aspects in mind: service providers should be able to design services that are highly reusable and reflect their unique characteristics, and service consumers should be able to develop their target applications by reusing and customizing services as quickly as possible. Therefore, we propose a set of DPs customized for SOC. With the proposed DPs, we believe that service providers can effectively develop highly reusable services and that service consumers can efficiently adapt services for their applications.
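The paper's customized patterns are not reproduced here; purely to illustrate the general idea of reusing a GoF pattern in a service-consumption setting, the sketch below applies the classic Adapter pattern so that a consumer written against one interface can reuse a provider service with a different signature. All class and method names are hypothetical.

```python
# Illustrative Adapter pattern in a service-consumer setting (names are hypothetical).
class LegacyWeatherService:
    """Provider's published service with its own operation signature."""
    def fetch(self, city_code: str) -> dict:
        return {"code": city_code, "temp_c": 21.5}

class WeatherPort:
    """Interface the consumer application was written against."""
    def current_temperature(self, city: str) -> float:
        raise NotImplementedError

class WeatherAdapter(WeatherPort):
    """Adapter: lets the consumer reuse the provider service unchanged."""
    def __init__(self, service: LegacyWeatherService, city_codes: dict):
        self.service = service
        self.city_codes = city_codes
    def current_temperature(self, city: str) -> float:
        return self.service.fetch(self.city_codes[city])["temp_c"]

port = WeatherAdapter(LegacyWeatherService(), {"Seoul": "KR-SEL"})
print(port.current_temperature("Seoul"))
```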
Citations: 1
Performance Enhancement of a DVA-tree by the Independent Vector Approximation
Pub Date : 2012-04-30 DOI: 10.3745/KIPSTD.2012.19D.2.151
Hyun-Hwa Choi, Kyuchul Lee
Most distributed high-dimensional indexing structures provide reasonable search performance, especially when the dataset is uniformly distributed. When the dataset is clustered or skewed, however, search performance gradually degrades compared with the uniform case. We propose a method for improving the k-nearest-neighbor search performance of the distributed vector approximation-tree on strongly clustered or skewed datasets. The basic idea is to compute the volumes of the leaf nodes in the top-tree of a distributed vector approximation-tree and to assign them different numbers of bits in order to preserve the identification performance of the vector approximation; in other words, more bits are assigned to high-density clusters. We conducted experiments comparing search performance with the distributed hybrid spill-tree and the distributed vector approximation-tree on synthetic and real datasets. The experimental results show that the proposed scheme yields consistent and significant performance improvements over the distributed vector approximation-tree for strongly clustered or skewed datasets.
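A minimal sketch of the bit-allocation idea, under the assumption that denser leaf nodes (more points per unit volume) should simply receive a larger share of a fixed bit budget; the density measure, the logarithmic weighting, and the numbers are assumptions rather than the paper's algorithm.

```python
# Sketch: assign more vector-approximation bits to denser leaf nodes of a top-tree.
# Leaf nodes are (name, point_count, volume); all numbers are made up.
import math

leaves = [("A", 5000, 0.2), ("B", 800, 1.0), ("C", 200, 2.5)]
TOTAL_BITS = 24  # fixed bit budget to distribute across leaf nodes

def allocate_bits(nodes, total_bits):
    densities = [count / volume for _, count, volume in nodes]
    weight = sum(math.log2(1 + d) for d in densities)
    alloc = {}
    for (name, _, _), d in zip(nodes, densities):
        alloc[name] = max(1, round(total_bits * math.log2(1 + d) / weight))
    return alloc

print(allocate_bits(leaves, TOTAL_BITS))  # denser nodes receive more bits
```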
Citations: 1
Application of the Flow-Capturing Location-Allocation Model to the Seoul Metropolitan Bus Network for Selecting Pickup Points
Pub Date : 2012-04-30 DOI: 10.3745/KIPSTD.2012.19D.2.127
Jong Soo Park
In the Seoul metropolitan bus network, a bus passenger may need to pick up a parcel purchased through e-commerce at a convenient bus stop on the way home or to the office. The flow-capturing location-allocation model can be applied to select pickup points among such bus stops so that they maximize the captured passenger flows, where each passenger flow represents an origin-destination (O-D) pair of a passenger trip. In this paper, we propose a fast heuristic algorithm for selecting pickup points using a large O-D matrix extracted from five million transportation card transactions. The experimental results show which bus stops are chosen as pickup points in terms of passenger flow and capture ratio, and illustrate the spatial distribution of the top 20 pickup points on a map.
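As an illustration of the flow-capturing idea, the toy sketch below applies a greedy heuristic under the common assumption that a passenger flow is captured once any stop on its path is selected as a pickup point; the O-D flows, stop paths, and greedy rule are illustrative and may differ from the paper's heuristic.

```python
# Greedy flow-capturing sketch: pick stops that capture the most uncaptured passenger flow.
# Each flow: (passenger count, stops along the O-D path). Data are illustrative only.
flows = [
    (120, {"S1", "S2", "S3"}),
    (80,  {"S2", "S4"}),
    (60,  {"S5"}),
    (40,  {"S3", "S4"}),
]

def select_pickup_points(flows, k):
    chosen, remaining = [], list(flows)
    for _ in range(k):
        stops = {s for _, path in remaining for s in path}
        if not stops:
            break
        # choose the stop that captures the largest not-yet-captured flow
        best = max(stops, key=lambda s: sum(c for c, path in remaining if s in path))
        chosen.append(best)
        remaining = [(c, path) for c, path in remaining if best not in path]
    return chosen

print(select_pickup_points(flows, k=2))  # e.g. ['S2', ...]
```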
Citations: 0
A Semi-supervised Dimension Reduction Method Using Ensemble Approach
Pub Date : 2012-04-30 DOI: 10.3745/KIPSTD.2012.19D.2.147
C. Park
LDA is a supervised dimension reduction method that finds projective directions maximizing separability between classes, but its performance degrades severely when the number of labeled samples is small. Recently, semi-supervised dimension reduction methods have been proposed that exploit abundant unlabeled data to overcome the shortage of labeled data. However, the matrix computations typically used in statistical dimension reduction make it difficult to utilize a large amount of unlabeled data, and the extra information from unlabeled data may not be helpful enough to justify the increase in processing time. To address these problems, we propose an ensemble approach for semi-supervised dimension reduction. Extensive experimental results on text classification demonstrate the effectiveness of the proposed method.
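The abstract does not spell out the ensemble construction; as one possible, assumption-laden reading, the scikit-learn sketch below trains several small LDA projections, each on the labeled data plus a different random subset of unlabeled data pseudo-labeled by a nearest-centroid rule, and concatenates the resulting one-dimensional projections. This is an illustration of an ensemble of semi-supervised reducers, not the authors' method.

```python
# Sketch: ensemble of LDA projections that uses unlabeled data via pseudo-labels.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(20, 10)); y_lab = np.repeat([0, 1], 10)
X_lab[y_lab == 1] += 1.0                      # make the two toy classes separable
X_unlab = rng.normal(size=(200, 10))
X_unlab[100:] += 1.0

def ensemble_projection(X_lab, y_lab, X_unlab, n_members=5, subset=50):
    members = []
    for _ in range(n_members):
        idx = rng.choice(len(X_unlab), size=subset, replace=False)
        # pseudo-label the unlabeled subset with a simple nearest-centroid rule
        pseudo = NearestCentroid().fit(X_lab, y_lab).predict(X_unlab[idx])
        lda = LinearDiscriminantAnalysis(n_components=1)
        lda.fit(np.vstack([X_lab, X_unlab[idx]]), np.concatenate([y_lab, pseudo]))
        members.append(lda)
    # final reduced representation: concatenation of each member's 1-D projection
    return lambda X: np.hstack([m.transform(X) for m in members])

project = ensemble_projection(X_lab, y_lab, X_unlab)
print(project(X_lab).shape)  # (20, 5)
```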
Citations: 0
A Method for Frequent Itemsets Mining from Data Stream
Pub Date : 2012-04-30 DOI: 10.3745/KIPSTD.2012.19D.2.139
Bok-Il Seo, Jae-In Kim, Bu-Hyun Hwang
Data mining is widely used to discover knowledge in many fields. Although there are many methods for discovering association rules, most are frequency-based approaches and are therefore not well suited to a stream environment, where event data are generated continuously and storing all the data is expensive. In this paper, we propose a new method for discovering association rules in a stream environment. The method uses variable windows to extract data items; the windows vary in size according to the gap between occurrences of the same target event. Data are extracted using the COBJ (count object) calculation method, and FPMDSTN (frequent pattern mining over data streams using terminal nodes) discovers association rules from the extracted data items. Experiments show that our method is more efficient in a stream environment than conventional methods.
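The COBJ calculation and the FPMDSTN structure are not detailed in the abstract; the sketch below only illustrates the variable-window idea, cutting the event stream into windows bounded by occurrences of a target event and counting itemsets inside each window against a minimum support. Everything beyond that, including the toy stream, is an assumption.

```python
# Sketch: variable windows bounded by a target event, then frequent-itemset counting.
from itertools import combinations
from collections import Counter

stream = ["a", "b", "T", "a", "c", "b", "T", "b", "c", "T"]  # 'T' is the target event
MIN_SUPPORT = 2

def variable_windows(events, target):
    """Split the stream into windows ending at each occurrence of the target event."""
    window = []
    for e in events:
        if e == target:
            if window:
                yield window
            window = []
        else:
            window.append(e)

def frequent_itemsets(windows, min_support):
    counts = Counter()
    for w in windows:
        items = set(w)
        for size in range(1, len(items) + 1):
            for combo in combinations(sorted(items), size):
                counts[combo] += 1
    return {s: c for s, c in counts.items() if c >= min_support}

print(frequent_itemsets(list(variable_windows(stream, "T")), MIN_SUPPORT))
```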
Citations: 2
Intelligent Library Management System using RFID and USN
Pub Date : 2012-03-01 DOI: 10.3745/KIPSTD.2012.19D.3.247
Chang-Soo Lee, Sang-Kyoon Park, Jaehong Ahn
It is not easy for medium- or large-sized libraries to manage their vast array of books and media data effectively. Recently, in place of magnetic stripes and barcodes, RFID technology has been applied on a small scale to simple book management and theft-prevention initiatives. The development of RFID- and USN-based systems and technology has led to their use in a diverse range of industrial fields, including library book management systems. Using these technologies, the intelligent book management system proposed in this paper provides a more practical, effective, content-rich, and convenient way to manage books.
Citations: 2
A Model to Predict Popularity of Internet Posts on Internet Forum Sites
Pub Date : 2012-02-29 DOI: 10.3745/KIPSTD.2012.19D.1.113
Yun-Jung Lee, In-Jun Jung, G. Woo
Today, Internet users can easily create digital content and share it with others through various online content-sharing services such as YouTube, so many portal sites are flooded with user-created content (UCC) in various media such as text and video. Estimating the popularity of UCC is a crucial concern for both users and site administrators. This paper proposes a method to predict the popularity of Internet posts, a kind of UCC, using the dynamics of the online content itself. To analyze the dynamics, we regarded the access counts of Internet posts as their popularity and analyzed how the access counts vary over time. We derived a model, based on an exponential function, that predicts the popularity of a post represented by the time series of its access counts. According to the experimental results, the difference between the actual and predicted access counts is no more than 10 for 20,532 posts, which cover about 90.7% of the test set.
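The exact exponential form of the model is not given in the abstract; as an illustration, the sketch below fits a saturating exponential a·(1 − e^(−b·t)) to an invented access-count series with SciPy's curve_fit and predicts later counts. Both the functional form and the data are assumptions.

```python
# Sketch: fit an exponential-style growth curve to cumulative access counts of a post.
# The functional form a*(1 - exp(-b*t)) and the data are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, b):
    return a * (1.0 - np.exp(-b * t))

t = np.arange(1, 13)                                   # hours since posting
counts = np.array([40, 75, 98, 118, 130, 141, 148, 154, 158, 161, 163, 165])

(a, b), _ = curve_fit(model, t, counts, p0=(200.0, 0.1))
print(f"predicted saturation (final popularity): {a:.0f} views")
print(f"predicted count at hour 24: {model(24, a, b):.0f} views")
```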
Citations: 1
Prediction of Protein-Protein Interaction Sites Based on 3D Surface Patches Using SVM
Pub Date : 2012-02-29 DOI: 10.3745/KIPSTD.2012.19D.1.021
Sung-Hee Park, B. Hansen
Prediction of protein interaction sites for monomer structures can reduce the search space for protein docking and is regarded as very significant for inferring unknown functions of proteins from interacting proteins whose functions are known. On the other hand, experimental determination of interaction sites is limited because weakly interacting complexes are transient and often not stable enough for their structures to be obtained by crystallization, or even by NMR, for the most important protein-protein interactions. This work reports the calculation of 3D surface patches of complex structures and their properties, and a machine-learning approach that uses a support vector machine (SVM) to build a predictive model distinguishing 3D surface patches at interaction sites from those at non-interaction sites. To cope with class-imbalanced data, we employed an under-sampling technique. Nine properties of the patches were calculated from amino acid compositions and secondary-structure elements. With 10-fold cross-validation, the SVM-based predictive model achieved an accuracy of 92.7% in classifying 3D patches into interaction and non-interaction sites across 147 complexes.
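A minimal scikit-learn sketch of the classification setup described in the abstract: random under-sampling of the majority class, an SVM classifier, and 10-fold cross-validation over nine patch properties. The synthetic features stand in for the amino-acid-composition and secondary-structure properties; nothing here reproduces the paper's data or its 92.7% accuracy.

```python
# Sketch: under-sample the majority class, then evaluate an SVM with 10-fold CV.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Synthetic imbalanced data: 900 non-interface patches vs. 100 interface patches,
# each described by 9 properties (stand-ins for the paper's patch features).
X_neg = rng.normal(0.0, 1.0, size=(900, 9))
X_pos = rng.normal(1.0, 1.0, size=(100, 9))
X = np.vstack([X_neg, X_pos])
y = np.array([0] * 900 + [1] * 100)

def undersample(X, y, rng):
    """Keep all minority samples plus an equal-sized random subset of the majority class."""
    minority, majority = (1, 0) if (y == 1).sum() < (y == 0).sum() else (0, 1)
    keep_min = np.where(y == minority)[0]
    keep_maj = rng.choice(np.where(y == majority)[0], size=len(keep_min), replace=False)
    idx = np.concatenate([keep_min, keep_maj])
    return X[idx], y[idx]

X_bal, y_bal = undersample(X, y, rng)
scores = cross_val_score(SVC(kernel="rbf"), X_bal, y_bal, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.3f}")
```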
Citations: 2