
Latest publications from the 2017 International Conference on Data and Software Engineering (ICoDSE)

Graph analysis on ATCS data in road network for congestion detection
Pub Date : 2017-11-01 DOI: 10.1109/ICODSE.2017.8285861
Apip Ramdlani, G. Saptawati, Y. Asnar
This research develops a framework for detecting congestion on an urban road network. ATCS (Area Traffic Control System) data from the city of Bandung, containing traffic volumes, are used in the congestion detection process. Traffic flow data are collected by vehicle detectors located at intersections at 15-minute intervals. To compute spatial correlation, the road network is modeled as a graph represented by an adjacency matrix. Taking detector locations as vertices and vehicle flow directions as edges, the graph models the detector locations and flow directions at nine locations on the road network. The adjacency matrix consists of 3 matrices per time period, describing the order of spatial distances traveled by vehicles between intersection locations. To calculate spatial correlation, the autocorrelation function and the cross-correlation function, both derived from Pearson's simple correlation, are used to examine the influence of each location on the road network. The spatial correlation results show a seasonal pattern in the autocorrelation, although its magnitude shrinks as the time lag increases. From the cross-correlation calculations it can be concluded that the vehicle volume at each connected location in the road network can be estimated by observing the time series of previous seasonal periods. It follows that graph modeling is needed to simplify the spatial correlation calculation by representing the graph as a matrix. Applying Simpson's rule to the cross-correlation results, congestion can be detected at intersection locations to find the locations most critically responsible for congestion on the road network in each time period.
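The lag-shifted Pearson correlation the abstract derives can be sketched as below; the two 15-minute volume series are invented illustrative data, not values from the paper, and `y` is constructed to echo `x` one period later.

```python
import numpy as np

def cross_correlation(x, y, lag):
    """Pearson correlation between series x and y shifted by `lag` steps.

    A positive lag correlates x[t] with y[t + lag], i.e. it asks whether
    volume at one detector foreshadows volume at another `lag` periods later.
    With y == x and lag == 0 this reduces to the lag-0 autocorrelation.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical 15-minute volume counts at two connected detectors.
x = [10, 40, 80, 40, 10, 40, 80, 40]
y = [5, 10, 40, 80, 40, 10, 40, 80]
print(cross_correlation(x, y, 1))   # close to 1: x leads y by one period
print(cross_correlation(x, x, 0))   # lag-0 autocorrelation is 1
```

Scanning such correlations over increasing lags is what exposes the seasonal pattern the abstract reports.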
Citations: 0
Content based image retrieval for multi-objects fruits recognition using k-means and k-nearest neighbor
Pub Date : 2017-11-01 DOI: 10.1109/ICODSE.2017.8285855
Erwin, M. Fachrurrozi, Ahmad Fiqih, Bahardiansyah Rua Saputra, Rachmad Algani, Anggina Primanita
The uniqueness of fruits can be observed through their colors and shapes. The fruit recognition process consists of 3 stages, namely feature extraction, clustering, and recognition, each using a different method. Color features are extracted with the Fuzzy Color Histogram (FCH) method and shape features with the Moment Invariants (MI) method. The clustering stage uses the K-Means algorithm and the recognition stage uses the k-NN method. The Content-Based Image Retrieval (CBIR) process uses image features (visual contents) to search images in the database. Experiments on the fruit recognition system yielded an accuracy of 92.5% for single-object images and 90% for multi-object images.
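The clustering and recognition stages can be sketched as plain k-means followed by majority-vote k-NN; the 2-D feature vectors and fruit labels below are invented stand-ins for the FCH/MI features the paper actually extracts.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means on feature vectors (the clustering stage)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each vector to its nearest center, then move the centers.
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def knn_predict(train_X, train_y, query, k=3):
    """Majority vote among the k nearest training vectors (the recognition stage)."""
    dists = ((train_X - query) ** 2).sum(-1)
    nearest = train_y[np.argsort(dists)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]

# Hypothetical 2-D color/shape features for two fruit classes.
train_X = np.array([[0.9, 0.1], [0.8, 0.2], [0.85, 0.15],
                    [0.1, 0.9], [0.2, 0.8], [0.15, 0.85]])
train_y = np.array(["apple", "apple", "apple", "banana", "banana", "banana"])
labels, _ = kmeans(train_X, k=2)
pred = knn_predict(train_X, train_y, np.array([0.82, 0.18]), k=3)
print(labels, pred)
```

On these well-separated toy features the query lands in the "apple" neighborhood.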
Citations: 16
Implementation of landmarc method with adaptive K-NN algorithm on distance determination program in UHF RFID system
Pub Date : 2017-11-01 DOI: 10.1109/ICODSE.2017.8285863
Ahmad Fali Oklilas, Fithri Halim Ahmad, R. F. Malik
This research was conducted to predict the distance between reader and tag using a distance determination program, called the "distance program", which applies the LANDMARC method with an adaptive k-NN algorithm. The method assigns a weight to each of the k nearest reference tags of the tested tag, with k determined by the key reference tags. Unlike earlier work with the same method [5], which used 2 antennas and output the tag position as coordinates, this study uses 1 antenna and outputs the estimated distance between the reader's antenna and the tag. Using a single antenna is expected to make distance-based tag search in one environment more antenna-efficient while still maintaining good accuracy, so as not to degrade the distance estimates produced by the LANDMARC method. The test was performed on 4 tracking tags at distances of 1.4 meters, 1.9 meters, 2.8 meters, and 3.35 meters respectively, with data collected 5 times per tracking tag. Two experiments were conducted. The first applied 2 test scenarios: scenario 1 with no objects around the tag and scenario 2 with objects around the tag. The second computed the difference in percentage error between the test results of the two scenarios. The first experiment showed that scenario 1 yields average percentage errors per tracking tag of 1.280%, 1.452%, 2.107%, and 2.470%, while scenario 2 produces larger errors, with average percentage errors per tag of 3.687%, 4.225%, 4.466%, and 7.430%. The second experiment showed that scenario 2 has larger percentage errors than scenario 1 because of the objects surrounding the tracking tags; the average difference in percentage error between the two scenarios is 3.125%.
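The classic LANDMARC weighting (weights proportional to 1/E² over the k nearest reference tags, here applied to signal-strength differences) can be sketched as follows; the RSSI readings and reference distances are hypothetical, not measurements from the paper.

```python
import numpy as np

def landmarc_distance(rssi_tracked, rssi_refs, ref_distances, k=2):
    """Estimate tag-to-antenna distance from the k reference tags whose
    RSSI is closest to the tracked tag's RSSI.

    E is the RSSI difference per reference tag; weights ~ 1/E^2, so the
    reference tags that 'look' most like the tracked tag dominate the estimate.
    """
    E = np.abs(np.asarray(rssi_refs, float) - rssi_tracked)
    idx = np.argsort(E)[:k]          # k nearest reference tags in signal space
    eps = 1e-9                       # avoid division by zero on exact matches
    w = 1.0 / (E[idx] ** 2 + eps)
    w /= w.sum()                     # normalize weights to sum to 1
    return float(w @ np.asarray(ref_distances, float)[idx])

# Hypothetical reference tags at known distances with measured RSSI values.
refs_rssi = [-40.0, -48.0, -55.0, -61.0]
refs_dist = [1.0, 1.5, 2.0, 2.5]
est = landmarc_distance(-50.0, refs_rssi, refs_dist, k=2)
print(est)   # falls between the two nearest reference distances, 1.5 and 2.0
```

The estimate is a convex combination of the selected reference distances, which is why it always lands between them.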
Citations: 0
Hybrid attribute and personality based recommender system for book recommendation
Pub Date : 2017-11-01 DOI: 10.1109/ICODSE.2017.8285874
'Adli Ihsan Hariadi, Dade Nurjanah
In recent years, with the rapid growth in the number of books, finding relevant books has become a problem. People may need their peers' opinions to complete this task, but relevant books can be obtained this way only if other users or peers share their interests; otherwise, they will never get relevant books. Recommender systems are a solution to this problem: they find relevant items based on other users' experience. Although research on recommender systems keeps growing, little of it considers user personality, even though personal preferences matter greatly these days. This paper discusses our research on a hybrid method that combines attribute-based and user-personality-based methods for a book recommender system. The attribute-based part had been implemented previously. In our research, we implemented the MSV-MSL (Most Similar Visited Material to the Most Similar Learner) method, since it is the best among hybrid attribute-based methods. The personality factor is used to measure the similarity between users when building neighborhood relationships. The method is tested on the Book-Crossing dataset and the book category of the Amazon Review dataset. Our experiments show that the combined method considering user personality gives better results than the method without it on the Book-Crossing dataset; in contrast, it performs worse on the Amazon Review dataset. It can be concluded that considering user personality gives better results under certain conditions, depending on the dataset itself and the usage proportion.
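One minimal way to fold personality into neighborhood building is to blend a rating-based Pearson similarity with a personality-based one; the linear blend, the mixing weight `alpha`, and the rating/trait vectors below are all illustrative assumptions, not the paper's exact formula.

```python
import numpy as np

def pearson(u, v):
    """Pearson's simple correlation between two equal-length vectors."""
    return float(np.corrcoef(np.asarray(u, float), np.asarray(v, float))[0, 1])

def hybrid_similarity(ratings_u, ratings_v, traits_u, traits_v, alpha=0.5):
    """Blend rating similarity with personality-trait similarity.

    alpha is a hypothetical mixing weight between the two signals.
    """
    return (alpha * pearson(ratings_u, ratings_v)
            + (1 - alpha) * pearson(traits_u, traits_v))

# Hypothetical users: identical book ratings, opposed personality profiles.
sim = hybrid_similarity([5, 3, 4, 1], [5, 3, 4, 1],
                        [0.9, 0.2, 0.7, 0.4, 0.6], [0.1, 0.8, 0.3, 0.9, 0.2])
print(round(sim, 4))   # rating part is 1.0; the personality part pulls it down
```

Two users who agree on both ratings and traits score a full 1.0, while a personality mismatch reduces their neighborhood weight.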
Citations: 24
The grouping of facial images using agglomerative hierarchical clustering to improve the CBIR based face recognition system
Pub Date : 2017-11-01 DOI: 10.1109/ICODSE.2017.8285868
M. Fachrurrozi, Clara Fin Badillah, Saparudin, Junia Erlina, Erwin, Mardiana, Auzan Lazuardi
Face images can be grouped automatically using the Agglomerative Hierarchical Clustering (AHC) algorithm. The pre-processing step is feature extraction, which produces the face image feature vector. The AHC algorithm groups images using the single, average, and complete linkage methods. Grouping face images helps improve the search speed of a CBIR-based face recognition system. Cluster validation uses the Cophenetic Correlation Coefficient (CCC). The test results show that the complete linkage method has a higher CCC than the other methods, namely 0.904938, exceeding the single linkage method by 0.127558 and the average linkage method by 0.02291. A face recognition system with pre-processing clustering performs face recognition faster than one without it.
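The linkage comparison via the cophenetic correlation coefficient can be sketched with SciPy; the synthetic feature vectors below stand in for the paper's extracted face features, so the CCC values will differ from the reported ones.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# Hypothetical face feature vectors: two well-separated groups of 10 faces.
features = np.vstack([rng.normal(0, 0.3, (10, 8)),
                      rng.normal(3, 0.3, (10, 8))])

d = pdist(features)                           # condensed pairwise distances
# Pick the linkage whose dendrogram best preserves the original distances.
best = max(('single', 'average', 'complete'),
           key=lambda m: cophenet(linkage(d, m), d)[0])
Z = linkage(d, best)
ccc, _ = cophenet(Z, d)                       # cophenetic correlation coefficient
groups = fcluster(Z, t=2, criterion='maxclust')  # cut the tree into 2 groups
print(best, round(ccc, 4), groups)
```

Validating all three linkages this way and keeping the highest-CCC dendrogram mirrors the paper's selection of complete linkage.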
Citations: 6
A classification of sequential patterns for numerical and time series multiple source data — A preliminary application on extreme weather prediction
Pub Date : 2017-11-01 DOI: 10.1109/ICODSE.2017.8285845
Regina Yulia Yasmin, A. E. Sakya, Untung Merdijanto
Classification based on sequential patterns has become a very important method in data mining, useful for making predictions in alert warning systems and for strategic decisions. The need to improve the speed of sequential pattern mining is also increasing. However, previous research in this area uses categorical data as input; there is a need to process numerical data and classify the sequential patterns found in it. High-accuracy numerical data are difficult to mine, and the numerical data to be mined consist of many observational parameters. This study proposes a framework to overcome these problems. The framework categorizes the data during preprocessing and prepares it as input for sequential pattern mining and the subsequent classification process. The framework is expected to improve classification speed and scalability while maintaining classification accuracy.
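The preprocessing step that turns numerical readings into categories for pattern mining can be sketched as fixed-edge binning; the rainfall values, bin edges, and labels are invented for illustration, since the abstract does not specify the categorization scheme.

```python
import numpy as np

def discretize(series, bins, labels):
    """Map a numerical series onto categorical symbols using fixed bin edges,
    the kind of categorization the framework applies before pattern mining."""
    idx = np.digitize(series, bins)   # index of the bin each value falls into
    return [labels[i] for i in idx]

# Hypothetical rainfall readings (mm) mapped to intensity categories.
rain = [0.0, 3.2, 12.5, 48.0, 110.0]
symbols = discretize(rain, bins=[1, 10, 50, 100],
                     labels=['none', 'light', 'moderate', 'heavy', 'extreme'])
print(symbols)   # ['none', 'light', 'moderate', 'moderate', 'extreme']
```

The resulting symbol sequences are what a categorical sequential-pattern miner can consume directly.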
Citations: 3
Comparison of similarity measures in HSV quantization for CBIR
Pub Date : 2017-11-01 DOI: 10.1109/ICODSE.2017.8285854
Jasman Pardede, B. Sitohang, Saiful Akbar, M. L. Khodra
Researchers implemented various similarity measures for CBIR using HSV quantization. The similarity measures implemented in this study are Euclidean Distance, Cramer-von Mises Divergence, Manhattan Distance, Cosine Similarity, Chi-Square Dissimilarity, Jeffrey Divergence, Pearson Correlation Coefficient, and Mahalanobis Distance. The purpose of the study is to measure the image retrieval performance of a CBIR system using HSV quantization for each similarity measure. Performance is evaluated by the precision, recall, and F-measure values obtained from tests on the Wang dataset. Each similarity measure was applied to each of the categories (Africa, Beaches, Building, Bus, Dinosaur, Elephant, Flower, Horses, Mountain, and Food), with 100 images per category. The test results show that the highest precision, 100%, is achieved by Jeffrey Divergence on the Dinosaur category, and Jeffrey Divergence also gives the best average precision over all categories, 87.298%. In general, the best average precision is on the Dinosaur category (for Euclidean Distance, Manhattan Distance, Cosine Similarity, Chi-Square Dissimilarity, Jeffrey Divergence, and Pearson Correlation Coefficient), followed by the Flower category for Cramer-von Mises Divergence and, lastly, the Bus category for Mahalanobis Distance. The highest average recall, 92%, is on the Horses category with Cosine Similarity, and the best average recall over all categories, 38.700%, belongs to Manhattan Distance. In general, the best average recall is on the Horses category (for Cramer-von Mises Divergence, Manhattan Distance, Cosine Similarity, Chi-Square Dissimilarity, Jeffrey Divergence, Pearson Correlation Coefficient, and Mahalanobis Distance), while for Euclidean Distance it is on the Africa category.
The highest F-measure value, 87.255%, is on the Horses category with Cosine Similarity. The experiments show that the highest F-measure is always on the Horses category. In general, the highest F-measure is obtained with Manhattan Distance (for Africa, Beaches, Building, Bus, Dinosaur, Elephant, Flower, Mountain, and Food), while for the Horses category it is obtained with Cosine Similarity.
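Four of the eight compared measures can be sketched on quantized HSV histograms as follows; the two four-bin histograms are invented toy data, far smaller than a real HSV quantization.

```python
import numpy as np

# Distance/similarity measures between two histograms h and g.
def euclidean(h, g):
    return float(np.sqrt(((h - g) ** 2).sum()))

def manhattan(h, g):
    return float(np.abs(h - g).sum())

def cosine_sim(h, g):
    return float(h @ g / (np.linalg.norm(h) * np.linalg.norm(g)))

def chi_square(h, g):
    # Small epsilon keeps empty bins from dividing by zero.
    return float((((h - g) ** 2) / (h + g + 1e-12)).sum())

# Two hypothetical normalized HSV histograms over four quantized bins.
h = np.array([0.5, 0.3, 0.2, 0.0])
g = np.array([0.4, 0.4, 0.1, 0.1])
for name, fn in [('euclidean', euclidean), ('manhattan', manhattan),
                 ('cosine', cosine_sim), ('chi-square', chi_square)]:
    print(name, round(fn(h, g), 4))
```

For retrieval, each query histogram is ranked against all database histograms under one such measure, which is exactly the per-measure comparison the study performs.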
Citations: 6
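The abstract above compares eight similarity measures over HSV-quantized color histograms. As an illustration only (not the authors' code), here is a minimal Python sketch of a few of the listed measures, applied to toy 4-bin histograms standing in for real HSV-quantized ones:

```python
import math

def euclidean(p, q):
    # L2 distance between two histograms (lower = more similar)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def manhattan(p, q):
    # L1 (city-block) distance
    return sum(abs(a - b) for a, b in zip(p, q))

def cosine_similarity(p, q):
    # higher = more similar, unlike the distance measures above
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return dot / norm if norm else 0.0

def chi_square(p, q):
    # Chi-square dissimilarity; skip bins where both counts are zero
    return sum((a - b) ** 2 / (a + b) for a, b in zip(p, q) if a + b > 0)

def jeffrey_divergence(p, q):
    # symmetric KL-style divergence against the bin-wise mean m
    d = 0.0
    for a, b in zip(p, q):
        m = (a + b) / 2
        if a > 0 and m > 0:
            d += a * math.log(a / m)
        if b > 0 and m > 0:
            d += b * math.log(b / m)
    return d

# toy normalized "histograms"; a real system would use many more bins
h1 = [0.5, 0.3, 0.2, 0.0]
h2 = [0.4, 0.4, 0.1, 0.1]
print(euclidean(h1, h2), manhattan(h1, h2), cosine_similarity(h1, h2))
```

In a CBIR loop, the query image's histogram would be compared against every database histogram with one of these functions, and the top-k closest images returned; precision/recall are then computed from how many returned images share the query's category.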
Implementation of regular expression (regex) on knowledge management system
Pub Date : 2017-11-01 DOI: 10.1109/ICODSE.2017.8285877
Ken Dhita Tania, Bayu Adhi Tama
Hitherto, previous research on string-matching techniques for sharing explicit knowledge has shown great success. However, their implementation in a knowledge management system is still underexplored. The aim of this paper is to propose an implementation of regular expression (regex) techniques that supports all processes in a knowledge management system and yields better accuracy when searching knowledge within an organization. A web-based application prototype of regex is built, and several experiments are performed to verify the correctness of our implementation. The results show that regex performs better than traditional SQL for knowledge searching and querying.
Citations: 2
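The abstract's central claim, that regex supports richer knowledge queries than plain SQL, can be illustrated with a minimal sketch (this is not the authors' prototype; the table, column names, and sample documents are invented):

```python
import re
import sqlite3

# toy in-memory knowledge base; schema is illustrative only
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE knowledge (id INTEGER PRIMARY KEY, content TEXT)")
conn.executemany(
    "INSERT INTO knowledge (content) VALUES (?)",
    [("How to configure a VPN client",),
     ("VPN troubleshooting guide, version 2",),
     ("Office printer setup",)],
)

# SQL LIKE only supports simple substring/wildcard matching ...
like_hits = conn.execute(
    "SELECT content FROM knowledge WHERE content LIKE ?", ("%VPN%",)
).fetchall()

# ... whereas a regex can express richer patterns, e.g. 'VPN' followed
# anywhere later in the text by a version number
pattern = re.compile(r"VPN.*version \d+", re.IGNORECASE)
regex_hits = [
    row[0]
    for row in conn.execute("SELECT content FROM knowledge")
    if pattern.search(row[0])
]

print(like_hits)   # both VPN documents
print(regex_hits)  # only the document with a version number
```

The trade-off is that the regex filter here runs in application code over all rows, whereas `LIKE` is evaluated inside the database; a production knowledge management system would weigh expressiveness against query cost.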
Strategic intelligence model in supporting brand equity assessment
Pub Date : 2017-11-01 DOI: 10.1109/ICODSE.2017.8285867
Agung Aldhiyat, M. L. Khodra
This paper investigates how information can be a valuable resource for an enterprise facing the uncertainty of its changing environment, especially in determining branding strategies. The strategic intelligence model is a model expected to support management in establishing a brand positioning strategy effectively, achieving strategic objectives based on the existing condition of the brand equity. This paper analyzes Facebook comments to build a strategic intelligence model for telecommunication provider brands: the comments are run through a number of text-processing steps to determine whether the topic of each comment matches a brand-image criterion, i.e. price, ability to serve, characteristics, and features. The paper employs Naïve Bayes classifiers and DBSCAN clustering to classify the Facebook comments against the brand equity criteria, achieving an F-measure of 0.7684.
Citations: 0
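As a rough illustration of the classification half of this approach (the comments, labels, and criterion names below are invented, and the paper's actual pipeline, including its DBSCAN clustering stage, is not reproduced), a minimal multinomial Naïve Bayes with add-one (Laplace) smoothing might look like:

```python
import math
from collections import Counter, defaultdict

# toy training comments labelled with brand-image criteria;
# both the texts and the labels are invented for illustration
train = [
    ("the data plan is too expensive", "price"),
    ("cheap prices but hidden fees", "price"),
    ("customer service answered quickly", "service"),
    ("support staff were very helpful", "service"),
    ("the app has a useful coverage map feature", "feature"),
    ("new feature for checking quota is great", "feature"),
]

# fit: per-class document counts and per-class word counts
class_counts = Counter(label for _, label in train)
word_counts = defaultdict(Counter)
vocab = set()
for text, label in train:
    for w in text.split():
        word_counts[label][w] += 1
        vocab.add(w)

def predict(text):
    """Return the class with the highest posterior log-probability."""
    words = text.split()
    best_label, best_score = None, -math.inf
    for label in class_counts:
        # log prior
        score = math.log(class_counts[label] / len(train))
        total = sum(word_counts[label].values())
        for w in words:
            # add-one smoothed log likelihood of each word
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(predict("the service staff were helpful"))  # expected: service
```

A real system would preprocess the comments first (tokenization, stopword removal, possibly stemming) and evaluate the classifier with precision, recall, and F-measure on held-out labelled comments, as the paper does.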