Multiscale fuzzy entropy (MFE) is an effective algorithm that has been successfully applied in many fields to measure the complexity of a time series. However, MFE can yield inaccurate entropy estimates because the coarse-graining procedure used by the algorithm shortens the time series under investigation. A modified multiscale fuzzy entropy (MMFE) algorithm is presented in this paper to overcome this problem. In the new approach, the coarse-graining procedure is replaced by a moving-average procedure that constructs the template vectors used in calculating the fuzzy entropy. The effectiveness of the proposed MMFE algorithm is evaluated on several mixed data sets (i.e., data mixed with white noise) of various lengths. The results show that the MMFE algorithm effectively reduces the deviation in entropy estimation compared with the MFE algorithm. The MMFE algorithm is further employed to estimate the complexity and irregularity of vibration data from a rolling element bearing for fault diagnosis. It is shown that the MMFE algorithm can effectively discriminate the four bearing operating conditions under study.
{"title":"Analysis of Complex Time Series Using a Modified Multiscale Fuzzy Entropy Algorithm","authors":"Tian Han, Cheng Cheng Shi, Z. Wei, T. Lin","doi":"10.1109/IIKI.2016.12","DOIUrl":"https://doi.org/10.1109/IIKI.2016.12","url":null,"abstract":"Multiscale fuzzy entropy (MFE) is an effective algorithm which has been successfully applied in many fields for measuring the complexity of a time series. Though, MFE can yield inaccurate entropy estimations as the coarse-graining procedure used by the algorithm reduces the length of the time series under investigation. A modified multiscale fuzzy entropy (MMFE) algorithm is presented in this paper to overcome this problem. In this new approach, the coarse-graining procedure is replaced by a moving-average procedure which constructs template vectors in calculating the fuzzy entropy. The effectiveness of the proposed MMFE algorithm is evaluated on several mixed data (i.e., data mixed with white noise) of various data length. The result shows that the MMFE algorithm can effectively reduce the deviation in entropy estimation as compared to that using MFE algorithm. The MMFE algorithm is further employed in the study to estimate the complexity and irregularity of vibration data of a roller element bearing for fault diagnosis. 
It is shown that the MMFE algorithm can effectively discriminate the four bearing operation conditions under study.","PeriodicalId":371106,"journal":{"name":"2016 International Conference on Identification, Information and Knowledge in the Internet of Things (IIKI)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128666653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
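The moving-average substitution described in the abstract can be sketched as follows. This is an illustrative implementation, not the authors' code: the exponential membership function, the tolerance r = 0.15·σ, and the embedding dimension m = 2 are commonly used defaults assumed here.

```python
import numpy as np

def moving_average(x, tau):
    # Overlapping moving-average series of length N - tau + 1, unlike
    # coarse-graining, which keeps only N // tau points per scale.
    return np.convolve(x, np.ones(tau) / tau, mode="valid")

def fuzzy_entropy(x, m=2, r=0.15):
    # Fuzzy entropy with an exponential (Gaussian-like) membership function.
    x = np.asarray(x, dtype=float)
    r = r * x.std()

    def phi(m):
        n = len(x) - m + 1
        # Template vectors with their own mean removed (baseline removal).
        templ = np.array([x[i:i + m] - x[i:i + m].mean() for i in range(n)])
        # Chebyshev distance between every pair of template vectors.
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        sim = np.exp(-(d ** 2) / r)
        np.fill_diagonal(sim, 0.0)  # exclude self-matches
        return sim.sum() / (n * (n - 1))

    return -np.log(phi(m + 1) / phi(m))

def mmfe(x, m=2, r=0.15, max_scale=5):
    # Modified MFE: the moving average replaces coarse-graining at each scale.
    return [fuzzy_entropy(moving_average(x, tau), m, r)
            for tau in range(1, max_scale + 1)]
```

Because the moving-average series keeps almost the full data length at every scale, the entropy estimate at large scales is based on many more template vectors than under coarse-graining.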
Moving application software and data into the cloud has become a trend. To prevent data from being lost, modified, or corrupted, data integrity needs to be verified. The provable data possession (PDP) protocol is used to solve this problem. However, users often perform dynamic operations, such as insertion, modification, and deletion, which increase the difficulty and complexity of verification. How to construct a PDP scheme that supports dynamic updating of data has become a hot research topic. Hash aggregation is considered one of the solutions for reducing the verification costs caused by dynamic data operations. We analyze Wang et al.'s dynamic provable data possession (DPDP) solution and identify security flaws in its hash aggregation phase. This paper proposes an improved scheme that resolves the security problem in Wang et al.'s scheme.
{"title":"Dynamic Provable Data Possession Based on Ranked Merkle Hash Tree","authors":"Jing Zou, Yunchuan Sun, Shixian Li","doi":"10.1109/IIKI.2016.69","DOIUrl":"https://doi.org/10.1109/IIKI.2016.69","url":null,"abstract":"To move the application software and data into the cloud has become a trend. To prevent the data from being lost, modified and corrupted, the data integrity needs to be verified. The provable data possession (PDP) protocol is used to solve this problem. However, users often do some dynamic operations, such as insertion, modification and deletion, and it increases the difficulty and complexity of verification. How to construct the PDP scheme supporting dynamic updating of data becomes a hot research topic. The technology of hash aggregation is considered as one of the solutions to reduce costs of verification due to data dynamic operation. We analyze Wang et al.'s dynamic provable data possession (DPDP) solution and identify its security flaws during hash aggregation phase. This paper proposes an improved scheme to resolve security problem in Wang et al.'s scheme.","PeriodicalId":371106,"journal":{"name":"2016 International Conference on Identification, Information and Knowledge in the Internet of Things (IIKI)","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128256956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
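For readers unfamiliar with the building block, a plain binary Merkle hash tree (not the ranked variant of the paper, and not Wang et al.'s scheme) can be sketched in a few lines: any change to a stored block changes the root, which is what integrity verification relies on.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Root hash of a binary Merkle tree built over the data blocks."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                # duplicate the last node
            level.append(level[-1])       # when a level has odd size
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

A verifier who stores only the root can detect tampering with any block; the ranked variant additionally stores per-node rank information so that block positions survive insertions and deletions.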
The Global Data on Events, Location, and Tone (GDELT) project is a real-time, large-scale database of global human society for open research; it has monitored the world's broadcast, print, and web news since 1979, creating a free and open platform for computing on the entire world. In this paper, we first design and implement a data crawler that collects GDELT metadata in real time and stores it in the Hadoop Distributed File System (HDFS). We then propose a hash-based method to correlate the "Event", "Mentions", and "GKG" tables in GDELT, in order to bring together the detailed information about each event. Finally, we take South Korea as an example for spatiotemporal visualization analysis, including an event spatiotemporal heat map, the distribution of media attention, and an event-extraction confidence dot map.
{"title":"Correlation and Visualization Analysis of Large Scale Dataset GDELT","authors":"Fengcai Qiao, Kedi Chen","doi":"10.1109/IIKI.2016.19","DOIUrl":"https://doi.org/10.1109/IIKI.2016.19","url":null,"abstract":"The Global Data on Events, Location, and Tone (GDELT) is a real time large scale database of global human society for open research which monitors the worlds broadcast, print, and web news since 1979, creating a free open platform for computing on the entire world. In this paper, first, we designed and implemented a data crawler, which collects metadata of GDELT database in real time and stores them in Hadoop Distributed File System (HDFS). Then, we proposed a hashbased method to correlate \"Event\" table, \"Mentions\" table and \"GKG\" table in GDELT, in order to digest every detailed information of each event. Finally, we took South Korea as example to make spatiotemporal visualization analysis, such as event spatiotemporal heat map, distribution of media attention and event extraction confidence dot map.","PeriodicalId":371106,"journal":{"name":"2016 International Conference on Identification, Information and Knowledge in the Internet of Things (IIKI)","volume":"123 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122677812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
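The hash-based correlation of the tables can be illustrated as a classic hash join: build an index on one table's join key, then probe it with the other. The column name `GlobalEventID` is the key shared by GDELT's Events and Mentions tables (GKG rows would be matched via a mention's document identifier instead); the record layout below is a simplified assumption.

```python
def hash_join(events, mentions, key="GlobalEventID"):
    # Build phase: index the Events table by the join key.
    index = {}
    for ev in events:
        index.setdefault(ev[key], []).append(ev)
    # Probe phase: look up each mention in the hash index.
    joined = []
    for m in mentions:
        for ev in index.get(m[key], []):
            joined.append({**ev, **m})
    return joined
```

Each probe is an O(1) dictionary lookup, so correlating two tables costs a single pass over each, which is what makes the approach workable at GDELT's scale.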
Kun Li, Junsheng Zhang, Changqing Yao, Chongde Shi
Relation extraction is an important task for understanding text. In the big data era, automatic relation extraction from unstructured text is urgently needed for structured information organization and information analysis. In this paper, we survey automatic relation extraction methods, covering both traditional machine learning on closed data sets and open information environments such as the Web, including supervised and semi-supervised methods. We then discuss applications built on relation extraction, such as event extraction and question answering (QA) systems.
{"title":"Automatic Relation Extraction from Text: A Survey","authors":"Kun Li, Junsheng Zhang, Changqing Yao, Chongde Shi","doi":"10.1109/IIKI.2016.58","DOIUrl":"https://doi.org/10.1109/IIKI.2016.58","url":null,"abstract":"Relation extraction is an important task for understanding text. In the big data era, automatic relation extraction from unstructured texts is urgently needed for structured information organization and information analysis. In this paper, we survey the automatic relation extraction methods, especially the traditional machine learning on closed data set and open information environment such as Web, including supervised and semi-supervised methods. And then, we discuss the applications based on relation extraction such as event extraction and QA systems.","PeriodicalId":371106,"journal":{"name":"2016 International Conference on Identification, Information and Knowledge in the Internet of Things (IIKI)","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126304034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As the amount of target characteristics data increases rapidly, traditional methods can no longer satisfy the storage and management needs of such data. Based on the features of these data, a new storage system built on an RDBMS and Hadoop is proposed. The structured data and the metadata of unstructured data are stored in the RDBMS under a defined schema, while the large volume of unstructured data is allocated among the nodes of a Hadoop cluster. To maximize storage efficiency, HBase is used to store massive numbers of small unstructured files, while HDFS holds the large-scale ones. Meanwhile, access control and a multi-threaded upload and download approach, combined with load balancing and a caching mechanism, are applied to improve the efficiency of data transmission. Experimental results show that the proposed storage system is reasonable and practicable.
{"title":"Research of Target Characteristics Storage Based on RDBMS and Hadoop","authors":"Yanqi Wang, Yusheng Jia, Xiaodan Xie","doi":"10.1109/IIKI.2016.33","DOIUrl":"https://doi.org/10.1109/IIKI.2016.33","url":null,"abstract":"As the amount of target characteristics data increasing rapidly, the tradition methods cannot satisfy the need of the storage and management of those data. According to the features of those data, a new storage system is proposed base on RDBMS and Hadoop. The structured data and the metadata of unstructured data is stored in the RDBMS under certain schema, while the large amount of unstructured one allocated among numbers of nodes in the hadoop cluster. In order to maximize the superiority of storage, the HBase is used for storing massive small-size unstructured data and the HDFS is applied for holding the large-scale ones. Meanwhile, the access control and the multi-thread upload and download approach combined with load balancing and caching mechanism is applied for improving the efficiency of data transmission. Experiment results show that the proposed storage system is reasonable and practicable.","PeriodicalId":371106,"journal":{"name":"2016 International Conference on Identification, Information and Knowledge in the Internet of Things (IIKI)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134128619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
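The tiering rule described above amounts to a small dispatch function. The 10 MB cut-off below is an assumption for illustration; the paper does not state a concrete threshold between "small" files for HBase and "large-scale" files for HDFS.

```python
# Assumed cut-off between massive small files (HBase) and large
# files (HDFS); the concrete threshold is a deployment choice.
SMALL_FILE_THRESHOLD = 10 * 1024 * 1024  # 10 MB

def choose_store(size_bytes: int, structured: bool) -> str:
    """Route a record to the storage tier described in the paper."""
    if structured:
        return "RDBMS"      # structured data and unstructured-data metadata
    if size_bytes < SMALL_FILE_THRESHOLD:
        return "HBase"      # many small unstructured files
    return "HDFS"           # large-scale unstructured files
```

Keeping small files out of HDFS matters because each HDFS file consumes NameNode memory regardless of size, which is the usual motivation for this kind of split.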
Recommender systems have been applied by e-commerce and other application sites to recommend products that customers might be interested in. This paper refines a bipartite graph into a two-layer graph model by adding similarity information between consumers and between products; a related metric is presented to measure the relationships among them, and a shortest-path algorithm is introduced to obtain suitable recommendations.
{"title":"An Algorithm Based on Two-Layer Graph Model for E-Commerce Recommendation","authors":"Li Pan, Xiaosha Xu, Zhimeng Tan, Xin Peng","doi":"10.1109/IIKI.2016.53","DOIUrl":"https://doi.org/10.1109/IIKI.2016.53","url":null,"abstract":"Recommender systems have been applied by E-commerce or other application sites to recommend their produces that customers might be interested in. This paper refines a bipartite graph to a two-layer graph model by adding the similarity information between consumers and products, the related metric is presented to measure the relationship among them, and the shortest path algorithm is introduced to obtain suitable recommendations.","PeriodicalId":371106,"journal":{"name":"2016 International Conference on Identification, Information and Knowledge in the Internet of Things (IIKI)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124749875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
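One way to realize shortest-path recommendation over such a graph is sketched below. The edge-weight convention (smaller weight = stronger similarity or purchase link) and the toy graph layout are assumptions; the paper's actual metric is not reproduced here.

```python
import heapq

def dijkstra(graph, source):
    # graph: {node: {neighbor: weight}}; smaller weight = stronger link.
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def recommend(graph, user, products, k=3):
    # Rank product nodes by shortest-path distance from the user node:
    # paths may pass through similar users (layer 1) or similar
    # products (layer 2).
    dist = dijkstra(graph, user)
    return sorted((p for p in products if p in dist), key=dist.get)[:k]
```

A product reachable only through a similar user still gets ranked, which is exactly what the two-layer refinement adds over a plain user-product bipartite graph.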
Routing in a DTN (Delay/Disruption Tolerant Network) must deal with network partitions and node mobility. To motivate participation in routing, several incentive schemes have been created, one of which is payment. Given a limited budget, the question is how to make a routing plan based on payments: that is, how a sender decides the amount of data assigned to a particular next hop and the payment for it. This paper proposes a mechanism design approach that defines the utility functions of the sender and the next hops, then maximizes the utility functions of all participants, including the sender and the next hops, using KKT (Karush-Kuhn-Tucker) conditions to solve a nonlinear programming problem with one inequality constraint.
{"title":"A Mechanism Design Solution for DTN Routing","authors":"Zhi Lin, Shengling Wang, Chun-Chi Liu, Madiha Ikram","doi":"10.1109/IIKI.2016.42","DOIUrl":"https://doi.org/10.1109/IIKI.2016.42","url":null,"abstract":"The routing in DTN (Delay/Disruption Tolerant Network) deals with network partition and the mobility of nodes. In order to motivate involvement in routing, several incentive schemes are created, one of them is by payment. Suppose the budget is limited, the question is how to make a routing plan based on payments. That is to say how a sender can decide the data amount assigned to one special next hop and the payment for it. This paper proposes a mechanism design approach to define the utility function of the sender and the next hops, then maximize the utility functions for all the participants, including the sender and next hops, using the tool of KKT (Karush-Kuhn-Tucker) conditions solving a nonlinear programming problem with one inequality constraint. Index","PeriodicalId":371106,"journal":{"name":"2016 International Conference on Identification, Information and Knowledge in the Internet of Things (IIKI)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115922044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
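The optimization the abstract alludes to has the following generic shape; the concrete utility functions $U_i$, per-hop prices $c_i$, and budget $B$ are placeholders, not the paper's actual formulation.

```latex
% Budget-constrained allocation over n next hops:
\begin{align*}
  \max_{x_1,\dots,x_n \ge 0} \quad & \sum_{i=1}^{n} U_i(x_i) \\
  \text{s.t.} \quad & \sum_{i=1}^{n} c_i x_i \le B
\end{align*}
% Lagrangian:
%   L = \sum_i U_i(x_i) - \lambda \Bigl( \sum_i c_i x_i - B \Bigr)
% KKT conditions for an interior optimum:
%   stationarity:            U_i'(x_i) = \lambda c_i \quad \forall i
%   primal feasibility:      \sum_i c_i x_i \le B
%   dual feasibility:        \lambda \ge 0
%   complementary slackness: \lambda \Bigl( B - \sum_i c_i x_i \Bigr) = 0
```

With concave utilities, the stationarity condition equalizes marginal utility per unit of payment across next hops, which is how the single inequality constraint (the budget) shapes the routing plan.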
This paper presents Fablabs as co-working spaces where different stakeholders, such as policy makers, companies, and citizens, can co-create innovative products and services. We focus mainly on Fablabs and the opportunities they provide for the development of rural areas. As examples, we present two cases from the rural municipalities of Ptuj and Ribnica in Slovenia.
{"title":"Fablabs as Drivers for Open Innovation and Co-creation to Foster Rural Development","authors":"Emilija Stojmenova Duh, A. Kos","doi":"10.1109/IIKI.2016.70","DOIUrl":"https://doi.org/10.1109/IIKI.2016.70","url":null,"abstract":"The paper presents Fablabs, as co-working spaces, where different stakeholders, such as policy makers, companies and citizens can co-create innovative products and services. We are focusing mainly on Fablabs and the opportunities they are providing for the development of rural areas. As an example, we are presenting two cases from two rural municipalities, Ptuj and Ribnica, in Slovenia, Europe.","PeriodicalId":371106,"journal":{"name":"2016 International Conference on Identification, Information and Knowledge in the Internet of Things (IIKI)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126981097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As the modern battlefield environment has become increasingly complex, traditional SAR recognition methods depend too heavily on the training data source to be robust and universal, so they cannot meet the demands of modern warfare. How to automatically interpret the rapidly growing volume of SAR images has therefore become an urgent problem. The emergence of big data, cloud computing, and deep learning technology makes automatic and intelligent interpretation of large volumes of SAR images possible. This paper proposes a SAR image storage and recognition system based on a cloud platform that automatically obtains and identifies various military targets in complex scenes. The system combines a cloud-based platform with deep learning to achieve real-time recognition analysis and batch processing of data. The seamless integration of distributed storage and cloud services meets large-scale data recognition and management requirements. The assessment shows that the method is more efficient in terms of performance, storage, and fault tolerance.
{"title":"The SAR Image Storage and Recognition System Based on Cloud Platform","authors":"Jia Zhai, Xiaodan Xie, Yusheng Jia","doi":"10.1109/IIKI.2016.113","DOIUrl":"https://doi.org/10.1109/IIKI.2016.113","url":null,"abstract":"As the modern battlefield environment has become increasingly complex, the traditional SAR recognition methods are too dependent on the training data source to be robust and universal, which makes it can not meet the demand of modern warfare. So how to automatically interpret the fast increased SAR images becomes an urgent problem. The emergence of big data, cloud computing and deep learning technology makes the automatic and intelligent interpretation of large volume of SAR images become possible. This paper proposes a SAR image storage and recognition system based on cloud platform to automatically obtain and identify all kinds of military targets from a complex scene. The system combines the cloud-based platform and deep learning method to achieve real-time recognizing analyses and batch processing of data. The seamless integration of distributed storage and cloud services meets the needs of large-scale data recognition and management requirements. The assessment shows that the method is more efficient in terms of performance, storage, and fault tolerance.","PeriodicalId":371106,"journal":{"name":"2016 International Conference on Identification, Information and Knowledge in the Internet of Things (IIKI)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127497110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybrid mobile applications (apps) are based on web technologies, such as HTML5 and JavaScript, and run in a browser environment, which facilitates cross-platform development. However, hybrid mobile apps inherit the security issues of web technologies, where injected code may execute with system-level privileges. In this paper, we propose a behavior model to detect malicious behaviors in hybrid mobile apps. Our model uses function-level information to describe how an app's behaviors are activated. Once script injection happens, the behaviors performed by the injected code can be detected from their deviation from the app's behavior model.
{"title":"A Function-Level Behavior Model for Anomalous Behavior Detection in Hybrid Mobile Applications","authors":"Jian Mao, Ruilong Wang, Yueh-Ting Chen, Yinhao Xiao, Yaoqi Jia, Zhenkai Liang","doi":"10.1109/IIKI.2016.2","DOIUrl":"https://doi.org/10.1109/IIKI.2016.2","url":null,"abstract":"Hybrid mobile applications (or apps) are based on web technologies, such as HTML5 and JavaScript, and run in a browser environment. They facilitate cross-platform development. However, the security issues of web technologies are inherited by hybrid mobile apps, where the injected code may execute with the system-level privilege. In this paper, we propose a behavior model to detect malicious behaviors in hybrid mobile apps. Our model uses function-level information to describe how an app's behaviors are activated. Furthermore, once script injection happens, the behaviors made by the injected code can be detected according to the deviation from the app's behavior model.","PeriodicalId":371106,"journal":{"name":"2016 International Conference on Identification, Information and Knowledge in the Internet of Things (IIKI)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124169567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
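A toy illustration of the function-level idea: learn which (function, sensitive API) pairs occur in benign runs, then flag pairs outside that set. The pair granularity, the training procedure, and the bridge API names are assumptions for illustration; the paper's actual model is richer than a whitelist.

```python
class FunctionBehaviorModel:
    """Whitelist of (function, sensitive API) pairs seen in benign runs."""

    def __init__(self):
        self.allowed = set()

    def train(self, benign_traces):
        # Each trace is an iterable of (function_name, api_call) events
        # recorded while exercising the app's legitimate behaviors.
        for trace in benign_traces:
            self.allowed.update(trace)

    def is_anomalous(self, function_name, api_call):
        # Injected script code invokes privileged APIs from functions that
        # never did so during training, so the pair falls outside the model.
        return (function_name, api_call) not in self.allowed
```

Because the model keys on *which function* activates a behavior rather than on the behavior alone, injected code cannot hide behind APIs the app itself uses legitimately elsewhere.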