An episode rule associating two episodes represents a temporal implication from the antecedent episode to the consequent episode. Episode-rule mining is the task of extracting useful patterns/episodes from large event databases. We present an episode-rule mining algorithm for finding frequent and confident serial-episode rules via the first local maximum of confidence, which yields ideal window widths, if they exist, in event sequences based on minimal occurrences constrained by a constant maximum gap. Results from our preliminary empirical study confirm the applicability of the episode-rule mining algorithm to Web-site traversal-pattern discovery, and show that the first local maximization yielding ideal window widths exists in real data but rarely in synthetic random data sets.
Title: Episode-Rule Mining with Minimal Occurrences via First Local Maximization in Confidence. Author: H. K. Dai. DOI: 10.1145/3287921.3287982. In: Proceedings of the 9th International Symposium on Information and Communication Technology, December 6, 2018.
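The machinery the abstract describes can be sketched roughly as follows: count minimal occurrences of a serial episode under a maximum gap, then scan window widths for the first local maximum of rule confidence. This is a simplified reading, not the author's algorithm; the greedy occurrence matching and all function names are illustrative assumptions.

```python
def minimal_occurrences(sequence, episode, max_gap):
    """Find minimal occurrences of a serial episode (a tuple of event
    types) in a list of (time, event) pairs, where consecutive episode
    events are at most `max_gap` time units apart.  Matching is greedy
    (earliest next event), a simplification of exact minimal-occurrence
    mining."""
    occs = []
    n = len(sequence)
    for i, (t0, e0) in enumerate(sequence):
        if e0 != episode[0]:
            continue
        t, j, ok = t0, i, True
        for sym in episode[1:]:
            j += 1
            while j < n and sequence[j][1] != sym:
                j += 1
            if j == n or sequence[j][0] - t > max_gap:
                ok = False
                break
            t = sequence[j][0]
        if ok:
            occs.append((t0, t))
    # keep only minimal occurrences: no other occurrence strictly inside
    return [o for o in occs
            if not any(p != o and o[0] <= p[0] and p[1] <= o[1] for p in occs)]

def first_local_max_confidence(sequence, antecedent, consequent, max_gap, widths):
    """Scan candidate window widths and return the first local maximum
    of confidence(antecedent => antecedent + consequent), or None."""
    confs = []
    for w in widths:
        ante = [o for o in minimal_occurrences(sequence, antecedent, max_gap)
                if o[1] - o[0] <= w]
        rule = [o for o in minimal_occurrences(sequence, antecedent + consequent, max_gap)
                if o[1] - o[0] <= w]
        confs.append(len(rule) / len(ante) if ante else 0.0)
    for k in range(1, len(confs) - 1):
        if confs[k - 1] < confs[k] >= confs[k + 1]:
            return widths[k], confs[k]
    return None
```

On a toy event sequence this returns the smallest width at which confidence stops rising, mirroring the "first local maximization" criterion in the abstract.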
Hussein Hazimeh, E. Mugellini, Simon Ruffieux, Omar Abou Khaled, P. Cudré-Mauroux
Recent Knowledge Graphs (KGs) like Wikidata and YAGO are often constructed by incorporating knowledge from semi-structured, heterogeneous data resources such as Wikipedia. However, despite their large amount of knowledge, these graphs are still incomplete. In this paper, we posit that Online Social Networks (OSNs) can become prominent data resources comprising abundant knowledge about real-world entities. An entity on an OSN is represented by a profile; the link to this profile is called a social link. We propose a KG refinement method for adding missing knowledge, namely social links, to a KG. We target specific entity types in the scientific community, such as researchers. Our approach uses both scholarly data resources and existing KGs to build knowledge bases. It then matches this knowledge against OSNs to detect the corresponding social link(s) for a specific entity, using a novel matching algorithm in combination with supervised and unsupervised learning methods. We empirically validate that our system is able to detect a large number of social links with high confidence.
Title: Automatic Embedding of Social Network Profile Links into Knowledge Graphs. DOI: 10.1145/3287921.3287926. In: Proceedings of the 9th International Symposium on Information and Communication Technology, December 6, 2018.
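The matching step described above can be pictured as scoring candidate OSN profiles against a KG entity and keeping the best match above a threshold. The fields, weights, threshold, and use of `difflib` fuzzy matching here are all hypothetical stand-ins; the paper's actual matcher combines supervised and unsupervised learning.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Fuzzy string similarity in [0, 1] (illustrative stand-in for a
    learned similarity function)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_social_link(entity, profiles, threshold=0.75):
    """Score each candidate OSN profile against a KG entity using a
    weighted name + affiliation similarity; field names and weights
    are assumptions, not the paper's."""
    best, best_score = None, 0.0
    for p in profiles:
        score = (0.7 * similarity(entity["name"], p["name"])
                 + 0.3 * similarity(entity.get("affiliation", ""), p.get("bio", "")))
        if score > best_score:
            best, best_score = p, score
    return (best, best_score) if best_score >= threshold else (None, best_score)
```

A query such as `match_social_link({"name": "Jane Doe", "affiliation": "ETH Zurich"}, profiles)` would return the profile whose name and bio best agree with the entity, or `None` when no candidate clears the threshold.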
Nuntiya Chiensriwimol, P. Mongkolnam, Jonathan H. Chan
Frozen shoulder treatment is normally a time-consuming process. Continual physical therapy is required in practice for a patient to recover gradually over time. With the advent of mobile technology, an increasing number of smartphone applications are being developed to help patients perform telerehabilitation. In this study, we incorporate animation to simulate arm movement for various exercise types in a mobile app, augmenting the use of biofeedback data in the treatment process. The main contribution of this paper is simulating frozen shoulder exercises using a Unity 3D model. Patients can do rehabilitation exercises at home by attaching their smartphone to the shoulder with an armband, and the data is sent to the physiotherapist without the need to wait in a long queue at the clinic to see the practitioner. The results indicate that our mobile app and web dashboard are useful for physiotherapists to easily monitor and manage a patient's rehabilitation process remotely.
Title: Frozen Shoulder Rehabilitation: Exercise Simulation and Usability Study. DOI: 10.1145/3287921.3287951. In: Proceedings of the 9th International Symposium on Information and Communication Technology, December 6, 2018.
This study proposes a Feed-Forward Neural Network (FFNN) to forecast the renewable energy generation of offshore wind parks located in Denmark. The neural network uses historical weather and power-generation data for training and applies the learned pattern to forecast wind energy production. Furthermore, the study shows how to improve prediction quality by leveraging specific parameters. In particular, we study in detail the impact of the distance and direction of the weather station relative to the production site. In addition, we examine various parameters of the network to improve accuracy. The proposed model distinguishes itself from other models in that an optimal validation accuracy of more than 90 percent can be reached with training data sets of only limited size, here two months of data at hourly resolution.
Title: High Accuracy Forecasting with Limited Input Data: Using FFNNs to Predict Offshore Wind Power Generation. Authors: Elaine Zaunseder, Larissa Müller, S. Blankenburg. DOI: 10.1145/3287921.3287936. In: Proceedings of the 9th International Symposium on Information and Communication Technology, December 6, 2018.
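The kind of model the abstract describes can be sketched as a one-hidden-layer FFNN trained by gradient descent. The synthetic features (a stand-in for wind speed and direction), the cubic target, the architecture, and the hyperparameters below are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the paper's data: two weather features mapped
# to power output through a roughly cubic power curve.
X = rng.uniform(0.0, 1.0, size=(256, 2))
y = 0.8 * X[:, :1] ** 3 + 0.1 * X[:, 1:]

# one hidden layer with tanh activation, trained by plain gradient
# descent on mean-squared error
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

loss0 = float(np.mean((forward(X)[1] - y) ** 2))   # loss before training

for _ in range(2000):
    h, pred = forward(X)
    g = 2.0 * (pred - y) / len(X)        # dL/dpred for mean-squared error
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1.0 - h ** 2)     # backprop through tanh
    gW1, gb1 = X.T @ gh, gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

loss1 = float(np.mean((forward(X)[1] - y) ** 2))   # loss after training
```

Even this toy setup illustrates the paper's point that a small network can fit a power curve from a limited amount of data.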
With the rapid development of electronic scientific publication repositories, automatic topic identification from papers has greatly helped researchers in their work. The Latent Dirichlet Allocation (LDA) model is the most popular method for discovering hidden topics in texts based on the co-occurrence of words in a corpus. The LDA algorithm achieves good results on large documents. However, article repositories usually store only the title and abstract, which are too short for the LDA algorithm to work effectively. In this paper, we propose the CitationLDA++ model, which improves the performance of the LDA algorithm in inferring the topics of papers based on the title and/or abstract together with citation information. The proposed model rests on the assumption that the topics of the cited papers also reflect the topics of the citing paper. In this study, we divide the dataset into two sets: the first is used to build a prior knowledge source with the LDA algorithm, and the second is the training dataset used by CitationLDA++. In the inference process with Gibbs sampling, the CitationLDA++ algorithm uses the topic distributions of the prior knowledge source and the citation information to guide the assignment of topics to words in the text. Using the topics of cited papers helps overcome the limits of word co-occurrence for linked short texts. In experiments on the AMiner dataset, which includes the titles and/or abstracts of papers and citation information, the CitationLDA++ algorithm attains better perplexity than LDA without the additional knowledge. The experimental results suggest that citation information can improve the ability of the LDA algorithm to discover the topics of papers when their full content is not available.
Title: CitationLDA++: an Extension of LDA for Discovering Topics in Document Network. Authors: T. Nguyen, P. Do. DOI: 10.1145/3287921.3287930. In: Proceedings of the 9th International Symposium on Information and Communication Technology, December 6, 2018.
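The guidance step the abstract describes — cited papers' topic distributions steering the Gibbs sampler — can be sketched by letting citations boost a document's topic prior in an otherwise standard collapsed Gibbs sampler. This is a simplified reading of the idea, not the authors' model: how the prior is boosted, and every name below, are assumptions.

```python
import random
from collections import defaultdict

def citation_gibbs_lda(docs, citations, prior_theta, K, iters=100,
                       alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA where each document's topic
    prior is shifted by the (precomputed) topic distributions of the
    prior-knowledge papers it cites.  `citations[d]` lists cited paper
    ids; `prior_theta[c]` is that paper's length-K topic distribution."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})
    ndk = [[0] * K for _ in docs]               # doc-topic counts
    nkw = [defaultdict(int) for _ in range(K)]  # topic-word counts
    nk = [0] * K
    z = []
    for d, doc in enumerate(docs):              # random initial assignment
        zd = []
        for w in doc:
            t = rng.randrange(K)
            zd.append(t); ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
        z.append(zd)
    for _ in range(iters):
        for d, doc in enumerate(docs):
            # cited papers shift this document's effective alpha
            a = [alpha + sum(prior_theta[c][k] for c in citations.get(d, []))
                 for k in range(K)]
            for i, w in enumerate(doc):
                t = z[d][i]
                ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
                weights = [(ndk[d][k] + a[k])
                           * (nkw[k][w] + beta) / (nk[k] + V * beta)
                           for k in range(K)]
                t = rng.choices(range(K), weights=weights)[0]
                z[d][i] = t
                ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    # point estimate of per-document topic distributions
    return [[(ndk[d][k] + alpha) / (len(docs[d]) + K * alpha) for k in range(K)]
            for d in range(len(docs))]
```

On a short document whose words are ambiguous, the citation-derived prior decides the topic — which is exactly the "linked short text" scenario the paper targets.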
Quoc Bao Nguyen, Van Tuan Mai, Quang Trung Le, Ba Quyen Dam, Van Hai Do
In this paper, we first present our effort to collect a 500-hour corpus of Vietnamese read speech. After that, various techniques such as data augmentation, recurrent neural network language model rescoring, language model adaptation, bottleneck features, and system combination are applied to build the speech recognition system. Our final system achieves a low word error rate of 6.9% on the noisy test set.
Title: Development of a Vietnamese Large Vocabulary Continuous Speech Recognition System under Noisy Conditions. DOI: 10.1145/3287921.3287938. In: Proceedings of the 9th International Symposium on Information and Communication Technology, December 6, 2018.
The ensemble is a universal machine learning method based on the divide-and-conquer principle. It aims to improve system performance in terms of processing speed and quality. Assessment of cluster tendency is a method for determining whether a given data set contains meaningful clusters. Recently, a silhouette-based assessment of cluster tendency method (SACT) was proposed to simultaneously determine the appropriate number of data clusters and their prototypes. The advantages of SACT are its accuracy and few parameters, but it is limited in data size and processing speed. In this paper, we propose an improved SACT method for data clustering, called the eSACT algorithm. Experiments were conducted on synthetic data sets and color images. The proposed algorithm exhibited high performance, reliability, and accuracy compared to previously proposed algorithms for the assessment of cluster tendency.
Title: A New Assessment of Cluster Tendency Ensemble approach for Data Clustering. Authors: Van Nha Pham, L. Ngo, L. T. Pham, Pham Van Hai. DOI: 10.1145/3287921.3287927. In: Proceedings of the 9th International Symposium on Information and Communication Technology, December 6, 2018.
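The silhouette-based idea behind SACT — scan candidate cluster counts and keep the one with the highest silhouette coefficient — can be sketched on 1-D data. This toy is not the SACT/eSACT algorithm itself; the 1-D k-means with deterministic quantile initialization is our simplification.

```python
def kmeans_1d(points, k, iters=25):
    """Lloyd's k-means on 1-D data with deterministic quantile init."""
    pts = sorted(points)
    centers = [pts[(2 * i + 1) * len(pts) // (2 * k)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: abs(p - centers[c]))].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return [c for c in clusters if c]

def silhouette(clusters):
    """Mean silhouette coefficient over all points (1-D, Euclidean)."""
    scores = []
    for i, c in enumerate(clusters):
        for p in c:
            a = sum(abs(p - q) for q in c) / max(len(c) - 1, 1)   # intra-cluster
            b = min(sum(abs(p - q) for q in o) / len(o)           # nearest other
                    for j, o in enumerate(clusters) if j != i)
            scores.append((b - a) / max(a, b) if max(a, b) > 0 else 0.0)
    return sum(scores) / len(scores)

def best_k(points, k_range):
    """Silhouette scan: pick the cluster count with highest silhouette."""
    return max(k_range, key=lambda k: silhouette(kmeans_1d(points, k)))
```

On three well-separated groups, the scan correctly selects three clusters, which is the kind of "appropriate number of clusters plus prototypes" decision SACT makes.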
Inertial-sensor-based gait has been considered a promising approach for user authentication on mobile devices. However, securing the enrolled template in such systems remains a challenging task. Biometric Cryptosystems (BCS) provide elegant approaches to this matter. The primary task in adopting a BCS is to extract from raw biometric data a discriminative, high-entropy, and stable binary string, which is used as the input of the BCS. Unfortunately, state-of-the-art research does not consider the population distribution of gait features when extracting such a string. Thus, the extracted binary string has low entropy, which degrades the overall system security. In this study, we address the aforementioned drawback to improve the entropy of the extracted string and thereby enhance system security. Specifically, we design a binarization scheme in which the population distribution of gait features is analyzed and utilized so that the extracted binary string achieves maximal entropy. In addition, the binarization is designed to tolerate strong variation, producing a highly stable binary string that enhances the system's friendliness. We evaluated the proposed method on a gait dataset of 38 volunteers collected under nearly realistic conditions. The experimental results show that our binarization method improves the entropy of the extracted binary string by 30%, and the system achieves competitive performance (i.e., 0.01% FAR and 9.5% FRR with a 139-bit key).
Title: A Binarization Method for Extracting High Entropy String in Gait Biometric Cryptosystem. Authors: Lam Tran, Thao M. Dang, Deokjai Choi. DOI: 10.1145/3287921.3287960. In: Proceedings of the 9th International Symposium on Information and Communication Technology, December 6, 2018.
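The core observation — equiprobable quantization bins maximize the entropy of the emitted bits — can be sketched with equal-frequency thresholds plus Gray coding for noise tolerance. This is a generic illustration of the principle, not the paper's scheme; all names and the 2-bit choice are assumptions.

```python
def quantile_thresholds(population, bits):
    """Split a feature's population into 2**bits equal-frequency bins;
    equiprobable bins make each emitted bit pattern equally likely,
    i.e., maximal entropy."""
    s = sorted(population)
    n_bins = 2 ** bits
    return [s[i * len(s) // n_bins] for i in range(1, n_bins)]

def gray(i):
    """Gray code: adjacent bins differ in one bit, so small feature
    noise flips at most one bit."""
    return i ^ (i >> 1)

def binarize(value, thresholds, bits):
    b = sum(value >= t for t in thresholds)   # bin index of this value
    return format(gray(b), f"0{bits}b")

def feature_vector_to_string(values, populations, bits=2):
    """Concatenate per-feature codes into the binary string fed to the BCS."""
    return "".join(binarize(v, quantile_thresholds(p, bits), bits)
                   for v, p in zip(values, populations))
```

With a uniform population, each of the four 2-bit codes covers exactly a quarter of the values, so the per-feature output entropy is the full 2 bits.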
Phan The Duy, Do Thi Thu Hien, Do Hoang Hien, V. Pham
In the "Industry 4.0" era, blockchain and related distributed ledger technologies have become an unmissable trend for both academia and industry. Blockchain technology became famous as the innovative technology underlying cryptocurrencies such as Bitcoin and the Ethereum platform. It has also been spreading across multiple industries, with organizations exploring its capabilities and new blockchain use cases springing up on a daily basis. Its emergence has had a great deal of impact on how information will be stored and processed securely. Furthermore, many advocates say that blockchain will disrupt and change everything from education to financial payments, insurance, intellectual property, and healthcare in the years to come. However, a comprehensive survey on the potential and issues of blockchain adoption in academia and industry has not yet been conducted. This paper presents such a survey of blockchain technology adoption, discussing its influences as well as the opportunities and challenges of utilizing it in real-world scenarios.
Title: A survey on opportunities and challenges of Blockchain technology adoption for revolutionary innovation. DOI: 10.1145/3287921.3287978. In: Proceedings of the 9th International Symposium on Information and Communication Technology, December 6, 2018.
Thoi Hoang Dinh, Toan Pham Van, Ta Minh Thanh, Hau Nguyen Thanh, Anh Pham Hoang
Recently, the problems of clothes recognition and clothing-item retrieval have attracted a number of researchers due to their practical and potential value for real-world applications. The main task is to automatically find relevant clothing items given a single user-provided image, without any extra metadata. Most existing systems mainly focus on clothes classification, attribute prediction, and matching the exact in-shop items to the query image. However, these systems do not address the latency problem, i.e., the amount of time users have to wait from querying an image until the results are retrieved. In this paper, we propose a fashion search system that automatically recognizes clothes and suggests multiple similar clothing items with impressively low latency. Through extensive experiments, we verify that our system outperforms most existing systems in terms of clothing-item retrieval time.
Title: Large Scale Fashion Search System with Deep Learning and Quantization Indexing. DOI: 10.1145/3287921.3287964. In: Proceedings of the 9th International Symposium on Information and Communication Technology, December 6, 2018.
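The latency idea behind quantization indexing can be sketched with a coarse quantizer (a small k-means codebook) plus inverted lists: a query scans only the list of its nearest cell instead of the whole collection. This toy is a stand-in under stated assumptions; the paper's system presumably indexes deep feature embeddings, and real systems use product quantization.

```python
def sqdist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def kmeans_codebook(vectors, k, iters=20):
    """Small codebook: deterministic farthest-point init, then Lloyd updates."""
    centers = [vectors[0]]
    while len(centers) < k:
        centers.append(max(vectors,
                           key=lambda v: min(sqdist(v, c) for c in centers)))
    for _ in range(iters):
        cells = [[] for _ in range(k)]
        for v in vectors:
            cells[min(range(k), key=lambda i: sqdist(v, centers[i]))].append(v)
        centers = [[sum(x) / len(c) for x in zip(*c)] if c else centers[i]
                   for i, c in enumerate(cells)]
    return centers

def build_index(vectors, k):
    """Assign every item (e.g., a clothing-image embedding) to its
    nearest codebook cell, building inverted lists of item ids."""
    centers = kmeans_codebook(vectors, k)
    lists = [[] for _ in range(k)]
    for idx, v in enumerate(vectors):
        lists[min(range(k), key=lambda i: sqdist(v, centers[i]))].append(idx)
    return centers, lists

def query(q, vectors, centers, lists):
    """Scan only the nearest cell's list -- this is where the latency win comes from."""
    cell = min(range(len(centers)), key=lambda i: sqdist(q, centers[i]))
    return min(lists[cell], key=lambda idx: sqdist(q, vectors[idx]))
```

For a collection of n items split across k cells, a query touches roughly n/k candidates instead of n, at the cost of occasionally missing a neighbor that fell into an adjacent cell.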