2017 18th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD): Latest Publications
DBLearn: Adaptive e-learning for practical database course — An integrated architecture approach
Pub Date: 2017-06-01 | DOI: 10.1109/SNPD.2017.8022708
Srinual Nalintippayawong, K. Atchariyachanvanich, Thanakrit Julavanich
This paper presents an integrated architecture approach to designing and developing DBLearn, a web-based application. DBLearn is a personalized, adaptive e-learning system designed specifically for hands-on practice in database courses. The approach focuses on topics that are important but difficult for new learners, such as database design and structured query language (SQL) queries. The concepts of adaptive e-learning and autonomous agents are applied to remove traditional constraints on effective e-learning, such as differences in learning styles and knowledge levels among students. Four techniques address this problem. First, learning style theory is used to classify how each student learns. Second, student activity (historical data) is kept in the system to determine what the student should learn or review next. Third, an automated SQL query grader judges the correctness of each student's query; it supports all necessary DML and DDL commands. Finally, a question generator module produces SQL query questions automatically, which reduces the instructor's workload in creating enough questions and lets students practice at their own pace as much as they want. With these four techniques, students gain a better learning experience and achieve better learning outcomes.
{"title":"DBLearn: Adaptive e-learning for practical database course — An integrated architecture approach","authors":"Srinual Nalintippayawong, K. Atchariyachanvanich, Thanakrit Julavanich","doi":"10.1109/SNPD.2017.8022708","DOIUrl":"https://doi.org/10.1109/SNPD.2017.8022708","url":null,"abstract":"In this paper, an integrated architecture approach in designing and developing a DBLearn web-based application is presented. The DBLearn system is a personalized and adaptive e-learning system designed especially for learning practices in database courses. This approach focused on topics that are important but difficult for new learners, such as database design and structured query language (SQL) command query. The concept of adaptive e-learning and autonomous agents were applied in this system to eliminate the traditional constraints of effective e-learning, such as the problem of different learning sensory and knowledge levels. Four approaches were used to solve this problem. First, learning style theory was used to classify the way of learning for each student. Second, the student activity (historical data) is kept in the system to analyze the next knowledge the student should learn or review. Next, the SQL query automated grader was used to judge the correctness of the student's query. This grader supports all the necessary commands in both DML and DDL. Finally, the SQL query question generator module that can generate SQL query questions automatically is presented. This will reduce the instructor's work load in creating enough questions and allow the students to practice at their own pace as much as they want. By using these four techniques, the students will have a better learning experience and becoming more successful in learning outcomes.","PeriodicalId":186094,"journal":{"name":"2017 18th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125451712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting precursor patterns for frequency fluctuation in an electrical grid
Pub Date: 2017-06-01 | DOI: 10.1109/SNPD.2017.8022725
Md. Shahidul Islam, R. Pears, B. Bačić
Precursor pattern identification addresses the problem of detecting warning signals in data that herald an impending event of extraordinary interest. In the context of electrical power systems, identifying precursors to fluctuations in power generation in advance would enable engineers to put measures in place that mitigate the effects of such fluctuations. In this research we apply the Morlet wavelet transform to a time series of electrical power generation frequency sampled at 30-second intervals in order to identify potential precursor patterns. The resulting power spectrum is then used to select high-coefficient regions that capture a large fraction of the energy in the spectrum. We then subjected the high-coefficient regions, together with a contrasting low-coefficient region, to a non-parametric ANOVA test; the results indicate that one high-coefficient region dominates, predicting an overwhelming percentage of the variation that occurs during the subsequent fluctuation event. These results suggest that the wavelet transform is an effective mechanism for identifying precursor activity in electricity time series data.
{"title":"Detecting precursor patterns for frequency fluctuation in an electrical grid","authors":"Md. Shahidul Islam, R. Pears, B. Bačić","doi":"10.1109/SNPD.2017.8022725","DOIUrl":"https://doi.org/10.1109/SNPD.2017.8022725","url":null,"abstract":"Precursor pattern identification addresses the problem of detecting warning signals in data that herald an impending event of extraordinary interest. In the context of electrical power systems, identifying precursors to fluctuations in power generation in advance would enable engineers to put in place measures that mitigate against the effects of such fluctuations. In this research we use the Morlet wavelet to transform a time series defined on electrical power generation frequency which was sampled at intervals of 30 seconds to identify potential precursor patterns. The power spectrum that results is then used to select high coefficient regions that capture a large faction of the energy in the spectrum. We then subjected the high coefficient regions together with a contrasting low coefficient region to a non-parametric ANOVA test and our results indicate that one high coefficient region dominates by predicting an overwhelming percentage of the variation that occurs during the subsequent fluctuation event. These results suggest that the wavelet is an effective mechanism to identify precursor activity in electricity time series data.","PeriodicalId":186094,"journal":{"name":"2017 18th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125198589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Routing algorithm based on ant colony optimization for mobile social network
Pub Date: 2017-06-01 | DOI: 10.1109/SNPD.2017.8022736
Yanfei Wu, Yanqin Zhu, Zhe Yang
A mobile social network (MSN) is a type of delay-tolerant network that explicitly considers the social characteristics of terminal nodes. Existing ad hoc routing protocols assume that at least one complete communication path exists between the source node and the target node, so they cannot be applied to MSNs directly. The key to solving the content distribution problem in an MSN is how to deliver data to the target node when no complete communication path exists between the source and the target. Because routing algorithms based on ant colony optimization have a strong ability to adapt, they are an effective way to handle the dynamic topology of an MSN. Building on the social network characteristics of MSNs, this paper proposes ACOMSN, a new MSN routing algorithm based on ant colony optimization. The algorithm processes node information along the transmission path to build an information list between node pairs, which is used to select an appropriate relay node when forwarding data to other nodes. In addition, ACOMSN defines methods for pheromone updating and data forwarding. Simulation experiments on real data sets show that, compared with typical MSN routing algorithms, ACOMSN can effectively improve the critical performance of data transmission in an MSN with considerable overhead.
{"title":"Routing algorithm based on ant colony optimization for mobile social network","authors":"Yanfei Wu, Yanqin Zhu, Zhe Yang","doi":"10.1109/SNPD.2017.8022736","DOIUrl":"https://doi.org/10.1109/SNPD.2017.8022736","url":null,"abstract":"Mobile social network (MSN) is a type of delay tolerant network explicitly considering social characteristics of the terminal nodes. The existing Ad Hoc routing protocols assume that there is at least one complete communication path between the source node and the target node. So they cannot be applied to MSN directly. The key to solve the problem of content distribution in mobile social network is how to transmit the data to the target node in the case of there is no complete communication path between the source node and the target node. As the routing algorithm based on ant colony optimization has great ability to adapt it, it is an effective method to deal with the dynamic topologies of MSN. Based on the social network characteristics of MSN, this paper proposes a new MSN routing algorithm ACOMSN based on ant colony optimization. The algorithm uses the method of processing the node information on the transmission path to get the information list between the node pairs, so as to select the appropriate relay node to provide effective information when forwarding data to other nodes. In addition, ACOMSN designs methodologies for pheromone updating and data forwarding. The simulation experiments on real data sets show that comparing with typical MSN routing algorithms, ACOMSN can effectively improve the critical performance of data transmission with considerable overhead in MSN.","PeriodicalId":186094,"journal":{"name":"2017 18th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122507433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing change detection in distributed digital collections: An architectural perspective of change detection
Pub Date: 2017-06-01 | DOI: 10.1109/SNPD.2017.8022733
L. Meegahapola, Roshan Alwis, Eranga Nimalarathna, V. Mallawaarachchi, D. Meedeniya, S. Jayarathna
Digital documents are prone to problems with the persistence of links, especially when they reference external resources. People keep track of webpages of interest using distributed digital collections, and, without possession of these documents, the curator cannot control how they change. Managing such distributed digital collections and getting notified about changes has therefore become a significant challenge. In this paper, we address the architectural aspects of change detection systems and present an optimized change detection architecture that includes a web service, a browser plugin, and an email notification service. We performed an experimental study of our hybrid architecture for change detection in a distributed digital collection. The proposed method introduces a preliminary framework that can serve as a useful tool to mitigate the impact of unexpected changes in documents stored in decentralized collections.
{"title":"Optimizing change detection in distributed digital collections: An architectural perspective of change detection","authors":"L. Meegahapola, Roshan Alwis, Eranga Nimalarathna, V. Mallawaarachchi, D. Meedeniya, S. Jayarathna","doi":"10.1109/SNPD.2017.8022733","DOIUrl":"https://doi.org/10.1109/SNPD.2017.8022733","url":null,"abstract":"Digital documents are likely to have problems associated with the persistence of links, especially when dealing with references to external resources. People keep track of various webpages of their interest using distributed digital collections and without possession of these documents; the curator cannot control how they change. In the current context, managing these distributed digital collections and getting notifications about various changes have become a significant challenge. In this paper, we address the architectural aspects of change detection systems and present optimized change detection architecture, including a web service and a browser plugin, along with an email notification service. We have performed an experimental study on our hybrid architecture for change detection in a distributed digital collection. The proposed method introduces a preliminary framework that can serve as a useful tool to mitigate the impact of unexpected change in documents stored in decentralized collections in the future.","PeriodicalId":186094,"journal":{"name":"2017 18th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129782564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementation of emotional features on satire detection
Pub Date: 2017-06-01 | DOI: 10.1109/SNPD.2017.8022715
P. Thu, Nwe New
Recognition of satirical language in social multimedia outlets has become a trending research area in computational linguistics. Many researchers have analyzed satirical language from various points of view: lexically, syntactically, and semantically. However, because of the ironic dimension of emotion embedded in satirical language, the emotional study of satire has been left behind. In this study, we propose a new emotion-based satire detection model using supervised and unsupervised term-weighting approaches (TF-RF and TF-IDF). We implement the model with an ensemble bagging classifier and compare it against a benchmark classifier, SVM. The model not only outperforms the word-based bag-of-words (BoW) baseline but also handles both short-text and long-text configurations. Our work on recognizing satirical language can help lessen the impact of implicit language in public opinion mining, sentiment analysis, fake news detection, and cyberbullying.
{"title":"Implementation of emotional features on satire detection","authors":"P. Thu, Nwe New","doi":"10.1109/SNPD.2017.8022715","DOIUrl":"https://doi.org/10.1109/SNPD.2017.8022715","url":null,"abstract":"Recognition of satirical language in social multimedia outlets turn out to be a trending research area in computational linguistics. Many researchers have analyzed satirical language from various point of views: lexically, syntactically, and semantically. However, due to the ironic dimension of emotion embedded in satirical language, emotional study of satirical language has ever left behind. In this study, we propose the new emotion-based satire detection model using supervised and unsupervised weighting approaches (TFRF and TFIDF). We implement the model with Ensemble Bagging classifier compared with benchmark classifier: SVM. The model not only outperform the word-based baseline: BoW but also handle both short text and long text configurations. Our work in recognition of satirical language can aid in lessening the impact of implicit language in public opinion mining, sentiment analysis, fake news detection and cyberbullying.","PeriodicalId":186094,"journal":{"name":"2017 18th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133056526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Correctness of the routing algorithm for distributed key-value store based on order preserving linear hashing and skip graph
Pub Date: 2017-06-01 | DOI: 10.1109/SNPD.2017.8022762
K. Higuchi, M. Yoshida, T. Tsuji, Naoyuki Miyamoto
In this paper, the correctness of the routing algorithm for a distributed key-value store based on order-preserving linear hashing and Skip Graph is proved. In this system, data are partitioned by linear hashing, and a Skip Graph is used as the overlay network. The routing tables of this system are highly uniform, so short detours can exist along a forwarding route; using these detours reduces the number of hops required for query forwarding.
{"title":"Correctness of the routing algorithm for distributed key-value store based on order preserving linear hashing and skip graph","authors":"K. Higuchi, M. Yoshida, T. Tsuji, Naoyuki Miyamoto","doi":"10.1109/SNPD.2017.8022762","DOIUrl":"https://doi.org/10.1109/SNPD.2017.8022762","url":null,"abstract":"In this paper, the correctness of the routing algorithm for the distributed key-value store based on order preserving linear hashing and Skip Graph is proved. In this system, data are divided by linear hashing and Skip Graph is used for overlay network. The routing table of this system is very uniform. Then, short detours can exist in the route of forwarding. By using these detours, the number of hops for the query forwarding is reduced.","PeriodicalId":186094,"journal":{"name":"2017 18th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133174126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of privacy design patterns based on privacy principles and UML
Pub Date: 2017-06-01 | DOI: 10.1109/SNPD.2017.8022748
T. Suphakul, T. Senivongse
Privacy is a major quality attribute of any software. Since users' personal data are collected, stored, processed, and transferred by the applications they use, users need to be assured that proper data protection is in place. Because privacy principles should be taken into account and incorporated into application design, this paper aims to promote privacy by design and presents the development of privacy design patterns. The patterns follow the privacy principles of the Organisation for Economic Co-operation and Development (OECD) and describe the details of those principles and how to apply them to the design and implementation of applications. Software design models realizing the privacy principles are also proposed, using UML notation, to enable reuse of the designs in privacy-aware applications.
{"title":"Development of privacy design patterns based on privacy principles and UML","authors":"T. Suphakul, T. Senivongse","doi":"10.1109/SNPD.2017.8022748","DOIUrl":"https://doi.org/10.1109/SNPD.2017.8022748","url":null,"abstract":"Privacy is a major quality attribute of any software. Since personal data of users are collected, stored, processed, and transferred by the applications they use, they need to be assured that proper data protection is in place. Since privacy principles should be taken into account and incorporated into application design, this paper aims to promote privacy by design and presents a development of privacy design patterns. The patterns follow the privacy principles of the Organisation for Economic Co-operation and Development (OECD) and describe details of the privacy principles and how to apply them to the design and implementation of the applications. Software design models realizing the privacy principles are also proposed, using UML notations, so as to enable reuse of the design in privacy-aware applications.","PeriodicalId":186094,"journal":{"name":"2017 18th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123829048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning and immediate retention of Japanese vocabulary using generated mnemonic keywords
Pub Date: 2017-06-01 | DOI: 10.1109/SNPD.2017.8022739
Teerapong Leelanupab, Orapin Anonthanasap
This laboratory-based user study is designed to evaluate automated mnemonic keyword generation systems for Japanese vocabulary learning. We examine our methodology, and in particular a new phonetic algorithm named JemSoundex for Japanese-to-English-mnemonic phonetic matching, in a learning and immediate retention task. The methodology retrieves and ranks candidate keywords by considering phonetic, orthographic, and semantic similarities, as well as psycholinguistic power. Experimental results showed that keywords provided by JemSoundex improved learner performance in a short-term vocabulary learning task, in comparison with no keyword support and two traditional phonetic transcriptions (IPA and Soundex). The improvement was even more evident for difficult words with more syllables. Participants also rated keywords generated by JemSoundex as more phonetically relevant and useful than those produced by the baselines.
{"title":"Learning and immediate retention of Japanese vocabulary using generated mnemonic keywords","authors":"Teerapong Leelanupab, Orapin Anonthanasap","doi":"10.1109/SNPD.2017.8022739","DOIUrl":"https://doi.org/10.1109/SNPD.2017.8022739","url":null,"abstract":"This laboratory-based user study is designed to evaluate automated mnemonic keywords generation systems for Japanese vocabulary learning. We examine our successful methodology and in particular a new phonetic algorithm, named JemSoundex, for Japanese-to-English-Mnemonic phonetic matching, in a learning and immediate retention task. Our methodology retrieves and ranks candidate keywords by considering phonetic, orthographic and semantic similarities, as well as psycholinguistic power. Experimental results showed that keywords provided by JemSoundex improved learner performance in the task of a short-term vocabulary learning, in comparison with no keyword support and two traditional phonetic transcriptions (i.e., IPA and Soundex). This improvement was even more evident for difficult words having more syllables. Participants also rated keywords generated by our JemSoundex as more phonetically relevant and useful than those by other baselines.","PeriodicalId":186094,"journal":{"name":"2017 18th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130235489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing game AI agent behaving like human by mixing reinforcement learning and supervised learning
Pub Date: 2017-06-01 | DOI: 10.1109/SNPD.2017.8022767
Shohei Miyashita, Xinyu Lian, Xiao Zeng, Takashi Matsubara, K. Uehara
An artificial intelligence (AI) agent created with Deep Q-Networks (DQN) can defeat human players in video games. Despite its high performance, DQN often exhibits odd behaviors, which can break immersion and work against the purpose of creating game AI. Moreover, DQN can react to the game environment much faster than humans, making itself invincible (and thus not fun to play against) in certain types of games. A supervised learning framework, on the other hand, trains an AI agent using historical play data from human players as training data. A supervised learning agent exhibits more human-like behavior than a reinforcement learning agent because it imitates the training data, but its performance is often no better than that of human players. The ultimate purpose of game AI agents is to entertain human players, so good performance and human-like behavior are both important and should be achieved simultaneously. This study proposes frameworks that combine reinforcement learning and supervised learning, which we call the separated network model and the shared network model. We evaluated their performance by game scores and their behavior by a Turing test. The experimental results demonstrate that the proposed frameworks produce an AI agent with better performance than human players and more natural behavior than reinforcement learning agents.
{"title":"Developing game AI agent behaving like human by mixing reinforcement learning and supervised learning","authors":"Shohei Miyashita, Xinyu Lian, Xiao Zeng, Takashi Matsubara, K. Uehara","doi":"10.1109/SNPD.2017.8022767","DOIUrl":"https://doi.org/10.1109/SNPD.2017.8022767","url":null,"abstract":"Artificial intelligence (AI) agent created with Deep Q-Networks (DQN) can defeat human agents in video games. Despite its high performance, DQN often exhibits odd behaviors, which could be immersion-breaking against the purpose of creating game AI. Moreover, DQN is capable of reacting to the game environment much faster than humans, making itself invincible (thus not fun to play with) in certain types of games. On the other hand, supervised learning framework trains an AI agent using historical play data of human agents as training data. Supervised learning agent exhibits a more human-like behavior than reinforcement learning agents because of imitating training data. However, its performance is often no better than human agents. The ultimate purpose of AI agents is to entertain human players. A good performance and a humanlike behavior are important factors of the AI agents, and both of them should be achieved simultaneously. This study proposes frameworks combining reinforcement learning and supervised learning and we call then separated network model and shared network model. We evaluated their performances by the game scores and behaviors by Turing test. The experimental results demonstrate that the proposed frameworks develop an AI agent of better performance than human agent and natural behavior than reinforcement learning agents.","PeriodicalId":186094,"journal":{"name":"2017 18th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129510737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Local stereo matching under radiometric variations
Pub Date: 2017-06-01 | DOI: 10.1109/SNPD.2017.8022728
T. San, Nu War
Stereo matching has been an active research area in computer vision for decades. Most existing stereo matching algorithms assume that corresponding pixels have the same intensity or color in both images, but in real-world situations image color values are often affected by radiometric factors such as exposure and lighting variations. This paper introduces a robust stereo matching algorithm for images captured under varying radiometric conditions. Histogram equalization and binary singleton expansion are performed as preprocessing steps for local stereo matching. To eliminate the illumination discrepancy between the reference image and the corresponding image of a stereo pair, histogram equalization is first applied to remove the global discrepancy. As the second step, binary singleton expansion is performed to reduce noise and normalize the histogram results so that window-based cost computation is efficient. Afterwards, local pixel matching on the preprocessed stereo images is performed using the Sum of Absolute Differences (SAD) on intensity and gradient. Finally, the disparity map is obtained by left-right consistency checking and filtering with mean-shift segments. Experimental results show that the proposed algorithm reduces illumination differences and effectively improves the matching accuracy of stereo image pairs.
{"title":"Local stereo matching under radiometric variations","authors":"T. San, Nu War","doi":"10.1109/SNPD.2017.8022728","DOIUrl":"https://doi.org/10.1109/SNPD.2017.8022728","url":null,"abstract":"Stereo matching is an active research area in computer vision for decades. Most of the existing stereo matching algorithms assume that the corresponding pixels have the same intensity or color in both images. But in real world situations, image color values are often affected by various radiometric factors such as exposure and lighting variations. This paper introduces a robust stereo matching algorithm for images captured under varying radiometric conditions. In this paper, histogram equalization and binary singleton expansion are performed as preprocessing step for local stereo matching. For the purpose of eliminating the discrepancy of illumination between reference image and corresponding image in stereo pair, the histogram equalization is first explored to remove the global discrepancy. As the second step, binary singleton expansion is performed to reduce noise and normalize histogram results for window cost computation efficient. Afterwards, local pixel matching on preprocessed stereo images is performed with Sum of Absolute Difference (SAD) on intensity and gradient. Finally, the final disparity map is obtained by left-right consistency checking and filtering with mean shift segments. Experimental results show that the proposed algorithm can reduce illumination differences and improve the matching accuracy of stereo image pairs effectively.","PeriodicalId":186094,"journal":{"name":"2017 18th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127576601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}