Pub Date: 2020-09-26 | DOI: 10.1109/IICAIET49801.2020.9257813
Sharmeena Naido, R. R. Porle
For the past few decades, the evolution of human-computer interaction has significantly impacted face detection methods. In this paper, an experiment was conducted to detect frontal and side-view faces in indoor surveillance videos. The proposed method comprises skin colour segmentation, Haar feature extraction, and classification. Skin colour segmentation involves converting RGB images to the YCbCr colour space; histogram analysis is then performed to extract the skin pixels in each image. Haar features are extracted from the segmented regions. Finally, a cascaded AdaBoost classifier separates frontal from side-view faces while rejecting non-face regions. The proposed method successfully detected an average of 70.96% of frontal faces. Detection of side-view faces, however, was weaker, with an average of 32.67%.
Title: "Face Detection Using Colour and Haar Features for Indoor Surveillance"
Published in: 2020 IEEE 2nd International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)
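The skin segmentation step described above (RGB to YCbCr conversion followed by skin-pixel extraction) can be sketched as follows. The paper derives its skin thresholds from histogram analysis; the fixed Cb/Cr ranges below are common literature values used here only as a placeholder assumption, and the function names are illustrative, not from the paper.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an RGB image (H, W, 3) to YCbCr using ITU-R BT.601 coefficients."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  16 + ( 65.481 * r + 128.553 * g +  24.966 * b) / 255
    cb = 128 + (-37.797 * r -  74.203 * g + 112.000 * b) / 255
    cr = 128 + (112.000 * r -  93.786 * g -  18.214 * b) / 255
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Binary mask of likely skin pixels via Cb/Cr thresholding
    (stand-in for the paper's histogram-derived thresholds)."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

The resulting mask restricts the search region before Haar features are computed, which is the usual motivation for a colour pre-filter.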
Pub Date: 2020-09-26 | DOI: 10.1109/IICAIET49801.2020.9257852
Naoto Kosaka, Yumi Wakita
We develop a conversation support system for the public community. Our concept is that supporting an elderly person's active life by assisting human-to-human conversation is more effective than providing a speech dialogue system. To use a conversation support system in an actual restaurant or lounge environment, the conversation of the target speakers near the microphone must be separated from the ambient noise. We have previously proposed a method to identify whether an utterance was spoken near the microphone or far from it, using the standard deviation of the fundamental frequency (SD-F0) and of the speech power level (SD-SP) for each utterance. In this paper, we evaluate the effectiveness of this identification method on actual free conversation using a Support Vector Machine (SVM). The precision rate for utterances near the microphone is 87.8%, which suggests that identification using the standard deviations of the fundamental frequency and speech power would remain effective in real environments. However, performance depends on utterance length, the stability of the F0 values in the part of the utterance above the threshold, and the position of the microphones. In future work, the evaluation should be repeated with more speakers and more varied situations to define a suitable system specification.
Title: "Evaluating target utterance identification method using practical free conversation"
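The per-utterance features described above (SD-F0 and SD-SP) could be extracted as in the sketch below; the abstract does not give the extraction details, so the handling of unvoiced frames (F0 = 0) is an assumption, and the two-element feature vector would then be fed to the SVM classifier the paper evaluates.

```python
import numpy as np

def utterance_features(f0_track, power_track):
    """Per-utterance features: standard deviation of the fundamental
    frequency (SD-F0) and of the speech power level (SD-SP).
    Frames with f0 == 0 are assumed unvoiced and excluded from SD-F0."""
    f0 = np.asarray(f0_track, dtype=float)
    voiced = f0[f0 > 0]
    sd_f0 = voiced.std() if voiced.size else 0.0
    sd_sp = np.asarray(power_track, dtype=float).std()
    return np.array([sd_f0, sd_sp])
```

The intuition behind the features: a nearby speaker addressing a conversation partner tends to show larger prosodic variation and power swings per utterance than distant background speech.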
Pub Date: 2020-09-26 | DOI: 10.1109/IICAIET49801.2020.9257811
M. Dabbagh, Mohsen Kakavand, Mohammad Tahir, A. Amphawan
Recent years have witnessed sizeable growth of Blockchain applications in enterprises. Blockchain transforms the traditional approach of storing and managing data in a single location into a decentralized ledger. Although many industries are keen on adopting Blockchain technology for enhanced transaction transparency, the performance of current Blockchain platforms remains unclear to stakeholders. Therefore, this research conducts an empirical study to evaluate the performance of two prominent Blockchain platforms, Hyperledger Fabric and Ethereum. The evaluation measures four metrics: success rate, average latency, throughput, and resource consumption. The experimental results of executing 100 transactions show that Hyperledger Fabric generally surpasses Ethereum across the four performance metrics. These results would assist practitioners in deciding which Blockchain platform best fits their IT systems' application requirements.
Title: "Performance Analysis of Blockchain Platforms: Empirical Evaluation of Hyperledger Fabric and Ethereum"
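Three of the four metrics above can be derived from simple per-transaction timing records, as in the sketch below. The record layout and the windowing of throughput over the first-submit-to-last-confirm span are assumptions for illustration, not the paper's measurement harness (resource consumption, the fourth metric, needs OS-level monitoring and is omitted).

```python
from dataclasses import dataclass

@dataclass
class TxRecord:
    submitted_at: float   # seconds since benchmark start
    confirmed_at: float   # seconds since benchmark start
    success: bool

def evaluate(records):
    """Success rate, average latency (s), and throughput (successful tx/s)
    over a batch of transaction records."""
    ok = [r for r in records if r.success]
    success_rate = len(ok) / len(records)
    avg_latency = sum(r.confirmed_at - r.submitted_at for r in ok) / len(ok)
    span = max(r.confirmed_at for r in ok) - min(r.submitted_at for r in records)
    throughput = len(ok) / span
    return success_rate, avg_latency, throughput
```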
Pub Date: 2020-09-26 | DOI: 10.1109/IICAIET49801.2020.9257830
G. M. Abro, V. Asirvadam, S. Zulkifli
This work presents a performance analysis comparing conventional fuzzy-based sliding mode control (F-SMC) with single-dimension fuzzy-sliding mode control (SDF-SMC) for an underactuated quadrotor craft. Fuzzy logic control (FLC) can resolve various issues of the sliding mode controller (SMC) for the quadrotor, such as reducing chattering noise and the Zeno phenomenon. However, FLC increases the required computational power and processing time, both of which depend directly on the list of rules defined for the FLC; these rules set the gains of the SMC. This paper presents an approach to convert the conventional two-dimensional FLC rule table into a single-dimension table, leading to a control algorithm referred to as single-input fuzzy-sliding mode control. Numerical simulations implemented in MATLAB-Simulink demonstrate that the SDF-SMC achieves the same control performance as the conventional F-SMC while reducing computational power and processing time.
Title: "Single-Input Fuzzy-Sliding Mode Control for an Underactuated Quadrotor Craft"
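One common way to collapse a 2-D fuzzy rule table over (error, error rate) into a 1-D table is the signed-distance method: both inputs are projected onto the distance from the sliding surface, and the rule table becomes a 1-D lookup. The abstract does not state which reduction the authors use, so the sketch below, including the λ value and the rule breakpoints, is purely an illustrative assumption.

```python
import numpy as np

def signed_distance(e, e_dot, lam=1.0):
    """Signed distance from the sliding surface s = e_dot + lam * e,
    collapsing the 2-D (e, e_dot) rule space to one variable."""
    return (e_dot + lam * e) / np.sqrt(1.0 + lam ** 2)

# Hypothetical 1-D rule table: breakpoints of d mapped to normalised gains.
D_POINTS = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
GAINS    = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])

def fuzzy_gain(e, e_dot):
    """1-D lookup with linear interpolation; triangular memberships on a
    uniform grid with centre-of-gravity defuzzification reduce to this."""
    d = signed_distance(e, e_dot)
    return float(np.interp(d, D_POINTS, GAINS))
```

A 5x5 rule table (25 rules) thus shrinks to 5 entries, which is the source of the computational saving the paper reports.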
Pub Date: 2020-09-26 | DOI: 10.1109/IICAIET49801.2020.9257860
Ivan Yong-Sing Lau, T. Chua, W. Lee, Chya-Wei Wong, T. Toh, H. Ting
Measurement of gait parameters typically requires a combination of a force plate and a motion tracking system, which restricts the measurements to the laboratory environment. Recent studies have investigated portable tracking alternatives such as Microsoft Kinect sensors. The present research collaborated with Sibu Hospital and KPJ Sibu Specialist Hospital to collect data from subjects. The law of cosines and the dot and cross products were used as primary measures to determine the magnitudes of the knee, ankle, and hip vectors and the angles formed between them. The results generated by the proposed knee osteoarthritis severity diagnostics system are presented, specifically demonstrating the analysis algorithm for the various gait parameters. In summary, the Microsoft Kinect v2 sensor can be utilised to capture subject movement, and the proposed knee osteoarthritis severity diagnostics system is a clinically feasible option for gait analysis.
Title: "Kinect-Based Knee Osteoarthritis Gait Analysis System"
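The angle computation the abstract describes (the angle formed at the knee by the hip-knee and ankle-knee vectors) can be sketched from 3-D Kinect joint positions via the dot product, which is equivalent to the law of cosines on the hip-knee-ankle triangle. The function name and coordinate convention are illustrative assumptions, not the authors' code.

```python
import numpy as np

def joint_angle(hip, knee, ankle):
    """Angle in degrees at the knee joint, from 3-D joint positions.
    cos(theta) = (v1 . v2) / (|v1| |v2|), with v1 = hip - knee and
    v2 = ankle - knee; the clip guards against rounding outside [-1, 1]."""
    v1 = np.asarray(hip, float) - np.asarray(knee, float)
    v2 = np.asarray(ankle, float) - np.asarray(knee, float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
```

A fully extended leg gives an angle near 180°, and reduced peak extension over a gait cycle is one cue for osteoarthritis severity grading.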
Pub Date: 2020-09-26 | DOI: 10.1109/IICAIET49801.2020.9257863
Pooi Shiang Tan, K. Lim, C. Lee
This paper presents a video-based human action recognition method leveraging a deep learning model. Prior to the filtering phase, the input images are pre-processed by converting them to grayscale. The region of interest containing the human performing the action is then cropped out by a pre-trained pedestrian detector, resized, and passed as the input image to the filtering phase. In this phase, the filter kernels are trained using a Sparse Autoencoder on natural images; convolution is then performed between the input image and the filter kernels. The filtered images are passed to the feature extraction phase, where the Histogram of Oriented Gradients descriptor encodes their local and global texture information. Lastly, in the classification phase, a Modified Hausdorff Distance is applied to classify each test sample to its nearest match based on the histograms. The performance of the algorithm is evaluated on three benchmark datasets, namely the Weizmann Action Dataset, the CAD-60 Dataset, and the Multimedia University (MMU) Human Action Dataset.
Title: "Human Action Recognition with Sparse Autoencoder and Histogram of Oriented Gradients"
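The final classification step above, nearest-match assignment under the Modified Hausdorff Distance (MHD) between sets of histogram descriptors, can be sketched as follows. The MHD formula is the standard Dubuisson-Jain definition; how the paper groups HOG histograms into sets is not stated in the abstract, so the per-cell-descriptor framing here is an assumption.

```python
import numpy as np

def modified_hausdorff(A, B):
    """Modified Hausdorff Distance between two sets of feature vectors:
    max of the two directed mean-of-minimum distances."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise
    return float(max(d.min(axis=1).mean(), d.min(axis=0).mean()))

def classify(test_feats, gallery):
    """Assign the label whose exemplar set minimises MHD to the test set."""
    return min(gallery, key=lambda lbl: modified_hausdorff(test_feats, gallery[lbl]))
```

Unlike the classical Hausdorff distance, the mean-of-minima form is robust to single outlier descriptors, which suits noisy per-frame features.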