World Academy of Science, Engineering and Technology, International Journal of Electrical, Computer, Energetic, Electronic and Communication Engineering: Latest Publications
Pub Date: 2019-01-01 | DOI: 10.17706/IJCCE.2019.8.1.32-39
Noura A Khreis, A. Lau, A. Al-jedai, Salma Al-khani, Ezdehar H. Alruwaili
The therapeutic duplication alert is a Clinical Decision Support System (CDSS) feature implemented to help physicians and other healthcare providers make clinical judgements about patients' therapy management and to decrease medication errors. However, physicians overrode these alerts at high rates because the alerts were deemed clinically insignificant. The volume of alerts fired by the system was high, leading to "alert fatigue", so the hospital administrators agreed to deactivate the alert. To assess the validity of this decision, this study analyzed the impact of therapeutic duplication alert deactivation on the medication error rate. A total of 593 electronic Medication Administration Records (eMARs) of hospitalized patients at a tertiary care hospital in Saudi Arabia were retrospectively screened: 297 eMARs from the period before the alert deactivation and 296 eMARs from the period after. The number and type of duplicate medication errors in each period were documented to determine whether there was a significant difference between the two periods. Medication errors were detected in 51 of 297 eMARs in the pre-deactivation period versus 47 of 296 in the post-deactivation period. A chi-square test showed no significant difference in the incidence of medication errors between the two periods (p = 0.672). We therefore concluded that there was no significant difference in medication errors after the therapeutic duplication alert deactivation. Integrating machine learning into the clinical decision support design is recommended, to filter duplicated and unimportant alerts and reduce physicians' alert fatigue.
Title: An Evaluation of Clinical Decision Support and Use of Machine Learning to Reduce Alert Fatigue
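The reported chi-square comparison can be reproduced from the counts given in the abstract alone. A minimal check using only the Python standard library (Pearson's statistic for a 2x2 table, without continuity correction):

```python
import math

# Reconstruction of the paper's chi-square test from the reported counts:
# 51/297 eMARs with errors before deactivation, 47/296 after.
pre_err, pre_total = 51, 297
post_err, post_total = 47, 296

# 2x2 contingency table: [errors, no errors] per period.
a, b = pre_err, pre_total - pre_err
c, d = post_err, post_total - post_err
n = a + b + c + d

# Pearson chi-square statistic for a 2x2 table (no continuity correction).
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# p-value for 1 degree of freedom: P(X > chi2) = erfc(sqrt(chi2 / 2)).
p = math.erfc(math.sqrt(chi2 / 2))
print(round(p, 3))  # 0.672, matching the reported p-value
```

Since p is far above 0.05, the conclusion that deactivation did not significantly change the error rate follows directly from these counts.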
Pub Date: 2019-01-01 | DOI: 10.17706/ijcce.2019.8.2.50-59
X. Zhu, J. Yang, Y. Li
The maneuverability of modern targets is becoming increasingly complex and variable, which places higher demands on the tracking performance of detection systems; stable and accurate tracking of maneuvering targets is especially critical. To address the problem that the statistical properties of detection-system noise are unknown while the target's state of motion is complex and variable, a new adaptive maneuvering target tracking algorithm is proposed. The algorithm combines adaptive Kalman filtering in the spherical coordinate system with its counterpart in the Cartesian coordinate system. The adaptive Kalman filter in the spherical coordinate system uses a Sage-Husa noise statistics estimator to estimate the statistical properties of the measurement noise. In the Cartesian coordinate system, the Singer model describes the target motion. Results from the spherical-coordinate adaptive Kalman filter are then used to achieve high-precision estimation of the target motion information. Simulation results show that the proposed algorithm achieves satisfactory tracking accuracy.
Title: Research on an Adaptive Maneuvering Target Tracking Algorithm
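The Sage-Husa measurement-noise estimator at the core of the spherical-coordinate filter can be sketched in one dimension. This is a minimal scalar illustration with a static state and invented noise values; it does not reproduce the paper's spherical/Cartesian combination or the Singer motion model.

```python
import math
import random

random.seed(0)

# Estimate a constant signal from noisy measurements while adapting the
# (unknown) measurement-noise variance R with a Sage-Husa style recursive
# estimator.  All values here are illustrative, not from the paper.
true_x, true_R = 10.0, 4.0
Q = 1e-4                      # process-noise variance (assumed small)
x, P, R = 0.0, 100.0, 1.0     # initial state, covariance, noise guess
b = 0.98                      # forgetting factor of the estimator

for k in range(500):
    z = true_x + random.gauss(0.0, math.sqrt(true_R))  # measurement
    P = P + Q                                # time update (static state, F = 1)
    d = (1 - b) / (1 - b ** (k + 1))         # Sage-Husa fading weight
    innov = z - x                            # innovation
    R = (1 - d) * R + d * (innov * innov - P)
    R = max(R, 1e-6)                         # keep the estimate positive
    K = P / (P + R)                          # measurement update
    x = x + K * innov
    P = (1 - K) * P

print(round(x, 2), round(R, 2))
```

Because the expected squared innovation equals P + R, the recursion converges toward the true measurement-noise variance while the filter simultaneously smooths the state.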
Pub Date: 2019-01-01 | DOI: 10.17706/ijcce.2019.8.4.184-194
Hongge Wei, Zhengjun Li
Title: Review and Prospect of the Development of Multi-beam Reflector Antennas
In this paper, we develop a sliding mode controller for the PUMA 600 manipulator robot. To control the remote robot, a teleoperation system was developed. This system includes two sites, local and remote, with the sliding mode controller installed at the remote site. The client requests a position through an interface and receives the actual positions after the remote robot has executed the task. The two sites are interconnected via the Internet. To verify the effectiveness of the sliding mode controller, it is compared with a classic PID controller. The developed approach is tested on a virtual robot, and the results confirm its high performance.
Title: Sliding Mode Control of an Internet Teleoperated PUMA 600 Robot
A. Ghoul, B. Ouamri, I. K. Bousserhane
Pub Date: 2018-03-01 | DOI: 10.5281/zenodo.1316313
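The sliding mode law can be illustrated on a single axis. The sketch below assumes a double-integrator joint model and invented gains (none of these values come from the paper): the error is driven onto the sliding surface s = e' + lam * e and then to zero, with a boundary layer replacing the pure sign function to limit chattering.

```python
# Minimal sliding-mode position controller for a double integrator
# (x'' = u), standing in for one joint of the manipulator.
x, v = 0.0, 0.0                 # position and velocity
x_d = 1.0                       # desired position (the "client request")
lam, k, phi = 2.0, 5.0, 0.05    # surface slope, switching gain, boundary layer
dt = 0.001

def sat(s, phi):
    """Saturated sign function: linear inside a boundary layer of width
    phi, which reduces the chattering of a hard sign switch."""
    return max(-1.0, min(1.0, s / phi))

for _ in range(5000):           # 5 s of simulated time
    e, de = x - x_d, v
    s = de + lam * e            # sliding surface
    u = -k * sat(s, phi)        # control law drives s (and hence e) to zero
    v += u * dt                 # integrate the double-integrator dynamics
    x += v * dt

print(abs(x - x_d) < 0.01)
```

Once the trajectory reaches the surface, the error decays exponentially at the rate lam regardless of the (bounded) plant dynamics, which is the robustness property that motivates comparing sliding mode control against PID.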
Pub Date: 2018-01-01 | DOI: 10.17706/IJCCE.2018.7.2.20-31
V. C. Gungor, Z. Aydın, Ramazan Karademir
To construct and evaluate intrusion detection systems, researchers are limited to only a few available public datasets unless they prepare their own. Although the prevalent KDDCUP'99 dataset enables comparative analysis among researchers, the community needs a new dataset that reflects new attack types in current high-speed networks. The aim of this study is to prepare a new alternative dataset for the community for the detection of denial-of-service attacks, and to analyze the performance of different data mining methods on this dataset. To develop the dataset, distributed DoS attacks were generated against a commercial website in a real network environment with millions of users from all over the world. In addition, a richer attack dataset was produced in a laboratory environment with the help of Labris Networks. After capturing the data, significant network features were identified, processed, and labeled with the related attack types. Furthermore, the performance of different data mining techniques was evaluated on this dataset, including binary classification, multi-class classification, outlier detection, feature selection methods, and hybrid approaches, using the following algorithms: K-Means clustering, Naïve Bayes, Decision Tree, Multilayer Perceptron, LibSVM, Random Forest, and Random Tree.
Title: Intelligent Anomaly Detection Techniques for Denial of Service Attacks
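Of the algorithms listed, Naïve Bayes is compact enough to sketch from scratch. The two "flow features" (packets per second, distinct source IPs) and all numbers below are invented stand-ins for the paper's dataset; the sketch only shows the mechanics of a Gaussian Naive Bayes DoS/benign classifier.

```python
import math
from collections import defaultdict

# Toy training set: (features, label) with synthetic flow statistics.
train = [
    ((120.0, 3.0), "benign"), ((150.0, 5.0), "benign"),
    ((110.0, 2.0), "benign"), ((140.0, 4.0), "benign"),
    ((9000.0, 800.0), "ddos"), ((12000.0, 950.0), "ddos"),
    ((11000.0, 700.0), "ddos"), ((9500.0, 900.0), "ddos"),
]

# Fit per-class, per-feature mean and variance plus a class prior.
rows_by_class = defaultdict(list)
for features, label in train:
    rows_by_class[label].append(features)

model = {}
for label, rows in rows_by_class.items():
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    vars_ = [sum((v - m) ** 2 for v in c) / len(c) + 1e-9
             for c, m in zip(cols, means)]
    model[label] = (means, vars_, len(rows) / len(train))

def log_gauss(x, m, var):
    # Log density of a Gaussian, used as the per-feature likelihood.
    return -0.5 * math.log(2 * math.pi * var) - (x - m) ** 2 / (2 * var)

def predict(features):
    # argmax over classes of log prior + sum of per-feature log likelihoods
    return max(model, key=lambda y: math.log(model[y][2]) +
               sum(log_gauss(v, m, s)
                   for v, m, s in zip(features, model[y][0], model[y][1])))

print(predict((130.0, 4.0)), predict((10000.0, 850.0)))  # benign ddos
```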
Pub Date: 2018-01-01 | DOI: 10.17706/ijcce.2018.7.3.32-44
Yunping Zheng, Tong Chang, M. Sarem
Image segmentation is a key step in medical image processing, since it affects the quality of the medical image in the follow-up steps. In practice, however, segmenting MRI images is difficult because medical images have poorly defined boundaries and usually contain more than one target area. In this study, an improved algorithm based on the fuzzy connectedness framework for medical images is developed. The improved algorithm combines adaptive fuzzy connectedness segmentation with multiple-seed selection. The algorithm also overcomes several problems that arise with manual seed selection, such as imprecise segmentation of individual target regions and the difficulty of completing the segmentation when the areas are not connected. To test the proposed method, original real images from a large hospital were analyzed. The results were evaluated with measures such as Dice's coefficient, the over-segmentation rate, and the under-segmentation rate, and show that the proposed method yields accurate segmentation boundaries on medical images at a low time cost. In conclusion, the proposed method is superior to traditional fuzzy connectedness segmentation methods for medical images.
Title: Improved Fuzzy Connectedness Segmentation Method for Medical Images with Multiple Seeds in MRI
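The fuzzy connectedness framework with multiple seeds can be sketched on a tiny synthetic image. The intensity-similarity affinity below is a common textbook choice, not the paper's adaptive affinity: a path's strength is its weakest affinity, a pixel's connectedness to a seed is its strongest path, and each pixel is labeled by the seed it is most connected to.

```python
import heapq

def fuzzy_connectedness(img, seed):
    """Map each pixel to its fuzzy connectedness to `seed`, computed with
    a max-min variant of Dijkstra's algorithm."""
    rows, cols = len(img), len(img[0])

    def affinity(a, b):  # high when adjacent intensities are similar
        return 1.0 - abs(img[a[0]][a[1]] - img[b[0]][b[1]]) / 255.0

    conn = {seed: 1.0}
    heap = [(-1.0, seed)]
    while heap:
        neg, p = heapq.heappop(heap)
        if -neg < conn.get(p, 0.0):
            continue                      # stale heap entry
        r, c = p
        for q in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= q[0] < rows and 0 <= q[1] < cols:
                strength = min(-neg, affinity(p, q))
                if strength > conn.get(q, 0.0):
                    conn[q] = strength
                    heapq.heappush(heap, (-strength, q))
    return conn

# Two flat regions (intensities 50 and 200) and one seed inside each.
img = [[50] * 3 + [200] * 3 for _ in range(4)]
seeds = [(0, 0), (0, 5)]
maps = [fuzzy_connectedness(img, s) for s in seeds]

def label(p):  # assign each pixel to the seed it is most connected to
    return max(range(len(seeds)), key=lambda i: maps[i].get(p, 0.0))

print(label((3, 1)), label((3, 4)))  # left pixel -> seed 0, right -> seed 1
```

Any path crossing the region boundary has its strength capped by the low cross-boundary affinity, so each pixel ends up labeled by the seed inside its own region, even with multiple disconnected targets.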
Pub Date: 2018-01-01 | DOI: 10.17706/IJCCE.2018.7.3.85-97
Abir Ahmed, Tamim Hossain, Kefayet Ullah, M. Kabir
A popular approach, called radiolocation, measures parameters of radio signals that travel between a Mobile Station (MS) and a set of fixed transceivers, which are subsequently used to estimate the location of the MS. The purpose of this research was to investigate the performance of Time of Arrival (TOA) based radiolocation for finding the location of an MS in CDMA cellular networks. Another aim was to find a suitable location estimation algorithm that uses the parameters measured by the radiolocation approach. Finally, the accuracy of radiolocation was examined by comparing two location estimation algorithms, Neural Networks and Least Squares, which were used to determine the location of the MS. The simulation results presented in the article suggest that the Neural Network algorithm provides better accuracy in position estimation.
Title: Time of Arrival (TOA) Based Radiolocation Architecture in CDMA Systems and Its Performance Analysis
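The Least Squares side of the comparison can be sketched for the noise-free case: with three fixed stations, subtracting one TOA range equation from the others yields a linear system in the MS coordinates. The station geometry and MS position below are invented for illustration and are not the paper's simulation setup.

```python
import math

# Three fixed stations and a ground-truth MS position used only to
# synthesize exact TOA ranges.
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
ms = (30.0, 40.0)
d = [math.dist(s, ms) for s in stations]

# Subtracting station 0's range equation from station i's linearizes the
# problem:  2(xi - x0)x + 2(yi - y0)y = d0^2 - di^2 + xi^2 + yi^2 - x0^2 - y0^2
(x0, y0), d0 = stations[0], d[0]
A, b = [], []
for (xi, yi), di in zip(stations[1:], d[1:]):
    A.append((2 * (xi - x0), 2 * (yi - y0)))
    b.append(d0**2 - di**2 + xi**2 + yi**2 - x0**2 - y0**2)

# Two equations, two unknowns: solve the 2x2 system with Cramer's rule.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
x = (b[0] * A[1][1] - b[1] * A[0][1]) / det
y = (A[0][0] * b[1] - A[1][0] * b[0]) / det
print(round(x, 6), round(y, 6))  # recovers (30.0, 40.0)
```

With more than three stations or noisy ranges, the same linear system becomes overdetermined and is solved in the least-squares sense, which is the estimator the paper compares against the neural network.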
Pub Date: 2018-01-01 | DOI: 10.17706/IJCCE.2018.7.4.145-166
H. Herath, G. Wimalaratne
Software developed from a poor design often introduces security issues that can spread into other phases of the Software Development Life Cycle (SDLC) if not addressed in the initial stages, potentially leading to major security breaches and the loss of valuable assets. Identifying and fixing security issues as early as possible is the most cost-effective way of implementing software security. This research proposes a proactive approach to building security into the product itself, with the aid of a new tool developed as a proof of concept. The proposed semi-automatic tool addresses limitations in current approaches to secure software engineering by providing visibility, tracking, awareness, and progress monitoring. Additionally, developers, architects, QA, BA, and management, as well as users, can participate in threat modeling and architectural security analysis, contributing their input to security engineering through the tool, which serves as an interactive platform, a knowledge base, and an integration platform. The Microsoft Threat Modeling Tool is used to generate the threat models. The tool extracts threat model information and produces detailed mitigations using known vulnerability databases and classification techniques. Developers can thus better understand potential threats and vulnerabilities when coding, and integration with a project management tool provides visibility and tracking of Building Security In throughout the SDLC.
Title: A Collaborative Platform Featuring Visibility, Tracking, Monitoring and Awareness for Building Security In.
Pub Date: 2018-01-01 | DOI: 10.17706/IJCCE.2018.7.3.98-106
Reynaldo E. Castillo, Paula Jean M. Castro, M. C. Aragon, Henry C. Macugay
Unfamiliarity with the school campus grounds causes common problems for new students and visitors: they tend to get lost inside the campus and arrive late to classes or meetings. Even when they navigate by themselves or ask security personnel, they often take a long time to reach their destination, or get lost anyway. This study targets new students and visitors, helping them explore and navigate the campus grounds through a mobile application with a user-friendly interface designed around the user experience. The application serves as a guide for navigating the school campus grounds. It uses Google Maps to track the user's current location and plots the route from origin to destination inside the Technological Institute of the Philippines Quezon City campus, applying a fuzzy logic algorithm to find the shortest route and a channel selection algorithm to find nearby users within a perimeter. The study followed the Rapid Application Development model to deliver the expected outputs. Surveys using a questionnaire drafted from ISO 25010 were conducted to determine the effectiveness of the developed application in terms of functionality, reliability, and efficiency. The average mean across all criteria was 4.12, interpreted as very effective on a 5-point Likert scale. It was therefore concluded that the mobile application is a useful, reliable, functional, and efficient tool for providing navigation and campus information to its users.
Title: TIP EXPRESS: An Android School Navigation Application
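The shortest-route step can be illustrated on a toy graph. The paper uses a fuzzy logic algorithm for this; the sketch below substitutes a plain Dijkstra search, the standard baseline for shortest routes, over invented buildings and walking-time weights.

```python
import heapq

# Toy campus graph: edge weights are walking minutes between buildings.
# Names and weights are made up for illustration.
campus = {
    "Gate":    {"Admin": 2, "Library": 5},
    "Admin":   {"Gate": 2, "Library": 2, "Gym": 7},
    "Library": {"Gate": 5, "Admin": 2, "Lab": 3},
    "Lab":     {"Library": 3, "Gym": 1},
    "Gym":     {"Admin": 7, "Lab": 1},
}

def shortest_route(graph, src, dst):
    """Return (total_cost, path) for the cheapest src -> dst route."""
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            if nxt not in seen:
                heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

print(shortest_route(campus, "Gate", "Gym"))
```

In the application this graph would be built from the campus map waypoints, with the route then drawn on top of the Google Maps view.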
Pub Date: 2018-01-01 | DOI: 10.17706/IJCCE.2018.7.3.45-57
Kazem Qazanfari, Abdou Youssef
Bag of words, bigrams, or more complex combinations of words are among the most general and widely used features in text classification. However, in almost all real-world text classification problems, the distribution of the available training dataset for each class does not match the real distribution of the class concept, which reduces the accuracy of the classifiers. Let W(f) and R(f) be the discriminating power of feature f based on the world knowledge and the repository knowledge, respectively. Ideally, W(f) = R(f); in most situations, however, W(f) and R(f) are not equal and are sometimes quite different, because the repository knowledge and the world knowledge do not have the same statistics about the discriminating power of feature f. In this paper, this phenomenon is called inadequacy of knowledge, and we show how it can reduce the performance of text classifiers. To address this issue, a novel feature weighting method is proposed which combines the two bodies of knowledge, world knowledge and repository knowledge, using a particular transformation T. In this method, if both the world knowledge and the repository knowledge indicate a significantly high (resp. low) discriminating power for feature f, the weight of the feature is increased (resp. decreased); otherwise, the weight is determined by a linear combination of the two weights. Experimental results show that the performance of classifiers such as SVM, KNN, and Bayes improves significantly when the proposed feature weighting method is applied to contextual features such as bigrams and unigrams. Pruning some words from the dataset using the proposed feature weighting method can also improve classifier performance when the feature sets are created using Doc2vec.
Title: "Contextual Feature Weighting Using Knowledge beyond the Repository Knowledge"
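The weighting rule described in the abstract can be sketched in a few lines. This is an illustrative interpretation only: the paper's transformation T is not reproduced here, so the agreement thresholds, the boost/damp factors, and the mixing coefficient `alpha` are all hypothetical parameters chosen for the sketch.

```python
# Sketch of the abstract's idea: combine a feature's world-knowledge
# discriminating power w_f and repository-knowledge power r_f.
# HIGH/LOW thresholds and all factors below are assumed values, not the
# paper's actual transformation T.
HIGH, LOW = 0.7, 0.3

def combined_weight(w_f, r_f, boost=1.5, damp=0.5, alpha=0.5):
    """Return a combined weight for feature f.

    If both knowledge sources agree that f is highly discriminating,
    the weight is increased; if both agree it is weakly discriminating,
    the weight is decreased; otherwise a linear combination is used.
    """
    if w_f >= HIGH and r_f >= HIGH:
        return boost * max(w_f, r_f)          # agreement: strong feature
    if w_f <= LOW and r_f <= LOW:
        return damp * min(w_f, r_f)           # agreement: weak feature
    return alpha * w_f + (1 - alpha) * r_f    # disagreement: blend
```

In a classifier pipeline, such a weight would multiply the raw term frequency (or TF-IDF value) of each unigram or bigram feature before training, so that features whose discriminating power both knowledge sources agree on dominate the representation.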