Using Hidden Markov Models for accelerometer-based biometric gait recognition
Pub Date : 2011-03-04. DOI: 10.1109/CSPA.2011.5759842
C. Nickel, C. Busch, S. Rangarajan, Manuel Mobius
Biometric gait recognition based on accelerometer data is still a new field of research. It has the merit of offering an unobtrusive and hence user-friendly method for authentication on mobile phones. Most publications in this area are based on extracting cycles (two steps) from the gait data, which are later used as features in the authentication process. In this paper the application of Hidden Markov Models is proposed instead; these have already been implemented successfully in speaker recognition systems. The advantage is that no error-prone cycle extraction has to be performed: the accelerometer data can be used directly to construct the model and thus form the basis for successful recognition. Testing this method on accelerometer data of 48 subjects, recorded using a commercial off-the-shelf mobile phone, a false non-match rate (FNMR) of 10.42% at a false match rate (FMR) of 10.29% was obtained. This is half the error rate obtained when applying an advanced cycle-extraction method to the same data set in previous work.
{"title":"Using Hidden Markov Models for accelerometer-based biometric gait recognition","authors":"C. Nickel, C. Busch, S. Rangarajan, Manuel Mobius","doi":"10.1109/CSPA.2011.5759842","DOIUrl":"https://doi.org/10.1109/CSPA.2011.5759842","url":null,"abstract":"Biometric gait recognition based on accelerometer data is still a new field of research. It has the merit of offering an unobtrusive and hence user-friendly method for authentication on mobile phones. Most publications in this area are based on extracting cycles (two steps) from the gait data which are later used as features in the authentication process. In this paper the application of Hidden Markov Models is proposed instead. These have already been successfully implemented in speaker recognition systems. The advantage is that no error-prone cycle extraction has to be performed, but the accelerometer data can be directly used to construct the model and thus form the basis for successful recognition. Testing this method with accelerometer data of 48 subjects recorded using a commercial of the shelve mobile phone a false non match rate (FNMR) of 10.42% at a false match rate (FMR) of 10.29% was obtained. This is half of the error rate obtained when applying an advanced cycle extraction method to the same data set in previous work.","PeriodicalId":282179,"journal":{"name":"2011 IEEE 7th International Colloquium on Signal Processing and its Applications","volume":"817 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132031605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of critical successful factors model for spatial data infrastructure implementation
Pub Date : 2011-03-04. DOI: 10.1109/CSPA.2011.5759875
S. A. Al Shamsi, A. Ahmad, G. Desa
Many professionals think that predesigned solutions can solve the problem regardless of the nature, individual attributes, and culture of different countries. Developed countries have large computing infrastructures which make data handling and sharing through local and global networks easy and routine for every user. Developing and underdeveloped countries, however, usually lack such computing infrastructure. A poorly performing computer network can be a major reason for not having an effective system to share and handle geospatial data. Therefore a good understanding of the critical success factors (CSFs) of a given national spatial data infrastructure (NSDI) is important to improve the NSDI framework and achieve its effectiveness. The main aim of this study is to develop a primary CSFs model derived from a scientific point of view; the researchers therefore designed a CSFs model to measure the effectiveness of SDIs. An extensive literature review was carried out to establish a primary CSFs model consisting of six main categories and their respective criteria. This primary model was developed using different types of criteria, which helped to determine the primary CSFs as follows. High-priority CSFs include organization, coordination and institutional agreements; strategic planning management; communication and computing infrastructure; on-line access services and web mapping; awareness; standards in general; financial support; and spatial data availability. Second-priority factors include the legal aspect, market demand and needs for service provision, policies, effective mechanisms, vision, participants, leadership and political support, new technologies, user satisfaction and user involvement, education, expertise, interoperability, socio-political stability, culture, economic and living standards, information availability, metadata availability through the internet, and data updating. Low-priority factors were eliminated.
{"title":"Development of critical successful factors model for spatial data infrastructure implementation","authors":"S. A. Al Shamsi, A. Ahmad, G. Desa","doi":"10.1109/CSPA.2011.5759875","DOIUrl":"https://doi.org/10.1109/CSPA.2011.5759875","url":null,"abstract":"Many professionals think that predesigned solutions could solve the problem regardless the nature, individual attributes and culture of the different countries. Developed countries have huge computing infrastructures which make data handling and sharing through local and global networks easy and mandatory to every user. However, usually developing and undeveloped countries is lack of computing infrastructures. A poor running computer network could be a major problem of not having an effective system to share and handle geospatial data. Therefore a good understanding of the critical successful factors (CSFs) of a given national geospatial data infrastructure (NSDI) is important to improve and obtain effectiveness of the NSDI framework. The main aim of this study is to develop primary CSFs model derived from scientific points of view. Therefore the researchers designed a CSFs model in order to measure SDIs effectiveness. Extensive literature review has been made to establish a primary CSFs model consisting of six main categories and their respective criteria. This primary model was developed using different types of criteria. The developed criteria helped to determine the primary CFSs and they are as follows: high priority CSFs which include organization, coordination and institutional agreements, strategic planning management, communication and computing infrastructure, on-line access service and web mapping, awareness, standards in general, financial support and spatial data availability. Other factors were considered as second priority which include: legal aspect, market demand and needs for service providing, policies, effective mechanism, vision, participants, leadership and political support, new technologies, user's satisfaction and user's involvement, education, expertise, interoperability, socio-political satiability, culture, economical and living standards, information availability, metadata availability through the internet and data updating. The low priority factors were eliminated.","PeriodicalId":282179,"journal":{"name":"2011 IEEE 7th International Colloquium on Signal Processing and its Applications","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117197012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel prosthetic hand control approach based on genetic algorithm and wavelet transform features
Pub Date : 2011-03-04. DOI: 10.1109/CSPA.2011.5759889
Mohammad Karimi, H. Pourghassem, G. Shahgholian
This paper presents a novel approach to optimizing a pattern recognition system using a genetic algorithm (GA) to identify the type of hand motion, employing artificial neural networks (ANNs) with high performance and accuracy suited for practical implementations. To this end, electromyographic (EMG) signals were obtained from sixteen locations on the forearm of six subjects in ten hand motion classes. In the first step of feature extraction from the forearm EMG signals, the wavelet packet transform (WPT) is used to generate a wavelet decomposition tree from which the WPT coefficients are extracted. In the second step, the standard deviation of the wavelet packet coefficients of the EMG signals is taken as the feature vector for training the ANN. To improve the algorithm, the GA was employed to determine the best values for the “mother wavelet function”, the “decomposition level of wavelet packet analysis”, and the “number of neurons in the hidden layer”, resulting in a high-speed, precise two-layer ANN with a particularly small structure. This proposed network, despite its small size, can recognize ten hand motions with a recognition accuracy of over 98% and also improves the stability and reliability of the system for practical use.
{"title":"A novel prosthetic hand control approach based on genetic algorithm and wavelet transform features","authors":"Mohammad Karimi, H. Pourghassem, G. Shahgholian","doi":"10.1109/CSPA.2011.5759889","DOIUrl":"https://doi.org/10.1109/CSPA.2011.5759889","url":null,"abstract":"This paper presents a novel approach to optimize pattern recognition system using genetic algorithm (GA) to identify the type of hand motion employing artificial neural networks (ANNs) with high performance and accuracy suited for practical implementations. To achieve this approach, electromyographic (EMG) signals were obtained from sixteen locations on the forearm of six subjects in ten hand motion classes. In the first step of feature extraction of forearm EMG signals, WPT is utilized to generate a wavelet decomposition tree from which WPT coefficients are extracted. In the second step, standard deviation of wavelet packet coefficients of EMG signals is considered as the feature vector for training purposes of the ANN. To improve the algorithm, GA was employed to optimize the algorithm in such a way that to determine the best values for “mother wavelet function”, “decomposition level of wavelet packet analysis”, and “number of neurons in hidden layer” concluded in a high-speed, precise two-layer ANN with a particularly small-sized structure. This proposed network with a small size can recognize ten hand motions with recognition accuracy of over 98% and also resulted in improvement of stability and reliability of the system for practical considerations.","PeriodicalId":282179,"journal":{"name":"2011 IEEE 7th International Colloquium on Signal Processing and its Applications","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122554367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Single infra-red sensor technique for line-tracking autonomous mobile vehicle
Pub Date : 2011-03-04. DOI: 10.1109/CSPA.2011.5759864
N. M. Arshad, M. F. Misnan, Noorfadzli Abdul Razak
Floor-based line-tracking techniques are widely implemented in many autonomous mobile vehicle applications in industry, such as transportation of goods within an enclosed warehouse or factory, and are also popular in competition robots. Many line-tracking systems utilize several kinds of discrete sensors such as reflective infra-red LEDs, light-dependent resistors (LDRs), and multi-array inductance sensors. Multiples of these discrete sensors have to be arranged close together at the front of the vehicle, facing the floor, to trace the profile of the line. A minimum of two sensors is normally required for an effective line-detection algorithm, especially to determine whether the vehicle's position is to the left or to the right of the line. This paper proposes a new approach that uses only one discrete sensor for a mobile vehicle to detect a line having two colour shades on a white background surface. The results show that the single sensor allows the vehicle to maneuver just as effectively along the line.
{"title":"Single infra-red sensor technique for line-tracking autonomous mobile vehicle","authors":"N. M. Arshad, M. F. Misnan, Noorfadzli Abdul Razak","doi":"10.1109/CSPA.2011.5759864","DOIUrl":"https://doi.org/10.1109/CSPA.2011.5759864","url":null,"abstract":"Floor-based line-tracking technique is widely implemented for use in many autonomous mobile vehicle applications in the industries such as for transportation of goods within an enclosed warehouse or factories. It is also popular in competition robots. Many line-tracking systems utilize several kinds of discrete sensors such as the reflective infra-red LED, light dependent resistor (LDR), and multi-array inductance. Multiple of these discrete sensors are required to be arranged closed together in front of the vehicle and facing the floor to trace the profile of the line. A minimum of two sensors are required to ensure effectiveness of the line detection algorithm, especially to determine the vehicle's position, either at left or at right with respect to line. This paper proposed a latest idea of using only one discrete sensor for mobile vehicle to detect a line having two colour shades on a white background surface. The results show that single sensor managed to allow the vehicle to maneuver as effectively on the line.","PeriodicalId":282179,"journal":{"name":"2011 IEEE 7th International Colloquium on Signal Processing and its Applications","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122093227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive coding based on a new slice type in surveillance systems
Pub Date : 2011-03-04. DOI: 10.1109/CSPA.2011.5759862
Li Mu-chen, Xin Jin, S. Goto
In surveillance systems, low-bit-rate video coding is necessary to reduce storage and bandwidth consumption, especially for high-definition video. In [1], we proposed an adaptive coding method that codes the active regions with traditional slices and the background with the proposed S slice, respectively. The S slice contains fewer syntax elements than the conventional P slice. This paper further analyzes the factors that influence S-slice coding. Moreover, a control module is proposed to enable S-slice coding and to find the best partition pattern between active regions and background that yields the largest bit reduction [3], [4].
{"title":"Adaptive coding based on a new slice type in surveillance systems","authors":"Li Mu-chen, Xin Jin, S. Goto","doi":"10.1109/CSPA.2011.5759862","DOIUrl":"https://doi.org/10.1109/CSPA.2011.5759862","url":null,"abstract":"In surveillance systems, low bit rate video coding is necessary to reduce the storage and bandwidth consumption, especially for high definition video coding. In [1], we proposed an adaptive coding method to code the active regions with traditional slices and background with the proposed S slice respectively. The S slice contains less syntax elements than the conventional P slice. This paper further analyzes the influence factors to the S slice coding. Moreover, a control module is proposed to enable the S slice coding and to find the best partition pattern between active regions and background that could get the most bit reduction [3], [4].","PeriodicalId":282179,"journal":{"name":"2011 IEEE 7th International Colloquium on Signal Processing and its Applications","volume":"174 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131572258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A cascade detector for rapid face detection
Pub Date : 2011-03-04. DOI: 10.1109/CSPA.2011.5759863
Ning Jiang, Wenxin Yu, Shaopeng Tang, S. Goto
In recent years, the LBP-feature-based SVM detector and the Haar-feature-based cascade detector have been two efficient types of face detectors. In this paper, we propose improvements to the Haar-feature-based cascade detector. First, we describe a new feature for the cascade detector, called the Separate Haar Feature. Secondly, we describe a new decision algorithm for cascade detection that improves the detection rate. There are three key contributions. The first is the “Separate Haar Feature”, which adds a don't-care area between the rectangles of a Haar feature. The second is the algorithm for selecting the best width for this don't-care area. Finally, we propose a new decision algorithm that bases its decision on more than a single stage result in the cascade detection process, in order to improve the detection rate.
{"title":"A cascade detector for rapid face detection","authors":"Ning Jiang, Wenxin Yu, Shaopeng Tang, S. Goto","doi":"10.1109/CSPA.2011.5759863","DOIUrl":"https://doi.org/10.1109/CSPA.2011.5759863","url":null,"abstract":"In recent years, LBP feature based svm detector and Haar feature based cascade detector are the two types of efficient detectors for face detection. In this paper, we proposed to improve the performance on Haar feature based cascade detector. First, we describe a new feature for cascade detector. The feature is called Separate Haar Feature. Secondly, we describe a new decision algorithm in cascade detection to improve the detection rate. There are three key contributions. The first is “Separate Haar Feature”, which adds a don't-care area between the rectangles of Haar Feature. The second is the algorithm for selecting the best width for this don't-care area. Finally, we proposed a new decision algorithm which makes the decision by not only a stage result in cascade detection process to improve the detection rate.","PeriodicalId":282179,"journal":{"name":"2011 IEEE 7th International Colloquium on Signal Processing and its Applications","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127797144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online sliding-window based for training MLP networks using advanced conjugate gradient
Pub Date : 2011-03-04. DOI: 10.1109/CSPA.2011.5759854
H. Izzeldin, V. Asirvadam, N. Saad
This paper investigates the performance of conjugate gradient algorithms with a sliding-window approach for training a multilayer perceptron (MLP). Online learning is used when the system under investigation is time-varying or when it is not convenient to obtain a full history of offline data about the system variables. A sliding-window framework is proposed to combine the robustness of offline learning with the ability of online learning to track time-varying elements of the process under investigation. A sliding-window-based second-order conjugate gradient algorithm (SWCG) is presented, and its performance is compared with a sliding-window-based first-order backpropagation algorithm (SWBP).
{"title":"Online sliding-window based for training MLP networks using advanced conjugate gradient","authors":"H. Izzeldin, V. Asirvadam, N. Saad","doi":"10.1109/CSPA.2011.5759854","DOIUrl":"https://doi.org/10.1109/CSPA.2011.5759854","url":null,"abstract":"This paper investigates the performance of conjugate gradient algorithms with sliding-window approach for training multilayer perceptron (MLP). Online learning is implemented when the system under investigation is time varying or when it is not convenient to obtain a full history of offline data about the system variables. Sliding window framework is proposed to combine the robustness of offline learning with the ability of online learning to track time varying elements of the process under investigation. A sliding window based second order conjugate gradient algorithms SWCG is presented. The performance of SWCG is compared with a sliding window based first order back propagation SWBP.","PeriodicalId":282179,"journal":{"name":"2011 IEEE 7th International Colloquium on Signal Processing and its Applications","volume":"2012 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121079829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The identification of low fatigue damage using fuzzy double clustering framework
Pub Date : 2011-03-04. DOI: 10.1109/CSPA.2011.5759869
Z. Nopiah, M. H. Osman, S. Abdullah, M. N. Baharin
Identifying damaging or non-damaging events in long records of fatigue data is at the crux of recapitulating pristine data. In this article, a fuzzy double clustering framework (DCf) is used to classify fatigue segments by exploiting two typical statistical features: kurtosis and the standard deviation. In the first stage, segments are assigned to a number of similar groups to generate multi-dimensional prototypes. The resulting multi-dimensional prototypes are then projected onto the feature space of each input variable. On each dimension, hierarchical clustering is applied to extract the information granules. For ease of interpretability, the granules are translated into a set of antecedent-consequent rules by means of fuzzy set theory, where two distinct classes, namely low and high, with different degrees of evidence are assigned to the model output. The results reveal that the fatigue segments can be classified according to the values of kurtosis and standard deviation within a specific range, and that this classification can further form part of a fatigue data editing process.
{"title":"The identification of low fatigue damage using fuzzy double clustering framework","authors":"Z. Nopiah, M. H. Osman, S. Abdullah, M. N. Baharin","doi":"10.1109/CSPA.2011.5759869","DOIUrl":"https://doi.org/10.1109/CSPA.2011.5759869","url":null,"abstract":"Identifying damaging or non-damaging events in long records of fatigue data is a crux of recapitulating pristine data. In this article, a fuzzy double clustering framework (DCf) is utilized to classify the fatigue segment by exploiting two typical statistical features; kurtosis and the standard deviation. In the first stage, segments are assigned to a number of similar groups to generate multi-dimensional prototypes. Then, the resulting multi-dimensional prototypes are projected onto each featuring space of the input variables. On each dimension, a hierarchical clustering is applied to extract the information granules. For ease of interpretability, the granules are translated into a set of antecedent-consequent rules by means of a fuzzy set theory where for the model output, two distinct classes namely low and high with different degrees of evidence are assigned. The results reveal that the fatigue segments could be classified according to the value of kurtosis and standard deviation in a specific range where further, it can be a part of a fatigue data editing process","PeriodicalId":282179,"journal":{"name":"2011 IEEE 7th International Colloquium on Signal Processing and its Applications","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125753504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design and manufacture an ultrasonic dispersion system with automatic frequency adjusting property
Pub Date : 2011-03-04. DOI: 10.1109/CSPA.2011.5759903
Javad Abbaszadeh Bargoshadi, Herlina Binti Abdul Rahim, Sahar Sarrafi, Ruzairi Bin Abdul Rahim
In this paper a novel ultrasonic dispersion system for cleaning applications or for dispersing particles mixed in a liquid is proposed. The frequency band of the designed system is 30 kHz wide: the frequency of the ultrasonic wave sweeps from 30 kHz to 60 kHz in 100 Hz steps. One advantage of the manufactured system over other similar systems available on the market is that it can transfer the maximum, optimum ultrasonic energy into the liquid tank with high efficiency over the whole usage time of the system. The ultrasonic transducers used in this system to generate the ultrasonic wave are air-coupled ceramic piezoelectric transducers with a nominal maximum power of 50 W. A piezoelectric transducer produces the maximum ultrasonic amplitude at its resonance frequency, so the system is designed to operate continuously at the resonance frequency of the piezoelectric transducers. This is achieved by a control system consisting of two major parts, a sensing part and a controlling part: Hall-effect current sensors are used as the sensing part, and the control program is implemented on AVR microcontrollers. The control algorithm of the program is also presented in this paper. The manufactured ultrasonic dispersion system consists of nine piezoelectric transducers, so that it can produce a total of 450 W of ultrasonic energy.
{"title":"Design and manufacture an ultrasonic dispersion system with automatic frequency adjusting property","authors":"Javad Abbaszadeh Bargoshadia, Herlina Binti Abdul Rahimb, Sahar Sarrafic, Ruzairi Bin, Abdul Rahimd","doi":"10.1109/CSPA.2011.5759903","DOIUrl":"https://doi.org/10.1109/CSPA.2011.5759903","url":null,"abstract":"In This paper a novel ultrasonic dispersion system for the cleaning application or dispersing of particles which are mixed in liquid, has been proposed. The frequency band of designed system is 30 kHz so that the frequency of ultrasonic wave sweeps from 30 kHz to 60 kHz with 100 Hz steps. One of the superiority of manufactured system in compare with the other similar systems which are available in markets is that this system can transfer the maximum and optimum energy of ultrasonic wave inside the liquid tank with the high efficiency in the whole of the usage time of the system. The used ultrasonic transducers in this system as the generator of ultrasonic wave is the type of air coupled ceramic ultrasonic piezoelectric with the nominal maximum power 50 watt. The frequency characteristic of applied piezoelectric is that it produces the maximum amplitude of ultrasonic wave on the resonance frequency, so this system is designed to work on resonance frequency of piezoelectric, continuously. This is done by the use of control system which is consisted of two major parts, sensing part and controlling part. In this system the Hall Effect current sensors is used as the sensing part and the controlling program is implemented on AVR microcontrollers. In addition, the control algorithm of program is presented in this paper. The manufactured ultrasonic dispersion system is consisted of 9 piezoelectrics so that it can produce 450 watt ultrasonic energy, totally.","PeriodicalId":282179,"journal":{"name":"2011 IEEE 7th International Colloquium on Signal Processing and its Applications","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127728540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating Cadastral GIS Database into GPS Navigation System for Locating Land Parcel Location in cadastral surveying
Pub Date : 2011-03-01. DOI: 10.1109/CSPA.2011.5759924
R. Ghazali, Z. Latif, Abdul Rauf Abdul Rasam, A. Samad
Cadastral surveys deal with one of the oldest and most fundamental facets of human society: ownership of land. They are the surveys that create, mark, define, retrace, or re-establish the boundaries and subdivisions of public lands in Malaysia. Knowing exactly where the boundary lines are is important for many different activities. If there is any intention of constructing a building on a property, conducting a cadastral survey is required to avoid any possibility of encroaching onto neighbouring land. This type of survey can ensure that fences and other improvements are located in exactly the right position within the boundary. Originally used to ensure reliable land valuation and taxation, today these surveys are mainly used to resolve boundary disputes. The boundary mark is important when starting a cadastral survey job, because its coordinates, as well as bearings and distances, are needed to determine all the boundaries of a new or existing parcel. One of the common issues in starting a new cadastral survey job is finding the location of the work site or lot parcel. It may take a long time to refer to the certified plan and locate the site manually; the surveyor may need considerable time to find the work site from the certified plan alone, especially in an unfamiliar area, and may take several days to locate it. The surveyor may also need to find the land owner to confirm the location of the lot parcel. A portable GPS integrated with the cadastral GIS database can therefore help surveyors save time in reaching the lot parcel and in locating the boundary marks, since the lot parcel can now be navigated to on the portable GPS. This paper therefore studies the integration of land parcel data from the Malaysian cadastral database (PDUK) into a GPS navigation system to assist surveyors in locating a parcel based on the lot number of the land.
{"title":"Integrating Cadastral GIS Database into GPS Navigation System for Locating Land Parcel Location in cadastral surveying","authors":"R. Ghazali, Z. Latif, Abdul Rauf Abdul Rasam, A. Samad","doi":"10.1109/CSPA.2011.5759924","DOIUrl":"https://doi.org/10.1109/CSPA.2011.5759924","url":null,"abstract":"Cadastral surveys deal with one of the oldest and most fundamental facets of human society, which is ownership of land. They are the surveys that create, mark, define, retrace, or reestablish the boundaries and subdivisions of the public lands in Malaysia. Knowing exactly where the boundary lines are is important for many different activities. If there any consideration of constructing a building on the property, conducting a cadastral survey is a requirement to avoid any possibility of encroaching into a neighboring land. This type of survey can ensure that fences and other improvements are located in exactly the right position within the boundary. Originally used to ensure reliable land valuation and taxation, today these surveys are mainly used to solve boundary disputes. The boundary mark is important in starting a Cadastral survey job. This is because the coordinate as well as bearing and distance is a necessity to start the survey job in determining all the boundary of a new or existing parcel. One of the common issues in starting a new cadastral survey job is finding the location of the work site or lot parcel. It may take a long time to refer on the certified plan and locate the site manually. The surveyor may take time to go and find the location of the work site only by referring the certified plan especially in the area that not really familiarize with it. They may take a few days in locating the area. They also need to find the land owner to conform the location of the lot parcel. Therefore, using this new portable GPS that has been integrated with cadastral GIS database, it can help surveyor to save their time to reach at lot parcel. This automatically saves their time in locating the boundary mark. It helped surveyor when the lot parcel now can also be navigated in this portable GPS. Therefore, in this paper will study on the integration of land parcel data in the Malaysian Cadastral database (PDUK) into GPS system to assist surveyor in locating parcel location based lot number of the land.","PeriodicalId":282179,"journal":{"name":"2011 IEEE 7th International Colloquium on Signal Processing and its Applications","volume":"27 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129765479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}