Centroid detection in ophthalmic applications
DOI: 10.1109/ANZIIS.2001.974115
D. R. Iskander, M. Collins, B. Davis
Monochromatic aberrations of the human eye can be objectively measured with an aberroscope or a Hartmann-Shack sensor, from which the wavefront aberrations can be calculated using displacements in a grid image. These displacements are determined from the centroid locations of the grid points. Often, in optometry practice, the acquired images are of low resolution and low intensity. Current methods for detecting centroids in such images are essentially based on a Canny-Deriche oriented edge-detecting filter, but the performance of this approach has been found to be insufficient in some clinical applications. We propose an alternative method for detecting centroids in grid images based on the watershed transformation. The proposed methodology accurately detects the number of grid points for the subsequent estimation of the centroid locations.
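To make the pipeline concrete, here is a minimal sketch of watershed-based grid-point centroid detection, assuming standard scientific Python tools (scipy, scikit-image); the smoothing sigma, peak spacing and Otsu thresholding are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def grid_centroids(image):
    """Detect grid-point centroids in a low-intensity grid image."""
    smoothed = gaussian(image, sigma=2)             # suppress sensor noise
    mask = smoothed > threshold_otsu(smoothed)      # keep bright grid points
    # Seed the watershed with one local maximum per grid point.
    peaks = peak_local_max(smoothed, min_distance=5, labels=mask.astype(int))
    markers = np.zeros(image.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Flood the inverted intensity surface from the markers.
    labels = watershed(-smoothed, markers, mask=mask)
    # Intensity-weighted centroid of each watershed region.
    return ndi.center_of_mass(smoothed, labels, np.arange(1, labels.max() + 1))
```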
{"title":"Centroid detection in ophthalmic applications","authors":"D. R. Iskander, M. Collins, B. Davis","doi":"10.1109/ANZIIS.2001.974115","DOIUrl":"https://doi.org/10.1109/ANZIIS.2001.974115","url":null,"abstract":"Monochromatic aberrations of the human eye can be objectively measured with an aberroscope or a Hartmann-Shack sensor, which can calculate the wavefront aberrations using displacements in a grid image. These displacements are determined from the centroid locations of the grid points. Often, in optometry practice, the acquired images are of low resolution and low intensity. Current methods for detecting centroids in such images are essentially based on a Canny-Deriche oriented edge-detecting filter. However, its performance has been found to be insufficient in some clinical applications. We propose an alternative method for detecting centroids in grid images based on watershed transformation. The proposed methodology accurately detects the number of grid points for the subsequent estimation of the centroid locations.","PeriodicalId":383878,"journal":{"name":"The Seventh Australian and New Zealand Intelligent Information Systems Conference, 2001","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116833742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
New method for QRS-wave recognition in ECG using MART neural network
DOI: 10.1109/ANZIIS.2001.974093
A. Behrad, K. Faez
Recognition of the QRS wave in the ECG signal is an important stage in ECG signal processing: most ECG noise-removal algorithms and automatic ECG interpretation systems need to detect these points, and in practice they must be detected from noisy signals. We have developed a QRS-wave recognition system using a MART (multi-channel ART) neural network. Because the method uses signals from two ECG leads for detection, it has low sensitivity to noise. We tested the method on both noiseless and noisy ECG signals and compared the results against an earlier method based on an ART2 neural network. The results show that our method performs well on noisy signals.
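The abstract does not detail the MART network itself, so the sketch below illustrates only the two-lead principle it relies on: the QRS complex appears in both leads while noise is largely uncorrelated across them, so multiplying per-lead slope-energy envelopes suppresses single-lead artefacts. The window length and threshold fraction are assumptions, and this generic candidate detector is not the MART classifier.

```python
import numpy as np

def qrs_candidates(lead1, lead2, fs, win_ms=120):
    """Return sample indices of likely QRS complexes from two ECG leads."""
    win = max(int(fs * win_ms / 1000), 1)

    def slope_energy(x):
        d = np.diff(x, prepend=x[0])                           # differentiate
        return np.convolve(d * d, np.ones(win), mode="same")   # integrate over a window

    combined = (slope_energy(np.asarray(lead1, float)) *
                slope_energy(np.asarray(lead2, float)))        # cross-lead agreement
    above = combined > 0.2 * combined.max()                    # crude fixed-fraction threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1         # rising edges mark candidates
```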
{"title":"New method for QRS-wave recognition in ECG using MART neural network","authors":"A. Behrad, K. Faez","doi":"10.1109/ANZIIS.2001.974093","DOIUrl":"https://doi.org/10.1109/ANZIIS.2001.974093","url":null,"abstract":"Recognition of QRS-Wave in the ECG signal is one of the important stages for ECG signal processing and most of the ECG noise removal algorithms, and automatic ECG interpreter systems need to detect these points. In most cases ECG signals are noisy and we need to detect these points using noisy signals. We have developed a QRS-wave recognition system using MART (multi-channel ART) neural network. In this method signal of two leads of ECG is used for detection, so our method has low sensitivity to noises. We tested our method for noiseless and noisy ECG signals and we compared results against those of an older one, which uses ART2 neural network. Results showed that our method has good results for noisy signals.","PeriodicalId":383878,"journal":{"name":"The Seventh Australian and New Zealand Intelligent Information Systems Conference, 2001","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129357855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spatially based application of the minimum cross-entropy thresholding algorithm to segment the pectoral muscle in mammograms
DOI: 10.1109/ANZIIS.2001.974058
M. Mašek, R. Chandrasekhar, C. Desilva, Y. Attikiouzel
A threshold-based algorithm is presented for the extraction of the pectoral muscle edge in mediolateral oblique view mammograms. The minimum cross-entropy thresholding algorithm is applied to local areas around the pectoral muscle to determine a series of thresholds as a function of area size. Using a model image, it is shown that an inflection point in this function corresponds to a threshold that separates the pectoral muscle from the rest of the breast. Post-processing is performed on the mammograms to eliminate false-positive inflection points, and a straight line is fitted to the detected pectoral boundary to smooth the jaggedness caused by the non-uniform intensity of the pectoral muscle edge.
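As a rough illustration, the threshold-versus-area-size curve can be sketched with scikit-image's threshold_li, which implements Li's minimum cross-entropy method; the square-window geometry and the seed point inside the pectoral muscle region are assumptions.

```python
import numpy as np
from skimage.filters import threshold_li

def threshold_curve(image, center, sizes):
    """Minimum cross-entropy threshold as a function of local area size.

    center: (row, col) seed point near the pectoral muscle.
    sizes:  increasing half-widths of the square local area.
    """
    r, c = center
    thresholds = []
    for s in sizes:
        patch = image[max(r - s, 0):r + s, max(c - s, 0):c + s]
        thresholds.append(threshold_li(patch))
    # An inflection point in this curve is taken as the threshold that
    # separates the pectoral muscle from the rest of the breast.
    return np.array(thresholds)
```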
{"title":"Spatially based application of the minimum cross-entropy thresholding algorithm to segment the pectoral muscle in mammograms","authors":"M. Mašek, R. Chandrasekhar, C. Desilva, Y. Attikiouzel","doi":"10.1109/ANZIIS.2001.974058","DOIUrl":"https://doi.org/10.1109/ANZIIS.2001.974058","url":null,"abstract":"A threshold-based algorithm is presented for the extraction of the pectoral muscle edge in mediolateral oblique view mammograms. The minimum cross-entropy thresholding algorithm is applied to local areas around the pectoral muscle to determine a series of thresholds as a function of area size. Using a model image it is shown that art inflection point in this function corresponds to a threshold that will separate the pectoral muscle from the rest of the breast. Post processing is performed on mammograms to eliminate false positive points of inflection and a straight line is fitted to the detected pectoral boundary in order to smooth jaggedness caused by the non-uniform intensity of the pectoral muscle edge.","PeriodicalId":383878,"journal":{"name":"The Seventh Australian and New Zealand Intelligent Information Systems Conference, 2001","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126133656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rule extraction based on rough set theory combined with genetic programming and its application to medical data analysis
DOI: 10.1109/ANZIIS.2001.974109
Y. Hassan, E. Tazaki
A methodology for using rough sets for preference modeling in decision problems is presented in this paper, where we introduce a new approach for deriving knowledge rules from medical databases based on rough sets combined with genetic programming. Genetic programming is one of the newest techniques in artificial intelligence, while rough set theory (Z. Pawlak, 1982) is a rapidly developing branch of artificial intelligence and soft computing. At first glance, the two methodologies have nothing in common: rough sets represent knowledge in terms of attributes, semantic decision rules, and so on, whereas genetic programming attempts to automatically create computer programs from a high-level statement of the problem requirements. In spite of these differences, it is worthwhile to incorporate both approaches into a combined system; the challenge is to get as much as possible from this association.
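For the rough-set half of the hybrid, a minimal sketch of lower and upper approximations under an indiscernibility relation is given below; the genetic-programming rule-construction step is not reproduced, and the data layout (objects as dicts of attribute values) is an assumption.

```python
from collections import defaultdict

def approximations(objects, attrs, decision):
    """Lower/upper approximation of a decision class.

    objects:  list of dicts mapping attribute name -> value.
    attrs:    condition attributes defining indiscernibility.
    decision: set of object indices belonging to the decision class.
    """
    # Group objects that are indiscernible on the chosen attributes.
    blocks = defaultdict(set)
    for i, obj in enumerate(objects):
        blocks[tuple(obj[a] for a in attrs)].add(i)
    lower, upper = set(), set()
    for block in blocks.values():
        if block <= decision:        # block certainly inside the class
            lower |= block
        if block & decision:         # block possibly inside the class
            upper |= block
    return lower, upper
```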
{"title":"Rule extraction based on rough set theory combined with genetic programming and its application to medical data analysis","authors":"Y. Hassan, E. Tazaki","doi":"10.1109/ANZIIS.2001.974109","DOIUrl":"https://doi.org/10.1109/ANZIIS.2001.974109","url":null,"abstract":"A methodology for using rough sets for preference modeling in decision problems is presented in this paper, where we introduce a new approach for deriving knowledge rules from medical databases based on rough sets combined with genetic programming. Genetic programming is one of the newest techniques in applications of artificial intelligence. Rough set theory (Z. Pawluk, 1982), is nowadays rapidly developing branch of artificial intelligence and soft computing. At first glance, the two methodologies have nothing in common. Rough sets construct the representation of knowledge in terms of attributes, semantic decision rules, etc. On the other hand, genetic programming attempts to automatically create computer programs from a high-level statement of the problem requirements. However, in spite of these differences, it is interesting to try to incorporate both approaches into a combined system. The challenge is to get as much as possible from this association.","PeriodicalId":383878,"journal":{"name":"The Seventh Australian and New Zealand Intelligent Information Systems Conference, 2001","volume":"142 10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124327239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Codebooks for signature verification and handwriting recognition
DOI: 10.1109/ANZIIS.2001.974081
Ho Kuen Kiat, H. Schroder, G. Leedham
In this paper we consider and assess the concept of using codebooks of curves to characterise a person's handwriting, an approach similar to the codebook methods applied successfully in speech recognition. The handwritten signatures are scanned as binary images at 200 dpi, thinned to a single-pixel width and characterised as a set of curves. Matching of signatures is achieved using a curve similarity measure. Experiments on a set of 120 handwritten signatures from six writers (20 per writer), including some forgeries, indicate that the technique has potential. While it does not currently perform as well as state-of-the-art signature verifiers, there are numerous improvements that can be made, and a number of refinements are proposed for discussion and further research.
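A hedged sketch of the fixed pipeline stages named above (binarise, thin to single-pixel width, compare) follows; since the paper's codebook construction and curve similarity measure are not specified in the abstract, a symmetric Hausdorff distance between skeleton point sets stands in as a placeholder metric.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize
from scipy.spatial.distance import directed_hausdorff

def skeleton_points(gray):
    """Binarise a scanned signature and thin it to single-pixel width."""
    binary = gray < threshold_otsu(gray)      # ink is darker than paper
    return np.argwhere(skeletonize(binary))   # (N, 2) skeleton pixel coordinates

def signature_distance(gray_a, gray_b):
    """Placeholder similarity: symmetric Hausdorff distance of skeletons."""
    a, b = skeleton_points(gray_a), skeleton_points(gray_b)
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
```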
{"title":"Codebooks for signature verification and handwriting recognition","authors":"Ho Kuen Kiat, H. Schroder, G. Leedham","doi":"10.1109/ANZIIS.2001.974081","DOIUrl":"https://doi.org/10.1109/ANZIIS.2001.974081","url":null,"abstract":"In this paper we consider and assess the concept of using codebooks of curves to characterise a persons handwriting. This is similar to the successful methods by which handwriting has been applied to speech recognition. The handwritten signatures are scanned as binary images at 200 dpi, thinned to a single pixel width and characterised as a set of curves. Matching of signatures is achieved using a curve similarity measure. Experiments on a set of 120 handwritten signatures from six writers (20 per writer), including some forgeries, indicate the technique has potential. Whilst it does not currently perform as well as state-of-the-art signature verifiers there are numerous improvements that can be made to the technique. A number of refinements are proposed for discussion and further research.","PeriodicalId":383878,"journal":{"name":"The Seventh Australian and New Zealand Intelligent Information Systems Conference, 2001","volume":"278 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121318901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Minimization of number of gait trials for predicting the stabilized minimum toe clearance during gait using artificial neural networks
DOI: 10.1109/ANZIIS.2001.974117
J. Cai, R. Begg, R. Best, T. Karaharju-Huisman, S. Taylor
Artificial neural networks (ANNs) have been increasingly used in gait analysis. The back-propagation neural network in particular has been widely used for gait data analysis because of its good predictive power in supervised training mode. In this paper an artificial neural network was used to model the relationship between minimum toe clearance (MTC) characteristics derived from a small number of gait trials and those derived from a 30-minute continuous treadmill walk. The ANN was trained and tested separately with nine statistics calculated from 10 different data segment lengths as inputs, and the mean and standard deviation of the MTC data calculated from the 30-minute gait trials as outputs. The results suggest that a trained ANN can accurately predict the stabilized MTC data: even a segment of only five gait cycles was predicted with about 80% accuracy, and prediction accuracy improved with the length of the input data segment.
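A minimal sketch of this regression set-up, assuming scikit-learn's MLPRegressor in place of the paper's back-propagation network: nine summary statistics of a short MTC segment in, the stabilized 30-minute mean and standard deviation out. The particular nine statistics and the network size are illustrative assumptions, as the abstract does not list them.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def mtc_features(segment):
    """Nine illustrative summary statistics of a short MTC series."""
    q25, q50, q75 = np.percentile(segment, [25, 50, 75])
    return [np.mean(segment), np.std(segment), np.min(segment), np.max(segment),
            q25, q50, q75, np.ptp(segment), float(len(segment))]

def train_mtc_model(segments, targets):
    """segments: short MTC series (one value per gait cycle);
    targets: (mean, std) of MTC over the full 30-minute walk."""
    X = np.array([mtc_features(s) for s in segments])
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    model.fit(X, np.asarray(targets))
    return model
```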
{"title":"Minimization of number of gait trials for predicting the stabilized minimum toe clearance during gait using artificial neural networks","authors":"J. Cai, R. Begg, R. Best, T. Karaharju-Huisman, S. Taylor","doi":"10.1109/ANZIIS.2001.974117","DOIUrl":"https://doi.org/10.1109/ANZIIS.2001.974117","url":null,"abstract":"Artificial neural networks (ANN) have been increasingly used in gait analysis. Back-propagation neural network has been widely used because of its good predicting power in supervised training mode for gait data analysis. In this paper an artificial neural network was used to model relationships between minimum toe clearance (MTC) characteristics derived from fewer gait trials and that derived from gait data during a 30-minute continuous treadmill walking. The ANN was separately trained and tested with nine statistics calculated from 10 different data segment lengths as inputs, and the mean and standard deviation of MTC data calculated from 30 minutes gait trials as outputs. The results suggest that a trained ANN is able to accurately predict stabilized MTC data, even a 5-gait cycles' data predicted with about 80% accuracy and the prediction accuracy was seen to improve with increase in the length of input data segment.","PeriodicalId":383878,"journal":{"name":"The Seventh Australian and New Zealand Intelligent Information Systems Conference, 2001","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115955815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of histogram enhancement techniques used in conjunction with wavelet compression methods for improved signal processing of child trauma images
DOI: 10.1109/ANZIIS.2001.974052
G. Anastassopoulos, I. Stephanakis, S. Gardikis
The performance of wavelet compression techniques, in conjunction with enhancement methods such as histogram normalization, is evaluated in this paper for a child trauma emergency case. The compression techniques are applied both to the original digitized CT images and to enhanced images; the processing consists of applying enhancement and compression in alternative orders. The criteria used for rating the results are both subjective and objective. For the original images, the evaluation based on objective criteria is comparable to that based on subjective criteria. Although the relative mean square error (MSE) of the enhanced images compared with the original images is slightly worse at the same compression ratio, the subjective ratings indicate better results for the enhanced images: the expert observers see more diagnostic information in the enhanced images than in the original ones. Furthermore, enhancing images after compression yields higher MSEs than enhancing before compression. Other medical image processing operators will be investigated in the future to determine the optimum operator that retains the diagnostic information while assuring the best wavelet compression.
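The order-of-operations comparison can be sketched as below, assuming PyWavelets for the compression step and histogram equalization as a stand-in for histogram normalization; the wavelet, decomposition level and coefficient keep-ratio are assumptions, not the paper's settings.

```python
import numpy as np
import pywt
from skimage import exposure

def wavelet_compress(img, keep=0.05, wavelet="db4", level=3):
    """Keep only the largest `keep` fraction of wavelet coefficients."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr[np.abs(arr) < np.quantile(np.abs(arr), 1 - keep)] = 0   # hard-threshold
    rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                        wavelet)
    return rec[:img.shape[0], :img.shape[1]]    # crop padding from odd-sized images

def compare_orders(img):
    """MSE against the original for enhance-then-compress vs the reverse."""
    mse = lambda a, b: float(np.mean((a - b) ** 2))
    enhanced = exposure.equalize_hist(img)      # histogram-normalisation stand-in
    enhance_first = mse(img, wavelet_compress(enhanced))
    compress_first = mse(img, exposure.equalize_hist(wavelet_compress(img)))
    return enhance_first, compress_first
```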
{"title":"Evaluation of histogram enhancement techniques used in conjunction with wavelet compression methods for improved signal processing of child trauma images","authors":"G. Anastassopoulos, I. Stephanakis, S. Gardikis","doi":"10.1109/ANZIIS.2001.974052","DOIUrl":"https://doi.org/10.1109/ANZIIS.2001.974052","url":null,"abstract":"The performance of wavelet compression techniques, in conjunction with enhancement methods like histogram normalization, is evaluated in this paper for a child trauma emergency case. The compression techniques are applied to original CT digitized images as well as to enhanced images. The processing of the images consists of alternative application of enhancement and compression techniques. The criteria used for rating the results are both subjective and objective. The evaluation based on objective criteria is comparable to the evaluation based on subjective criteria, as far as the original images are concerned. Despite the fact that the relative Mean-Square-Error (MSE) of the enhanced images as compared to the original images is slightly worse for the same compression ratio, the rating according to subjective criteria indicates better results for the enhanced images. The expert observers view more diagnostic information in the enhanced images rather than in the original ones. Furthermore, enhancing original images after compression yields higher MSEs than vice versa. Other operators in medical image processing are to be investigated in the future in order to determine the optimum operator, which retains the diagnostic information and, at the same time, assures the best compression using wavelets.","PeriodicalId":383878,"journal":{"name":"The Seventh Australian and New Zealand Intelligent Information Systems Conference, 2001","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127644645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive prediction using local area training
DOI: 10.1109/ANZIIS.2001.974103
S. Marusic, G. Deng
An adaptive prediction technique is proposed which is based on training the prediction coefficients on a local causal training area. The training technique is applied in conjunction with the recursive LMS (RLMS) algorithm, incorporating feedback of the prediction error to update the predictor coefficients. The local area training is shown to improve the stability of the RLMS algorithm, and the implementation's ability to track nonstationary data is demonstrated through improved prediction accuracy. Applied to lossless coding of images, the proposed technique, using RLMS and adaptive arithmetic coding, produces results comparable to state-of-the-art techniques.
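Below is a minimal sketch of LMS-style adaptive prediction over a causal neighbourhood (W, N, NW, NE), with the coefficients updated from the prediction error as coding proceeds; a normalised LMS update stands in for the paper's RLMS variant, and the local-area training schedule is simplified away.

```python
import numpy as np

def nlms_predict(img, mu=0.1):
    """Per-pixel causal predictions whose residuals would be entropy-coded."""
    img = np.asarray(img, float)
    w = np.full(4, 0.25)                        # initial predictor weights
    pred = np.zeros_like(img)
    for r in range(1, img.shape[0]):
        for c in range(1, img.shape[1] - 1):
            # Causal neighbours already known to the decoder: W, N, NW, NE.
            x = np.array([img[r, c - 1], img[r - 1, c],
                          img[r - 1, c - 1], img[r - 1, c + 1]])
            pred[r, c] = w @ x
            err = img[r, c] - pred[r, c]        # prediction error (residual)
            w += mu * err * x / (x @ x + 1e-8)  # normalised LMS update from the error
    return pred
```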
{"title":"Adaptive prediction using local area training","authors":"S. Marusic, G. Deng","doi":"10.1109/ANZIIS.2001.974103","DOIUrl":"https://doi.org/10.1109/ANZIIS.2001.974103","url":null,"abstract":"An adaptive prediction technique is proposed which is based on the training of prediction coefficients using a local causal training area. The training technique is applied in conjunction with the recursive LMS (RLMS) algorithm, incorporating feedback of the prediction error to update the predictor coefficients. The local area training is shown to improve the stability of the RLMS algorithm. The ability of the implementation to track nonstationary data is demonstrated through the improved accuracy of predictions. Applied to lossless coding; of images, the proposed technique using RLMS and adaptive arithmetic coding produces results comparable to state of the art techniques.","PeriodicalId":383878,"journal":{"name":"The Seventh Australian and New Zealand Intelligent Information Systems Conference, 2001","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124272445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A portable device for optically recognizing Braille. I. Hardware development
DOI: 10.1109/ANZIIS.2001.974063
I. Murray, T. Dias
This is the first of a pair of papers describing a prototype portable device for optically scanning embossed Braille and converting the scanned text to a binary Braille representation. The prototype has been developed in conjunction with the Association for the Blind (WA). An application that converts the literary Braille code to expanded text has also been implemented and is described in part II. The system uses a hand-held scanner that captures the embossed Braille image in real time via a linear 128-pixel CCD array; a Texas Instruments digital signal processor performs the recognition processing.
{"title":"A portable device for optically recognizing Braille. I. Hardware development","authors":"I. Murray, T. Dias","doi":"10.1109/ANZIIS.2001.974063","DOIUrl":"https://doi.org/10.1109/ANZIIS.2001.974063","url":null,"abstract":"This is the first of a pair of papers that describe a prototype portable device for optically scanning embossed Braille and conversion of the scanned text to binary Braille representation. This prototype has been developed in conjunction with the Association for the Blind (WA). An application to convert the literary Braille code to expanded text has also been implemented and is described in part II. The system developed utilises a hand held scanner that captures the embossed Braille image, in real time, via a linear 128-pixel CCD array. A Texas Instruments digital signal processor performs recognition processing.","PeriodicalId":383878,"journal":{"name":"The Seventh Australian and New Zealand Intelligent Information Systems Conference, 2001","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122402176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An intelligent and interactive biosignal analyser
DOI: 10.1109/ANZIIS.2001.974091
H. Hosseini
In this paper, the design and implementation of an intelligent biosignal analyser are discussed. If the complexity and costs associated with computer-aided diagnosis of biosignals can be reduced, the availability of such systems is likely to increase, and real-time implementation of biosignal analysis becomes achievable. Biosignal analysers are in demand in research, health care centres, and other organisations. A research project is being carried out at Auckland University of Technology to enhance the theoretical and experimental components of intelligent biosignal processing. The overall aim of this project is to implement on-line (real-time) processing techniques, based on neural networks, for intelligent, cost-effective, and easy-to-use biosignal analysers. The configuration allows flexibility in mapping biosignal inputs to output codes and also allows interactive applications to be created in software, so that users can control computer functions directly from their bioelectric signals. This approach is extremely valuable to people with severe neuromuscular handicaps. Providing easy access to such analysers will contribute to a reduction in the mortality rate from medical conditions that produce abnormal biosignals (e.g. cardiac failure).
{"title":"An intelligent and interactive biosignal analyser","authors":"H. Hosseini","doi":"10.1109/ANZIIS.2001.974091","DOIUrl":"https://doi.org/10.1109/ANZIIS.2001.974091","url":null,"abstract":"In this paper, the design and implementation processes of an intelligent biosignal analyser axe discussed. If the complexity and costs associated with computer-aided diagnosis of biosignal can be reduced, it is likely that the availability of these systems will increase. Moreover, the real-time implementation of biosignal analysis can be achieved. The Biosignal analysers are being demanded in research, health care centres, and other organisations. A research project is carrying on at Auckland University of Technology to enhance the theoretical and experimental components in the area of intelligent biosignal processing. The overall aim of this project is to implement on-line (real-time) processing techniques, based on neural networks, for intelligent, cost effective, and easy-to-use biosignal analysers. This configuration allows flexibility in mapping biosignal inputs to output code and also allows for interactive applications to be created in software. Therefore, it allows users to control computer functions directly from their bioelectric signals. This approach is extremely valuable to people with devastating neuromuscular handicaps. The idea of providing easy access to such analysers will contribute to a reduction in the mortality rate due to certain medical conditions, which produce abnormal biosignals (i.e. cardiac failure).","PeriodicalId":383878,"journal":{"name":"The Seventh Australian and New Zealand Intelligent Information Systems Conference, 2001","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116498723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}