This paper presents some modern methods of physiological identification of people, the importance of biometrics in company management, and the technical resources used in this field. The authors describe the achievements of biometrics and its efficiency in ensuring the security of data and equipment resources from an industrial management point of view. The advantages of solutions based on biometric methods are also given, together with evidence of their increasing significance in company activity and development.
{"title":"A Study on the Importance of Biometric Technique Selection in the Protection of Company Resources","authors":"A. Zajkowska, Wojciech Zimnoch, K. Saeed","doi":"10.1109/CISIM.2007.7","DOIUrl":"https://doi.org/10.1109/CISIM.2007.7","url":null,"abstract":"In this paper some modern methods of physiological identification of people, the importance of biometrics in company management, and technical resources used in this field are presented. Authors describe the achievements of biometrics and its efficiency in ensuring the security of data and equipment resources from industrial management point of view. Also given the advantages of the solutions that are based on biometric methods, and evidence showing their increasing significance in company activity and its development.","PeriodicalId":350490,"journal":{"name":"6th International Conference on Computer Information Systems and Industrial Management Applications (CISIM'07)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122696978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This research work aims to investigate the performance of a proposed wavelet-based image compression system. The scheme utilizes the 9/7 biorthogonal wavelet transform to decompose the image signal, then uses slightly modified run-length coding to compress the detail sub-bands. A hierarchical quantization scheme was applied to reduce the number of bits required to encode the wavelet coefficients. The test results indicate that the proposed compression scheme performs well while remaining simple.
{"title":"Using Wavelet Transform, DPCM and Adaptive Run-length Coding to Compress Images","authors":"Ban N. Thanoon","doi":"10.1109/CISIM.2007.74","DOIUrl":"https://doi.org/10.1109/CISIM.2007.74","url":null,"abstract":"This research work aims to investigate the performance of a suggested wavelet based image compression system. The scheme of the proposed system utilizes 9/7 biorthogonal wavelet transforms to decompose the image signal, then uses run-length coding, with a little modification, to compress the detail sub-bands. A hierarchal quantization scheme was applied to reduce the number of bits required to encode the wavelet coefficients. The test results indicate that the proposed compression scheme shows good performance aspects in addition to its simplicity.","PeriodicalId":350490,"journal":{"name":"6th International Conference on Computer Information Systems and Industrial Management Applications (CISIM'07)","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123377269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, the problem of semi-supervised segmentation of handwriting into isolated character images is considered. Semi-supervised segmentation means here that the character sequence constituting the word presented in the image is known, but the character boundaries are not given and need to be determined automatically. Semi-supervised word segmentation can be useful in the analytic, writer-dependent approach to handwriting recognition, where the training set for a personalized character classifier must be created for each writer from a corpus of that writer's text samples. The method described here first over-segments the word images into sequences of graphemes. Then a subdivision of these grapheme sequences is sought that yields sets of hypothetical character images maximizing the average similarity within the subsets corresponding to characters of the alphabet. This leads to a combinatorial optimization problem with an enormously large search space, for which a suboptimal solution can be found with an evolutionary algorithm. The sample character images extracted in this way can be used to train character classifiers. Some preliminary results of handwriting segmentation are presented in the paper and compared with fully supervised segmentation carried out by a human.
{"title":"Semi-Supervised Handwritten Word Segmentation Using Character Samples Similarity Maximization and Evolutionary Algorithm","authors":"J. Sas, Urszula Markowska-Kaczmar","doi":"10.1109/CISIM.2007.58","DOIUrl":"https://doi.org/10.1109/CISIM.2007.58","url":null,"abstract":"In this paper, the problem of semi-supervised handwriting segmentation into isolated character images is considered. Semi-supervised segmentation means here that the character sequence constituting a word presented on the image is known, but the character boundaries are not given and need to be automatically determined. The semi-supervised word segmentation can be useful in analytic writer-dependent approach to handwriting recognition, where the training set for personalized character classifier must be created for each writer from the text corpus consisting of text samples of an individual writer. The method described here over-segments the word images into sequences of graphemes in the first step. Then such grapheme sequences subdivision is sought, which results in the hypothetical character images sets maximizing average similarity in subsets corresponding to characters from the alphabet. It leads to the combinatorial optimization problem with enormously large search space. The suboptimal solution of this problem can be found using evolutionary algorithm. The sample character images extracted in this way can be used to train character classifiers. Some preliminary results of handwriting segmentation are presented in the paper and compared with fully supervised segmentation carried out by a human.","PeriodicalId":350490,"journal":{"name":"6th International Conference on Computer Information Systems and Industrial Management Applications (CISIM'07)","volume":"22 9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115565313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Finite difference flow modelling of runoff on a terrain surface has usually been done using a regular grid. This has various disadvantages: the regular pattern does not conform well to observed features such as watersheds, the runoff pattern is biased towards the grid axes, and the original data points are lost. We propose a flow modelling method using TIN models, in which a random Voronoi pattern is added to the original data. This avoids the issues of grid-based methods, as there is no axis bias, points may be added anywhere, and the original data points may be retained. Our flow model simply requires a set of "buckets" to hold the water (the Voronoi cells) and slope information to provide the local runoff rate (the Delaunay edges).
{"title":"Finite Difference Runoff Modelling Using \"Voronoi Buckets\"","authors":"M. Dakowicz, C. Gold","doi":"10.1109/CISIM.2007.30","DOIUrl":"https://doi.org/10.1109/CISIM.2007.30","url":null,"abstract":"Finite difference flow modelling of runoff on a terrain surface has usually been done using a regular grid. This has various disadvantages, as the regular pattern does not conform well to observed features such as watersheds, the runoff pattern is biased to the grid axes, and original data points are lost. We propose a flow modelling method using TIN models. A random Voronoi pattern is added to the original data. This avoids the issues of grid based methods, as there is no axis bias, points may be added anywhere and original data points may be retained. Our flow model simply requires a set of \"buckets \" to hold the water (the Voronoi cells) and slope information to provide the local runoff rate (the Delaunay edges).","PeriodicalId":350490,"journal":{"name":"6th International Conference on Computer Information Systems and Industrial Management Applications (CISIM'07)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125740349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The authentication of people using iris-based recognition is a rapidly developing technology. Iris recognition is feasible even for differentiating between identical twins. Though the iris color and the overall statistical quality of the iris texture may depend on genetic factors, the textural details are independent and uncorrelated for genetically identical iris pairs. Feature extraction and classification are therefore heavily based on the rich textural details of the iris.
{"title":"Iris Image Recognition","authors":"R. Choras","doi":"10.1109/CISIM.2007.44","DOIUrl":"https://doi.org/10.1109/CISIM.2007.44","url":null,"abstract":"The authentication of people using iris-based recognition is a widely developing technology. Iris recognition is feasible for use in differentiating between identical twins. Though the iris color and the overall statistical quality of the iris texture may be dependent on genetic factors, the textural details are independent and uncorrelated for genetically identical iris pairs. The feature extraction and classification are heavily based on the rich textural details of the iris.","PeriodicalId":350490,"journal":{"name":"6th International Conference on Computer Information Systems and Industrial Management Applications (CISIM'07)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121969255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Today's perpetually improving industrial automation technology brings new equipment, systems, solutions and semantics. Project requirements become greater, and the level of complexity, cost, documentation, and humbug tends to increase. Managing information becomes critical to project success. This paper offers solutions based on the use of 'nutshell' learning models: simplified presentations of complex or vague concepts, designed to accelerate the learning process and extend retention of the acquired knowledge. A brief review of some aspects of automation technologies follows. Some traps and urban myths surrounding industrial projects and automation are unveiled. The need to integrate technical and non-technical content, as well as to invest in the project's human capital, is stressed. Pragmatic management practices are pointed out. Case study stories address data integration challenges and lead to a 'meta-consulting' concept, where technical and non-technical aspects are quickly assessed to form the basis for powerful, rapid-engagement, priority-based solutions.
{"title":"Managing Information on Industrial Automation Projects","authors":"J. Jekielek","doi":"10.1109/CISIM.2007.46","DOIUrl":"https://doi.org/10.1109/CISIM.2007.46","url":null,"abstract":"Today's perpetually improving industrial automation technology brings new equipment, systems, solutions and semantics. Project requirements become greater and the level of complexity, cost, documentation, and humbug tend to increase. Managing information becomes critical for project success. This paper offers solutions based on the use of 'nutshell' learning models that are simplified presentations of complex or vague concepts, designed to accelerate the learning process and extend retention of the acquired knowledge. A brief review of some aspects of automation technologies follows. Some traps and urban myths surrounding industrial projects and automation are unveiled. Needs for integration of technical and nontechnical content, as well as an investment in project's zz hi capital\" are stressed. Pragmatic management si__k and practices are pointed out. Case study stories address data integration challenges and lead to a 'meta consulting' concept where technical and non-technical aspects are quickly assessed to form a basis for the powerful, rapid-engagement, priority-based solutions.","PeriodicalId":350490,"journal":{"name":"6th International Conference on Computer Information Systems and Industrial Management Applications (CISIM'07)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124970368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The paper presents the classification performance of an automatic electrocardiogram (ECG) classifier for the detection of abnormal beats, with a new concept for the feature extraction stage. Feature sets were based on ECG morphology and RR intervals. The configuration adopted Kohonen self-organizing maps (SOM) for the analysis and clustering of signal features. In this study, a classifier was developed with SOM and learning vector quantization (LVQ) algorithms using data from the records recommended by the ANSI/AAMI EC57 standard. The paper compares two strategies for classifying annotated QRS complexes: one based on the original ECG morphology features and the proposed new approach based on preprocessed ECG morphology features. Mathematical morphology filtering is used to preprocess the ECG signal, and the problem of choosing an appropriate structuring element for this filtering was studied. The performance of the algorithm is evaluated on the MIT-BIH Arrhythmia Database following the AAMI recommendations. With this method, the recognition of beats as either normal or arrhythmic was improved.
{"title":"Mathematical Morphology Based ECG Feature Extraction for the Purpose of Heartbeat Classification","authors":"P. Tadejko, W. Rakowski","doi":"10.1109/CISIM.2007.47","DOIUrl":"https://doi.org/10.1109/CISIM.2007.47","url":null,"abstract":"The paper presents the classification performance of an automatic classifier of the electrocardiogram (ECG) for the detection abnormal beats with new concept of feature extraction stage. Feature sets were based on ECG morphology and RR-intervals. Configuration adopted a Kohonen self-organizing maps (SOM) for analysis of signal features and clustering. In this study, a classifier was developed with SOM and learning vector quantization (LVQ) algorithms using the data from the records recommended by ANSI/AAMI EC57 standard. This paper compares two strategies for classification of annotated QRS complexes: based on original ECG morphology features and proposed new approach - based on preprocessed ECG morphology features. The mathematical morphology filtering is used for the preprocessing of ECG signal. The problem of choosing an appropriate structuring element of mathematical morphology filtering for ECG signal processing was studied. The performance of the algorithm is evaluated on the MIT-BIH Arrhythmia Database following the AAMI recommendations. Using this method the results of recognition beats either as normal or arrhythmias was improved.","PeriodicalId":350490,"journal":{"name":"6th International Conference on Computer Information Systems and Industrial Management Applications (CISIM'07)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133855213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The aim of the paper was to identify the frequency synchronization phenomenon occurring between different groups of e-learning system users. The changes over time in the daily logs of administration workers, teachers, and students have been analyzed. Correlation, Fourier, and wavelet analyses have been used to identify the nature of the data. Based on the wavelet analysis, a new criterion for evaluating the frequency synchronization of two chaotic systems has been proposed. A modified wavelet power spectrum has been used to identify the frequency synchronization between the chaotic behaviors of the three groups of users of the e-learning web system. The obtained results have shown that the proposed method is useful for analyzing frequency synchronization among user groups of an e-learning system.
{"title":"On Frequency Synchronization of e-Learning Web System Users","authors":"R. Mosdorf, B. Ignatowska","doi":"10.1109/CISIM.2007.52","DOIUrl":"https://doi.org/10.1109/CISIM.2007.52","url":null,"abstract":"The aim of the paper was the identification of frequency synchronization phenomenon occurring between different groups of e-learning system users. The changes in time of daily logs of: administration workers, teachers and students have been analyzed. The following analyses: correlation, Fourier and wavelet have been used to identify the nature of data. Basing on the wavelet analysis it has been proposed the new criterion of evaluation of frequency synchronization of two chaotic systems. The modified wavelet power spectrum has been used to identify the frequency synchronization between chaotic behaviors of three groups of users of e-learning web system. Obtained results have shown that the proposed method is useful for analyzing the phenomena of frequency synchronization of user groups of e-learning system.","PeriodicalId":350490,"journal":{"name":"6th International Conference on Computer Information Systems and Industrial Management Applications (CISIM'07)","volume":"179 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129598446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, a new approach to face recognition using independent component analysis (ICA) and a hybrid Flexible Neural Tree (FNT) is put forward. To improve the quality of the face images, a series of image preprocessing techniques, including histogram equalization, edge detection and geometrical transformation, is used. ICA based on kernel principal component analysis (KPCA) and FastICA is employed to extract features, and the hybrid FNT is used to identify the faces. To accelerate the convergence of the FNT and improve the quality of the solutions, extended compact genetic programming (ECGP) and particle swarm optimization (PSO) are applied to optimize the FNT structure and parameters. The experimental results show that the proposed framework is efficient for face recognition.
{"title":"ICA Based on KPCA and Hybrid Flexible Neural Tree to Face Recognition","authors":"Jin Zhou, Yang Liu, Yuehui Chen","doi":"10.1109/CISIM.2007.37","DOIUrl":"https://doi.org/10.1109/CISIM.2007.37","url":null,"abstract":"In this paper, a new approach using independent component analysis (ica) and hybrid Flexible Neural Tree (FNT) is put forward for face recognition. To improve the quality of the face images, a series of image pre-processing techniques, which include histogram equalization, edge detection and geometrical transformation are used. The ICA based on Kernel principal component analysis (KPCA) and FastICA is employed to extract features, and the Hybrid FNT is used to identify the faces. To accelerate the convergence of the FNT and improve the quality of the solutions, the extended compact genetic programming (ECGP) and particle swarm optimization (PSO) are applied to optimize the FNT structure and parameters. The experimental results show that the proposed framework is efficient for face recognition.","PeriodicalId":350490,"journal":{"name":"6th International Conference on Computer Information Systems and Industrial Management Applications (CISIM'07)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122766571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the paper we try to answer whether the Gaussian distribution - widely called the 'normal' distribution - is really basic, natural and normal. In particular, we investigate how this assumption conforms with the distribution of real data, namely daily returns of some stock indexes. It was the authors' former experience that, when looking at the distributions of real data, it was very difficult to find a 'normal', i.e. Gaussian, distribution. The data, by their nature, are heterogeneous. If so, then the data should be modelled taking their possible heterogeneity into account. This can be done using mixture models, with mixtures composed of a finite or infinite number of components. Student's t (univariate or multivariate) is one prominent example of a distribution which may be obtained as a continuous mixture of an infinite number of Gaussian distributions. The considerations are illustrated by an application to financial time series, namely daily returns of the WIG20 and S&P500 indexes. We show why normality (i.e. 'Gaussianity') should be rejected and why the t distribution is plausible.
{"title":"Should Normal Distribution be Normal? The Student's T Alternative","authors":"A. Bartkowiak","doi":"10.1109/CISIM.2007.59","DOIUrl":"https://doi.org/10.1109/CISIM.2007.59","url":null,"abstract":"In the paper we try to answer, whether the Gaussian distribution - called widely the 'normal' distribution - is really basic, natural and normal. In particular, we investigate how the above statement conforms with the distribution of real data, namely daily returns of some stock indexes. It was the authors former experience that, when looking at the distributions of real data, it was very difficult to find there a 'normal', i.e. Gaussian distribution. The data, by their nature, are heterogeneous. If so, then the data should be modelled taking into account their possible heterogeneity. This can be done using mixture models - with mixtures composed from finite or infinite number of components. Students' T (univariate or multivariate) is one prominent example of distributions which may be obtained as a mixture of infinitesimal number of Gaussian distributions. The considerations are illustrated by an example of application to financial time series, namely daily returns of the indexes WIG20 and S&P500. We show, why the normality (i.e. 'Gaussianity') should be rejected and why the 't' distribution is plausible.","PeriodicalId":350490,"journal":{"name":"6th International Conference on Computer Information Systems and Industrial Management Applications (CISIM'07)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116407543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}