Pub Date: 2007-12-01, DOI: 10.1109/ICCITECHN.2007.4579389
Md Atiqur Rahman Ahad, T. Ogata, J. Tan, Hyoungseop Kim, S. Ishikawa
In this paper, we compare the basic motion history image (MHI) and our multi-directional motion history image (DMHI) for human gesture recognition. One constraint of the MHI is that it erases past motion by overwriting new motion onto it, creating a template that does not represent the motion properly. We solve this overwrite problem by employing motion descriptors derived from optical flow vectors: we separate each optical flow vector into four components corresponding to the four directions up, down, left and right. We employ Hu moments to compute the feature vectors for both the MHI and the DMHI methods, and we experimentally verify the superiority of the DMHI method in terms of recognition rate for complex motion. We also analyze the importance of the motion energy image for both methods, and across different motions we find that the contribution of the energy image is more evident in the DMHI technique than in the MHI technique.
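The directional decomposition described above can be sketched as follows: each per-pixel flow vector is split into four non-negative components, and a separate decaying history image is kept per direction, so new motion in one direction no longer overwrites the history of another. Function and parameter names are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of the DMHI update, assuming a simple per-pixel threshold-and-decay
# rule. `tau` (history value for fresh motion) and `thresh` are illustrative.

def update_dmhi(histories, flow, tau=10, thresh=0.5):
    """histories: dict direction -> 2-D list; flow: 2-D list of (dx, dy)."""
    rows, cols = len(flow), len(flow[0])
    for r in range(rows):
        for c in range(cols):
            dx, dy = flow[r][c]
            # split the flow vector into four non-negative components
            comps = {
                "right": max(dx, 0.0), "left": max(-dx, 0.0),
                "down":  max(dy, 0.0), "up":   max(-dy, 0.0),
            }
            for d, mag in comps.items():
                h = histories[d]
                # stamp with tau where this direction moves, otherwise decay
                h[r][c] = tau if mag > thresh else max(h[r][c] - 1, 0)
    return histories

histories = {d: [[0.0] * 2 for _ in range(2)]
             for d in ("up", "down", "left", "right")}
flow = [[(1.0, 0.0), (0.0, -2.0)], [(0.0, 0.0), (-1.5, 0.0)]]
update_dmhi(histories, flow)
```

After one update, rightward motion at pixel (0, 0) appears only in the "right" history, so past motion in the other three directions is never overwritten by it.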
Title: "Comparative analysis between two view-based methods: MHI and DMHI" (2007 10th International Conference on Computer and Information Technology)
Pub Date: 2007-12-01, DOI: 10.1109/ICCITECHN.2007.4579432
M. Khaer, M. Hashem, M.R. Masud
In this paper, we propose a new, simple, quantitative approach to assessing the design quality level of object-oriented software systems. The exploratory analysis method proposed here uses the GoF (Gang of Four) design patterns as its assessment criteria. We formulate an empirical study and develop a method to measure software quality. We tested the proposed method on several open-source projects and validated it by comparison with current approaches. Our pattern-based approach can be an excellent alternative to existing techniques such as OO metrics, software fault-proneness, visualization and anti-pattern-based approaches, and can also be helpful to practitioners in software quality assurance.
Title: "An empirical analysis of software systems for measurement of design quality level based on design patterns"
Pub Date: 2007-12-01, DOI: 10.1109/ICCITECHN.2007.4579415
S.K. Saha, M. Rahaman
In this paper a new data compression technique is proposed based on Lempel-Ziv-Welch (LZW) coding and block sorting. First, block sorting is performed on the input data to produce a permuted version of it; LZW coding is then applied to the permuted data to produce a more compact output. Although block sorting takes some extra time, the amount is negligible, and it improves the performance of LZW compression considerably. The proposed model is compared against plain LZW compression.
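The pipeline above can be sketched in a few lines: a naive Burrows-Wheeler-style block sort (sort all rotations, take the last column) followed by textbook LZW dictionary coding. This is a minimal sketch of the general idea, not the paper's exact implementation; the sentinel character and dictionary sizing are assumptions.

```python
def bwt(s, end="\0"):
    # naive block sort (Burrows-Wheeler transform): sort all rotations of
    # the sentinel-terminated string and take the last column
    s += end
    rots = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rots)

def lzw_encode(s):
    # textbook LZW: grow a dictionary of seen phrases, emit phrase codes
    table = {chr(i): i for i in range(256)}
    w, out = "", []
    for ch in s:
        if w + ch in table:
            w += ch
        else:
            out.append(table[w])
            table[w + ch] = len(table)
            w = ch
    if w:
        out.append(table[w])
    return out

text = "banana_banana_banana"
plain = lzw_encode(text)          # LZW on the raw input
permuted = lzw_encode(bwt(text))  # LZW on the block-sorted input
```

Block sorting groups identical characters together (e.g. `bwt("banana")` yields `"annb\0aa"`), which tends to lengthen the phrases LZW can reuse.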
Title: "Boosting the performance of LZW compression through block sorting for universal lossless data compression"
Pub Date: 2007-12-01, DOI: 10.1109/ICCITECHN.2007.4579420
N. Fauzia, T. Dey, I. Bhuiyan, M. Rahman
Voting is usually recognized as one of the main characteristics of democracy, and electronic elections are a very recent idea in voting. A voter, once the vote is cast, has to rely on the election system's honesty and security. Freedom and fairness of an election are desired by almost everyone associated with it, so designing an election system needs special care. Furthermore, an electronic election should be especially secure, transparent and trustworthy, as people have limited faith in computers due to system crashes and hacking threats. In this paper, we describe our implementation of an efficient and secure electronic voting system based on the Fujioka-Okamoto-Ohta protocol, the most practical and suitable protocol for large-scale elections. Our implementation automates an online voting system and provides features absent from previous implementations; using recent technologies and resources, we have made the system more user-friendly and secure, yet faster than the others.
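The core primitive of the Fujioka-Okamoto-Ohta protocol is a blind signature: the administrator signs a blinded ballot without being able to read it, and the voter then unblinds the signature. A toy RSA blind-signature round can sketch the math; the tiny key below is deliberately insecure and all concrete numbers are illustrative assumptions.

```python
# Toy RSA blind signature (the FOO administrator step), with textbook
# demo parameters: n = 61 * 53, e * d = 1 (mod phi(n)). Insecure sizes,
# for illustration only.

n, e, d = 3233, 17, 2753   # public modulus, public exponent, private exponent
ballot = 1234              # the (hashed/encrypted) ballot, as a number < n
r = 7                      # voter's random blinding factor, gcd(r, n) = 1

blinded = (ballot * pow(r, e, n)) % n    # voter blinds the ballot
blind_sig = pow(blinded, d, n)           # administrator signs without seeing it
sig = (blind_sig * pow(r, -1, n)) % n    # voter removes the blinding factor

# the unblinded signature verifies against the original ballot
assert pow(sig, e, n) == ballot
```

The unblinding works because (ballot * r^e)^d = ballot^d * r (mod n), so multiplying by r^-1 leaves a valid signature ballot^d that the administrator never saw attached to the ballot.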
Title: "An efficient implementation of electronic election system"
Pub Date: 2007-12-01, DOI: 10.1109/ICCITECHN.2007.4579380
M.A. Rahman, A. H. Al Muktadir
QoS is the collective effect of service performance that determines the degree of satisfaction with a service; as such, it has a significant impact on the performance of the routing protocols of any network. In a MANET, where there is no fixed infrastructure and the topology changes continuously, QoS is an even more challenging issue. The ability of a MANET to provide adequate quality of service (QoS) is limited by the underlying routing protocol, and a MANET's QoS requirements can be quantified in terms of packet delivery fraction, average end-to-end delay of data packets, and normalized routing load. This paper therefore presents, for the first time, the effect of data send rate, node velocity and transmission range on the QoS parameters of two contrasting MANET routing protocols, OLSR and DYMO. The two protocols are simulated and compared in NS-2 under the Gauss-Markov mobility model, and the simulation results show significant QoS performance differences.
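The three QoS metrics named above have standard definitions that can be computed directly from simulation trace counters. A minimal sketch, with illustrative variable names and numbers (not taken from the paper's experiments):

```python
# Standard definitions: packet delivery fraction = delivered / sent,
# normalized routing load = routing packets per delivered data packet,
# average end-to-end delay = total delay / delivered packets.

def qos_metrics(sent, received, routing_pkts, total_delay):
    pdf = received / sent            # packet delivery fraction
    nrl = routing_pkts / received    # normalized routing load
    avg_delay = total_delay / received  # mean end-to-end delay (seconds)
    return pdf, nrl, avg_delay

# hypothetical counters from one simulation run
pdf, nrl, delay = qos_metrics(sent=1000, received=950,
                              routing_pkts=4750, total_delay=19.0)
```

With these example counters the run delivers 95% of packets at a cost of five routing packets per delivered data packet and a 20 ms mean delay.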
Title: "The impact of data send rate, node velocity and transmission range on QoS parameters of OLSR and DYMO MANET routing protocols"
Pub Date: 2007-12-01, DOI: 10.1109/ICCITECHN.2007.4579422
A. Islam, M. Hasan, R. Rahaman, S. Kabir, S. Ahmmed
Nowadays the artificial neural network (ANN) has become one of the most prominent concepts in the field of artificial intelligence; ANNs have been applied in thousands of real-life applications and are used extensively for classification problems. In almost all situations, however, performance depends on the architecture of the ANN, so designing a proper ANN is a vital issue, and determining an appropriate architecture is always a challenging task for ANN designers. This paper proposes a pruning algorithm, ANN design by sensitivity and hypothesis correlation testing (SHCT), for determining three-layered ANN architectures automatically; a three-layered ANN can solve both linear and nonlinear problems. The salient feature of SHCT is that it combines correlation coefficients, standard deviations, a statistical hypothesis-testing scheme, and the merging of units with proper replacements to design the ANNs. To assess its performance, SHCT has been tested on a number of benchmark datasets, including Australian credit cards, breast cancer, diabetes, heart disease, and thyroid.
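The correlation test underlying this style of pruning can be sketched briefly: hidden units whose activation vectors are strongly correlated across the training set are redundant, so one of each highly correlated pair can be merged or removed. The threshold and function names below are illustrative assumptions, not the paper's exact procedure.

```python
import math

def pearson(x, y):
    # Pearson correlation coefficient of two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def redundant_units(activations, thresh=0.95):
    # activations: one activation vector per hidden unit, over the dataset;
    # flag the later unit of any highly correlated pair for pruning
    drop = set()
    for i in range(len(activations)):
        for j in range(i + 1, len(activations)):
            if j not in drop and abs(pearson(activations[i],
                                             activations[j])) > thresh:
                drop.add(j)  # unit j duplicates unit i
    return drop

acts = [[0.1, 0.5, 0.9],   # unit 0
        [0.2, 1.0, 1.8],   # unit 1: scaled copy of unit 0, hence redundant
        [0.9, 0.1, 0.4]]   # unit 2
```

Here `redundant_units(acts)` flags unit 1, since its activations are perfectly correlated with unit 0's, while unit 2 survives.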
Title: "Designing ANN using sensitivity & hypothesis correlation testing"
Pub Date: 2007-12-01, DOI: 10.1109/ICCITECHN.2007.4579369
R. Pathan
The consequences of missing a deadline in a hard real-time system may be catastrophic. Moreover, in the presence of faults, a deadline can be missed if the time needed for recovery is not taken into account when tasks are submitted to, or accepted by, the system. Even with fault tolerance employed, tasks may still miss deadlines when faults occur: if an erroneous task with a large execution time runs to the end of its full execution even though the error was detected early, that unnecessary execution leaves no additional slack in the schedule to mitigate the error by running another copy of the same task before its deadline. In this paper, a recovery mechanism is proposed to augment the fault-tolerant real-time scheduling algorithm RM-FT, which achieves node-level fault tolerance (NLFT) using the temporal error masking (TEM) technique on top of the rate-monotonic (RM) scheduling algorithm. Several hardware and software error detection mechanisms (EDMs), e.g. watchdog processors or executable assertions, can detect an error before an erroneous task finishes its full execution and can stop that execution immediately. Exploiting such early detection by EDMs, the proposed recovery algorithm RM-FT-RECOVERY finds an upper bound, denoted EdmBound, on the execution time of the tasks, and a mechanism is developed to provide additional slack in a fault-tolerant real-time schedule so that additional task copies can be scheduled when an error occurs.
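The slack argument above reduces to simple arithmetic: aborting an erroneous task after at most `edm_bound` time units, instead of letting it run for its full worst-case execution time, reclaims the difference for a recovery copy. The sketch below pairs that with the classic Liu-Layland rate-monotonic utilization test; the test choice and all names are illustrative assumptions, not the paper's RM-FT analysis.

```python
def slack_gained(wcet, edm_bound):
    # time reclaimed by stopping the erroneous execution at the detection
    # bound instead of running to the full worst-case execution time
    return max(wcet - edm_bound, 0)

def rm_utilization_ok(tasks):
    # Liu-Layland sufficient schedulability test for rate-monotonic
    # scheduling: sum(C_i / T_i) <= n * (2**(1/n) - 1)
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    return u <= n * (2 ** (1 / n) - 1)

# hypothetical task set: (WCET, period) pairs, utilization 0.65
tasks = [(1, 4), (1, 5), (2, 10)]
```

For example, a task with WCET 5 whose errors are always detected within 2 time units frees 3 units of slack, which may be enough to fit a recovery copy without violating the utilization bound.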
Title: "Recovery of fault-tolerant real-time scheduling algorithm for tolerating multiple transient faults"
Pub Date: 2007-12-01, DOI: 10.1109/ICCITECHN.2007.4579412
M. Monzur Morshed, Ye Kyaw Thu, Y. Urano
The interface of mobile phones in Bangladesh is mostly in English, and only a few mobile operators have introduced short messaging services in Bangla. Text entry in Bangla on mobile phones is not simple due to the structure and large number of characters in the script. The objective of this paper is to propose an easier and faster mobile input method: a frequency-based two-layer multitap (FBTLM) Bangla input method for mobile phones. The evaluation was done by keystroke comparison and by user text-entry experiments. According to the results, the proposed FBTLM takes 42% fewer keystrokes than the existing one-layer multitap (OLM) and 17% fewer keystrokes than two-layer multitap (TLM). Moreover, FBTLM takes 42% less tapping time than OLM and 26% less tapping time than TLM.
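The keystroke comparison above can be illustrated with a tiny model: in a multitap layout, each character costs as many taps as its position on its key, so moving frequent characters to the first tap position reduces total keystrokes. The toy Latin layouts below are assumptions for illustration, not the actual Bangla key maps from the paper.

```python
def keystrokes(text, layout):
    # layout: key label -> string of characters in tap order;
    # a character at position i on its key costs i + 1 taps
    pos = {ch: i + 1 for key in layout.values() for i, ch in enumerate(key)}
    return sum(pos[ch] for ch in text)

one_layer = {"2": "abc", "3": "def"}    # fixed alphabetical order
freq_first = {"2": "bac", "3": "edf"}   # frequent letters moved to tap 1

# "bee" costs 2+2+2 taps alphabetically, but 1+1+1 frequency-ordered
assert keystrokes("bee", one_layer) == 6
assert keystrokes("bee", freq_first) == 3
```

Summing such per-character costs over a text corpus is how percentage keystroke savings like the paper's 42% and 17% figures can be computed.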
Title: "Frequency Based Two-Layer Multitap Bangla Input method for Mobile phones"
Pub Date: 2007-12-01, DOI: 10.1109/ICCITECHN.2007.4579449
M.Z. Alam, M. Moniruzzaman, M. Alom, M. Sobhan
In orthogonal frequency division multiplexing (OFDM) based wireless communication systems, an extra guard period is inserted to eliminate the effects of ISI (inter-symbol interference) and ICI (inter-carrier interference), but the guard period decreases the symbol rate. The authors describe here a model that eliminates the effects of ICI and ISI without inserting an extra guard period. The sub-carrier frequencies and a carrier interferometry (CI) code are distributed alternately between consecutive OFDM symbols to eliminate the ISI effect. The CI code is applied to each sub-carrier in such a way that the sub-carrier frequencies remain orthogonal to each other when a Doppler frequency shift occurs due to relative movement of the transmitter and receiver. A pilot symbol is used between the two symbols for linear operation of all the electronic devices, and properly designed matched filters at the transmitter and receiver can reduce the effect of ICI. Here, we also consider the effect of ICI from carrier frequency synchronization error, Doppler shift and phase error over a time-invariant channel.
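The symbol-rate cost the paper avoids is easy to quantify: with a guard period implemented as a cyclic prefix of G samples on an N-sample symbol, only N of every N + G transmitted samples carry payload. A minimal sketch, using 802.11a-style illustrative numbers rather than anything from the paper:

```python
def add_cyclic_prefix(symbol, g):
    # guard period as a cyclic prefix: copy the last g samples to the front
    return symbol[-g:] + symbol

def guard_efficiency(n_fft, n_guard):
    # fraction of transmitted samples that carry payload
    return n_fft / (n_fft + n_guard)

# e.g. a 64-sample OFDM symbol with a 16-sample guard loses 20% of airtime
eff = guard_efficiency(64, 16)
prefixed = add_cyclic_prefix([1, 2, 3, 4], 2)
```

Removing the guard, as the CI-code scheme proposes, recovers exactly this lost fraction of the symbol rate.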
Title: "To enhance bit rate in orthogonal frequency division multiplexing by carrier interferometry spreading code"
Pub Date: 2007-12-01, DOI: 10.1109/ICCITECHN.2007.4579399
M. Mollah, S. Eguchi
Recently, independent component analysis (ICA) has become one of the most popular and promising statistical techniques for blind audio source separation. This paper proposes minimum beta-divergence based ICA as an adaptively robust audio source separation algorithm. The algorithm explores local structures of the audio source signals, in which the observed signals follow a mixture of several ICA models. Its performance is equivalent to that of standard ICA algorithms when the observed signals are not corrupted by outliers and only one audio source structure exists in the entire data space, while it performs better otherwise. It is able to extract all local audio source structures sequentially in the presence of heavy outliers. Our experimental results also support these claims.
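The robustness knob in the method is the beta-divergence itself: as beta approaches 0 it recovers the Kullback-Leibler divergence used by standard ICA, while larger beta downweights outlying observations. A minimal sketch of the discrete beta-divergence between two densities, with illustrative distributions (the paper's estimator minimizes this over model parameters, which is beyond this sketch):

```python
def beta_divergence(p, q, beta):
    # D_beta(p || q) = sum_i [ p_i (p_i^b - q_i^b) / b
    #                          - (p_i^(b+1) - q_i^(b+1)) / (b + 1) ], b > 0
    assert beta > 0
    total = 0.0
    for pi, qi in zip(p, q):
        total += (pi * (pi ** beta - qi ** beta) / beta
                  - (pi ** (beta + 1) - qi ** (beta + 1)) / (beta + 1))
    return total

# two example discrete densities
p = [0.2, 0.3, 0.5]
q = [0.25, 0.25, 0.5]
```

Like any divergence, it is zero exactly when the densities coincide and positive otherwise, which is what makes minimizing it a sensible fitting criterion.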
Title: "Adaptively robust blind audio signals separation by the minimum β-divergence method"