While Modern Standard Arabic is the formal spoken and written language of the Arab world, dialects are the major mode of communication in everyday life. Identifying a speaker's dialect is therefore critical in the Arabic-speaking world for speech processing tasks, such as automatic speech recognition or identification. In this paper, we examine two approaches that reduce the Universal Background Model (UBM) in an automatic dialect identification system covering the following five Arabic Maghreb dialects: Moroccan, Tunisian, and the three dialects of the western (Oranian), central (Algiersian), and eastern (Constantinian) regions of Algeria. We applied our approaches to a Maghreb dialect detection corpus of 10-second utterances and compared the identification precision obtained on the dialect samples by a baseline GMM-UBM system and by our improved GMM-UBM system, which uses a Reduced UBM algorithm. Our experiments show that our approaches significantly improve identification performance over purely acoustic features, reaching an identification rate of 80.49%.
{"title":"GMM-Based Maghreb Dialect IdentificationSystem","authors":"Lachachi Nour-Eddine, Adla Abdelkader","doi":"10.3745/JIPS.02.0015","DOIUrl":"https://doi.org/10.3745/JIPS.02.0015","url":null,"abstract":"While Modern Standard Arabic is the formal spoken and written language of the Arab world; dialects are the major communication mode for everyday life. Therefore, identifying a speaker`s dialect is critical in the Arabic-speaking world for speech processing tasks, such as automatic speech recognition or identification. In this paper, we examine two approaches that reduce the Universal Background Model (UBM) in the automatic dialect identification system across the five following Arabic Maghreb dialects: Moroccan, Tunisian, and 3 dialects of the western (Oranian), central (Algiersian), and eastern (Constantinian) regions of Algeria. We applied our approaches to the Maghreb dialect detection domain that contains a collection of 10-second utterances and we compared the performance precision gained against the dialect samples from a baseline GMM-UBM system and the ones from our own improved GMM-UBM system that uses a Reduced UBM algorithm. Our experiments show that our approaches significantly improve identification performance over purely acoustic features with an identification rate of 80.49%.","PeriodicalId":46825,"journal":{"name":"Journal of Information Processing Systems","volume":"1 1","pages":"22-38"},"PeriodicalIF":1.6,"publicationDate":"2015-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70056739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-03-31 | DOI: 10.3745/JIPS.2014.10.1.145
K. Salim, B. Hafida, Rahal Sid Ahmed
Many large organizations now have multiple data sources (MDSs) distributed over the different branches of an interstate company. Local pattern analysis has become an effective strategy for MDS mining in national and international organizations. It consists of mining the different datasets separately to obtain frequent patterns, which are then forwarded to a centralized site for global pattern analysis. Various synthesizing models [2-8, 26] have been proposed to build global patterns from the forwarded patterns. The rules synthesized from such forwarded patterns should closely match the mono-mining results (i.e., the results that would be obtained if all of the databases were put together and mined as one). When a pattern is present at a site but fails to satisfy the minimum support threshold, it is not allowed to take part in the pattern synthesizing process. This process can therefore lose some interesting patterns that could help the decision maker make the right decision. In such situations, we propose applying a probabilistic model in the synthesizing process. An adequate choice of probabilistic model can improve the quality of the discovered patterns. In this paper, we perform a comprehensive study of various probabilistic models that can be applied in the synthesizing process, and we choose and improve one of them to ameliorate the synthesizing results. Finally, experiments on public databases are presented to demonstrate the efficiency of the proposed synthesizing method.
{"title":"Probabilistic Models for Local Patterns Analysis","authors":"K. Salim, B. Hafida, Rahal Sid Ahmed","doi":"10.3745/JIPS.2014.10.1.145","DOIUrl":"https://doi.org/10.3745/JIPS.2014.10.1.145","url":null,"abstract":"Recently, many large organizations have multiple data sources (MDS') distributed over different branches of an interstate company. Local patterns analysis has become an effective strategy for MDS mining in national and international organizations. It consists of mining different datasets in order to obtain frequent patterns, which are forwarded to a centralized place for global pattern analysis. Various synthesizing models (2,3,4,5,6,7,8,26) have been proposed to build global patterns from the forwarded patterns. It is desired that the synthesized rules from such forwarded patterns must closely match with the mono-mining results (i.e., the results that would be obtained if all of the databases are put together and mining has been done). When the pattern is present in the site, but fails to satisfy the minimum support threshold value, it is not allowed to take part in the pattern synthesizing process. Therefore, this process can lose some interesting patterns, which can help the decider to make the right decision. In such situations we propose the application of a probabilistic model in the synthesizing process. An adequate choice for a probabilistic model can improve the quality of patterns that have been discovered. In this paper, we perform a comprehensive study on various probabilistic models that can be applied in the synthesizing process and we choose and improve one of them that works to ameliorate the synthesizing results. Finally, some experiments are presented in public database in order to improve the efficiency of our proposed synthesizing method.","PeriodicalId":46825,"journal":{"name":"Journal of Information Processing Systems","volume":"10 1","pages":"145-161"},"PeriodicalIF":1.6,"publicationDate":"2014-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70056943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-09-30 | DOI: 10.3745/JIPS.2013.9.3.499
Panduranga H, Naveen Kumar, Sharath Kumar
This paper describes the hardware-software co-simulation of a multiple image encryption technique. The proposed technique is based on the Latin Square Image Cipher (LSIC). First, a carrier image based on a Latin square is generated using a 256-bit key. The XOR operation is applied between an input image and the Latin square image to generate an encrypted image. The XOR operation is then applied between this encrypted image and the second input image to encrypt the second image. This process continues until the nth input image is encrypted. We achieved hardware co-simulation of the proposed multiple image encryption technique using the Xilinx System Generator (XSG). The encryption technique is modeled using Simulink and the XSG block set and synthesized onto a Virtex-2 Pro FPGA device. We validated the proposed technique using the hardware-software co-simulation method.
{"title":"Hardware Software Co-Simulation of the Multiple Image Encryption Technique Using the Xilinx System Generator","authors":"Panduranga H, Naveen Kumar, Sharath Kumar","doi":"10.3745/JIPS.2013.9.3.499","DOIUrl":"https://doi.org/10.3745/JIPS.2013.9.3.499","url":null,"abstract":"Hardware-Software co-simulation of a multiple image encryption technique shall be described in this paper. Our proposed multiple image encryption technique is based on the Latin Square Image Cipher (LSIC). First, a carrier image that is based on the Latin Square is generated by using 256-bits of length key. The XOR operation is applied between an input image and the Latin Square Image to generate an encrypted image. Then, the XOR operation is applied between the encrypted image and the second input image to encrypt the second image. This process is continues until the nth input image is encrypted. We achieved hardware co-simulation of the proposed multiple image encryption technique by using the Xilinx System Generator (XSG). This encryption technique is modeled using Simulink and XSG Block set and synthesized onto Virtex 2 pro FPGA device. We validated our proposed technique by using the hardware software co-simulation method.","PeriodicalId":46825,"journal":{"name":"Journal of Information Processing Systems","volume":"151 1","pages":"499"},"PeriodicalIF":1.6,"publicationDate":"2013-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70056932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-09-30 | DOI: 10.3745/JIPS.2010.6.3.295
M. Azim
Network lifetime is a critical issue in wireless sensor networks (WSNs), in which a large number of sensor nodes communicate to perform a predetermined sensing task. In such networks, the network lifetime depends mainly on the lifetime of the sensor nodes constituting the network. It is therefore essential to balance the energy consumption among all sensor nodes to ensure network connectivity. In this paper, we propose an energy-efficient data routing protocol for wireless sensor networks. Contrary to the protocol proposed in [6], which always selects the path with the minimum hop count to the base station, our proposed routing protocol may choose a longer path that provides a better distribution of energy consumption among the sensor nodes. Simulation results indicate clearly that, compared to the routing protocol proposed in [6], our protocol distributes the energy consumption evenly among the network nodes, thus maximizing the network lifetime.
{"title":"MAP : A Balanced Energy Consumption Routing Protocol for Wireless Sensor Networks","authors":"M. Azim","doi":"10.3745/JIPS.2010.6.3.295","DOIUrl":"https://doi.org/10.3745/JIPS.2010.6.3.295","url":null,"abstract":"Abstract —Network lifetime is a critical issue in Wireless Sensor Networks (WSNs). In which, a large number of sensor nodes communicate together to perform a predetermined sensing task. In such networks, the network life time depends mainly on the lifetime of the sensor nodes constituting the network. Therefore, it is essential to balance the energy consumption among all sensor nodes to ensure the network connectivity. In this paper, we propose an energy-efficient data routing protocol for wireless sensor networks. Contrary to the protocol proposed in [6], that always selects the path with minimum hop count to the base station, our proposed routing protocol may choose a longer path that will provide better distribution of the energy consumption among the sensor nodes. Simulation results indicate clearly that compared to the routing protocol proposed in [6], our proposed protocol evenly distributes the energy consumption among the network nodes thus maximizing the network life time. Keywords","PeriodicalId":46825,"journal":{"name":"Journal of Information Processing Systems","volume":"6 1","pages":"295-306"},"PeriodicalIF":1.6,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70056856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2009-09-30 | DOI: 10.3745/JIPS.2009.5.3.135
Zhao Zhibin, Gao Fuxiang
Most tasks in wireless sensor networks (WSNs) are required to run in real time. Neither EDF nor FIFO scheduling can ensure real-time behavior in a WSN. A real-time scheduling strategy (RTS) is proposed in this paper. All tasks are divided into two layers and assigned different priorities, and RTS uses preemption to ensure hard real-time scheduling. The experimental results indicate that RTS performs well both in communication throughput and under overload.
{"title":"Study on Preemptive Real-Time Scheduling Strategy for Wireless Sensor Networks","authors":"Zhao Zhibin, Gao Fuxiang","doi":"10.3745/JIPS.2009.5.3.135","DOIUrl":"https://doi.org/10.3745/JIPS.2009.5.3.135","url":null,"abstract":"Most of the tasks in wireless sensor networks (WSN) are requested to run in a real-time way. Neither EDF nor FIFO can ensure real-time scheduling in WSN. A real-time scheduling strategy (RTS) is proposed in this paper. All tasks are divided into two layers and endued diverse priorities. RTS utilizes a preemptive way to ensure hard real-time scheduling. The experimental results indicate that RTS has a good performance both in communication throughput and over-load.","PeriodicalId":46825,"journal":{"name":"Journal of Information Processing Systems","volume":"5 1","pages":"135-144"},"PeriodicalIF":1.6,"publicationDate":"2009-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70056814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2008-03-31 | DOI: 10.3745/JIPS.2008.4.1.027
M. Dorn, W. Hou, D. Che, Zhewei Jiang
Minimum support and minimum confidence have been used as the criteria for generating association rules in all association rule mining algorithms. These criteria have natural appeal, such as simplicity, yet few researchers have questioned the quality of the generated rules. In this paper, we examine the rules from a more rigorous point of view by conducting statistical tests. Specifically, we use contingency tables and the chi-square test to analyze the data. Experimental results show that one third of the association rules derived using the support and confidence criteria are not significant; that is, the antecedent and consequent of these rules are not correlated. This indicates that minimum support and minimum confidence do not provide adequate discovery of meaningful associations. The chi-square test can be considered an enhancement or an alternative solution.
{"title":"An Empirical Study of Qualities of Association Rules from a Statistical View Point","authors":"M. Dorn, W. Hou, D. Che, Zhewei Jiang","doi":"10.3745/JIPS.2008.4.1.027","DOIUrl":"https://doi.org/10.3745/JIPS.2008.4.1.027","url":null,"abstract":"Minimum support and confidence have been used as criteria for generating association rules in all association rule mining algorithms. These criteria have their natural appeals, such as simplicity; few researchers have suspected the quality of generated rules. In this paper, we examine the rules from a more rigorous point of view by conducting statistical tests. Specifically, we use contingency tables and chi-square test to analyze the data. Experimental results show that one third of the association rules derived based on the support and confidence criteria are not significant, that is, the antecedent and consequent of the rules are not correlated. It indicates that minimum support and minimum confidence do not provide adequate discovery of meaningful associations. The chi-square test can be considered as an enhancement or an alternative solution.","PeriodicalId":46825,"journal":{"name":"Journal of Information Processing Systems","volume":"113 1","pages":"404-409"},"PeriodicalIF":1.6,"publicationDate":"2008-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76675307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}