Pub Date: 2015-04-12 | DOI: 10.14419/JACST.V4I1.4009
Ali Saleh, R. Javidan, Mohammad Taghi FatehiKhajeh
Nowadays, scientific applications generate huge amounts of data, in terabytes or petabytes. Data grids have been proposed as a solution to large-scale data management problems, including efficient file transfer and replication. Data is typically replicated in a data grid to improve job response time and data availability. Determining a reasonable number of replicas and the right locations for them has become a challenge in the data grid. In this paper, a four-phase dynamic data replication algorithm based on temporal and geographical locality is proposed. It includes: 1) evaluating and identifying popular data and triggering a replication operation when the data's popularity passes a dynamic threshold; 2) analyzing and modeling the relationship between system availability and the number of replicas, and calculating a suitable number of new replicas; 3) evaluating and identifying the popular data in each site, and placing replicas among the sites; 4) removing the files with the least cost in average access time when there is insufficient space for replication. The algorithm was tested using OptorSim, a grid simulator developed by the European DataGrid project. The simulation results show that the proposed algorithm performs better than other algorithms in terms of job execution time, effective network usage, and percentage of storage filled.
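The popularity trigger in phase 1 can be sketched in a few lines. This is an illustrative stand-in, not the paper's exact formula: the dynamic threshold here is simply the mean access count over all files (optionally scaled), and the file names are invented.

```python
from collections import Counter

def find_popular_files(access_log, factor=1.0):
    """Return files whose access count exceeds a dynamic threshold.

    The threshold is the mean access count scaled by `factor` -- an
    assumed stand-in for the paper's dynamic threshold, which adapts
    as the access pattern changes.
    """
    counts = Counter(access_log)
    threshold = factor * sum(counts.values()) / len(counts)
    return {f for f, c in counts.items() if c > threshold}

# Hypothetical access log: file "a" is accessed 4 times out of 7,
# so only "a" passes the mean-based threshold (7/3 ≈ 2.33).
log = ["a", "a", "a", "b", "c", "a", "b"]
print(find_popular_files(log))  # → {'a'}
```

A replication operation would then be triggered for each returned file; phases 2 and 3 decide how many copies to make and where to place them.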
Title: A four-phase data replication algorithm for data grid (Journal of Advanced Computer Science and Technology)
Pub Date: 2015-04-06 | DOI: 10.14419/JACST.V4I1.4398
Nada Hussein, A. Alashqur, Bilal I. Sowan
In this digital age, organizations have to deal with huge amounts of data, sometimes called big data. In recent years, the volume of data has increased substantially. Consequently, finding efficient and automated techniques for discovering useful patterns and relationships in the data has become very important. In data mining, patterns and relationships can be represented in the form of association rules. Current techniques for discovering association rules rely on measures such as support for finding frequent patterns and confidence for finding association rules. A shortcoming of confidence is that it does not capture the correlation between the left-hand side (LHS) and the right-hand side (RHS) of an association rule. The interestingness measure lift, on the other hand, captures such a correlation in the sense that it tells us whether the LHS influences the RHS positively or negatively. Therefore, using lift instead of confidence as a criterion for discovering association rules can be more effective. It also gives the user more choice in determining the kind of association rules to be discovered, which in turn helps narrow down the search space and consequently improves performance. In this paper, we describe a new approach for discovering association rules that is based on lift rather than confidence.
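The standard definitions behind the argument can be made concrete. Lift is support(LHS ∪ RHS) divided by support(LHS) · support(RHS): above 1 means the LHS influences the RHS positively, below 1 negatively, and exactly 1 means independence. The transactions below are invented for illustration.

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item of `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def lift(lhs, rhs, transactions):
    """lift(LHS -> RHS) = support(LHS | RHS) / (support(LHS) * support(RHS)).
    > 1: positive correlation, < 1: negative, == 1: independence."""
    return support(lhs | rhs, transactions) / (
        support(lhs, transactions) * support(rhs, transactions)
    )

# Toy market-basket data (hypothetical).
tx = [{"bread", "butter"}, {"bread", "butter", "milk"},
      {"bread"}, {"milk"}, {"butter", "milk"}]
# support(bread)=0.6, support(butter)=0.6, support(both)=0.4
# lift = 0.4 / 0.36 ≈ 1.11 > 1: bread positively influences butter.
print(lift({"bread"}, {"butter"}, tx))
```

Note that confidence(bread → butter) = 0.4/0.6 ≈ 0.67 says nothing about whether 0.67 is more or less than butter's baseline frequency; lift makes exactly that comparison.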
Title: Using the interestingness measure lift to generate association rules (Journal of Advanced Computer Science and Technology)
Pub Date: 2015-03-31 | DOI: 10.14419/JACST.V4I1.4353
Nahid Khorashadizade, H. Rezaei
Hepatitis is a disease caused by liver injury. Rapid diagnosis of this disease prevents its progression to cirrhosis of the liver. Data mining is a new branch of science that helps physicians make proper decisions. In data mining, feature reduction and machine learning algorithms are useful for reducing the complexity of the problem and for diagnosing the disease, respectively. In this study, a new algorithm is proposed for hepatitis diagnosis based on Principal Component Analysis (PCA) and the Error Minimized Extreme Learning Machine (EMELM). The algorithm includes two stages. In the feature reduction phase, records with missing values were deleted, the hepatitis dataset was normalized to the [0,1] range, and principal component analysis was then applied for feature reduction. In the classification phase, the reduced dataset is classified using EMELM. To evaluate the algorithm, the hepatitis disease dataset from the UCI Machine Learning Repository (University of California) was selected. The features of this dataset were reduced from 19 to 6 using PCA, and the accuracy on the reduced dataset was obtained using EMELM. The results revealed that the proposed hybrid intelligent diagnosis system reached higher classification accuracy in a shorter time compared with other methods.
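The preprocessing step of the feature reduction phase (drop records with missing values, then min-max normalize each feature to [0,1]) can be sketched as follows; the toy two-feature rows are invented, and PCA/EMELM themselves are not shown.

```python
def preprocess(rows):
    """Drop rows containing missing values (None), then min-max scale
    each feature column to the [0, 1] range, as in the paper's
    feature reduction phase before PCA is applied."""
    complete = [r for r in rows if None not in r]
    cols = list(zip(*complete))
    mins = [min(c) for c in cols]
    maxs = [max(c) for c in cols]
    return [
        [(v - lo) / (hi - lo) if hi > lo else 0.0
         for v, lo, hi in zip(r, mins, maxs)]
        for r in complete
    ]

# Hypothetical records; the second row has a missing value and is dropped.
data = [[1.0, 10.0], [None, 5.0], [3.0, 30.0], [2.0, 20.0]]
print(preprocess(data))  # → [[0.0, 0.0], [1.0, 1.0], [0.5, 0.5]]
```

On the real UCI hepatitis dataset, the same cleaning would be applied to all 19 features before PCA reduces them to 6.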
Title: New method for rapid diagnosis of Hepatitis disease based on reduction feature and machine learning (Journal of Advanced Computer Science and Technology)
Pub Date: 2015-03-18 | DOI: 10.14419/JACST.V4I1.4309
Erwien Christianto, W. H. Utomo, Wiwin Sulistyo
Data processing is an important part of a garment company. As a garment company develops and becomes increasingly complex, data processing and integration become a very important requirement. The need for data integration in determining the cost of materials is a very important part of the garment industry. Data distribution or dissemination from one section to another results in data duplication, which may cause the data to become inconsistent. In addition, an efficient process for determining the cost of materials is needed to meet selling-price determination targets. There is now a web-based technology capable of handling data integration services, called SOA (Service-Oriented Architecture). Business processes (workflows) that involve the supplier and return to the supplier with the output price determined by the system can be integrated as web services orchestrated with BPEL. By utilizing SOA technology, the data processing and integration problems that occur in the garment industry can be addressed with an integrated information system, so that these problems can be solved.
Title: Data integration of cost materials needs through a service oriented architecture (study case: pt x garment ungaran) (Journal of Advanced Computer Science and Technology)
Pub Date: 2015-03-09 | DOI: 10.14419/JACST.V4I1.4307
Babak Nouri-Moghaddam, H. Naji
Wireless sensor networking continues to evolve as one of the most challenging research areas. Considering the insecure nature of these networks and the fact that sensor nodes are distributed in hostile environments, a well-implemented security scheme is absolutely essential. Bearing in mind important security services like authentication and access control, we propose a novel security framework for these networks. The new framework is based on the Kerberos authentication and access control system. Kerberos has been adapted for WSNs by utilizing the Bloom filter data structure and elliptic curve cryptography. In the proposed scheme, the Bloom filter is used in a novel way: we use this data structure to eliminate public-key certificates. By combining the Bloom filter data structure and elliptic curve cryptography, we achieve a lightweight yet robust security framework that offers authentication, access control, and key-sharing services. The analysis results showed that our scheme provides more security services and is more robust in the presence of attacks compared to previous schemes. In addition, simulation results indicated that our system offers significant improvements over the other schemes in many aspects, such as power and time expenditure.
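The core idea of replacing certificates with a Bloom filter can be illustrated with a minimal filter: node identifiers with trusted public keys are inserted, and membership can then be tested in constant space with no certificate exchange. This is a generic sketch (SHA-256 with salted hashes, m=1024 bits, k=3), not the authors' exact parameters; Bloom filters allow rare false positives but never false negatives.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: insert the IDs of nodes whose public keys
    are trusted, then test membership without storing certificates.
    False positives are possible; false negatives are not."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)  # one byte per bit slot, for simplicity

    def _positions(self, item):
        # k independent positions via salted SHA-256 hashes.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for idx in self._positions(item):
            self.bits[idx] = 1

    def __contains__(self, item):
        return all(self.bits[idx] for idx in self._positions(item))

bf = BloomFilter()
bf.add("node-17")               # hypothetical trusted node ID
print("node-17" in bf)          # member
print("node-99" in bf)          # non-member (barring a false positive)
```

In the framework's setting, a sink or cluster head could distribute such a filter far more cheaply than a list of public-key certificates, which matters on energy-constrained sensor nodes.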
Title: A novel authentication and access control framework in wireless sensor networks (Journal of Advanced Computer Science and Technology)
Pub Date: 2015-03-04 | DOI: 10.14419/JACST.V4I1.4156
Behnam Rahmani Delijani, H. Tavakoli
Ad hoc networks are a type of mobile wireless network composed of mobile and stationary nodes; the mobile nodes move freely and independently, while the stationary nodes remain fixed. Setting up an ad hoc network is very simple, and because these networks need neither a standard fixed infrastructure nor a central legal license, their setup cost is very low. Therefore, in special, temporary, or short-term situations such as floods, earthquakes, and fires, as well as in military environments where all telecommunication platforms have been destroyed, these networks are used as a new solution for creating communication between network elements. Because ad hoc networks have limited energy and their nodes are continuously moving, the accuracy of these networks is important. Nodes can easily join the network at any time or leave it. This not only makes a network easy to create and fast, easy, and inexpensive to expand, but also makes it possible for an enemy to enter the network. Therefore, the security of these networks must also be considered. In this paper, using a new method that imposes limitations on some network nodes, we create a more reliable network with higher accuracy and security.
Title: Strategies for enhancing the accuracy and security in ad hoc networks (Journal of Advanced Computer Science and Technology)
Pub Date: 2015-02-28 | DOI: 10.14419/JACST.V4I1.4149
Abdolghader Pourali, Elham Hashempour
Due to the significant growth in cell phone usage among customers and the device's wide availability, the cell phone can be considered the best tool for e-payment. To implement a mobile payment service, the other parties and players should be considered, such as banks, operators, service providers, and the technology used, because of their role in effective interaction. In addition, to optimize the parameters of a mobile payment solution effectively, a proper business model should be used. In this study, first, different business models in the field of mobile payment, the role of each stakeholder in these models, and their positive and negative points are discussed. Then, using a multiple-criteria decision-making method, four well-known business models are evaluated, and the result of this evaluation highlights that the cooperation model is the most appropriate model for mobile payment.
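The abstract does not specify which MCDM method is used, so as an illustration only, a simple additive weighting (SAW) scoring of four hypothetical model alternatives is sketched below; the model names, criteria scores, and weights are all invented for the example.

```python
def saw_rank(scores, weights):
    """Simple additive weighting (SAW), a basic MCDM technique used
    here purely as an illustrative stand-in for the paper's method.
    `scores[model]` is a list of benefit scores, one per criterion,
    on a common scale; higher weighted totals rank first."""
    totals = {
        model: sum(w * s for w, s in zip(weights, crit_scores))
        for model, crit_scores in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical scores for four mobile-payment business models
# against three criteria (e.g. stakeholder reach, revenue sharing,
# ease of deployment) -- illustrative numbers only.
models = {
    "operator-centric": [7, 5, 6],
    "bank-centric":     [6, 7, 5],
    "independent":      [5, 6, 6],
    "cooperation":      [8, 7, 7],
}
weights = [0.5, 0.3, 0.2]  # criterion weights summing to 1
print(saw_rank(models, weights))  # cooperation ranks first (7.5)
```

Whatever the actual MCDM method, the shape of the computation is the same: score each model against each criterion, aggregate with criterion weights, and rank.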
Title: An evaluation of business models in e-mobile payment by using multiple criteria decision making (Journal of Advanced Computer Science and Technology)
Pub Date: 2015-02-27 | DOI: 10.14419/JACST.V4I1.4100
M. Shahryari, H. Naji
Due to the need for cooperation among nodes to relay packets, wireless sensor networks are very vulnerable to attacks in all layers of the network. One of these severe attacks is the wormhole attack. Detecting a wormhole attack is hard because it can be implemented easily by an attacker without any knowledge of the nodes in the network, or mounted through any compromised legitimate node in the network. To date, most protocols proposed to defend against wormhole attacks adopt synchronized clocks, directional antennas, or strong assumptions in order to detect them. In this paper, a method based on clustering is presented to detect this type of attack. The method is implemented in both static and mobile networks. The advantage of our protocol is that malicious nodes are detected during attack prevention or attack detection, and it requires no additional hardware or complex calculations. Simulations were performed with the NS-2 simulator, and the protocol was evaluated in terms of packet drop ratio, throughput, delay, and energy consumption, compared to a network with and without an attack. Simulation results show that our protocol is practical and effective in improving resilience against wormhole attacks.
Title: A cluster based approach for wormhole attack detection in wireless sensor networks (Journal of Advanced Computer Science and Technology)
Pub Date: 2015-02-25 | DOI: 10.14419/JACST.V4I1.4163
Mohammad Kenarkouhi, H. Tavakoli
In this paper, polar codes, recently presented by Arikan, are introduced. Polar codes contain a number of high-capacity channels, on which the information is placed, as well as a number of low-capacity channels, called frozen bits. In the first proposed design, we make use of the frozen, otherwise useless, bits of the polar code and apply the encryption key to all the bits of the design (information bits and frozen bits); in fact, in Arikan's proposed 8-bit design, we use 8 encryption keys. In the rest of the article, a method is presented through which the number of encryption keys can be reduced. This matters because an encryption system is effective and desirable when, in addition to high complexity and a lack of correlation between bits, it uses the minimum number of encryption keys.
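The key-on-every-bit idea can be illustrated with a toy GF(2) polar encoder. The recursive transform below implements Arikan's x = u · F^⊗n with kernel F = [[1, 0], [1, 1]] over GF(2); the 8-bit input and key patterns are invented, and this is a sketch of the general construction, not the authors' exact scheme.

```python
def polar_encode(u):
    """Arikan polar transform over GF(2): x = u * F^{(x)n} with
    kernel F = [[1, 0], [1, 1]]. The transform is its own inverse
    over GF(2), since F squared is the identity matrix mod 2."""
    if len(u) == 1:
        return u[:]
    half = len(u) // 2
    top = [a ^ b for a, b in zip(u[:half], u[half:])]  # butterfly stage
    return polar_encode(top) + polar_encode(u[half:])

def encrypt_and_encode(u, key):
    """First design (sketch): XOR an n-bit key onto all n input bits --
    information and frozen alike, one key bit per channel -- before
    applying the polar transform."""
    masked = [a ^ k for a, k in zip(u, key)]
    return polar_encode(masked)

# Hypothetical 8-bit input (information + frozen bits) and 8-bit key,
# matching the "8 encryption keys" of the 8-bit design.
u   = [1, 0, 0, 1, 0, 1, 1, 0]
key = [1, 1, 0, 0, 1, 0, 1, 0]
print(encrypt_and_encode(u, key))  # → [0, 1, 1, 1, 0, 1, 0, 0]
```

Because the transform is an involution over GF(2), a receiver holding the key applies the same transform and XORs the key off to recover u, which is what makes reducing the number of key bits (the article's second contribution) the interesting part.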
Title: A NEW METHOD FOR COMBINING THE CHANNEL CODING WITH POLAR CODING-BASED ENCRYPTION (Journal of Advanced Computer Science and Technology)
Pub Date: 2015-02-22 | DOI: 10.14419/JACST.V4I1.4194
A. Haeri, S. M. Hatefi, K. Rezaie
The goal of this paper is to forecast the direction (increase or decrease) of the EURJPY exchange rate within a day. For this purpose, five major indicators are used: the exponential moving average (EMA), the stochastic oscillator (KD), moving average convergence divergence (MACD), the relative strength index (RSI), and Williams %R (WMS %R). A hybrid approach using hidden Markov models and the CART classification algorithm is then developed. The proposed approach is used to forecast the direction (increase or decrease) of Euro-Yen exchange rates in a day. It is also used to forecast the difference between the initial and maximum exchange rates in a day, as well as the difference between the initial and minimum exchange rates in a day. The results of the proposed method are compared with CART and a neural network; the comparison shows that forecasting with the proposed method has higher accuracy.
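One of the five input indicators, the EMA, can be sketched directly; the recursive form below uses the common smoothing factor alpha = 2/(period + 1), and the price series is invented for illustration (the other four indicators are computed from the same price history in similar fashion).

```python
def ema(prices, period):
    """Exponential moving average with smoothing alpha = 2/(period+1).
    Each value blends the current price with the previous EMA, so
    recent prices carry more weight -- one of the five indicators
    fed into the hybrid HMM/CART model."""
    alpha = 2 / (period + 1)
    out = [prices[0]]  # seed the recursion with the first price
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

# Hypothetical EURJPY closing prices over six periods.
closes = [110.2, 110.5, 110.1, 110.8, 111.0, 110.6]
print(ema(closes, 3))
```

Indicator series like this one become the feature vector per day; the hidden Markov model and CART then classify each day's vector as "increase" or "decrease".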
Title: Forecasting about EURJPY exchange rate using hidden Markova model and CART classification algorithm (Journal of Advanced Computer Science and Technology)