Data Security System for IoT Applications
Pub Date: 2020-12-23 | DOI: 10.1109/ICOASE51841.2020.9436579
Shahed Mohammed, M. H. Al-Jammas
The Internet of Things (IoT) can be defined as things connected to the Internet anywhere and anytime. Data security is one of the vital concepts in IoT security, so the data to be uploaded should be made secure. There are many different cryptographic algorithms, such as the Advanced Encryption Standard (AES), the Nth-degree truncated polynomial ring (NTRU), RSA, DES, and others, for securing IoT data. In this paper, we suggest an algorithm that combines the features of symmetric and asymmetric cryptographic algorithms: the AES algorithm and an NTRU public key are used to create a special key at the receiver side, which is then sent to the sender side to secure the IoT data. The proposed algorithm provides strong security with low computation. The model has been simulated in MATLAB. For a text of 526 characters, the execution time is 0.092414 seconds for key generation, 0.020521 seconds for encryption, and 0.060921 seconds for decryption; for a 512 * 512 image, it is 0.101900 seconds for key generation, 1.699665 seconds for encryption, and 12.82071 seconds for decryption.
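To make the hybrid construction concrete, the following is a minimal Python sketch of the general AES-plus-public-key-wrapping pattern the abstract describes. It uses the `cryptography` package, with RSA-OAEP standing in for NTRU (which this library does not provide); the payload, key sizes, and key-exchange direction are illustrative assumptions, not the authors' exact protocol.

```python
# Minimal hybrid (envelope) encryption sketch using the `cryptography` package.
# RSA-OAEP is a stand-in for NTRU, which is not available in this library; the
# payload and key sizes are illustrative only.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Receiver side: generate the asymmetric key pair and publish the public key.
receiver_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_public = receiver_private.public_key()

# Sender side: create a fresh AES session key, wrap it under the public key,
# and encrypt the IoT payload with AES-GCM.
session_key = AESGCM.generate_key(bit_length=128)
wrapped_key = receiver_public.encrypt(session_key, oaep)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"sensor reading: 23.5 C", None)

# Receiver side: unwrap the session key and decrypt the payload.
recovered_key = receiver_private.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
assert plaintext == b"sensor reading: 23.5 C"
```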
{"title":"Data Security System for IoT Applications","authors":"Shahed Mohammed, M. H. Al-Jammas","doi":"10.1109/ICOASE51841.2020.9436579","DOIUrl":"https://doi.org/10.1109/ICOASE51841.2020.9436579","url":null,"abstract":"The Internet of Things (IoT); can be defined as a think connected to the internet anywhere and anytime. Data security is one of the vital concepts in IoT security. So, the data to be uploaded should be made secure. There are many differenced cryptography algorithms such as Advanced Encryption Standard (AES), Nth degree truncates polynomial ring (NTRU), RSA, DES, and others to make security for data in IoT. In this paper, we suggest an algorithm that combines the feature of symmetric and asymmetric cryptography algorithms. Where the AES algorithm and NTRU public key used to create the special key at the receive side, then send the key to the sender side to make data security for IoT. The proposed algorithms provide strong security and low computation. The model has been simulating by MATLAB. The execution time for text (526 characters) for key generation is 0.092414 seconds, 0.020521 seconds for encryption and 0.060921 seconds for decryption and the execution time for image with size (512 * 512) for key generation is 0.101900 seconds, 1.699665 seconds for encryption and 12.82071 seconds for decryption.","PeriodicalId":126112,"journal":{"name":"2020 International Conference on Advanced Science and Engineering (ICOASE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130409016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transcript Validation System using biometric characteristics
Pub Date: 2020-12-23 | DOI: 10.1109/ICOASE51841.2020.9436576
Zahraa T. Al Ali, Ahmad M. Al Kababji, Mohammad B. Shukur
Recently, verification of transcripts and confirmation documents has become an important issue for universities, institutes, and organizations. The problem arises from the spread of many programs and modern technologies that allow forgers to produce transcripts and confirmations that cannot be recognized as fake by the registrars who work for institutions or other entities. It has therefore become necessary for these entities to confirm the validity of transcripts, confirmations, or other documents with the original source. In this paper, we design a transcript validation system that different entities can use to reduce the time required for transcript verification, based on the information saved in a database. The system is supported by a Graphical User Interface (GUI) to make it as easy as possible to use. It verifies the signatures on the transcript and compares them with the database using offline signature verification based on a neural network, and it also verifies the photo on the transcript using a regression method to increase reliability. The system has proven its reliability through a high acceptance ratio for genuine signatures and a high rejection ratio for forged and random signatures.
{"title":"Transcript Validation System using biometric characteristics","authors":"Zahraa T. Al Ali, Ahmad M. Al Kababji, Mohammad B. Shukur","doi":"10.1109/ICOASE51841.2020.9436576","DOIUrl":"https://doi.org/10.1109/ICOASE51841.2020.9436576","url":null,"abstract":"Recently transcripts and confirmations verification become one of the important issues for universities, institutes and organizations. This problem arises due to the spread out of many programs and modern technologies that allows the forgers to forge transcripts and confirmations which cannot be recognized by registrar persons how work for institutions or other entities. So, it becomes necessary for the entities to insure the validation of transcripts, confirmations or other documents from the original source. In this paper, we design a transcript validation system that can be used by different entities to reduce the time required for transcripts verification based on the information saved in a database. The system has been supported by a Graphic User Interface (GUI) to make the system as easy as possible to be used by users. The system works to verify the existing signatures on the transcript and compare them with the database by using offline signature verification depends on neural network method, also the system verifies the existing photo on the transcript using regression method in order to increase the reliability. The system has proven its reliability through high acceptance ratio of the genuine signatures and high rejection ratio for forged and random signatures.","PeriodicalId":126112,"journal":{"name":"2020 International Conference on Advanced Science and Engineering (ICOASE)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114685066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Twitter Sentiment Analysis using an Ensemble Weighted Majority Vote Classifier
Pub Date: 2020-12-23 | DOI: 10.1109/ICOASE51841.2020.9436590
R. H. H. Aziz, Nazife Dimililer
Sentiment analysis extracts the emotions expressed in text and has been employed in many fields, including politics, elections, movies, retail businesses and, in recent years, microblogs, to understand, track, and manage human sentiments or reactions toward products, events, or ideas. Nevertheless, challenges such as different writing styles, the use of negation and sarcasm, spelling mistakes, and the invention of new words hinder the correct classification of sentiments. This paper presents an ensemble-of-classifiers framework for sentiment analysis. The proposed weighted majority voting ensemble combines six models, namely Naïve Bayes, Logistic Regression, Stochastic Gradient Descent, Random Forest, Decision Tree, and Support Vector Machine, into a single classifier. The weight of each individual classifier is set to its accuracy or F1-score, optimizing overall performance. For comparison, the models are also combined using simple majority voting rather than weighted majority voting, and the six individual classifiers are compared against each other to evaluate their performance. The proposed ensemble model is tested on existing sentiment datasets, including SemEval 2017 Tasks 4A, 4B, and 4C. The results show that Logistic Regression is the best of the individual classifiers, and that the proposed weighted majority voting ensemble of the six classifiers outperforms both the simple majority voting ensemble and all of the individual classifiers.
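As an illustration of the weighted majority-voting idea, the following scikit-learn sketch combines the same six classifier families and weights each member by its cross-validated accuracy. The synthetic data, features, and hyperparameters are placeholders, not the authors' Twitter/SemEval pipeline.

```python
# Weighted majority-voting ensemble sketch with scikit-learn; weights are the
# members' mean cross-validated accuracies on the training split.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the extracted tweet features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

members = [
    ("nb", GaussianNB()),
    ("lr", LogisticRegression(max_iter=1000)),
    ("sgd", SGDClassifier(random_state=0)),
    ("rf", RandomForestClassifier(random_state=0)),
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("svm", SVC(random_state=0)),
]

# Weight each classifier by its cross-validated accuracy (the paper also uses F1).
weights = [cross_val_score(clf, X_train, y_train, cv=5, scoring="accuracy").mean()
           for _, clf in members]

ensemble = VotingClassifier(estimators=members, voting="hard", weights=weights)
ensemble.fit(X_train, y_train)
print("weighted majority-vote accuracy:", ensemble.score(X_test, y_test))
```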
{"title":"Twitter Sentiment Analysis using an Ensemble Weighted Majority Vote Classifier","authors":"R. H. H. Aziz, Nazife Dimililer","doi":"10.1109/ICOASE51841.2020.9436590","DOIUrl":"https://doi.org/10.1109/ICOASE51841.2020.9436590","url":null,"abstract":"Sentiment analysis extracts the emotions expressed in text and has been employed in many fields including politics, elections, movies, retail businesses and in recent years microblogs to understand, track and control the human sentiments or reactions toward products events or ideas. Nevertheless challenges such as different styles of writing, use of negation and sarcasm, existence of spelling mistakes, invention of new words etc. provide obstacle in the correct classification of sentiments. This paper provides an ensemble of classifiers framework for sentiment analysis. The proposed weighted majority voting ensemble method combines six models including Naïve Bayes, Logistic Regression, Stochastic Gradient Descent, Random Forest, Decision Tree and Support Vector Machine to form a single classifier. Weights of the individual classifiers of the ensemble are chosen as accuracy or Fl-score by optimizing their performance. This approach combines models based on the simple majority voting as opposed to the one based on weighted majority voting. Additionally, a comparison is drawn among these six individual classifiers to evaluate their performance. The proposed ensemble model is tested on some existing sentiment datasets, including SemEval 2017 Task 4A, 4B and 4C. The results demonstrate that the Logistic Regression classifier is optimal as compared to other individual classifiers. Furthermore, the proposed ensemble weighted majority voting classifier with the six individual classifiers performs better compared to the simple majority voting and all independent classifiers.","PeriodicalId":126112,"journal":{"name":"2020 International Conference on Advanced Science and Engineering (ICOASE)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128137020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Email Common Weaknesses and Enumeration Through Software Customer Perspective
Pub Date: 2020-12-23 | DOI: 10.1109/ICOASE51841.2020.9436624
Falak Uossien Hasan
Ontology is one of the best tools for knowledge management, concept definition, and semantic search, and it is a proven way to organize and manage knowledge within a single domain. This paper is an attempt to establish a knowledge base for email weaknesses using an ontology, which is well suited to illustrating the relationships among the email-related Common Weakness Enumeration (CWE) entries defined in the MITRE Corporation weakness list dictionary. The use of the ontology is demonstrated with sample queries that analyze the weaknesses of email software products from the software customer's point of view. This work is based on the MITRE community CWE List, Version 3.1, Research Concepts view (CWE-1000), and is limited to the software customer perspective.
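A toy sketch of how such a CWE knowledge base might be represented and queried is shown below, using rdflib in Python. The namespace, the `childOf` property, and the two CWE entries are illustrative assumptions; the paper's actual ontology classes and relations may differ.

```python
# Toy ontology-style graph of CWE entries with a sample query, using rdflib.
# The namespace, property names, and selected entries are illustrative only.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

CWE = Namespace("http://example.org/cwe#")
g = Graph()
g.bind("cwe", CWE)

# Two illustrative weakness entries with a parent-child relation.
g.add((CWE.CWE_306, RDF.type, CWE.Weakness))
g.add((CWE.CWE_306, RDFS.label, Literal("Missing Authentication for Critical Function")))
g.add((CWE.CWE_306, CWE.childOf, CWE.CWE_287))
g.add((CWE.CWE_287, RDF.type, CWE.Weakness))
g.add((CWE.CWE_287, RDFS.label, Literal("Improper Authentication")))

# Sample query: list each weakness and its parent.
q = "SELECT ?child ?parent WHERE { ?child cwe:childOf ?parent . }"
for child, parent in g.query(q, initNs={"cwe": CWE}):
    print(child, "is a child of", parent)
```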
{"title":"Email Common Weaknesses and Enumeration Through Software Customer Perspective","authors":"Falak Uossien Hasan","doi":"10.1109/ICOASE51841.2020.9436624","DOIUrl":"https://doi.org/10.1109/ICOASE51841.2020.9436624","url":null,"abstract":"Ontology represents a best tool for knowledge management, concepts definitions and semantic search. It is considered a successful style work to organize and manage the knowledge in a single given domain. This paper is an attempt to establish knowledge base for email weaknesses using ontology, which represents a best method for illustrating relationships among Common Weakness Enumeration (CWE) email entries as defined by MITRE Corporation Weakness List Dictionary. The ontology usage is demonstrated with sufficient samples of queries by analyzing email software products weaknesses according to software customer point of view. This work is based on the MITRE community effort CWE List, Version 3.1 - Research Concept view CWE-1000, the effort in this work is limited by software customer perspective.","PeriodicalId":126112,"journal":{"name":"2020 International Conference on Advanced Science and Engineering (ICOASE)","volume":"380 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131786494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed Denial of Service Attack Mitigation using High Availability Proxy and Network Load Balancing
Pub Date: 2020-12-23 | DOI: 10.1109/ICOASE51841.2020.9436545
R. Zebari, Subhi R. M. Zeebaree, A. Sallow, Hanan M. Shukur, Omar M. Ahmad, Karwan Jacksi
Nowadays, cybersecurity threats are a major challenge for all organizations that offer their services over the Internet. The Distributed Denial of Service (DDoS) attack is one of the most effective and widely used attacks and seriously affects the quality of service of every e-organization. Hence, mitigating this type of attack is a persistent need. In this paper, we used Network Load Balancing (NLB) and High Availability Proxy (HAProxy) as mitigation techniques: NLB on the Windows platform and HAProxy on the Linux platform. Moreover, Internet Information Services (IIS) 10.0 on Windows Server 2016 and Apache 2 on Ubuntu 16.04 were deployed as web servers. We evaluated the efficiency of each load balancer in mitigating SYN (synchronize) flood DDoS attacks on each platform separately. The evaluation was carried out on a real network, with average response time and average CPU usage as metrics. The results show that NLB on the Windows platform mitigated the SYN DDoS attack better than HAProxy on the Linux platform, and the average response time of the Windows web servers was reduced with NLB. However, the impact of the SYN DDoS attack on the average CPU usage of the IIS 10.0 web servers was greater than on the Apache 2 web servers.
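For reference, a minimal HAProxy configuration of the kind the abstract describes for the Linux side is sketched below: one HTTP frontend distributing requests across two Apache back ends. The addresses and server names are placeholders, and no SYN-flood-specific tuning from the paper is reproduced here.

```
# Minimal HAProxy setup: one HTTP frontend balancing traffic across two Apache
# back-end servers. Addresses and names are placeholders, not the paper's values.
frontend http_in
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    server apache1 192.168.1.11:80 check
    server apache2 192.168.1.12:80 check
```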
{"title":"Distributed Denial of Service Attack Mitigation using High Availability Proxy and Network Load Balancing","authors":"R. Zebari, Subhi R. M. Zeebaree, A. Sallow, Hanan M. Shukur, Omar M. Ahmad, Karwan Jacksi","doi":"10.1109/ICOASE51841.2020.9436545","DOIUrl":"https://doi.org/10.1109/ICOASE51841.2020.9436545","url":null,"abstract":"Nowadays, cybersecurity threat is a big challenge to all organizations that present their services over the Internet. Distributed Denial of Service (DDoS) attack is the most effective and used attack and seriously affects the quality of service of each E-organization. Hence, mitigation this type of attack is considered a persistent need. In this paper, we used Network Load Balancing (NLB) and High Availability Proxy (HAProxy) as mitigation techniques. The NLB is used in the Windows platform and HAProxy in the Linux platform. Moreover, Internet Information Service (IIS) 10.0 is implemented on Windows server 2016 and Apache 2 on Linux Ubuntu 16.04 as web servers. We evaluated each load balancer efficiency in mitigating synchronize (SYN) DDoS attack on each platform separately. The evaluation process is accomplished in a real network and average response time and average CPU are utilized as metrics. The results illustrated that the NLB in the Windows platform achieved better performance in mitigation SYN DDOS compared to HAProxy in the Linux platform. Whereas, the average response time of the Window webservers is reduced with NLB. However, the impact of the SYN DDoS on the average CPU usage of the IIS 10.0 webservers was more than those of the Apache 2 webservers.","PeriodicalId":126112,"journal":{"name":"2020 International Conference on Advanced Science and Engineering (ICOASE)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123909223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Selective Crystallization of Highly Dispersed Silicoaluminophosphate SAPO-11
Pub Date: 2020-12-23 | DOI: 10.1109/ICOASE51841.2020.9436625
Z. Khayrullina, K. Ahmed, M. Agliullin
The authors have proposed a method for the selective crystallization of a highly dispersed SAPO-11 silicoaluminophosphate molecular sieve. X-ray diffraction analysis, low-temperature nitrogen adsorption, and scanning electron microscopy were used to determine the chemical and phase compositions, the characteristics of the porous structure, and the morphology of SAPO-11. It is shown that crystallization of a silicoaluminophosphate gel prepared using pseudoboehmite as the aluminum source and aged at 90 °C yields the above zeolite with high phase purity and a degree of crystallinity of 96%. According to the scanning electron microscopy data, the obtained SAPO-11 samples have a nearly cubic morphology and a crystal size of 0.2 to 0.5 µm.
{"title":"Selective Crystallization of Highly Dispersed Silicoaluminophosphate SAPO-11","authors":"Z. Khayrullina, K. Ahmed, M. Agliullin","doi":"10.1109/ICOASE51841.2020.9436625","DOIUrl":"https://doi.org/10.1109/ICOASE51841.2020.9436625","url":null,"abstract":"The authors have proposed a method for selective crystallization of a high dispersion SAPO-11 silicoaluminophosphate molecular sieve. X-ray diffraction analysis, low-temperature nitrogen adsorption and scanning electron microscopy were used to determine the chemical and phase compositions, characteristics of the porous structure, and the morphology of SAPO-11. It has been shown that crystallization of a silicoaluminophosphate gel prepared using pseudoboehmite as a source of aluminum and aged at 90 0 C allows obtaining the above zeolite of high phase purity and a degree of crystallinity equal to 96%. According to the data of scanning electron microscopy, the obtained SAPO-11 samples are characterized by a morphology close to cubic and a crystal size from 0.2 to 0.5 µm.","PeriodicalId":126112,"journal":{"name":"2020 International Conference on Advanced Science and Engineering (ICOASE)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125641017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Privacy Preserving Association Rules based on Compression and Cryptography (PPAR-CC)
Pub Date: 2020-12-23 | DOI: 10.1109/ICOASE51841.2020.9436603
W. A. Salman, S. Sadkhan
Privacy-Preserving Data Mining (PPDM) is a modern technique through which data are mined while maintaining the confidentiality and privacy of sensitive information against unauthorized persons. Privacy-Preserving Association Rule Mining (PPARM) is the most important technique in privacy-preserving data mining: it means mining association rules while preventing the disclosure of sensitive correlations among items or features to competitors or the public, especially for data from sensitive organizations such as financial institutions. In this paper, we propose an approach that hides association rules after the mining process: the obtained knowledge is compressed vertically and horizontally, and the compressed form is then encoded using cryptographic methods. The proposed approach resists many known attacks and is undetectable because it includes three stages of compression and encryption in which the basic representation and size of the data change dramatically. It significantly reduces storage space, maintains knowledge security, reduces transmission time, and facilitates the transmission of knowledge over any network.
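A toy Python sketch of the compress-then-encrypt flow is given below: mined rules are serialized, compressed, and encrypted with AES-GCM. Ordinary zlib compression stands in for the paper's vertical and horizontal compression stages, so this shows only the general pipeline, not the authors' exact encoding.

```python
# Toy compress-then-encrypt sketch: serialized rules are compressed with zlib and
# encrypted with AES-GCM. zlib stands in for the paper's vertical/horizontal
# compression stages; the rules themselves are made up for illustration.
import json
import os
import zlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

rules = [
    {"antecedent": ["bread"], "consequent": ["butter"], "confidence": 0.82},
    {"antecedent": ["milk", "sugar"], "consequent": ["tea"], "confidence": 0.67},
]

compressed = zlib.compress(json.dumps(rules).encode("utf-8"))
key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
protected = AESGCM(key).encrypt(nonce, compressed, None)  # what is stored or transmitted

# An authorized receiver holding the key reverses both stages.
recovered = json.loads(zlib.decompress(AESGCM(key).decrypt(nonce, protected, None)))
assert recovered == rules
```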
{"title":"Privacy Preserving Association Rules based on Compression and Cryptography (PPAR-CC)","authors":"W. A. Salman, S. Sadkhan","doi":"10.1109/ICOASE51841.2020.9436603","DOIUrl":"https://doi.org/10.1109/ICOASE51841.2020.9436603","url":null,"abstract":"Privacy-Preserving Data Mining (PPDM) is a modern technique through which data is mined while maintaining the confidentiality and privacy of sensitive information from unauthorized persons. The Privacy-Preserving Association Rules Mining (PPARM) is the most important technique for privacy-preserving data mining. PPARM means the mining of association rules with preserving the non-disclosure of sensitive correlations among items or features for competitors or the public, especially data of sensitive organizations such as financial organizations and others. In this paper, we propose an approach to hiding association rules after performing the mining process and obtaining knowledge through vertical and horizontal compressing then encoded the compressing form by using cryptography methods. The proposed approach is resistant to many known attacks and is undetectable because it includes three stages of compression and encryption in which the basic representation and size of the data change dramatically. The proposed approach significantly reduces storage space, maintains knowledge security, reduces transmission time, and facilitates the transmission of knowledge over any network.","PeriodicalId":126112,"journal":{"name":"2020 International Conference on Advanced Science and Engineering (ICOASE)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134265974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mini Jarvis-Patrick-Based Graph Clustering for Scientific Institutions
Pub Date: 2020-12-23 | DOI: 10.1109/ICOASE51841.2020.9436589
Hussein Z. Almngoshi, Eman S. Alshamery
Competition between scientific institutions increases every day, and every institution tries to improve its reputation by producing and publishing high-quality scientific research. Clustering and evaluating educational institutions is important for professors, policymakers, and students. This research develops a Jarvis-Patrick algorithm, one of the graph-based techniques, for clustering scientific institutions; the standard algorithm suffers from producing a large number of clusters. In addition to the Shared Nearest Neighbor (SNN) similarity used in the standard Mini Jarvis-Patrick (MJP) algorithm, merging clusters with low separation is proposed to improve performance. The SNN similarity measures the number of shared neighbors between every two points in the data, and the merging step combines the clusters that have low separation. The proposed algorithm uses a cluster validity measure (separation) to produce rational and reasonable clusters. The SciVal dataset of USA scientific institutions for 2016–2018 is used. The proposed MJP detected 8 clusters (Cluster0: 6%, Cluster1: 6%, Cluster2: 6%, Cluster3: 2%, Cluster4: 7%, Cluster5: 7.3%, Cluster6: 26.6%, Cluster7: 32%). In addition to the standard MJP, the proposed technique is compared with known methods: Cobweb, DBSCAN, and HierarchicalClusterer. The results show that the proposed MJP is superior to the other methods.
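The following Python sketch illustrates the shared-nearest-neighbor linking rule at the heart of Jarvis-Patrick clustering: two points are joined when each is in the other's k-nearest-neighbor list and they share at least kt of those neighbors. The data, parameter values, and the omission of the low-separation merging step are simplifications, not the paper's implementation.

```python
# Shared-nearest-neighbor (SNN) linking as used by Jarvis-Patrick clustering:
# points i and j are linked when each appears in the other's k-nearest-neighbor
# list and they share at least kt of those neighbors. Data and parameters are
# illustrative; the paper's low-separation merging step is not shown.
import numpy as np
from scipy.sparse.csgraph import connected_components
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(3.0, 0.3, (30, 2))])

k, kt = 7, 4
# Ask for k + 1 neighbors because each point is returned as its own nearest neighbor.
_, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
neighbors = [set(row[1:]) for row in idx]

n = len(X)
adjacency = np.zeros((n, n), dtype=bool)
for i in range(n):
    for j in range(i + 1, n):
        shared = len(neighbors[i] & neighbors[j])
        if i in neighbors[j] and j in neighbors[i] and shared >= kt:
            adjacency[i, j] = adjacency[j, i] = True

# Connected components of the SNN graph are the Jarvis-Patrick clusters.
n_clusters, labels = connected_components(adjacency, directed=False)
print("clusters found:", n_clusters)
```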
{"title":"Mini Jarvis Patrick -Based Graph Clustering for Scientific Institutions","authors":"Hussein Z. Almngoshi, Eman S. Alshamery","doi":"10.1109/ICOASE51841.2020.9436589","DOIUrl":"https://doi.org/10.1109/ICOASE51841.2020.9436589","url":null,"abstract":"The competition between scientific institutions is increased every day. Every institution tends to improve its reputation by producing and publishing high-quality scientific research. Clustering and evaluating the educational institutions are important for professors, policymakers, as well as students. This research aims to develop a Jarvis-Patrick algorithm for scientific institutions clustering, which is one of the graph-based techniques. It suffers from the problem of a large number of clusters. In addition to the Shared Nearest Neighbor (SNN) similarity included in the standard Mini Jarvis-Patrick (MJP) algorithm, the merging clusters of low separation are proposed to improve algorithm performance. The SNN similarity measures the number of shared neighbors between every two points in the data. Besides that, the merging is implemented by combining the clusters that have low separation. The proposed algorithm takes advantage of cluster validity measures (separation) to produce rational and reasonable clusters. The SciVal dataset for USA scientific institutions 2016–2018 dataset is used. The proposed MJP detected 8 clusters (Cluster0 %6, Cluster16%, Cluster2 6%, Cluster3 2%, Cluster4 7%, Cluster5 7.3%, Cluster6 26.6%, Cluster7 32%). In addition to the standard MJP, the proposed technique is compared with known methods; the cobweb, DBSCAN, and HierarchicalClusterer. The results have proved that the MJP is superior to other methods.","PeriodicalId":126112,"journal":{"name":"2020 International Conference on Advanced Science and Engineering (ICOASE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123092257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Decision Making Approaches in Cognitive Radio: Status, Challenges and Future Trends
Pub Date: 2020-12-23 | DOI: 10.1109/ICOASE51841.2020.9436597
Akeel Bdrany, S. Sadkhan
In the past few years there has been a great deal of research on cognitive radio technology, because it is a promising answer to the problem of spectrum scarcity and to the increasing demand for spectrum from individuals, companies, and the so-called Internet of Things. The cognitive radio cycle consists of several steps, and one of the most important is the analysis and decision-making step, for which many algorithms and approaches have been investigated. This article surveys the most important current trends in this field, along with the problems and challenges that remain open, and reviews future trends in algorithms and techniques for analysis and decision-making.
{"title":"Decision Making Approaches in Cognitive Radio-Status, Challenges and Future Trends","authors":"Akeel Bdrany, S. Sadkhan","doi":"10.1109/ICOASE51841.2020.9436597","DOIUrl":"https://doi.org/10.1109/ICOASE51841.2020.9436597","url":null,"abstract":"In the past few years, we have seen a lot of research in the field of cognitive radio technology, because this technology represents a promising behavior for the problem of scarcity of spectrum, and the increase in the demand for spectrum frequencies by individuals, companies, and the so-called Internet of things. The cognitive radio cycle consists of several steps, and one of the most important steps is the step of analysis and decision-making. There are many algorithms and approaches to investigate at this step. This article will survey the most important current trends in this field, as well as the problems and challenges that are still open in this field, and it will review future trends algorithms, and techniques for analysis and decision-making.","PeriodicalId":126112,"journal":{"name":"2020 International Conference on Advanced Science and Engineering (ICOASE)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123908130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brain Tumor Detection and Classification Using CNN Algorithm and Deep Learning Techniques
Pub Date: 2020-12-23 | DOI: 10.1109/ICOASE51841.2020.9436599
Sultan B. Fayyadh, A. Ibrahim
Detection of brain tumors through image processing is done using an integrated approach. This work presents a system to classify and detect the most common brain tumors from MRI images using a CNN algorithm and deep learning techniques. An MRI image dataset was used as input, and preprocessing and segmentation were performed to enhance the images. The designed neural network is simpler to train and can be run on other computers because it requires fewer resources. The dataset contains 3064 images of different tumors: meningioma (708 slices), glioma (1426 slices), and pituitary tumor (930 slices). A convolutional neural network (CNN) with a specific layer structure is used to classify the brain tumors. The implementation consists of blocks, each containing several types of layers: the input layer, followed by a convolution layer, a Rectified Linear Unit (ReLU) activation function, a normalization layer, and a pooling layer. The network also contains a fully connected classification layer and a softmax layer. The overall accuracy of the proposed approach was 98.029% in the testing stage and 98.29% in the training stage for the dataset used.
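A minimal Keras sketch of the layer stack listed above (convolution, ReLU, normalization, pooling, then fully connected and softmax layers over the three tumor classes) is shown below. The input size, filter counts, and depth are assumed for illustration and are not the authors' exact architecture.

```python
# Minimal CNN sketch following the layer types listed in the abstract; the input
# size, filter counts, and number of blocks are assumptions, not the paper's.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(128, 128, 1)),        # grayscale MRI slice (size assumed)
    layers.Conv2D(16, 3, padding="same"),    # convolution layer
    layers.Activation("relu"),               # ReLU activation
    layers.BatchNormalization(),             # normalization layer
    layers.MaxPooling2D(),                   # pooling layer
    layers.Conv2D(32, 3, padding="same"),
    layers.Activation("relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),     # fully connected classification layer
    layers.Dense(3, activation="softmax"),   # meningioma / glioma / pituitary
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```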
{"title":"Brain Tumor Detection and Classifiaction Using CNN Algorithm and Deep Learning Techniques","authors":"Sultan B. Fayyadh, A. Ibrahim","doi":"10.1109/ICOASE51841.2020.9436599","DOIUrl":"https://doi.org/10.1109/ICOASE51841.2020.9436599","url":null,"abstract":"Detection of brain tumors through image processing is done by using an integrated approach. This work was planned to present a system to classify and detect brain tumors using the CNN algorithm and deep learning techniques from MRI images to the most popular tumors in the world. This work was performed using an MRI image dataset as input, Preprocessing and segmentation were performed to enhance the images. Our neural network design is simpler to train and it's possible to run it on another computer because the designed algorithm requires fewer resources. The dataset was used contains 3064 images related to different tumors meningioma (708 slices), glioma (1426 slices), and pituitary tumor (930 slices), the convolution neural network (CNN) was used through which the brain tumor is classified according to a special structure of this algorithm consisting of several layers, The implementation of the neural network consist blocks each block include many types of layer, first, the input layer then followed by convolution layer, then the activation function that used was Rectified Linear Units (ReLU), normalization layer, and pooling layer. Also, it contains the classification layer fully connected and softmax layer the overall accuracy rate obtained from the proposed approach was (98,029%) in the testing stage and (98.29%) in the training stage for the data set were used.","PeriodicalId":126112,"journal":{"name":"2020 International Conference on Advanced Science and Engineering (ICOASE)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127739640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}