Pectoral Muscles Removal in Mammogram Image by Hybrid Bounding Box and Region Growing Algorithm
Pub Date: 2020-04-01 | DOI: 10.1109/CSASE48920.2020.9142055
Enas Mohammed Hussein Saeed, Hayder Adnan Saleh
Breast cancer is one of the most common causes of death among women globally. Accurate and early detection is necessary for decreasing mortality and increasing treatment success rates. Mammography is currently one of the best ways to detect breast cancer in its early stages, but mammogram images contain many artifacts, such as noise, labels, and the pectoral muscle, which must be removed or suppressed because they greatly affect the results of diagnosis in later stages. Removing the pectoral muscle is the most difficult problem because its intensity closely resembles that of fat, glandular tissue, and tumors in mammograms. In this paper, an effective Hybrid Bounding Box and Region Growing (HBBRG) algorithm is proposed to solve the problem of pectoral muscle removal, which greatly affects the results of tumor detection in later stages, by combining the Bounding Box (BB) and Region Growing (RG) methods. To perform this work, pre-processing of mammogram images was applied in two stages. In the first stage, a median filter and a binary image with a specific threshold were used to remove noise and labels, respectively. In the second stage, the pectoral muscle was removed by applying the BB and RG algorithms separately, and the two methods were then merged into the HBBRG algorithm with the aim of achieving better pectoral muscle removal. The proposed algorithms were tested on all images of the Mammographic Image Analysis Society (MIAS) database, and the results showed a significant advantage for the HBBRG algorithm compared to the other algorithms, as it completely removed the pectoral muscle in over 98% of images of all types.
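As a rough illustration of the region-growing step described in this abstract (not the authors' implementation), the following Python sketch grows a region from a seed pixel near the pectoral-muscle corner of a grayscale mammogram and masks it out; the seed location, intensity tolerance, and 4-connectivity are assumptions made for the example:

import numpy as np
from collections import deque

def region_grow(img, seed, tol=12):
    # Grow a region of pixels whose intensity stays within `tol` of the
    # seed intensity, using 4-connectivity (illustrative only).
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(img[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(img[ny, nx]) - seed_val) <= tol:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask

# Usage sketch: assume the pectoral muscle touches the top-left corner of a
# left-oriented mammogram, so the seed is placed near (0, 0).
img = np.random.randint(0, 255, (256, 256)).astype(np.uint8)  # placeholder image
muscle_mask = region_grow(img, seed=(5, 5), tol=12)
img_no_muscle = img.copy()
img_no_muscle[muscle_mask] = 0  # suppress the detected pectoral region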
{"title":"Pectoral Muscles Removal in Mammogram Image by Hybrid Bounding Box and Region Growing Algorithm","authors":"Enas Mohammed Hussein Saeed, Hayder Adnan Saleh","doi":"10.1109/CSASE48920.2020.9142055","DOIUrl":"https://doi.org/10.1109/CSASE48920.2020.9142055","url":null,"abstract":"Breast cancer is one of the most common causes of death among women globally. Accurate and early detection is necessary for decreasing mortality and increase treatment success rates. Mammogram image is currently one of the best ways to detect breast cancer in the early stages, but it contains many artifacts such as noise, labels, and pectoral muscles, that must be deleted or suppressed because it greatly affects the results of the diagnosis in the coming stages. Removing the pectorals muscle is the biggest problem because it possesses an intensity tissue that closely resembles the tissue of fat, glands, and tumors in the form of mammograms. In this paper, an effective algorithm has been suggested by Hybridization Bounding Box and Region growing algorithm (HBBRG) algorithm to solve the problem of pectoral muscle removal which greatly affects the results of tumor detection in the next stages by combines the Bounding Box (BB) and Region growing (RG). To perform this work, pre-processing for mammogram images was applied in two stages. In the first stage, a medium filter and binary image with a specific threshold were used to remove noise and label respectively. In the second phase, the pectoral muscles were removed by applying the (BB) and (RG) algorithm separately, and then we proposed merging the two methods to set up an HBBRG algorithm with the aim to get better results for remove pectoral muscles. The proposed algorithms were tested on all the Mammographic Image Analysis Society (MIAS) database images, and the results showed a significant advantage in the HBBRG algorithm compared to other algorithms as it achieved results in over 98% to completely remove the pectoral muscles of all types of images.","PeriodicalId":254581,"journal":{"name":"2020 International Conference on Computer Science and Software Engineering (CSASE)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133772977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Security Improvement of Cloud Data Using Hybrid Cryptography and Steganography
Pub Date: 2020-04-01 | DOI: 10.1109/CSASE48920.2020.9142072
Mustafa Abbas, Suadad S. Mahdi, S. A. Hussien
Cloud computing is one of the most significant advancements in information technology, but the security of stored data is a major problem in the cloud environment. For that reason, this paper proposes a system for improving the security of cloud data using encryption, information hiding, and hash functions. In the data encryption phase, we implement hybrid encryption using the AES symmetric encryption algorithm and the RSA asymmetric encryption algorithm. Next, the encrypted data are hidden in an image using the least significant bit (LSB) algorithm. In the data validation phase, we use the SHA hashing algorithm. In addition, we compress the data using the LZW algorithm before hiding it in the image, which allows as much data as possible to be hidden. By combining information hiding with hybrid encryption, strong data security can be achieved. PSNR and SSIM values were calculated, together with supporting graphs, to evaluate the image hiding performance before and after applying compression. The results showed that the PSNR values of the stego-image are better for compressed data than for uncompressed data.
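A minimal sketch of the kind of pipeline this abstract describes (compression, hybrid AES/RSA encryption, SHA hashing, LSB embedding), not the authors' system: it uses the pycryptodome package for AES/RSA, zlib stands in for the paper's LZW compression step, and the cover image, key sizes, and embedding order are assumptions for illustration:

import hashlib
import zlib
import numpy as np
from Crypto.PublicKey import RSA
from Crypto.Cipher import AES, PKCS1_OAEP
from Crypto.Random import get_random_bytes

plaintext = b"cloud record to protect"
digest = hashlib.sha256(plaintext).hexdigest()   # integrity value (SHA family)
compressed = zlib.compress(plaintext)            # zlib as a stand-in for LZW

# Hybrid encryption: a random AES session key wrapped with RSA
rsa_key = RSA.generate(2048)
session_key = get_random_bytes(16)
wrapped_key = PKCS1_OAEP.new(rsa_key.publickey()).encrypt(session_key)
aes = AES.new(session_key, AES.MODE_EAX)
ciphertext, tag = aes.encrypt_and_digest(compressed)

# LSB embedding: write ciphertext bits into the least significant bit of each pixel
cover = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # placeholder cover image
bits = np.unpackbits(np.frombuffer(ciphertext, dtype=np.uint8))
stego = cover.flatten()
stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits
stego = stego.reshape(cover.shape)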
{"title":"Security Improvement of Cloud Data Using Hybrid Cryptography and Steganography","authors":"Mustafa Abbas, Suadad S. Mahdi, S. A. Hussien","doi":"10.1109/CSASE48920.2020.9142072","DOIUrl":"https://doi.org/10.1109/CSASE48920.2020.9142072","url":null,"abstract":"One of the significant advancements in information technology is Cloud computing, but the security issue of data storage is a big problem in the cloud environment. That is why a system is proposed in this paper for improving the security of cloud data using encryption, information concealment, and hashing functions. In the data encryption phase, we implemented hybrid encryption using the algorithm of AES symmetric encryption and the algorithm of RSA asymmetric encryption. Next, the encrypted data will be hidden in an image using LSB algorithm. In the data validation phase, we use the SHA hashing algorithm. Also, in our suggestion, we compress the data using the LZW algorithm before hiding it in the image. Thus, it allows hiding as much data as possible. By using information concealment technology and mixed encryption, we can achieve strong data security. In this paper, PSNR and SSIM values were calculated in addition to the graph to evaluate the image masking performance before and after applying the compression process. The results showed that PSNR values of stego-image are better for compressed data compared to data before compression.","PeriodicalId":254581,"journal":{"name":"2020 International Conference on Computer Science and Software Engineering (CSASE)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128186054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysing Factors That Influence Alumni Graduate Studies Attainment with Decision Trees
Pub Date: 2020-04-01 | DOI: 10.1109/CSASE48920.2020.9142069
Daniela Alejandra Gomez Cravioto, Ramon Eduardo Diaz Ramos, M. Galaz, N. H. Gress, Héctor Gibrán Ceballos Cancino
In Mexico, higher education constantly suffers from low rates of enrollment in, and interest in, graduate degrees. Mexico needs more postgraduate students to increase research and development activity and to boost innovation in the private sector, especially in strategic industries. This paper proposes the use of data mining techniques to explore alumni factors and understand whether they are related to an alumnus returning to study for a postgraduate degree. Fifteen attributes obtained from an alumni survey were analyzed; the survey contains information from 12,780 former students who graduated with a bachelor's degree from Tec de Monterrey. The Cross-Industry Standard Process for Data Mining (CRISP-DM) methodology is used, and the machine learning algorithms Random Forest, J48, and REPTree are compared to identify the best approach for building a classification model that can predict whether an alumnus will pursue a postgraduate degree. The data mining tool used for this research was the Waikato Environment for Knowledge Analysis (WEKA). The resulting model shows that Random Forest outperforms the other decision tree algorithms in terms of accuracy and classification error, which leads to the conclusion that it is the more suitable classifier for the explored dataset.
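The paper runs its comparison of tree-based classifiers in WEKA; a rough scikit-learn analogue of that kind of comparison (with synthetic stand-in data, since the alumni survey is not public, and CART as an approximation of WEKA's J48/C4.5) might look like:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# 15 features to mirror the 15 survey attributes; binary target = pursues postgraduate degree or not
X, y = make_classification(n_samples=2000, n_features=15, n_informative=8, random_state=0)

models = {
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
    "DecisionTree (J48-like)": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)  # 10-fold cross-validated accuracy
    print(name, round(scores.mean(), 3))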
{"title":"Analysing Factors That Influence Alumni Graduate Studies Attainment with Decision Trees","authors":"Daniela Alejandra Gomez Cravioto, Ramon Eduardo Diaz Ramos, M. Galaz, N. H. Gress, Héctor Gibrán Ceballos Cancino","doi":"10.1109/CSASE48920.2020.9142069","DOIUrl":"https://doi.org/10.1109/CSASE48920.2020.9142069","url":null,"abstract":"In Mexico, higher education is constantly suffering from low percentage of placement and interest of individuals for a graduate degree. Mexico needs more postgraduate students to increase the research and development activities and boost innovation in the private sector, especially in strategic industries. This paper suggests the use of data mining techniques to explore alumni factors and understand if these have a relationship with the alumnus returning to study a postgraduate degree. Fifteen attributes obtained from an alumni survey study were analyzed; this survey contains information from 12,780 former students, which graduated from a bachelor’s degree in Tec de Monterrey. The Cross-Industry Standard Process for Data Mining (CRISP-DM) methodology is used, and the machine learning algorithms, Random Forest, J48 and REPTree are compared to identify the best approach to build a classification model which can predict whether an alumni will study or not a postgraduate degree. For the purpose of this research, the data mining tool used was the Waikato Environment for Knowledge Analysis (WEKA). The resulting model shows that random forest outperforms the other decision tree algorithms based on the accuracy and classifier error, which drives the conclusion that this is a more suitable classifier for the explored dataset.","PeriodicalId":254581,"journal":{"name":"2020 International Conference on Computer Science and Software Engineering (CSASE)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127710857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Survey on the Applications of Smart Home Systems
Pub Date: 2020-04-01 | DOI: 10.1109/CSASE48920.2020.9142103
Jawaher Abdulwahab Fadhil, Omar Ammar Omar, Q. Sarhan
Smart home systems have gained importance nowadays owing to the various applications they provide to users. Applications of smart home systems cover many aspects of our daily life and help to reduce the cost of living, for example by controlling and managing home appliances. Currently, there is a huge number of studies on smart home systems; these studies mostly cover smart home visions, enabling technologies, and so on. So far, only a limited number of surveys comprehensively cover the applications and services of smart home systems. This paper presents a survey of smart home applications along many directions. The applications are classified into several categories, each with a brief discussion of its purpose, advantages, and limitations.
{"title":"A Survey on the Applications of Smart Home Systems","authors":"Jawaher Abdulwahab Fadhil, Omar Ammar Omar, Q. Sarhan","doi":"10.1109/CSASE48920.2020.9142103","DOIUrl":"https://doi.org/10.1109/CSASE48920.2020.9142103","url":null,"abstract":"Smart home systems have gained importance nowadays owing to the various applications they provide to the users. Applications of smart home systems cover many aspects of our daily life and help to reduce the cost of living via controlling and managing home appliances as an example. Currently, there are a huge number of studies on smart home systems; these studies mostly cover smart home visions, enabling technologies, etc. So far, a limited number of surveys point out, comprehensively, the applications and services of smart home systems. This paper presents a survey on smart home applications alongside many directions. The applications are classified into many categories each with a brief discussion on its purpose, advantages, and limitations.","PeriodicalId":254581,"journal":{"name":"2020 International Conference on Computer Science and Software Engineering (CSASE)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116699065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fuzzy logic controller based Shunt Active Power Filter for Current Harmonic Compensation
Pub Date: 2020-04-01 | DOI: 10.1109/CSASE48920.2020.9142059
E. M. Thajeel, M. M. Mahdi, E. Abbas
This paper presents a performance investigation of a three-phase shunt active power filter (SAPF) using a proportional-integral (PI) controller and a fuzzy logic controller (FLC). The control design applied to active power filters (APFs) plays a key role in improving APF performance. Compensating the harmonic components of the supply current improves power quality and enhances the reliability and stability of the power utility. In classical control schemes such as the PI controller, knowledge of the controlled system is essential for formulating the set of algebraic and differential equations that analytically relate inputs and outputs. To overcome these problems, fuzzy-logic-based control techniques can be used; the FLC is designed to improve compensation capability by adjusting the current error using fuzzy rules. The performance of these controllers acting on the SAPF was evaluated in MATLAB simulation. The proposed system consists of a three-phase source feeding a non-linear rectifier with an impedance formed by a combination of resistance and capacitance. The results obtained with the proposed controller are satisfactory and are verified through the simulations.
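To make the "adjusting the current error using fuzzy rules" idea concrete, here is a minimal single-input fuzzy sketch in Python (triangular memberships, three rules, singleton consequents). The paper's actual rule base, inputs, and gains are not given in the abstract, so the membership ranges and output values below are assumptions for illustration only:

def tri(x, a, b, c):
    # Triangular membership function supported on [a, c], peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_adjust(error):
    # Map a normalized current error in [-1, 1] to a correction signal.
    neg = tri(error, -2.0, -1.0, 0.0)   # error is negative
    zer = tri(error, -1.0, 0.0, 1.0)    # error is near zero
    pos = tri(error, 0.0, 1.0, 2.0)     # error is positive
    weights = [neg, zer, pos]
    outputs = [-1.0, 0.0, 1.0]          # push compensation down / hold / up
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0

print(fuzzy_adjust(0.4))  # small positive correction for a small positive error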
{"title":"Fuzzy logic controller based Shunt Active Power Filter for Current Harmonic Compensation","authors":"E. M. Thajeel, M. M. Mahdi, E. Abbas","doi":"10.1109/CSASE48920.2020.9142059","DOIUrl":"https://doi.org/10.1109/CSASE48920.2020.9142059","url":null,"abstract":"This paper presents the performance investigation of three phase shunt active power filter (SAPF) using proportional integral (PI) and Fuzzy Logic controller (FLC). The control designing applied to active power filters (APFs) play a key role on the improvement the performance of APF. Compensate the harmonic component for the supply current, can improve the power quality and enhance the reliability and stability on power utility. In classical control systems like PI controller, information of the controlled system is essential in the formulation of a set of algebraic and differential equations, which analytically relate inputs and outputs. To overcome all these problems FL based control techniques can be used; it is designed to improve compensation capability by adjusting the current error using a fuzzy rule. The performance of these controllers acting on SAPF has been done on MATLAB simulation. The proposed system is composed by three phase source that fed a non-linear rectifier and impedance consisting of the combination of resistance and capacitance. The obtained results using the proposed controller give satisfactory results and verified through the simulations.","PeriodicalId":254581,"journal":{"name":"2020 International Conference on Computer Science and Software Engineering (CSASE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116164123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance Evaluation of Graphical User Interfaces in Java and C#
Pub Date: 2020-04-01 | DOI: 10.1109/CSASE48920.2020.9142075
Hassan B. Hassan, Q. Sarhan
With the growing usage and migration of interactive applications and systems into today's environments, the significance of graphical user interfaces (GUIs) increases, as they act as the gateways to using systems efficiently. Therefore, extensive effort is spent on enhancing the usability of GUIs. However, most work focuses on testing the functional properties of applications' GUIs rather than non-functional attributes such as performance. For this reason, it is worthwhile to assess the performance of various GUI components at runtime. In this paper, the most popular programming languages used to create GUI-based applications, namely Java and C#, were compared experimentally to evaluate their performance in terms of the creation and manipulation of GUI components/controls. The experimental results of this study, which is based on 32 testing scenarios, showed that Java outperformed C# in all test scenarios. This might be because of the “HotSpot” performance engine that Java uses. This study is useful for developers to gain insight into the performance of GUI components provided by different programming languages. It also helps them choose the right programming language for their GUI-based applications and hence enhance overall application performance and user satisfaction.
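The paper's benchmark harness is written in Java and C#; as an analogous sketch of the measurement methodology only (timing the creation and manipulation of GUI controls), here is a small Python/tkinter version. The widget type, count, and manipulated property are assumptions, and it requires a display to run:

import time
import tkinter as tk

root = tk.Tk()
root.withdraw()  # the window need not be shown for a creation benchmark

N = 1000
start = time.perf_counter()
widgets = [tk.Button(root, text=f"btn {i}") for i in range(N)]   # creation scenario
create_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
for i, w in enumerate(widgets):
    w.config(text=f"changed {i}")                                # manipulation scenario
manip_ms = (time.perf_counter() - start) * 1000

print(f"create: {create_ms:.1f} ms, manipulate: {manip_ms:.1f} ms")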
{"title":"Performance Evaluation of Graphical User Interfaces in Java and C#","authors":"Hassan B. Hassan, Q. Sarhan","doi":"10.1109/CSASE48920.2020.9142075","DOIUrl":"https://doi.org/10.1109/CSASE48920.2020.9142075","url":null,"abstract":"With the growing usage and migration of interactive applications and systems into today’s environment, the significance of the graphical user interfaces (GUIs) increases as they act as the gates into using systems efficiently. Therefore, extensive efforts are spent to enhance the usability of the GUIs. However, most of the works focus on testing the functional properties of applications’ GUIs rather than nonfunctional features such as performance. For this reason, it is worthwhile to assess the performance of various GUI components at runtime. In this paper, the most popular programming languages used to create GUI based applications namely, Java and C# were compared experimentally to evaluate their performance in terms of the creation and manipulation of GUI components/controls. The experimental results of this study, which is based on 32 testing scenarios, showed that Java outperformed C# in all test scenarios. This might be because of the “HotSpot” performance engine that Java uses. This study is useful for developers to get insights into the performance of different GUI components provided by different programming languages. Also, it helps them to choose the right programming language for their GUI based applications hence enhance the overall applications’ performance and user satisfaction.","PeriodicalId":254581,"journal":{"name":"2020 International Conference on Computer Science and Software Engineering (CSASE)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127206780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving Louvain Algorithm by Leveraging Cliques for Community Detection
Pub Date: 2020-04-01 | DOI: 10.1109/CSASE48920.2020.9142102
Elaf Adel Abbas, H. N. Nawaf
Community detection is one of the most important fields that help us understand and analyze the structure of social networks. It is a tool for identifying closely related groups in terms of social relations or common interests. In fact, community detection can be applied to social media, web clients, or e-commerce. For this purpose, the traditional Louvain algorithm is a suitable choice, since it provides fast, efficient, and robust community detection on large static networks. However, the high computational complexity of this algorithm is the motivation for this work. Initially, the existing cliques, together with the remaining nodes not included in any clique, are treated as separate communities, instead of treating each node in the network as its own community as in the traditional method; the gain from merging neighboring communities is then calculated. A specific research methodology is followed to ensure that the work rigorously achieves its aim. The traditional and improved algorithms were applied to synthetic and real-world data, and the results were recorded and analyzed. Experimentally, the results show that execution time is reduced compared with the traditional algorithm while the quality of the partitions is largely preserved.
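One way to read the clique-seeding idea above is to hand Louvain an initial partition built from maximal cliques instead of singletons. A sketch using networkx and the python-louvain package (its best_partition function accepts an initial partition); the greedy, non-overlapping clique assignment below is my reading of the abstract, not the authors' exact procedure:

import networkx as nx
import community as community_louvain  # python-louvain package

G = nx.karate_club_graph()

# Seed partition: each maximal clique becomes one community (greedy, non-overlapping);
# nodes left over get their own singleton community.
seed, cid = {}, 0
for clique in sorted(nx.find_cliques(G), key=len, reverse=True):
    unassigned = [n for n in clique if n not in seed]
    if len(unassigned) > 1:
        for n in unassigned:
            seed[n] = cid
        cid += 1
for n in G.nodes():
    if n not in seed:
        seed[n] = cid
        cid += 1

partition = community_louvain.best_partition(G, partition=seed)
print("modularity:", community_louvain.modularity(partition, G))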
{"title":"Improving Louvain Algorithm by Leveraging Cliques for Community Detection","authors":"Elaf Adel Abbas, H. N. Nawaf","doi":"10.1109/CSASE48920.2020.9142102","DOIUrl":"https://doi.org/10.1109/CSASE48920.2020.9142102","url":null,"abstract":"Community detection is one of the most important fields that help us in understand and analyze the structure of social networks. It is a tool to identify closely related groups in terms of social relations or common interests. In fact, community detection can be applied in social media, web clients, or e-commerce. For this purpose, the traditional Louvain algorithm is used for community detection as a suitable algorithm, since it provides fast, efficient and robust community detection on large static networks. However, the high computing complexity of this algorithm is a motivation of this work. Initially, the existing cliques and the other nodes which have not included in cliques are considered as separated communities instead of considering each node in the network is a community as in the traditional method, then the gain of integrating neighboring communities is calculated. A specific research methodology is followed to ensure that the work is rigorous in achieving the aim of the work. In synthetic and real-world data, the traditional and improved algorithms had to be applied to record the results, then analyze them. Experimentally, the results prove the execution time has reduced if it is compared with the traditional algorithm while preserving the quality of partitions at the same time somewhat.","PeriodicalId":254581,"journal":{"name":"2020 International Conference on Computer Science and Software Engineering (CSASE)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126890096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using Standard Error to Find the Best Robust Regression in Presence of Multicollinearity and Outliers
Pub Date: 2020-04-01 | DOI: 10.1109/CSASE48920.2020.9142066
K. Pati
Multicollinearity and outliers are among the most common problems in multiple linear regression models. In the present paper, a robust ridge regression is proposed on the basis of weighted ridge least trimmed squares (WRLTS). The suggested WRLTS method is compared, in terms of standard error, with the following estimation methods: Ordinary Least Squares (OLS), Ridge Regression (RR), Robust Ridge Regression (RRR) variants such as Ridge Least Median Squares (RLMS) and Ridge Least Trimmed Squares (RLTS), regression based on the LTS estimator, and Weighted Ridge (WRID). To illustrate the suggested method, two examples are given using R programming to test the data. Both examples show that WRLTS is the best estimator in comparison with the other methods considered in the present paper.
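The paper's examples are in R; to illustrate only the general ridge-plus-least-trimmed-squares idea (not the authors' WRLTS weighting scheme), here is a small numpy sketch that alternates between a ridge fit and trimming the observations with the largest residuals. The ridge parameter, trimming fraction, and synthetic data are assumptions:

import numpy as np

def ridge_lts(X, y, k=1.0, trim=0.8, n_iter=5):
    # Fit ridge, keep the `trim` fraction of observations with the smallest
    # squared residuals, refit on that subset, and repeat (illustrative only).
    n = len(y)
    keep = np.arange(n)
    for _ in range(n_iter):
        Xs, ys = X[keep], y[keep]
        beta = np.linalg.solve(Xs.T @ Xs + k * np.eye(X.shape[1]), Xs.T @ ys)
        resid = (y - X @ beta) ** 2
        keep = np.argsort(resid)[: int(trim * n)]
    return beta

# Usage with near-collinear predictors and a few outliers (synthetic data)
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=200)])  # multicollinearity
y = 2 * x1 + rng.normal(size=200)
y[:5] += 20                                                  # outliers
print(ridge_lts(X, y, k=1.0))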
{"title":"Using Standard Error to Find the Best Robust Regression in Presence of Multicollinearity and Outliers","authors":"K. Pati","doi":"10.1109/CSASE48920.2020.9142066","DOIUrl":"https://doi.org/10.1109/CSASE48920.2020.9142066","url":null,"abstract":"Multicollinearity and outliers are seen as one of the most common problems in the models of multiple linear regression. In the present paper, a robust ridge regression is proposed on the basis of weighted ridge least trimmed squares (WRLTS). The suggested method WRLTS is compared with the following methods of estimation: The Ordinary Least Squares (OLS), Ridge Regression (RR), Robust Ridge Regression (RRR), such as Ridge Least Median Squares (RLMS), Ridge Least Trimmed Squares (RLTS), regression which is based on LTS estimator and Weighted Ridge (WRID) as far as Standard Error is concerned. For the sake of illustration of the suggested method, two examples are given through the use of R programming to test the data. Both examples have shown that WRLTS is the best estimator in comparison to the other methods in the present paper.","PeriodicalId":254581,"journal":{"name":"2020 International Conference on Computer Science and Software Engineering (CSASE)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123688596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Speaker Verification Using Cosine Distance Scoring with i-vector Approach
Pub Date: 2020-04-01 | DOI: 10.1109/CSASE48920.2020.9142088
Musab T. S. Al-Kaltakchi, R. Al-Nima, Mahmood Alfathe, Mohammed A. M. Abdullah
In this paper, a robust yet simple speaker verification system is implemented. The speaker verification system is investigated using the i-vector approach with Cosine Distance Scoring (CDS) for classification. In addition, the Equal Error Rate (EER), the Detection Error Trade-off (DET) curve, the Receiver Operating Characteristic (ROC) curve, and the Detection Cost Function (DCF) are used to measure system performance. Experiments are conducted on the TIMIT database using 64 randomly selected speakers. The proposed system utilizes Mel Frequency Cepstral Coefficients (MFCC) and Power Normalized Cepstral Coefficients (PNCC) for feature extraction. In addition, feature normalization methods such as Feature Warping (FW) and Cepstral Mean-Variance Normalization (CMVN) are used in order to mitigate channel noise effects. The speakers are modeled with i-vectors, while CDS is used for classification. Experimental results demonstrate that the proposed system achieves promising results while being computationally efficient.
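Cosine distance scoring and EER are simple to state in code. The sketch below (numpy only) scores pairs of i-vectors by cosine similarity and estimates the EER by sweeping a threshold; the i-vector dimensionality, the synthetic vectors, and the noise level are assumptions, since the paper's extractor is not reproduced here:

import numpy as np

def cds_score(w_enroll, w_test):
    # Cosine distance score between two i-vectors.
    return float(w_enroll @ w_test / (np.linalg.norm(w_enroll) * np.linalg.norm(w_test)))

def equal_error_rate(target_scores, impostor_scores):
    # Sweep thresholds; return (FAR + FRR) / 2 where they are closest.
    thresholds = np.sort(np.concatenate([target_scores, impostor_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        far = np.mean(impostor_scores >= t)   # false acceptance rate
        frr = np.mean(target_scores < t)      # false rejection rate
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Usage with synthetic stand-in i-vectors (assumed 400-dimensional)
rng = np.random.default_rng(0)
dim = 400
speakers = rng.normal(size=(10, dim))                               # one enrollment vector per speaker
target = np.array([cds_score(s, s + 0.5 * rng.normal(size=dim)) for s in speakers])
impostor = np.array([cds_score(speakers[i], speakers[(i + 1) % 10]) for i in range(10)])
print("EER:", equal_error_rate(target, impostor))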
{"title":"Speaker Verification Using Cosine Distance Scoring with i-vector Approach","authors":"Musab T. S. Al-Kaltakchi, R. Al-Nima, Mahmood Alfathe, Mohammed A. M. Abdullah","doi":"10.1109/CSASE48920.2020.9142088","DOIUrl":"https://doi.org/10.1109/CSASE48920.2020.9142088","url":null,"abstract":"In this paper, a robust yet simple speaker verification system is implemented. The speaker verification system is investigated employing the i-vector approach with the Cosine Distance Scoring (CDS) for system classification. In addition, to measure the system performance, Equal Error Rate (EER), Detection Error Trade-off (DET) Curve, Receiver Operating Characteristic (ROC) curve as well as Detection Cost Function (DCF) were utilized. Experimental results are conducted on the TMIT database using 64 randomly selected speakers. The proposed system utilizes the Mel Frequency Cepstral Coefficients (MFCC) and Power Normalized Cepstral Coefficients (PNCC) for feature extraction. In addition, features normalization methods such as Feature Warping (FW) and Cepstral Mean-Variance Normalization (CMVN) are used in order to mitigate channel effect noise. The speakers are modeled with the i-vector while CDS is used for classification. Experimental results demonstrate that the proposed system achieved promising results while being computationally efficient.","PeriodicalId":254581,"journal":{"name":"2020 International Conference on Computer Science and Software Engineering (CSASE)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121931463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tracking Ball in Soccer Game Video using Extended Kalman Filter
Pub Date: 2020-04-01 | DOI: 10.1109/CSASE48920.2020.9142058
H. Najeeb, R. F. Ghani
Detecting the ball is the first step of tracking in broadcast soccer video. In some cases, it is difficult to detect the ball by shape and color, especially when it overlaps with other objects (lines or players). Therefore, we suggest a new technique for real-time ball tracking. First, the rate of missed balls is reduced by determining candidate ball positions rather than attempting to identify a single ball position; the distance between the ball and the candidate balls is then computed and false candidate positions are removed using a threshold. Finally, the ball position is estimated via an Extended Kalman Filter. The proposed work achieves higher accuracy and speed than other methods that use a Kalman filter with template matching or only a Kalman filter for tracking the ball.
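As a sketch of the filtering step (not the authors' implementation), the following numpy code runs a constant-velocity Kalman predict/update cycle on 2D ball positions; with this linear motion and measurement model the EKF reduces to the standard Kalman filter, and the frame rate, noise covariances, and measurements below are assumptions:

import numpy as np

dt = 1.0  # one frame
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # constant-velocity state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # only (x, y) is measured
Q = 0.01 * np.eye(4)   # process noise (assumed)
R = 4.0 * np.eye(2)    # measurement noise (assumed)

x = np.array([100.0, 50.0, 0.0, 0.0])        # initial state: position + velocity
P = np.eye(4)

def kf_step(x, P, z):
    # Predict the next state, then correct it with the selected candidate position z.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

for z in [np.array([102.0, 53.0]), np.array([105.0, 56.0])]:  # candidate measurements
    x, P = kf_step(x, P, z)
print("estimated position:", x[:2])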
{"title":"Tracking Ball in Soccer Game Video using Extended Kalman Filter","authors":"H. Najeeb, R. F. Ghani","doi":"10.1109/CSASE48920.2020.9142058","DOIUrl":"https://doi.org/10.1109/CSASE48920.2020.9142058","url":null,"abstract":"The detection of the ball is the first step for tracking in soccer broadcasted video. In some cases, it is difficult to detect the ball by shape and color. Especially, when it overlaps with other objects (line or players). Therefore, we have been suggested a new technique of real-time ball tracking. First, reducing the rate of a missing ball through determining the candidate position of balls rather than attempting to identify the position of ball, computing the distance between the ball and candidate balls to delete the false candidate position of the balls by the threshold. At last, estimating the ball position via Extended Kalman filter. The proposed work has achieved higher accuracy and speed than other methods which are used Kalman filter and template matching or used only a Kalman filter for tracking the ball.","PeriodicalId":254581,"journal":{"name":"2020 International Conference on Computer Science and Software Engineering (CSASE)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122238056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}