Proactive Data Allocation in Distributed Datasets based on an Ensemble Model
Pub Date: 2021-05-24 | DOI: 10.1109/ICICS52457.2021.9464621
T. Koukaras, Kostas Kolomvatsos
The evolution of the Internet of Things (IoT) drives the incorporation of numerous devices into a huge infrastructure over which various services can be provided. Devices are located close to users and can interact with them and their environment to collect data. The collected data are transferred to the Cloud through the ‘intervention’ of the Edge Computing (EC) infrastructure. Multiple nodes present at the EC can undertake the responsibility of keeping some processing activities close to end users, thus minimizing the latency of responses. In this paper, we elaborate on a model that supports an efficient data management mechanism to proactively decide where data should be stored. We aim at building a number of datasets that exhibit high accuracy, as exposed by their solidity. The proposed approach evaluates the similarity between the collected data and the already formulated datasets before deciding the final allocation. Every decision is made upon synopses of the discussed datasets, avoiding the processing of huge volumes of data. Additionally, we elaborate on an ensemble scheme for matching incoming observations against the available synopses. The performance of the proposed scheme is demonstrated by the relevant numerical outcomes.
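A minimal sketch of the synopsis-matching idea, under assumptions: the abstract does not specify its synopsis form or similarity measures, so the mean/std summaries and the two similarity functions below are illustrative stand-ins, and the ensemble is a plain average of their scores.

```python
# Hypothetical sketch of synopsis-based allocation; names, the synopsis
# form, and the similarity measures are assumptions, not the paper's model.
import numpy as np

class DatasetSynopsis:
    """Compact summary (per-dimension mean and std) of one edge dataset."""
    def __init__(self, data: np.ndarray):
        self.mean = data.mean(axis=0)
        self.std = data.std(axis=0) + 1e-9

def euclidean_sim(batch: np.ndarray, syn: DatasetSynopsis) -> float:
    return 1.0 / (1.0 + np.linalg.norm(batch.mean(axis=0) - syn.mean))

def zscore_sim(batch: np.ndarray, syn: DatasetSynopsis) -> float:
    z = np.abs((batch.mean(axis=0) - syn.mean) / syn.std)
    return 1.0 / (1.0 + z.mean())

def allocate(batch: np.ndarray, synopses: list) -> int:
    """Ensemble: average both similarity votes, send the batch to the argmax."""
    scores = [(euclidean_sim(batch, s) + zscore_sim(batch, s)) / 2
              for s in synopses]
    return int(np.argmax(scores))
```

The point of working on synopses is visible here: `allocate` never touches the stored datasets themselves, only their fixed-size summaries.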
{"title":"Proactive Data Allocation in Distributed Datasets based on an Ensemble Model","authors":"T. Koukaras, Kostas Kolomvatsos","doi":"10.1109/ICICS52457.2021.9464621","DOIUrl":"https://doi.org/10.1109/ICICS52457.2021.9464621","url":null,"abstract":"The evolution of the Internet of Things (IoT) drives the incorporation of numerous devices into a huge infrastructure where various services can be provided. Devices are located close to users being capable to interact with them and their environment to collect data. The collected data are transferred to the Cloud through the ‘intervention’ of the Edge Computing (EC) infrastructure. Multiple nodes are present at the EC that can undertake the responsibility of keeping some processing activities close to end users, thus, minimizing the latency in the provision of responses. In this paper, we elaborate on a model that supports an efficient data management mechanism to proactively decide the location where data should be stored. We aim at concluding a number of datasets exhibiting a high accuracy as exposed by their solidity. The proposed approach deals with the similarity of the collected data and the already formulated datasets before we decide the final allocation. Any decision is made upon the synopses of the discussed datasets avoiding the processing of huge volumes of data. Additionally, we elaborate on an ensemble scheme for matching the incoming observations and the available synopses. The performance of the proposed scheme is depicted by the relevant numerical outcomes.","PeriodicalId":421803,"journal":{"name":"2021 12th International Conference on Information and Communication Systems (ICICS)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128714230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The influence of operating laser wavelengths on Doppler effect in LEO Optical satellite constellation
Pub Date: 2021-05-24 | DOI: 10.1109/ICICS52457.2021.9464549
Kaouther Boumalek, M. Benslama, S. Aris
Nowadays, free-space research focuses on increasing a link’s data transfer rate. For this purpose, optical wavelengths are preferred over radio-frequency (RF) wavelengths as signal carriers to satisfy high data transfer rates over huge distances, since optical frequencies are greater than RF ones and a greater frequency implies better carrier capacity. Low Earth orbit (LEO) satellite constellations with laser links are well suited to deploying a global broadband network. Since the two communicating LEO satellites are in continuous motion, the relative distance between them changes periodically. A Doppler effect is detected in the signal received by the onboard terminal of the receiver satellite, shifting the characteristics of the received signal. In this paper, we study the influence of the operating wavelength on the Doppler effect in LEO inter-satellite laser links for full-time accessible inter-satellite links. The choice of the operating wavelength is an important task because the Doppler wavelength shift is directly proportional to the operating wavelength. Results show that the lower the laser operating wavelength, the smaller the Doppler wavelength shift.
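The proportionality the abstract relies on follows from the standard first-order Doppler relation (a textbook result, not taken from the paper itself):

```latex
% Non-relativistic Doppler wavelength shift, valid for v_rel << c:
\Delta\lambda \;=\; \lambda_{\mathrm{rx}} - \lambda_0
\;\approx\; \frac{v_{\mathrm{rel}}}{c}\,\lambda_0
```

For a fixed relative velocity $v_{\mathrm{rel}}$ between the two satellites, the shift $\Delta\lambda$ scales linearly with the operating wavelength $\lambda_0$, which is why a shorter laser wavelength yields a smaller absolute wavelength shift.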
{"title":"The influence of operating laser wavelengths on Doppler effect in LEO Optical satellite constellation","authors":"Kaouther Boumalek, M. Benslama, S. Aris","doi":"10.1109/ICICS52457.2021.9464549","DOIUrl":"https://doi.org/10.1109/ICICS52457.2021.9464549","url":null,"abstract":"Nowadays, free space researches are focused on increasing the link’s data transfer rate capacity. For this purpose, optical wavelengths are preferred over radio-frequency (RF) wavelengths as signal carriers to satisfy the high data transfer rate over huge distances since optical frequencies are greater than RF ones and a greater frequency implies better carrier capacity. Low earth orbit (LEO) satellite constellations with laser links are well suited to display a global broadband network. Since both LEO communicating satellites are in a continuous movement, the relative distance between them changes periodically. Doppler effect is detected on the received signal onboard terminal of the receiver satellite which shifts the characteristics of the received signal. In this paper, we study the influence of the operating wavelength on the Doppler effect in LEO inter-satellite laser links for full-time accessing inter-satellite links. The choice of the operating wavelength is an important task because Doppler wavelength shift is directly proportional to the operating wavelength. Results show that the lower laser operating wavelength the minimum Doppler wavelength shift appearance.","PeriodicalId":421803,"journal":{"name":"2021 12th International Conference on Information and Communication Systems (ICICS)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114169779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Encoding Test Cases using Execution Traces
Pub Date: 2021-05-24 | DOI: 10.1109/ICICS52457.2021.9464569
Ziad Al-Sharif, Wafa F. Abdalrahman, C. Jeffery
Test case minimization can be critical to meeting the release date of a software product. Identifying redundant test cases can help improve the quality of the test suite and speed up the testing process. Thus, there is a need to uniquely characterize test cases. This identification can help the test engineer remove redundancy from the test suite and prioritize test cases that are most affected by the most recent modifications to the source code. This paper proposes a test case encoding approach that allows engineers to leverage execution traces to classify and identify their test cases. It empowers test engineers to minimize the time and cost of testing by reducing the number of test cases, especially in regression testing. Furthermore, it enhances the documentation of the testing process by providing a one-to-one mapping between test cases and their corresponding execution traces, each of which is a sequence of execution events triggered during the execution of the test case. The one-to-one mapping allows the approach to uniquely represent the control flow and data flow within the source code. This trace can be used as a signature for the test case. Whenever a modification occurs in the source code, the newly captured signatures are compared against the previous ones; any mismatch indicates that the test case has been affected by the modification. Repeating this process helps classify the test suite into four groups of test cases. This provides the ability to put the testing effort where it is needed. Additionally, keeping a hash value for each captured sequence simplifies the comparison and unifies the mapping between test cases and captured traces. It also allows detection of minor modifications in the traced events and reduces lengthy traces to a set of fixed-size hash values.
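A minimal sketch of the trace-to-signature idea, assuming Python and `sys.settrace` as the event-capture mechanism; the paper's own tooling and its four-group classification are not reproduced here, only the hash-and-compare core.

```python
# Hypothetical sketch: hash a test's (file:line) event sequence into a
# fixed-size signature, then flag the test as affected when it changes.
import hashlib
import sys

def capture_signature(test_fn) -> str:
    """Run a test while recording line events, then hash the sequence."""
    events = []
    def tracer(frame, event, arg):
        if event == "line":
            events.append(f"{frame.f_code.co_filename}:{frame.f_lineno}")
        return tracer          # keep tracing nested calls
    sys.settrace(tracer)
    try:
        test_fn()
    finally:
        sys.settrace(None)
    # One fixed-size value replaces an arbitrarily long execution trace.
    return hashlib.sha256("\n".join(events).encode()).hexdigest()

def classify(old_sig: str, new_sig: str) -> str:
    # A mismatch means the latest source change altered this test's path.
    return "affected" if old_sig != new_sig else "unaffected"
```

The hash both shrinks lengthy traces to a constant size and makes the cross-version comparison a single string equality check.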
{"title":"Encoding Test Cases using Execution Traces","authors":"Ziad Al-Sharif, Wafa F. Abdalrahman, C. Jeffery","doi":"10.1109/ICICS52457.2021.9464569","DOIUrl":"https://doi.org/10.1109/ICICS52457.2021.9464569","url":null,"abstract":"Test case minimization can be critical to meeting the release date of a software product. Identifying redundant test cases can help improve the quality of the test suite and speed up the testing process. Thus, there is a need to uniquely characterize test cases. This identification can support the test engineer to remove redundancy in the test suite and prioritize test cases that are highly affected by the most recent modification in source code. This paper proposes a test case encoding approach that allows engineers to facilitate execution traces to classify and identify their test cases. It will empower test engineers and allow them to minimize the time and cost of testing by reducing the number of test cases, especially in regression testing. Furthermore, it enhances the documentation of the testing process by providing a one-to-one mapping between test cases and their corresponding execution traces, each of which is a sequence of execution events triggered during the execution of the test case. The one-to-one mapping allows the approach to uniquely represent the control-flow and data-flow within the source code. This trace can be used as a signature for the test case. Whenever a modification occurred in the source code, the newly captured signatures are compared against the previous ones; any mismatch indicates that the test case has been affected by the modification. Repeating this process will help classify the test suite into four groups of test cases. This provides the ability to put the testing efforts where it is needed. Additionally, keeping a hashed value for each of the captured sequences simplifies the comparison and unifies the mapping between test cases and captured traces. It also allows detection of minor modifications in the traced events, and reduces the lengthy traces to a set of fixed size hashed values.","PeriodicalId":421803,"journal":{"name":"2021 12th International Conference on Information and Communication Systems (ICICS)","volume":"11 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122685709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybrid Thresholding Lifting Dual Tree Complex Wavelet Transform with Wiener filter for quality assurance of medical image
Pub Date: 2021-05-24 | DOI: 10.1109/ICICS52457.2021.9464625
Hilal Naimi, A. Adamou-Mitiche, L. Mitiche
Image denoising has been the main problem in the area of medical imaging. The most challenging aspect of image denoising is to preserve data-carrying structures such as surfaces and edges in order to achieve good visual quality. Different algorithms with different denoising performances have been proposed in previous decades. More recently, models based on deep learning have shown great promise, outperforming all traditional approaches. However, these techniques are limited by the need for large training sample sizes and high computational costs. In this research, we propose a denoising approach based on the Lifting Dual Tree Complex Wavelet Transform (LDTCWT) using hybrid thresholding with a Wiener filter to enhance image quality. We describe the LDTCWT, a type of lifting wavelet transform that produces complex coefficients by employing a dual tree of lifting wavelet filters to obtain their real and imaginary parts. This permits the transform to achieve approximate shift invariance and directional selectivity while reducing computation time (properties lacking in the classical wavelet transform). To develop this approach, a hybrid thresholding function is modeled by integrating the Wiener filter into the thresholding function.
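The exact hybrid function is the paper's contribution, so the sketch below is only one plausible reading of "Wiener filter inside the thresholding function": a soft threshold on the wavelet coefficients followed by an empirical Wiener gain. The threshold rule and gain formula are assumptions.

```python
# A minimal sketch of a hybrid thresholding function on wavelet
# coefficients; the specific soft-threshold + empirical-Wiener combination
# is an assumption, not the authors' published formula.
import numpy as np

def hybrid_threshold(coeffs: np.ndarray, sigma_noise: float) -> np.ndarray:
    # Universal (VisuShrink-style) threshold as a generic starting point.
    t = sigma_noise * np.sqrt(2.0 * np.log(coeffs.size))
    soft = np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
    # Empirical Wiener gain: attenuate surviving coefficients by their
    # estimated signal-to-(signal+noise) power ratio.
    signal_var = np.maximum(soft ** 2 - sigma_noise ** 2, 0.0)
    gain = signal_var / (signal_var + sigma_noise ** 2 + 1e-12)
    return gain * soft
```

In a full pipeline, this function would be applied subband-by-subband to the LDTCWT coefficients before the inverse transform reconstructs the denoised image.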
{"title":"Hybrid Thresholding Lifting Dual Tree Complex Wavelet Transform with Wiener filter for quality assurance of medical image","authors":"Hilal Naimi, A. Adamou-Mitiche, L. Mitiche","doi":"10.1109/ICICS52457.2021.9464625","DOIUrl":"https://doi.org/10.1109/ICICS52457.2021.9464625","url":null,"abstract":"The main problem in the area of medical imaging has been image denoising. The most defying for image denoising is to secure data carrying structures like surfaces and edges in order to achieve good visual quality. Different algorithms with different denoising performances have been proposed in previous decades. More recently, models focused on deep learning have shown a great promise to outperform all traditional approaches. However, these techniques are limited to the necessity of large sample size training and high computational costs. In this research, we propose a denoising approach basing on Lifting Dual Tree Complex Wavelet Transform (LDTCWT) using Hybrid Thresholding with Wiener filter to enhance the quality image. We describe the LDTCWT, a type of lifting wavelets remodeling that produce complex coefficients by employing a dual tree of lifting wavelets filters to get its real part and imaginary part. Permits the remodel to produce approximate shift invariance, directionally selective filters and reduces the computation time (properties lacking within the classical wavelets transform). To develop this approach, a hybrid thresholding function is modeled by integrating the Wiener filter into the thresholding function.","PeriodicalId":421803,"journal":{"name":"2021 12th International Conference on Information and Communication Systems (ICICS)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128627246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Taxonomy of Web Security Vulnerabilities
Pub Date: 2021-05-24 | DOI: 10.1109/ICICS52457.2021.9464576
Wafaa Al-Kahla, Ahmed S. Shatnawi, E. Taqieddin
The rapid evolution of web applications and software development has not been matched by a similar pace of development in the security and cybersecurity arenas. Web application security has become a significant issue, since insecure applications and software undermine various areas including finance, health care, defense, and other mission-critical infrastructures. Web application vulnerabilities result from misconfiguration or from flaws in the design, implementation, operation, or management at the different levels of a web application (input side, output side, or both). This taxonomy paper studies web vulnerabilities, their impact on users’ infrastructures, their detection tools (static, dynamic, or hybrid), and mitigation mechanisms at different software architecture levels.
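One hypothetical way to encode the taxonomy's axes as data types, purely for illustration; the axis values mirror the abstract, while the class and field names are assumptions.

```python
# Illustrative encoding of the taxonomy axes named in the abstract.
from dataclasses import dataclass
from enum import Enum

class Side(Enum):          # where the flaw manifests in the application
    INPUT = "input"
    OUTPUT = "output"
    BOTH = "both"

class Detection(Enum):     # the detection-tool families surveyed
    STATIC = "static"
    DYNAMIC = "dynamic"
    HYBRID = "hybrid"

@dataclass
class Vulnerability:
    name: str
    origin: str            # misconfiguration, design, implementation, ...
    side: Side
    detection: Detection

# Example classification of a familiar flaw under this scheme:
xss = Vulnerability("reflected XSS", "implementation", Side.BOTH, Detection.HYBRID)
```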
{"title":"A Taxonomy of Web Security Vulnerabilities","authors":"Wafaa Al-Kahla, Ahmed S. Shatnawi, E. Taqieddin","doi":"10.1109/ICICS52457.2021.9464576","DOIUrl":"https://doi.org/10.1109/ICICS52457.2021.9464576","url":null,"abstract":"the rapid evolution in web applications and software development has not witnessed a similar pace in development opposite what has taken place in the security and cybersecurity arenas. Web applications security has become a significant issue since insecure applications and software undermine various areas including finance, health care, defense, and other mission critical infrastructures. Web application vulnerability results from misconfiguration, flaws in the design, implementation, operation, or management at the different levels of a web application (input side, output side, or both sides). This taxonomy paper studies web vulnerabilities, their impact on users’ infrastructures, and their detection tools (static, dynamic, or hybrid) and mitigation mechanisms at different software architecture levels.","PeriodicalId":421803,"journal":{"name":"2021 12th International Conference on Information and Communication Systems (ICICS)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126371664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information Freshness and System Performance Trading-off
Pub Date: 2021-05-24 | DOI: 10.1109/ICICS52457.2021.9464535
Fatima Khattab Abu-Akleek, A. Alma'aitah
Real-time service applications that have to process users’ queries and information updates face a challenge in balancing system performance against information freshness, properties measured by the response time and the Age-of-Information (AoI), respectively. Such applications are fed with periodic arrivals of updates and queries and have to decide which to process first. In this paper, we discuss this challenge and provide a simple algorithm for trading off response time and AoI. We first discuss the Query-k, Update-k, and Joint(M,N) policies already proposed in previous work, then we introduce a simple algorithm for the Joint(M,N) policy. The algorithm works based on the outputs of the Query-k and Update-k policies in addition to a predefined threshold. The results demonstrate efficient balancing between response time and AoI.
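A minimal sketch of a Joint(M,N)-style scheduler, under assumptions: the paper's actual algorithm also consults the Query-k/Update-k outputs and a threshold, which are omitted here; this shows only the core alternation of serving up to M queries and then up to N updates.

```python
# Hypothetical Joint(M,N) scheduling core: alternate between query and
# update batches; the paper's threshold logic is not reproduced here.
from collections import deque

def joint_mn_scheduler(queries: deque, updates: deque, m: int, n: int):
    """Yield pending items in Joint(M,N) order until both queues drain."""
    while queries or updates:
        for _ in range(min(m, len(queries))):
            yield ("query", queries.popleft())    # favors response time
        for _ in range(min(n, len(updates))):
            yield ("update", updates.popleft())   # favors freshness (AoI)
```

Tuning the pair (M, N) moves the operating point along the trade-off: a larger M shortens query response times, while a larger N keeps the served information fresher.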
{"title":"Information Freshness and System Performance Trading-off","authors":"Fatima Khattab Abu-Akleek, A. Alma'aitah","doi":"10.1109/ICICS52457.2021.9464535","DOIUrl":"https://doi.org/10.1109/ICICS52457.2021.9464535","url":null,"abstract":"In real time service’s applications that have to process user’s queries and information updates have a challenge in balancing the system performance and the information freshness where these properties are related by the response time and Age-of-Information (AOI) respectively. Such applications are fed with periodic incoming of updates and queries and have to decide to which process first. In this paper we are discussing this challenge and provide a simple algorithm for trading-off the response time and AOI. We first discuss the Query-k, Update-k and Joint(M,N) policies that already proposed in previous work then we introduce a simple algorithm for the Joint(M,N)-policy. The algorithm is working based on the output of Query-k and Update-k policies in addition to predefined threshold. Results presents the efficient balancing between response time and AOI.","PeriodicalId":421803,"journal":{"name":"2021 12th International Conference on Information and Communication Systems (ICICS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122356450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Classifying Olive Fruits Based on Produced Oil Quality: A Benchmark Dataset and Strong Baselines
Pub Date: 2021-05-24 | DOI: 10.1109/ICICS52457.2021.9464577
Mahmoud Ghandour, Raffi Al-Qurran, M. Al-Ayyoub, A. Shatnawi, M. Alsmirat, F. Costen
Obtaining the highest quality olive oil (OO) during the milling process is greatly desirable. Since the quality of the produced oil depends mainly on the olive fruits (OF), it is important to manually check each batch of OF before milling, in addition to performing lab tests to verify the quality of the produced OO. The goal of this work is to automate the process of classifying OF based on whether they produce extra virgin OO (EVOO) or not. We collect a large dataset of more than 11K OF images and label them as positive/negative based on whether they produced EVOO. We then fine-tune several state-of-the-art deep learning models on this dataset. The results show that most pretrained models are very accurate on this dataset, leading to the suggestion of using the most efficient one.
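A minimal fine-tuning sketch under assumptions: the abstract does not name its framework or architectures, so PyTorch and ResNet-18 below stand in for the "several state-of-the-art models" it compares.

```python
# Illustrative transfer-learning setup: reuse a pretrained backbone and
# train only a new binary head (EVOO vs. non-EVOO).
import torch
import torch.nn as nn
from torchvision import models

def build_binary_classifier() -> nn.Module:
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in net.parameters():                   # freeze pretrained features
        p.requires_grad = False
    net.fc = nn.Linear(net.fc.in_features, 2)    # new 2-class head
    return net

model = build_binary_classifier()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```

Freezing the backbone keeps the training cost low; the paper's full fine-tuning protocol (which layers it unfreezes, for how long) is not specified in the abstract.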
{"title":"Classifying Olive Fruits Based on Produced Oil Quality: A Benchmark Dataset and Strong Baselines","authors":"Mahmoud Ghandour, Raffi Al-Qurran, M. Al-Ayyoub, A. Shatnawi, M. Alsmirat, F. Costen","doi":"10.1109/ICICS52457.2021.9464577","DOIUrl":"https://doi.org/10.1109/ICICS52457.2021.9464577","url":null,"abstract":"Obtaining the highest quality olive oil (OO) during the milling process is greatly desirable. Since the quality of the produced oil depends mainly on the olive fruits (OF), it is important to manually check each batch of OF before milling them in addition to performing lab tests to verify the quality of the produced OO. The goal of this work is to automate the process of classifying OF based on whether they produce extra virgin OO (EVOO) or not. We collect a large dataset of more than 11K OF images and label them as positive/negative based on whether they produced EVOO or not. We then fine-tune several state-of-the-art deep learning models on this dataset. The results show that most pretrained models are very accurate for this dataset leading the suggestion that we use the most efficient one.","PeriodicalId":421803,"journal":{"name":"2021 12th International Conference on Information and Communication Systems (ICICS)","volume":"166 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115319365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Secure and Usable QR Codes for Healthcare Systems: The Case of Covid-19 Pandemic
Pub Date: 2021-05-24 | DOI: 10.1109/ICICS52457.2021.9464565
H. Wahsheh, Mohammed S. Al-zahrani
Recently, and especially during the COVID-19 pandemic, there has been growing interest in employing secure technology in healthcare systems, enhancing workflow, reducing costs, and increasing the flexibility of providing healthcare services to users. QR codes are accessible, free, easy to use, and can be scanned through several free apps on smartphones. QR codes can help healthcare systems as an effective alternative to paper-based documents for patient identity systems. Sensitive information can be stored in the QR codes of patients and medical staff as offline data or at a particular URL. Besides, QR codes can hold prescriptions and electronic permits used during lockdown periods. As there is no standard structure or authentication phase in QR code generation, such codes are vulnerable to fake content. This study’s main objective is to protect healthcare information embedded inside QR codes by employing cryptographic mechanisms. Several experiments were conducted to determine how different features and users deal with barcode scanning. We introduce an authentication and integrity phase during QR code generation. We integrate digital signature, encryption, and access control techniques into a secure, real-time barcode generator and scanner implementation to protect QR code content against unauthorized users. The results demonstrate that the proposed system generates secure and usable QR codes for healthcare information protection across several scenarios.
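A sketch of the signed-QR idea under assumptions: the abstract does not specify a cipher suite, so the ECDSA/P-256 signature and the payload layout below are illustrative choices, not the authors' implementation (their encryption and access-control layers are omitted).

```python
# Hypothetical signed-QR generator: embed the record plus a detached
# signature so a scanner holding the public key can verify authenticity.
import base64
import qrcode
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())

def make_signed_qr(record: bytes, path: str) -> None:
    signature = private_key.sign(record, ec.ECDSA(hashes.SHA256()))
    payload = b"|".join([base64.b64encode(record), base64.b64encode(signature)])
    qrcode.make(payload.decode()).save(path)

# Illustrative payload only; a real record would follow a defined schema.
make_signed_qr(b"patient-id:12345;permit:lockdown-travel", "permit.png")
```

A scanner app would split the payload on "|", verify the signature with the issuer's public key, and reject the code if verification fails, which is what closes the fake-content gap the abstract describes.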
{"title":"Secure and Usable QR Codes for Healthcare Systems: The Case of Covid-19 Pandemic","authors":"H. Wahsheh, Mohammed S. Al-zahrani","doi":"10.1109/ICICS52457.2021.9464565","DOIUrl":"https://doi.org/10.1109/ICICS52457.2021.9464565","url":null,"abstract":"Recently, especially during the COVID-19 pandemic, there is a growing interest in employing secure technology in health care systems, enhancing workflow, reducing costs, and developing the flexibility of providing healthcare services to users. QR codes are accessible, free, easy to use, and can be scanned through several free apps on smartphones. QR codes can help healthcare systems as an effective alternative to using paper-based documents for a patient identity system. Sensitive information can be stored in the QR codes of patients and medical staff as offline data or stored in a particular URL. Besides, QR codes can hold prescriptions and electronic permits used during the lockdown period. As there is no standard structure or authentication phase in QR code generation, such codes are vulnerable to fake content. This study’s main objective is to protect healthcare information embedded inside QR codes by employing cryptographic mechanisms. Several experiments were conducted to determine how different features and users deal with barcode scanning. We introduce an authentication and integrity phase during QR code generation. We integrated digital signature, encryption, and access control techniques into a secure, real-time barcode generator and scanner implementation to protect QR code content against unauthorized users. The results demonstrate that the proposed system enables generating secure and usable QR codes for healthcare information protection among several scenarios.","PeriodicalId":421803,"journal":{"name":"2021 12th International Conference on Information and Communication Systems (ICICS)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124214334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sentiment Analysis of Arabic Tweets about Violence Against Women using Machine Learning
Pub Date: 2021-05-24 | DOI: 10.1109/ICICS52457.2021.9464600
Moath Alzyout, Emran Al Bashabsheh, Hassan M. Najadat, Ahmad Alaiad
Social media platforms such as Twitter have become a significant pulse in smart societies, shaping our communities by sensing people’s information and perceptions across living areas over space and time. Social media sentiment analysis helps in recognizing people’s emotions and attitudes and in assessing various public issues, such as women’s rights and violence against women. In this paper, we use sentence-based sentiment analysis to study the notion of women’s rights. We collected Arabic dialect tweets from the whole Arab world via the Twitter API, then cleaned the data for use in the classification step. We examined different traditional classification algorithms, namely Support Vector Machine, K-Nearest Neighbour, Decision Trees, and Naive Bayes, and compared their results with deep learning results. Finally, we compared the classification results using the precision, recall, and accuracy measures. We found that the Support Vector Machine algorithm achieved the best results, while Naive Bayes was the worst. We also noticed increasing attention to women’s rights in the Arab world.
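A minimal sketch of the classical pipeline the paper compares; the TF-IDF representation and hyperparameters are assumptions (the abstract does not state the feature extraction), and the tiny corpus is a placeholder for the cleaned Arabic-dialect tweets.

```python
# Illustrative SVM baseline over a placeholder corpus; swap in the real
# cleaned tweets and labels to reproduce the comparison in spirit.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

train_tweets = ["support women rights", "stop violence now",
                "equal rights matter", "report abuse cases"]
train_labels = [1, 0, 1, 0]          # hypothetical sentiment labels

pipe = make_pipeline(TfidfVectorizer(), LinearSVC())
pipe.fit(train_tweets, train_labels)
print(classification_report(train_labels, pipe.predict(train_tweets)))
```

Replacing `LinearSVC()` with `KNeighborsClassifier()`, `DecisionTreeClassifier()`, or `MultinomialNB()` reproduces the four-way comparison of traditional classifiers described in the abstract.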
{"title":"Sentiment Analysis of Arabic Tweets about Violence Against Women using Machine Learning","authors":"Moath Alzyout, Emran Al Bashabsheh, Hassan M. Najadat, Ahmad Alaiad","doi":"10.1109/ICICS52457.2021.9464600","DOIUrl":"https://doi.org/10.1109/ICICS52457.2021.9464600","url":null,"abstract":"Social Media platforms, such as Twitter became a significant pulse in smart societies that are shaping our communities by sensitizing people’s information and perceptions across living areas over space and time. Social media sentiment analysis helps in recognizing people’s emotions and attitudes and helps in assessing various public issues, such as, women’s rights and violence against women. In this paper, we used the sentence based sentiment analysis to study the notion of women’s rights. We collected Arabic dialect tweets from the whole Arab world as data via a Twitter API, then we cleaned the data to use it in the classification step. We have examined different types of traditional classification algorithms namely, Support Vector Machine, K-Nearest-Neighbour, Decision Trees, and Naive Bayes. Then, we compared these results with deep learning results. Finally, we compared the classification results using the precision, recall and accuracy measurements. We found that the Support Vector Machine algorithm gained the best results, while the Naive Bayes was the worst. We also noticed that there is an increasing attention to women’s rights in the Arab world.","PeriodicalId":421803,"journal":{"name":"2021 12th International Conference on Information and Communication Systems (ICICS)","volume":"78 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126047405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance of Different Machine Learning Methods for Sinus Diseases Classification
Pub Date: 2021-05-24 | DOI: 10.1109/ICICS52457.2021.9464624
Aya Nuseir, A. Nuseir, M. Alsmirat, M. Al-Ayyoub, Mohammed Mahdi, H. Al-Balas
In this project, a sinonasal diseases dataset is created with the help of Ear, Nose, and Throat (ENT) specialists at King Abdullah University Hospital (KAUH), Jordan. This dataset is then used to experiment with different feature extraction and selection methods and different machine learning classification methods. The work can be summarized as follows. We start by selecting the participating patients according to their history information, acquired from the patients using a questionnaire. Then, we construct a tool that employs image processing techniques to process the selected patients’ CT scan images and extract useful information from them. After that, different filter and wrapper selection methods with various machine learning techniques are utilized to classify the cases in the dataset. The results show that wrapper feature selection (using the PART classifier with best-first search) produces better results with all of the classifiers used, compared with the other feature selection methods.
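A sketch of wrapper-style feature selection for contrast with the filter methods; the paper uses Weka's PART rule learner with best-first search, so the greedy forward search and decision-tree stand-in below are assumptions made to keep the example in scikit-learn.

```python
# Illustrative wrapper selection: a classifier repeatedly re-evaluated on
# candidate feature subsets, unlike filter methods that rank features once.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the CT-derived feature matrix.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

wrapper = SequentialFeatureSelector(
    DecisionTreeClassifier(random_state=0),  # stand-in for Weka's PART
    n_features_to_select=5,
    direction="forward",                     # greedy search, not best-first
    cv=5,
)
X_selected = wrapper.fit_transform(X, y)
print("kept feature indices:", wrapper.get_support(indices=True))
```

The key contrast with filter selection is that the wrapper scores each candidate subset by cross-validated classifier performance, which is what the paper credits for its better downstream results.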
{"title":"Performance of Different Machine Learning Methods for Sinus Diseases Classification","authors":"Aya Nuseir, A. Nuseir, M. Alsmirat, M. Al-Ayyoub, Mohammed Mahdi, H. Al-Balas","doi":"10.1109/ICICS52457.2021.9464624","DOIUrl":"https://doi.org/10.1109/ICICS52457.2021.9464624","url":null,"abstract":"In this project, a sinonasal diseases dataset is created with the help of Ear, Nose, and Throat (ENT) specialists at King Abdullah University Hospital (KAUH), Jordan. This dataset is then used to experiment with different features extraction and selection methods and different machine learning classification methods. The work can be summarized as follows. We start by selecting the participating patients according to their history information that is acquired from the patients using a questionnaire. Then, we construct a tool that employs image processing techniques to process the selected patients’ CT scan images to extract useful information from them. After that, different filters and wrapper selection methods with various machine learning techniques are utilized to classify the cases in the dataset. The results show that the performance of wrapper feature selection (using PART classifier with best first search) with all used classifiers produces better results compared with the other feature selection methods.","PeriodicalId":421803,"journal":{"name":"2021 12th International Conference on Information and Communication Systems (ICICS)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124629820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}