Ibn Omar Hash Algorithm
Pub Date: 2022-12-04 | DOI: 10.1109/CICN56167.2022.10008341
Raed Shafei
A hash is a fixed-length output produced by passing data through a one-way function, called the hashing algorithm, that cannot be reversed. Hashing algorithms are used to store sensitive information, such as passwords, which are stored as hashes after passing through the algorithm. Hashing algorithms are also used to verify checksums of data transmitted over the internet. This paper discusses how Ibn Omar's hashing algorithm provides higher security for data than other hash functions in use today. Ibn Omar's hashing algorithm produces an output of 1024 bits, four times the digest size of SHA-256 and twice that of SHA-512. This larger digest reduces the vulnerability to hash collisions, and finding a collision would require enormous computational power. The algorithm uses eight salts per input. It aims to provide strong privacy and security for users.
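The abstract does not specify Ibn Omar's internal construction, so the sketch below only illustrates the two generic ideas it relies on: the birthday bound that ties collision resistance to digest size, and per-input salting. SHA-512 stands in for the actual hash, and the salted_hash helper is illustrative, not the paper's design.

```python
import hashlib
import secrets

# Birthday bound: an n-bit digest offers roughly 2^(n/2) collision resistance,
# so a 1024-bit digest raises the generic collision cost from ~2^128 (SHA-256)
# and ~2^256 (SHA-512) to ~2^512.
for bits in (256, 512, 1024):
    print(f"{bits}-bit digest: ~2^{bits // 2} evaluations to find a collision")

def salted_hash(password, salt=None):
    """Hash a password with a random salt; SHA-512 is a stand-in here,
    since the abstract does not specify Ibn Omar's construction."""
    if salt is None:
        salt = secrets.token_bytes(16)
    return salt, hashlib.sha512(salt + password.encode()).digest()

salt, digest = salted_hash("correct horse battery staple")
print(salt.hex(), digest.hex())
```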
{"title":"Ibn Omar Hash Algorithm","authors":"Raed Shafei","doi":"10.1109/CICN56167.2022.10008341","DOIUrl":"https://doi.org/10.1109/CICN56167.2022.10008341","url":null,"abstract":"A hash is a fixed-length output of some data that has been through a one-way function that cannot be reversed, called the hashing algorithm. Hashing algorithms are used to store secure information, such as passwords. They are stored as hashes after they have been through a hashing algorithm. Also, hashing algorithms are used to insure the checksum of certain data over the internet. This paper discusses how Ibn Omar's hashing algorithm will provide higher security for data than other hash functions used nowadays. Ibn Omar's hashing algorithm in produces an output of 1024 bits, four times as SHA256 and twice as SHA512. Ibn Omar's hashing algorithm reduces the vulnerability of a hash collision due to its size. Also, it would require enormous computational power to find a collision. There are eight salts per input. This hashing algorithm aims to provide high privacy and security for users.","PeriodicalId":287589,"journal":{"name":"2022 14th International Conference on Computational Intelligence and Communication Networks (CICN)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121708611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance Evaluation of Machine Learning Models on Apache Spark: An Empirical Study
Pub Date: 2022-12-04 | DOI: 10.1109/CICN56167.2022.10008282
Asma Z. Yamani, Shikah J. Alsunaidi, Imane Boudellioua
Artificial intelligence (AI) and machine learning significantly improve many sectors, such as education, healthcare, and industry. Machine learning techniques depend mainly on the volume and diversity of the training data. With the digital transformation we are living through, abundant data can be collected from different sources. However, the problem that needs to be addressed is how this amount of data can be processed and where it can be stored. Cloud services and distributed file systems (DFSs) help address this issue. DFSs such as Hadoop, Quantcast, and Apache Spark differ in many aspects, including scheduling algorithms, data management protocols, throughput, and runtime, and some DFSs suit certain applications better than others. Apache Spark is capable of handling iterative workloads such as machine learning, and it provides an integrated library of machine learning algorithms called MLlib. In this paper, we evaluated the use of Spark with two machine learning algorithms, namely Logistic Regression (LR) and Random Forests (RF). We investigated the effect of varying the memory allocation configuration and of using a GPU. We concluded that using Spark greatly improves runtime and memory consumption; however, its use has to be justified by the size of the data, owing to various factors that affect the machine learning model's accuracy. Memory allocation should be kept to the minimum needed, and a GPU should be used only when the machine learning algorithm supports parallelization.
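A minimal PySpark sketch of the kind of setup the paper evaluates: MLlib's LogisticRegression and RandomForestClassifier with an explicit driver-memory setting. The file name and column names are illustrative, and the paper's actual benchmark configuration is not given in the abstract.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression, RandomForestClassifier
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = (SparkSession.builder
         .appName("mllib-benchmark")
         .config("spark.driver.memory", "4g")   # memory allocation under test
         .getOrCreate())

# Illustrative input: a CSV with numeric feature columns and a binary "label".
df = spark.read.csv("data.csv", header=True, inferSchema=True)
features = [c for c in df.columns if c != "label"]
df = VectorAssembler(inputCols=features, outputCol="features").transform(df)
train, test = df.randomSplit([0.8, 0.2], seed=42)

evaluator = BinaryClassificationEvaluator(labelCol="label")
for model in (LogisticRegression(labelCol="label"),
              RandomForestClassifier(labelCol="label")):
    fitted = model.fit(train)
    print(type(model).__name__, evaluator.evaluate(fitted.transform(test)))
```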
{"title":"Performance Evaluation of Machine Learning Models on Apache Spark: An Empirical Study","authors":"Asma Z. Yamani, Shikah J. Alsunaidi, Imane Boudellioua","doi":"10.1109/CICN56167.2022.10008282","DOIUrl":"https://doi.org/10.1109/CICN56167.2022.10008282","url":null,"abstract":"Artificial intelligence (AI) and machine learning significantly improve many sectors, such as education, healthcare, and industry. Machine learning techniques mainly depend on the volume and diversity of training data. With the digital transformation we live in, an abundant amount of data can be collected from different sources. However, the problem that needs to be addressed is how this amount of data can be processed and where it can be stored. Cloud services and distributed file systems (DFSs) help address this issue. Many DFSs such as Hadoop, Quantcast, and Apache Spark differ in many aspects, including scheduling algorithms, data management protocol, throughput, and runtime. Some DFSs may be better for working with specific applications than others. Apache Spark is capable of handling iterative operations like machine learning operations as well as it provides an integrated library of different machine learning algorithms called MLlib. In this paper, we evaluated the use of Spark using two machine learning algorithms, namely Logistic Regression (LR) and Random Forests (RF). We investigated the effect of varying the memory allocation configuration and the use of GPU. We concluded that the use of Spark greatly improves the runtime and memory consumption. However, its use has to be justifiable and needed for the size of the data due to different factors that affect the machine learning model's accuracy. The memory allocation should be kept to the minimum needed, and GPU should only be used when the machine learning algorithm used supports parallelization.","PeriodicalId":287589,"journal":{"name":"2022 14th International Conference on Computational Intelligence and Communication Networks (CICN)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114830981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vertical Wind Speed Estimation Using Generalized Additive Model (GAM) for Regression
Pub Date: 2022-12-04 | DOI: 10.1109/CICN56167.2022.10008372
H. Nuha, Rizka Reza Pahlevi, M. Mohandes, S. Rehman, A. Al-Shaikhi, H. Tella
The general plan for electricity provision of the Indonesia Electricity Company for 2010–2019 states an annual electricity demand of 55,000 MW. Wind speed (WS) assessment is required for candidate wind farm sites. This paper uses the generalized additive model (GAM) for vertical WS estimation. The method is evaluated in terms of symmetric mean absolute percentage error (SMAPE), mean absolute error (MAE), and the adjusted coefficient of determination (R²adj). The highest R²adj values between the measured and estimated WS achieved by the GAM at heights of 60, 100, 140, and 180 m are 96.34%, 81.66%, 64.68%, and 62.90%, respectively.
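For reference, the three reported metrics are standard and can be computed as below (one common SMAPE variant is shown; the abstract does not state which the paper uses). The wind data here are synthetic, and the toy linear fit merely stands in for a GAM (e.g., pygam's LinearGAM) so the metric code runs without extra dependencies.

```python
import numpy as np

def mae(y, yhat):
    return np.mean(np.abs(y - yhat))

def smape(y, yhat):
    # One common SMAPE definition; variants differ in the denominator.
    return 100.0 * np.mean(np.abs(yhat - y) / ((np.abs(y) + np.abs(yhat)) / 2.0))

def adjusted_r2(y, yhat, n_predictors):
    n = len(y)
    r2 = 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1)

# Hypothetical usage: estimate WS at 100 m from WS measured at 60 m.
ws60 = np.random.default_rng(0).weibull(2.0, 500) * 8.0
ws100 = 1.15 * ws60 + np.random.default_rng(1).normal(0, 0.5, 500)
coef = np.polyfit(ws60, ws100, 1)       # stand-in for a fitted GAM
pred = np.polyval(coef, ws60)
print(mae(ws100, pred), smape(ws100, pred), adjusted_r2(ws100, pred, 1))
```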
{"title":"Vertical Wind Speed Estimation Using Generalized Additive Model (GAM) for Regression","authors":"H. Nuha, Rizka Reza Pahlevi, M. Mohandes, S. Rehman, A. Al-Shaikhi, H. Tella","doi":"10.1109/CICN56167.2022.10008372","DOIUrl":"https://doi.org/10.1109/CICN56167.2022.10008372","url":null,"abstract":"The general plan for the provision of electricity of Indonesia Electricity Company for 2010–2019 states that the annual electricity demand is 55,000 MW. Wind speed (WS) assessment is required for wind farm site candidates. This paper uses the generalized additive model (GAM) for vertical WS estimation. The method is evaluated in terms of symmetric mean absolute percentage error (SMAPE), mean absolute error (MAE), and the adjusted coefficient of determination (R2adj). The highest values of R2adj between the measured and the estimated WS values achieved by GAM method at 60, 100, 140, and 180 m of heights are 96.34%, 81.66%, 64.68 %, and 62.90 % respectively.","PeriodicalId":287589,"journal":{"name":"2022 14th International Conference on Computational Intelligence and Communication Networks (CICN)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126314849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identifiability of Causal-based ML Fairness Notions
Pub Date: 2022-12-04 | DOI: 10.1109/CICN56167.2022.10008263
K. Makhlouf, Sami Zhioua, C. Palamidessi
Machine learning algorithms can produce biased outcomes/predictions, typically against minorities and under-represented sub-populations. Fairness is therefore emerging as an important requirement for the safe application of machine-learning-based technologies. The most commonly used fairness notions (e.g., statistical parity, equalized odds, predictive parity) are observational and rely on mere correlation between variables. These notions fail to identify bias in the presence of statistical anomalies such as Simpson's or Berkson's paradoxes. Causality-based fairness notions (e.g., counterfactual fairness, no-proxy discrimination) are immune to such anomalies and hence more reliable for assessing fairness. The problem with causality-based fairness notions, however, is that they are defined in terms of quantities (e.g., causal, counterfactual, and path-specific effects) that are not always measurable. This is known as the identifiability problem and is the topic of a large body of work in the causal inference literature. The first contribution of this paper is a compilation of the major identifiability results that are of particular relevance to machine learning fairness. To the best of our knowledge, no previous work in the field of ML fairness or causal inference provides such a systematization of knowledge. The second contribution is more general and addresses the main problem of using causality in machine learning: how to extract causal knowledge from observational data in real scenarios. This paper shows how this can be achieved using identifiability.
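As a concrete illustration of why purely observational notions can mislead under Simpson's paradox, the sketch below builds a small synthetic admissions table (numbers invented, in the spirit of the classic Berkeley admissions data) in which the aggregate acceptance rate suggests bias against group B, while within every department group B is actually favored.

```python
# dept -> {group: (applicants, admitted)}; numbers chosen to exhibit
# Simpson's paradox.
data = {
    "dept1": {"A": (100, 80), "B": (20, 18)},
    "dept2": {"A": (20, 2),   "B": (100, 20)},
}

def rate(apps, adm):
    return adm / apps

for dept, groups in data.items():
    rA, rB = rate(*groups["A"]), rate(*groups["B"])
    print(f"{dept}: A={rA:.0%}, B={rB:.0%}")   # B is favored in each dept

totA = [sum(x) for x in zip(*(g["A"] for g in data.values()))]
totB = [sum(x) for x in zip(*(g["B"] for g in data.values()))]
# Statistical parity on the aggregate flags bias against B (32% vs 68%),
# the opposite of what the stratified view shows.
print(f"aggregate: A={rate(*totA):.0%}, B={rate(*totB):.0%}")
```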
{"title":"Identifiability of Causal-based ML Fairness Notions","authors":"K. Makhlouf, Sami Zhioua, C. Palamidessi","doi":"10.1109/CICN56167.2022.10008263","DOIUrl":"https://doi.org/10.1109/CICN56167.2022.10008263","url":null,"abstract":"Machine learning algorithms can produce biased outcome/prediction, typically, against minorities and under-represented sub-populations. Therefore, fairness is emerging as an important requirement for the safe application of machine learning based technologies. The most commonly used fairness notions (e.g. statistical parity, equalized odds, predictive parity, etc.) are observational and rely on mere correlation between variables. These notions fail to identify bias in case of statistical anomalies such as Simpson's or Berkson's paradoxes. Causality-based fairness notions (e.g. counterfactual fairness, no-proxy discrimination, etc.) are immune to such anomalies and hence more reliable to assess fairness. The problem of causality-based fairness notions, however, is that they are defined in terms of quantities (e.g. causal, counterfactual, and path-specific effects) that are not always measurable. This is known as the identifiability problem and is the topic of a large body of work in the causal inference literature. The first contribution of this paper is a compilation of the major identifiability results which are of particular relevance for machine learning fairness. To the best of our knowledge, no previous work in the field of ML fairness or causal inference provides such systemization of knowledge. The second contribution is more general and addresses the main problem of using causality in machine learning, that is, how to extract causal knowledge from observational data in real scenarios. This paper shows how this can be achieved using identifiability.","PeriodicalId":287589,"journal":{"name":"2022 14th International Conference on Computational Intelligence and Communication Networks (CICN)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128191302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep learning with noisy labels: Learning True Labels as Discrete Latent Variable
Pub Date: 2022-12-04 | DOI: 10.1109/CICN56167.2022.10008239
Azeddine Elhassouny, Soufiane Idbrahim
In recent years, learning from data with noisy labels (label noise) has emerged as a critical issue for supervised learning. This issue has become even more concerning given recent concerns about deep learning's generalization capabilities. Indeed, deep learning requires large amounts of data, which are typically gathered by search engines; however, these engines frequently return data with noisy labels. In this study, variational inference is used to investigate label noise in deep learning. (1) Using the label noise concept, observed labels are learned discriminatively while true labels are learned using reparameterized variational inference. (2) The noise transition matrix is learned during training without any special methods, heuristics, or initialization stages. The effectiveness of our approach is shown on several test datasets, including MNIST and CIFAR32, and theoretical results show how variational inference in any discriminative neural network can be used to learn the correct label distribution.
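The paper's exact variational formulation is not given in the abstract. As a sketch of the general idea of a learnable noise transition matrix, the PyTorch snippet below follows the standard forward-correction construction (not necessarily the paper's method): a classifier's clean-label posterior p(y|x) is composed with a row-stochastic matrix T so that p(ỹ|x) = Σ_y p(ỹ|y) p(y|x).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLabelHead(nn.Module):
    """Compose p(y|x) with a learnable row-stochastic transition matrix T,
    so the model trained on noisy labels predicts p(noisy y | x)."""
    def __init__(self, backbone: nn.Module, num_classes: int):
        super().__init__()
        self.backbone = backbone
        # Unconstrained parameters; a row-wise softmax keeps T stochastic.
        # Initializing near the identity assumes mostly-correct labels.
        self.T_logits = nn.Parameter(torch.eye(num_classes) * 5.0)

    def forward(self, x):
        p_clean = F.softmax(self.backbone(x), dim=-1)   # p(y|x)
        T = F.softmax(self.T_logits, dim=-1)            # T[i, j] = p(noisy j | true i)
        return p_clean @ T                              # p(noisy y | x)

# Toy MNIST-shaped usage; the backbone is illustrative.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model = NoisyLabelHead(backbone, num_classes=10)
x = torch.randn(4, 1, 28, 28)
noisy_y = torch.randint(0, 10, (4,))
loss = F.nll_loss(torch.log(model(x) + 1e-8), noisy_y)
loss.backward()
```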
{"title":"Deep learning with noisy labels: Learning True Labels as Discrete Latent Variable","authors":"Azeddine Elhassouny, Soufiane Idbrahim","doi":"10.1109/CICN56167.2022.10008239","DOIUrl":"https://doi.org/10.1109/CICN56167.2022.10008239","url":null,"abstract":"In recent years, learning from data with noisy labels (Label Noise) has emerged as a critical issue for supervised learning. This issue has become even more concerning as a result of recent concerns about Deep Learning's generalization capabilities. Indeed, deep learning necessitates a large amount of data, which is typically gathered by search engines. However, these engines frequently return data with Noisy labels. In this study, the variational inference is used to investigate Label Noise in Deep Learning. (1) Using the Label Noise concept, observable labels are learned discriminatively while true labels are learned using reparameterization variational inference. (2) The noise transition matrix is learned during training without the use of any special methods, heuristics, or initial stages. The effectiveness of our approach is shown on several test datasets, including MNIST and CIFAR32, and theoretical results show how variational inference in any discriminating neural network can be used to learn the correct label distribution.","PeriodicalId":287589,"journal":{"name":"2022 14th International Conference on Computational Intelligence and Communication Networks (CICN)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131385746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Home Appliances using IoT and Machine Learning: The Smart Home
Pub Date: 2022-12-04 | DOI: 10.1109/CICN56167.2022.10008294
Kritika Rathi, Vanshika Sharma, Swati Gupta, A. Bagwari, G. Tomar
The term home automation describes the technologies used in the home for essential needs such as interconnection and wireless communication, enabling household components to be used in a smart way through the Internet of Things (IoT). Home automation implies that systems operate automatically and that a wide variety of home appliances can be controlled remotely. In information and communication technology, cloud computing and the IoT are the major services used in this model, and both strongly influence how new-generation applications are built and deployed. The interfaces between the hardware components, the communication layer, and the software form the core of the technology, which connects every device through the internet or Wi-Fi. Because every device in the model is connected to Wi-Fi, its status can be viewed on a mobile phone regardless of whether the user is at home, at work, or anywhere else in the world; the mobile app shows up-to-date information about the lights, motor, and induction appliances, and also performs face recognition at the front door for safety.
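As a hedged sketch of how such a system might expose appliance state to a phone app, the stdlib-only snippet below serves a JSON status endpoint that an app could poll. The device names, port, and protocol are illustrative assumptions; the abstract does not state the authors' actual stack.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative device state; a real system would update this from sensors.
STATUS = {"light": "on", "motor": "off", "induction": "idle",
          "front_door_face_recognized": True}

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(STATUS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The phone app polls http://<home-ip>:8080/ for the latest status.
    HTTPServer(("0.0.0.0", 8080), StatusHandler).serve_forever()
```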
{"title":"Home Appliances using loT and Machine Learning: The Smart Home","authors":"Kritika Rathi, Vanshika Sharma, Swati Gupta, A. Bagwari, G. Tomar","doi":"10.1109/CICN56167.2022.10008294","DOIUrl":"https://doi.org/10.1109/CICN56167.2022.10008294","url":null,"abstract":"The term Home Automation describes the technologies used in the home for the essential needs like interconnection, wireless communication and making the quality of the component to use in a smart way just through the internet of things. The term HOME AUTOMATION indicates the quality of the automatic working of the systems and controlling of the different variety of home appliances. In the information communication technology, cloud computing and the IOT are the major types of services used in the model for the advancement of the new generation in which these two are making a great influence for making and deploying new applications. On the other hand, the interfaces between the hardware component, communication, and the programming are a main application for the technology used in the home which works to combine every device through the internet or Wi-Fi. The every device used in the model is connected to the Wi-Fi due to which we can get the output through our mobile phones, it doesn't depends whether we are at home or at the working place or anywhere else in the world, the app in the mobile phone shows the update information of our lights, motor, induction and also face recognize for the safety purpose in the front door.","PeriodicalId":287589,"journal":{"name":"2022 14th International Conference on Computational Intelligence and Communication Networks (CICN)","volume":"323 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133896495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Implementation of a New Proposed Round-Robin Algorithm with Smart Time Quantum in Cloud Computing Environment
Pub Date: 2022-12-04 | DOI: 10.1109/CICN56167.2022.10008330
Haifa Al-Shammare, Nehal Al-Otaiby
The popularity of cloud computing platforms has risen dramatically in recent years. As cloud computing serves millions of users at the same time, it must be able to handle all those users' demands efficiently. Thus, choosing a suitable scheduling algorithm is crucial in the cloud computing environment in order to ensure efficient performance with a reasonable degree of quality of service (QoS). The primary goal of this research is to empirically implement and evaluate a recently proposed Round-Robin algorithm with smart time quantum (RR-STQ) in a cloud computing environment, as well as to enhance RR-STQ with a dynamic smart time quantum. The CloudSim tool was used to simulate the cloud computing platform, implement RR-STQ, and evaluate it against several algorithms under different scenarios. Three scheduling performance metrics were used in the evaluation. In all comparison scenarios, RR-STQ achieved a significant improvement in average response time (RT). Moreover, RR-STQ outperforms the traditional RR algorithm in average turnaround time (TAT), waiting time (WT), and response time (RT), and the implemented RR-STQ with a dynamic time quantum outperforms the static-time-quantum variant. Based on the evaluation results, it is beneficial to integrate the RR algorithm with other scheduling models, such as shortest job first (SJF), to improve WT and TAT. Furthermore, the investigations revealed that a dynamic time quantum improves the performance of the RR algorithm.
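The paper's exact "smart time quantum" heuristic is not given in the abstract. The sketch below simulates round-robin with a dynamic quantum recomputed each round as the median of the remaining burst times (an assumed heuristic, for illustration only), and reports average WT and TAT for comparison against a static quantum.

```python
from statistics import median

def round_robin(bursts, quantum=None):
    """Simulate RR on CPU burst times (all jobs arrive at t=0).
    If quantum is None, recompute it each round as the median of the
    remaining bursts -- an assumed 'dynamic' heuristic."""
    remaining = dict(enumerate(bursts))
    finish, t = {}, 0
    while remaining:
        q = quantum if quantum is not None else max(1, int(median(remaining.values())))
        for job in list(remaining):
            run = min(q, remaining[job])
            t += run
            remaining[job] -= run
            if remaining[job] == 0:
                del remaining[job]
                finish[job] = t
    tat = [finish[j] for j in range(len(bursts))]       # arrival time is 0
    wt = [tat[j] - bursts[j] for j in range(len(bursts))]
    return sum(wt) / len(wt), sum(tat) / len(tat)

bursts = [24, 3, 3, 17, 9]
print("static q=4 (avg WT, avg TAT):", round_robin(bursts, quantum=4))
print("dynamic     (avg WT, avg TAT):", round_robin(bursts))
```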
{"title":"An Implementation of a New Proposed Round-Robin Algorithm with Smart Time Quantum in Cloud Computing Environment","authors":"Haifa Al-Shammare, Nehal Al-Otaiby","doi":"10.1109/CICN56167.2022.10008330","DOIUrl":"https://doi.org/10.1109/CICN56167.2022.10008330","url":null,"abstract":"The popularity of cloud computing platforms has risen dramatically in recent years. As cloud computing serves millions of users at the same time, it must be able to handle all those users' demands efficiently. Thus, choosing a suitable scheduling algorithm is crucial in the cloud computing environment in order to ensure efficient performance with a reasonable degree of quality of service (QoS). The primary goal of this research is to empirically implement and evaluate a recently proposed Round-Robin algorithm with smart time quantum (RR-STQ) in a cloud computing environment, as well as, to enhance the RR-STQ with a dynamic smart time quantum. The CloudSim tool was used to simulate the cloud computing platform to implement RR-STQ and evaluate it with several algorithms using different scenarios. In addition, three scheduling performance metrics were used in the evaluation process. In all comparison scenarios, the (RR-STQ) achieved a significant improvement rate in terms of average response time (RT). Moreover, (RR-STQ) has a better performance in the average turnaround time (TAT), waiting time (WT), and response time (RT) than the traditional RR algorithm. Also, the implemented algorithm (RR-STQ) with dynamic time quantum has a better performance than static time quantum. Based on the evaluation results, it is beneficial to integrate the RR algorithm with other scheduling models such as shortest job first (SJF) to enhance the WT and TAT. Furthermore, the investigations revealed that the dynamic time quantum improves the performance of the RR algorithm.","PeriodicalId":287589,"journal":{"name":"2022 14th International Conference on Computational Intelligence and Communication Networks (CICN)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134373199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
WSN Routing Protocols: Anonymity Prospective Analysis
Pub Date: 2022-12-04 | DOI: 10.1109/CICN56167.2022.10008348
Abdulrahman Abu Elkhail, U. Baroudi, Mohammed S. H. Younis
A Wireless Sensor Network (WSN) is composed of a number of sensor nodes and a single base station (BS), distributed randomly over an area of interest. WSNs have proven very beneficial in unattended applications across many domains, such as scientific, civil, and military use. In these applications, sensors send their measurements to the BS over multi-hop wireless routes, and the BS is responsible for collecting and processing the sensed data. Given the importance of the BS, a potential attacker would try to locate it by examining network traffic patterns in order to launch attacks intended to disrupt network functionality. In this paper, we analyze traffic analysis attack models from the viewpoint of an adversary. Additionally, we examine the benefits and drawbacks of various routing protocols in terms of how much they expose the network to traffic analysis attacks. Our evaluation is supported by simulation results.
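As a toy illustration of the adversary model (not the paper's simulator), the sketch below routes one packet per node along shortest paths toward a hidden BS in a random geometric network, then shows that simply ranking nodes by forwarding load exposes the BS and its neighborhood, since multi-hop traffic converges there.

```python
import random
from collections import Counter, deque

random.seed(1)
N, RANGE, BS = 60, 0.22, 0
pos = [(random.random(), random.random()) for _ in range(N)]
nbrs = [[v for v in range(N) if v != u and
         (pos[u][0] - pos[v][0])**2 + (pos[u][1] - pos[v][1])**2 < RANGE**2]
        for u in range(N)]

# BFS tree rooted at the BS gives each node's next hop toward it.
parent = {BS: None}
dq = deque([BS])
while dq:
    u = dq.popleft()
    for v in nbrs[u]:
        if v not in parent:
            parent[v] = u
            dq.append(v)

# Each connected node sends one packet; count how often each node handles one.
relays = Counter()
for src in parent:
    u = parent[src]
    while u is not None:
        relays[u] += 1
        u = parent[u]

# Traffic convergence leaks the BS location to an eavesdropping adversary:
# the busiest node is the BS (or one of its direct neighbors).
print(relays.most_common(3), "true BS:", BS)
```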
{"title":"WSN Routing Protocols: Anonymity Prospective Analysis","authors":"Abdulrahman Abu Elkhail, U. Baroudi, Mohammed S. H. Younis","doi":"10.1109/CICN56167.2022.10008348","DOIUrl":"https://doi.org/10.1109/CICN56167.2022.10008348","url":null,"abstract":"A Wireless Sensor Network (WSN) is composed of a number of sensor nodes and a single Base-Station (BS), distributed randomly in an area of interest. WSNs have proven very beneficial in various applications in unattended setup in many domains such as scientific, civil, and military. In these applications, sensors send their measurements to the Base Station over multi-hop wireless routes where the Base Station is responsible for collecting and processing the sensed data. Given the importance of the BS, a potential attacker would look to locate the base station by examining network traffic patterns in order to launch specific attacks intended to interfere with network functionality. In this paper, we analyze traffic analysis attack models from the viewpoint of an adversary. Additionally, we examine the benefits and drawbacks of various routing protocols in terms of exposing the network to traffic analysis attacks. Our evaluation is supported by simulation results.","PeriodicalId":287589,"journal":{"name":"2022 14th International Conference on Computational Intelligence and Communication Networks (CICN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130687952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Realistic Face Masks Generation Using Generative Adversarial Networks
Pub Date: 2022-12-04 | DOI: 10.1109/CICN56167.2022.10008304
Khaled Al Butainy, Muhamad Felemban, H. Luqman
Understanding facial expressions is important for interactions among humans, as a face conveys much about a person's identity and emotions. Research in human emotion recognition has become more popular due to advances in machine learning and deep learning techniques. However, the spread of COVID-19 and the need to wear masks in public have degraded the performance of current emotion recognition models. Improving these models therefore requires datasets with masked faces. In this paper, we propose a model that generates realistic face masks using generative adversarial network models, in particular image inpainting. The MAFA dataset was used to train the generative image inpainting model. In addition, a face detection model was proposed to identify the mask area. The model was evaluated on the MAFA and CelebA datasets, with promising results.
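The detect-then-inpaint pipeline can be sketched with classical OpenCV stand-ins: a Haar cascade replaces the paper's face detection model, the lower half of the face box approximates the mask region, and cv2.inpaint substitutes for the MAFA-trained GAN inpainting model (which would synthesize a realistic mask texture there). File names are illustrative.

```python
import cv2
import numpy as np

img = cv2.imread("face.jpg")
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 1.1, 5)

# Build a binary mask covering the lower half of each detected face box,
# a rough stand-in for the paper's learned mask-area detector.
mask = np.zeros(img.shape[:2], dtype=np.uint8)
for (x, y, w, h) in faces:
    mask[y + h // 2 : y + h, x : x + w] = 255

# A GAN-based model would generate the mask region; classical inpainting
# keeps the sketch self-contained and runnable.
result = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("mask_region_synthesized.jpg", result)
```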
{"title":"Realistic Face Masks Generation Using Generative Adversarial Networks","authors":"Khaled Al Butainy, Muhamad Felemban, H. Luqman","doi":"10.1109/CICN56167.2022.10008304","DOIUrl":"https://doi.org/10.1109/CICN56167.2022.10008304","url":null,"abstract":"Understanding facial expressions is important for the interactions among humans as it conveys a lot about the person's identity and emotions. Research in human emotion recognition has become more popular nowadays due to the advances in the machine learning and deep learning techniques. However, the spread of COVID-19, and the need for wearing masks in the public has impacted the current emotion recognition models' performance. Therefore, improving the performance of these models requires datasets with masked faces. In this paper, we propose a model to generate realistic face masks using generative adversarial network models, in particular image inpainting. The MAFA dataset was used to train the generative image inpainting model. In addition, a face detection model was proposed to identify the mask area. The model was evaluated using the MAFA and CelebA datasets, and promising results were obtained.","PeriodicalId":287589,"journal":{"name":"2022 14th International Conference on Computational Intelligence and Communication Networks (CICN)","volume":"316 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124294619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transmit and receive filter design for MIMO-OFDM based WiMAX systems
Pub Date: 2022-12-04 | DOI: 10.1109/CICN56167.2022.10008299
Rajesh K, V. G, M. S
RRC (root raised cosine) filters, used in wireless communication as transmit and receive filters, help mitigate ISI (inter-symbol interference). In this paper, the effect of a root raised cosine pulse-shaping filter with different roll-off factors is analysed in a 2x1 MIMO-OFDM system employing 256 subcarriers. In particular, the work focuses on the selection of quantization bits, truncation length, and roll-off factor in a practical WiMAX system. A BER analysis at different roll-off factors is presented. To evaluate the performance of the proposed design, simulations were carried out in Matlab-Simulink.
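For reference, the standard RRC impulse response can be generated in NumPy as below, parameterized by the roll-off factor and the truncation length in symbols, the two design knobs the paper studies. This is a generic sketch, not the authors' Matlab-Simulink implementation, and it omits their quantization step.

```python
import numpy as np

def rrc_taps(beta, sps, span):
    """Root raised cosine FIR taps.
    beta: roll-off factor in (0, 1]; sps: samples per symbol;
    span: filter truncation length in symbols."""
    n = np.arange(-span * sps / 2, span * sps / 2 + 1)
    t = n / sps                       # time in symbol periods (T = 1)
    h = np.empty_like(t)
    for i, ti in enumerate(t):
        if np.isclose(ti, 0.0):
            h[i] = 1.0 + beta * (4 / np.pi - 1)
        elif np.isclose(abs(ti), 1 / (4 * beta)):
            # Removable singularity of the closed form at t = ±T/(4*beta).
            h[i] = (beta / np.sqrt(2)) * (
                (1 + 2 / np.pi) * np.sin(np.pi / (4 * beta))
                + (1 - 2 / np.pi) * np.cos(np.pi / (4 * beta)))
        else:
            num = (np.sin(np.pi * ti * (1 - beta))
                   + 4 * beta * ti * np.cos(np.pi * ti * (1 + beta)))
            h[i] = num / (np.pi * ti * (1 - (4 * beta * ti) ** 2))
    return h / np.sqrt(np.sum(h ** 2))    # normalize to unit energy

# Matched TX/RX RRC pair: their cascade approximates a raised cosine,
# giving (near) zero ISI at symbol-spaced sampling instants.
taps = rrc_taps(beta=0.35, sps=8, span=10)
print(len(taps), taps[len(taps) // 2])
```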
{"title":"Transmit and receive filter design for MIMO-OFDM based WiMAX systems","authors":"Rajesh K, V. G, M. S","doi":"10.1109/CICN56167.2022.10008299","DOIUrl":"https://doi.org/10.1109/CICN56167.2022.10008299","url":null,"abstract":"RRC (Root Raised Cosine) filters used in wireless communication as transmit and receive filters help in mitigating the lSI (Inter Symbol Interference). In this paper, the effect of root raised cosine pulse shaping filter with different roll off factors in a 2xl MIMO-OFDM system employing 256 sub carriers is analysed. In particular, the work focuses on the selection of quantization bits, truncation length and rolloff factors in a practical WiMAX system. BER analysis at different rolloff factors is presented. To evaluate the performance of the proposed design, simulations were carried out in Matlab-Simulink.","PeriodicalId":287589,"journal":{"name":"2022 14th International Conference on Computational Intelligence and Communication Networks (CICN)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124544995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}