Pub Date: 2022-12-21 | DOI: 10.1109/I4C57141.2022.10057885
Sudheer D. Kulkarni, S. Selvi, Mohammed Zuber M Momin, N. S. Bharadwaj, S. R. Navya, Sudesh, Shiv Kumar S Thanki
Welding is one of the most common approaches employed for fusing metals. However, welding errors are usually encountered during the process due to external factors. The detection and classification of these welding faults are of great importance for the reliability of the weld and of the welded materials. Traditionally, welding error detection is performed through visual inspection by inspectors or quality-control personnel, an error-prone and slow process. In this paper, an image-processing and machine-learning based algorithm is proposed to automatically detect welding defects. The detection and classification of component irregularities are essential to quality control in manufacturing processes. Machine learning algorithms are widely employed here because they reduce the human time spent classifying welded components manually, along with the inspection errors that manual classification can never completely eliminate. The proposed algorithm also allows acceptance sampling or 100% inspection of the components, since classification completes within a few milliseconds at high accuracy. The presented components are classified into Grades A, B, C, and D using images obtained in the visible spectrum. Experimental results show that the random forest classifier achieved an accuracy of 82.8%, compared to 77.6% for decision trees. The welding evaluation process is thereby made effective through machine-learning based algorithms. Automating the evaluation makes it quicker and provides unbiased grading.
Title: Machine Learning Based Classification of Welded Components
Published in: 2022 4th International Conference on Circuits, Control, Communication and Computing (I4C)
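The paper's weld-image dataset and feature pipeline are not public, so the random forest vs. decision tree comparison can only be sketched on synthetic stand-in features; every dataset parameter below is illustrative, not the authors' setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for features extracted from weld images;
# four classes play the role of Grades A-D.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Single decision tree vs. an ensemble of 100 trees, as in the comparison
# the abstract reports (82.8% random forest vs. 77.6% decision tree).
dt = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

dt_acc = dt.score(X_te, y_te)
rf_acc = rf.score(X_te, y_te)
```

On most draws the ensemble outperforms the single tree, which is the behaviour the abstract reports for the real weld data.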
Pub Date: 2022-12-21 | DOI: 10.1109/I4C57141.2022.10057858
K. Dutta, Premila Manohar, S. Poornima, Ayush Renith, Chirag Vasist
Language plays a significant role in a person's identity; it expresses history and culture. With the increasing popularity of cosmopolitan culture, however, the new generation is moving away from its origins. Karnataka is one of the most popular states in India, welcoming people from different geographical locations because of its hospitality, weather, technological forefront, and more. On the other hand, this impacts the usage of the Kannada language and challenges the demographic identity of the place. This paper aims to enhance the usage of the Kannada language through automatic handwritten Kannada character recognition. In this work, the MSRIT Kannada handwritten dataset is used to classify 603 characters, including consonants, vowels, numbers, ottaksharas, and consonants with vowels, using a Decision Tree (DT), Convolutional Neural Networks (CNN), and a Support Vector Classifier (SVC). The machine learning algorithms are modeled in Python, and a test accuracy of 97.42% is achieved using the decision tree method.
Title: Multi-Class Kannada Character Recognition Using Machine Learning Methods
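The MSRIT Kannada dataset is not publicly bundled with common libraries, so the sklearn 8x8 handwritten-digit set serves below as a stand-in for handwritten-character images; this only sketches the decision tree branch of the paper's DT/CNN/SVC comparison.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# 1797 handwritten digits, each flattened to 64 grayscale features,
# standing in for flattened Kannada character images.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# A decision tree trained directly on the pixel features.
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

A 603-class Kannada problem is far harder than 10 digits, so the accuracy here is not comparable to the paper's 97.42%; the sketch only shows the modeling workflow.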
Pub Date: 2022-12-21 | DOI: 10.1109/I4C57141.2022.10057786
Surekha Kb, Geeta Patil, Mohan Ba, Anil Kumar
The stock market has long drawn investors' attention. Stock trend forecasting tools are in great demand, since they aid in the direct transfer of gains: the more accurate the results, the greater the likelihood of profit. Politics, economics, and society all influence stock market patterns. Fundamental or technical analysis can be used to evaluate stock trends. Stock market forecasting has entered a technologically upgraded era with the rise of technical marvels such as global digitalization, reworking the conventional trading approach. Several tools and methods have been created to predict stock price movements and help investors make wise choices. The suggested approach aims to illustrate the optimal trading range that investors should take into consideration by graphically representing the upper and lower bounds of the expected stock prices. The proposed technique employs supervised machine learning and a dataset obtained from Yahoo Finance. The bid prices for the stock fluctuate across different periods in almost straight lines. ARIMA and LSTM algorithms are applied separately to test this hypothesis.
Title: Arbitrage: Stock Market Comparative Analysis
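The paper's ARIMA/LSTM models are not reproduced here; the following is only a minimal numpy sketch of the core idea of wrapping a price series in upper and lower bounds, using a rolling mean with k-sigma bands as a crude stand-in for model-based forecast intervals (window and k are arbitrary choices).

```python
import numpy as np

def rolling_bands(prices, window=20, k=2.0):
    """Rolling mean with k-sigma upper/lower bands: a simple stand-in for
    the confidence bounds an ARIMA or LSTM forecast would provide."""
    prices = np.asarray(prices, dtype=float)
    kernel = np.ones(window) / window
    mid = np.convolve(prices, kernel, mode="valid")          # rolling mean
    sq = np.convolve(prices ** 2, kernel, mode="valid")      # rolling E[x^2]
    std = np.sqrt(np.maximum(sq - mid ** 2, 0.0))            # rolling std
    return mid - k * std, mid, mid + k * std

# Synthetic random-walk price path standing in for Yahoo Finance data.
rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 250))
lower, mid, upper = rolling_bands(prices)
```

Plotting `lower` and `upper` around the price series gives the kind of trading-range visualization the abstract describes.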
Pub Date: 2022-12-21 | DOI: 10.1109/I4C57141.2022.10057747
Raghuram S, Anirudh S Bharadwaj, Deepika S K, Mridula S Khadabadi, Aditya Jayaprakash
An increase in interest in Deep Neural Networks can be attributed to the recent successes of Deep Learning in various AI applications, for all of which Deep Neural Networks form the implementation platform. The next level of adoption is through dedicated hardware implementations of these models, for example in edge-based applications. When a Deep Neural Network is used for a classification problem, the last layer is typically the Softmax activation function. Because of the exponential function appearing in these implementations, additional effort must be made to realize them digitally. In this work, two activation functions, the Softmax and the Inverse Softmax function, as well as the digital implementations of each, are explored for their effectiveness in performance and power consumption. The CORDIC technique is used to model the exponential functions in this paper. The Inverse Softmax function, proposed in this paper for the first time, avoids the requirement of the division operator in the Softmax function. Through experiments it has been shown that this function leads to an optimized implementation, as compared to the Softmax activation function.
Title: Digital Implementation of the Softmax Activation Function and the Inverse Softmax Function
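The paper does not define the Inverse Softmax here, so only the baseline can be sketched: a numerically stable Softmax, with a comment marking the two operations the abstract discusses, the exponential (which the paper models with CORDIC in hardware) and the division (which the Inverse Softmax is designed to avoid).

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: exponentials followed by a division."""
    e = np.exp(z - np.max(z))   # exponential: the part CORDIC approximates
    return e / e.sum()          # division: the operator the paper avoids

z = np.array([2.0, 1.0, 0.1])
p = softmax(z)

# For picking the winning class the division changes nothing, since the
# normalization is monotone; this is why hardware classifiers can hope to
# restructure or drop it.
assert np.argmax(p) == np.argmax(np.exp(z - np.max(z)))
```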
Pub Date: 2022-12-21 | DOI: 10.1109/I4C57141.2022.10057697
K. Amit Kumar, T. Y. Satheesha
Melanoma is a widely propagating skin cancer in the USA. The processes of biological change are restricted for customized processing and observation. Researchers have proposed various classification and categorization techniques to validate skin cancer. In this paper, a novel classification and cluster validation technique is proposed to minimize error estimation and improve image depth validation. The proposed technique includes a Convolutional Neural Network (CNN) framework to ensure depth analysis of the dataset (2D and 3D images) under attribute extraction. The process of dilation residual inceptions ensures the overall dataset is computed under convolution feature decomposition. The extracted attributes and the schematic representation of the decomposed CNN are fed to the depth computation. The technique was successfully processed and validated on Kaggle-based melanoma datasets, securing an accuracy of 95.68% with a 60:40 training-testing ratio and 95.16% with a 70:30 ratio.
Title: An Efficient Method to Minimize the Depth Estimation Error in Melanoma Skin Cancer Classification
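The paper evaluates its model at 60:40 and 70:30 train/test ratios; the loop below only sketches that evaluation protocol, with a generic classifier on synthetic data standing in for the paper's CNN and the Kaggle melanoma images.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary data in place of the Kaggle melanoma features.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)

accs = {}
for test_frac in (0.40, 0.30):   # 60:40 and 70:30 train/test ratios
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=test_frac,
                                              random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    accs[test_frac] = clf.score(X_te, y_te)
```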
Pub Date: 2022-12-21 | DOI: 10.1109/I4C57141.2022.10057623
K. Suma, D. Raghavan, Puneeth Ganesh
Alzheimer's disease (AD) is an irremediable, progressive brain disorder that causes irreparable damage to brain cells, neurotransmitters, and nerves. This in turn severely affects brain function and ultimately leads to dementia. Although there is currently no cure for AD, there are treatments that can slow the disease's development. Early diagnosis of AD is therefore the need of the hour, and researchers across the world have shifted their focus to developing robust, intelligent systems that can aid in early and accurate diagnosis of AD; this has been the main motivation behind this study. The main objective of this paper is to present a comparative study of 2D and 3D Convolutional Neural Network (CNN) architectures for AD classification and to choose the most robust model. The models are trained on MRI and PET individually and on the fusion of MRI and PET. 2D feature fusion is performed using pre-trained neural networks, while 3D fusion involves a series of operations such as skull-stripping, image segmentation, and co-registration. The 2D CNN provided the highest accuracy of 91.29% on MRI images, followed by the 3D CNN with an accuracy of 91.07%. Comparing performance on multimodal fusion, 3D MRI-PET fusion exhibited a significantly good accuracy of 86.90%. This paper also briefly describes the GUI developed for easy visualization of AD classification, and the possibilities of integrating the trained machine learning models with mobile and web applications and with instruments that facilitate real-time diagnosis and classification of AD.
Title: Deep Learning for Alzheimer's Disease Detection using Multimodal MRI-PET Fusion
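Feature fusion with pre-trained networks commonly means concatenating per-modality feature vectors before a classifier head; the sketch below illustrates that early-fusion step only, with random vectors and a hypothetical 512-dimensional feature size in place of real CNN backbone outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical feature vectors from two pre-trained 2D CNN backbones,
# one run on an MRI slice and one on the co-registered PET slice.
mri_features = rng.normal(size=512)
pet_features = rng.normal(size=512)

# Early fusion: concatenate the modality features into one vector that a
# downstream classifier head would consume.
fused = np.concatenate([mri_features, pet_features])
```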
Pub Date: 2022-12-21 | DOI: 10.1109/I4C57141.2022.10057711
Anagha Gopinath, Akshitha H, Arshya Loomba, Ranveer Kumar, CK Narayanappa
Hearing loss is defined as the inability to hear partially or completely, in one or both ears, and is present in people of all age groups. Continuous exposure to noise in today's world, aging, and congenital defects are leading causes of hearing loss. Hearing loss can be present in newborns as a result of maternal infections during pregnancy, complications after birth, and head trauma. This study develops a model to estimate the degree of hearing loss in a sample of people in the 18–22 age group. Hearing loss was calculated from the intensity threshold values generated by a smartphone application-based hearing test [1] [2]. These threshold values were compared with the standard audiometric table to classify the sample into two groups. A Support Vector Machine (SVM) was used to build the binary classification model. The SVM searches for an optimum hyperplane to separate the two groups, using the extreme points, termed support vectors, to create it; the hyperplane is placed so as to maximize the margin, the distance between the hyperplane and the support vectors. The SVM algorithm supports different kernels for building a model. Three kernels (linear, polynomial, and radial basis function) were used with three training-set sizes (80%, 75%, and 70%) to select a model of high accuracy. The model with the highest accuracy was tested, and the confusion matrix of the test-set data was obtained to verify the results. A classification report provides the Precision, Recall, and F1-score values to assess the quality of the model developed.
Title: Data Driven Machine Learning Model for Audiometric Threshold classification
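The study's smartphone-test data is not public, so the 3-kernel x 3-split model-selection grid can only be sketched on synthetic stand-in features; the kernel names and split ratios match the abstract, everything else is illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the audiometric threshold features.
X, y = make_classification(n_samples=300, n_features=6, random_state=0)

results = {}
for kernel in ("linear", "poly", "rbf"):        # the three kernels compared
    for train_frac in (0.80, 0.75, 0.70):       # the three training sizes
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=train_frac, random_state=0)
        clf = SVC(kernel=kernel).fit(X_tr, y_tr)
        results[(kernel, train_frac)] = clf.score(X_te, y_te)

# Model selection: keep the kernel/split combination with highest accuracy.
best = max(results, key=results.get)
```

The winning combination would then be re-evaluated with a confusion matrix and classification report, as the abstract describes.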
Pub Date: 2022-12-21 | DOI: 10.1109/I4C57141.2022.10057759
Varun Cp, Rashmi Agarwal
While system hardening concepts are general, one of the leading causes of breaches is human error in configuration. Depending on the type of hardening, different tools and techniques are used. The whole lifespan of technology, from initial installation through setup, maintenance, and support, to end-of-life decommissioning, necessitates system hardening. Additionally, systems hardening is mandated by regulations like PCI DSS (Payment Card Industry Data Security Standard) and HIPAA (Health Insurance Portability and Accountability Act), and is something that cyber insurers are increasingly requesting. This paper explains how to automate server security assessments using the agentless Ansible framework and how to use them for ongoing security audits and compliance evaluations throughout risk assessments. The technique and ideas discussed in this paper are most effective when the server environment is undergoing continual change.
Title: Automation of Server Security Assessment
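The paper's actual playbooks are not given; as one hedged illustration of an agentless Ansible assessment, the fragment below audits a single hardening item (SSH root login) in check mode, so it reports drift without changing anything. The module names are standard Ansible; the hosts and the specific control are hypothetical.

```yaml
# Hypothetical audit playbook; run as `ansible-playbook audit.yml --check`
# so tasks report would-be changes instead of applying them.
- name: Server security assessment
  hosts: all
  become: true
  tasks:
    - name: Check that SSH root login is disabled
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      check_mode: true
      register: sshd_cfg

    - name: Report drift
      ansible.builtin.debug:
        msg: "sshd_config out of compliance: {{ sshd_cfg.changed }}"
```

A real assessment would register one such task per benchmark control and aggregate the `changed` flags into a compliance report.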
Pub Date: 2022-12-21 | DOI: 10.1109/I4C57141.2022.10057774
Jenan Ayad, F. S. Hasan, A. Ali
Information security is considered one of the main goals when data is transmitted over a wireless channel or even stored on a PC. Incorporating secure and highly reliable transmission of digital data (video, audio, images, or text) from source to destination over a communication channel is becoming a top requirement in present-day wireless communications. Confidentiality is achieved at two levels, the data level and the network level; cryptographic techniques are used for data-level security. This study focuses on the data-level security phase to establish secure image transmission through an AWGN channel, where an Orthogonal Frequency Division Multiplexing (OFDM) system is used with different encryption techniques, employing cipher, permutation, and scrambling algorithms, implemented in MATLAB. Several statistical tests were used to check the encryption quality: entropy, correlation coefficient, NPCR, UACI, and histogram. The image transmission quality, in turn, is evaluated by the Bit Error Rate (BER) at different SNR conditions and by the Peak Signal-to-Noise Ratio (PSNR). The numerical analysis shows that the proposed two-stage OFDM system presents an overall performance improvement over earlier cryptosystems. The statistical analysis tests show that this system can be considered a simple and robust algorithm from a security point of view.
Title: Efficient Transmission of Secure Images with OFDM using Chaotic Encryption
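Two of the abstract's encryption-quality metrics, NPCR and UACI, have standard definitions and can be sketched directly; the images below are random stand-ins (two unrelated uniform 8-bit images approximate the statistics of a good cipher image), not outputs of the paper's MATLAB system.

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR: percentage of pixel positions that differ between two cipher
    images. UACI: mean absolute intensity difference as a % of 255."""
    c1 = np.asarray(c1, dtype=np.int32)
    c2 = np.asarray(c2, dtype=np.int32)
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1 - c2)) / 255.0
    return npcr, uaci

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(64, 64))   # stand-in cipher image 1
b = rng.integers(0, 256, size=(64, 64))   # stand-in cipher image 2
npcr, uaci = npcr_uaci(a, b)
```

For independent uniform images the expected values are roughly 99.6% (NPCR) and 33.5% (UACI), the usual benchmarks for a strong image cipher.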
Pub Date: 2022-12-21 | DOI: 10.1109/I4C57141.2022.10057868
M. Karki, Santosh Inamdar
Skin cancer is one of the most common forms of cancer. It occurs when there is irregular growth of skin cells. Early detection and recognition using dermoscopic techniques is very difficult; the type of cancer is diagnosed only from images taken from patient biopsies. This procedure requires highly qualified dermatologists and considerable time to differentiate the various types of skin cancer. To overcome these problems, many advanced techniques and procedures have been developed for skin cancer classification that require less time and make fewer errors. The proposed algorithm uses a transfer learning approach, which improves classification accuracy with lower loss; the pre-trained models used are VGG16 (Visual Geometry Group), VGG19, and ResNet50. These pre-trained models were applied to 3200 dermoscopy skin images taken from ISIC (International Skin Imaging Collaboration). The models were compared with the help of a hair removal technique using black top-hat filtering, which helped to improve training and test accuracy. ResNet50 achieved the highest accuracy (97.42%) and lowest loss (0.03), followed by VGG19 with accuracy 95.61% and loss 0.24, and VGG16 with accuracy 94.65% and loss 0.4.
Title: Skin Cancer Classification Using Deep Networks
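The hair-removal preprocessing the abstract mentions, black top-hat filtering, can be sketched concretely: the black top-hat (morphological closing minus the image) responds to dark structures thinner than the structuring element, i.e. hairs. The toy image, threshold, and inpainting value below are all illustrative, not the paper's pipeline.

```python
import numpy as np
from scipy.ndimage import black_tophat

# Synthetic grayscale patch: bright "skin" with a thin dark "hair" stroke.
img = np.full((32, 32), 200, dtype=np.uint8)
img[16, :] = 40                      # dark line mimicking a hair

# Black top-hat highlights dark features narrower than the 5x5 element.
hair_mask = black_tophat(img.astype(np.int32), size=5) > 50

# Replace detected hair pixels with the background value (a crude stand-in
# for the interpolation a full hair-removal pipeline would use).
clean = img.copy()
clean[hair_mask] = 200
```

On real dermoscopy images the masked pixels would be inpainted from their neighbourhood before the images are fed to VGG16/VGG19/ResNet50.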