Meghana Kshirsagar, Krishn Kumar Gupt, G. Vaidya, C. Ryan, Joseph P. Sullivan, Vivek Kshirsagar
Over the past seven decades since the advent of artificial intelligence (AI), researchers have demonstrated and deployed AI systems in various domains. The absence of model explainability in critical systems, such as medical AI and credit risk assessment, has led to the neglect of key ethical and professional principles and can cause considerable harm. With explainability methods, developers can examine their models beyond mere performance and identify errors, saving development time and cost. The article argues that steering traditional AI systems toward responsible AI engineering, by incorporating explainable AI methods, can address and mitigate the concerns raised by the deployment of AI systems. Finally, the article concludes with the societal benefits of future AI systems and the market revenue attainable through the deployment of trustworthy and ethical AI systems.
"Insights Into Incorporating Trustworthiness and Ethics in AI Systems With Explainable AI," Int. J. Nat. Comput. Res., published 2022-01-01, DOI: 10.4018/ijncr.310006.
Pub Date: 2021-10-01, DOI: 10.4018/ijncr.2021100103
Praveen Kulkarni, M. RajeshT.
Emotions are strong feelings expressed by an individual in response to something or someone, and they are a very important aspect of day-to-day interaction. Research suggests that more than 90% of communication happens non-verbally. This paper presents human emotion detection using a fuzzy relational model consisting of an image processing stage followed by an emotion recognition phase. The authors additionally subdivide the most important expressions, such as happy and sad, to gauge the level of happiness or sadness in a face. Feature extraction combined with a multi-learning approach helps test whether a person is genuinely happy or merely appears to be happy. Experimental outcomes on the image dataset demonstrate the accurate performance of the proposed technique: it achieves good accuracy on the authors' own data set and is robust compared with recent, leading-edge approaches.
"Natural Computing of Human Facial Emotion Using Multi-Learning Fuzzy Approach," Int. J. Nat. Comput. Res.
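The paper's fuzzy relational model is not published alongside the abstract; as a hedged illustration of the underlying idea, the sketch below grades the level of an expression with triangular fuzzy membership functions over a hypothetical normalized smile-intensity feature. All names and thresholds here are assumptions, not the authors' design.

```python
# Hypothetical sketch: the paper's fuzzy relational model is not public.
# This illustrates how fuzzy membership functions can grade the *level*
# of an expression (e.g., "slightly happy" vs. "very happy") from an
# assumed smile-intensity feature normalized to [0, 1].

def triangular(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def happiness_levels(smile_intensity):
    """Map a feature value to fuzzy memberships in three happiness grades."""
    return {
        "slightly_happy": triangular(smile_intensity, 0.0, 0.25, 0.5),
        "happy": triangular(smile_intensity, 0.25, 0.5, 0.75),
        "very_happy": triangular(smile_intensity, 0.5, 0.75, 1.0),
    }

grades = happiness_levels(0.6)
best = max(grades, key=grades.get)  # the grade with the highest membership
```

A crisp feature value can belong to several grades at once (here both "happy" and "very_happy"), which is what lets such a model report the degree of happiness rather than a hard label.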
Pub Date: 2021-10-01, DOI: 10.4018/ijncr.2021100102
Francisco de Castro, Angelin Gladston
Existing approaches to fruit detection have difficulty detecting small fruits and suffer low overall detection accuracy. Many detectors handle small fruits poorly because fruit data sets are small and insufficient to train earlier YOLO models. Further, the models used in fruit detection are initialized from a pre-trained model and then fine-tuned on fruit data sets; the pre-trained model was trained on the ImageNet data set, whose objects are larger in scale than the fruits in fruit pictures. Fruit detection being a fundamental task for automatic yield estimation, the goal is to detect all the fruits in an image. YOLO-v3 uses multi-scale prediction to detect the final target, at the cost of a more complex network structure. Thus, in this work, YOLO-v3 predicts bounding boxes at different scales, making it more effective for detecting small targets, and its feature pyramid mechanism integrates multi-scale feature information to improve detection accuracy.
"Detection of Small Oranges Using YOLO v3 Feature Pyramid Mechanism," Int. J. Nat. Comput. Res.
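As background to the detection task, here is a minimal sketch of intersection-over-union (IoU), the standard overlap measure used to match predicted boxes against ground truth at each prediction scale. This helper is illustrative and not taken from the paper.

```python
# Illustrative sketch (not the paper's code): intersection-over-union (IoU),
# the overlap measure used to match predicted boxes to ground-truth fruit.
# Boxes are (x1, y1, x2, y2) corner tuples.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Corners of the intersection rectangle.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)
```

Small objects are unforgiving under IoU: a localization error of a few pixels costs a large fraction of overlap for a small box, which is one reason multi-scale prediction helps.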
Pub Date: 2021-10-01, DOI: 10.4018/ijncr.2021100101
Deepa C. Mulimani, S. G. Totad, Prakashgoud R. Patil
The primary challenge for intrusion detection systems (IDS) is to rapidly identify new attacks, learn from the adversary, and update the intrusion detection model immediately. IDS operate in dynamic environments subject to evolving data streams, where data may come from different distributions; this is known as the problem of concept drift. Although today's IDS are equipped with deep learning algorithms, they often fail to identify concept drift. This paper presents a technique to detect and adapt to concept drift in streaming data with the large number of features typical of IDS. The study modifies the extreme gradient boosting (XGBoost) algorithm to adapt to drift and to handle large IDS datasets efficiently. The primary objective is to reduce the number of false positives and false negatives in the predictions. The method is tested on streaming data of smaller and larger sizes and compared against non-adaptive XGBoost and logistic regression.
"Concept Drift Adaptation in Intrusion Detection Systems Using Ensemble Learning," Int. J. Nat. Comput. Res.
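The paper's adaptive XGBoost variant is not reproduced here; the sketch below shows a standard DDM-style drift signal of the kind such systems build on, flagging drift when the classifier's online error rate rises well above its historical minimum. The class name and thresholds are illustrative assumptions.

```python
# Hedged sketch of a DDM-style concept-drift monitor (not the paper's
# adaptive XGBoost): track the running error rate of a classifier and
# flag a warning/drift when it rises significantly above its minimum.
import math

class DriftMonitor:
    def __init__(self, warn=2.0, drift=3.0):
        self.n = 0
        self.errors = 0
        self.p_min = float("inf")
        self.s_min = float("inf")
        self.warn = warn
        self.drift = drift

    def update(self, mispredicted):
        """Feed one prediction outcome; return 'ok', 'warning', or 'drift'."""
        self.n += 1
        self.errors += int(mispredicted)
        p = self.errors / self.n                 # running error rate
        s = math.sqrt(p * (1 - p) / self.n)      # its standard deviation
        if p + s < self.p_min + self.s_min:      # remember the best regime
            self.p_min, self.s_min = p, s
        if p + s > self.p_min + self.drift * self.s_min:
            return "drift"                        # time to retrain the model
        if p + s > self.p_min + self.warn * self.s_min:
            return "warning"
        return "ok"
```

On a drift signal, an adaptive ensemble would typically rebuild or re-weight its base learners on recent data, which is the spirit of the adaptation the paper applies to XGBoost.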
Pub Date: 2021-07-01, DOI: 10.4018/ijncr.2021070102
S. M. Jagtap, V. J. Gond
As MOSFETs scale below 22 nm and 14 nm, unwanted short-channel effects (SCE) such as punch-through and drain-induced barrier lowering (DIBL), along with large leakage currents, degrade device performance. The multi-gate MOSFET, commonly realized as the FinFET, is the leading candidate for overcoming short-channel effects. The work presents the current-voltage electrical characteristics of n-channel triple-gate FinFET structures. The paper focuses on geometry-based FinFET device design, varying the high-k gate dielectric from silicon dioxide (SiO2, k = 3.9) to hafnium oxide (HfO2) and the metal gate work function from 4.1 eV to 4.5 eV. Three-dimensional FinFET simulations are carried out to evaluate device performance for different gate lengths and dielectric materials. The effects on the on/off current ratio (ION/IOFF), threshold voltage (VTH), subthreshold slope (SS), and DIBL are observed.
"Performance Parameter Evaluation of 7nm FinFET by Tuning Metal Work Function and High K Dielectrics," Int. J. Nat. Comput. Res.
Pub Date: 2021-07-01, DOI: 10.4018/ijncr.2021070103
T. Nithish, Geeta R. Bharamagoudar, K. Karibasappa, S. G. Totad
With sensors percolating through everyday life, it may be noted that there is an enormous increase in the availability of real-time streaming and time-series data. There is also an exponential increase in the number of industry applications with IoT-driven sensors connected to data sources that change over time. This time-series data presents many technical challenges, opportunities, and threats to industries. Thus, streaming analytics, modeled as an unsupervised machine learning system for detecting unusual or anomalous behavior in real time, must be prominently addressed. In this paper, the authors propose a real-time anomaly detection model using Facebook Prophet that addresses issues related to improperly collected data, which otherwise leads to faulty analysis and wrong results. The proposed unsupervised model detects abnormalities in data captured from customer orders, using day and date as constraints. The proposed model achieves a better RMSE score and delivers enhanced performance compared with other traditional approaches.
"Real-Time Anomaly Detection Using Facebook Prophet," Int. J. Nat. Comput. Res.
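Prophet fits a trend-and-seasonality forecast with an uncertainty interval and flags observations that fall outside it. As a minimal standard-library stand-in for that interval idea (not the authors' model), one can band each point by a trailing window:

```python
# Minimal stdlib stand-in for interval-based anomaly flagging (the actual
# Prophet model fits a trend/seasonality forecast with an uncertainty
# interval): flag a point as anomalous when it falls outside
# mean +/- k*std of a trailing window of recent values.
import statistics

def flag_anomalies(series, window=7, k=3.0):
    """Return the indices of points lying outside the trailing-window band."""
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.mean(hist)
        sigma = statistics.pstdev(hist)
        if abs(series[i] - mu) > k * sigma:
            anomalies.append(i)
    return anomalies
```

The windowed band plays the role of Prophet's `yhat_lower`/`yhat_upper` interval; a production system would use the fitted forecast instead of a naive trailing mean.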
Pub Date: 2021-07-01, DOI: 10.4018/ijncr.2021070101
Srinivas Jagirdar, Venkata Subba K. Reddy
Phony news, or fake news, spreads like wildfire on social media, causing harm to society, so its swift detection is a priority. This paper develops a phony news detector for Reddit posts using popular machine learning techniques in conjunction with natural language processing. Popular feature extraction algorithms, CountVectorizer (CV) and Term Frequency-Inverse Document Frequency (TFIDF), were implemented. These features were fed to Multinomial Naive Bayes (MNB), Random Forest (RF), Support Vector Classifier (SVC), Logistic Regression (LR), AdaBoost, and XGBoost classifiers to label news as either genuine or phony. Finally, coefficient analysis was performed to interpret the most informative coefficients. The study revealed that the pipeline model of MNB and TFIDF achieved the best accuracy, 79.05%, compared with the other pipeline models.
"Phony News Detection in Reddit Using Natural Language Techniques and Machine Learning Pipelines," Int. J. Nat. Comput. Res.
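A from-scratch sketch of the TF-IDF weighting that feeds such a pipeline may clarify the feature extraction step. Note that libraries such as scikit-learn apply a smoothed variant of the idf term, so this is illustrative only.

```python
# From-scratch sketch of the TF-IDF weighting used as the feature
# extraction step (libraries like scikit-learn smooth the idf term).
import math
from collections import Counter

def tfidf(docs):
    """Return one {term: weight} dict per tokenized document."""
    n = len(docs)
    df = Counter()                      # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return out

docs = [["fake", "news", "spreads"], ["real", "news"], ["fake", "claims"]]
weights = tfidf(docs)
```

Terms that appear in few documents ("spreads") get heavier weights than common ones ("news"), which is exactly the signal a Naive Bayes classifier can exploit to separate genuine posts from phony ones.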
Pub Date: 2021-01-01, DOI: 10.4018/IJNCR.2021010102
My Seddiq El Kasmi Alaoui, Said Nouh
In this paper, the authors present a concatenation of a partially exploited Hartmann-Rudolph (HR) decoder and a decoder based on hash techniques and syndrome calculation (HWDec) to decode linear block codes. The work first applies HR with a reduced number of codewords of the dual code, then applies HWDec, which exploits the output of the partially exploited HR. The proposed decoder is applied to several Bose-Chaudhuri-Hocquenghem (BCH) and quadratic residue (QR) codes. Simulation and comparison results show that it delivers very good performance compared with several competitors while using a much-reduced number of dual-code codewords. For example, for the BCH(31, 16, 7) code, the good results are obtained using only 3.66% of the codewords of the dual code space, and for the same code the run-time reduction varies between 78% and 90% compared with using Hartmann-Rudolph alone. This demonstrates the efficiency, speed, and reduced memory requirements of the proposed concatenation.
"Decoding Algorithm by Cooperation Between Hartmann Rudolph Algorithm and a Decoder Based on Syndrome and Hash," Int. J. Nat. Comput. Res.
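The HWDec decoder itself is not reproduced in the abstract; the sketch below illustrates the general syndrome-plus-lookup-table idea it builds on, using the simple single-error-correcting Hamming(7,4) code rather than the BCH/QR codes studied in the paper.

```python
# Illustrative sketch (not the paper's HWDec): syndrome decoding with a
# precomputed lookup table, shown on the single-error-correcting
# Hamming(7,4) code. The syndrome indexes a hash table of correctable
# error patterns, so decoding is a table lookup instead of a search.

# Parity-check matrix H for Hamming(7,4); column i is the binary index i+1.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(word):
    return tuple(sum(h[i] * word[i] for i in range(7)) % 2 for h in H)

# Hash table: syndrome -> the single-bit error pattern that produces it.
TABLE = {}
for pos in range(7):
    err = [0] * 7
    err[pos] = 1
    TABLE[syndrome(err)] = err

def decode(received):
    """Correct up to one flipped bit in a received 7-bit word."""
    s = syndrome(received)
    if s == (0, 0, 0):
        return list(received)       # already a valid codeword
    err = TABLE[s]
    return [(b + e) % 2 for b, e in zip(received, err)]
```

For stronger codes such as BCH the table holds multi-bit error patterns, and the appeal of a hash-based decoder is precisely that lookup cost stays flat as the pattern set grows.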
Pub Date: 2020-10-01, DOI: 10.4018/ijncr.2020100103
Avishek Nandi, P. Dutta, Md. Nasir
Automatic recognition and modeling of facial expressions are essential in the field of affective computing. The authors introduce a novel geometric and texture-based method for extracting shape-geometric features from an image, computed by landmarking the geometric locations of facial components using the active appearance model (AAM). Expression-specific analysis of facial landmark points is carried out to select, for each expression, a set of landmark points identifying its characteristic features. A shape information matrix (SIM) is constructed from the set of salient landmark points assigned to an expression. Finally, histogram of oriented gradients (HoG) features of the SIM are computed and used for classification with a multi-layer perceptron (MLP). The proposed method is tested and validated on four well-known benchmark databases, CK+, JAFFE, MMI, and MUG, on which the system achieves 98.5%, 97.6%, 96.4%, and 97.0% accuracy, respectively.
"Automatic Facial Expression Recognition System Using Shape-Information-Matrix (SIM): An Expression Specific Approach," Int. J. Nat. Comput. Res.
Pub Date: 2020-10-01, DOI: 10.4018/ijncr.2020100102
Rohini S. Hongal, R. Shettar
With rapid technological advancement and network growth, security plays a crucial role. Strong network security identifies a diverse mixture of threats and blocks them from entering and circulating in the network, preserving the reliability, confidentiality, integrity, and accessibility of computer networks by preventing illegitimate access and corruption of critical information. Secure hash algorithms (SHA) are cryptographic hash functions that produce hash values of fixed output bit sizes. In this paper, an algorithm is proposed to strengthen cryptographic systems by using reversible logic to generate larger and variable-length hash values, making the keys difficult to trace. The proposed scheme is simulated and verified on an FPGA Virtex ML505 board, with power and timing analysis carried out using the Genus tool, showing it to be efficient in terms of power, gate usage, garbage outputs, and quantum cost.
"A Power-Efficient and Quantum-Resistant N-Bit Cryptography Algorithm," Int. J. Nat. Comput. Res.
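The paper's circuit is not included in the abstract; as a hedged sketch of the reversible-logic building blocks such designs rely on, the snippet below models the CNOT and Toffoli gates, which are bijective (information-preserving) and self-inverse.

```python
# Sketch of reversible-logic building blocks (not the paper's circuit):
# the Toffoli (CCNOT) and CNOT gates are bijective and self-inverse, so
# circuits built from them lose no information and can be run backwards,
# which is the property reversible hash designs exploit.

def toffoli(a, b, c):
    """Flip target bit c iff both control bits a and b are 1."""
    return a, b, c ^ (a & b)

def cnot(a, b):
    """Flip target bit b iff control bit a is 1."""
    return a, b ^ a

# Applying Toffoli twice restores the input: the gate is its own inverse.
state = (1, 1, 0)
once = toffoli(*state)
twice = toffoli(*once)
```

Because every gate is a bijection on its bit tuple, no input pattern is ever erased, the physical basis for the low "garbage" and quantum cost figures the paper reports.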