Gender recognition from voice is of growing importance in near-future technology, with uses ranging from smart assistant robots to the customer service sector and beyond. Machine learning (ML) models play a vital role in this task: using the acoustic properties of voice, they classify a speaker's gender as male or female. In this research we use the ML models Random Forest, Decision Tree, Logistic Regression, Support Vector Machine (SVM), Gradient Boosting, K-Nearest Neighbors (KNN), and an ensemble method (KNN, Logistic Regression, SVM). To determine which algorithm is best for recognizing gender, we evaluate the models on accuracy, recall, F1 score, and precision.
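The voting ensemble the abstract names (KNN, logistic regression, SVM) and the four reported metrics can be sketched as follows. This is an illustrative scikit-learn sketch, not the authors' code; the synthetic features merely stand in for the paper's acoustic voice features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for acoustic voice features (e.g., mean frequency).
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Soft-voting ensemble of the three models named in the abstract.
ensemble = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("svm", SVC(probability=True)),  # probability=True enables soft voting
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
pred = ensemble.predict(X_te)

# The four evaluation metrics used in the paper.
for name, fn in [("accuracy", accuracy_score), ("precision", precision_score),
                 ("recall", recall_score), ("f1", f1_score)]:
    print(f"{name}: {fn(y_te, pred):.3f}")
```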
Title: Gender Recognition by Voice using Machine Learning Techniques. Authors: Sweta Jain, Neha Pandey, Vaidehi Choudhari, Pratik Yawalkar, Amey Admane. DOI: 10.47164/ijngc.v14i1.1031. International Journal of Next-Generation Computing, 2023-02-15.
Pub Date: 2023-02-15. DOI: 10.47164/ijngc.v14i1.1040
Yogesh Thakare, Utkarsha Wankhade, H. Kasturiwale
Natural calamities are a major challenge for any country. Calamities are either natural or man-made: earthquakes, wildfires, floods, terrorist attacks, and so on. Earthquakes bring destruction that harms not only humans but also animals. After an earthquake, damaged infrastructure becomes a major issue: buildings, bridges, and houses collapse, and many people and animals are buried under the debris. Only timely detection and rescue can save those who are buried and wounded. Several methods exist to detect and rescue humans buried under rubble, but they come into action only after the calamity has occurred. The proposed system is quite different from existing systems: it is pre-installed in buildings and houses in earthquake-prone zones. Its advantage is that it minimizes the time required to detect and rescue victims buried under the rubble.
Title: Intelligent Life Saver System for People Living in Earthquake Zone. Authors: Yogesh Thakare, Utkarsha Wankhade, H. Kasturiwale. DOI: 10.47164/ijngc.v14i1.1040. International Journal of Next-Generation Computing, 2023-02-15.
Pub Date: 2023-02-15. DOI: 10.47164/ijngc.v14i1.1027
Heena Agrawal, Rahul Agrawal, Rohit Chandani, Sakshi Nema
The placement of Wi-Fi routers in a network is a demanding problem concerning connectivity and coverage. It directly affects transmission loss, installation cost, operational complexity, Wi-Fi network coverage, and more. Optimizing the location of the routers can resolve these issues and increase network performance, so the problem is addressed here using deep-learning models. The proposed model optimizes an objective function that accounts for empty spaces, hindrances such as concrete walls and metallic objects in the area, maximum client coverage in the location, and network connectivity. This is an initial step toward ensuring the desired network performance in terms of throughput, connectivity, and coverage. Furthermore, a Wi-Fi analyzing system is implemented to generate results from observations of the Wi-Fi router network. It analyzes the wireless network, the devices in it, and the connected users, and also produces a WLAN report for the Wi-Fi router.
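One simple way to picture the coverage objective described above is a brute-force grid search: place the router in the cell that keeps the most clients within a loss budget, with concrete walls adding extra path loss. All coordinates, loss numbers, and the wall model below are illustrative assumptions, not the paper's deep-learning formulation.

```python
import itertools
import math

clients = [(1, 1), (2, 5), (6, 2), (7, 7), (3, 3)]  # toy client positions
walls = {(4, 4)}     # grid cells containing a concrete wall
MAX_LOSS = 7.0       # loss budget before a client counts as uncovered
WALL_PENALTY = 3.0   # extra loss per wall crossed (illustrative)

def crosses(router, client, wall):
    """Crude check: the wall-cell centre lies close to the router-client segment."""
    (x1, y1), (x2, y2), (wx, wy) = router, client, wall
    seg_sq = (x2 - x1) ** 2 + (y2 - y1) ** 2
    if seg_sq == 0:
        return math.dist(router, wall) < 0.5
    t = max(0.0, min(1.0, ((wx - x1) * (x2 - x1) + (wy - y1) * (y2 - y1)) / seg_sq))
    nearest = (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return math.dist(nearest, wall) < 0.5

def covered(router):
    """Number of clients whose distance-plus-wall loss stays within budget."""
    total = 0
    for c in clients:
        loss = math.dist(router, c) + WALL_PENALTY * sum(
            crosses(router, c, w) for w in walls)
        total += loss <= MAX_LOSS
    return total

# Exhaustively score every non-wall grid cell as a candidate router location.
candidates = [p for p in itertools.product(range(8), range(8)) if p not in walls]
best = max(candidates, key=covered)
print("best router cell:", best, "covers", covered(best), "of", len(clients))
```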
Title: OPTIMAL WIFI POSITION DETECTION USING ARTIFICIAL INTELLIGENCE. Authors: Heena Agrawal, Rahul Agrawal, Rohit Chandani, Sakshi Nema. DOI: 10.47164/ijngc.v14i1.1027. International Journal of Next-Generation Computing, 2023-02-15.
Pub Date: 2023-02-15. DOI: 10.47164/ijngc.v14i1.1094
Snehal Awachat
Understanding and using large, high-dimensional, and heterogeneous biomedical data remains a major obstacle in transforming healthcare. Electronic health records, neuroimaging, sensor readings, and literature are all growing in modern biology and medicine, and all are complex, heterogeneous, poorly labelled, and frequently unstructured. Before building prediction or classification models on such attributes, conventional information-retrieval and statistical-modelling approaches must first extract useful and robust features from the data, and with complex material and inadequate domain knowledge a variety of problems arise in both phases. Recent deep-learning advances offer new, efficient frameworks for building end-to-end learning methods from massive data. In this paper we therefore examine the most recent research on applying deep-learning techniques to improve the medical field. Based on the reviewed studies, we propose that deep-learning technologies may be the means of converting large-scale physiological data into enhanced human health. We also draw attention to some drawbacks and to the need for better technique design and application, particularly in terms of interpretability for subject-matter experts and social researchers. To bridge deep-learning models with natural interpretability, we examine these problems and recommend developing comprehensive and meaningful interpretable architectures.
Title: Multi-label Classification Performance using Deep Learning. Author: Snehal Awachat. DOI: 10.47164/ijngc.v14i1.1094. International Journal of Next-Generation Computing, 2023-02-15.
Pub Date: 2023-02-15. DOI: 10.47164/ijngc.v14i1.1006
D. Naidu, Madhav Jha
Cybercriminals use a variety of techniques to conceal their digital footprints, creating a barrier that makes it nearly impossible for law enforcement agencies to catch and prosecute them. Whenever a machine connects adversarially to a target system through a VPN, the victim's machine sees only requests coming from the proxy or VPN server. Because the VPN hides IP addresses, traffic is redirected through specially configured remote servers run by the VPN host; as a consequence, the user's digital footprint is hidden and the receiver sees only the footprint of the VPN server. This puts entire organizations as well as personal systems at risk. One solution to the problem is to design a honeypot system that traces the IP address running behind VPN/proxy servers; a machine learning algorithm then recovers the actual IP address along with ISP details. This paper discusses a detection mechanism that dupes attackers by preventing them from locating and identifying the real honeypot file. The methods were tested on various platforms, and the technique performs well in detecting the attacker's system using machine learning.
Title: Detection Technique to trace IP behind VPN/Proxy using Machine Learning. Authors: D. Naidu, Madhav Jha. DOI: 10.47164/ijngc.v14i1.1006. International Journal of Next-Generation Computing, 2023-02-15.
There is sustained industrial growth in various developing nations. Quality maintenance of a product under development is an essential part of the product development process, since product quality affects product performance and various manufacturing issues can impact the product negatively. It is therefore necessary to ensure that manufactured products are fault-free by building and employing software that eases the fault-detection process. This paper aims to diagnose faults on steel surfaces using convolutional neural networks for feature extraction and classification with five different classifiers: Support Vector Machine, Naive Bayes, Decision Tree, K-Nearest Neighbors, and Random Forest. Four different models are used, namely AlexNet, InceptionV3, ResNet, and VGG16. The testing accuracy was highest for the VGG16 model, at 75.02%. Among the classifiers, the best accuracies were obtained with the Random Forest and Decision Tree classifiers, at 74.9% and 74.3% respectively. The defects are classified among four defect categories and are highlighted using image segmentation.
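The pipeline shape the abstract describes (CNN feature extraction followed by a classical classifier such as Random Forest) can be sketched compactly. Here a fixed NumPy/SciPy filter bank stands in for the pretrained VGG16 backbone, and the synthetic "surfaces" stand in for real steel images; none of this is the authors' code or data.

```python
import numpy as np
from scipy.signal import convolve2d
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Tiny fixed filter bank: a stand-in for the CNN backbone's learned filters.
filters = [np.array([[1.0, -1.0]]),    # horizontal edge detector
           np.array([[1.0], [-1.0]]),  # vertical edge detector
           np.ones((3, 3)) / 9.0]      # local average

def features(img):
    # One pooled response per filter: a crude analogue of pooled CNN features.
    return [np.abs(convolve2d(img, f, mode="valid")).mean() for f in filters]

def sample(defect):
    # Synthetic 16x16 "steel surface"; the defect class has a bright scratch.
    img = rng.normal(0.0, 0.1, (16, 16))
    if defect:
        img[:, 8] += 2.0
    return img

y = np.array([k % 2 for k in range(200)])
X = np.array([features(sample(label)) for label in y])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Classical classifier on top of the extracted features.
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```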
Title: Fault Detection in Steel Surfaces Using Deep Learning Approaches. Authors: Shubham Joshi, Aditi Mukte, Snehal Jaiswal, Khushboo Khurana. DOI: 10.47164/ijngc.v14i1.1020. International Journal of Next-Generation Computing, 2023-02-15.
Marine fisheries contribute greatly to the economy of any country. India has a coastline of almost 8,000 km, so a surplus of fisheries potential can be expected there. Because of this vast coastal area, active reporting of captured fish is difficult through manual monitoring, and a computer-aided approach is the best option during the active season. This paper investigates an approach for identifying single as well as multiple aquatic animal species in a single image. Responsive web and mobile applications into which the ML models are integrated were also developed, helping users access the data as needed. The method uses YOLOv5n, a lightweight object-detection algorithm, to detect the species. The trained model was evaluated at mAP@0.5:0.95 intersection over union (IoU) and with per-species average precision (AP); the AP varied across species. YOLOv5n uses few GFLOPs, indicating a scaled-down model capable of running on the 5.1-GFLOP Raspberry Pi 3B+. Despite using substantially fewer GFLOPs, YOLOv5n outperformed Faster R-CNN.
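The abstract reports mAP at IoU thresholds 0.5:0.95; intersection over union is the overlap measure being thresholded, and a minimal helper (boxes given as x1, y1, x2, y2) makes it concrete:

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero if boxes are disjoint
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # partial overlap: 25/175
```

A detection counts as a true positive at threshold t only when its IoU with a ground-truth box is at least t; mAP@0.5:0.95 averages AP over t = 0.5, 0.55, ..., 0.95.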
Title: Novel Approach to Automatic Identification and Detection of Aquatic Animal Species. Authors: Pratik K.Agrawal, Vaishnavi Kamdi, Ishan Mittal, Pranav Bobde, Amarsingh Kashyap. DOI: 10.47164/ijngc.v14i1.1013. International Journal of Next-Generation Computing, 2023-02-15.
Pub Date: 2023-02-15. DOI: 10.47164/ijngc.v14i1.1032
Suraj Patil, Dnyaneshwar Kirange
The automatic detection of brain tumors from large volumes of MRI images using deep learning is a problem that demands substantial computing resources. In this study, a brain tumor detection framework using feature fusion from optimized shallow and deep learning models is proposed that efficiently detects different types of tumors. The human brain is a 3D object, and the intensity of abnormal tissue varies with location; as a result, incorporating surrounding tissue into the tumor region can help discriminate between the type of tumor and its severity. To extract deep characteristics from the tumor region and adjacent tissues, a deep model, Inception-V3, is employed via transfer learning. Deep features are especially important in tumor detection; however, as network size grows, certain low-level insights about the tumor are lost. A novel optimized shallow model is therefore designed to extract low-level features, and the deep and shallow features are fused to overcome this information loss. Extensive simulation and experiments on a publicly available benchmark dataset show that the optimized hybrid deep-learning model with ROI expansion improves tumor detection accuracy by 9%. These findings support the theory that tissues adjacent to the tumor carry unique information and that feature fusion compensates for information loss.
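The fusion step itself is simple: the deep feature vector (from the backbone, here Inception-V3) and the shallow model's low-level feature vector are concatenated before classification. In this sketch random vectors merely stand in for the two extractors' outputs, and logistic regression stands in for the final classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 100
deep = rng.normal(size=(n, 32))     # stand-in for Inception-V3 deep features
shallow = rng.normal(size=(n, 8))   # stand-in for shallow-model features
y = rng.integers(0, 2, size=n)      # toy binary labels (tumor / no tumor)

# Feature fusion: concatenate along the feature axis, then classify.
fused = np.concatenate([deep, shallow], axis=1)
clf = LogisticRegression(max_iter=500).fit(fused, y)
print("fused feature dimension:", fused.shape[1])
```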
Title: An Optimized Deep Learning Model with Feature Fusion for Brain Tumor Detection. Authors: Suraj Patil, Dnyaneshwar Kirange. DOI: 10.47164/ijngc.v14i1.1032. International Journal of Next-Generation Computing, 2023-02-15.
Pub Date: 2023-02-15. DOI: 10.47164/ijngc.v14i1.1025
S. Rawat, Kavita B. Kalambe, Sagarika Jaywant, Lakshita Werulkar, Mukul Barbate, Tarrun Jaiswalt
The cross-lingual summarizer develops a gist, in Hindi, India's national language, of an extract written in English, helping non-anglophone readers understand what the text says. The extractive method of summarization is used in this paper: the summary generated in English is translated into Hindi and made available to Hindi readers, who thus get the heart of the article they want to read. Due to the Internet's explosive growth, access to a vast amount of information is now easy, but making sense of it is getting harder and harder; with the expansion of digital media and the profusion of published articles, few have the time to read complete articles, documents, or books to determine whether they are helpful. Text summarization is one of the many applications of natural language processing (NLP) that significantly affects our daily lives. This paper discusses an approach to extractive text summarization that captures the aboutness of a text document. The technique is based on TextRank, which adapts the idea of PageRank, originally computed for each page on a website. The presented approach builds a graph with sentences as nodes and edge weights given by the similarity between pairs of sentences; a modified inverse sentence-cosine frequency similarity assigns different weights to different words in a sentence. A performance evaluation demonstrates the success of the summarization technique.
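The TextRank pipeline the abstract outlines can be sketched in a few lines: sentences become graph nodes, edge weights come from word overlap, and PageRank-style iteration scores the sentences. The toy text and the simple overlap similarity below are illustrative; the paper's modified inverse sentence-cosine frequency weighting is not reproduced here.

```python
import math
import re
from collections import Counter

text = ("Text summarization condenses a long document. "
        "Graph based methods rank sentences in a document. "
        "TextRank scores each sentence with a graph of sentences. "
        "Cats sleep all day.")
sents = [s for s in re.split(r"(?<=\.)\s+", text) if s]
bags = [Counter(re.findall(r"\w+", s.lower())) for s in sents]

def similarity(a, b):
    # Word overlap normalized by sentence lengths (TextRank-style).
    overlap = sum((a & b).values())
    denom = math.log(1 + sum(a.values())) + math.log(1 + sum(b.values()))
    return overlap / denom if denom else 0.0

n = len(sents)
w = [[similarity(bags[i], bags[j]) if i != j else 0.0 for j in range(n)]
     for i in range(n)]

# PageRank-style power iteration with damping factor 0.85.
scores = [1.0] * n
for _ in range(30):
    scores = [0.15 + 0.85 * sum(w[j][i] / (sum(w[j]) or 1.0) * scores[j]
                                for j in range(n)) for i in range(n)]

top = max(range(n), key=scores.__getitem__)
print("top-ranked sentence:", sents[top])
```

The off-topic sentence ("Cats sleep all day.") shares no words with the rest, so it receives no incoming weight and ranks last; for the cross-lingual step, the top-ranked English sentences would then be translated into Hindi.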
Title: English to Hindi Cross-Lingual Text Summarizer using TextRank Algorithm. Authors: S. Rawat, Kavita B. Kalambe, Sagarika Jaywant, Lakshita Werulkar, Mukul Barbate, Tarrun Jaiswalt. DOI: 10.47164/ijngc.v14i1.1025. International Journal of Next-Generation Computing, 2023-02-15.
Pub Date: 2023-02-15. DOI: 10.47164/ijngc.v14i1.1021
Narayani Singh, Rahul Patekar, G. Kedia, Neha Tirpude
Peer-to-peer network principles are the foundation of blockchain cybersecurity. Blockchain creates a reliable verification method that protects against online threats. Cryptocurrency on the blockchain is supported by three pillars: network availability, secrecy, and integrity. Blockchain provides a third, less-travelled route toward stronger security, one not nearly as inviting to attackers: it lessens risks, offers robust encryption, and more successfully confirms the ownership and integrity of data. With such a scheme, passwords, frequently referred to as the weakest link in cybersecurity, may even become unnecessary. We therefore aim to build a secure user-authentication system using blockchain, and also study how SCADA systems work in healthcare.
Title: Towards an Efficient Scheme for User Authentication based on Decentralized Blockchain. Authors: Narayani Singh, Rahul Patekar, G. Kedia, Neha Tirpude. DOI: 10.47164/ijngc.v14i1.1021. International Journal of Next-Generation Computing, 2023-02-15.