Pub Date: 2022-11-26 | DOI: 10.47164/ijngc.v13i5.961
Monika Seth, V. Asudani, Sachin Upadhye, Satyajit Uparkar
Private banks in India are owned by individuals or a small group of individuals rather than by the government; the majority of their equity or shares is held by private shareholders. Locus of control refers to the degree to which bank employees believe they have power over events in their working lives. This study examines locus of control among private-bank employees with respect to gender and the two dimensions of locus of control, external and internal. A survey method was adopted, covering four private banks in Nagpur city. A sample of 200 bank employees, comprising managers, cashiers, clerks, and accountants, completed an online Google Forms questionnaire of seven questions on their work, job satisfaction, colleagues, clients, etc. First, Cronbach's alpha, computed on a pilot survey of 50 employees, was used to assess the internal consistency of the instrument. Second, exploratory statistics revealed trends in the employees' responses to the seven questions. Descriptive statistics then quantitatively summarized the data in relation to the three hypotheses under consideration. Finally, inferential statistics comparing the two gender groups were used to accept or reject the three hypotheses. The study concludes with the pros and cons and the prime factors that lead towards external and internal locus of control among the employees.
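The first analysis step — Cronbach's alpha on the pilot responses — can be sketched as follows. The toy response matrix is illustrative only, not the study's data.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items (questions)
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Toy data: 5 respondents answering 3 Likert-style items.
responses = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 3, 2],
    [4, 4, 4],
])
alpha = cronbach_alpha(responses)
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is what the pilot test checks before the main survey.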
{"title":"Data Analytical Approach for Evaluating Locus of Control among Private Banks Employees","authors":"Monika Seth, V. Asudani, Sachin Upadhye, Satyajit Uparkar","doi":"10.47164/ijngc.v13i5.961","DOIUrl":"https://doi.org/10.47164/ijngc.v13i5.961","url":null,"abstract":"Private Banks in India, are owned by individuals or group of limited individual and not by government. Here in India, private banks represent that most of the part of equity or shares are hold by private shares holders and not by government. The locus of control is reflected in terms of the particular degree upto which the private shares holders or the bank employee’s belief that they have power over their private banking events. This research study aims to find out locus of control among bank employees with special reference to private banks based on gender difference as well as two dimensions of locus of control viz. external and internal. Survey method was adopted for this study which covers four private banks within the Nagpur city. Sample size of 200 bank employees consists of bank managers, cashiers, clerks accountant were provided with an online Google form questionnaire. The questionnaire consists of seven questions related to their work, job satisfaction, colleagues clients etc. Initially Cronbach’s alpha test using pilot survey on 50 pre samples of bank employees was used to determine the internal consistency of data model. As second step Explorative statistics gave the trends among the bank employees towards the seven questions. The Descriptive statistics then quantitatively describes the features from a collection of information that can reveal the influence towards the three hypotheses under consideration. Lastly the Inferential statistics based two groups of genders interpret the facts as to accept or reject the three hypotheses under consideration. 
The study concludes the pro and cons and the prime factors which leads towards the external and internal locus of control among the employees.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"57 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2022-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82870362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-11-26 | DOI: 10.47164/ijngc.v13i5.915
Swati Rane, Lakshmappa K. Ragha, Siddalingappagouda Biradar, Vaibhav R. Pandit
The visual quality of images is often degraded by various types of noise added inevitably at the time of image acquisition. Image restoration has thus become a fundamental and significant problem in image processing. Many recent attempts have been made to denoise images efficiently, but the best possible solution remains an open research problem. This paper validates the effectiveness of one popular image denoising approach, in which adaptive image patch clustering is followed by a two-step denoising algorithm in the Principal Component Analysis (PCA) domain. The first step applies Marchenko–Pastur-law-based hard thresholding of singular values in the singular value decomposition (SVD) domain, and the second step removes the remaining noise in the PCA domain using the Linear Minimum Mean-Squared-Error (LMMSE) estimator, a soft thresholding. Experiments are conducted on gray-scale images corrupted by four noise types: speckle, salt & pepper, Gaussian, and Poisson. Denoising efficiency is quantified with the popular image quality metrics peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and feature similarity (FSIM), together with the denoising time. The comprehensive performance analysis of the approach against the four noise models underlines its suitability for various applications and gives new researchers a direction for selecting an image denoising method.
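A minimal sketch of the first denoising step — hard thresholding of singular values at the Marchenko–Pastur bulk edge — assuming the noise level sigma is known or pre-estimated. The synthetic rank-one patch matrix below is an illustration, not the paper's exact patch construction.

```python
import numpy as np

def mp_hard_threshold_denoise(patches: np.ndarray, sigma: float) -> np.ndarray:
    """Hard-threshold singular values at the Marchenko-Pastur bulk edge.

    patches : (m, n) matrix whose columns are vectorized noisy patches.
    sigma   : noise standard deviation (assumed known here).
    Singular values below sigma * (sqrt(m) + sqrt(n)) lie inside the
    noise bulk predicted by the Marchenko-Pastur law and are zeroed.
    """
    m, n = patches.shape
    U, s, Vt = np.linalg.svd(patches, full_matrices=False)
    edge = sigma * (np.sqrt(m) + np.sqrt(n))    # MP upper bulk edge
    s_hat = np.where(s > edge, s, 0.0)          # keep only signal singular values
    return U @ np.diag(s_hat) @ Vt

# Synthetic test: a rank-1 "clean" patch matrix plus Gaussian noise.
rng = np.random.default_rng(0)
clean = np.outer(rng.normal(size=64), rng.normal(size=32)) * 5
noisy = clean + rng.normal(scale=1.0, size=clean.shape)
denoised = mp_hard_threshold_denoise(noisy, sigma=1.0)
```

In the full pipeline this hard-thresholded estimate is then refined by LMMSE soft thresholding in the PCA domain.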
{"title":"Using Marchenko–Pastur SVD and Linear MMSE Estimation for Reducing Image Noise","authors":"Swati Rane, Lakshmappa K. Ragha, Siddalingappagouda Biradar, Vaibhav R. Pandit","doi":"10.47164/ijngc.v13i5.915","DOIUrl":"https://doi.org/10.47164/ijngc.v13i5.915","url":null,"abstract":"The degradation in visual quality of images is often seen due to a variety of noise added inevitably at the time of image acquisition. Its restoration has thus become a fundamental and significant problem in image processing. Many attempts are made in recent past to efficiently denoise images. But, the best possible solution to this problem is still an open research problem. This paper validates the effectiveness of one such popular image denoising approach, where an adaptive image patch clustering is followed by the two step denoising algorithm in Principal Component Analysis (PCA) domain. First step uses Marchenko–Pastur law based hard thresholding of singular values in the singular value decomposition (SVD) domain and the second step removes remaining noise in PCA domain using Linear Minimum Mean-Squared-Error (LMMSE), a soft thresholding. The experimentation is conducted on gray-scale images corrupted by four different noise types namely speckle, salt & pepper, Gaussian, and Poisson. The efficiency of image denoising is quantified in terms of popular image quality metrics peak signal-to-noise ratio (PSNR), structural similarity (SSIM), feature similarity (FSIM), and the denoising time. The comprehensive performance analysis of the denoising approach against the four noise models underlies its suitability to various applications. 
This certainly gives the new researchers a direction for selection of image denoising method.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"69 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2022-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88147019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social media platforms have made it possible for people and organizations to disseminate information to their peers and target markets. While most information is shared with the best of intentions, some people use social media to further their own agendas and may publish untrue or inaccurate information in their posts. Before, during, and after disasters and emergencies, social media is rife with rumors, misinformation, and misleading information, and these false rumors can also make individuals anxious. Stopping the spread of such incorrect information is one of the main problems that public safety authorities and organizations face. DIVVA is a system that evaluates and validates disaster-related information from provided input text. The system has two tracks: a verification track and a validation track. Verification classifies the textual input as related or not related to disasters. The validation track then uses the official handles of government disaster relief organizations, such as the NDRF (National Disaster Response Force), to determine whether the event mentioned in the text actually happened, classifying the disaster-related data as real or fake. If many individuals receive erroneous information about a calamity, our approach can therefore be used to determine whether the information is true or false. Our results show that the Bidirectional LSTM model performs well on the tweet classification task (i.e., whether a tweet is disaster-related or not) with 84% accuracy.
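The paper's classifier is a Bidirectional LSTM; the gate-free sketch below illustrates only the bidirectional encoding idea it relies on — a forward and a backward recurrent pass over the token embeddings whose hidden states are concatenated per time step. LSTM gating is omitted for brevity and the weights are random placeholders, not trained parameters.

```python
import numpy as np

def rnn_pass(x, Wx, Wh, reverse=False):
    """One tanh-RNN direction over x of shape (T, d_in); returns (T, h) states."""
    T = x.shape[0]
    h = np.zeros(Wh.shape[0])
    hs = np.zeros((T, Wh.shape[0]))
    steps = range(T - 1, -1, -1) if reverse else range(T)
    for t in steps:
        h = np.tanh(x[t] @ Wx + h @ Wh)   # recurrent update
        hs[t] = h                          # state aligned to time step t
    return hs

def bidirectional_encode(x, Wx_f, Wh_f, Wx_b, Wh_b):
    """Concatenate forward and backward hidden states at each time step."""
    fwd = rnn_pass(x, Wx_f, Wh_f)
    bwd = rnn_pass(x, Wx_b, Wh_b, reverse=True)
    return np.concatenate([fwd, bwd], axis=1)

# Toy "tweet": 5 tokens, 8-dim embeddings, hidden size 4 per direction.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
H = bidirectional_encode(x,
                         rng.normal(size=(8, 4)), rng.normal(size=(4, 4)),
                         rng.normal(size=(8, 4)), rng.normal(size=(4, 4)))
```

Each row of `H` sees the whole tweet in both directions, which is what gives the bidirectional model its edge on short, context-dependent disaster tweets.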
{"title":"Disaster Information Verification and Validation Application Using Machine Learning","authors":"Sameer Shekhar Mishra, Atharva Bisen, Soham Mundhada, Utkarsh Singh, Vrushali Bongirwar","doi":"10.47164/ijngc.v13i5.907","DOIUrl":"https://doi.org/10.47164/ijngc.v13i5.907","url":null,"abstract":"Social media platforms have made it possible for people and organizations to disseminate information to their peers and target markets. Even while most information is shared with the best of intentions, some people utilize social media to further their own agendas. They might publish untrue or inaccurate information in their posts. Before, during, and after disasters and emergencies, social media is rife with rumors, misinformation, and misleading information. These false rumors and information could also make individuals anxious. How to stop the spread of this incorrect information is one of the main problems that public safety authorities and organizations face. DIVVA is a system that, using some provided input text data, evaluates and validates disaster-related information. The system has two tracks: a validation track and a verification track. Verification will classify the textual inputinto categories related to disasters or not related to disasters. The Validation track, on the other hand, will use the official handles of government disaster relief organizations like the NDRF (National Disaster Response Force) to determine whether the event mentioned in the text data actually happened or not before classifying the disaster- related data as real or fake. Therefore, if many individuals receive erroneous information about a calamity, we can utilize our approach to determine if the information is true or false. Our results show that the Bidirectional LSTM model performs well for the tweet classification (i.e. 
whether the tweets are related to disaster or not) task with 84% accuracy.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"34 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2022-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75566867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-11-26 | DOI: 10.47164/ijngc.v13i5.919
Shubham Anjankar, Somesh Telang, Khushalsingh Bharadwaj, R. Khandelwal
Identifying the source camera of a picture is a challenging problem in image forensics. This research proposes a noise-adaptable convolutional neural network-based technique for camera identification. The suggested solution combines Photo Response Non-Uniformity (PRNU) noise and Noiseprint. Three parallel dimensionality-expanded residual networks with convolutional layers of kernel size 1x1 were put together for enhanced feature extraction. The experiments use pictures from the "Vision Dataset". The experimental findings demonstrate the effectiveness of the suggested methodology in identifying the source camera at the brand, model, and device levels. The best performance was obtained when two of the three networks were fed with PRNU and one with Noiseprint.
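The classical PRNU pipeline that the proposed network builds on can be sketched as follows: a camera's fingerprint is the average noise residual of many of its images, and a query image is attributed to the camera whose fingerprint correlates best with the query's residual. The box-filter residual below is a deliberately crude stand-in for a proper denoiser, used only to keep the sketch self-contained.

```python
import numpy as np

def noise_residual(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Crude residual: image minus a k x k box-filtered version of itself."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    smooth = np.zeros(img.shape)
    for dy in range(k):
        for dx in range(k):
            smooth += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return img - smooth / (k * k)

def prnu_fingerprint(images):
    """Camera fingerprint: mean noise residual over many images."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def ncc(a, b):
    """Normalized cross-correlation between two residual patterns."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))
```

A query residual is then compared against each enrolled fingerprint with `ncc`, and the highest score names the source camera.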
{"title":"Identification of Source Camera by Amalgamation of PRNU and Noise Print Using Dimensionality Expansive Residual Network","authors":"Shubham Anjankar, Somesh Telang, Khushalsingh Bharadwaj, R. Khandelwal","doi":"10.47164/ijngc.v13i5.919","DOIUrl":"https://doi.org/10.47164/ijngc.v13i5.919","url":null,"abstract":"It might be challenging in the field of image forensics to identify the source camera of a picture. This researchproposes a noise adaptable convolutional neural network-based technique for camera identification. For cameraidentification, the suggested solution combines Photo Response Non-Uniformity (PRNU) noise and Noiseprint.Three parallel dimensionality expanded residual networks with convolutional layers of kernel size 1x1 were puttogether for enhanced feature extraction. The experiment mentioned above uses pictures from the ”Vision Dataset”as its subject matter. The experimental findings demonstrate the effectiveness of the suggested methodology inidentifying the source camera at the brand, model, and device levels. When two of the three networks were fedwith PRNU and one with noiseprint, the best performance was obtained.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"91 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2022-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79412523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-11-26 | DOI: 10.47164/ijngc.v13i5.935
Roshan Gunwantrao Belsare, Premchand B. Ambhore
The fog layer sits between the cloud layer and the edge layer and is responsible for selecting edge nodes to process cloud tasks. Fog devices manage routers, gateways, and other scheduling components, which makes them highly vulnerable to security attacks. Attackers inject malicious packets into the fog-server, middleware, or sensing layers, causing a wide variety of attacks, including node capturing, signal jamming, node outage, authorization attacks, selective forwarding, and data disclosure. To counter these attacks, researchers have proposed a wide variety of solutions, including authorization, cryptography, error correction, firewalls, broadcast authentication, and selective disclosure. These solutions vary with respect to privacy and security quality metrics, attack prevention capabilities, and deployment quality of service (QoS). Testing and deploying them is therefore time consuming and requires additional manpower for performance validation, so fog deployments have a longer time-to-market and are costlier than their corresponding cloud deployments. To reduce the time needed for testing and validating these resilience techniques, this text reviews various fog security and privacy preservation models and discusses their nuances, advantages, limitations, and future research scope. It also presents a detailed performance comparison between the reviewed models, which assists in selecting the best possible approach for a given application scenario. Finally, it recommends various fusion-based approaches that can be applied to existing security and privacy models to further improve their performance, including hybridization, selective augmentation, and Q-learning-based models that improve the efficiency of encryption and privacy preservation while maintaining high QoS levels.
{"title":"Pragmatic evaluation of privacy preservation security models targeted towards fog-based deployments","authors":"Roshan Gunwantrao Belsare, Premchand B. Ambhore","doi":"10.47164/ijngc.v13i5.935","DOIUrl":"https://doi.org/10.47164/ijngc.v13i5.935","url":null,"abstract":"Fog layer sits between cloud layer and edge-layer and responsible for selection of edge-nodes to process cloud tasks. Fog devices manage routers, gateways and other scheduling components, which makes them highly vulnerable to security attacks. Attackers inject malicious packets fog-server, middleware or sensing layers which causes a wide variety of attacks. These attacks include node capturing, signal jamming, node outage, authorization, selective forwarding, data disclosure etc. To remove these attacks, wide variety of solutions are proposed by researchers, which include authorization, cryptography, error correction, firewall, broadcast authentication, selective disclosure etc. Moreover, these solutions vary with respect to privacy and security quality metrics, attack prevention capabilities and deployment quality of service (QoS). Thus, testing and deployment of these solutions is time consuming, requires additional manpower for performance validation. Hence fog deployments require larger time-to-market and are costly than their corresponding cloud deployments. In order to reduce the time for testing and validation of these resilience techniques, this text reviews various fog security & privacy preservation models and discusses their nuances, advantages, limitations and future research scopes. Furthermore it also performs a detailed performance comparison between the reviewed models, which assists in selecting best possible approach for a given application scenario. This text also recommends various fusion based approaches that can be applied to existing security and privacy models in order to further improve their performance. 
These approaches include hybridization, selective augmentation and Q-learning based models that assist in improving efficiency of encryption, privacy preservation, while maintaining high QoS levels.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"498 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2022-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80029549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-11-26 | DOI: 10.47164/ijngc.v13i5.959
Sandhya N. Dhage, Vijay Kumar Garg
Cotton fungal diseases include rust, alternaria leaf spot, fusarium wilt, grey mildew, and root rots. Identifying these diseases requires efficient fungi segmentation, feature representation, and classification models. Existing methods that perform these tasks are highly complex and require disease-specific segmentation techniques, which limits their scalability. Moreover, low-complexity models generally show low accuracy, which restricts their applicability to real-time use cases. To overcome these issues, the proposed design focuses on a novel ensemble segmentation, feature processing, and classification model for identifying cotton fungal diseases. The model initially uses a combination of Fuzzy C-Means (FCM), Enhanced FCM, KFCM, and saliency maps to extract Regions of Interest (RoIs). These RoIs are post-processed by a lightweight colour-feature-based disease category identification layer, which assists in selecting the segmented image sets. The image sets are then processed by an ensemble feature representation layer combining colour maps, edge maps, Gabor maps, and convolutional feature sets. By evaluating multiple feature sets, the model improves classification performance for multiple disease types. The extracted features are classified with an ensemble model combining Naïve Bayes (NB), Support Vector Machine (SVM), Logistic Regression (LR), and Multilayer Perceptron (MLP) classifiers. Owing to this combination of segmentation, feature representation, and classification models, the proposed model improves classification accuracy by 5.9%, precision by 4.5%, recall by 3.8%, and delay by 8.5% compared with state-of-the-art models, making it useful for real-time disease detection in crops.
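The final ensemble layer combines NB, SVM, LR, and MLP outputs; one common fusion rule for such a combination, majority voting, can be sketched as below. The prediction matrix is hypothetical — four classifiers labelling six leaf images with four disease classes — and the abstract does not state which fusion rule the authors use.

```python
import numpy as np

def majority_vote(predictions: np.ndarray) -> np.ndarray:
    """Fuse per-classifier label predictions by majority vote.

    predictions : (n_classifiers, n_samples) integer label matrix.
    Returns the most frequent label per sample (ties -> lowest label).
    """
    n_classes = predictions.max() + 1
    # Per-sample vote counts: shape (n_classes, n_samples).
    votes = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, predictions)
    return votes.argmax(axis=0)

# Hypothetical outputs of four classifiers (e.g. NB, SVM, LR, MLP)
# on six leaf images; labels 0-3 stand for four disease classes.
preds = np.array([
    [0, 1, 2, 3, 1, 0],
    [0, 1, 2, 2, 1, 0],
    [1, 1, 2, 3, 0, 0],
    [0, 2, 3, 3, 1, 0],
])
fused = majority_vote(preds)
```

The fused label disagrees with any single classifier only where that classifier is outvoted, which is the mechanism by which the ensemble lifts accuracy over its members.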
{"title":"Design of an Ensemble Segmentation, Feature Processing & Classification model for identification of Cotton Fungal diseases","authors":"Sandhya N. Dhage, Vijay Kumar Garg","doi":"10.47164/ijngc.v13i5.959","DOIUrl":"https://doi.org/10.47164/ijngc.v13i5.959","url":null,"abstract":"Cotton fungal diseases include rust, alternaria leaf spot, fusarium wilt, grew mildew, and root rots. Identification of these diseases requires design of efficient fungi segmentation, feature representation & classification models. Existing methods that perform these tasks, are highly complex, and require disease-specific segmentation techniques, which limits their scalability levels. Moreover, low-complexity models are generally observed to showcase low accuracy levels, which restricts their applicability for real-time use cases. To overcome these issues, proposed design focused on a novel ensemble segmentation, feature processing & classification model for identification of cotton fungi diseases. The proposed model initially uses a combination of Fuzzy C Means (FCM), Enhanced FCM, KFCM, and saliency maps in order to extract Regions of Interest (RoIs). These RoIs are post-processed via a light-weight colour-feature based disease category identification layer, which assists in selecting the segmented image sets. These image sets are processed via an ensemble feature representation layer, which combines Colour Maps, Edge Maps, Gabor Maps and Convolutional feature sets. Due to evaluation of multiple feature sets, the model is able to improve classification performance for multiple disease types. Extracted features are classified via use of an ensemble classification model that combines Naïve Bayes (NB), Support Vector Machines (SVMs), Logistic Regression (LR), and Multilayer Perceptron (MLP) based classifiers. 
Due to this combination of segmentation, feature representation & classification models, the proposed Model is capable of improving classification accuracy by 5.9%, precision by 4.5%, recall by 3.8%, and delay by 8.5% when compared with state-of-the-art models, which makes it useful for real-time disease detection of crops.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"7 2 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2022-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82824081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-11-26 | DOI: 10.47164/ijngc.v13i5.917
Firdous Sadaf M. Ismail, Dattatraya S Adane
The advantages of decentralized systems over centralized approaches are the reason blockchain technology is being incorporated into almost every digital task. Core benefits of blockchain include decentralization, persistency, auditability, and anonymity. This paper discusses blockchain technology from the perspective of its attributes, applications, and challenges. Specifically, we elaborate on current work applying blockchain technology to domains such as the Internet of Things (IoT), Artificial Intelligence (AI), Big Data, and Software Defined Networking (SDN), keeping in view the challenges of optimized storage, scalability, and security.
{"title":"Blockchain Technology: Growing Usecases and Issues to Tackle","authors":"Firdous Sadaf M. Ismail, Dattatraya S Adane","doi":"10.47164/ijngc.v13i5.917","DOIUrl":"https://doi.org/10.47164/ijngc.v13i5.917","url":null,"abstract":"Advantages of decentralized systems over centralized approaches are reason for increasing incorporation of blockchain technology in almost every digital task. Core benefits of blockchain include decentralization, persistency, auditability and anonymity. This paper discusses Blockchain Technology from the perspective of related attributes, applications and challenges. Specifically, we try to elaborate on the current work pertaining to application of Blockchain Technology to different domains such as Internet of Things (IoT), Artificial Intelligence (AI), Big Data and Software Defined Networking (SDN) keeping in view the challenges of optimized storage, scalability and security.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"27 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2022-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85167604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-11-26 | DOI: 10.47164/ijngc.v13i5.904
Swati Rane, Lakshmappa K. Ragha, Siddalingappagouda Biradar
Tremendous development in the Internet of Things (IoT) and mobile devices has led to numerous images being pooled on social media websites and communicated through networking channels. These images are often corrupted by impulse noise due to hot pixels generated in camera sensors and communication channels. The adaptive mean filter removes impulse noise at low densities but fails as the noise density increases, and it is computationally expensive. In this paper, an automatic adaptive filtering technique for removing impulse (salt and pepper) noise is demonstrated. The proposed algorithm consists of impulse noise detection and noise removal modules. The automatic impulse noise detection module uses a mean-and-variance technique to select the noisy pixels in the image. The noise removal module replaces each noisy pixel using the mean and the edge direction obtained from a Gabor filter. The proposed technique demonstrated better robustness than existing techniques.
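The detection step — flagging a pixel whose deviation from its local mean exceeds a multiple of the local standard deviation — can be sketched as below. Note two deliberate simplifications: flagged pixels are replaced with the local median rather than the paper's Gabor-edge-directed mean, and the threshold `t` is an illustrative choice, not a value from the paper.

```python
import numpy as np

def remove_impulse_noise(img: np.ndarray, k: int = 3, t: float = 2.0) -> np.ndarray:
    """Simplified mean/variance impulse-noise filter.

    A pixel is flagged as impulse noise when it deviates from the mean of
    its k x k neighbourhood by more than t local standard deviations;
    flagged pixels are replaced by the neighbourhood median.
    """
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            win = p[y:y + k, x:x + k]
            mu, sd = win.mean(), win.std()
            if abs(img[y, x] - mu) > t * sd:     # detection: mean/variance test
                out[y, x] = np.median(win)       # removal: simplified replacement
    return out
```

On a flat region a salt or pepper pixel is the lone outlier in its window, so it is flagged and restored, while clean neighbours of the outlier stay within the inflated local deviation and are left untouched.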
{"title":"Automatic Adaptive Filtering Technique for Removal of Impulse Noise Using Gabor Filter","authors":"Swati Rane, Lakshmappa K. Ragha, Siddalingappagouda Biradar","doi":"10.47164/ijngc.v13i5.904","DOIUrl":"https://doi.org/10.47164/ijngc.v13i5.904","url":null,"abstract":"Tremendous development in Internet of Things (IoT) and mobile devices lead to several images pooled on social media websites and communicated through networking channels. These images are mostly corrupted with impulse noises due to hot pixels generated in the camera sensors and communication channels. Adaptive mean filter technique removes impulse noise at low density but is unsuccessful as noise density increases and computationally expensive. In this paper, automatic adaptive filtering technique for removal of impulse (salt and pepper) noise is demonstrated. The proposed algorithm consists of impulse noise detection and noise removal modules. An automatic impulse noise detection module is based on mean and variance technique that selects the noisy pixels among the entire image. The noise removal module is based on replacement of noisy pixel through mean and edge direction using Gabor filter. The proposed technique demonstrated better robustness compared with existing techniques.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"119 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2022-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86183641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Covid Security System is a non-contact sanitizer dispenser that also monitors vital parameters such as temperature, heart rate, and SpO2, based on Internet of Things (IoT) data collection and processing. This device is vital in workplaces such as hospitals and colleges. Systems capable of efficiently recording human body temperature through a contactless mechanism have, in the pandemic situation, turned to infrared thermometers. In this paper we measure human body temperature quickly and without contact using an infrared thermometer (MLX90614), along with a pulse oximeter based on a pulse sensor (MAX30100) to measure the heart rate in real time. The user's oxygen level is also rendered and stored on our IoT processing framework, which can provide alerts both online and offline based on previously maintained records. In this research we recorded the parameters of a sample of 500 users. The main aspect of the research is cost affordability: the system can easily be interfaced with any existing workplace framework.
{"title":"Covid Security System Using IOT Monitoring System","authors":"Chandrakant Mohadikar, Rakshit Najbile, Vivek Kaushik, Yash Konghe, Deepak Khushalani, Pankaj Joshi","doi":"10.47164/ijngc.v13i5.908","DOIUrl":"https://doi.org/10.47164/ijngc.v13i5.908","url":null,"abstract":"Covid Security System is a non-contact sanitizer dispenser, that also monitorsvital parameters such as temperature, heart rate, and spo2 sensors based onInternet of Things (IOT) data collection and processing. This device is vital inworkplaces like hospitals, colleges, and many more. The system that has thecapability to record human body temperature based on contactless mechanismefficiently with pandemic situation has looked up to usage of infraredthermometers. In this paper via our research we realize the human bodytemperature fast non-contact by utilizing infrared thermometer (MLX90614) alongwith a pulse oximeter using a pulse sensor (MAX30100) to measure the heartrate in real-time. Parameter like the oxygen level of the user is also rendered andstored on our designed IOT processing framework which can provide alerts bothonline or offline based on previous maintained records. In this research we haverecorded parameters of the sample space comprising of 500 users. 
The mainaspect of the research is the cost affectability where is the system can be easilyinterfaced with any existing framework of workplaces.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"79 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2022-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83350851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-11-26 | DOI: 10.47164/ijngc.v13i5.916
Gaurav Laud, Aishwarya Pardhi, Ajinkya Wadekar, Shounik Shukla, Varad Loya, Dr. Padma Adane, Viresh Dhawan
Art is mankind's treasure, yet it is often concentrated in the hands of a few. We need a better trade mechanism and technological innovation to enable fair access to artworks. A solution to the problem can be found in using NFT technology for the trade of artwork. NFTs make the future of the creator economy a little brighter, allowing artists to monetize their work by selling to eager collectors while also collecting due royalties on future resales of their works. This unique combination of blockchain with artworks adds a layer of security to artwork ownership and restores access to artworks for people from all regions. This paper presents a decentralized marketplace application where one can buy and sell artworks digitally in the form of NFTs, providing users with the facility to securely perform transactions of these NFTs, verifiable through the Polygon blockchain.
{"title":"ManiacNFT : An Application for NFT Marketplace","authors":"Gaurav Laud, Aishwarya Pardhi, Ajinkya Wadekar, Shounik Shukla, Varad Loya, Dr. Padma Adane, Viresh Dhawan","doi":"10.47164/ijngc.v13i5.916","DOIUrl":"https://doi.org/10.47164/ijngc.v13i5.916","url":null,"abstract":"Art is mankind's treasure, yet it is often concentrated in the hands of a few. We need a better trade mechanism and technological innovation to enable fair access to artworks. A solution to the problem can be found by using the NFT technology for the trade of artwork. NFTs make the future for the creators’ economy a little brighter, allowing artists to monetize their work by selling to eager collectors, while also collecting due royalties upon future resales of their works. This unique combination of blockchain with artworks provides a layer of security to the ownership of the artwork, and is restoring access to artworks for people from all regions.This research paper is about a decentralized marketplace application where one can buy and sell artworks in the form of NFTs digitally, providing the users with the facility to securely perform transactions of these NFTs that can be verified through Polygon blockchain.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"38 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2022-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80810198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}