Transformer based image caption generation for news articles
Ashtavinayak Pande, Atul Pandey, Ayush Solanki, Chinmay Shanbhag, Manish Motghare
Pub Date: 2023-02-15 · DOI: 10.47164/ijngc.v14i1.1033
We address the task of news-image captioning, which generates a description of an image given the image and its article body as input. The motivation is to automatically generate captions for news images, which can then serve as reference captions when news-image captions are written manually. This task is more challenging than conventional image captioning because it requires a joint understanding of image and text. We present an N-Gram model that integrates the text and image modalities and attends to textual features from visual features when generating a caption. Experiments based on automatic evaluation metrics and human evaluation show that the article text provides the primary information needed to reproduce news-image captions written by journalists. The results also demonstrate that the proposed model outperforms the state-of-the-art model, and we confirm that visual features contribute to improving the quality of news-image captions. Finally, we present a website that takes an image and its associated article as input and generates a one-line caption for it.
{"title":"Transformer based image caption generation for news articles ·","authors":"Ashtavinayak Pande, Atul Pandey, Ayush Solanki, Chinmay Shanbhag, Manish Motghare","doi":"10.47164/ijngc.v14i1.1033","DOIUrl":"https://doi.org/10.47164/ijngc.v14i1.1033","url":null,"abstract":"We address the task of news-image captioning, which generates a description of an image given the image and its article body as input. The motive is to automatically generate captions for news images which if needed can then be used as reference captions for manually creating news image captions This task is more challenging than conventional image captioning because it requires a joint understanding of image and text. We present an N-Gram model that integrates text and image modalities and attends to textual features from visual features in generating a caption. Experiments based on automatic evaluation metrics and human evaluation show that an article text provides primary information to reproduce news-image captions written by journalists. The results also demonstrate that the proposed model outperforms the state-of-the-art model. In addition, we also confirm that visual features contribute to improving the quality of news-image captions. Also, we present a website that takes an image and its associated article as input and generates a one-liner caption for the same.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"37 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78101425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Tree Mango Fruit Detection and Counting System
Romil Mahajan, Ambarish Haridas, Mohit Chandak, Rudar Sharma, Charanjeet Dadiyala
Pub Date: 2023-02-15 · DOI: 10.47164/ijngc.v14i1.1022
For yield estimation, it is crucial to identify mango fruits quickly and precisely in natural conditions and surroundings. Accurately detecting and counting fruits during plant growth using computer vision is important, not only as a vital step toward automating procedures such as harvesting, but also for minimizing labour-intensive human assessment of phenotypic information that can be useful to the farmer. Fruit farmers and cultivators would benefit greatly from being able to track and predict production prior to harvest, in order to make the best use of the resources each site requires, such as water, fertiliser, and other agricultural chemicals. This paper considers the mango fruit. A comparative study of Faster R-CNN, YOLOv3, and YOLOv4, algorithms widely used for object recognition on various fruits and objects, was conducted to find the best model. Based on the findings of this comparison, YOLOv4 was chosen as the best technique for mango fruit recognition, and a real-time mango fruit detection method using the YOLOv4 deep learning algorithm is put forward. The YOLOv4 (You Only Look Once) model was developed on the CSPDarknet53 backbone. In addition, the number of mangoes in each image or frame is counted and displayed, in both images and videos.
Frequency-Driven Approach for Extractive Text Summarization
Ashwini Zadgaonkar
Pub Date: 2023-02-15 · DOI: 10.47164/ijngc.v14i1.1019
Owing to the digital revolution, most books and newspaper articles are now available online. Particularly for children and students, prolonged screen time can be bad for eyesight and attention span. As a result, summarization algorithms are needed to present long web content in an easily digestible form. The proposed methodology uses a term frequency–inverse document frequency (tf-idf) driven model, in which the document summary is generated based on each word in a corpus. Each sentence is rated according to its tf-idf score, and the document summary is produced at a fixed ratio to the length of the original text. Expert summaries from a data set are used to measure the precision and recall of the proposed approach with the ROUGE metric. Work towards the development of such a framework is presented.
{"title":"Frequency-Driven Approach for Extractive Text Summarization","authors":"Ashwini Zadgaonkar","doi":"10.47164/ijngc.v14i1.1019","DOIUrl":"https://doi.org/10.47164/ijngc.v14i1.1019","url":null,"abstract":"Due to Digital Revolution, most books and newspaper articles are now available online. Particularly for kids and students, prolonged screen time might be bad for eyesight and attention span. As a result, summarizing algorithms are required to provide long web content in an easily digestible style. The proposed methodology is using term frequency and inverse document frequency driven model, in which the document summary is generated based on each word in a corpus. According to the preferred method, each sentence is rated according to its tf-idf score, and the document summary is produced in a fixed ratio to the original text. Expert summaries froma data set are used for measuring precision and recall using the proposed approach’s ROUGE model. towards the development of such a framework is presented.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"4 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79969216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High efficiency and quick deploy UAV for surveillance using helium balloon
Siddhant Kumar, P. Parlewar, Rachna Sable
Pub Date: 2023-02-15 · DOI: 10.47164/ijngc.v14i1.1050
This paper presents a quick-deployment, high-efficiency helium surveillance balloon that can be used as a mobile surveillance system to monitor and locate trespassers near borders and high-security facilities. The paper describes a motorized, remotely controlled helium-filled balloon that provides real-time video of any site requiring surveillance, and provides the conceptual design, fabrication, and payload calculations for the helium balloon tracker. The payload consists of a control and monitoring system with a camera and sensors, and it streams this data to the user over the internet, where it can be used for patrolling and monitoring infiltration.
{"title":"High efficiency and quick deploy UAV for surveillance using helium balloon","authors":"Siddhant Kumar, P. Parlewar, Rachna Sable","doi":"10.47164/ijngc.v14i1.1050","DOIUrl":"https://doi.org/10.47164/ijngc.v14i1.1050","url":null,"abstract":"A quick deployment and high-efficiency helium surveillance balloon which can be used as a mobile surveillance systemto monitor and locate trespassers near borders and high-security facilities are presented in the paper. The paperprovides the description of a motorized helium-filled balloon that is remotely controlled and provides a real-timevideo of any site that needs surveillance. The paper also provides the conceptual design, fabrication, and, calculationof the payload connected to of the helium balloon tracker. The payload consists of the control and monitoringsystem which has a camera and sensors and streams this data to the user over the internet which can be used forpatrolling and monitoring infiltration","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"43 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87459596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online Article Recommender System
Vaishali Athawale, Dr. A. S. Alvi
Pub Date: 2023-02-15 · DOI: 10.47164/ijngc.v14i1.1007
A recommender system recommends relevant items to users based on their habits or preferences. Preference has no direct quantitative measure; it is subjective, and it is generally measured indirectly through the items a user has consumed in the past. There is a plethora of text available on the web, and many online platforms provide articles for reading. This work develops a Recommender System (RecSys) that suggests articles to end users on behalf of an online article service provider. The RecSys uses collaborative learning, content-based learning, and a combination of both, i.e., hybrid learning, for the recommendation process. The proposed RecSys was trained and tested on an article-sharing platform service, and the hybrid learning model was found to perform better than the others.
{"title":"Online Article Recommender System","authors":"Vaishali Athawale, Dr. A. S. Alvi","doi":"10.47164/ijngc.v14i1.1007","DOIUrl":"https://doi.org/10.47164/ijngc.v14i1.1007","url":null,"abstract":"Recommender System recommends relevant items to users, based on user habits or preference. Preference does not have quantitative measure. It is subjective matter. Generally it indirectly measure by items that consumed by users in past. There is a plethora of text available on the web and there are many online platforms that provide text (article) for reading. This is an attempt to develop a Recommender System (RecSys) for the article suggestion for the online article reading to the end user by the online article service provider. RecSys will use collaborative learning, content-based learning and combination of both, i,e, hybrid learning for the recommendation process. The proposed RecSys is tested and trained on is one article sharing platform service and it has been found that the hybrid learning model performed better than other.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"20 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80728395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-Time Drowsiness Detection System for Student Tracking using Machine Learning
Dilipkumar Borikar, Himani Dighorikar, Shridhar Ashtikar, Ishika Bajaj, Shivam Gupta
Pub Date: 2023-02-15 · DOI: 10.47164/ijngc.v14i1.992
Many studies on fatigue detection have been carried out, focused on experimentation with different technologies. Machine-vision-based driver fatigue detection systems are used to prevent accidents and improve road safety. We propose the design of an alerting system for students that uses real-time video of a person to estimate their drowsiness level and signals an alert when the student is in a state of fatigue. A device enabled with the system starts the webcam and tracks the person, and an alert is generated, based on the set frame rate, when a continuous run of frames is detected as drowsy. Conventional methods cannot capture complex expressions; however, the availability of deep learning models has enabled substantial research on detecting a person's state in real time. Our system operates in natural lighting conditions and can predict accurately even when the face is covered with glasses, head caps, etc. The system is implemented using YOLOv5 (You Only Look Once), an extremely fast and accurate detection model.
A Novel Strategy to Achieve Video Transcoding Using Cloud Computing
Malhar Deshkar, Dr. Padma Adane, Divyanshu Pandey, Dewansh Chaudhari
Pub Date: 2023-02-15 · DOI: 10.47164/ijngc.v14i1.1091
One of the fundamental challenges in deploying multimedia systems is delivering smooth and uninterrupted audio-visual information anywhere and anytime. In such systems, multimedia content is compressed in a particular format, which requires format conversion for different devices; a transcoding mechanism is therefore needed to adapt the content to the various devices on the network. Video transcoding converts one digitally encoded format into another, translating any file format that contains video and audio at the same time. This is essential for devices that do not support a specific media format or that have limited storage and require a reduced file size. In this paper, we provide a novel way of transcoding block-based video coding schemes on cloud architecture by establishing a video pipelining architecture. The solution enables end users to extract videos in any format and resolution seamlessly, combined with the scalability, reliability, and cost-effectiveness of the cloud. The proposed idea would be attractive for video streaming applications that currently rely on legacy infrastructure for video transcoding.
{"title":"A Novel Strategy to Achieve Video Transcoding Using Cloud Computing","authors":"Malhar Deshkar, Dr. Padma Adane, Divyanshu Pandey, Dewansh Chaudhari","doi":"10.47164/ijngc.v14i1.1091","DOIUrl":"https://doi.org/10.47164/ijngc.v14i1.1091","url":null,"abstract":"One of the fundamental challenges faced in deploying multimedia systems is delivering smooth and uninterrupted audio-visual information anywhere and anytime. In such systems, multimedia content is compressed within a certain format, this requires format conversion for various devices. Thus, a transcoding mechanism is required to make the content adaptive for various devices in the network. Video transcoding converts one digitally encoded format into another, this involves translating any file format containing video and audio at the same time. This is an essential feature for devices that do not support a specific format of media or have limited storage that requires a reduced file size. Through this paper, we provide a novel way of transcoding the block-based video coding schemes using cloud architecture by establishing a video pipelining architecture. The solution discussed in this paper would enable the end users to extract videos in any format and resolution seamlessly, combined with the scalability, reliability, and cost-effectiveness of the cloud. The proposed idea would be lucrative for all the video streaming applications that are currently relying on their legacy infrastructure for video transcoding.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"43 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75945979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cow Milk Quality Grading using Machine Learning Methods
Shubhangi Neware
Pub Date: 2023-02-15 · DOI: 10.47164/ijngc.v14i1.1005
Milk is considered a complete food because it contains a rich set of proteins and vitamins, so determining the quality of cow milk plays an important role in current research. In this paper, four methods are implemented to check the quality of cow milk using a dataset of 1059 milk samples taken from various cows. Three grades of milk (A, B, and C) are considered, based on different features of the milk. Several machine learning methods, K-nearest neighbors (KNN), logistic regression, support vector machine, and an artificial neural network (ANN), are implemented, and their accuracy is compared. The results show that KNN (n = 3) is the most accurate of the four methods implemented in the proposed work.
{"title":"Cow Milk Quality Grading using Machine Learning Methods","authors":"Shubhangi Neware","doi":"10.47164/ijngc.v14i1.1005","DOIUrl":"https://doi.org/10.47164/ijngc.v14i1.1005","url":null,"abstract":"Milk is considered as complete food as it contains rich set of proteins and vitamins. Therefore determining quality of cow milk plays an important role in today’s research. In this paper four methods are implemented to check quality of cow milk using dataset consists of 1059 milk samples taken from various cows. Three grades of milk grade A, B, C are considered based on different features of cow milk. Various machine learning methods K Nearest neighbors, Logistic regression, Support Vector machine and ANN are implemented. Accuracy of these methods is then compared. It has been observed that the results of KNN (n=3) is more accurate amongst all four methods implemented in the proposed research work.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"28 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80645743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Realtime Hand Gesture Recognition System for Human Computer Interaction
Shubhangi Tirpude, Devishree Naidu, Piyush Rajkarne, Sanket Sarile, Niraj Saraf, Raghav Maheshwari
Pub Date: 2023-02-15 · DOI: 10.47164/ijngc.v14i1.1097
Humans use various devices, such as the mouse, keyboard, and joystick, to interact with a computer. We have developed a real-time human-computer interaction system that provides a virtual mouse based on hand gestures. The system is designed in three modules, hand detection, gesture recognition, and human-computer interaction through control of mouse events, to achieve a high degree of gesture recognition. We first capture video using a built-in or USB webcam. In each frame, the hand is recognized using the MediaPipe palm detection model, and the fingertips are located using OpenCV. The user can move the mouse cursor by moving a fingertip and can perform a click by bringing two fingertips close together. The system thus captures frames from a webcam, detects the hand and fingertips, and moves or clicks the cursor accordingly, without requiring a physical device for cursor movement. The developed system can be extended to other scenarios where human-machine interaction requires more complex command formats than mouse events alone.
{"title":"Realtime Hand Gesture Recognition System for Human Computer Interaction","authors":"Shubhangi Tirpude, Devishree Naidu, Piyush Rajkarne, Sanket Sarile, Niraj Saraf, Raghav Maheshwari","doi":"10.47164/ijngc.v14i1.1097","DOIUrl":"https://doi.org/10.47164/ijngc.v14i1.1097","url":null,"abstract":"Humans are using various devices for interacting with the system like mouse, keyboard, joystick etc. We have developed a real time human computer interaction system for virtual mouse based on the hand gestures. The system is designed in 3 modules as detection of hand, recognition of gestures and human computer interaction with control of mouse events to achieve the higher degree of gesture recognition. We first capture the video using the built-in webcam or USB webcam. Each frame of hand is recognized using a media Pipe palm detection model and using opencv fingertips. The user can move the mouse cursor by moving their fingertip and can perform a click by bringing two fingertips to close. So, this system captures frames using a webcam and detects the hand and fingertips and clicks or moves of the cursor. The system does not require a physical device for cursor movement. The developed system can be extended in other scenarios where human-machine interaction is required with more complex command formats rather than just mouse events.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"28 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90144499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bitcoin Price Prediction and NFT Generator Based on Sentiment Analysis
Mitali Lade, Rashmi Welekar, Charanjeet Dadiyala
Pub Date: 2023-02-15 · DOI: 10.47164/ijngc.v14i1.1043
Twitter sentiment has been found to be useful in predicting whether the price of Bitcoin will climb or decline. Modelling market activity, and hence emotion, in the Bitcoin ecosystem gives insight into Bitcoin price forecasts. We take into account not only the sentiment retrieved from tweets but also the volume of tweets. With the goal of optimising the time window within which expressed emotion becomes a credible predictor of price change, we provide data from research that examined the link between sentiment and future price at various temporal granularities. We demonstrate that not only can the price direction be anticipated, but the magnitude of the price movement can be predicted with similar accuracy, which is the study's major scientific contribution. The Non-Fungible Token (NFT) has gained international interest in recent years as a blockchain-based application, and digital art is the most prevalent kind of NFT stored on many blockchains. We studied CryptoPunks, the most popular collection on the NFT market, to examine and depict the major ethical challenges, investigating ethical concerns from three perspectives: design, trade transactions, and relevant Twitter topics. Using Python libraries, a Twitter crawler, and sentiment analysis tools, we scraped data from Twitter and performed analysis and prediction on Bitcoin and NFTs.
{"title":"Bitcoin Price Prediction and NFT Generator Based on Sentiment Analysis","authors":"Mitali Lade, Rashmi Welekar, Charanjeet Dadiyala","doi":"10.47164/ijngc.v14i1.1043","DOIUrl":"https://doi.org/10.47164/ijngc.v14i1.1043","url":null,"abstract":"Twitter sentiment has been found to be useful in predicting whether the price of Bitcoin will rise or fall will climb or decline. Modelling market activity and hence emotion in the Bitcoin ecosystem gives insight into Bitcoin price forecasts. We take into account not just the emotion retrieved not just from tweets, but also from the quantity of tweets. With the goal of optimising time window within which expressed emotion becomes a credible predictor of price change, we provide data from research that examined the link among both sentiment and future price at various temporal granularities. We demonstrate in this study that not only can price direction be anticipated, but also the magnitude of price movement with same accuracy, and this is the study's major scientific contribution. Non-Fungible Token (NFT) has gained international interest in recent years as a blockchain-based application. The most prevalent kind of NFT that can be stored on many blockchains is digital art. We did studies on CryptoPunks, the most popular collection on the NFT market, in examine and depict each and every major ethical challenges. We investigated ethical concerns from three perspectives: design, trade transactions, and relevant Twitter topics. Using Python libraries, a Twitter crawler, and sentiment analysis tools, we scraped data from Twitter and performed the analysis and prediction on bitcoin and NFTs.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"101 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89925769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}