
International Journal of Scientific Research in Computer Science, Engineering and Information Technology: Latest Publications

Decoding the Threat Landscape: ChatGPT, FraudGPT, and WormGPT in Social Engineering Attacks
Polra Victor Falade
In the ever-evolving realm of cybersecurity, the rise of generative AI models like ChatGPT, FraudGPT, and WormGPT has introduced both innovative solutions and unprecedented challenges. This research delves into the multifaceted applications of generative AI in social engineering attacks, offering insights into the evolving threat landscape using a blog-mining technique. Generative AI models have revolutionized the field of cyberattacks, empowering malicious actors to craft convincing and personalized phishing lures, manipulate public opinion through deepfakes, and exploit human cognitive biases. These models, ChatGPT, FraudGPT, and WormGPT, have augmented existing threats and ushered in new dimensions of risk. From phishing campaigns that mimic trusted organizations to deepfake technology impersonating authoritative figures, we explore how generative AI amplifies the arsenal of cybercriminals. Furthermore, we shed light on the vulnerabilities that AI-driven social engineering exploits, including psychological manipulation, targeted phishing, and the crisis of authenticity. To counter these threats, we outline a range of strategies, including traditional security measures, AI-powered security solutions, and collaborative approaches in cybersecurity. We emphasize the importance of staying vigilant, fostering awareness, and strengthening regulations in the battle against AI-enhanced social engineering attacks. In an environment characterized by the rapid evolution of AI models and a lack of training data, defending against generative AI threats requires constant adaptation and the collective efforts of individuals, organizations, and governments. This research seeks to provide a comprehensive understanding of the dynamic interplay between generative AI and social engineering attacks, equipping stakeholders with the knowledge to navigate this intricate cybersecurity landscape.
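The blog-mining step is not specified further in the abstract. As a hedged illustration of the general idea, a minimal term-frequency miner over security-blog text might look like the sketch below; the stopword list and sample posts are invented for the example.

```python
from collections import Counter
import re

# A tiny illustrative stopword list (a real pipeline would use a fuller one).
STOPWORDS = {"the", "of", "and", "to", "a", "in", "is", "for"}

def mine_keywords(posts, top_n=3):
    """Count non-stopword terms across a collection of blog posts."""
    counts = Counter()
    for post in posts:
        for token in re.findall(r"[a-z']+", post.lower()):
            if token not in STOPWORDS:
                counts[token] += 1
    return counts.most_common(top_n)

# Hypothetical blog snippets about the threat landscape.
posts = [
    "FraudGPT advertised for phishing lures on dark web forums",
    "WormGPT used to draft phishing emails impersonating trusted brands",
]
print(mine_keywords(posts))  # 'phishing' appears in both posts, so it ranks first
```

Real blog mining would add crawling, deduplication, and topic modelling on top of this counting step.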
{"title":"Decoding the Threat Landscape : ChatGPT, FraudGPT, and WormGPT in Social Engineering Attacks","authors":"Polra Victor Falade","doi":"10.32628/cseit2390533","DOIUrl":"https://doi.org/10.32628/cseit2390533","url":null,"abstract":"In the ever-evolving realm of cybersecurity, the rise of generative AI models like ChatGPT, FraudGPT, and WormGPT has introduced both innovative solutions and unprecedented challenges. This research delves into the multifaceted applications of generative AI in social engineering attacks, offering insights into the evolving threat landscape using blog mining technique. Generative AI models have revolutionized the field of cyberattacks, empowering malicious actors to craft convincing and personalized phishing lures, manipulate public opinion through deepfakes, and exploit human cognitive biases. These models, ChatGPT, FraudGPT, and WormGPT, have augmented existing threats and ushered in new dimensions of risk. From phishing campaigns that mimic trusted organizations to deepfake technology impersonating authoritative figures, we explore how generative AI amplifies the arsenal of cybercriminals. Furthermore, we shed light on the vulnerabilities that AI-driven social engineering exploits, including psychological manipulation, targeted phishing, and the crisis of authenticity. To counter these threats, we outline a range of strategies, including traditional security measures, AI-powered security solutions, and collaborative approaches in cybersecurity. We emphasize the importance of staying vigilant, fostering awareness, and strengthening regulations in the battle against AI-enhanced social engineering attacks. In an environment characterized by the rapid evolution of AI models and a lack of training data, defending against generative AI threats requires constant adaptation and the collective efforts of individuals, organizations, and governments. 
This research seeks to provide a comprehensive understanding of the dynamic interplay between generative AI and social engineering attacks, equipping stakeholders with the knowledge to navigate this intricate cybersecurity landscape.","PeriodicalId":313456,"journal":{"name":"International Journal of Scientific Research in Computer Science, Engineering and Information Technology","volume":"236 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135746063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Development of CNN Model to Avoid Food Spoiling Level
Sai Prasad Baswoju, Y Latha, Ravindra Changala, Annapurna Gummadi
Food spoilage is a pervasive issue that contributes to food waste and poses significant economic and environmental challenges worldwide. To combat this problem, we propose a Convolutional Neural Network (CNN) model capable of predicting and helping prevent food spoilage. This paper outlines the methodology, data collection, model architecture, and evaluation of our CNN-based solution, which aims to assist consumers, retailers, and food producers in minimizing food waste. Because grains are prone to spoiling as a result of precipitation, humidity, temperature, and a number of other factors, researchers are working on innovative techniques to preserve food quality and extend shelf life. Maintaining current standards of food quality requires effective surveillance systems for food deterioration. We have created a prototype to monitor food quality and control home storage systems. First, we used a CNN model to identify different types of fruits and vegetables. The proposed system then uses sensors and actuators to estimate the degree of food spoilage by monitoring the gas emission level, humidity, and temperature of the fruits and vegetables. It also regulates the storage environment and, to the greatest extent feasible, prevents spoilage. In addition, based on the freshness and condition of the food, a message alerting the user to the decomposition level is delivered to their registered mobile number. The model achieved an accuracy of 96.3%.
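The sensor-monitoring step described above can be sketched as a simple threshold rule over the three readings. The threshold values below are hypothetical placeholders for illustration, not the paper's calibrated settings.

```python
# Hypothetical thresholds; real values would be calibrated per produce type.
GAS_PPM_MAX = 400      # gas-emission level above which spoilage is likely
HUMIDITY_MAX = 85.0    # percent relative humidity
TEMP_MAX = 30.0        # degrees Celsius

def spoilage_alert(gas_ppm, humidity, temp_c):
    """Return an alert message if any monitored reading exceeds its threshold."""
    reasons = []
    if gas_ppm > GAS_PPM_MAX:
        reasons.append("gas emission high")
    if humidity > HUMIDITY_MAX:
        reasons.append("humidity high")
    if temp_c > TEMP_MAX:
        reasons.append("temperature high")
    if reasons:
        return "ALERT: possible spoilage (" + ", ".join(reasons) + ")"
    return "OK: food condition normal"

print(spoilage_alert(gas_ppm=450, humidity=80.0, temp_c=28.0))
```

In the prototype, an alert like this would be forwarded as the SMS message to the user's registered number, while actuators adjust the storage environment.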
{"title":"Development of CNN Model to Avoid Food Spoiling Level","authors":"None Sai Prasad Baswoju, None Y Latha, None Ravindra Changala, None Annapurna Gummadi","doi":"10.32628/cseit2390536","DOIUrl":"https://doi.org/10.32628/cseit2390536","url":null,"abstract":"Food spoilage is a pervasive issue that contributes to food waste and poses significant economic and environmental challenges worldwide. To combat this problem, we propose the development of a Convolutional Neural Network (CNN) model capable of predicting and preventing food spoilage. This paper outlines the methodology, data collection, model architecture, and evaluation of our CNN-based solution, which aims to assist consumers, retailers, and food producers in minimizing food waste. Researchers are working on innovative techniques to preserve the quality of food in an effort to extend its shelf life since grains are prone to spoiling as a result of precipitation, humidity, temperature, and a number of other factors. In order to maintain current standards of food quality, effective surveillance systems for food deterioration are needed. To monitor food quality and control home storage systems, we have created a prototype. To start, we used a Convolutional Neural Network (CNN) model to identify the different types of fruits and vegetables. The suggested system then uses sensors and actuators to check the amount of food spoiling by monitoring the gas emission level, humidity level, and temperature of fruits and vegetables. Additionally, this would regulate the environment and, to the greatest extent feasible, prevent food spoiling. Additionally, based on the freshness and condition of the food, a message alerting the client to the food decomposition level is delivered to their registered cell numbers. 
The model used turned out to have a 96.3% accuracy rate.","PeriodicalId":313456,"journal":{"name":"International Journal of Scientific Research in Computer Science, Engineering and Information Technology","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136198691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Machine Learning-Based Detection of Phishing in COVID-19 Theme-Related Emails and Web Links
Usman Ali, Dr. Isma Farah Siddiqui
During the COVID-19 pandemic, phishing attempts increased in frequency; many of the links purported to provide current updates about COVID-19, which made it easy to trick victims. Many research studies have proposed solutions to prevent such attacks, yet phishing assaults continue to surge. Web links are not the only attack vector: attackers also deliver phishing through electronic mail. This study proposes an effective model using ensemble classifiers to predict phishing in COVID-19-themed emails and web links. Our study comprises two datasets: Dataset 1 for web links and Dataset 2 for emails. Dataset 1 is textual, while Dataset 2 contains images downloaded from different sources. We select ensemble classifiers including Random Forest (RF), AdaBoost, Bagging, Extra Trees (ET), and Gradient Boosting (GB). During the analysis, we observed that Dataset 1 achieves the highest accuracy, 88.91%, compared to Dataset 2. The ET classifier performs with an accuracy of 88.91%, a precision of 89%, a recall of 89%, and an F1 score of 89%, better than the other classifiers on both datasets. Several interesting patterns emerged during the study.
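The five classifiers named above all aggregate many base learners internally. The combining idea can be illustrated, in a deliberately simplified stdlib-only form, as hard majority voting over per-classifier labels; the labels below are invented for the example.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier labels (0 = legitimate, 1 = phishing) by majority vote."""
    votes = Counter(predictions)
    return votes.most_common(1)[0][0]

# Hypothetical outputs of the five classifiers (RF, AdaBoost, Bagging, ET, GB)
# for a single suspicious email.
per_classifier = [1, 0, 1, 1, 0]
print(majority_vote(per_classifier))  # 1 -> flagged as phishing
```

This is only the voting step; in practice each ensemble is trained on extracted URL or email features and votes over its own internal trees.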
{"title":"Machine Learning-Based Detection of Phishing in COVID-19 Theme-Related Emails and Web Links","authors":"None Usman Ali, None Dr. Isma Farah Siddiqui","doi":"10.32628/cseit2390563","DOIUrl":"https://doi.org/10.32628/cseit2390563","url":null,"abstract":"During the COVID-19 epidemic phishing dodges increased in frequency mostly the links provided current updates about COVID-19 hence it became easy to trick the victims. Many research studies suggest several solutions to prevent those attacks but still phishing assaults upsurge. There is no only way to perform phishing attacks through web links attackers also perform attacks through electronic mail. This study aims to propose an Effective Model using Ensemble Classifiers to predict phishing using COVID-19-themed emails and Web Links. Our study comprises two types of Datasets. Dataset 1 for web links and Dataset 2 for email. Dataset 1 contains a textual dataset while Dataset 2 contains images that were downloaded from different sources. We select ensemble classifiers including, Random Forest (RF), Ada Boost, Bagging, ExtraTree (ET), and Gradient Boosting (GB). During the analysis, we observed that Dataset 1 achieves the highest accuracy rate as compared to Dataset 2 which is 88.91%. The ET classifier performs with an accuracy rate of 88.91%, a precision rate of 89%, a recall rate of 89%, and an f1 score of 89% which is better as compared to other classifiers over both datasets. 
Interesting concepts were found during the study.","PeriodicalId":313456,"journal":{"name":"International Journal of Scientific Research in Computer Science, Engineering and Information Technology","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136198694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Piecewise Linear Approximation-Driven Primal SVM Approach for Improved Iris Classification Efficiency
Shital Solanki, Dr. Ramesh Prajapati
Classification, a crucial aspect of machine learning, revolves around the meticulous analysis of data. However, the complexity of diverse life forms on Earth poses a challenge in distinguishing species that share similar attributes; the iris flower, with its subspecies, exemplifies this challenge. The aim of the paper is to develop a methodology that not only enhances classification accuracy but also effectively addresses computational efficiency, facilitating faster and more practical categorization of iris patterns. This novel approach, named Piecewise Linear Approximation based SVM (PLA-SVM), is applied to flower species classification and is benchmarked against alternative machine learning techniques. Implementation is carried out using the MATLAB-GUROBI interface and the GUROBI solver. Performance metrics such as accuracy, precision, F1 score, and the ROC-AUC curve are used to compare the proposed algorithm's performance. This comprehensive analysis enables a comparative study of diverse algorithms, ultimately validating the proposed PLA-SVM technique on the Iris dataset. The numerical implementation results show that PLA-SVM outperforms the existing standard classifiers across the different performance metrics.
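The piecewise-linear connection can be made explicit. The paper's exact PLA-SVM formulation is not reproduced here, so the following is the standard primal soft-margin SVM it builds on:

```latex
% Standard primal soft-margin SVM (assumed baseline, not the paper's exact model)
\min_{\mathbf{w},\, b,\, \boldsymbol{\xi}} \;\;
  \frac{1}{2}\,\lVert \mathbf{w} \rVert^{2} + C \sum_{i=1}^{n} \xi_{i}
\quad \text{s.t.} \quad
  y_{i}\left(\mathbf{w}^{\top}\mathbf{x}_{i} + b\right) \ge 1 - \xi_{i},
  \qquad \xi_{i} \ge 0, \;\; i = 1, \dots, n .
```

Eliminating the slack variables gives the unconstrained hinge-loss objective $\frac{1}{2}\lVert\mathbf{w}\rVert^{2} + C\sum_{i}\max\!\bigl(0,\, 1 - y_{i}(\mathbf{w}^{\top}\mathbf{x}_{i} + b)\bigr)$; the max term is piecewise linear in $(\mathbf{w}, b)$, which is what makes the problem amenable to solvers such as GUROBI.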
{"title":"Piecewise Linear Approximation-Driven Primal SVM Approach for Improved Iris Classification Efficiency","authors":"Shital Solanki, Dr. Ramesh Prajapati","doi":"10.32628/cseit12390542","DOIUrl":"https://doi.org/10.32628/cseit12390542","url":null,"abstract":"Classification, a crucial aspect of machine learning, revolves around the meticulous analysis of data. However, the complexity of diverse life forms on Earth poses a challenge in distinguishing species that share similar attributes. The iris flower, with its subspecies exemplifies this challenge. The aim of the paper is to develop a methodology that not only enhances classification accuracy but also effectively addresses computational efficiency, facilitating faster and more practical categorization of iris patterns. This novel approach named Piecewise Linear Approximation based SVM (PLA-SVM) is applied to flower species classification and is benchmarked against alternative machine learning techniques. Implementation is carried out utilizing MATLAB – GUROBI interface of and GUROBI Solver. The performance metrics such as accuracy, precision, F1 score and ROC – AUC Curve are used to compare proposed algorithm performance. This comprehensive analysis enables a comparative study of diverse algorithms, ultimately validating the proposed PLA-SVM technique using the Iris dataset. 
The numerical implementation results shows that the PLASVM outperforms the existing standard classifiers in terms of different performance matrices.","PeriodicalId":313456,"journal":{"name":"International Journal of Scientific Research in Computer Science, Engineering and Information Technology","volume":"280 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135922521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Review of Artificial Intelligence Applications and Modelling AI Framework in Education System
Patel Karika Digesh
The potential of using artificial intelligence in education to enhance learning, assist teachers, and fuel more effective individualized learning is exciting, but also challenging. To have a meaningful conversation about AI in education, one must first push past science-fiction scenarios of computers and robots teaching our children, replacing teachers, and removing the human element from what is a fundamentally human activity. AI can automate grading so that tutors have more time to teach, and AI chatbots can communicate with students as teaching assistants. This research paper focuses on modelling the ingredients of AI within a framework for education. In the future, AI could serve as a personal virtual tutor for students, easily accessible at any time and in any place.
{"title":"Review of Artificial Intelligence Applications and Modelling AI Framework in Education System","authors":"None Patel Karika Digesh","doi":"10.32628/cseit2390542","DOIUrl":"https://doi.org/10.32628/cseit2390542","url":null,"abstract":"The potential of using artificial intelligence in education to enhance learning, assist teachers and fuel more effective individualized learning is exciting, but also a bit challenging. To even have an intelligent conversation about AI in education, one must first push past imaginary science-fiction scenarios of computers and robots teaching our children, replacing teachers and reducing the human element from what is a fundamentally human activity. AI can automate grading so that the tutor can have more time to teach. AI chatbot can communicate with students as a teaching assistant. This research paper focuses on modelling of AI ingredients in framework of education. AI in future can work as a personal virtual tutor for students, which will be easily accessible at any time and any place.","PeriodicalId":313456,"journal":{"name":"International Journal of Scientific Research in Computer Science, Engineering and Information Technology","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136198693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Role of Machine Learning in Managing Crowd Intelligence
Mohit Suthar, Sunil Sharma
Machine learning is one of the essential technologies now prevalent in almost every sector of business and education, as people gain access to more advanced technologies and richer learning data. Machine learning plays a key role in monitoring and facilitating various aspects of crowd intelligence, including identifying an effective workflow, collecting responses from individuals about that workflow, and testing methods that enable crowdsourcing of tasks. Various machine learning methods are adopted to improve career trajectories and accelerate the growth of business firms. One of the best methods available for analysing data, widely used by professionals, is crowd-powered machine learning, which facilitates the automated building of analytical models. The following research discusses crowd-powered machine learning and evaluates its intelligent management. Furthermore, the research discusses the role played by machine intelligence in the management of crowd intelligence in AI. The research also highlights various methods and techniques for understanding the role of machine learning in the effective management of crowd intelligence.
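One concrete task mentioned above, collecting responses from individuals, is usually followed by aggregating those responses per item. A minimal sketch of weighted label aggregation follows; the worker weights are hypothetical reliability scores invented for the example.

```python
from collections import defaultdict

def aggregate_crowd_labels(responses, weights=None):
    """Pick a consensus label per item from crowd responses.

    responses: {item_id: [(worker_id, label), ...]}
    weights:   optional {worker_id: reliability}; unknown workers count as 1.0.
    """
    weights = weights or {}
    result = {}
    for item, votes in responses.items():
        score = defaultdict(float)
        for worker, label in votes:
            score[label] += weights.get(worker, 1.0)
        result[item] = max(score, key=score.get)
    return result

# Hypothetical crowd responses for two labelling tasks.
responses = {
    "task-1": [("w1", "spam"), ("w2", "ham"), ("w3", "spam")],
    "task-2": [("w1", "ham"), ("w2", "ham"), ("w3", "spam")],
}
print(aggregate_crowd_labels(responses))
```

A crowd-powered learning loop would re-estimate the worker weights from agreement statistics and feed the consensus labels back into model training.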
{"title":"Role of Machine Learning in Managing Crowd Intelligence","authors":"None Mohit Suthar, None Sunil Sharma","doi":"10.32628/cseit2390525","DOIUrl":"https://doi.org/10.32628/cseit2390525","url":null,"abstract":"Machine learning is one of the essential technologies that is prevailing nowadays in almost every sector of business and education. People are becoming more advanced and developed gaining higher levels of technologies and learning data. Machine learning plays a key role in monitoring and facilitating various aspects of crowd intelligence which includes identification of a good level of workflow, collecting responses from individuals regarding workflow, and testing of various methods that can enable in crowdsourcing of the task. Various methods are adopted under machine learning to improvise and increase the demanded track of career and growth pace of business firms. One of the best methods which are available for analysing data and used by professionals is crowd-powered machine learning which in turn facilitates in automation of the building of analytical models. The following research is also based on a similar aspect in which discussion is been made regarding crowd-powered machine learning as well and an evaluation of the intelligent management of crowd-powered machine learning is also ascertained. Furthermore, the research also discusses the role played by machine intelligence in the management of crowd intelligence in AI. 
The research has also highlighted the various methods as well as techniques in order to understand the role of machine learning in the effective management of crowd intelligence.","PeriodicalId":313456,"journal":{"name":"International Journal of Scientific Research in Computer Science, Engineering and Information Technology","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136107532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A System for Diagnosing Alzheimer’s Disease from Brain MRI Images Using Deep Learning Algorithm
S. Neelavthi, P. Arunkumar
Owing to the brain's vulnerability, the complexity of the procedures involved, and the high costs, disorders of the brain are among the most challenging diseases to treat, and even then the outcome is unpredictable: the procedure itself may not be successful. One of the most prevalent brain conditions in adults, hypertension, can cause varying degrees of memory loss and forgetfulness, depending on each patient's situation. For these reasons, it is crucial to characterize the memory loss, determine the patient's level of decline, and identify its cause; brain MRI scans are used to identify Alzheimer's disease. In this thesis, we discuss methods and approaches for diagnosing Alzheimer's disease using deep learning. The suggested approach is intended to enhance patient care, lower expenses, and enable quick and accurate analysis in large-scale investigations. Modern deep learning techniques have recently demonstrated human-level performance in various domains, including medical image processing. We propose a deep convolutional network for diagnosing Alzheimer's disease based on the analysis of brain MRI data. Our model outperforms other current early-detection techniques because it can distinguish between different stages of Alzheimer's disease.
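The basic building block of the proposed deep convolutional network is the 2-D convolution. A minimal stdlib-only sketch over a tiny synthetic patch follows (like most CNN libraries, it actually computes cross-correlation; the patch and kernel are invented for the example).

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution (cross-correlation, as in most CNN frameworks)."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = []
    for i in range(oh):
        row = []
        for j in range(ow):
            acc = 0.0
            for a in range(kh):
                for b in range(kw):
                    acc += image[i + a][j + b] * kernel[a][b]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge detector applied to a tiny synthetic intensity patch:
# the kernel responds where pixel values jump from 0 to 1 across columns.
patch = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_kernel = [[-1, 1], [-1, 1]]
print(conv2d(patch, edge_kernel))  # peaks at the column where the edge sits
```

A full network stacks many such learned kernels with nonlinearities and pooling before the final classification layers.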
{"title":"A System for Diagnosing Alzheimer’s Disease from Brain MRI Images Using Deep Learning Algorithm","authors":"None S. Neelavthi, None P. Arunkumar","doi":"10.32628/cseit2390530","DOIUrl":"https://doi.org/10.32628/cseit2390530","url":null,"abstract":"In addition to their vulnerability, the complexity of the operations, and the high expenses, disorders of the brain are one of the most challenging diseases to treat. However, because the outcome is unpredictable, the procedure itself does not need to be successful. One of the most prevalent brain diseases in adults, hypertension, can cause varying degrees of memory loss and forgetfulness. Depending on each patient's situation. For these reasons, it's crucial to define memory loss, determine the patient's level of decline, and determine his brain MRI scans are used to identify Alzheimer's disease. In this thesis, we discuss methods and approaches for diagnosing Alzheimer's disease using deep learning. The suggested approach is utilized to enhance patient care, lower expenses, and enable quick and accurate analysis in sizable investigations. Modern deep learning techniques have lately successfully demonstrated performance at the level of a human in various domains, including medical image processing. We propose a deep convolutional network for diagnosing Alzheimer's disease based on the analysis of brain MRI data. 
Our model outperforms other models for early detection of current techniques because it can distinguish between different stages of Alzheimer's disease.","PeriodicalId":313456,"journal":{"name":"International Journal of Scientific Research in Computer Science, Engineering and Information Technology","volume":"367 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136107531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Entrance-Q bank using Mobile Application Development
Manasvi Malhar Sudershan
Student life is all about gaining knowledge and implementing it. There are many competitive exams, and students explore their interests and skills to choose their own path. Entrance Q-Bank is the best way to prepare: it helps you find previous years' question papers so you can test and practise accordingly. Such papers are a treasure, and getting your hands on them is no less than hitting the lottery. The app helps you manage your time efficiently and makes you more confident during exams. We chose this as our project because, instead of hunting across different websites, having a single app is more convenient and saves time. On the whole, we hope to deliver an app that saves time and builds confidence. We created this application with Android Studio, using XML for the user interface and Java for the backend.
{"title":"Entrance-Q bank using Mobile Application Development","authors":"None Manasvi Malhar Sudershan","doi":"10.32628/cseit2390532","DOIUrl":"https://doi.org/10.32628/cseit2390532","url":null,"abstract":"A Student life is all about gaining knowledge and implementing it. We have many competitive exams out there and students explore themselves and chooses the path as per their interests and skills. Entrance Q-Bank is the best to proceed. It helps you to find previous years question papers where you could test and practice accordingly It is nothing but treasure, so if you can get your hands on previous year papers then it is no less than a hitting a lottery. It manages your time efficiently. It also makes you confident during exams. We choose this as our project because, instead of laying hands on different websites, having an app makes it more advantageous and time saving. On the whole, we hope to implement an app which saves time and make you more confident. We created this application with Android Studio, XML for the User Interface and Java for the backend.","PeriodicalId":313456,"journal":{"name":"International Journal of Scientific Research in Computer Science, Engineering and Information Technology","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136107533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing Card Transaction Security through Cyber Security
Manasvi Malhar Sudershan, Vasundhara Rao, SVS Harshitha, Swetha Mukka, Sai Varshan
This paper highlights the security issues associated with credit cards and underscores the crucial role of encryption in mitigating the risk of credit or debit card data theft. Credit card encryption encompasses safeguarding the card itself, securing the terminal used for card scanning, and ensuring the protection of card information during transmission between the terminal and a backend computer system. The encryption mechanism is specifically engineered to validate and limit access to card security features. In our project, we developed a web application using VS Code, employing HTML for the frontend and PHP for the backend, and implemented AES encryption as a robust security measure.
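The abstract outlines the design (protecting the card, the terminal, and the data in transit, with AES encryption in a PHP backend) but gives no code. Below is a minimal Python sketch of the in-transit protection step only. Since Python's standard library ships no AES implementation, the sketch substitutes an HMAC-SHA256 integrity tag for the paper's AES encryption; all names here (`mask_pan`, `make_message`, `SECRET_KEY`) are illustrative assumptions, not the authors' implementation.

```python
import hashlib
import hmac
import json
import secrets

SECRET_KEY = secrets.token_bytes(32)  # hypothetical key shared by terminal and backend

def mask_pan(pan: str) -> str:
    """Show only the last four digits of a card number for display/logging."""
    return '*' * (len(pan) - 4) + pan[-4:]

def make_message(payload: dict, key: bytes) -> dict:
    """Serialize a transaction and attach an HMAC-SHA256 tag so the
    backend can detect any modification of the payload in transit."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {'body': body.decode(), 'tag': tag}

def verify_message(msg: dict, key: bytes) -> bool:
    """Recompute the tag over the received body and compare in constant time."""
    expected = hmac.new(key, msg['body'].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg['tag'])

txn = {'pan_masked': mask_pan('4111111111111111'), 'amount': '49.99'}
msg = make_message(txn, SECRET_KEY)
print(verify_message(msg, SECRET_KEY))       # intact payload verifies
tampered = dict(msg, body=msg['body'].replace('49.99', '9.99'))
print(verify_message(tampered, SECRET_KEY))  # modified payload is rejected
```

In a real deployment the body would additionally be encrypted (the paper uses AES) rather than only integrity-tagged, and the key would come from a key-management system rather than being generated per run.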
DOI: 10.32628/cseit2390531 | Published: 2023-09-10 | Journal Article
Citations: 0
Enhancing Accessibility with LSTM-Based Sign Language Detection
Azees Abdul, Adithya Valapa, Abdul Kayom Md Khairuzzaman
Sign language serves as a vital means of communication for the deaf and hard of hearing community. However, identifying sign language poses a significant challenge due to its complexity and the lack of a standardized global framework. Recent advances in machine learning, particularly Long Short-Term Memory (LSTM) algorithms, offer promise in the field of sign language gesture recognition. This research introduces an innovative method that leverages LSTM, a type of recurrent neural network designed for processing sequential input. Our goal is to create a highly accurate system capable of anticipating and reproducing sign language motions with precision. LSTM's unique capabilities enhance the recognition of complex gestures by capturing the temporal relationships and fine details inherent in sign language. The results of this study demonstrate that LSTM-based approaches outperform existing state-of-the-art techniques, highlighting the effectiveness of LSTM in sign language recognition and their potential to facilitate communication between the deaf and hearing communities.
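The abstract names the technique (an LSTM capturing temporal relationships across a gesture sequence) without architecture details. The pure-Python sketch below steps a single LSTM cell over a toy sequence of per-frame feature vectors to show the gate arithmetic that carries information across time steps; the dimensions, random weights, and the interpretation of inputs as keypoint features are illustrative assumptions, not the authors' model.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step on plain lists.
    x: input vector; h_prev, c_prev: previous hidden and cell states.
    W, b: weight matrices and bias vectors for gates 'f', 'i', 'o', 'g'."""
    concat = h_prev + x  # [h_{t-1}; x_t]
    def affine(name):
        return [sum(wij * v for wij, v in zip(row, concat)) + bj
                for row, bj in zip(W[name], b[name])]
    f = [sigmoid(z) for z in affine('f')]    # forget gate
    i = [sigmoid(z) for z in affine('i')]    # input gate
    o = [sigmoid(z) for z in affine('o')]    # output gate
    g = [math.tanh(z) for z in affine('g')]  # candidate cell state
    c = [ft * ct + it * gt for ft, ct, it, gt in zip(f, c_prev, i, g)]
    h = [ot * math.tanh(ct) for ot, ct in zip(o, c)]
    return h, c

random.seed(0)
H, X = 3, 4  # hidden size; input size (e.g. per-frame keypoint features)
W = {k: [[random.uniform(-0.5, 0.5) for _ in range(H + X)] for _ in range(H)]
     for k in 'fiog'}
b = {k: [0.0] * H for k in 'fiog'}

h, c = [0.0] * H, [0.0] * H
sequence = [[random.uniform(-1, 1) for _ in range(X)] for _ in range(5)]
for x in sequence:  # the final h summarizes the whole gesture sequence
    h, c = lstm_step(x, h, c, W, c and b)
print(len(h), all(-1.0 < v < 1.0 for v in h))
```

In practice one would use a deep-learning framework's LSTM layer and train on labeled gesture sequences; this sketch only makes the recurrence explicit.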
DOI: 10.32628/cseit2390517 | Published: 2023-09-09 | Journal Article
Citations: 0