Chandrakant D. Prajapati, Asha K. Patel, Dr. Krupa J. Bhavsar
This review paper provides a concise overview of Data Mining, a multidisciplinary field focused on extracting valuable insights and patterns from extensive datasets. It highlights the use of statistical analysis, machine learning, and pattern recognition techniques to discover hidden relationships and trends within data. The paper emphasizes data mining's significance as a powerful technology that extracts predictive information from large databases, enabling businesses to prioritize crucial data. It showcases how data mining tools predict future trends, empowering proactive, knowledge-driven decision-making. Furthermore, it discusses the advantages of data mining over retrospective tools, offering automated, prospective analyses that resolve complex business questions efficiently and uncover hidden patterns and predictive information beyond what human analysts would anticipate. The core concepts of data mining, the challenges encountered, data analysis techniques, and their profound impact on various domains are also addressed. The paper thus offers a comprehensive overview of data mining's importance, applications, and transformative potential in modern data-driven decision-making processes.
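As a concrete illustration of the kind of pattern discovery the paper surveys, the following sketch counts frequent item pairs across transaction baskets, the core step of association-rule miners such as Apriori. The baskets and support threshold are invented for illustration; this is not from the paper itself.

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(transactions, min_support):
    """Return item pairs whose support (fraction of transactions
    containing both items) meets the given threshold."""
    counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            counts[pair] += 1
    n = len(transactions)
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

baskets = [
    ["bread", "milk"],
    ["bread", "milk", "eggs"],
    ["milk", "eggs"],
    ["bread", "milk"],
]
pairs = frequent_pairs(baskets, min_support=0.5)
# ("bread", "milk") co-occurs in 3 of 4 baskets, so its support is 0.75.
```

A real miner would iterate this counting step over itemsets of growing size, pruning candidates whose subsets are already infrequent.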
Impact and Challenges of Data Mining: A Comprehensive Analysis. Chandrakant D. Prajapati, Asha K. Patel, Dr. Krupa J. Bhavsar. International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2024-07-25. https://doi.org/10.32628/cseit241049
Error-free communication is a central concern in advanced electronic circuits. Errors introduced during transmission can result in incorrect information being received, so error-correcting codes are frequently employed to safeguard the data stored in memory and registers. The Hamming code is one such forward error-correcting code; it uses either an even-parity or an odd-parity check. Here, we implemented the Hamming code using the even-parity check approach. Compared to a simple parity check, the Hamming code is more capable, since it can locate and correct single-bit errors rather than merely detect them. The Hamming code is implemented in Xilinx using a transmission scheme of 7 data bits with 4 redundant bits, and is also modeled in the DSCH (Digital Schematic Editor & Simulator) program. An additional parity bit is employed to detect double-bit errors. For error detection and correction, we used the SEC-DED (single-error correction, double-error detection) method.
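The paper's hardware design uses 7 data bits with 4 redundant bits; as a minimal software illustration of the same even-parity principle, the smaller classic Hamming(7,4) variant (4 data bits, 3 parity bits) can be sketched as follows. This is an illustrative model of the coding scheme, not the authors' Xilinx implementation.

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1,d2,d3,d4] into a 7-bit codeword.
    Even-parity bits sit at positions 1, 2, 4 (1-indexed)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(c):
    """Return (corrected codeword, error position); position 0 means no error."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s4   # the syndrome is the 1-indexed error position
    fixed = c[:]
    if pos:
        fixed[pos - 1] ^= 1
    return fixed, pos
```

Extending this to SEC-DED adds one overall parity bit over the whole codeword: a nonzero syndrome with correct overall parity then signals an uncorrectable double-bit error.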
Design and Implementation of Hamming Code with Error Correction Using Xilinx. Ms. Delphine Mary. P, Simran. A. International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2024-07-25. https://doi.org/10.32628/cseit24104117
In this study, a pansharpening process was conducted to merge the color information of low-resolution RGB images with the details of high-resolution panchromatic images, yielding higher quality images. Weight optimization was performed using the Curvelet Transform and the Multi Population Based Differential Evolution (MDE) algorithm. The proposed method was tested on a Landsat ETM satellite image; for Landsat ETM data, the RGB bands have a resolution of 30 m, while the panchromatic band has a resolution of 15 m. To evaluate performance, the proposed MDE-optimized Curvelet Transform-based pansharpening method was compared with the classical IHS, Brovey, PCA, Gram-Schmidt, and Simple Mean methods using the RMSE, SAM, COC, RASE, QAVE, SID, and ERGAS metrics. The results indicate that the proposed method outperforms the classical methods in both visual quality and numerical accuracy.
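The Simple Mean baseline listed among the classical methods, along with the RMSE metric used in the comparison, can be sketched as below. The array shapes and nearest-neighbour upsampling are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def simple_mean_pansharpen(rgb_lowres, pan):
    """Simple Mean fusion: upsample each low-res band to the pan grid
    (nearest neighbour here) and average it with the panchromatic band."""
    factor = pan.shape[0] // rgb_lowres.shape[0]
    up = np.repeat(np.repeat(rgb_lowres, factor, axis=0), factor, axis=1)
    return (up + pan[..., None]) / 2.0    # broadcast pan across the 3 bands

def rmse(reference, fused):
    """Root-mean-square error between a reference image and the fused result."""
    return float(np.sqrt(np.mean((reference - fused) ** 2)))

rgb = np.ones((2, 2, 3))        # toy 30 m stand-in
pan = np.zeros((4, 4))          # toy 15 m stand-in, twice the resolution
fused = simple_mean_pansharpen(rgb, pan)
```

The curvelet-based method of the paper replaces this naive averaging with weighted fusion of curvelet coefficients, with the weights tuned by the MDE optimizer against metrics like the RMSE above.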
Enhanced Pansharpening Using Curvelet Transform Optimized by Multi Population Based Differential Evolution. Mustafa Hüsrevoğlu, Ahmet Emin Karkınlı. International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2024-07-23. https://doi.org/10.32628/cseit24104116
Muhammad Nadeem, Wei Zhang, Sarwat Aslam, Liaqat Ali, Abdul Majid
Alzheimer's is a very challenging brain disease to recognize, diagnose, and treat correctly when it appears in its earliest forms. The primary contribution of this study is an evaluation of machine learning models, in particular Random Forest and Support Vector Machine (SVM), which are well suited to identifying and staging Alzheimer's disease from multimodal data sources. The aim was to develop well-performing predictive models for early diagnosis by combining neuroimaging data (MRI/PET images), imaging-based biomarkers including structural and functional measures from MRI/PET image analysis, and subject-specific demographics such as age, together with clinical features, obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The methodology focuses on data pre-processing, feature selection, and model building using supervised learning algorithms. The Random Forest model achieved an accuracy of 78%, performing well on some classes while showing uneven performance across others. SVM reached an accuracy of 61%; its performance is good on some classes but unreliable at identifying samples from others. The findings underscore the capabilities and limits of these machine learning models in identifying Alzheimer's disease, and highlight the importance of feature engineering, data pre-processing, and model tuning in improving performance and correcting class imbalance and misclassification.
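A minimal sketch of the two classifiers compared above, trained on synthetic stand-in features rather than the ADNI data; the dataset shape, class count, and hyperparameters are assumptions for illustration only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for fused multimodal features (imaging biomarkers
# plus demographics), with three classes standing in for disease stages.
X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf", random_state=0).fit(X_tr, y_tr)
rf_acc = rf.score(X_te, y_te)
svm_acc = svm.score(X_te, y_te)
```

In practice, per-class metrics (precision, recall, confusion matrix) matter more than overall accuracy here, since the paper's central finding is exactly that accuracy is uneven across classes.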
Multimodal Data Integration for Early Alzheimer's Detection Using Random Forest and Support Vector Machines. Muhammad Nadeem, Wei Zhang, Sarwat Aslam, Liaqat Ali, Abdul Majid. International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2024-07-22. https://doi.org/10.32628/cseit241047
A large pharmaceutical corporation running a complex IT infrastructure such as SAP ERP typically faces a substantial volume of GMP and serialization data annually, with inquiries numbering in the hundreds of thousands. These inquiries, whether initiated over the phone or online via integration platforms, seek assistance with various issues. Enterprise resource planning (ERP) software streamlines business processes by integrating technology, services, and human resources across interconnected applications. This research proposes an intelligent system to streamline data volume and analysis for SAP ERP. The system aims to automate responses to user queries, reducing the time required for issue investigation and resolution and improving responsiveness to users. Employing machine learning algorithms, it efficiently interprets and classifies text across multiple categories, facilitating accurate question comprehension, and it utilizes a specialized framework to retrieve relevant evidence, ensuring the delivery of optimal responses. Furthermore, its conversational AI capabilities enable the creation of chatbots, fostering collaborative real-time problem-solving among user groups.
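The query-classification component described above might look like the following sketch, using TF-IDF features and logistic regression. The ticket texts and the two categories are invented for illustration; the paper does not specify its model or taxonomy.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled support queries (not from the paper).
queries = [
    "serial number range exhausted for batch",
    "GMP audit trail report missing",
    "cannot print serialization labels",
    "batch release blocked by quality status",
    "label printer connection timeout",
    "audit log export fails",
]
labels = ["serialization", "gmp", "serialization", "gmp", "serialization", "gmp"]

# TF-IDF turns each query into a sparse term-weight vector; the linear
# model then routes new queries to the category with the highest score.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(queries, labels)
pred = clf.predict(["serialization label printing error"])[0]
```

Routing a query to the right category is the first step; the retrieval framework the paper mentions would then search that category's knowledge base for supporting evidence.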
The Future of Enterprise Resource Planning (ERP): Harnessing Artificial Intelligence. Gaurav Kumar. International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2024-07-11. https://doi.org/10.32628/cseit24104112
SpaceX and Elon Musk are leading a long-term initiative called Starlink to address inequities in rural broadband Internet access. To provide constant, high-speed Internet access worldwide, the project aims to launch thousands of smallsat-class satellites into low Earth orbit (LEO) as part of a mega-constellation. SpaceX believes that by utilizing such low orbits its technology can surpass the competition: in contrast to conventional geosynchronous satellite Internet infrastructure, Starlink promises clients reduced latency and improved connection quality (Walker & Elliott, 2021). The purpose of this study is to examine how the Thai public views the Starlink satellite project. Using a quantitative methodology, an online survey gathered data from a convenience sample of 1,258 people in Thailand, and binary regression analysis was applied to the data. The results showed that factors such as gender, age, use of a computer, laptop, tablet, or wearable device, length of Internet use, mobile Internet, Instagram, TikTok, and YouTube could all be used to characterize how the Thai public felt about the Starlink satellite project. A key implication is that Starlink needs a well-crafted strategy to encourage customers to adopt satellite Internet in countries where fiber Internet is more readily available and reasonably priced.
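Binary (logistic) regression of the kind applied above can be sketched on synthetic survey-like data. The predictors, coefficients, and outcome below are invented for illustration and bear no relation to the paper's actual estimates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Hypothetical predictors: age in years, plus 0/1 usage indicators.
age = rng.integers(18, 70, n).astype(float)
mobile = rng.integers(0, 2, n)
tiktok = rng.integers(0, 2, n)

# Synthetic binary outcome: a favourable view is made more likely
# for mobile-Internet and TikTok users (assumed effect sizes).
logit = -1.0 + 1.2 * mobile + 0.8 * tiktok - 0.01 * (age - 40)
favourable = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([age, mobile, tiktok])
model = LogisticRegression(max_iter=1000).fit(X, favourable)
coef_mobile = float(model.coef_[0][1])   # estimated log-odds effect of mobile use
```

In a survey analysis, each fitted coefficient is read as the change in log-odds of a favourable view per unit change in the predictor, with significance tests deciding which factors characterize public perception.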
Starlink Satellite Project in a Developing Country. Vivek Reddy Gadipally. International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2024-06-15. https://doi.org/10.32628/cseit2410330
Sharma Vishalkumar Sureshbhai, Dr. Tulsidas Nakrani
Sentiment analysis is one of the most active areas of study within Natural Language Processing (NLP). Generative AI can be applied to sentiment analysis by generating text that reflects the sentiment or emotional tone of a given input. The process typically involves training a generative model on a large dataset of text examples labeled with sentiments (positive, negative, neutral, etc.); once trained, the model can generate new text based on the learned patterns, providing an automated way to analyze sentiment in user reviews, comments, or any other textual data. The main goal of this research topic is to identify the emotions and opinions of users or customers from text. Although much research has been done in this area using a variety of models, sentiment analysis is still regarded as a difficult problem with many unresolved issues, among them slang terms, new language varieties, and grammatical and spelling errors. This work reviews the literature on applying multiple deep learning methods to a range of datasets. Some twenty-one contributions, covering a variety of sentiment analysis applications, are surveyed. The analysis first examines the kinds of deep learning algorithms being utilized and the contribution of each work, then identifies the kind of data used, and finally assesses each work's performance metrics and experimental setting; the conclusion identifies relevant research gaps and challenges. This will help identify the under-explored applications for which sentiment analysis is most needed in future studies.
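As a self-contained stand-in for the deep models the survey reviews, the following sketch trains a small multilayer perceptron on TF-IDF features over toy labelled reviews. The data, labels, and architecture are illustrative assumptions; the surveyed works use far larger corpora and deeper networks.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Toy labelled reviews (invented), following the positive/negative
# labelling scheme described above.
reviews = [
    "absolutely loved this phone, great battery",
    "terrible screen, waste of money",
    "fantastic camera and fast delivery",
    "awful support, broke in a week",
    "great value, would buy again",
    "very disappointed, stopped working",
]
sentiments = ["positive", "negative", "positive",
              "negative", "positive", "negative"]

model = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(reviews, sentiments)
pred = model.predict(["great battery and camera"])[0]
```

The open problems the review highlights (slang, misspellings, new language varieties) show up here as out-of-vocabulary tokens that the TF-IDF stage simply drops, which is one motivation for subword and pretrained-embedding approaches in the deeper models.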
A Literature Review: Enhancing Sentiment Analysis of Deep Learning Techniques Using Generative AI Model. Sharma Vishalkumar Sureshbhai, Dr. Tulsidas Nakrani. International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2024-06-15. https://doi.org/10.32628/cseit24103204
Medical image classification, a critical component in medical diagnostics, has significantly advanced through the integration of machine learning (ML) and deep learning (DL) techniques. This review comprehensively explores the evolution, methodologies, and applications of ML and DL in medical image classification. Traditional ML methods, including support vector machines and decision trees, have provided a foundation for early advancements by utilizing handcrafted features. However, the advent of DL, particularly convolutional neural networks (CNNs), has revolutionized the field by enabling automatic feature extraction and achieving superior performance. This review examines various DL architectures, such as ResNet, VGG, and Inception, highlighting their contributions to tasks like tumor detection, organ segmentation, and disease classification. Furthermore, it addresses challenges like data scarcity, interpretability, and computational demands, discussing potential solutions like data augmentation, transfer learning, and model optimization. The review also considers the ethical implications and the need for robust validation to ensure clinical applicability. Through a comparative analysis of existing studies, this review underscores the transformative impact of ML and DL on medical imaging, emphasizing the continuous need for innovation and interdisciplinary collaboration to enhance diagnostic accuracy and patient outcomes.
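The automatic feature extraction that CNNs perform rests on 2-D convolution. A minimal NumPy sketch, using a toy image and a standard Sobel edge kernel rather than any reviewed model, shows how a learned filter produces a feature map:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (strictly, cross-correlation, as used
    in CNN layers): slide the kernel over the image and take dot products."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "scan" with a bright region on the right half.
scan = np.zeros((5, 6))
scan[:, 3:] = 1.0
# A vertical-edge detector; in a CNN such kernels are learned, not fixed.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
feature_map = conv2d(scan, sobel_x)   # responds strongly at the boundary
```

Architectures like ResNet, VGG, and Inception stack many such convolutions with nonlinearities and pooling, so the filters at deeper layers respond to increasingly abstract structures (edges, textures, then organ- or lesion-like patterns).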
A Review on Machine Learning and Deep Learning Methods on Medical Image Classification. Dr. Sheshang Degadwala, Dhairya Vyas Degadwala. International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2024-06-15. https://doi.org/10.32628/cseit24103205
This survey provides a comprehensive analysis of the systematic differences and advancements between deep learning (DL) and traditional machine learning (ML) models. By examining a wide array of research papers, the study highlights the unique strengths and applications of both methodologies. Deep learning, with its multi-layered neural networks, excels in handling large, unstructured datasets, making significant strides in image and speech recognition, natural language processing, and complex pattern recognition tasks. Conversely, traditional machine learning models, which rely on feature extraction and simpler algorithms, remain highly effective in structured data scenarios such as classification, regression, and clustering problems. The survey elucidates the criteria for choosing between DL and ML, focusing on factors like data size, computational resources, and specific application requirements. Furthermore, it discusses the evolving landscape of hybrid models that integrate DL and ML techniques to leverage the strengths of both approaches. This analysis provides valuable insights for researchers and practitioners aiming to deploy the most suitable AI models for their specific needs, emphasizing the importance of contextual understanding in the rapidly advancing field of artificial intelligence.
Survey on Systematic Analysis of Deep Learning Models Compare to Machine Learning. Dr. Sheshang Degadwala, Dhairya Vyas Degadwala. International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2024-06-15. https://doi.org/10.32628/cseit24103206
Miss Nikita C. Nandeshwar, Prof. Dr. K.A. Waghmare, Prof. A.V. Deorankar
The rise in online banking fraud, driven by the underground malware economy, underscores the crucial need for robust fraud analysis systems. Regrettably, the majority of existing approaches rely on black-box models that lack transparency and fail to provide justifications to analysts. Additionally, the scarcity of Internet banking data available to the scientific community hinders the development of effective methods. This paper presents a decision support system designed to identify and thwart fraud in online banking transactions. The chosen approach applies a Random Forest decision tree model, a supervised machine learning technique known for its effectiveness in fraud detection within online banking systems, yielding substantial real-world impact. Constant monitoring of both the system and the data ensures optimal performance, enabling timely responses to deviations. The overarching objective of the system is to furnish analysts with a powerful decision support tool capable of preempting financial crimes before they occur.
{"title":"Research on Advance Machine Learning Based Decision Support System for Frauds Detection and Prevention in Online Banking System","authors":"Miss Nikita C. Nandeshwar, Prof. Dr. K.A. Waghmare, Prof. A.V. Deorankar","doi":"10.32628/cseit24103131","DOIUrl":"https://doi.org/10.32628/cseit24103131","url":null,"abstract":"The rise in online banking fraud, driven by the underground malware economy, underscores the crucial need for robust fraud analysis systems. Regrettably, the majority of existing approaches rely on black box models that lack transparency and fail to provide justifications to analysts. Additionally, the scarcity of available Internet banking data for the scientific community hinders the development of effective methods. This paper presents a decision support system meticulously crafted to identify and thwart fraud in online banking transactions. The chosen approach involves the application of a Random Forest decision tree model—a supervised machine learning technique renowned for its effectiveness in enhancing fraud detection within online banking systems, yielding substantial real-world impact. Constant monitoring of both the system and data ensures optimal performance, enabling timely responses to deviations. 
The overarching objective of the system is to furnish analysts with a powerful decision support tool capable of preempting financial crimes before they occur.","PeriodicalId":313456,"journal":{"name":"International Journal of Scientific Research in Computer Science, Engineering and Information Technology","volume":"53 18","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141339218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}