
International Journal of Engineering and Computer Science: Latest Publications

A FRAMEWORK FOR MANAGEMENT OF LEAKS AND EQUIPMENT FAILURE IN OIL WELLS
Pub Date : 2024-07-23 DOI: 10.18535/ijecs/v13i07.4842
Dennis, T. L., A. V I E, Emmah, V. T.
Oil is a precious and critical natural energy resource used in numerous ways to drive industries worldwide. Extracting oil from underground reservoirs is a complex process that requires extensive planning, careful execution, and risk management. Detecting undesirable events such as leaks and equipment failure in oil wells is crucial for preventing safety hazards, environmental damage, and financial losses, yet identifying such issues in a timely and accurate manner remains challenging. This paper describes a hybrid model for detecting undesirable events in oil and gas wells that combines Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) techniques. A CNN is employed to extract relevant features from primary sensor data collected from various wells: its convolutional layers and pooling operations identify patterns and spatial dependencies in the data. The extracted features are then fed into an LSTM network, which captures temporal dependencies and learns long-term patterns. By utilizing the LSTM, the model can effectively analyse the time-series data and detect the occurrence of undesirable events, such as abnormal pressure, fluid leakage, or equipment malfunction, in oil and gas wells. The hybrid model, leveraging the CNN for feature extraction and the LSTM for event detection, presents a comprehensive approach to enhancing well monitoring and preventing potential hazards. Accuracy rates of 99.8% in training and 99.78% in testing demonstrate the efficacy of the proposed model in identifying and classifying undesirable events in oil and gas wells.
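A minimal sketch of the CNN-to-LSTM pipeline the abstract describes, in Keras. The window length, sensor count, layer sizes, and the synthetic training data are illustrative assumptions; the paper does not publish its exact architecture.

```python
# Sketch of a CNN+LSTM hybrid for well-event detection (assumed shapes/sizes).
import numpy as np
from tensorflow.keras import layers, models

WINDOW, N_SENSORS = 128, 8  # assumed: 128 time steps from 8 well sensors

model = models.Sequential([
    layers.Input(shape=(WINDOW, N_SENSORS)),
    # CNN stage: convolution + pooling extract local patterns in the signals
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    # LSTM stage: capture temporal dependencies in the extracted features
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),  # undesirable event / normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy data stands in for the labelled sensor windows used in the paper.
X = np.random.rand(256, WINDOW, N_SENSORS).astype("float32")
y = np.random.randint(0, 2, size=(256,))
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```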
Citations: 0
Predictive Analytics for Demand Forecasting: A Deep Learning-based Decision Support System
Pub Date : 2024-07-21 DOI: 10.18535/ijecs/v13i07.4853
Saurabh Kumar, Mr. Amar Nayak
Demand forecasting is a critical component of supply chain management and business operations, enabling organizations to make informed decisions about production, inventory management, and resource allocation. In recent years, predictive analytics has emerged as a powerful tool for enhancing the accuracy and efficiency of demand forecasting. This review paper explores the transformative role of predictive analytics and deep learning in demand forecasting. It examines how these advanced techniques have evolved from traditional models based on past sales data, offering nuanced predictions through sophisticated statistical and machine learning methods. Deep learning, with its neural network structures, brings automatic feature learning, complex pattern handling, and scalability, enhancing forecasting in sectors like retail, manufacturing, and healthcare. The paper reviews various deep learning models, compares them with traditional methods, and discusses their impact on business operations and decision-making. It concludes by looking at future trends in predictive analytics and deep learning in demand forecasting.
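As a concrete illustration of the deep-learning forecasters the review surveys, here is a minimal LSTM that maps a window of past demand to the next period's value. The lookback length, layer size, and synthetic series are assumptions for illustration only.

```python
# Sketch of an LSTM demand forecaster over a sliding window (assumed sizes).
import numpy as np
from tensorflow.keras import layers, models

LOOKBACK = 12  # assumed: forecast from the previous 12 periods

def make_windows(series, lookback=LOOKBACK):
    # Stack overlapping windows; target is the value right after each window.
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., None], y  # add a feature axis for the LSTM

# Synthetic seasonal series stands in for historical sales data.
series = (np.sin(np.linspace(0, 20, 300)) +
          np.random.normal(0, 0.1, 300)).astype("float32")
X, y = make_windows(series)

model = models.Sequential([
    layers.Input(shape=(LOOKBACK, 1)),
    layers.LSTM(32),
    layers.Dense(1),  # point forecast of next-period demand
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
next_demand = model.predict(X[-1:], verbose=0)  # one-step-ahead forecast
```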
Citations: 0
A Model for Detection of Malwares on Edge Devices
Pub Date : 2024-07-21 DOI: 10.18535/ijecs/v13i07.4846
Nwagwu, C .B., Taylor O. E., Nwiabu N.D
Malware detection is a significant challenge in today's digital landscape. As new forms of malware are continuously developed, traditional detection techniques often fall short because they cannot detect these new strains. This paper introduces meaningful features that effectively capture various types of malware on edge devices, including viruses, worms, Trojans, and ransomware. The paper uses a model that implements a Random Forest classifier for feature selection and a support vector machine (SVM) for malware detection. Object-Oriented Analysis and Design (OOAD) was used as the design methodology, which involved identifying and modelling the different components of the system and their interactions. The system was developed in Python, with model deployment via Python Flask for web-based testing and execution. The experimental results demonstrate the effectiveness of the proposed system compared with existing systems: it achieves a detection accuracy of 99.98%, outperforming existing techniques. This paper presents a promising direction for improving malware detection with an SVM model and highlights the potential of collaborative learning approaches to overcome the challenges of traditional centralized approaches. The system also simulates an edge device performing malware detection: it measures the latency of each detection, reports whether that latency is high or low, and, after the simulation, plots a graph visualizing latency over multiple requests. The results show that the proposed model maintained low latency, between 0.15 s and 0.25 s, across multiple requests.
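A minimal sketch of the described pipeline: a Random Forest ranks features, the top-k are retained, and an SVM classifies malware, followed by a timing check echoing the paper's latency measurement. The synthetic feature matrix, k = 15, and the RBF kernel are assumptions, not the paper's published configuration.

```python
# Sketch: RF-based feature selection feeding an SVM malware classifier.
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((1000, 40))         # stand-in for extracted binary/behavioural features
y = rng.integers(0, 2, 1000)       # 1 = malware, 0 = benign

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Feature selection: keep the 15 most important features by RF importance.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
top = np.argsort(rf.feature_importances_)[::-1][:15]

svm = SVC(kernel="rbf").fit(X_tr[:, top], y_tr)
print("accuracy:", accuracy_score(y_te, svm.predict(X_te[:, top])))

# Latency check, mirroring the paper's edge-device simulation.
t0 = time.perf_counter()
svm.predict(X_te[:1, :][:, top])
print(f"single-sample latency: {time.perf_counter() - t0:.4f}s")
```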
Citations: 0
Data-Driven Approach to Automated Lyric Generation
Pub Date : 2024-07-21 DOI: 10.18535/ijecs/v13i07.4839
Jeyadev Needhi, D. Kk, Vishnu G, Ram Prasath G
This project leverages Recurrent Neural Networks (RNNs) to generate coherent and contextually relevant song lyrics. The methodology includes extensive text preprocessing and dataset creation, followed by the construction of a robust model featuring Embedding, Gated Recurrent Unit (GRU), Dense, and Dropout layers. The model is compiled and trained using the Adam optimizer, with checkpointing to monitor and optimize the training process. Upon successful training on a comprehensive lyrics dataset, the model is thoroughly evaluated and fine-tuned to enhance performance. Finally, the model generates new lyrics from a given seed, showcasing its ability to learn intricate linguistic patterns and structures, thereby offering a powerful tool for creative and original lyric composition.
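A minimal sketch of the described model, assuming a vocabulary size, context window, and toy token data, since the paper's exact hyperparameters are not given: Embedding, GRU, Dropout, and Dense layers compiled with Adam, a checkpoint callback, and greedy generation from a seed.

```python
# Sketch of a next-token lyric model: Embedding -> GRU -> Dropout -> Dense.
import numpy as np
from tensorflow.keras import layers, models, callbacks

VOCAB, SEQ_LEN = 5000, 40  # assumed vocabulary size and context window

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    layers.Embedding(VOCAB, 128),
    layers.GRU(256),
    layers.Dropout(0.3),
    layers.Dense(VOCAB, activation="softmax"),  # next-token distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Checkpointing, as in the paper's training process.
ckpt = callbacks.ModelCheckpoint("lyrics.weights.h5", save_weights_only=True)

# Dummy token sequences stand in for the preprocessed lyrics dataset.
X = np.random.randint(0, VOCAB, size=(512, SEQ_LEN))
y = np.random.randint(0, VOCAB, size=(512,))
model.fit(X, y, epochs=1, batch_size=64, callbacks=[ckpt], verbose=0)

def generate(seed_tokens, n_new=20):
    """Greedy generation from a seed, as in the paper's final step."""
    tokens = list(seed_tokens)
    for _ in range(n_new):
        ctx = np.array([([0] * SEQ_LEN + tokens)[-SEQ_LEN:]])  # left-pad seed
        tokens.append(int(model.predict(ctx, verbose=0).argmax()))
    return tokens
```

In practice, sampling from the softmax distribution (rather than the greedy argmax used here for brevity) yields more varied lyrics.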
Citations: 0
ENHANCE DOCUMENT VALIDATION UIPATH POWERED SIGNATURE VERIFICATION
Pub Date : 2024-07-17 DOI: 10.18535/ijecs/v13i07.4851
Mrs. K. Gowri, A. Aswath, A. P. Adarsh, R. S. K. Gowtham Balaji
Signatures are widely used as a means of personal identification and verification. Many documents, such as bank cheques and legal transactions, require signature verification, and signature-based verification of a large number of documents is a difficult and time-consuming task. Consequently, explosive growth has been observed in biometric verification and authentication systems based on quantifiable physical characteristics (fingerprints, hand geometry, face, ear, iris scan, or DNA) or behavioural features (gait, voice, etc.). As traditional identity verification methods such as tokens, passwords, and PINs suffer from fatal flaws and cannot satisfy security requirements, this paper considers a more reliable biometric feature: signature verification. We present a survey of signature verification systems, classifying and giving an account of the various approaches that have been proposed.
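The survey itself proposes no single algorithm, but here is a minimal sketch of one classical writer-independent approach such surveys cover: extract simple global features from a signature image and accept the questioned signature if it lies within a distance threshold of the reference. The features and threshold here are illustrative assumptions.

```python
# Sketch of feature-distance signature verification (illustrative features).
import numpy as np

def features(img: np.ndarray) -> np.ndarray:
    """img: 2-D array with ink pixels > 0. Returns simple global descriptors:
    [ink density, normalised centroid y, normalised centroid x, aspect ratio]."""
    ys, xs = np.nonzero(img)
    h, w = img.shape
    density = len(xs) / img.size
    aspect = (np.ptp(xs) + 1) / (np.ptp(ys) + 1)
    return np.array([density, ys.mean() / h, xs.mean() / w, aspect])

def verify(reference: np.ndarray, questioned: np.ndarray,
           threshold: float = 0.15) -> bool:
    """Accept the questioned signature if its features are close to the reference."""
    dist = float(np.linalg.norm(features(reference) - features(questioned)))
    return dist < threshold

# Toy usage with a random "ink" pattern standing in for a scanned signature.
ref = (np.random.rand(64, 128) > 0.9).astype(np.uint8)
print(verify(ref, ref))  # identical images verify trivially
```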
Citations: 0
Feature value quantization and reduction process for predicting heart attack possibility and the level of severity by a machine learning model
Pub Date : 2024-07-17 DOI: 10.18535/ijecs/v13i07.4831
Md Zawharul Islam, Md. Atahar Ishrak, A. H. M. Kamal
Heart disease is a prevalent condition that, if undiagnosed, can be deadly, and researchers have designed many machine learning models to predict it. In this study, we propose a model that selects fewer attribute columns for training and uses these chosen features to determine the severity of the heart problem. Correlation (via repeated heat maps) and information gain were used to select the features. We trained our model on the UCI Cleveland heart disease dataset, removing duplicate records to improve the accuracy score and encoding the categorical columns with one-hot (OH) encoding, which can improve prediction accuracy. Eight classifier algorithms are used in this process: Support Vector Machine, Logistic Regression, K-Nearest Neighbours, Naive Bayes, Decision Tree, Random Forest, AdaBoost, and XGBoost. Based on repeated heat-map correlation, we compare the accuracy score at each step; with this method, the AdaBoost classifier applied to the features selected from the fbs-row heat map achieves the highest accuracy for heart disease detection, 92%. Selecting features by information gain value and again comparing accuracy scores, both XGBoost and Logistic Regression reach 93.44%, although Logistic Regression requires less time than XGBoost. Accuracy, precision, recall, F1-score, sensitivity, specificity, and the AUC of ROC charts were used to evaluate the performance of the model. Overall, the results demonstrate that the model is reliable and accurate in identifying cardiac disease and its level of severity.
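A minimal sketch of the information-gain route described above: one-hot encode the categorical columns, rank features by mutual information (the usual estimator of information gain in scikit-learn), keep the top-k, and compare classifiers. The synthetic stand-in for the Cleveland data, the column names, and k = 6 are assumptions for illustration.

```python
# Sketch: information-gain feature selection + classifier comparison.
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({                      # synthetic stand-in for Cleveland data
    "age": rng.integers(30, 75, n),
    "trestbps": rng.integers(100, 180, n),
    "chol": rng.integers(150, 350, n),
    "fbs": rng.integers(0, 2, n),
    "cp": rng.integers(0, 4, n),         # categorical: chest pain type
    "thal": rng.integers(0, 3, n),       # categorical
    "target": rng.integers(0, 2, n),
}).drop_duplicates()                      # de-duplication, as in the paper

# OneHot(OH) encoding of the categorical columns.
X = pd.get_dummies(df.drop(columns="target"), columns=["cp", "thal"])
y = df["target"]

gain = pd.Series(mutual_info_classif(X, y, random_state=0), index=X.columns)
top = gain.nlargest(6).index             # assumed k = 6 selected features

X_tr, X_te, y_tr, y_te = train_test_split(X[top], y, test_size=0.2, random_state=0)
for clf in (LogisticRegression(max_iter=1000), AdaBoostClassifier(random_state=0)):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, accuracy_score(y_te, clf.predict(X_te)))
```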
Citations: 0
Enhancing Plant Disease Detection through Transfer Learning by Incorporating Memory-Augmented Networks and Meta-Learning Approaches
Pub Date : 2024-07-14 DOI: 10.18535/ijecs/v13i07.4852
Dr. Mohana Priya C
Transfer learning has revolutionized automated plant disease detection by leveraging pre-trained convolutional neural networks (CNNs) on large-scale datasets like ImageNet. This paper explores advanced methodologies in transfer learning, focusing on the integration of memory-augmented networks and meta-learning approaches. These enhancements aim to improve model adaptation to new disease types and environmental conditions, thereby enhancing accuracy and robustness in agricultural applications. The paper reviews existing literature, discusses methodologies, and suggests future research directions to advance the field of AI-driven plant pathology.  
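A minimal transfer-learning sketch of the kind the paper builds on: an ImageNet-pretrained backbone with a frozen body and a new classification head. MobileNetV2, the class count, and the dummy batch are illustrative choices; the paper's memory-augmented and meta-learning extensions are not reproduced here.

```python
# Sketch: transfer learning for plant-disease classification (assumed setup).
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

N_CLASSES = 10  # assumed number of plant-disease classes

base = MobileNetV2(include_top=False, weights="imagenet",
                   input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the pretrained features; train only the head

model = models.Sequential([
    base,
    layers.Dropout(0.2),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy batch in place of real leaf images.
X = np.random.rand(8, 224, 224, 3).astype("float32")
y = np.random.randint(0, N_CLASSES, size=(8,))
model.fit(X, y, epochs=1, verbose=0)
```

Freezing the backbone is the standard first stage; fine-tuning the top layers afterwards, as the paper's adaptation theme suggests, typically improves accuracy on the target crop dataset.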
Citations: 0
Performance Optimization of Voice-Assisted File Management Systems
Pub Date : 2024-07-14 DOI: 10.18535/ijecs/v13i07.4854
Jeyadev Needhi, Ram Prasath G, Vishnu G, D. Kk
In this paper, we present a novel approach for managing the file system in Linux using a voice assistant. Our system allows users to perform file system operations such as creating directories, renaming files, and deleting files by issuing voice commands. We develop a voice assistant using Python libraries and integrate it with the file system in Linux. The voice assistant is capable of understanding natural language and executing commands based on the user’s voice inputs. We conduct experiments to evaluate the performance of the system and demonstrate that our approach is effective and efficient in managing the file system using voice commands. Our system can enhance the accessibility and usability of the file system in Linux for individuals with disabilities or those who prefer a hands-free approach to file management.
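A minimal sketch of the command-dispatch core such a system needs: text produced by any speech-to-text engine is mapped to file-system operations. The command grammar below is an assumption; the paper does not publish its parser.

```python
# Sketch: dispatch transcribed voice commands to file-system actions.
import os
import shutil

def execute(command: str) -> str:
    """Map a transcribed command string to a file-system operation."""
    words = command.lower().split()
    if len(words) >= 3 and words[:2] == ["create", "directory"]:
        os.makedirs(words[2], exist_ok=True)
        return f"created directory {words[2]}"
    if len(words) >= 3 and words[:2] == ["delete", "file"]:
        os.remove(words[2])
        return f"deleted {words[2]}"
    if len(words) >= 5 and words[:2] == ["rename", "file"] and "to" in words:
        src, dst = words[2], words[words.index("to") + 1]
        shutil.move(src, dst)
        return f"renamed {src} to {dst}"
    return "command not recognised"

# Usage: the string would normally come from a speech recognizer's output.
print(execute("create directory demo"))
```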
Citations: 0
Machine Learning Algorithms for Predictive Maintenance in Autonomous Vehicles
Pub Date : 2024-07-11 DOI: 10.18535/ijecs/v13i01.4786
Chirag Vinalbhai Shah
The complexity and hazards of autonomous vehicle systems pose a significant challenge for predictive maintenance. Since failures in autonomous vehicle software and hardware could lead to life-threatening crashes, maintenance should be performed regularly to protect human safety. For automotive systems, predicting future failures and acting in advance to maintain reliability and safety is crucial in large-scale product design. This paper explores several machine learning approaches, including regression, classification, ensemble, clustering, and deep learning techniques, used to assess maintenance needs in autonomous vehicles. Experimental results indicate that predictive maintenance can greatly help autonomous vehicles, whether by improving system design or by mitigating the risk of threats.
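A minimal sketch of one technique from the families the paper surveys: a Random Forest classifier predicting imminent component failure from telemetry. The synthetic sensor features and failure rule are placeholders for real vehicle data.

```python
# Sketch: ensemble classifier for failure prediction from vehicle telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 6))   # e.g. temperature, vibration, voltage, ... (assumed)
# Synthetic label: "failure soon" when two stress features combine to exceed a limit.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 1.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```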
Citations: 0
The Convergence of AI, ML, and IoT in Automotive Systems: A Future Perspective on Edge Computing
Pub Date : 2024-07-11 DOI: 10.18535/ijecs/v11i05.4673
Dilip Kumar Vaka
Edge computing, where sensing, control, and intelligent processing occur near where data is acquired, is poised to be a fundamental enabler of several imminent, disruptive computing paradigms for emerging applications such as CPS, IoT, and more sophisticated AI-driven services. In this context, we examine the convergence of AI, ML, and IoT in automotive systems, the infrastructure required to enable it, and the pivotal role edge computing will play in the real-world deployment of this ecosystem. We also review several digital infrastructure technologies that can vastly enhance these next-generation digital automotive systems, examined through real-world scenarios provided by our partner companies, the prominent Consumer Electronics Show (CES), and other sources. First, several industrial benchmarks demonstrate that the proposed digital infrastructure technologies deliver significant gains in application accuracy, at times exceeding even 1x-equivalent DNN accelerator-based systems in resource-constrained edge computing environments. The paper then outlines the challenges of designing and deploying them in real-world automotive systems. It concludes with the verifiable thesis that edge computing technologies must play a significant role in next-generation digital automotive system development so that the ML-driven AI systems of the future are designed and deployed successfully in the field and deliver on their intent of superior user experience, enhanced safety, and convenience.
Citations: 0