Title: On Sequences of Geophine Triples Involving Padovan and Bernstein Polynomial with Propitious Property
Pub Date: 2024-04-19 | DOI: 10.17485/ijst/v17i16.160
G. Janaki, R. Sarulatha
Objective: To bring forth a new concept in the time-honoured field of Diophantine triples, namely the “Geophine triple”, and to examine the feasibility of generating an unending sequence of Geophine triples from Geophine pairs with the stated property, involving Padovan and Bernstein polynomials. Method: Geophine triples were established from Padovan and Bernstein polynomials by polynomial manipulation. Findings: Unending sequences of Geophine triples with the stated property are generated from Geophine pairs involving Padovan and Bernstein polynomials, and a few numerical representations of the sequences are computed using MATLAB. Novelty: This article takes an innovative approach to determining this particular type of triple using the geometric mean, whereby two infinite sequences of Geophine triples with the property are ascertained. A few numerical representations of the sequences are also worked out with a MATLAB program, broadening the scope of computational number theory. Keywords: Polynomial Diophantine triple, Geophine triple, Bernstein polynomial, Padovan polynomials, Pell’s equation, Special Polynomials
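The abstract's defining equations are elided, but the classical notion the paper generalises is concrete: a Diophantine triple with the property D(n) is a set {a, b, c} in which the product of any two elements plus n is a perfect square. A minimal Python checker for that classical property, offered for orientation only (the paper's geometric-mean "Geophine" condition is not reproduced in the abstract):

```python
from math import isqrt

def is_perfect_square(x: int) -> bool:
    """True if x is a non-negative perfect square."""
    return x >= 0 and isqrt(x) ** 2 == x

def is_diophantine_triple(a: int, b: int, c: int, n: int = 1) -> bool:
    """Classical D(n) triple: the product of any two elements plus n
    is a perfect square. The paper's 'Geophine' condition (built from
    the geometric mean) is not stated in the abstract, so this checker
    illustrates only the classical property it generalises."""
    return all(is_perfect_square(p * q + n)
               for p, q in ((a, b), (a, c), (b, c)))

# Example: (1, 3, 8) is a D(1) triple, since
# 1*3+1=4, 1*8+1=9 and 3*8+1=25 are all perfect squares.
print(is_diophantine_triple(1, 3, 8))  # True
```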
{"title":"On Sequences of Geophine Triples Involving Padovan and Bernstein Polynomial with Propitious Property","authors":"G. Janaki, R. Sarulatha","doi":"10.17485/ijst/v17i16.160","DOIUrl":"https://doi.org/10.17485/ijst/v17i16.160","url":null,"abstract":"Objective: To bring forth a new conception in the time-honoured field of Diophantine triples, namely “Geophine triple”. To examine the feasibility of proliferating an unending sequence of Geophine triples from Geophine pairs with the property comprising Padovan and Bernstein polynomial. Method: Established Geophine triples employing Padovan and Bernstein polynomial by the method of polynomial manipulations. Findings: An unending sequences of Geophine triples and with the property and are promulgated from Geophine pairs, precisely involving Padovan and Bernstein polynomials and few numerical representation of the sequences are computed using MATLAB. Novelty: This article carries an innovative approach of determining this definite type of triples using Geometric mean and thereby, two infinite sequences of Geophine triples with the property are ascertained. Also, few numerical representations of the sequences utilizing MATLAB program are figured out, thus broadening the scope of computational Number Theory. Keywords: Polynomial Diophantine triple, Geophine triple, Bernstein polynomial, Padovan polynomials, Pell’s equation, Special Polynomials","PeriodicalId":13296,"journal":{"name":"Indian journal of science and technology","volume":" 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140685724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Radio Contra Harmonic Mean Number of Graphs
Pub Date: 2024-04-19 | DOI: 10.17485/ijst/v17i16.539
T. S. Ashika, S. Asha
Objectives: To explore the least upper bound of graphs under radio contra harmonic mean labeling. Methods: The contra harmonic mean function, the radio mean labeling condition, and radio harmonic mean labeling are used. Findings: We introduce radio contra harmonic mean labeling and its least upper bound, designated the radio contra harmonic mean number, by formulating the governing constraints. Novelty: Building on radio mean labeling and contra harmonic mean labeling, the new concept of radio contra harmonic mean labeling is established, and bounds for some special graphs are obtained. This kind of labeling is employed in secure communication networks and is also applicable in X-rays, crystallography, coding theory, computing, etc. Keywords: Radio contra harmonic mean labeling, rchmn (G), Order, Diameter, Smallest span
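The defining inequality is likewise elided in the abstract. As a sketch only, assume it parallels the radio mean condition d(u,v) + ⌈(f(u)+f(v))/2⌉ ≥ 1 + diam(G), with the contra harmonic mean CH(a,b) = (a² + b²)/(a + b) in place of the arithmetic mean; the checker below encodes that assumed condition:

```python
from itertools import combinations
from math import ceil

def satisfies_rchm(labels, dist, diam):
    """Hypothetical checker: assumes the radio contra harmonic
    condition d(u,v) + ceil(CH(f(u), f(v))) >= 1 + diam(G), by
    analogy with radio mean labeling; the paper's exact inequality
    is not reproduced in the abstract.
    labels: {vertex: positive int}; dist: {(u, v): d(u, v)}."""
    for u, v in combinations(labels, 2):
        ch = ceil((labels[u] ** 2 + labels[v] ** 2) /
                  (labels[u] + labels[v]))
        d = dist.get((u, v), dist.get((v, u)))
        if d + ch < 1 + diam:
            return False
    return True

# Path P3 (a-b-c) has diameter 2; try a sample labeling.
dist = {("a", "b"): 1, ("b", "c"): 1, ("a", "c"): 2}
print(satisfies_rchm({"a": 1, "b": 2, "c": 3}, dist, 2))  # True
```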
{"title":"Radio Contra Harmonic Mean Number of Graphs","authors":"T. S. Ashika, S. Asha","doi":"10.17485/ijst/v17i16.539","DOIUrl":"https://doi.org/10.17485/ijst/v17i16.539","url":null,"abstract":"Objectives: To explore the least upper bound of graphs by radio contra harmonic labeling. Methods: Contra harmonic mean function or , radio mean labeling condition and radio harmonic mean labeling are used. Findings: Here we introduce radio contra harmonic mean labeling and its least upper bound, designated as radio contra harmonic mean number, by formulating the constraints. Novelty: Based on radio mean labeling and contra harmonic mean labeling, the new concept of radio contra harmonic mean labeling was established. The bounds of some special graphs are encountered here. This kind of labeling is employed in secure communication networks and is also applicable in X-rays, crystallography, coding theory, computing etc. Keywords: Radio contra harmonic mean labeling, rchmn (G), Order, Diameter, Smallest span","PeriodicalId":13296,"journal":{"name":"Indian journal of science and technology","volume":" 10","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140684350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: The Proposed IT-TALB in Infrastructure as a Service Cloud
Pub Date: 2024-04-19 | DOI: 10.17485/ijst/v17i16.3274
S. Shanmugapriya, N. Priya
Objectives: The proposed IT-TALB load balancing algorithm dynamically allocates the user's workload to the appropriate virtual machine in an Infrastructure as a Service (IaaS) cloud environment. Methods: This work comprises several key procedures. User workloads are distributed to the data center controller (DCC), which uses the ECO-SBP service broker policy to select an efficient data center (DC) for processing the loads. The DCC forwards the load to the selected DC, and the IT-TALB load balancer, simulated in the CloudAnalyst tool, picks the best virtual machine (VM) for each allocation according to metrics such as VM size, current number of loads, and load size. IT-TALB partitions the available and busy VMs separately and stores them in a TreeMap structure. The algorithm also scales a given VM when the load size is not compatible with the existing VMs, by extending the resources of underutilized VMs. Findings: The findings demonstrate that the proposed IT-TALB algorithm improves IaaS cloud performance over existing algorithms: it achieves optimal load balancing, reduces VM search time, avoids load waiting time, improves throughput, minimizes response time, and enhances the resource utilization ratio, yielding throughput and resource utilization ratios of 98 to 99 percent. Novelty: The novelty of this research is that IT-TALB incorporates the scalability of underutilized VMs and introduces new metrics, throughput and resource utilization ratio, into the CloudAnalyst simulation tool for assessing the algorithm's performance. The study compares the proposed IT-TALB strategy with two existing algorithms, TLB and TALB, to demonstrate its performance. Keywords: Cloud Computing, Infrastructure as a Service, Load Balancing, Throttled Load Balancing, Virtual Machine
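A compressed sketch of the allocation idea described above: available and busy VMs are kept in separate, size-ordered structures so the best-fitting VM is found quickly, and an underutilized VM is extended when nothing fits. The class and field names are illustrative, and Python's bisect stands in for the Java TreeMap used in CloudAnalyst; this is not the paper's exact IT-TALB implementation:

```python
import bisect

class ITTALBSketch:
    """Illustrative sketch, not the paper's exact algorithm."""

    def __init__(self, vm_sizes):            # {vm_id: capacity}
        self.busy = {}
        # sorted list of (capacity, vm_id) stands in for a TreeMap
        self.available = sorted((c, v) for v, c in vm_sizes.items())

    def allocate(self, load_size):
        """Pick the smallest available VM that fits the load."""
        i = bisect.bisect_left(self.available, (load_size, ""))
        if i == len(self.available):          # nothing fits: scale up
            cap, vm = self.available.pop()    # extend an underused VM
            cap = max(cap, load_size)
        else:
            cap, vm = self.available.pop(i)
        self.busy[vm] = cap
        return vm

    def release(self, vm):
        cap = self.busy.pop(vm)
        bisect.insort(self.available, (cap, vm))

lb = ITTALBSketch({"vm1": 512, "vm2": 1024, "vm3": 2048})
print(lb.allocate(800))   # -> vm2, the smallest VM that fits
```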
{"title":"The Proposed IT-TALB in Infrastructure as a Service Cloud","authors":"S. Shanmugapriya, N. Priya","doi":"10.17485/ijst/v17i16.3274","DOIUrl":"https://doi.org/10.17485/ijst/v17i16.3274","url":null,"abstract":"Objectives: The purpose of the proposed IT-TALB load balancing algorithm is to dynamically allocate the user's workload to the appropriate virtual machine in an Infrastructure as a Service (IaaS) cloud environment. Methods: This research work includes several key procedures. The user's workloads are distributed to the data center controller (DCC), which in turn uses the ECO-SBP service broker policy to select the efficient data center (DC) for processing the loads. The DCC forwards the load to the selected DC, and the IT-TALB load balancer picks the best Virtual Machine (VM) using CloudAnalyst simulation tool for load allocations according to metrics such as its size, current number of loads, and load size. IT-TALB partitions the available and busy VMs separately and stores them in the TreeMap structure. This algorithm also incorporates the scalability of the given VM when the load size is not compatible with the existing VMs by extending the resources of underutilized VMs. Findings: The research finding demonstrates that the proposed IT-TALB algorithm improves IaaS cloud performance compared to the existing algorithms. It achieves optimum load balancing, reduces the searching time of the VM, avoids the load waiting time, improves throughput, minimizes the response time, and enhances the resource utilization ratio. IT-TALB yields a throughput and resource utilization ratio of 98 to 99 percent. Novelty: The novelty of this research is that the IT-TALB algorithm incorporates the scalability of the underutilized VM and also introduces new metrics such as throughput and resource utilization ratio in the CloudAnalyst simulation tool for assessing the performance of the proposed algorithm. This study provides information for analyzing the proposed IT-TALB strategies with the existing two algorithms such as TLB and TALB in order to show its performance. Keywords: Cloud Computing, Infrastructure as a Service, Load Balancing, Throttled Load Balancing, Virtual Machine","PeriodicalId":13296,"journal":{"name":"Indian journal of science and technology","volume":" 48","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140683797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: On Goethals and Seidel Array
Pub Date: 2024-04-19 | DOI: 10.17485/ijst/v17i16.2937
P. K. Manjhi, Ninian Nauneet Kujur
Objectives: In this article, we aim to find a series of Hadamard matrices by suitable selection of the special class of matrices in the Goethals and Seidel array and to study the resulting pattern. Methods: The search for Hadamard matrices is carried out by selecting a special class of (0,1) negacyclic matrices in place of the back diagonal identity matrix in the Goethals and Seidel array, and the possible existence of negacyclic matrices for the corresponding four matrices is explored in each case. Findings: For the special class of (0,1) negacyclic matrices, no sets of four negacyclic matrices were obtained in the Goethals–Seidel array for even orders. For odd orders, except for the case in which all four matrices are equal and the case in which there is a pair of equal matrices, many outputs were obtained, with the search domain extending up to orders 11, 9, and 7 for two, three, and four distinct matrices respectively. Novelty: Selecting a special class of negacyclic matrices instead of the back diagonal identity matrix, and finding the corresponding four negacyclic matrices in the Goethals and Seidel array, provides a new approach to the construction of Hadamard matrices. Keywords: Hadamard matrix, Circulant matrix, Williamson matrices, Orthogonal array, Goethals and Seidel array
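For readers experimenting with the construction: a negacyclic matrix is determined by its first row, each subsequent row being the previous one shifted right with the wrapped entry negated. The sketch below builds such a matrix and checks the Hadamard condition HHᵀ = nI; the assembly of the four blocks into a Goethals–Seidel array is not shown:

```python
import numpy as np

def negacyclic(first_row):
    """Build the negacyclic matrix with the given first row:
    entry (i, j) is a[(j - i) mod n], negated when it wraps (j < i)."""
    a = np.asarray(first_row)
    n = len(a)
    m = np.empty((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            k = (j - i) % n
            m[i, j] = a[k] if j >= i else -a[k]
    return m

def is_hadamard(h):
    """H is Hadamard iff its entries are +-1 and H @ H.T == n * I."""
    n = h.shape[0]
    return (np.all(np.abs(h) == 1)
            and np.array_equal(h @ h.T, n * np.eye(n, dtype=int)))

print(negacyclic([1, 0, 1]))
# [[ 1  0  1]
#  [-1  1  0]
#  [ 0 -1  1]]
```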
{"title":"On Goethals and Seidel Array","authors":"P. K. Manjhi, Ninian Nauneet Kujur","doi":"10.17485/ijst/v17i16.2937","DOIUrl":"https://doi.org/10.17485/ijst/v17i16.2937","url":null,"abstract":"Objectives: In this article, we aim to find a series of Hadamard matrices by suitable selection of the special class of matrices given in the Goethals and Seidel array and study the pattern obtained. Methods: In the presented work, the search technique of Hadamard matrices has been done by selecting special class of (0,1) negacyclic matrices instead of the back diagonal identity matrix given in Geothals and Seidel arrays and the possible existence of negacyclic matrices for the corresponding four matrices have been explored in each case. Findings: Corresponding to the special class of (0,1) negacyclic matrices, no sets of four negacyclic matrices have been obtained in the Goethal Seidel array, for even orders. For odd orders, except in the case when all four matrices are equal and the case when there is a pair of equal matrices, many outputs have been obtained for the remaining cases, the search domain being upto 11,9 and 7 for the case of two different, three different and four different matrices respectively, in the Goethal Seidel array. Novelty: The selection of special class of negacyclic matrices instead of the back diagonal identity matrix and finding the corresponding four negacyclic matrices in Goethals and Seidel arrays for constructing Hadamard matrices provides a new approach to the construction of Hadamard matrices. Keywords: Hadamard matrix, Circulant matrix, Williamson matrices, Orthogonal array, Goethals and Seidel array","PeriodicalId":13296,"journal":{"name":"Indian journal of science and technology","volume":" 70","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140685124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Acceptance Sampling Plans based on Percentiles of Exponentiated Inverse Kumaraswamy Distribution
Pub Date: 2024-04-19 | DOI: 10.17485/ijst/v17i16.222
M. R. Reddy, B. S. Rao, K. Rosaiah
Objectives: To prepare percentile-based acceptance sampling plans for the Exponentiated Inverse Kumaraswamy Distribution (EIKD) at a specific truncation time, to inspect defective lots at the desired acceptance level. Methods: The failure probability is estimated from the cumulative distribution function F(.) at time t, expressed in terms of the scale parameter σ as the 100th percentile via the quantile function. The minimum sample size, the operating characteristic (OC) function, and the minimum ratios are calculated for required levels of consumer's and producer's risk. Findings: The percentile-based sampling plans are obtained through the minimal sample size n under a truncated life test with a target acceptance number c, such that the probability of accepting a bad lot (the consumer's risk) does not exceed the specified limit. The OC function L(p), the probability of acceptance as lot quality varies, is evaluated for acceptance numbers c = 1 and c = 5. The minimum ratio values are calculated for lot acceptability at the producer's risk when using the sampling plan. Novelty: The novelty of this study is the design of acceptance sampling plans for non-normal data using an asymmetrical distribution that has three shape parameters, and the monitoring of the implementation and suitability of statistical quality control and process control using the Exponentiated Inverse Kumaraswamy Distribution, in comparison with other asymmetrical distributions having at least one scale parameter. Keywords: Sampling plans, Consumer's risk, Operating characteristics function, Truncated life tests, Producer's risk
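The core computation behind such plans is standard: under lot proportion defective p, a lot passes the truncated test when at most c items fail among n, so the acceptance probability is the binomial sum L(p) = Σᵢ₌₀ᶜ C(n,i) pⁱ (1−p)ⁿ⁻ⁱ, and the plan takes the smallest n keeping L(p) at or below the consumer's risk. A sketch of that search (p would come from the EIKD percentile F(t); the abstract omits the distribution's parameter values, so p is passed in directly):

```python
from math import comb

def oc(n: int, c: int, p: float) -> float:
    """Operating characteristic L(p): probability of acceptance when
    the lot proportion defective is p (binomial acceptance model)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(c + 1))

def min_sample_size(c: int, p: float, beta: float) -> int:
    """Smallest n with L(p) <= beta, the consumer's risk bound."""
    n = c + 1
    while oc(n, c, p) > beta:
        n += 1
    return n

print(min_sample_size(c=1, p=0.25, beta=0.10))  # -> 15
```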
{"title":"Acceptance Sampling Plans based on Percentiles of Exponentiated Inverse Kumaraswamy Distribution","authors":"M. R. Reddy, B. S. Rao, K. Rosaiah","doi":"10.17485/ijst/v17i16.222","DOIUrl":"https://doi.org/10.17485/ijst/v17i16.222","url":null,"abstract":"Objectives: To prepare the percentile-based acceptance sampling plans for the Exponentiated Inverse Kumaraswamy Distribution (EIKD) at a specific truncation time to inspect the defective lots corresponding to the desired acceptance level. Methods: The failure probability value is estimated using the cumulative probability function F(.) at time ‘t’ which is converted in terms of the scale parameter σ as 100th percentile using quantile function. The minimum size of the sample, Operating Characteristic (OC) and the minimum ratios are calculated for a required levels of consumer’s as well as producer’s risk. Findings: The percentile-based sampling plans are obtained through the minimal size of the sample ‘n’ under a truncated life test with a target acceptance number c in a manner that the proportion of accepting a lot which is not good (consumer’s risk) would not be more than . These values are calculated at The function of probability of acceptance for variations in the quality of a lot (OC function) L(p) of the sample plan are evaluated for the acceptance values of c=1 and c=5. The minimum ratio values are calculated for the acceptability of the lot with producers’ risk of using the sampling plan. Novelty: The modernity of this study is the designing of the acceptance sampling plans to a non-normal data using an asymmetrical distribution that has all three shape parameters. Also, the monitor of the implementation and suitability of statistical quality control and process control aspects using Exponentiated Inverse Kumaraswamy Distribution when compared to other asymmetrical distributions which has at least one scale parameter. Keywords: Sampling plans, Consumer's risk, Operating characteristics function, Truncated life tests, Producer's risk","PeriodicalId":13296,"journal":{"name":"Indian journal of science and technology","volume":" 40","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140685732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Effect of Three Layered Centrifugal Casting on Tribological and Microstructural Characteristics of Mg Based Functionally Graded Material Alloy
Pub Date: 2024-04-16 | DOI: 10.17485/ijst/v17i15.3144
M. A. Kumar, V. Srinivasan, P. R. Raju
Objectives: The aim of this study is to examine the tribological properties and microstructure of functionally graded material (FGM) composites based on a magnesium (Mg) matrix. Magnesium alloys are commonly employed in the development of biomaterials for implant applications owing to their favorable corrosion properties. The research objective is to produce Zn/Mo-reinforced functionally graded magnesium composites by centrifugal casting. Methods: A centrifugal process was employed to fabricate a cylindrical functionally graded material (FGM) consisting of three layers. The base material was magnesium, alloyed with 10% Zn and 10% Mo. The developed FGMs were analyzed for their mechanical and microstructural characteristics, with the microstructure of the samples examined under an optical microscope (OM). Findings: Across all three layers tested, the Mg (80%) + Zn (10%) + Mo (10%) composite exhibited favorable mechanical and microstructural properties, and the denser molybdenum (Mo) particles were found to influence the microstructural characteristics. A variation in microhardness along the direction of the centrifugal force is observed. The minimum wear loss for sliding wear samples A, B and C of the Mg-based FGM alloy was found to be 0.0018 g, 0.028 g and 0.031 g respectively, while the maximum wear loss for sliding wear samples A, B and C was found to be 0.0021 g, 0.41 g and 0.31 g respectively. Novelty: In this study, a novel three-layered centrifugal casting technique was devised. Owing to its rapid degradability, the anticipated residence time of the implants within the human body would be significantly shorter than with alternative biomaterials such as titanium and stainless steel. Furthermore, the findings from the conducted tests strongly advocate the use of this technique in biomedical implantation. Keywords: Functionally graded material (FGM), Centrifugal casting, Tribological characteristics, Microstructural behavior, Bioimplants
{"title":"Effect of Three Layered Centrifugal Casting on Tribological and Microstructural Characteristics of Mg Based Functionally Graded Material Alloy","authors":"M. A. Kumar, V. Srinivasan, P. R. Raju","doi":"10.17485/ijst/v17i15.3144","DOIUrl":"https://doi.org/10.17485/ijst/v17i15.3144","url":null,"abstract":"Objectives: The aim of this study is to examine the tribological properties and microstructural of functionally graded material (FGM) composites based on magnesium (Mg) base material. Magnesium alloys are commonly employed in the development of biomaterials for implant applications owing to their favorable corrosion properties. The research objective is to produce Zn/Mo reinforced functionally graded magnesium composites using the centrifugal casting. Methods: A centrifugal process was employed to fabricate a functionally graded material (FGM) consisting of three layers with a cylindrical shape. The base material used for this FGM was Magnesium, which was alloyed with 10% of Zn and 10% of Mo. The developed FGMs have been analyzed for their mechanical and microstructural characteristics. The microstructure of the samples were analyzed via the Optical Microscope (OM). It is identified that denser particle molybdenum (Mo) have influenced the mechanical and microstructural characteristics of the FGM composites. Findings: Results recommend that, all the three layered testing’s, Mg (80%) +Zn (10%) + Mo (10%) composite exhibited favorable mechanical and microstructural properties. It is identified that denser particle of Mo which is influenced the microstructural characteristics. The alteration in micro hardness in the direction of centrifugal force is observed, and it is observed that the minimum wear loss for sliding wear samples A, B & C of Mg based FGM alloy were found to be 0.0018 g, 0.028 g and 0.031 g respectively, while the maximum wear loss for sliding wear samples A, B & C of FGM alloy were found to be 0.0021 g, 0.41 g and 0.31 g respectively. Novelty: In this study, a novel three-layered centrifugal casting technique was devised. Owing to its rapid degradability, the anticipated duration of the implants within the human body would be significantly shorter in comparison to alternative biomaterials such as Titanium and Stainless steel. Furthermore, the findings from the conducted tests strongly advocate for the utilization of this technique in biomedical implantations. Keywords: Functionally graded material (FGM), Centrifugal casting, Tribological characteristics, microstructural behavior, and bioimplants","PeriodicalId":13296,"journal":{"name":"Indian journal of science and technology","volume":"355 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140698036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Enhancing Spam Email Classification Using Effective Preprocessing Strategies and Optimal Machine Learning Algorithms
Pub Date: 2024-04-16 | DOI: 10.17485/ijst/v17i15.2979
Pramod P. Ghogare, Husain H. Dawoodi, Manoj P. Patil
Objective: This article proposes content-based spam email classification using various text pre-processing techniques. NLP techniques are applied to pre-process the content of an email so that machine learning achieves optimal spam classification performance. Method: Several combinations of pre-processing methods, such as stop-word removal, tag removal, lower-casing, punctuation removal, special-character removal, and natural language processing, were applied to the content extracted from emails, with machine learning algorithms such as NB, SVM, and RF used to classify an email as ham or spam. Standard datasets such as Enron and SpamAssassin, along with a personal email dataset collected from Yahoo Mail, are used to evaluate the models. Findings: Applying stemming in pre-processing with the RF classifier yielded the best results, achieving 99.2% accuracy on the SpamAssassin dataset and 99.3% on the Enron dataset; lemmatization followed closely with 99% accuracy. In real-world testing on a personal Yahoo email dataset, the proposed method significantly improved accuracy from 89.82% to 97.28% compared with the email service provider's built-in classifier. Additionally, the study found that SVM performs accurately when stop words are retained. Novelty: This article introduces a unique perspective by highlighting the fine-tuning of pre-processing: removing tags and certain special characters while retaining those that improve spam classification accuracy. Unlike prior works that primarily emphasize algorithmic approaches and pre-defined processing functions, this research delves into the intricacies of data preparation, showcasing its significant impact on spam email classifiers. These findings emphasize the crucial role of pre-processing and contribute to a more nuanced understanding of effective strategies for robust spam detection. Keywords: Spam, Classification, Pre-processing, NLP, Machine Learning
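A minimal sketch of the kind of pipeline the study evaluates, pairing stemming-based pre-processing with TF-IDF features and a random forest; the toy data and parameter choices are illustrative, not the paper's tuned configuration:

```python
from nltk.stem import PorterStemmer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

stemmer = PorterStemmer()

def stem_tokens(text):
    # lower-case, crudely strip punctuation, then stem each token
    words = "".join(ch if ch.isalnum() else " " for ch in text.lower()).split()
    return [stemmer.stem(w) for w in words]

pipeline = make_pipeline(
    TfidfVectorizer(tokenizer=stem_tokens, token_pattern=None),
    RandomForestClassifier(n_estimators=200, random_state=0),
)

# Toy training data standing in for Enron / SpamAssassin emails.
emails = ["Win a free prize now!!!", "Meeting notes attached"]
labels = ["spam", "ham"]
pipeline.fit(emails, labels)
print(pipeline.predict(["free prize meeting"]))
```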
{"title":"Enhancing Spam Email Classification Using Effective Preprocessing Strategies and Optimal Machine Learning Algorithms","authors":"Pramod P. Ghogare, Husain H. Dawoodi, Manoj P. Patil","doi":"10.17485/ijst/v17i15.2979","DOIUrl":"https://doi.org/10.17485/ijst/v17i15.2979","url":null,"abstract":"Objective: This article proposes a content-based spam email classification by applying various text pre-processing techniques. NLP techniques have been applied to pre-process the content of an email to get the optimal performance of spam email classification using machine learning. Method: Several combinations of pre-processing methods, such as stopping, removing tags, converting to lower case, removing punctuation, removing special characters, and natural language processing, were applied to the extracted content from the email with machine learning algorithms like NB, SVM, and RF to classify an email as ham or spam. The standard datasets like Enron and SpamAssassin, along with the personal email dataset collected from Yahoo Mail, are used to evaluate the performance of the models. Findings: Applying stemming in pre-processing to the RF classifier yielded the best results, achieving 99.2% accuracy on the SpamAssassin dataset and 99.3% accuracy on the Enron dataset. Lemmatization followed closely with 99% accuracy. In real-world testing on a personal Yahoo email dataset, the proposed method significantly improved accuracy from 89.82% to 97.28% compared to the email service provider's built-in classifier. Additionally, the study found that SVM performs accurately when stop words are retained. Novelty: This article introduces a unique perspective by highlighting the fine-tuning of pre-processing techniques. The focus is on removing tags and certain special characters, while retaining those that improve spam email classification accuracy. Unlike prior works that primarily emphasize algorithmic approaches and pre-defined processing functions, our research delves into the intricacies of data preparation, showcasing its significant impact on spam email classifiers. These findings emphasize the crucial role of pre-processing and contribute to a more nuanced understanding of effective strategies for robust spam detection. Keywords: Spam, Classification, Pre-processing, NLP, Machine Learning","PeriodicalId":13296,"journal":{"name":"Indian journal of science and technology","volume":"5 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140695612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Land Surface Temperature and Thermal Comfort in the Cities of Punjab, India: Assessment Based on Remote Sensing Data
Pub Date: 2024-04-16 | DOI: 10.17485/ijst/v17i15.228
Kiran Kumari Singh
Objective: The present study investigates thermal comfort in two cities of Punjab, India, based on land surface temperature (LST), urban hot spots (UHS) and the urban thermal field variance index (UTFVI). Method: Landsat 8 OLI/TIRS data are used to derive land surface temperature (LST), the normalized difference vegetation index (NDVI) and the enhanced built-up and bareness index (EBBI) for the year 2019. UTFVI reflects urban thermal conditions and demarcates comfort and discomfort zones in the cities. Findings: The results reveal that the mean LST (μ) of Ludhiana and Amritsar is 32.80 °C and 30.70 °C, respectively. LST shows a strong negative correlation with NDVI (-0.710 for Amritsar and -0.754 for Ludhiana) and a positive correlation with EBBI (0.531 for Amritsar and 0.541 for Ludhiana). About 57 and 52 per cent of the geographical areas of Ludhiana and Amritsar respectively experience bad to worst ecological conditions. Novelty: (i) The study derives LST-based thermal comfort for the summer month in the Amritsar and Ludhiana cities of Punjab, providing important information for urban planners and policymakers to design sustainable urban development policies that mitigate heat-related issues. (ii) Such information can be used to take steps to improve conditions in smart cities like Amritsar and Ludhiana. Keywords: Land surface temperature (LST), urban thermal field variance index (UTFVI), NDVI, Landsat 8, Punjab
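The two indices rest on standard formulas: NDVI = (NIR − Red)/(NIR + Red), and UTFVI = (Tₛ − Tₘₑₐₙ)/Tₛ with Tₛ the per-pixel LST. A small numpy sketch (the band arrays are placeholders; for Landsat 8 OLI the NIR and Red bands are 5 and 4):

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red)

def utfvi(lst):
    """UTFVI = (Ts - Ts_mean) / Ts; positive values mark pixels
    hotter than the scene mean (discomfort zones)."""
    return (lst - lst.mean()) / lst

lst = np.array([[30.5, 32.1], [35.0, 28.9]])  # toy LST grid, deg C
print(utfvi(lst))
```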
{"title":"Land Surface Temperature and Thermal Comfort in the Cities of Punjab, India: Assessment Based on Remote Sensing Data","authors":"Kiran Kumari Singh","doi":"10.17485/ijst/v17i15.228","DOIUrl":"https://doi.org/10.17485/ijst/v17i15.228","url":null,"abstract":"Objective: The present study aims to investigate thermal comfort in two cities in Punjab, India based on land surface temperature (LST), urban hot spots (UHS) and urban thermal field variance index (UTFVI). Method: Landsat 8 OLI/TIRS data are used to derive land surface temperature (LST), normalized difference vegetation index (NDVI) and enhanced built-up and bareness index (EBBI) for the year 2019. UTFVI reflects urban thermal conditions and demarcates comfort and discomfort zones in the cities. Findings: The results revealed that the mean LST (μ) of Ludhiana and Amritsar cities is 32.80 °C and 30.70 °C, respectively. LST shows a strong negative correlation with NDVI (-0.710 for Amritsar and -0.754 for Ludhiana) and a positive correlation with EBBI (0.531 for Amritsar and 0.541 for Ludhiana). About 57 and 52 per cent of geographical areas in Ludhiana and Amritsar city respectively are experiencing bad to worst ecological conditions. Novelty: (i) The study derived LST-based thermal comfort for the summer month in Amritsar and Ludhiana cities of Punjab which provide important information to urban planners and policymakers to design sustainable urban development policies to mitigate heat-related issues. (ii) Such information can be used to take steps to improve the situation in smart cities like Amritsar and Ludhiana. Keywords: Land surface temperature (LST), urban thermal field variance index (UTFVI), NDVI, Landsat 8, Punjab","PeriodicalId":13296,"journal":{"name":"Indian journal of science and technology","volume":"7 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140696225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Assessment of the Potential of Rhizobium leguminosarum bv.viciae on Two Different Soils with the Ashebka faba Bean Variety (Viciae faba L.) as the Host Plant
Pub Date: 2024-04-16 | DOI: 10.17485/ijst/v17i15.330
Emebet Kibkab, N. Berhane
Objectives: To assess the effectiveness of Rhizobium leguminosarum bv. viciae strains in two different soils with the Ashebka faba bean variety as the host plant. Method: Soil physicochemical analysis and most-probable-number counts were done according to standard procedures. The pot experiment was laid out in a completely randomized design with three replications. Three top strains were selected as inoculants for faba beans grown on the slightly acidic Shentia soil and the near-neutral Dabat soil, each with its control. The symbiotic effectiveness of the strains was evaluated from plant agronomy and total plant nitrogen, and the results were analyzed with SPSS version 26. Findings: Shoot dry weight was 81 mg/plant for isolate WK1E1, 85 mg/plant for isolate GR1E1, and 87 mg/plant for the co-inoculant on Dabat soil, and 78, 82, and 85 mg/plant respectively on Shentia soil. Nodule numbers ranged from 138 (isolate WK1E1) to 173 (co-inoculation) at the Dabat site, and from 139 (isolate WK1E1) to 165 (co-inoculation) at the Shentia site. For Dabat, the maximum mean shoot dry mass (91 mg/plant) was scored by the positive nitrogen-treated control and the minimum (29 mg/plant) by the negative control; for Shentia, the maximum (86 mg/plant) was scored by the positive nitrogen-treated control and the minimum (29 mg/plant) by the negative control. The relative effectiveness, expressed as the percentage of inoculated shoot dry mass over that of the nitrogen-treated control, was 89, 93, and 95.6 for Dabat soil and 90.6, 95.3, and 98.8 for Shentia soil for isolates WK1E1, GR1E1, and the co-inoculant, respectively. Positive correlations were also observed between nodule numbers and the other agronomic parameters. Novelty: No such testing had been attempted in this study area before; the Rhizobium leguminosarum bv. viciae isolates were tested for survival, compatibility, and effectiveness in two different soils with Ashebka faba bean as the host. Keywords: Ashebka faba bean, most probable number estimation, Rhizobium leguminosarum bv. viciae, inoculums, symbiosis
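The relative-effectiveness percentages follow from the usual symbiotic-effectiveness ratio, which the reported numbers reproduce; a worked instance for the Dabat co-inoculant:

```latex
\[
\mathrm{RE} \;=\; 100 \times
\frac{\text{shoot dry mass (inoculated)}}{\text{shoot dry mass (N-treated control)}},
\qquad
\mathrm{RE}_{\text{co-inoculant, Dabat}} = 100 \times \frac{87}{91} \approx 95.6\%.
\]
```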
{"title":"Assessment of the Potential of Rhizobium leguminosarum bv.viciae on Two Different Soils with the Ashebka faba Bean Variety (Viciae faba L.) as the Host Plant","authors":"Emebet Kibkab, N. Berhane","doi":"10.17485/ijst/v17i15.330","DOIUrl":"https://doi.org/10.17485/ijst/v17i15.330","url":null,"abstract":"Objectives: To assess the effectiveness of the potential of the Rhizobium leguminosarum bv.viciae strain in two different soils with the Ashebka faba bean variety as the host plant. Method: Soil physicochemical analysis and the most probable number were done according to their standard procedure. The pot was laid out in a complete randomized design with three replications. Three top strains were selected as inoculants for faba beans grown on the slightly acidic Shentia soil and the slightly neutral Dabat soil with their control. The symbiotic effectiveness of the strains was evaluated based on plant agronomy and the total nitrogen of the plant. The results of the strains were analyzed by SPSS version 26. Findings: The results of shoot dry weight show that all strains accumulated 81mg/p for isolate WK1E1, followed by 85mg/p for isolate GR1E1, and finally 87mg/p-1 for isolate co-inoculant on Dabat soil, and 78mg/p for isolate WK1E1, followed by 82mg/p for GR1E1 and finally 85mg/p-1 for co-inoculant Shentia soil. The nodule number record range from 138mg/p-1, and 173mg/p-1 for isolate WK1E1 and co-inoculate, respectively for Dabat site. 139mg/p-1 for isolate WK1E1 and 165 mg/p-1 for isolate co-inoculate Shentia site. Maximum mean shoot dry mass (91mg/p) was scored by positive nitrogen and the minimum (29mg/p) by the negative control nitrogen treated control for Dabat. The maximum mean shoot dry mass (86mg/p) was scored by the positive nitrogen treated control and the minimum (29mg/p) by the negative control for Shentia. For Dabat soil, the relative effectiveness expressed as a percentage of shoot dry mass of inoculated over total nitrogen control, showed that 89, 93, and 95.6 and Shentia soil the relative effectiveness expressed as a percentage of shoot dry weight of inoculants over nitrogen treated control, showed that 90.6, 95.3, and 98.8 for isolates WK1E1, GR1E1, and co-inoculant, respectively for both soils. Positive correlations were also observed concerning nodule numbers with other agronomic parameters. Novelty: No such testing was attempted in that study area before, and this new idea came because the Rhizobium leguminosarum bv.viciae isolates tested their survival, compatibility, and effectiveness in two different soils with Ashebka faba bean as the host. Keywords: Ashebka faba bean, most probable estimation, Rhizobium leguminosarum bv.Vicae, inoculums, symbiosis","PeriodicalId":13296,"journal":{"name":"Indian journal of science and technology","volume":"28 31","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140697097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Securing Smart Contracts: Harnessing the Power of Efficient NetB2 Detection
Pub Date: 2024-04-16 | DOI: 10.17485/ijst/v17i15.617
Janhavi Satam, Sangeeta Vhatkar
Objective: Using a variety of datasets from the Ethereum documentation and the Smart Contract Dataset repository, this study tackles the crucial problem of classifying smart contract vulnerabilities. Methods: The study uses a three-module method and focuses on the Resource 3 Dataset, which contains over 2,000 Ethereum smart contracts, including inherited contracts. Module 1 lays the groundwork for deep learning model training by extracting bytecode from Solidity files and then creating images from it. Module 2, run in Colab, entails importing data, pre-processing, SMOTE balancing, and building three deep learning models: CNN, XCEPTION, and EfficientNet-B2. Module 3 is a Flask-based web application created in Visual Studio Code that enables vulnerability prediction, bytecode extraction, and user interaction. Findings: With an overall accuracy of 71 percent, the convolutional neural network (CNN) demonstrates its effectiveness in classifying vulnerabilities; XCEPTION and EfficientNet-B2 reach 69% and 75% accuracy respectively, making EfficientNet-B2 the top performer. Novelty & Applications: The web application adds to the comprehensive examination of smart contract security by giving users an easy-to-use interface. The EfficientNet-B2 model stands out as a dependable tool for precise vulnerability classification, and this study advances the understanding and mitigation of vulnerabilities in Ethereum smart contracts. Keywords: Smart Contracts, Vulnerability Classification, Ethereum, Deep Learning, Convolutional Neural Network (CNN)
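A sketch of Module 1's bytecode-to-image step. Rendering hex bytecode as a square grayscale image is a common way to feed binaries to CNNs; the image side length and zero-padding here are illustrative, since the abstract does not state the paper's exact layout:

```python
import numpy as np
from PIL import Image

def bytecode_to_image(hex_str: str, side: int = 64) -> Image.Image:
    """Decode hex bytecode to bytes, zero-pad or truncate to side*side,
    and reshape into a grayscale image for the CNN / EfficientNet-B2."""
    data = bytes.fromhex(hex_str.removeprefix("0x"))
    buf = np.zeros(side * side, dtype=np.uint8)
    n = min(len(data), buf.size)
    buf[:n] = np.frombuffer(data, dtype=np.uint8)[:n]
    return Image.fromarray(buf.reshape(side, side), mode="L")

# A short snippet of real EVM deployment bytecode as a demo input.
img = bytecode_to_image("0x6080604052348015600f57600080fd5b50")
img.save("contract.png")
```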
{"title":"Securing Smart Contracts: Harnessing the Power of Efficient NetB2 Detection","authors":"Janhavi Satam, Sangeeta Vhatkar","doi":"10.17485/ijst/v17i15.617","DOIUrl":"https://doi.org/10.17485/ijst/v17i15.617","url":null,"abstract":"Objective: Using a variety of datasets from the Ethereum documentation and Smart Contract Dataset repository, this study tackles the crucial problem of classifying smart contract vulnerabilities. Methods: Our study uses a three-module method and focuses on the Resource 3 Dataset, which contains over 2,000 Ethereum smart contracts, including inherited contracts. The groundwork for deep learning model training is laid in Module 1 by extracting bytecode from Solidity files and creating images thereafter. In Colab, Module 2 entails importing data, pre-processing, SMOTE balancing, and building three deep learning models: CNN, XCEPTION, and EfficientNet-B2. Module 3 is a Flask-based web application created in Visual Studio Code that enables vulnerability predictions, bytecode extraction, and user interaction. Findings: With an overall accuracy of 71 percent, the Convolutional Neural Network (CNN) displays its effectiveness in classifying vulnerabilities. Although the accuracy of XCEPTION and EfficientNet-B2 is 69% and 75%, respectively, the latter is the top performer. Novelty & Applications: The online application adds to the comprehensive examination of smart contract security by giving users an easy-to-use interface. The EfficientNet-B2 model stands out as a dependable tool for precise vulnerability classification, and this study advances our understanding of and efforts to mitigate vulnerabilities in Ethereum smart contracts. Keywords: Smart Contracts, Vulnerability Classification, Ethereum, Deep Learning, Convolutional Neural Network (CNN)","PeriodicalId":13296,"journal":{"name":"Indian journal of science and technology","volume":"4 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140696357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}