The advent and development of technology have brought significant changes to literary production, consumption, and dissemination across the world. Evaluative studies that analyze the progress of technological innovation against the backdrop of the digital era emphasize the positive effects that technological growth has had on English literature and on the literary ecosystem in general. These studies show how popular digital innovations have influenced and inspired authors to embrace different writing approaches and to explore diverse narratives in their literary careers. Apart from enhancing authors' effectiveness and productivity in producing literature, technology has also simplified publishing and access, with online platforms that offer self-publishing and apps that promote reader engagement. At the same time, technological advancement has reshaped conventional publishing methods in ways that promote the marketing of literature and encourage discussion and debate of literary works. The intersection of literature, publishing, and digital innovation raises the scope for studying how technology has affected the literary sphere and for exploring how new technological domains might improve it further.
{"title":"The Impact of Technology on English Literature and the Publishing Industry: An Evaluative Study","authors":"Shipra Joshi","doi":"10.17762/itii.v7i2.805","DOIUrl":"https://doi.org/10.17762/itii.v7i2.805","url":null,"abstract":"The advent and development of technology has resulted in significant changes in the literary production, consumption and dissemination across the world. Various evaluative studies that analyze the progress of technological innovations in the backdrop of digital era emphasize the positive outcome that technological growth has caused on English literature as well as in the general literary ecosystem. These studies disclose how the popular digital innovative have influenced and inspired authors to embrace different writing approaches as well as explore diverse narratives in their literary careers. Apart from enhancing effectiveness and productivity of authors in the production of literature, technology has also made the whole process of publishing and accessibility easier with online platforms that offer self-publishing and apps that promote readers engagement. On the other hand, the technological advancement successfully distorted the conventional publishing methods to better ways that promote the marketing of literature and encourage discussions and debates on literary works. The intersection of literature, publishing and digital innovations rises scope of studying ways of how technology have impacted the literary sphere and further help explore methods of incorporating new domains of technology in improving the same.","PeriodicalId":40759,"journal":{"name":"Information Technology in Industry","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74198526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Traditional medicines can treat many ailments because they are based on natural remedies. Because of their historical applications and therapeutic value, many of these conventional medicines have been the subject of extensive pharmacological research into their antibacterial, antiviral, and anti-inflammatory effects. Academics and pharmaceutical corporations frequently use natural resources as a primary or secondary source when developing new drugs. A wide variety of plants have long served as a source of traditional medicine in many different cultures, and numerous studies have examined their possible antibacterial and antiviral properties. Since there are so many different kinds of natural sources, including plants, choosing the proper one as a starting point is crucial for accurate screening results. Owing to their effectiveness in treating diseases and their lower risk of side effects compared with synthetic treatments, the use of plant-based medications has expanded significantly in the modern world. The current study aimed to confirm the identity, quality, and purity of some locally available potential medicinal plants, namely Drymaria cordata (whole plant), Alstonia scholaris (bark), Hydrocotyle sibthorpioides (whole plant), Centella asiatica (whole plant), Senna hirsuta (leaf), Oroxylum indicum (bark), Senna occidentalis (leaf), Stephania japonica (tuber), and Solanum indicum (root), all in powdered form.
The powdered plant materials underwent preliminary phytochemical analysis as well as pharmacognostic tests, physical evaluation, and heavy metal analysis. Preliminary phytochemical study of the various extracts indicated that triterpenoids were absent, while alkaloids, phenolics, carbohydrates, and amino acids were present. The powders were examined under a microscope to reveal distinguishing characteristics, including calcium oxalate crystals, fibres, stone cells, trichomes, stomata, xylem vessels, and pitted spiral vessels. The colour, smell, fragrance, and texture of the ground plant material were all acceptable. The physical characteristics that affect powder flow, as measured by Carr's index and Hausner's ratio, ranged from good to passable, with the exception of Hydrocotyle sibthorpioides (whole plant) and Oroxylum indicum (bark), which were not easily passable. Lead, cadmium, and bismuth were not detected in the heavy metal test. The current study may therefore serve as a benchmark reference for quality control analysis of these herbal medicines, whether used alone or in combination.
{"title":"Preparing Herbal Formulations through Indigenous and Modern Methods: An Experimental Study","authors":"Shatakshi Lall","doi":"10.17762/itii.v7i2.807","DOIUrl":"https://doi.org/10.17762/itii.v7i2.807","url":null,"abstract":"Traditional medicines can treat many problems because they are based on natural treatments. Because of their historical applications and useful treatments, numerous of these conventional medications have been the subject of extensive pharmacological research of their antibacterial, antiviral, and anti-inflammatory effects. Natural resources are frequently used as a primary or secondary source by academics and pharmaceutical corporations when developing new drugs. A wide variety of plants have long been used as a source of traditional medicine by people in many different cultures. Numerous research have examined the possible antibacterial and antiviral properties of these plants. Since there are so many different kinds of natural sources, including plants, choose the proper one as a starting point is crucial for accurate screening results. Due to their, “effectiveness in treating diseases and lower risk of side effects than synthetic treatments, the usage of plant-based medications has significantly expanded in the modern world. The current study was aimed to confirm the identity, quality and purity of some locally available potential medicinal plants such as Drymaria cordata (whole plant), Alstonia scholaris (bark), Hydrocotyle sibthorpioides (whole plant), Centella asiatica (whole plant), Senna hirsuta (leaf), Oroxylum indicum (bark),
 
 Senna occidentalis (leaf), Stephania japonica (tuber) and Solanum indicum (root) in powdered form”. 
 The powdered plant components underwent preliminary phytochemical analysis as well as pharmacognostic tests, physical evaluation and heavy metal analysis. Initial phytochemical study of the various extracts indicated that triterpenoids were absent, but alkaloids, phenolics, carbohydrates and amino acids were present. The powder was studied under a microscope to reveal its, “distinguishing characteristics, including calcium oxalate crystals, fibres, stone cells, trichomes, stomata, xylem vessels, pitted spiral vessels, etc. The colour, smell, fragrance, and texture of the ground plant were all acceptable. The physical characteristics that affect the flow rate of the powder with respect to Carr's index and Hausner's ratio were found to be good to passable, with the exception of Hydrocotyle sibthorpioides (the complete plant) and Oroxylum indicum (bark), which were not easily passable. During the heavy metal test, lead, cadmium, and bismuth were not found. As a result, the current study may be utilised as a benchmark reference for the quality control analysis of the herbal medicine, either alone or in combination”.","PeriodicalId":40759,"journal":{"name":"Information Technology in Industry","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135090753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
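Carr's index and Hausner's ratio are standard powder-flow indices derived from bulk and tapped density: Carr's index = 100 × (ρ_tapped − ρ_bulk)/ρ_tapped and Hausner's ratio = ρ_tapped/ρ_bulk. A minimal sketch of the computation, using hypothetical density values rather than the study's measurements:

```python
# Powder-flow indices from bulk and tapped density. The sample densities
# below are hypothetical placeholders, not measurements from the study.

def carr_index(bulk_density: float, tapped_density: float) -> float:
    """Carr's (compressibility) index, % = 100 * (tapped - bulk) / tapped."""
    return 100.0 * (tapped_density - bulk_density) / tapped_density

def hausner_ratio(bulk_density: float, tapped_density: float) -> float:
    """Hausner's ratio = tapped density / bulk density."""
    return tapped_density / bulk_density

bulk, tapped = 0.42, 0.50  # g/mL, illustrative values only
ci = carr_index(bulk, tapped)     # 16.0 -> "fair" flow on the usual USP scale
hr = hausner_ratio(bulk, tapped)  # 1.19
print(f"Carr's index: {ci:.1f} %, Hausner's ratio: {hr:.2f}")
```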
According to the literature, the use of CAD in architectural practice and education has drawn fierce criticism in recent years. It has been argued that CAD negatively affects the mental efficacy of students in institutions and, above all, of practicing architects, and the earlier method of manual drafting has been positioned favorably against the now-prevalent practice of using Computer-Aided Design (CAD) in the design process. This study is a polemic against such broad generalizations. Before proposing a remedy, it assesses the viability of the claims made in the previous literature. Its primary objective is to analyze and quantitatively contrast the advantages and drawbacks of using computer-aided design (CAD) versus conventional approaches in architectural practice and education. Secondarily, it seeks to establish whether CAD use should be continued or discontinued, based on an analysis of identified CAD users. A set of interdependent schemata was therefore developed to organize the boundaries of the scope and capture the full extent of this expansive aim. The methodology follows the quantitative paradigm; secondary data for the theoretical framework was gathered from databases, books, and journals, reflecting the viewpoint of experts.
{"title":"Use of Computer Designing for Architectural Infrastructures in Different Terrain","authors":"Sanjay Painuly","doi":"10.17762/itii.v7i2.806","DOIUrl":"https://doi.org/10.17762/itii.v7i2.806","url":null,"abstract":"The use of CAD in architectural practice and education has drawn fierce criticism in recent years, according to literature. The mental efficacy of modern students in institutions and, primarily, practicing architects was thought to be negatively impacted. Contrary to the prevalent practice of using Computer-Aided Design (CAD) in the design process, the earlier method of drafting was positioned favorably. This study is a moralist polemicist against broad generalizations like that. Before proposing a curative remedy, it aims to assess the viability of the goals of previous literature. In that respect, the objective goal of this study is to analyze and contrast quantitatively the advantages and drawbacks of using computer-aided design (CAD) vs conventional approaches in architectural practice and education. Secondarily, it seeks to vehemently amplify whether computer-aided design (CAD) use should be continued or discontinued based on an analysis of identified CAD users. Therefore, a variety of interdependent schemata were developed to organize the scope's boundaries in order to achieve the full phenomena of this expansive aim. The methodology sold follows the quantitative paradigm. From the viewpoint of experts, secondary data for the theoretical framework was gathered through databases, books, and journals.","PeriodicalId":40759,"journal":{"name":"Information Technology in Industry","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90241841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IoT is the acronym for the Internet of Things. At present, IoT is a buzzword in academia, research, and industry. Objects all around us have developed the ability to communicate via the internet, and routing information plays a vital role in establishing communication between nodes in the IoT space. Most of the energy of such connected nodes is consumed in routing packets, so optimizing network lifetime with minimal energy consumption becomes important for efficient implementation of IoT infrastructure. This literature review aims to identify the limitations that currently stand in the way of improving network usability and thereby enhancing network lifetime. The review considers parameters such as Quality of Service (QoS), efficient node deployment techniques, and network lifetime for Wireless Sensor Networks (WSNs). A comprehensive and systematic study of the routing challenges encountered in an IoT network is presented, and the performance of various energy-aware routing protocols is examined.
{"title":"A COMPREHENSIVE REVIEW OF ENERGY-BASED ROUTING STRATEGIES FOR INTERNET OF THINGS","authors":"M. Srinivasulu","doi":"10.17762/ITII.V9I2.449","DOIUrl":"https://doi.org/10.17762/ITII.V9I2.449","url":null,"abstract":"IoT is the acronym for Internet of Things acronym. At present IoT is a buzzword amongst academia, research and industry communities. Everything surrounded by us have developed abilities to communicate via the medium of internet. The Routing information plays a vital role in establishing communication between nodes in the space of IoT. Maximum energy of such connected nodes is consumed in the process of routing the packets. In this context optimizing the network lifetime with minimal energy consumption becomes important for efficient implementation of IoT infrastructure. This literature review is has the objective to identify the limitations existing in improving the network usability and thus enhance the network lifetime. The focus of this review is to consider various parameters like Quality of Service (QoS), efficient node deployment techniques, Network lifetime for Wireless Sensor Networks (WSN). A comprehensive and systematic study of Routing challenges encountered in an IoT network is accomplished. Further the performance of various energy routing protocols are studied.","PeriodicalId":40759,"journal":{"name":"Information Technology in Industry","volume":"4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89929902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fused floating-point operations play a major role in many DSP applications by reducing area and power consumption. A radix-2^r multiplier (using a 7-bit encoder technique) and a pipelined feedforward-cutset-free carry-lookahead adder (PFCF-CLA) are used to enhance the traditional fused dot product (FDP) unit, and pipelining is applied to obtain the desired pipelined fused floating-point dot product (PFFDP) operation. Synthesis results are obtained using a 60 nm standard cell library with a 1 GHz clock. The power consumption of the single- and double-precision operations is 2.24 mW and 3.67 mW respectively; the die areas are 27.48 mm² and 46.72 mm², with execution times of 1.91 ns and 2.07 ns respectively. A comparison with previously published data has also been performed: the area-delay product (ADP) and power-delay product (PDP) of the proposed architecture improve by 18% and 22% for single-precision and by 27% and 18% for double-precision operations respectively.
{"title":"Area and Power Efficient Fused Floating-point Dot Product Unit based on Radix-2r Multiplier & Pipeline Feedforward-Cutset-Free Carry-Lookahead Adder","authors":"M. M. Babu, K. R. Naidu","doi":"10.17762/ITII.V9I2.411","DOIUrl":"https://doi.org/10.17762/ITII.V9I2.411","url":null,"abstract":"Fused floating point operations play a major role in many DSP applications to reduce operational area & power consumption. Radix-2r multiplier (using 7-bit encoder technique) & pipeline feedforward-cutset-free carry-lookahead adder(PFCF-CLA) are used to enhance the traditional FDP unit. Pipeline concept is also infused into system to get the desired pipeline fused floating-point dot product (PFFDP) operations. Synthesis results are obtained using 60nm standard library with 1GHz clock. Power consumption of single & double precision operations are 2.24mW & 3.67mW respectively. The die areas are 27.48 mm2 , 46.72mm2 with an execution time of 1.91 ns , 2.07 ns for a single & double precision operations respectively. Comparison with previous data has also been performed. The area-delay product(ADP) & power-delay product(PDP) of our proposed architecture are 18%,22% & 27%,18% for single and double precision operations respectively.","PeriodicalId":40759,"journal":{"name":"Information Technology in Industry","volume":"51 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74601165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, the use of additive layer manufacturing has grown considerably. Various industries, including automotive, aerospace, equipment, communications, and medical devices, use additive layer production. At present, however, additively manufactured products comprise less than one percent of all items manufactured. If the prices of additive layer manufacturing systems decline, the way customers interact with suppliers will change. Additive layer innovations offer the market and society new possibilities: they make the customized production of strong, lightweight goods simpler, along with prototypes that were not feasible with past manufacturing techniques. Numerous obstacles may nevertheless hamper and delay the adoption of this technology; in many situations, making a component with additive layer production techniques costs more than conventional approaches. This study reviews the cost literature on additive layer manufacturing, attempts to identify situations in which additive production may be cost-effective, and seeks new methods of minimizing costs in the use of this technology.
{"title":"POTENTIAL ANALYSIS OF ADDITIVE LAYER MANUFACTURING TECHNOLOGIES USED FOR PROCESSING POLYMER COMPONENTS","authors":"Rohit Pandey","doi":"10.17762/ITII.V9I2.359","DOIUrl":"https://doi.org/10.17762/ITII.V9I2.359","url":null,"abstract":"In previous years, the usage of additive layer processing grew considerably. Different companies, including motor cars, aerospace, equipment, communications and medical devices utilize additional layer production. However, at present, processed additive layer products comprise less than one percent of all items manufactured. If the prices of additive layer processing systems decline, the manner in which customers communicate with suppliers will be modified. Additional development layer innovations provide the market and culture with different possibilities. It will make the personalized development of strong lightweight goods simpler, and prototypes that with past manufacturing techniques were not feasible. However, the application of this device may be hampered and delayed by numerous obstacles. Many situations require higher costs than conventional approaches for making a component utilizing additive layer production techniques. This study reviews the cost literature for the development of additive layer and attempts to recognize situations in which additive production may be cost-effective and also to identify new methods of minimizing costs in the usage of this technology \u0000 ","PeriodicalId":40759,"journal":{"name":"Information Technology in Industry","volume":"52 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90863678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Current research into additive layer manufacturing costs shows that the technology is economical for producing small batches under ongoing centralized manufacture; improved automation, however, may bring cost efficiency to distributed manufacturing as well. Because additive production costs are difficult to calculate, the reach of the current studies is limited. Many of today's studies analyze the production of a single part, and those that examine assemblies tend not to consider supply-chain effects such as inventory and shipping costs or a reduced probability of disruption. Current analysis also shows that material expense is a significant part of the cost of a product made with additive layer manufacturing. Technologies may also be complementary, with two technologies implemented side by side yielding larger advantages than if each were adopted independently. Growing use of additive processing may reduce raw material costs through economies of scale, which in turn could drive further adoption of additive layer processing through cheaper raw material; raw material expense will often benefit from scale if specific materials become more popular than a host of alternatives. The additive layer production system itself remains a significant cost driver, but this cost has decreased continuously: after adjusting for inflation, the average machine price dropped 51% between 2001 and 2011.
{"title":"POTENTIAL INVESTIGATION AND ANALYTICAL MODELING OF ADDITIVE LAYER MANUFACTURING PROCESSES FOR METAL TOOLS COMPONENTS PRODUCTION","authors":"Rohit Pandey","doi":"10.17762/ITII.V9I2.360","DOIUrl":"https://doi.org/10.17762/ITII.V9I2.360","url":null,"abstract":"Current research into the development of additive layer costs shows that this technology is economical in producing small batches with ongoing centralized manufacture; however improved automation may contribute to cost efficiency in distributed manufacturing. Due to the difficulty of which additive production costs are calculated, the reach of the current studies is small. Many of today's studies analyze single-part development. Many that look at assemblies prefer not to look at the impact of the supply chain, such as inventory and shipping prices and lower probability of interruption. Analysis currently also shows that the expense of content is a significant part of the cost of a commodity made using additive layer. Technologies may, therefore, also be compatible, with two technologies being implemented side by side and advantages larger than if independently adopted. Growing usage of additive processing may contribute to a decrease in raw material costs through saving in scale. This could result in further implementation of additive layer processing through the decreased cost of the raw material. The expense of raw materials will often save on a scale if specific materials are more popular than a host of other materials. The production method for additive layers is still a significant cost driver, but this cost has decreased continuously. The average price dropped 51% between 2001 and 2011 after inflation changes","PeriodicalId":40759,"journal":{"name":"Information Technology in Industry","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91167286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Principal Component Analysis (PCA) and Shannon entropy are among the most widely used methods for feature extraction and selection. PCA projects the data onto a new low-dimensional subspace by computing the eigenvectors and eigenvalues of a covariance matrix, thereby reducing the features to a smaller number that captures the significant information. Shannon entropy uses a probability distribution to quantify information content, and information gain indicates the importance of a given attribute within a set of feature vectors. This paper introduces a hybrid technique, Info_PCA, which combines the properties of information gain and PCA to reduce dimensionality and thereby increase the accuracy of the machine learning model. It also demonstrates the individual application of information gain for feature selection and of PCA for dimensionality reduction on two datasets collected from the UCI machine learning repository. A major aim is to determine which attributes in a given set of training feature vectors best differentiate the classes. The paper presents a comparative analysis of the classification accuracy obtained by applying information gain, PCA, and Info_PCA individually to the two datasets for feature extraction, followed by an ANN classifier; the hybrid Info_PCA technique achieves the highest accuracy and lowest loss among the feature extraction techniques compared.
{"title":"Info_PCA: A Hybrid Technique to Improve Accuracy by Dimensionality Reduction","authors":"Surabhi Lingwal","doi":"10.17762/ITII.V9I2.370","DOIUrl":"https://doi.org/10.17762/ITII.V9I2.370","url":null,"abstract":"Principal Component Analysis and Shannon Entropy are some of the most widely used methods for feature extraction and selection. PCA reduces the data to a new subspace with low dimensions by calculating the eigenvectors from eigenvalues out of a covariance matrix and thereby reduces the features to a smaller number capturing the significant information. Shannon entropy is based on probability distribution to calculate the significant information content. Information gain shows the importance of a given attribute in the set of feature vectors. The paper has introduced a hybrid technique Info_PCA which captures the properties of Information gain and PCA that overall reduces the dimensionality and thereby increases the accuracy of the machine learning technique. It also demonstrates the individual implementation of Information gain for feature selection and PCA for dimensionality reduction on two different datasets collected from the UCI machine learning repository. One of the major aims is to determine the important attributes in a given set of training feature vectors to differentiate the classes. The paper has shown a comparative analysis on the classification accuracy obtained by the application of Information Gain, PCA and Info_PCA applied individually on the two different datasets for feature extraction followed by ANN classifier where the results of hybrid technique Info_PCA achieves maximum accuracy and minimum loss in comparison to other feature extraction techniques.","PeriodicalId":40759,"journal":{"name":"Information Technology in Industry","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85720325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nuisance phone calls are aggravating, distracting, and frustrating. They may be classed as 'nuisance', 'emergency', 'random', and 'unsolicited' calls. Users have no inherent privileges on the internet; rather, identities are created without any arrangement or proof of involvement. It costs the U.S. communications industry $8 billion per year to fend off call spam on the phone grid, and between January 2014 and June 2018 the FTC (Federal Trade Commission) received over 22 million reports of fraudulent and illegal telemarketing calls. Nowadays the mobile network is used to issue automated phone calls such as robocalls. This raises the question: what tactics and methods can be used to combat such spam? Telecom Fraud Detection (TFD) is discussed first; we then advance our proposal for targeted traffic detection using a consolidated weighted credibility algorithm with appropriate weighting criteria.
{"title":"TFD: TELECOM FRAUD DETECTION USING CONSOLIDATED WEIGHTED REPUTATION ALGORITHM","authors":"J. Anbarasi, V. Radha","doi":"10.17762/ITII.V9I2.325","DOIUrl":"https://doi.org/10.17762/ITII.V9I2.325","url":null,"abstract":"Noisy phone calls are aggravating and distracting, as well as frustrating. They may be classed as 'nuisance', 'emergency', 'random', and 'unsolicited' calls. Users have no inherent privileges on the internet; rather, their personalities are produced without any arrangement or evidence of involvement. It costs the U.S. communications company $8 billion per year to avoid call spam on the phone grid. Between January 2014 and June 2018, the FTC (Federal Trade Commission) received over 22 million reports of fraudulent and illegal telemarketing calls. Nowadays, the mobile network is used to issue automatic phone calls such as robocalls. Since it operates on text, we struggle with the following: What tactics and methods do we use to combat spam? Telephone TFD (Telecom Fraud Detection) here is discussed first. Concerning spam, we advanced our proposal by proposing a targeted traffic detection using a single weighted credibility algorithm with appropriate weighting criteria.","PeriodicalId":40759,"journal":{"name":"Information Technology in Industry","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89627980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feature selection and extraction is a significant and mandatory part of the image-processing domain. After the relevant preprocessing operations, the relevant features must be extracted using suitable algorithms. In multispectral imagery, features such as color, texture, brightness, and intensity are identified and extracted according to the applications and objectives of the analysis. Prominent feature-extraction algorithms include the mean shift algorithm, principal component transformation, wavelet-based transformation, and local binary patterns. Texture-based feature detection and extraction is the most prominent method adopted for multispectral images. For hyperspectral images, dimensionality is a critical issue that must be handled appropriately.
{"title":"NDVI COMPUTATION OF LISS III IMAGES USING QGIS","authors":"Vijayalakshmi, D. Kumar, S. Kumar, P. Thejaswini","doi":"10.17762/ITII.V9I1.291","DOIUrl":"https://doi.org/10.17762/ITII.V9I1.291","url":null,"abstract":"Feature Selection and Extraction is a very significant and mandatory part in the domain of image processing. After the relevant preprocessing operations, the relevant features have to be extracted using suitable algorithms. In multispectral imagery, the features are identified and extracted based on the applications and objectives of the analysis such as color, texture, brightness, intensity etc. Some of the prominent algorithms used for feature extraction are mean shift algorithm, Principal Component transformation, Wavelet based Transformation, Local Binary Patterns etc. Texture based feature detection and extraction is the most prominent method adopted which involves multispectral images. With respect to hyperspectral images, dimensionality is a critical issue to be dealt appropriately.","PeriodicalId":40759,"journal":{"name":"Information Technology in Industry","volume":"36 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80533665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}