Risk management in large-scale information system projects
Diego Armando Castillo-Ñopo, Khateryn Fiorela Loyola-Blanco, Raul Castro-Marca, Gian Davor La Rosa-Gavino, Jherson Giovanny Aragón-Retuerto, Hegel Alvaro Rafael-Sifuentes, William Joel Marín Rodriguez
This article deals with project management in information systems, whose relevance lies in the vital importance of these systems to modern companies. Information systems are essential for decision-making and data management in today's interconnected world. Project management, in turn, coordinates elements such as scope, resources, costs, schedules, and risks to achieve defined objectives. The systems development life cycle (SDLC) structures the process, encompassing phases such as scope definition, planning, execution, monitoring, and closure. These phases are integrated with risk management, which identifies, evaluates, and mitigates threats and opportunities. Mitigation strategies act before adversity strikes, while contingency planning prepares for the unforeseen; risk management is therefore integrated throughout the project life cycle to anticipate and address challenges. The combination of both aspects is critical in a constantly evolving technology environment. In addition, organizational culture and communication play a critical role: a culture of awareness and accountability, transparency in communication, and active stakeholder participation are essential. Training and continuous adaptation allow teams to learn from past experiences and improve their practices.
{"title":"Risk management in large-scale information system projects","authors":"Diego Armando Castillo-Ñopo, Khateryn Fiorela Loyola-Blanco, Raul Castro-Marca, Gian Davor La Rosa-Gavino, Jherson Giovanny Aragón-Retuerto, Hegel Alvaro Rafael-Sifuentes, William Joel Marín Rodriguez","doi":"10.4108/eetsis.4608","DOIUrl":"https://doi.org/10.4108/eetsis.4608","url":null,"abstract":"This article deals with project management in information systems, whose relevance lies in the vital importance of these systems in modern companies. Information systems are essential for decision making and data management in today's interconnected world. Project management, on the other hand, coordinates elements such as scope, resources, costs, schedules and risks to achieve defined objectives. The systems development life cycle (SDLC) structures the process, encompassing phases such as scope definition, planning, execution, monitoring and closure. These phases are integrated with risk management, which identifies, evaluates and mitigates threats and opportunities. Mitigation strategies act before adversity, while contingency planning prepares for the unforeseen. That is why risk management is integrated throughout the project life cycle to anticipate and address challenges. The combination of both aspects is critical in a constantly evolving technology environment. In addition, organizational culture and communication play a critical role. A culture of awareness and accountability, transparency in communication and active stakeholder participation are essential. Training and continuous adaptation allow learning from past experiences and improving practices.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":" 22","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140216195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Business Management in the Information Age: Use of Systems, Data Processing and Scalability for Organizational Efficiency
Karla Martell, Rosa Cueto-Orbe, S. Vela-del-Aguila, Julio Iván Torres-Manrique, Karen Reátegui-Villacorta, C. Alejandría-Castro
This article reviews the challenges and opportunities facing companies in business management in the information age. It analyzes the difficulties of managing large volumes of data, emerging trends in cybersecurity, and companies' ability to adapt to a digitalized environment. The methodology comprises an exhaustive search of articles in indexed journals and the application of inclusion criteria to select 50 relevant articles. Key findings include obstacles in data management, the increasing sophistication of cyber threats, and business adaptation strategies such as digital transformation and the integration of emerging technologies. In conclusion, the article highlights the importance of addressing these challenges and leveraging the opportunities technology presents to enhance business efficiency and competitiveness.
{"title":"Business Management in the Information Age: Use of Systems, Data Processing and Scalability for Organizational Efficiency","authors":"Karla Martell, Rosa Cueto-Orbe, S. Vela-del-Aguila, Julio Iván Torres-Manrique, Karen Reátegui-Villacorta, C. Alejandría-Castro","doi":"10.4108/eetsis.5408","DOIUrl":"https://doi.org/10.4108/eetsis.5408","url":null,"abstract":"Abstract: This article reviews the challenges and opportunities facing companies in business management in the era of information. Challenges in managing large volumes of data, emerging trends in cybersecurity, and companies' ability to adapt to the digitalized environment are analyzed. The methodology used includes an exhaustive search of articles in indexed journals and the application of inclusion criteria to select 50 relevant articles. Key findings include obstacles in data management, the increasing sophistication of cyber threats, and business adaptation strategies such as digital transformation and the integration of emerging technologies. In conclusion, the importance of addressing these challenges and leveraging the opportunities presented by technology to enhance business efficiency and competitiveness is highlighted.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":" 61","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140221215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image Quality Assessment of Multi-Satellite Pan-Sharpening Approach: A Case Study using Sentinel-2 Synthetic Panchromatic Image and Landsat-8
Greetta Pinheiro, Ishfaq Hussain Rather, Aditya Raj, S. Minz, Sushil Kumar
INTRODUCTION: The satellite's physical and technical capabilities limit high spectral and spatial resolution image acquisition. In Remote Sensing (RS), when high spatial and spectral resolution data is essential for specific Geographic Information System (GIS) applications, Pan Sharpening (PanS) becomes imperative for obtaining such data.
OBJECTIVES: The study aims to enhance the spatial resolution of multispectral Landsat-8 (L8) images using a synthetic panchromatic band generated by averaging four fine-resolution bands of the Sentinel-2 (S2) images.
METHODS: To evaluate the proposed multi-satellite PanS approach, three PanS techniques, Smoothed Filter Intensity Modulation (SFIM), Gram-Schmidt (GS), and High Pass Filter Additive (HPFA), are applied to two different study areas. The techniques' effectiveness was evaluated using well-known Image Quality Assessment Metrics (IQAM) such as Root Mean Square Error (RMSE), Correlation Coefficient (CC), Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS), and Relative Average Spectral Error (RASE). The study leveraged the Google Earth Engine (GEE) platform for datasets and implementation.
RESULTS: The most promising values were produced by the GS technique, followed by the SFIM technique, whereas the HPFA technique produced the lowest quantitative results.
CONCLUSION: The performance of the spectral bands of the MS image varies appreciably across the different PanS techniques used.
{"title":"Image Quality Assessment of Multi-Satellite Pan-Sharpening Approach: A Case Study using Sentinel-2 Synthetic Panchromatic Image and Landsat-8","authors":"Greetta Pinheiro, Ishfaq Hussain Rather, Aditya Raj, S. Minz, Sushil Kumar","doi":"10.4108/eetsis.5496","DOIUrl":"https://doi.org/10.4108/eetsis.5496","url":null,"abstract":"INTRODUCTION: The satellite's physical and technical capabilities limit high spectral and spatial resolution image acquisition. In Remote Sensing (RS), when high spatial and spectral resolution data is essential for specific Geographic Information System (GIS) applications, Pan Sharpening (PanS) becomes imperative in obtaining such data. \u0000OBJECTIVES: Study aims to enhance the spatial resolution of the multispectral Landsat-8 (L8) images using a synthetic panchromatic band generated by averaging four fine-resolution bands in the Sentinel-2 (S2) images. \u0000METHODS: Evaluation of the proposed multi-satellite PanS approach, three different PanS techniques, Smoothed Filter Intensity Modulation (SFIM), Gram-Schmidt (GS), and High Pass Filter Additive (HPFA) are used for two different study areas. The techniques' effectiveness was evaluated using well-known Image Quality Assessment Metrics (IQAM) such as Root Mean Square Error (RMSE), Correlation Coefficient (CC), Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS), and Relative Average Spectral Error (RASE). This study leveraged the GEE platform for datasets and implementation. \u0000RESULTS: The promising values were provided by the GS technique, followed by the SFIM technique, whereas the HPFA technique produced the lowest quantitative result. \u0000CONCLUSION: In this study, the spectral bands of the MS image’s performance show apparent variation with respect to that of the different PanS techniques used.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":" 18","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140221707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Blockchain based Quantum Resistant Signature Algorithm for Data Integrity Verification in Cloud and Internet of Everything
P. Shrivastava, Bashir Alam, Mansaf Alam
INTRODUCTION: The processing and storage capacities of the Internet of Everything (IoE) platform are restricted, whereas the cloud can readily provide efficient computing resources and scalable storage. The IoE has recently expanded its capabilities by employing cloud resources in multiple ways. Cloud service providers (CSPs) offer storage resources where extra data can be stored, and these can hold user data while maintaining data integrity and security. Because the CSP's storage devices are highly centralized, however, the secure storage of data is jeopardized by threats such as malicious system damage. Substantial security advancements have recently been made by using blockchain technology to protect data transported across networks. In addition, the system's overall efficacy is enhanced, which lowers costs compared with earlier systems.
OBJECTIVES: The main objective of the study is to present a blockchain-based data integrity verification scheme that provides greater scalability and utilization of cloud resources while preventing data entering the cloud from being corrupted.
METHODS: This paper proposes a novel method of implementing blockchain to enhance the security of data stored in the cloud.
RESULTS: The simulations indicate that the proposed approach is more effective in terms of data security and data integrity. Furthermore, the comparative investigation demonstrated that the proposed methodology is far more effective and competent than prevailing methodologies.
CONCLUSIONS: The model evaluations demonstrated that the proposed approach is highly effective for data security.
{"title":"Blockchain based Quantum Resistant Signature Algorithm for Data Integrity Verification in Cloud and Internet of Everything","authors":"P. Shrivastava, Bashir Alam, Mansaf Alam","doi":"10.4108/eetsis.5488","DOIUrl":"https://doi.org/10.4108/eetsis.5488","url":null,"abstract":" \u0000INTRODUCTION: The processing and storage capacities of the Internet of Everything (IoE) platform are restricted, but the cloud can readily provide efficient computing resources and scalable storage. The Internet of Everything (IoE) has expanded its capabilities recently by employing cloud resources in multiple ways. Cloud service providers (CSP) offer storage resources where extra data can be stored. These methods can be used to store user data over the CSP while maintaining data integrity and security. The secure storage of data is jeopardized by concerns like malicious system damage, even though the CSP's storage devices are highly centralized. Substantial security advancements have been made recently as a result of using blockchain technology to protect data transported to networks. In addition, the system's inclusive efficacy is enhanced, which lowers costs in comparison to earlier systems. \u0000OBJECTIVES: The main objective of the study is to a blockchain-based data integrity verification scheme is presented to provide greater scalability and utilization of cloud resources while preventing data from entering the cloud from being corrupted. \u0000METHODS: In this paper, we propose a novel method of implementing blockchain in order to enhance the security of data stores in cloud. \u0000RESULTS: The simulations indicate that the proposed approach is more effective in terms of data security and data integrity. Furthermore, the comparative investigation demonstrated that the purported methodology is far more effective and competent than prevailing methodologies. \u0000CONCLUSIONS: The model evaluations demonstrated that the proposed approach is quite effective in data security.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":"3 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140225763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research on Credit Risk Prediction Method of Blockchain Applied to Supply Chain Finance
Yue Liu, Wangke Lin
INTRODUCTION: From a blockchain perspective, this paper establishes a credit risk evaluation index system for supply chain finance applicable to blockchain and constructs an accurate credit risk prediction model, providing a reliable foundation for research on credit risk in supply chain finance.
OBJECTIVES: To address the inefficiency of current credit risk prediction and evaluation models for supply chain finance.
METHODS: This paper proposes a combined blockchain supply chain financial credit risk prediction and evaluation method based on kernel principal component analysis (KPCA) and an intelligent optimization algorithm that improves the Deep Echo State Network (DeepESN). First, the evaluation system is constructed by describing the supply chain financial credit risk prediction and evaluation problem based on blockchain technology and analysing the evaluation indexes; then, the parameters of the DeepESN are optimized by combining kernel principal component analysis with the JSO algorithm to construct the credit risk prediction and evaluation model; finally, the effectiveness, robustness, and real-time performance of the proposed method are verified through simulation experiments.
RESULTS: The results show that the proposed method improves the prediction efficiency of the model.
CONCLUSION: The problems of an insufficiently scientific index system and the poor efficiency of existing risk prediction models are effectively solved.
{"title":"Research on Credit Risk Prediction Method of Blockchain Applied to Supply Chain Finance","authors":"Yue Liu, Wangke Lin","doi":"10.4108/eetsis.5300","DOIUrl":"https://doi.org/10.4108/eetsis.5300","url":null,"abstract":"INTRODUCTION: From the perspective of blockchain, it establishes a credit risk evaluation index system for supply chain finance applicable to blockchain, constructs an accurate credit risk prediction model, and provides a reliable guarantee for the research of credit risk in supply chain finance.OBJECTIVES: To address the inefficiency of the current credit risk prediction and evaluation model for supply chain finance.METHODS: This paper proposes a combined blockchain supply chain financial credit risk prediction and evaluation method based on kernel principal component analysis and intelligent optimisation algorithm to improve Deep Echo State Network. Firstly, the evaluation system is constructed by describing the supply chain financial credit risk prediction and evaluation problem based on blockchain technology, analysing the evaluation indexes, and constructing the evaluation system; then, the parameters of DeepESN network are optimized by combining the kernel principal component analysis method with the JSO algorithm to construct the credit risk prediction and evaluation model of supply chain finance; finally, the effectiveness, robustness, and real-time performance of the proposed method are verified by simulation experiment analysis.RESULTS: The results show that the proposed method improves the prediction efficiency of the prediction model.CONCLUSION: The problems of insufficient scientific construction of index system and poor efficiency of risk prediction model of B2B E-commerce transaction size prediction method are effectively solved.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":"37 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140229172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating Performance of Conversational Bot Using Seq2Seq Model and Attention Mechanism
Karandeep Saluja, Shashwat Agarwal, Sanjeev Kumar, Tanupriya Choudhury
The chatbot uses a sequence-to-sequence (Seq2Seq) model with an attention mechanism to interpret and address user inputs effectively. The pipeline consists of data gathering, data preprocessing, the Seq2Seq model, and training and tuning. Data preprocessing involves cleaning irrelevant data before converting the text into numerical format. The Seq2Seq model comprises two components, an encoder and a decoder; together with the attention mechanism, they handle dialogue management, which enables the model to answer the user in the most accurate and relevant manner. The bot's output is generated in natural language. Once the Seq2Seq model is built, it is trained on the preprocessed data, minimizing the loss between the predicted output and the ground-truth output. Performance is computed using metrics such as perplexity, BLEU score, and ROUGE score on a held-out validation set. To meet its non-functional requirements, the system must maintain a response time under one second with an accuracy target exceeding 90%.
{"title":"Evaluating Performance of Conversational Bot Using Seq2Seq Model and Attention Mechanism","authors":"Karandeep Saluja, Shashwat Agarwal, Sanjeev Kumar, Tanupriya Choudhury","doi":"10.4108/eetsis.5457","DOIUrl":"https://doi.org/10.4108/eetsis.5457","url":null,"abstract":"The Chat-Bot utilizes Sequence-to-Sequence Model with the Attention Mechanism, in order to interpret and address user inputs effectively. The whole model consists of Data gathering, Data preprocessing, Seq2seq Model, Training and Tuning. Data preprocessing involves cleaning of any irrelevant data, before converting them into the numerical format. The Seq2Seq Model is comprised of two components: an Encoder and a Decoder. Both Encoder and Decoder along with the Attention Mechanism allow dialogue management, which empowers the Model to answer the user in the most accurate and relevant manner. The output generated by the Bot is in the Natural Language only. Once the building of the Seq2Seq Model is completed, training of the model takes place in which the model is fed with the preprocessed data, during training it tries to minimize the loss function between the predicted output and the ground truth output. Performance is computed using metrics such as perplexity, BLEU score, and ROUGE score on a held-out validation set. In order to meet non-functional requirements, our system needs to maintain a response time of under one second with an accuracy target exceeding 90%.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":"223 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140233465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Study on Evaluation of Execution Capability Based on Artificial Intelligence CIPP Model
Hui Dong
INTRODUCTION: The rapid development of artificial intelligence has made evaluating the ideological and political education capability of colleges and universities a significant challenge.
OBJECTIVES: To assess universities' competence in ideological and political education, determine the effectiveness of educational programs, and provide a basis for improving and upgrading academic competence.
METHODS: Based on the CIPP model, the author constructed an index system and selected a suitable evaluation model to study the evaluation of the ideological and political competence of colleges and universities in the context of artificial intelligence. The evaluation helps to understand the background conditions, resource allocation, teaching activities, and teaching quality of educational programs, as well as students' ideological and political literacy and achievements.
RESULTS: The evaluation results show that this kind of research helps to improve the capacity for ideological and political education in colleges and universities; at the same time, it can probe the implementation effect of an educational program, uncover problems and shortcomings, and promote the program's continuous improvement.
CONCLUSION: Through evaluation, the quality and level of ideological and political education in colleges and universities can be raised, improving students' ideological and political literacy and sense of social responsibility. On this basis, the development of ideological and political capability in colleges and universities can better adapt to the era of artificial intelligence.
{"title":"Study on Evaluation of Execution Capability Based on Artificial Intelligence CIPP Model","authors":"Hui Dong","doi":"10.4108/eetsis.5234","DOIUrl":"https://doi.org/10.4108/eetsis.5234","url":null,"abstract":"INTRODUCTION: The rapid change in artificial intelligence has evaluated ideological and political education ability in colleges and universities as a significant challenge.OBJECTIVES: To assess the level of competence of universities in ideological and political education to determine the effectiveness and efficacy of educational programs and to provide a basis for improving and upgrading academic competence.METHODS: Based on the CIPP model, the author constructed an index system and selected a suitable evaluation model to conduct a study on the evaluation of ideological and political competence of colleges and universities in the context of Artificial Intelligence, which helps to understand the background conditions, resource allocation, teaching activities and quality of teaching of educational programs, as well as the level of ideological and political literacy of the students and their achievements.RESULTS: The evaluation results show that this kind of evaluation research helps to improve and enhance the capacity of ideological and political education in colleges and universities, and at the same time, it can dig into the implementation effect of the educational program, find problems and shortcomings, and promote the continuous improvement of the educational program.CONCLUSION: Through evaluation, the quality and level of ideological and political education in colleges and universities can improve students' ideological and political literacy and sense of social responsibility. In addition, based on this, it makes the development of ideological and political ability in colleges and universities can be better adapted to the era of artificial intelligence.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":"17 35","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140240593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Solution to Graph Coloring Problem Using Genetic Algorithm
Karan Malhotra, Karan D. Vasa, Neha Chaudhary, Ankit Vishnoi, Varun Sapra
INTRODUCTION: The Graph Coloring Problem (GCP) involves coloring the vertices of a graph in such a way that no two adjacent vertices share the same color while using the minimum number of colors possible.
OBJECTIVES: The main objective of the study is to minimize the number of colors needed to color a graph's vertices while preserving the constraint that no two neighboring vertices share the same color. The study further investigates how various techniques affect execution time as the number of nodes in the graph increases.
METHODS: This paper proposes a novel method of implementing a Genetic Algorithm (GA) to address the GCP.
RESULTS: When the solution is run on a highly specified Google Cloud instance, we likewise see a significant increase in performance. Parallel execution on Google Cloud shows significantly faster execution times than both the serial implementation and parallel execution on a local workstation, exemplifying the benefits of cloud computing for computationally heavy jobs like the GCP.
CONCLUSION: This study illustrates that Genetic Algorithms provide a promising solution to the Graph Coloring Problem. Although the GA-based approach does not guarantee an optimal result, it frequently produces excellent approximations in a reasonable amount of time across a variety of real-world situations.
{"title":"A Solution to Graph Coloring Problem Using Genetic Algorithm","authors":"Karan Malhotra, Karan D. Vasa, Neha Chaudhary, Ankit Vishnoi, Varun Sapra","doi":"10.4108/eetsis.5437","DOIUrl":"https://doi.org/10.4108/eetsis.5437","url":null,"abstract":"INTRODUCTION: The Graph Coloring Problem (GCP) involves coloring the vertices of a graph in such a way that no two adjacent vertices share the same color while using the minimum number of colors possible. \u0000OBJECTIVES: The main objective of the study is While keeping the constraint that no two neighbouring vertices have the same colour, the goal is to reduce the number of colours needed to colour a graph's vertices. It further investigate how various techniques impact the execution time as the number of nodes in the graph increases. \u0000METHODS: In this paper, we propose a novel method of implementing a Genetic Algorithm (GA) to address the GCP. \u0000RESULTS: When the solution is implemented on a highly specified Google Cloud instance, we likewise see a significant increase in performance. The parallel execution on Google Cloud shows significantly faster execution times than both the serial implementation and the parallel execution on a local workstation. This exemplifies the benefits of cloud computing for computational heavy jobs like GCP. \u0000CONCLUSION: This study illustrates that a promising solution to the Graph Coloring Problem is provided by Genetic Algorithms. Although the GA-based approach does not provide an optimal result, it frequently produces excellent approximations in a reasonable length of time for a variety of real-world situations.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":"2 33","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140241321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital Visual Design Reengineering and Application Based on K-means Clustering Algorithm
Lijie Ren, Hyunsuk Kim
INTRODUCTION: The article discusses the key steps in digital visual design reengineering, with special emphasis on the importance of information decoding and feature extraction for flat cultural heritage. These processes not only minimize damage to the aesthetic heritage itself but also offer high quality, efficiency, and reusability.
OBJECTIVES: The article explores gene extraction methods in digital visual design reengineering, proposing a visual gene extraction method based on an improved K-means clustering algorithm.
METHODS: A visual gene extraction method based on an improved K-means clustering algorithm is proposed. After first analyzing the digital visual design reengineering process, and combining it with a color extraction method using a K-means clustering algorithm improved by the JSO algorithm, a gene extraction and clustering method for digital visual design reengineering is proposed and validated through experiments.
RESULTS: The results show that the proposed method improves the accuracy, robustness, and real-time performance of clustering. Through comparative analysis with the Dunhuang murals, the effectiveness of the K-means-JSO color extraction method in digital visual design reengineering is verified. The K-means-GWO method performs best in terms of average clustering time and standard deviation. The optimization curve of color extraction based on the K-means-JSO algorithm converges faster and with better accuracy than the K-means-ABC, K-means-GWO, K-means-DE, K-means-CMAES, and K-means-WWCD algorithms.
CONCLUSION: The color extraction method based on the K-means clustering algorithm improved by the JSO algorithm solves the problems of insufficiently standardized feature selection, lack of generalization ability, and inefficiency in existing visual gene extraction methods.
{"title":"Digital Visual Design Reengineering and Application Based on K-means Clustering Algorithm","authors":"Lijie Ren, Hyunsuk Kim","doi":"10.4108/eetsis.5233","DOIUrl":"https://doi.org/10.4108/eetsis.5233","url":null,"abstract":"INTRODUCTION: The article discusses the key steps in digital visual design reengineering, with a special emphasis on the importance of information decoding and feature extraction for flat cultural heritage. These processes not only minimize damage to the aesthetic heritage itself but also feature high quality, efficiency, and recyclability.OBJECTIVES: The aim of the article is to explore the issues of gene extraction methods in digital visual design reengineering, proposing a visual gene extraction method through an improved K-means clustering algorithm.METHODS: A visual gene extraction method based on an improved K-means clustering algorithm is proposed. Initially analyzing the digital visual design reengineering process, combined with a color extraction method using the improved JSO algorithm-based K-means clustering algorithm, a gene extraction and clustering method for digital visual design reengineering is proposed and validated through experiments.RESULT: The results show that the proposed method improves the accuracy, robustness, and real-time performance of clustering. Through comparative analysis with Dunhuang murals, the effectiveness of the color extraction method based on the K-means-JSO algorithm in the application of digital visual design reengineering is verified. The method based on the K-means-GWO algorithm performs best in terms of average clustering time and standard deviation. The optimization curve of color extraction based on the K-means-JSO algorithm converges faster and with better accuracy compared to the K-means-ABC, K-means-GWO, K-means-DE, K-means-CMAES, and K-means-WWCD algorithms.CONCLUSION: The color extraction method of the K-means clustering algorithm improved by the JSO algorithm proposed in this paper solves the problems of insufficient standardization in feature selection, lack of generalization ability, and inefficiency in visual gene extraction methods.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":"13 10","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140241639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
OEE-WCRD: Optimizing Energy Efficiency in Wireless Sensor Networks through Cluster Head Selection Using Residual Energy and Distance Metrics
Lalit Kumar Tyagi, Anoop Kumar
Wireless Sensor Networks (WSNs) play a pivotal role in applications including environmental monitoring, industrial automation, and healthcare. However, the limited energy resources of sensor nodes pose a significant challenge to the longevity and performance of WSNs. To address this challenge, this paper presents an Optimized Energy Efficient Protocol in Wireless Sensor Networks through Cluster Head Selection Using Residual Energy and Distance Metrics (OEE-WCRD), a novel approach to cluster head selection that combines residual energy with distance metrics. The proposed method significantly enhances the energy efficiency of WSNs by prioritizing nodes with ample residual energy and proximity to their neighbors as cluster heads. Through extensive simulations and evaluations, we demonstrate the approach's effectiveness in prolonging network lifetime, optimizing data aggregation, and ultimately advancing the energy efficiency of WSNs, making it a valuable contribution to the field of WSN protocols.
{"title":"OEE-WCRD: Optimizing Energy Efficiency in Wireless Sensor Networks through Cluster Head Selection Using Residual Energy and Distance Metrics","authors":"Lalit Kumar Tyagi, Anoop Kumar","doi":"10.4108/eetsis.4268","DOIUrl":"https://doi.org/10.4108/eetsis.4268","url":null,"abstract":"Wireless Sensor Networks (WSNs) play a pivotal role in various applications, including environmental monitoring, industrial automation, and healthcare. However, the limited energy resources of sensor nodes pose a significant challenge to the longevity and performance of WSNs. To address this challenge, this paper presents an Optimized Energy Efficient Protocol in Wireless Sensor Networks through Cluster Head Selection Using Residual Energy and Distance Metrics (OEE-WCRD). This research paper presents a novel approach to cluster head selection in WSNs by harnessing a combination of residual energy and distance metrics. The proposed method aims to significantly enhance the energy efficiency of WSNs by prioritizing nodes with ample residual energy and proximity to their neighbors as cluster heads. Through extensive simulations and evaluations, we demonstrate the effectiveness of this approach in prolonging network lifetime, optimizing data aggregation, and ultimately advancing the energy efficiency of WSNs, making it a valuable contribution to the field of WSNs protocols.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":"25 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140262953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}